## Why a dedicated box
I run a Hermes Agent (an autonomous terminal agent) on its own VPS rather than anywhere I care about. The agent has passwordless sudo. That’s the whole point: it installs packages, edits configs, and restarts services without me in the loop. Blast radius is real. A dedicated, snapshotted box keeps the damage contained to something I can restore in minutes.
This post walks through the full setup on a fresh Debian 13 VPS. The big pieces: base hardening, a break-glass recovery user, SSH on a non-default port with the socket-activation override Debian 13 needs, UFW, fail2ban, a sysctl drop-in, the Hermes install, and a systemd-managed messaging gateway so I can reach the agent from my phone. Two manual snapshot checkpoints that you shouldn't skip sit along the way.
You can paste each block in sequence. The only points you can’t speed through are the two checkpoints, which I’ll flag as they come up.
## Base system
Update everything, clear out anything that’s no longer needed, and install the small set of tools the rest of this post assumes are present:
```bash
sudo apt update && sudo apt full-upgrade -y
sudo apt autoremove --purge -y && sudo apt autoclean
sudo apt install -y git curl ca-certificates gnupg ufw fail2ban \
  unattended-upgrades apt-listchanges needrestart sudo vim htop rsync
```

Turn on automatic security updates so the box keeps patching itself while I'm not looking:
```bash
echo 'APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::AutocleanInterval "7";' | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
```

## Two users: deploy and a break-glass recovery
`deploy` is the account Hermes runs as. `recovery` is a second sudo-capable user that exists as a recovery hatch. If the agent ever mangles `deploy`'s shell config or `authorized_keys`, I still have a way in. Both get the same SSH key copied over from root:
```bash
sudo adduser deploy && sudo usermod -aG sudo deploy
sudo rsync --archive --chown=deploy:deploy ~/.ssh /home/deploy/
sudo adduser recovery && sudo usermod -aG sudo recovery
sudo rsync --archive --chown=recovery:recovery ~/.ssh /home/recovery/
```

## SSH hardening
A drop-in config that disables password auth, moves SSH to 2222, and tightens a few defaults. Keeping root login enabled is deliberate: the break-glass story needs a way in if `deploy` is broken, and while `recovery` covers that, I still want the option.
```bash
sudo tee /etc/ssh/sshd_config.d/99-hardening.conf > /dev/null <<'EOF'
Port 2222
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
PermitEmptyPasswords no
X11Forwarding no
MaxAuthTries 3
LoginGraceTime 30
ClientAliveInterval 300
ClientAliveCountMax 2
EOF
```

Validate the config before restarting anything:
```bash
sudo sshd -t
```

**Note**
Debian 13 uses systemd socket activation for SSH, so Port 2222 in sshd_config isn’t enough on its own; the socket unit still listens on 22. Override it:
```bash
sudo mkdir -p /etc/systemd/system/ssh.socket.d
sudo tee /etc/systemd/system/ssh.socket.d/override.conf > /dev/null <<'EOF'
[Socket]
ListenStream=
ListenStream=2222
EOF
```

The blank `ListenStream=` resets the list before appending 2222. Otherwise systemd merges the two and you end up listening on both ports.
Reload, then stop both units and start the socket on its own so the new port takes effect:
```bash
sudo systemctl daemon-reload
sudo systemctl stop ssh.service ssh.socket
sudo systemctl start ssh.socket
```

Restarting them together (`restart ssh.socket ssh`) won't work: `ssh.service` is still bound to port 22 when systemd tries to rebind the socket, so `ssh.socket` fails with `Socket service ssh.service already active, refusing`. Stop the service first, then let the socket come up and trigger the service on demand.
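Before moving on, it's worth confirming the rebind actually took. A quick sketch using tools that ship with Debian 13 (`ss` from iproute2, and `sshd -T` to dump the effective config):

```bash
# ssh.socket should now be the only SSH listener, on 2222 (not 22).
sudo ss -ltnp | grep -E ':2222\b'

# sshd's effective settings should agree with the drop-in.
sudo sshd -T | grep -Ei '^(port|passwordauthentication) '
```

If the first command shows nothing, the socket override didn't load; re-check the drop-in path and re-run `daemon-reload`.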
## Firewall and fail2ban
UFW defaults to deny-incoming, allow-outgoing, with one hole for the new SSH port:
```bash
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 2222/tcp comment 'SSH'
sudo ufw --force enable
```

fail2ban handles the other half. UFW is a static gate; fail2ban is what bans hosts that keep hammering it. The `sshd` jail has to know about 2222 explicitly:
```bash
sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[DEFAULT]
bantime = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
port = 2222
EOF
sudo systemctl enable --now fail2ban
```

## Kernel sysctl hardening
A small drop-in for the usual network and kernel-info-leak knobs:
```bash
sudo tee /etc/sysctl.d/99-hardening.conf > /dev/null <<'EOF'
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
kernel.kptr_restrict = 2
kernel.dmesg_restrict = 1
EOF
sudo sysctl --system
```

**Warning**
Before you reboot, open a second terminal and confirm you can SSH in on the new port as both deploy and recovery:
```bash
ssh -p 2222 deploy@your-server-ip
ssh -p 2222 recovery@your-server-ip
```

If either fails, fix it from your still-open original session. Do not reboot until both work. If you lock yourself out, your only way back in is the provider's web console, and you don't want to learn that the hard way.
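While both sessions are open, two more spot checks are cheap. A sketch assuming the jail and sysctl drop-in from above are in place:

```bash
# fail2ban's sshd jail should be active and watching port 2222.
sudo fail2ban-client status sshd

# Each key should echo back the value from the drop-in (expect 1, 2, 1).
sysctl -n net.ipv4.tcp_syncookies kernel.kptr_restrict kernel.dmesg_restrict
```

Neither check is strictly required, but both catch silent failures (a typo in `jail.local`, a drop-in that never loaded) while fixing them is still trivial.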
Once both logins work, reboot to pick up any kernel updates and confirm the box comes back cleanly:
```bash
sudo reboot
```

Then reconnect:

```bash
ssh -p 2222 deploy@your-server-ip
```

## Passwordless sudo for deploy
Hermes installs packages, edits configs, and restarts services. That all needs sudo, and the agent doesn’t sit there typing passwords, so deploy gets NOPASSWD. Write the drop-in, lock its permissions (sudo refuses to read the file otherwise), and validate the sudoers tree before trusting it:
```bash
echo 'deploy ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/deploy-nopasswd
sudo chmod 0440 /etc/sudoers.d/deploy-nopasswd
sudo visudo -c
```

**Important**
Go to your VPS provider’s web console (Hetzner, DigitalOcean, Vultr, Linode, OVH, whatever) and take a snapshot now. Label it something like debian13-hardened-pre-hermes.
This is your clean, fully-hardened baseline. If anything downstream goes wrong, you restore to this and you’re back in minutes without redoing any of the hardening above.
After the snapshot completes, lock it from deletion. Most providers call this “protected”, “locked”, or “prevent deletion” on the snapshot’s detail page. The exact spot varies. Do it in the web console; there’s no command line for this.
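One last check before moving on: confirm the NOPASSWD rule is actually live. `sudo -l -U` prints a user's effective sudo privileges without opening a new session:

```bash
# Should include a line reading "(ALL) NOPASSWD: ALL" for deploy.
sudo -l -U deploy
```

If the line is missing, `visudo -c` above probably flagged a syntax error that got ignored; fix the drop-in before letting the agent anywhere near it.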
## Install Hermes
Hermes’ installer needs git, which we already have, but a quick sanity check never hurts:
```bash
git --version
```

Run the one-line installer and reload your shell so `hermes` is on PATH:
```bash
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
source ~/.bashrc
```

## Configure Hermes
The interactive wizard handles the model, API key, and tool picks:
```bash
hermes setup
```

Then a few knobs that matter for a dedicated agent box. Local terminal backend, since the agent runs on the box rather than SSH-ing into it. Smart approval mode, which lets an auxiliary LLM auto-approve safe commands, auto-deny dangerous ones, and escalate the rest to me. A 60-second approval timeout that fails closed. A longer command timeout so apt upgrades and builds don't get killed mid-run.
```bash
hermes config set terminal.backend local
hermes config set approvals.mode smart
hermes config set approvals.timeout 60
hermes config set terminal.timeout 600
```

Lock down the secrets file:
```bash
chmod 600 ~/.hermes/.env
```

Smoke test with a quick chat that exercises a tool call and sudo:
```bash
hermes
```

## Messaging gateway
Once the local chat works, the gateway is what makes this useful. It’s how I reach the agent from Telegram, Discord, Signal, whatever:
```bash
hermes gateway setup
```

Lock the gateway down to your own user ID. `GATEWAY_ALLOW_ALL_USERS=true` is never safe on an internet-exposed bot: anyone who finds the bot's handle can DM it and drive the agent. The gateway authorizer reads `TELEGRAM_ALLOWED_USERS` from the environment, so the allowlist has to land in `~/.hermes/.env`. Running `hermes config set` would route an unknown key to `config.yaml`, which the gateway never reads. Either let `hermes gateway setup` write the value for you, or append it directly:
```bash
echo 'TELEGRAM_ALLOWED_USERS=your_telegram_user_id' >> ~/.hermes/.env
chmod 600 ~/.hermes/.env
```

Install the gateway as a systemd unit so it survives reboots and restarts on failure, then check it's running:
```bash
hermes gateway install
sudo systemctl status hermes-gateway
```

**Tip**
In your provider's web console, turn on scheduled or automatic snapshots, daily or weekly as you prefer. The locked pre-Hermes snapshot is your nuclear-option restore; the ongoing ones let you roll back to "yesterday" when the agent only breaks something minor.
Exact location and terminology vary. Hetzner calls them “Backups” (paid per-server toggle), DigitalOcean and Vultr call them “Automatic Backups” or “Auto-backup” under the server’s settings, Linode has a “Backups” tab, OVH has automated snapshots on newer plans. Keep the locked baseline separate from the rotating automatic ones. Most providers treat them as different resources, but if yours lumps them together, mark the baseline as protected so the rotation can’t evict it.
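Before wrapping up, a couple of quick gateway sanity checks. A sketch assuming the `~/.hermes/.env` layout described above and the unit name `hermes-gateway` that `systemctl status` used earlier:

```bash
# The allowlist should be in the env file the gateway actually reads,
# and the file should be readable by deploy only.
grep '^TELEGRAM_ALLOWED_USERS=' ~/.hermes/.env
stat -c '%a' ~/.hermes/.env    # expect 600

# The unit should be enabled so it comes back after a reboot.
systemctl is-enabled hermes-gateway
```

If `is-enabled` doesn't print `enabled`, the install step didn't register the unit for boot; re-run `hermes gateway install` and check again.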
## Closing notes
A few things I’ve landed on after running this setup for a while:
- The two checkpoints (SSH verify, baseline snapshot) are the only points you can’t skip. Everything else is fire-and-forget.
- After Hermes has been running for a week or two and you've built up skills, memories, and config tweaks, take a second locked snapshot labeled something like `hermes-configured-working`. That's your "agent is set up the way I like" baseline, separate from the clean-OS one.
- Tail `~/.hermes/logs/errors.log` and `gateway.log` every so often to see what the agent is doing, and whether `smart` mode is gating the right things. If I find myself approving the same benign pattern over and over, it goes into `command_allowlist` in `~/.hermes/config.yaml`.
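The log-tailing habit from the last bullet fits in one command; `-F` keeps following even when the file is rotated or recreated (log paths as described above):

```bash
# Follow both logs live, with a header before each; Ctrl-C to stop.
tail -F ~/.hermes/logs/errors.log ~/.hermes/logs/gateway.log
```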