A containerized sandbox for running coding agents (Claude Code, Gemini CLI, OpenAI Codex, GitHub Copilot) with --dangerously-skip-permissions / --dangerously-bypass-approvals-and-sandbox enabled by default.
Agents work best when they can freely run shell commands, edit files, install packages, and poke at databases — but you don't want them doing that against your host. This image gives each session its own throwaway Linux environment with:
- The project you're working on mounted at /app.
- Language runtimes, databases (PostgreSQL, LavinMQ), and common tools preinstalled, so agents don't spend turns bootstrapping.
- OAuth credentials and API keys mounted from ~/ai/settings, with multi-profile support and automatic token refresh.
- Shell history, agent session history, and cloned repos persisted on the host across container restarts.
- mitmproxy available for inspecting what the agent actually sends over the wire.
Drop per-profile OAuth credentials in $HOME/ai/settings as
.credentials.<profile>.json. They get mounted into the container and copied
into ~/.claude/ when you launch claude with a matching profile. See the
OAuth Login section for how to generate these.
Optionally, add AGENTS.md to $HOME/ai/settings — it becomes CLAUDE.md
for Claude Code and GEMINI.md for Gemini CLI.
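As a concrete sketch, the settings directory then looks something like this (the profile names work and personal are made-up examples, not prescribed):

```shell
# Hypothetical example: two OAuth profiles plus shared agent instructions.
# Profile names "work" and "personal" are placeholders.
SETTINGS_DIR="${SETTINGS_DIR:-$HOME/ai/settings}"
mkdir -p "$SETTINGS_DIR"
touch "$SETTINGS_DIR/.credentials.work.json" \
      "$SETTINGS_DIR/.credentials.personal.json" \
      "$SETTINGS_DIR/AGENTS.md"
ls -A "$SETTINGS_DIR"
```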
# start podman and share the current working directory
bin/ai
# start services (and run "bundle install" if Gemfile exists)
s
# launch claude with a specific oauth profile
c <profile>
# or launch claude with an Anthropic API key
c --apikey sk-ant-...
# resume a prior session (searches /history for the session id; a prefix is enough).
# profile is auto-detected from the session's saved .profile file.
c --resume <session-id>
# launch gemini
g
# launch openai codex
cx
# exit the container
x

Your Anthropic API key in 1Password.
brew install podman
# memory is in MiB, disk in GiB
podman machine init --disk-size 300 --memory 16384 --now
./build_image
# rebuild all layers and pull latest base image
./build_image --force

First-time setup to get OAuth credentials:
# inside the container, or via podman run
refresh-tokens --login
refresh-tokens --login <profile>

This generates an OAuth authorization URL. Open it in your browser, sign in, and paste the code back into the terminal. Credentials are saved to ~/.claude/.credentials.json (or .credentials.<profile>.json).
OAuth tokens expire periodically. A systemd service (refresh-tokens.service) runs in every container, keeping ~/.claude/.credentials*.json files fresh automatically.
# check service status
systemctl status refresh-tokens
# view logs
journalctl -u refresh-tokens
# follow logs
journalctl -u refresh-tokens -f
# one-shot refresh (e.g. before launching a session)
refresh-tokens --once

The service can also run as a standalone container to refresh /settings credentials:
podman run -d --name token-refresh \
--env CREDENTIALS_DIR=/settings \
--volume ${HOME}/ai/settings:/settings \
ai:latest /usr/local/bin/refresh-tokens --daemon

Environment variables:
| Variable | Default | Description |
|---|---|---|
| CREDENTIALS_DIR | ~/.claude | Directory containing .credentials*.json files |
| CHECK_INTERVAL | 300 | Seconds between checks |
| REFRESH_BEFORE | 3600 | Seconds before expiry to trigger refresh |
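The refresh condition these variables describe can be sketched as follows (needs_refresh is an illustrative helper, not part of the actual script):

```shell
# A token is refreshed once it is within REFRESH_BEFORE seconds of expiry.
# needs_refresh is an illustrative helper, not the real implementation.
needs_refresh() {
  local expiry_epoch=$1
  local refresh_before=${2:-3600}
  [ $(( expiry_epoch - $(date +%s) )) -le "$refresh_before" ]
}

needs_refresh $(( $(date +%s) + 600 ))  && echo "10 min left: refresh now"
needs_refresh $(( $(date +%s) + 7200 )) || echo "2 h left: still fresh"
```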
zsh things:
# pod # list all running containers
# pod <id> # launch bash shell in selected container
# pod last # launch bash shell in the youngest container
function pod() {
  [ $# -lt 1 ] && podman ps && return 0
  # "podman ps" lists newest first, so the first ID is the youngest container
  [ "$1" = "last" ] && podman exec -it $(podman ps -q | head -1) ${2:-bash} && return 0
  local container=$1
  podman exec -it $container ${2:-bash}
}

Start mitmproxy capturing everything:
./mitmdump --mode regular --listen-port 8080 --ssl-insecure --set flow_detail=3 -w claude.flow

mitmproxy generates its CA at ~/.mitmproxy/mitmproxy-ca-cert.pem on first run.
export NODE_EXTRA_CA_CERTS=~/.mitmproxy/mitmproxy-ca-cert.pem
export HTTPS_PROXY=http://127.0.0.1:8080

Start the agent:

claude

Output the (partially binary) flow dump as text (picking a different port via --mode matters if the proxy is already running):
./mitmdump --mode regular@8082 --set flow_detail=3 -r claude.flow --set export_format=curl

tcpdump

podman run --rm -it --cap-add=NET_RAW --cap-add=NET_ADMIN --net=container:<container> nicolaka/netshoot tcpdump -i eth0

podman
# to see current settings
podman machine inspect
# when we can't build because we're out of space
podman system prune --all
# to combat this error related to "Linux Kernel Keyring quota"
# Error: preparing container ... for attach: crun: join keyctl `...`: Disk quota exceeded: OCI runtime error
podman machine ssh sudo sysctl -w kernel.keys.maxkeys=20000
podman machine ssh sudo sysctl -w kernel.keys.maxbytes=200000

- Claude Code
- GitHub Copilot
- Google Gemini
- Node.js
- Bun
- TypeScript
- Ruby
- Crystal
- Python
- Rust
- Go
- SSH (ssh-keygen, ...)
- PostgreSQL
- LavinMQ
- amqpcat
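A quick way to sanity-check the toolchain from inside a container (binary names such as tsc and amqpcat are assumptions about how these land on PATH):

```shell
# Count preinstalled tools that are missing from PATH; 0 is expected in the image.
# The binary names (tsc, amqpcat, ...) are assumptions about the install.
missing=0
for tool in node bun tsc ruby crystal python3 rustc go ssh-keygen psql amqpcat; do
  command -v "$tool" >/dev/null 2>&1 || missing=$((missing + 1))
done
echo "missing tools: $missing"
```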