Proxy Server
This guide follows the upstream repository quickstart for the transparent local-agent flow, from cloning the repo to running the local agent and verifying placeholder-based secret injection.
Run proxy-server on a machine your developer workstations can reach. This is where secrets stay, where placeholders are resolved, and where the tunnel listener runs.
Run heimdall-local-agent on the developer machine. In transparent mode it intercepts HTTPS traffic locally, then forwards approved requests through the authenticated tunnel.
Enable the built-in panel if you want runtime management for clients, stored secrets, AWS-backed secrets, and audit logs. It lives under /panel/.
Use this path when you want the developer machine to work without setting HTTPS_PROXY per app.
If you are validating on a VPS, CI runner, or a root-owned workload, the upstream docs recommend starting with the explicit proxy guide instead.
Heimdall is open source and the setup starts with the public GitHub repository.
```bash
git clone https://github.com/BenTimor/Heimdall.git
cd Heimdall
```
You will need pnpm, a real secret value such as OPENAI_API_KEY, and sudo on Linux for the later agent install step. On the host that will keep the real secrets, install dependencies, generate certificates, and create the server config from the example file.
```bash
cd proxy-server
pnpm install
pnpm run generate-ca
pnpm run generate-tunnel-cert proxy.example.com
cp config/server-config.example.yaml config/server-config.yaml
```
Then update config/server-config.yaml so it has a reachable public host, at least one authenticated client, the secrets you want to resolve, and the tunnel listener enabled.
```yaml
proxy:
  host: "0.0.0.0"
  port: 8080
  publicHost: "proxy.example.com"

ca:
  certFile: "certs/ca.crt"
  keyFile: "certs/ca.key"

secrets:
  OPENAI_API_KEY:
    provider: "env"
    path: "OPENAI_API_KEY"
    allowedDomains: ["api.openai.com"]

auth:
  enabled: true
  clients:
    - machineId: "dev-machine-01"
      token: "some-secure-token-here"

logging:
  level: "info"

audit:
  enabled: true

tunnel:
  enabled: true
  host: "0.0.0.0"
  port: 8443
  tls:
    certFile: "certs/tunnel.crt"
    keyFile: "certs/tunnel.key"
  heartbeatIntervalMs: 30000
  heartbeatTimeoutMs: 90000
```
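The client token under auth.clients should be a high-entropy value, not a memorable string. Assuming openssl is available on the host, one way to generate a 64-character hex token (Heimdall treats the token as an opaque string, so the exact format is up to you):

```shell
# Generate a random 256-bit token, hex-encoded (64 characters).
openssl rand -hex 32
```

Paste the output into both the server's auth.clients entry and, later, the agent's auth.token.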
Start the proxy after exporting the real secret value on that host:
```bash
export OPENAI_API_KEY="sk-your-real-key"
pnpm run dev
```
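For anything longer-lived than a dev session, you may prefer a service manager over `pnpm run dev` in a terminal. A minimal systemd unit sketch, assuming the repo is checked out at /opt/Heimdall and the secret lives in a root-only environment file; the paths, unit name, and env file here are illustrative, not from the Heimdall docs, and a production start script would be preferable to the dev script if the repo provides one:

```ini
# /etc/systemd/system/heimdall-proxy.service (illustrative sketch)
[Unit]
Description=Heimdall proxy-server
After=network-online.target

[Service]
WorkingDirectory=/opt/Heimdall/proxy-server
# The env file holds OPENAI_API_KEY=... ; keep it chmod 600, owned by root.
EnvironmentFile=/etc/heimdall/secrets.env
ExecStart=/usr/bin/pnpm run dev
Restart=on-failure

[Install]
WantedBy=multi-user.target
```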
The panel is optional, but it is the easiest way to manage clients, stored secrets, AWS-backed secrets, and audit logs at runtime. To enable it, add or uncomment the panel section in proxy-server/config/server-config.yaml.
```yaml
panel:
  enabled: true
  port: 9090
  host: "127.0.0.1"
  dbPath: "data/heimdall.db"
  defaultAdminPassword: "change-me-immediately"
  sessionTtlHours: 24
  encryptionKeyFile: "data/encryption.key"
```
Keep panel.host on 127.0.0.1, start the server, then open http://127.0.0.1:9090/panel/ in your browser.
The upstream deployment guide recommends SSH port-forwarding instead of exposing the panel broadly.
```bash
ssh -L 9090:127.0.0.1:9090 your-user@proxy.example.com
```
After the tunnel is open, browse to http://127.0.0.1:9090/panel/ locally.
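If you open this tunnel often, an SSH client config entry saves retyping the forward. The host alias is a placeholder of my choosing; the hostname and user match the example command above:

```
# ~/.ssh/config
Host heimdall-panel
    HostName proxy.example.com
    User your-user
    LocalForward 9090 127.0.0.1:9090
```

With that in place, `ssh heimdall-panel` opens the same forward.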
Sign in as the admin user with the password you set in panel.defaultAdminPassword, and change it immediately. Next, copy proxy-server/certs/ca.crt from the server to the developer machine as something like heimdall-ca.crt.
Use a release archive if one is published for your team, or build heimdall-local-agent from source.
```bash
cd local-agent
cargo build --release
cp config/agent-config.example.yaml config/agent-config.yaml
```
Then edit config/agent-config.yaml so it points at your tunnel host and uses credentials that match an entry under auth.clients on the proxy.
```yaml
server:
  host: "proxy.example.com"
  port: 8443
  ca_cert: "/path/to/heimdall-ca.crt"

auth:
  machine_id: "dev-machine-01"
  token: "some-secure-token-here"

transparent:
  enabled: true
  host: "0.0.0.0"
  port: 19443
  method: "auto"
```
Run the install step with elevated privileges so Heimdall can install the CA certificate and enable transparent interception.
```bash
heimdall-local-agent install \
  --config ./agent-config.yaml \
  --ca-cert /path/to/heimdall-ca.crt
```
If you built from source instead of using a packaged binary, use:
```bash
target/release/heimdall-local-agent install \
  --config config/agent-config.yaml \
  --ca-cert /path/to/heimdall-ca.crt
```
You can run the uninstall subcommand later to reverse these changes. On Linux, transparent interception is set up with iptables and ip6tables rules; root-owned client processes are excluded to avoid tunnel loops. Then start the agent:
```bash
heimdall-local-agent run --config ./agent-config.yaml
```
If you built from source instead of using a packaged binary, use:
```bash
target/release/heimdall-local-agent run --config config/agent-config.yaml
```
With the transparent install in place, apps should work without setting HTTPS_PROXY per process. Verify with the placeholder token shown in the repo docs:
```bash
curl -4 https://api.openai.com/v1/models \
  -H "Authorization: Bearer __OPENAI_API_KEY__"

curl -6 https://api.openai.com/v1/models \
  -H "Authorization: Bearer __OPENAI_API_KEY__"
```
Helpful extra commands from the local-agent docs:
```bash
heimdall-local-agent test --config config/agent-config.yaml
heimdall-local-agent status
```
The current repo examples use placeholders such as __OPENAI_API_KEY__. Your app or agent should send that placeholder value, and the proxy replaces it only when the destination is allowed by allowedDomains.
```
Authorization: Bearer __OPENAI_API_KEY__
```
That means your AI agent never gets the real secret in its prompt, memory, or logs.
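Conceptually, the rewrite the proxy performs on the outgoing request is a string substitution gated by allowedDomains. A local sketch of that substitution, for illustration only; the real replacement happens inside proxy-server, never on the developer machine:

```shell
# The header exactly as the client sends it, placeholder included:
req='Authorization: Bearer __OPENAI_API_KEY__'

# What proxy-server does, in effect, when the destination domain is allowed:
OPENAI_API_KEY="sk-your-real-key"
printf '%s\n' "$req" | sed "s/__OPENAI_API_KEY__/$OPENAI_API_KEY/"
# -> Authorization: Bearer sk-your-real-key
```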
Once the transparent flow is working, these upstream docs cover the deeper operator paths.
- Explicit proxy guide: per-process proxy routing instead of machine-wide interception.
- Deployment guide: recommended topology, panel hardening, backups, and production-style operating guidance.
- Local-agent docs: packaging, services, health checks, and workstation-specific lifecycle commands.