This looks really good - the CLI interface design is solid, and I especially like the secrets / network proxy pattern - but the thing it needs most is copiously detailed documentation about exactly how the sandbox mechanism works - and how it was tested.
There are dozens of projects like this emerging right now. They all share the same challenge: establishing credibility.
I'm loath to spend time evaluating them unless I've seen robust evidence that the architecture is well thought through and the tool has been extensively tested already.
My ideal sandbox is one that's been used by hundreds of people in a high-stakes environment already. That's a tall order, but if I'm going to spend time evaluating one the next best thing is documentation that teaches me something about sandboxing and demonstrates to me how competent and thorough the process of building this one has been.
UPDATE: On further inspection there's a lot that I like about this one. The CLI design is neat, it builds on a strong underlying library (the OpenAI Codex implementation) and the features it does add - mainly the network proxy being able to modify headers to inject secrets - are genuinely great ideas.
> There are dozens of projects like this emerging right now. They all share the same challenge: establishing credibility.
Care to elaborate on the kind of "credibility" to be established here? All these bazillion sandboxing tools use the same underlying frameworks for isolation (e.g., eBPF, Landlock, VMs, cgroups, namespaces) that are already credible.
The problem is that those underlying frameworks can very easily be misconfigured. I need to know that the higher level sandboxing tools were written by people with a deep understanding of the primitives that they are building on, and a very robust approach to testing that their assumptions hold and they don't have any bugs in their layer that affect the security of the overall system.
Most people are building on top of Apple's sandbox-exec which is itself almost entirely undocumented!
I'm sure 100% of them are vibe coded. We were all wondering where this new era of software was, and now it's here: a bunch of nominally different tools that all claim to do the same thing.
I'm thinking the LocalLLM crowd should set their LLMs to trying to demolish these sandboxes.
I agree to some extent. I'm using the OpenAI Codex crates for sandboxing though, which I think are properly tested? They launched last year and iterated many times. I will add a note though, thanks!
Compare with and steal any ideas you like from mine. I've got a semi-decent curl|bash pattern covered, and I also added network filtering via pasta (which may be more robust than rolling your own). https://github.com/reubenfirmin/bubblewrap-tui
Ohh! Thanks for sharing this. You are using a DNS proxy, which is interesting and useful if a process doesn't respect the HTTPS_PROXY/HTTP_PROXY/etc. env vars that I'm injecting. I will take a look, very interesting.
Oh wow, this looks nicely done! It's also nice that it's cross platform. I've done something similar with https://github.com/Gerharddc/litterbox which takes things a bit further by allowing you to easily sandbox your entire development environment (i.e. IDE and everything) using containers. Unfortunately I have not gotten around to the network sandboxing part though, that seems very tricky to get useful without being too "annoying".
Hey - I'd love for you to add a documented / standard way to use this inside Docker so we can build on it for various agentic efforts. I've solved getting bubblewrap to work inside Docker once for the nanobot project, but the folks there are dragging their feet on incorporating sandboxing.
I've been testing this on Docker today, including the credential injection, env vars, net calls control. I will add more docs but one interesting use case would be to have something like `zerobox --profile nanoclaw -- nanoclaw`, or something similar.
I'll give it a shot later today, but basically you need a pretty specific seccomp profile (see my example - I pulled from the podman repo) to allow bubblewrap to run inside unprivileged Docker.
It's terrific to see this. I'm definitely going to give it a whirl. I've been working on a specific JavaScript isolate[^1]. This is a great source of inspiration for it.
Linux by default allows all users to read CLI arguments of running processes. While it looks like your bwrap invocation prevents the sandbox from looking at this process (--unshare-pid), any other process running on your system can read the secret.
That's true and the expected behaviour but I see your point. The example there is not great, I should've used `sk_s123...` to show that you are passing the env var to the sandbox as opposed to setting it on the host, then proxying it. I will update it.
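The leak described above is easy to demonstrate: on Linux, `/proc/<pid>/cmdline` is world-readable, so any process on the host can see another process's arguments. A minimal sketch (the `--api-key` flag and the `sk_fake123` value are made up for illustration):

```python
import subprocess, sys, time

# Spawn a long-running child with a fake "secret" in its argv.
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)", "--api-key=sk_fake123"]
)
try:
    time.sleep(0.5)  # give the child a moment to finish exec'ing
    # /proc/<pid>/cmdline is readable by every user on the host;
    # the arguments are NUL-separated.
    with open(f"/proc/{proc.pid}/cmdline", "rb") as f:
        args = f.read().split(b"\0")
    print(args)
finally:
    proc.kill()
```

This is why secrets belong in env vars (which live in `/proc/<pid>/environ`, readable only by the same user and root) or, better, behind the proxy-injection pattern, rather than on the command line.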
Very interesting. I just started researching this topic yesterday to build something for adjacent use cases (sandboxing LLM authored programs). My initial prototype is using a wasm based sandbox, but I want something more robust and flexible.
Some of my use cases are very latency sensitive. What sort of overhead are you seeing?
Wasm sandboxes are fast for pure compute but get painful the moment LLM code needs filesystem access or subprocess spawning. And it will, constantly. Containers with seccomp filters give you near-native speed and way broader syscall support — overhead is basically startup time (~2s cold, sub-second warm). For anything IO-heavy it's not even close. We're doing throwaway containers at https://cyqle.in if anyone's curious.
Again, it's blacklisting, so kind of impossible to get right. I've looked at this many times, but in order for things to properly work, you have to create a huge, huge sandbox file.
Especially for an application that uses any kind of Apple framework.
I'd feel safer with default-deny on reads as well, but I know from past experience that this gets tricky fast - tools like Node.js and uv and Python all have a bunch of files they need to be able to read that you might not predict in advance.
Might still be possible to do that in a DX-friendly way though, if you make it easy to manually approve reads the first time and use that to build a profile that can be reused on subsequent command invocations.
That being said, what should the default DX be? What paths should be denied by default? That's something I've been thinking about and I'd love to hear your thoughts.
That's a really tough question. I always worry about credentials that are tucked away in ~/.folders in my home directory like in ~/.aws - but you HAVE to provide access to some of those like ~/.claude because otherwise Claude Code won't work.
That's why rather than a default set I'm interested in an option where I get to approve things on first run - maybe something like this:
```
zerobox --build-profile claude-profile.txt -- claude
```
The above command would create an empty claude-profile.txt file and then give me a bunch of interactive prompts every time Claude tried to access a file, maybe something like:
```
claude wants to read ~/.claude/config.txt
A) allow that file, D) allow full ~/.claude directory, X) exit
```
You would then clatter through a bunch of those the first time you run Claude and your decisions would be written to claude-profile.txt - then once that file exists you can start Claude in the future like this:
```
zerobox --profile claude-profile.txt -- claude
```
(This is literally the first design I came up with after 30s of thought, I'm certain you could do much better.)
Fantastic! I like that idea. I'm also exploring an option to define profiles, but also to have predefined profiles that ship with the binary (e.g. a Claude profile, blocking all `.env` reads, etc.)
The `--build-profile` / `--profile` thing is a good idea, but typically you'd want to just save all of the access that the program does without prompting.
Programs will access many files and directories on startup, and it would be extremely tedious to have to manually approve each one. So you'd auto-approve all and save them to the profile. This is the TOFU (trust on first use) principle applied to sandboxing. The assumption being: "the first time I run it naked, it's unlikely to do anything malicious, so let me enforce that behavior for the future."
Let the user play with the app and after they exit the profile should contain all of the access attempts in a human readable format that's editable by the developer.
There might be many access attempts to folders in one directory, e.g.:
```
~/Documents/...
```
So instead of having a massive list of files it should be easy for developers to edit the profile to say, "Allow everything there", e.g. ~/Documents/*
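That collapsing step is mechanical enough to sketch. Here's a hypothetical Python helper (the threshold and rule format are invented for illustration, not Zerobox's) that turns a recorded access list into a profile, replacing a directory's entries with a `dir/*` wildcard once it accumulates several files:

```python
from collections import defaultdict
from pathlib import PurePosixPath

def collapse_profile(paths, threshold=3):
    """Collapse recorded file accesses into per-directory wildcards
    once a directory accumulates `threshold` or more entries."""
    by_dir = defaultdict(set)
    for p in paths:
        p = PurePosixPath(p)
        by_dir[str(p.parent)].add(str(p))
    rules = []
    for d, files in sorted(by_dir.items()):
        if len(files) >= threshold:
            rules.append(f"{d}/*")   # many hits: allow the whole directory
        else:
            rules.extend(sorted(files))  # few hits: list them individually
    return rules

profile = collapse_profile([
    "~/Documents/a.txt", "~/Documents/b.txt",
    "~/Documents/c.txt", "~/.claude/config.txt",
])
print(profile)  # → ['~/.claude/config.txt', '~/Documents/*']
```

A real implementation would also want to distinguish reads from writes and handle nested directories, but the human-editable output has the same shape.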
Thanks for sharing that. Zerobox _does_ use the native OS sandboxing mechanisms (e.g. seatbelt) under the hood. I'm not trying to reinvent the wheel when it comes to sandboxing.
Re the URLs, I agree, that's why I added wildcard support, e.g. `*.openai.com` for secret injection as well as network call filtering.
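For reference, `fnmatch`-style wildcard matching for hostnames is simple to sketch. This is an illustrative version, not Zerobox's code; whether the bare apex `openai.com` should match `*.openai.com` is a policy choice I've assumed here:

```python
from fnmatch import fnmatch

def host_allowed(host, patterns):
    """Check a hostname against allow-list patterns like '*.openai.com'."""
    for pat in patterns:
        if fnmatch(host, pat):
            return True
        # Convenience rule (assumption): a '*.example.com' pattern
        # also allows the bare apex 'example.com'.
        if pat.startswith("*.") and host == pat[2:]:
            return True
    return False

print(host_allowed("api.openai.com", ["*.openai.com"]))   # → True
print(host_allowed("evil-openai.com", ["*.openai.com"]))  # → False
```

Note the literal `.` in the pattern is what stops lookalike domains such as `evil-openai.com` from matching.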
You know, the thing is, it is super easy to create such tools with AI nowadays. And if you create your own, you can avoid these unnecessary abstractions. You get exactly what you want.
Zerobox creates a cert in `~/.zerobox/cert` on the first proxy run and reuses that. The MITM proxy uses that cert to intercept the calls, inject secrets, etc. This is actually done by the underlying Codex crate.
Yeah, but how does the sandboxed process “know” that it has to go through the proxy? How does it trust your certificate? Is the proxy fully transparent?
Great question! On Linux, yes, network namespaces enforce that and all net traffic goes through the proxy. Direct connections are blocked at the kernel level even if the program ignores proxy env vars, but I will test this case a bit more (unsure how to though, most network calls would respect HTTPS_PROXY and other similar env vars).
That being said, the default behaviour is no network, so nothing will be routed if it's not allowed regardless of whether the sandboxed process respects env vars or not.
On macOS, the proxy is best effort. Programs that ignore HTTPS_PROXY/HTTP_PROXY can connect directly. This is a platform limitation (macOS Seatbelt doesn't support forced proxy routing).
BUT, the default behaviour (no net) is fully enforced at the kernel level. Domain filtering relies on the program respecting proxy env vars.
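The "relies on the program respecting proxy env vars" caveat is worth spelling out: well-behaved HTTP clients consult `HTTPS_PROXY`/`HTTP_PROXY`, but nothing forces them to. A quick Python illustration of the cooperative side (the proxy address is a placeholder):

```python
import os
import urllib.request

# Clear any pre-existing lowercase setting, then point HTTPS traffic
# at a (placeholder) local proxy via the conventional env var.
os.environ.pop("https_proxy", None)
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:8080"

# Cooperative clients discover the proxy from the environment...
proxies = urllib.request.getproxies()
print(proxies["https"])  # → http://127.0.0.1:8080

# ...but a program that opens a raw socket never consults these
# variables at all, which is why kernel-level enforcement
# (network namespaces on Linux) matters.
```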
It does, but because I'm inheriting the Seatbelt settings from Codex, I'm not resetting them in Zerobox (I thought it was the safer option). Let me look into this; there should be a way to take Codex's profile and safely combine/modify it.
Forgot about that, was mostly thinking about how AI agents with unrestricted permissions would ideally have some external logging and monitoring, so there would be a record of what it touched. A trace has all of the raw information, so some kind of wrapper around that would be useful.
I think there is still a valid case for sandbox logs/otel. strace would give you the syscalls/traces but not _why_ a particular call was blocked inside the sandbox (i.e. the decision-making bit).
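To make the distinction concrete: strace records *that* a `connect()` failed, while a sandbox-level audit record can carry the policy decision. A hypothetical structured log entry (the field names and rule identifiers are invented for illustration):

```python
import json
import time

def decision_record(action, target, allowed, rule):
    """Emit one audit record: not just the operation, but which
    policy rule produced the allow/deny decision."""
    return json.dumps({
        "ts": time.time(),
        "action": action,    # e.g. "net.connect", "fs.read"
        "target": target,    # e.g. "example.com:443"
        "allowed": allowed,
        "rule": rule,        # e.g. "default-deny-net"
    })

entry = decision_record("net.connect", "example.com:443", False, "default-deny-net")
print(entry)
```

Records in this shape map straightforwardly onto OTel spans/events, so the same data could feed both a local log and external monitoring.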
This is more a criticism of Codex's Linux sandboxing, which you're just wrapping, but it's the first time I've ever looked at it. I don't see how it makes sense to invoke bwrap as a forked subprocess. Bubblewrap can't do anything beyond what you can do with unshare directly, which you can simply invoke as a system call without needing to spawn a subprocess or requiring the user to have bwrap installed. It kind of reeks of amateur hour when developers effectively just translate shell scripts into compiled languages by using whatever variant of "system" is available to make the same command invocations you would make through a shell, as opposed to actually using the system call API. Especially when the invocation is crafted from user input, there's a long history of exploits arising from stuff like this. Writing it in Rust does nothing for you when you're just using Rust to call a different CLI tool that isn't written in Rust.
Thanks for sharing this, I read your comment multiple times. It's true that the program being written in Rust doesn't solve the problem of spawning subprocesses, but what would the alternative be in that case?
I like tools like this, but they all seem to share the same underlying shape: take an arbitrary process and try to restrict it with OS primitives + some policy layer (flags, proxies, etc).
That works, but it also means correctness depends heavily on configuration, i.e. you're starting with a lot of ambient authority and trying to subtract from it, and enforcement ends up split across multiple layers (kernel, wrapper, proxy).
An alternative model is to flip it: Instead of sandboxing arbitrary programs, run workflows in an environment where there is no general network/filesystem access at all, and every external interaction has to go through explicit capabilities.
In that setup, there's nothing to "block" because the dangerous primitives aren't exposed, and execution can be deterministic/replayable, so you can actually audit behavior. Thus, secrets don't enter the execution context; they're only used at the boundary.
It feels closer to capability-based systems than traditional sandboxing. Curious how people here think about that tradeoff vs OS-level sandbox + proxy approaches.
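A toy sketch of what that flipped model looks like in code. Python can't actually revoke ambient authority, so this only illustrates the programming model, not an enforcement mechanism; the class and function names are made up:

```python
import tempfile

class ReadCap:
    """A capability: an unforgeable handle to one specific file."""
    def __init__(self, path):
        self._path = path
    def read(self):
        with open(self._path) as f:
            return f.read()

def workflow(cap: ReadCap) -> int:
    # The workflow is handed explicit capabilities; it never calls
    # open() on arbitrary paths, so there is nothing to "block".
    return len(cap.read())

# The boundary code (outside the workflow) mints the capability.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello")
    path = f.name

print(workflow(ReadCap(path)))  # → 5
```

Real capability systems (or a Wasm host with explicit imports) enforce this at the boundary rather than by convention, which is the crux of the tradeoff being discussed.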
Zerobox uses the same kernel mechanisms (namespaces + seccomp) but with no daemon, no root, and a ~10ms cold start (Docker is much worse in that regard).
Docker gives you full filesystem isolation and resource limits. Zerobox gives you granular file/network/credential controls with near-zero overhead. You can in fact use Zerobox _inside_ Docker (e.g. for secret management).
I'd love to learn more please. I'm interested in sandboxing AI tools/agents regardless of the underlying mechanism (I explored Firecracker VMs briefly as well, terrible cross platform support though).
Thanks and agreed! Zerobox uses the Deno sandboxing policy and also the same pattern for cred injection (placeholders as env vars, replaced at network call time).
Real secrets are never readable by any process inside the sandbox.
Do you know if there's a widely shared name for this pattern? I've been collecting examples of it recently - it's a really good idea - but I'm not sure if there's good terminology. "Credential injection" is one option I've seen floating around.
simonw, I have been seeing "credential injection" and "credential tokenizing" (a la tokenizer: https://github.com/superfly/tokenizer). I'm also seeing credential "surrogates" mentioned.
I am currently working on a mitm proxy for use with devcontainers to try to implement this pattern, but I'm certainly not the only one!
Not sure. I took this idea from the Deno sandboxing docs. They do the exact same thing, with a different sandboxing mechanism though (I think Deno has its own way of sandboxing subprocesses).
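The placeholder pattern itself is simple at the proxy boundary: the sandboxed process only ever sees an opaque token, and the proxy substitutes the real value into outgoing request headers. A minimal sketch (function and token names are illustrative, not Zerobox's actual API):

```python
def inject_secrets(headers, placeholders):
    """Replace placeholder tokens with real secrets in outgoing
    request headers, at the proxy boundary only."""
    out = {}
    for name, value in headers.items():
        for token, real in placeholders.items():
            value = value.replace(token, real)
        out[name] = value
    return out

# Inside the sandbox, OPENAI_API_KEY holds only the placeholder,
# so the process builds this header:
headers = {"Authorization": "Bearer ZEROBOX_SECRET_a1b2c3"}

# The proxy, which alone holds the mapping, rewrites it on the way out:
sent = inject_secrets(headers, {"ZEROBOX_SECRET_a1b2c3": "sk-realkey"})
print(sent["Authorization"])  # → Bearer sk-realkey
```

Since only the proxy holds the mapping, a compromised process can exfiltrate nothing but the useless placeholder.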
Agreed. I'm sure a number of these sandboxing solutions are vibe-coded, which makes your concerns regarding misconfigurations even more relevant.
Related: a direct comparison to other sandboxes and what you offer over those would be nice.
I appreciate that alternate sandboxing tools can reduce some of the heavier parts of Docker though (i.e. building or downloading the correct image).
How would you compare this tool to, say, bubblewrap? https://github.com/containers/
https://github.com/HKUDS/nanobot/pull/1940
I'd like to hear your thoughts.
[^1]: https://github.com/jonathannen/hermit
Also, I'm literally wrapping Claude with zerobox now! No latency issues at all.
The default is no network at all:
```
zerobox -- curl https://example.com
Could not resolve host: example.com
```
For example, denying reads:
```
zerobox --deny-read=/ -- cat /etc/passwd
```
MITM proxy is a nice idea to avoid leaking secrets. Isn't it very brittle though? Anthropic changes some URLs and it'll break.
Something like:
```
Read file /etc/passwd
Made network call to httpbin.org
Write file /tmp/access
```
etc.? I'm really interested to hear your thoughts and I will add that feature (I need something like that, too).
Blocked calls show up in the debug output, e.g.:
```
$ zerobox --debug --allow-net=httpbin.org -- curl
2026-04-01T18:06:33.928486Z CONNECT blocked (client=127.0.0.1:59225, host=example.com, reason=not_allowed)
curl: (56) CONNECT tunnel failed, response 403
```
I'm planning on adding otel integration as well.
I'd much rather `system()`-call bwrap than re-implement bwrap, because bwrap has already been extensively tested.
Because I am worried about sandbox escapes. This is what we currently use to sandbox JS inside browsers and Node (without anything extra): https://github.com/Qbix/Platform/blob/main/platform/plugins/...
Real secrets are never readable by any processes inside the sandbox:
```
zerobox -- echo $OPENAI_API_KEY
ZEROBOX_SECRET_a1b2c3d4e5...
```
I am currently working on a mitm proxy for use with devcontainers to try to implement this pattern, but I'm certainly not the only one!