I've written roughly 10k lines of Ansible playbooks and roles to fully automate setting up servers to deploy Docker-based web apps, so I do like the concept of declaring the state of a system in configuration and then having that become reality. I know NixOS is not directly comparable to Ansible, but in general I think IaC is a good idea.
It was important to me that my dotfiles work on a number of systems, so I avoided NixOS. For example, the command line version works on Arch, Debian, and Ubuntu-based distros, along with WSL 2 and macOS. The desktop version works on Arch and Arch-based distros.
Beyond that, I also use my dotfiles on 2 different Linux systems, so I wanted a way to differentiate certain configs per machine. I also have a company-issued laptop running macOS where I want everything to work, but it's a managed device, so I can't go hog wild with full system-level management.
Beyond that, since I make video courses, I wanted to make it easy for anyone to replicate my setup if they wanted, but also make it super easy for them to personalize any part of the setup without forking my repo (though they can still fork it if they want).
All of the above was achievable with shell scripts and symlinks. I might be wrong since I didn't research it in depth, but I'm not sure NixOS can handle all of the above use cases in an easy-to-configure manner.
To make your Nix-based setup reproducible across different OSes (Arch, Debian, Ubuntu, WSL 2, macOS, and NixOS), with an extensible base config that can be customized for different situations, the go-to framework is home-manager (not NixOS, which only works on NixOS or NixOS on WSL 2).
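For the uninitiated, bootstrapping standalone home-manager on a non-NixOS host looks roughly like this (a sketch of the flake-based install from the home-manager manual; it assumes Nix is already installed with the flakes feature enabled):

nix run home-manager/master -- init --switch
# generates ~/.config/home-manager/home.nix plus a flake.nix and activates them
home-manager switch
# re-run after editing home.nix to apply changes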
I have never been more stress-free than when I was running NixOS as a daily driver. I had to return to macOS as my primary unfortunately, but I still use Nix as much as possible.
You are screwed either way. If you don't update, your container has a ton of known security issues; if you do, the container is not reproducible. Reproducibility is neat, with some useful security benefits, but it is something of a non-goal if the container is more than a month old; a day might even be a better max age.
So if I have a Docker container which needs a handful of packages, how would you handle it?
I'm handling it by using a slim Debian or Ubuntu image, then using apt to install these packages with the necessary dependencies.
For everything easy, like one basic binary, I use the most minimal image, but as soon as it gets just a little bit annoying to set up and maintain, I start using apt and a nightly build of the image.
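A minimal sketch of that pattern (the package name and version are placeholders; pins like this only hold while the package remains in the archive):

FROM debian:bookworm-slim
# Pin exact versions so rebuilds pull the same packages
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    curl=7.88.1-10+deb12u12 \
    && rm -rf /var/lib/apt/lists/*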
If you are on GitHub/GitLab, Renovate bot is a good option for automating dependency updates via PRs while still maintaining pinned versions in your source.
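A minimal renovate.json to get that behavior could look like this (both presets are documented Renovate presets; tweak to taste):

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:recommended",
    "docker:pinDigests"
  ]
}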
I know it's an anti-pattern, but what is the alternative if you need to install some software? Pulling its tagged source code and compiling everything with gcc?
Run “nix flake update”. Commit the lockfile. Build a Docker image from that; the software you need is almost certainly there, and there’s a handy Docker helper in nixpkgs (dockerTools).
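Concretely, the workflow is something like the following, assuming your flake exposes an image built with pkgs.dockerTools.buildImage under a hypothetical dockerImage output:

nix flake update                 # refresh flake.lock to current nixpkgs
git add flake.lock && git commit -m "flake: bump inputs"
nix build .#dockerImage          # hypothetical output using dockerTools.buildImage
docker load < result             # load the image tarball Nix produced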
Recently I’ve been noticing that Nix software has been falling behind. So “the software you need is almost certainly there” is less true these days. Recently = April 2026.
Are you referring to how the nixpkgs-unstable branch hasn't been updated in the past five days? Or do you have some specific software in mind? (not arguing, just curious)
It’s a variety of different software that just isn’t updated very often.
I don’t mind being somewhat behind, but it seems like there are a lot of packages that don’t get regular updates. It’s okay to have packages that aren’t updated, but those packages should be clearly distinguishable.
> the old snapshot has security holes attackers know how to exploit.
So does running `docker build` when the `RUN apt update` line gets a cache hit; the only difference is that the latter is silent.
The problem solved by pinning to the snapshot is not to magically be secure, it's knowing what a given image is made of so you can trivially assert which ones are safe and which ones aren't.
In both cases you have to rebuild an image anyway so updating the snapshot is just a step that makes it explicit in code instead of implicit.
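For example, with a Debian base the snapshot date becomes an explicit, committable line in the Dockerfile (the date and package below are placeholders):

FROM debian:bookworm-slim
# Replace the default sources with a fixed archive snapshot; bump the date to update.
# check-valid-until=no is needed because snapshot Release files have expired.
RUN rm -f /etc/apt/sources.list.d/*.sources && \
    echo 'deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20240101T000000Z bookworm main' > /etc/apt/sources.list && \
    apt-get update && apt-get install -y --no-install-recommends curl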
Where does the apt update connect to? If it is an up-to-date package repo, you get fixes. However, there are lots of reasons it might not be. You'd better know, if this is your plan.
I don't really see how that's different from a normal binary install of a reproducible package. Especially given the lacking quality of a lot of Nix packages.
I disagree with that as a hard rule and with the opinion that it's an anti-pattern. Reproducible containers are fine, but not always necessary. There's enough times when I do want to run apt-get in a container and don't care about reproducibility.
It's to solve issues like these that I am using and running StableBuild.
It is a managed service that keeps a cached copy of your dependencies at a specific time.
You can pin your dependencies within a Dockerfile and have reproducible docker images.
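Independent of any particular service, the generic version of that idea is to pin the base image by digest instead of a floating tag (substitute a real digest; the placeholder below is not valid as written):

# Resolves to exactly one image, forever, unlike a tag such as ubuntu:24.04
FROM ubuntu@sha256:<digest>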
Reproducible images are one of those features where the payoff is mostly emotional until the day it isn't. We had an incident where two supposedly identical images on two machines had a three-byte delta in a timestamp, and it cost us an afternoon to bisect from the wrong end. Boring win, but a real one.
Gill probably already knows this, but for the uninitiated: something logged in, did a thing to potentially every container, and then deleted any sign of it doing the thing.
All that's left is a single timestamp of a log or something getting deleted.
A totally unrelated comment, but: there is an animation on that page that moves practically everything on the page about 20 pixels down over the course of 1 second.
I thought that would completely trash the Cumulative Layout Shift core web vital. Because, hey! the layout is shifting in front of my very eyes. But no, the CLS on the page is 0.
It's happening as a result of a deliberate animation. The CLS metric relates to initial render. So yes, there is layout shift, but it's not CLS per se.
It's just that the spirit of Google's core web vitals has been to measure the properties of a web page that have the most impact on users: how quickly content appears on a page, how visually stable the content is, and how long it takes the page to respond to an interaction.
In the case of this page, I don't think it can be considered visually stable at all in the first second after it's loaded.
This is a really interesting accomplishment - I am also working heavily on reproducible builds for my firmware projects, and, lo and behold, the package manager key administrivia is the final bone to be broken.
I wonder if Arch leading the way on this will prompt other distros to attempt the same feat. Reproducible builds are important for certification, security, and safety-critical applications; it'd be great to see Linux distros become more conformant to this method.
This is a huge accomplishment! But it wouldn't be so huge if compilers were trivially deterministic. It took 5 decades of development for compilers to get here. I'm sure ChatGPT in 2073 is going to be more deterministic than it was in 2023.
I ran Arch Linux in WSL 2 for almost a year; it was really good.
Then I ran Arch natively for ~5 months; it's really good.
Now I still run Arch natively, but I also use the Arch Docker image to test my dotfiles[0] with a fresh file system.
Also, when I want to run end-to-end tests of my dotfiles that set up a complete desktop environment, I run Arch in a VM.
I have 99 problems but running Arch isn't one of them.
[0]: https://github.com/nickjj/dotfiles
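For the Docker part, a minimal fresh-filesystem smoke test could be as simple as this (not the author's actual harness; the mount path is illustrative):

docker run --rm -it -v "$PWD:/dotfiles" archlinux:latest
# then run the dotfiles install script from /dotfiles inside the container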
https://github.com/nix-community/home-manager
(and, also presumably, that you do Crossfit, etc.)
You meet a vegan crossfitter who uses Arch: which one do they tell you about first?
Build your container/vm image elsewhere and deploy updates as entirely new images or snapshots or whatever you want.
Personally I prefer Buildroot, and consider the VM as just another target for embedded OS images.
# Pinned-tag base; consider pinning by digest for stronger guarantees
FROM ubuntu:24.04
# Copy a prebuilt, self-contained binary out of another image instead of building it here
COPY --from=ghcr.io/owner/image:latest /usr/local/bin/somebinary /usr/local/bin/somebinary
CMD ["somebinary"]
Not as simple when you need shared dependencies
Software component or image? Both should be version pinned for auditing.
NIX FIXES THIS.
https://reproducible-builds.org/
Closely related is the Bootstrappable Builds community:
https://bootstrappable.org/
Is CLS a misleading metric then?
CLS measures the total sum of layout shifts over the entire lifespan of a page, not just during the initial render.
And it's not unexpected, because it comes from a CSS transition.
And yet, core web vitals cannot demonstrate this.