Ron from Flox here, woke up to feed a brand new 3-day-old to see this here! On about 3 hours of sleep (over the last 48 hours) but excited to try and answer some questions! Feel free to also drop any below <3
We did just launch this last week after a good bit of work from the team. Steve wrote up a deeper technical dive here if anyone is interested - https://flox.dev/blog/kubernetes-uncontained-explained-unloc...
congrats on the little one, here’s to many wonderful moments.
online community love was not in my cards going into day 3 of a newborn but I'll take it + definitely needed! thank you!
I used to love both Kubernetes and Nix. But after a few years of using both, I felt like the abstraction levels were a bit too deep.
Sure, it's easy to stand up a mail server in NixOS, or to just use docker/kubernetes to deploy stuff. But after a few years it felt like I don't have a single understanding of the stack. When shit hits the fan, it makes it very difficult to troubleshoot.
I am now back on running my servers on FreeBSD/OpenBSD and jails or VMM respectively. And also dumbing the stack down to just "run it in a jail, but set it up manually".
The only outlier is Immich. For some reason they only officially support the docker images but not a single clear instruction on how to set it up manually. Sure, I could look at the Dockerfiles, but many of the scripts also expect docker to be present.
And now that FreeBSD also has reproducible builds, that removes one more reason to reach for Nix.
Going to sound weird, but with both my hats on I super appreciate this perspective. I can only speak to some areas of Nix and Flox, obviously, and I know folks are looking into doing this a whole lot better, to your point: zooming in way more on solving for those of us who just want to run something and fix it fast when it breaks.
Also, think it's a huge ecosystem win for FreeBSD pushing on reproducibility too. I think we are trending in a direction where this just becomes a critical principle for certain stacks. (also needed when you dive into AI stacks/infra...)
Yes, but I also think that the BSDs will be the last bastions free of any AI usage. And I for one am grateful for that.
I like it when my system comes with a complete set of manpages and good docs.
But you mentioned Flox, which I didn't even know about. At first I thought that's what they renamed the Nix fork to after the schism, but now I see it's a paid product and yuck... it just further deepens my belief in going for more bare-bones manual control, even if it's sometimes bothersome.
Kubernetes can be a godsend at larger orgs.
We have six dev teams and are just about done with migrating to k8s. It's an immense improvement over what we had before.
It's a version of Greenspun's tenth rule: "Any sufficiently complicated distributed system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Kubernetes."
When I worked on an enterprise data analytics platform, a big problem was docker image growth. People were using different python versions, different cuda versions, all kinds of libraries. With Cuda being over a gigabyte, this all explodes.
The solution was to decompose the docker images and make sure that every layer is hash equivalent, so that if people update their Cuda version, it doesn't result in a change to the Python layers.
But it looks like Flox now simplifies this via Nix. Every Nix package already has a hash and you can combine packages however you would like.
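For anyone who hasn't run into it, nixpkgs' dockerTools.buildLayeredImage is one concrete way to get that behavior today: each store path (plus its closure) goes into its own layer, so bumping one package only touches the layers that actually contain it. A minimal sketch, with illustrative package picks rather than the exact stack described above:

    # sketch only; build with `nix-build` and `docker load` the resulting tarball
    { pkgs ? import <nixpkgs> { } }:

    pkgs.dockerTools.buildLayeredImage {
      name = "analytics-runtime";   # hypothetical image name
      tag = "latest";

      # Each store path ends up in its own layer, so swapping one package
      # only invalidates the layers it actually appears in.
      contents = with pkgs; [
        python311
        python311Packages.numpy
        # cudaPackages.cudatoolkit  # the gigabyte-sized dependency gets its own layer
      ];

      config.Cmd = [ "${pkgs.python311}/bin/python3" ];

      # Layer budget; leftover paths get grouped into the final layer.
      maxLayers = 100;
    }

Because the layers are built reproducibly from store paths, images that share packages tend to share layers as well.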
Yes, this hits the nail on the head. We’ve seen the same explosion in image size and rebuild complexity, especially with AI/ML workloads where Python + CUDA + random pip wheels + system libs = image bloat and massive rebuilds.
With the Kubernetes shim, you can run the hash-pinned environments without building or pulling an image at all. It starts the pod with a stub, then activates the exact runtime from a node-local store.
I was an early and enthusiastic adopter of docker. I really liked how it would let me use layers to keep track of dependency between files.
After spending a few years using nix, the docker image situation looks pretty bonkers. If two files end up in separate layers, the system assumes a dependency: if the lower file changes, you have to build and store a separate copy of the higher one, just in case there's an actual dependency there.
Within nix you can be more precise about what depends on what, which is nice, but you do have to be thoughtful about it or you can summon the same footgun that got you with docker, just in smaller form. A nix derivation, while a box with nicely labeled inputs and outputs, is still a black box: if you pass a readme as an input to a derivation that does a build, nix will assume the compiled binary depends on it, and when you fix a typo in the readme and rebuild, you'll end up with a duplicate binary in the nix store even though the contents of the binary don't actually depend on the text of the readme.
> you can combine packages however you would like
So this is true, more or less, but be aware that while nix lets you do this in ways that don't force needless duplication, it doesn't force you to avoid that duplication. Things carelessly packaged with nix can easily recreate the problem you mentioned with docker.
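To make the readme footgun concrete, here's a hedged sketch (hypothetical package, not anyone's real code): the first derivation folds the readme into the same derivation that compiles the program, so a typo fix changes that derivation's hash and rebuilds the binary; splitting the docs into their own derivation and composing with symlinkJoin avoids that.

    { pkgs ? import <nixpkgs> { } }:
    rec {
      # Careless: README.md is an input of the derivation that also builds
      # the program, so editing it changes the hash and rebuilds/duplicates
      # the binary in the store.
      careless = pkgs.runCommandCC "app-with-docs"
        { src = ./main.c; readme = ./README.md; } ''
          mkdir -p $out/bin $out/share/doc
          cc "$src" -o $out/bin/app
          cp "$readme" $out/share/doc/README.md
        '';

      # More careful: build and docs are separate derivations; editing the
      # readme only rebuilds `docs`, and the binary's store path is reused.
      app = pkgs.runCommandCC "app" { src = ./main.c; } ''
        mkdir -p $out/bin
        cc "$src" -o $out/bin/app
      '';
      docs = pkgs.runCommand "app-docs" { readme = ./README.md; } ''
        mkdir -p $out/share/doc
        cp "$readme" $out/share/doc/README.md
      '';
      composed = pkgs.symlinkJoin { name = "app-with-docs"; paths = [ app docs ]; };
    }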
The problem is that whiteouts are not commutative. If the layers you build turn out to be bit-for-bit identical they will be shared anyway, but it's much more complex than Nix, where the composition operation is commutative.
Yes, there were various attempts to do this in the container ecosystem, but there is a hard limit on the number of layers in a Docker image (because there are hard limits on overlay mounts; you don't really need to overlay all the Nix store mounts, of course, since they have different paths, but the code is written for the general case). So then there were various ways of bundling sets of packages into layers, but just managing it directly through the Nix store is much simpler.
https://github.com/pdtpartners/nix-snapshotter/blob/main/doc...
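To make the commutativity contrast concrete from the Nix side (package choices arbitrary, just an illustration): composing an environment is a union of distinct store paths, nothing shadows or whiteouts anything else, so the order you list packages in doesn't change what you end up with, whereas OCI layers have to be applied in a fixed order.

    { pkgs ? import <nixpkgs> { } }:
    {
      # Each package keeps its own /nix/store/<hash>-... prefix, so composing
      # an environment is just a union of paths; reordering `paths` yields the
      # same symlink tree (assuming no file collisions between the packages).
      envA = pkgs.buildEnv { name = "demo-env"; paths = [ pkgs.ripgrep pkgs.jq ]; };
      envB = pkgs.buildEnv { name = "demo-env"; paths = [ pkgs.jq pkgs.ripgrep ]; };

      # Contrast: stacking OCI layers is order-sensitive, because an upper
      # layer can overwrite or whiteout files from a lower one.
    }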
Too bad this isn't open source, I'm 3/4ths of the way through building pretty much this exact product in order to support my actual products.
Is it not GPL?
The license file in their github seems to indicate that it is. https://github.com/flox/flox?tab=GPL-2.0-1-ov-file
Cool if so, I didn't see it prominently linked or mentioned on the landing page. Maintainers: being open source is a big feature, mention it prominently and have your repo links front and center.
How does this differ from the tooling that lets you build containers from nix?
Jotting down a few quick thoughts here but we can totally go deep. This is something Michael Brantley started working on a few months ago to test out how to make it super easy to use and leverage the existing Nix & Flox architecture. One of the core differences, from my quick perspective, is that it specifically leverages the unique way that Flox environments are rendered without performing a nix evaluation, making it safe and performant for the k8s node to realize the packages directly on the node, outside of a container.
I read this a few times but there's no info.
seems similar to this
https://github.com/pdtpartners/nix-snapshotter
so kind of allowing image pulls straight from the nix store, mounting a shared host nix store per node into each container, fast incremental rebuilds, and generating basic pod configs - those are good things.
and local, CI, and remote runs use the same flows and envs.
There was also Nixery paving the way