I think they explain a compelling problem about typical commercial software vs FOSS, then they dive into their GPU accelerated VM solution. I don't see how it helps solve the original problem.
Is it that FOSS needs a standard sandbox, and they think some kind of peer to peer app store that distributes VM images is the way to do it?
We work on GPU accelerated VMs so that in the future we can also bring NixOS + VPNs to desktops/end users, to machines that don't run NixOS. We will use it as an application runtime where we can control the whole stack. Right now we are mostly focused on managing distributed NixOS machines. The VPN helps to provide services on any kind of computer, even one not running in a datacenter. You can read the description here for context: https://docs.clan.lol/
Maybe I'm in the same boat as people who didn't get docker before it was popular, but this seems really convoluted to me... is there really a market for this? Why do other existing things not solve this problem?
There is huge demand right now to create sandboxes for agents. VMs are one way, and Clan is one solution for VM management.
Maybe they are not the right solution, but they are working on the right problem.
Of course, they don't say the focus is on agents, but if the solution works for them, it doesn't matter that it was built for gamers.
P2P app distribution is cool in theory but the security model gets complex fast. Without centralized review, you're basically trusting individual developers to not ship malicious code.
Flatpak is the only FOSS solution close to building compartmentalisation?
Last I checked, docker was FOSS? Containerisation is built into Linux, does that not compartmentalise enough?
What am I missing here, the article seems wildly inaccurate, surely I've misunderstood something?
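To be concrete about what "built in" means: namespaces are the kernel primitive docker itself builds on, and you can use them directly. A minimal sketch, assuming util-linux's unshare is on the PATH, of running a command in its own user, PID, and mount namespaces:

    import subprocess

    def run_isolated(cmd):
        """Run cmd in fresh user/PID/mount namespaces via util-linux unshare.

        This gives process-level compartmentalisation (separate PID tree,
        private /proc, no real root), but it is NOT a full sandbox: the
        filesystem, network, and devices stay shared unless you also
        restrict them (bind mounts, seccomp, network namespaces, etc.).
        """
        return subprocess.run(
            [
                "unshare",
                "--user",           # new user namespace (root inside is unprivileged outside)
                "--map-root-user",  # map the current uid to root inside the namespace
                "--pid", "--fork",  # new PID namespace; fork so the child is PID 1
                "--mount-proc",     # mount a private /proc matching the new PID namespace
            ] + cmd,
            check=True,
        )

    if __name__ == "__main__":
        run_isolated(["ps", "aux"])  # inside, only the namespaced processes are visible

Whether that counts as "enough" compartmentalisation is exactly the open question: namespaces isolate processes, but permissions, portals, and packaging still have to come from somewhere.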
I think they mean for packaging and distributing applications
Is clan some kind of p2p server config management framework based on Nix?
I think this fits the description well.
I'm not understanding the need for this. I can't believe I'm parroting corporate lobbyists, but this seems like a solution in search of a problem.
It sounds more like a way to take freedom away from people. Commercial systems are designed in such a way that offering that convenience is at the expense of control and ownership. Just because people trade freedoms for this level of ease, doesn't make it right.
Taking control away from you is a corporate decision, not an inherent property of compartmentalization.
It's a bit of a double-edged sword, but it's something we definitely need. Look at projects like Qubes and Secureblue that try to implement this. It solves several issues:
Packaging apps on Linux has been, and always will be, a nightmare. Just giving up and sending whole VMs is basically a variant of what docker does.
Permission Management is also quite necessary and Linux Desktop/DBUS is horrible in that regard. There's recently been a post about this[0]. Especially part 5 is just... GNOME Developers being GNOME Developers...
A lot of Apps also open untrusted files and even run untrusted code. Browsers, PDFs, or Excel Macros? God only knows what kind of exploits and hidden software landmines there are.
And last but not least there's also just badly coded apps that can get pwned from remote sources. Think some game running horrible C++ code connecting peer to peer with random clients. Any of them could easily buffer overflow some random function and take over all your files.
[0] https://blog.vaxry.net/articles/2025-dbusSucks
for the private networking problem, openziti (apache 2.0) is now integrated with Nix:
https://github.com/NixOS/nixpkgs/pull/453502
Yet another reminder that Nix does not sign commits, does not sign reviews, allows any maintainer to merge their own code, does not compile all packages from source, and Hydra admins can absolutely tamper with builds at any time. It is a massive supply chain attack waiting to happen.
The Nix team is aware of all of this and made these tradeoffs intentionally to maximize package support and reduce contributor friction. Nix, for all its good design choices, landed on a supply chain integrity threat model that unfortunately makes it suitable only as a hobby OS that must not be used to protect anything of value.
Guix at least signs commits, but individual maintainers are still trusted so it is not much better, so there really is no production safe nix based package tree I am aware of.
Nothing should advertise itself as secure while being based on nix.
Just because something is popular, does not make it safe.
> The Nix team is aware of all of this and made these tradeoffs intentionally to maximize package support and reduce contributor friction. Nix, for all its good design choices, landed on a supply chain integrity threat model that unfortunately makes it suitable only as a hobby OS that must not be used to protect anything of value.
The risks you list are shared by many distributions; meanwhile NixOS does better on some fronts, particularly its monorepo of open build recipes, SBOM support, and flexible overrides that let security-sensitive use cases limit and control dependencies.
Nonetheless, you list valid limitations, but they aren't inherent.
I'll discuss them below, but note that I don't speak on behalf of NixOS.
> Yet another reminder that Nix does not sign commits, does not sign reviews
I agree we should do this.
> allows any maintainer to merge their own code
The convention is now not to do that. I believe a maintainer recently had their commit bit revoked due to doing this. I don't know why it isn't enforced, but it should be.
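For what it's worth, the "merged their own PR" part is mechanically checkable. A rough sketch of such a check using the public GitHub REST API (the repo name and token handling below are placeholder assumptions), flagging merged PRs that were self-merged or lack an independent approval:

    import os
    import requests

    API = "https://api.github.com"
    REPO = "NixOS/nixpkgs"  # placeholder: any repo with PR-based merges
    HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

    def self_merge_violations(pr_number: int) -> list[str]:
        """Return policy violations for a merged PR: self-merge, or no independent approval."""
        pr = requests.get(f"{API}/repos/{REPO}/pulls/{pr_number}", headers=HEADERS).json()
        reviews = requests.get(
            f"{API}/repos/{REPO}/pulls/{pr_number}/reviews", headers=HEADERS
        ).json()

        author = pr["user"]["login"]
        merger = (pr.get("merged_by") or {}).get("login")
        approvers = {r["user"]["login"] for r in reviews if r["state"] == "APPROVED"}

        violations = []
        if merger == author:
            violations.append(f"PR #{pr_number}: merged by its own author ({author})")
        if not (approvers - {author}):
            violations.append(f"PR #{pr_number}: no approval from anyone other than {author}")
        return violations

It would not stop sock-puppet reviewers, which is the harder identity problem, but it would at least turn the convention into a hard rule.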
> does not compile all packages from source
The vast majority are, and the exceptions are odd cases:
* firefox-bin, where some people prefer Mozilla's build. A source-built alternative "Firefox" is available too.
* firmware stuff
* Proprietary software, e.g. factorio.
* I'm not familiar with the Haskell bootstrapping case you mention in another comment, but if ghc can't be bootstrapped, are you suggesting that GHC shouldn't be available, or that a binary GHC should compile GHC from source? I agree that would be nice to have and I'm just clarifying the issue here.
> Hydra admins can absolutely tamper with builds at any time
I believe build reproducibility is required to mitigate this risk. That is a useful property that OSS should have, but the reality is that no distribution has it, since so many packages have non-determinism.
Is there a distro that does this well? (I know Debian has spearheaded this, but they too have remaining build reproducibility issues, and so presumably have similar risks.)
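For concreteness, the core of a reproducibility check is just building the same thing twice in clean environments and comparing content hashes. A generic sketch of the idea (not how Hydra or Debian's rebuilders actually work; build_cmd is assumed to take the output directory as its final argument):

    import hashlib
    import os
    import subprocess
    import tempfile

    def tree_digest(root: str) -> str:
        """Hash every file's relative path and contents under root into one digest."""
        h = hashlib.sha256()
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames.sort()
            for name in sorted(filenames):
                path = os.path.join(dirpath, name)
                h.update(os.path.relpath(path, root).encode())
                with open(path, "rb") as f:
                    h.update(f.read())
        return h.hexdigest()

    def is_reproducible(build_cmd: list[str]) -> bool:
        """Run the same build twice into fresh directories and compare results.

        Real rebuilder setups also vary build time, hostname, and filesystem
        ordering to flush out hidden sources of non-determinism.
        """
        digests = []
        for _ in range(2):
            out = tempfile.mkdtemp(prefix="repro-")
            subprocess.run(build_cmd + [out], check=True)
            digests.append(tree_digest(out))
        return digests[0] == digests[1]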
> The convention is now not to do that. I believe a maintainer recently had their commit bit revoked due to doing this. I don't know why it isn't enforced, but it should be.
Unless you actually know who is committing code and who is reviewing it, with 100% mandated commit and review signing and well vetted maintainer keys, anyone can trivially make a PR under a pseudonym and then merge their own code from their maintainer identity. In effect there is no way to know or enforce who is merging their own code without the hard work of long lived maintainer identity key management that most distros other than Nix and Alpine choose to skip.
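To make the "well vetted maintainer keys" part concrete, here is a minimal sketch of the kind of audit that mandatory signing enables (it assumes GPG-signed commits and a hand-maintained allowlist of maintainer key fingerprints; %H, %G? and %GF are standard git log format codes, and gpg must already know the maintainer public keys for the status to come back good):

    import subprocess

    # Placeholder allowlist: fingerprints of vetted maintainer keys.
    TRUSTED_FINGERPRINTS = {
        "AAAA1111BBBB2222CCCC3333DDDD4444EEEE5555",
    }

    def unsigned_or_untrusted(rev_range: str = "origin/master..HEAD") -> list[str]:
        """List commits that are unsigned, badly signed, or signed by a key
        outside the vetted maintainer set.

        %H = commit hash, %G? = signature status ("G" means good),
        %GF = fingerprint of the signing key.
        """
        out = subprocess.run(
            ["git", "log", "--format=%H %G? %GF", rev_range],
            check=True, capture_output=True, text=True,
        ).stdout
        bad = []
        for line in out.splitlines():
            commit, status, *fpr = line.split()
            if status != "G" or not fpr or fpr[0] not in TRUSTED_FINGERPRINTS:
                bad.append(commit)
        return bad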
I am the one who submitted an RFC to Nix to fix this; it was ultimately rejected on the grounds that it would increase developer friction too much. In that moment Nix chose to favor drive-by, unvetted hobby contributions over security, and thus decided Nix was never going to be useful for high risk applications.
What offends me about Nix is that it skips all this hard work other distros do, for hand-waving reasons, and has teams of paid consultants charging high risk clients money to integrate Nix without disclosing that they are opening their clients up to any of thousands of people having the power to backdoor their servers with low chance of swift detection. Worse, most Nix maintainers I talk to do not even understand these risks or how other distros solve them.
If nix wants to be a hobby distro, fine, but put some giant warnings on the tin so people can give informed consent for these major security tradeoffs.
I actually believe this trend of over-promising security in Nix is going to get people hurt.
> The risks you list are shared by many distributions; meanwhile NixOS does better on some fronts
I totally grant it is better on some of these risks; however, it is also worse than classic distros on other fronts, lacking the maintainer signature and web of trust requirements of most OG distros like Debian. Even Guix, forked from Nix with a very tiny team, gets at least this much right.
The excuses are just not acceptable for this major security regression in nix given the types of high risk things it was encouraged to be used for. It is like a new hospital that decided to not do basic sanitation because it would make it easier to hire.
To be clear, I would also not recommend Debian or any distro that places trust in single individuals for production use in high risk applications.
Stagex (which I founded, so all bias on the table) was created because, in my opinion, no existing distro was willing to hit a supply chain security bar I was comfortable recommending for high risk applications.
It is container native, 100% full source bootstrapped, and deterministic, with every commit signed by one maintainer, every review/merge signed by a different maintainer, and every artifact built and signed with matching hashes by at least two maintainers. As of our next release we will be LLVM native. It also relies on the OCI standard instead of making up our own format, which means dramatically less code to audit, and you can use any OCI compatible toolchain to build any package (though our scripts support docker for now). It also means any individual can sit down and review our entire tree in a few hours, due to how succinct it makes things.
https://codeberg.org/stagex/stagex/src/branch/staging#compar... can offer a high level comparison across distros in some of the areas we feel matter most.
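The "matching hashes by at least two maintainers" requirement boils down to a check like the following before anything is published. This is a simplified sketch; the attestation file format here is hypothetical, not stagex's actual tooling, and verifying the signature on each attestation is omitted:

    import json

    def digests_agree(attestation_paths: list[str]) -> bool:
        """Require at least two distinct maintainers to have published the same
        artifact digest before treating the artifact as releasable.

        Each attestation file is assumed (hypothetically) to be JSON like:
          {"maintainer": "alice", "artifact": "busybox", "digest": "sha256:..."}
        """
        records = []
        for path in attestation_paths:
            with open(path) as f:
                records.append(json.load(f))
        maintainers = {r["maintainer"] for r in records}
        digests = {r["digest"] for r in records}
        return len(maintainers) >= 2 and len(digests) == 1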
Stagex was built to satisfy a threat model of trusting no single maintainer or machine, for high security use cases.
We do indeed reject any packages that currently cannot be built this way: the stagex release process does not allow publishing artifacts that cannot be full source bootstrapped and built deterministically by multiple maintainers on hardware they independently control. This means we reject packages we cannot safely distribute, like Haskell and Ada, while giving the best available supply chain security for the most popular languages we do support.
Haskell and Ada unfortunately do not currently have any way to be built without centralized trust, but efforts are underway in both cases, which we will adopt when complete. Any exception is a place malware can hide, so no exceptions are permitted.
What is the catch? We were willing to ignore desktop use cases, at least for now, in order to hit a high security bar for software build use cases. As such we have dramatically fewer packages, which allows us to hold this line. We are primarily a supply chain security focused build distribution, though we are used to build a number of other specialized distros like Talos Linux and AirgapOS. The latter runs on laptops but is very minimal, for use cases like cryptographic key management.
We will also probably never have as many packages as Nix, but for the overwhelming majority of organizations, a trusted toolchain to build their production services is sufficient to eliminate all forms of Linux-distribution-level supply chain attack by any single compromised maintainer or machine.
which packages are not built from source?
Just a couple of examples off the top of my head that I have bumped into: packages that cannot be full source bootstrapped, like Haskell, are allowed, so total trust is placed in third party compiler binaries. Also, in cases like qemu where binary blob firmware is in the repo, it is kept as-is and not rebuilt from source. Determinism is also not mandated, so there is no way to know whether any of the non-deterministic packages were faithfully built from source. There are no hard enforced rules in cases like these, only cultural guidelines that are followed optionally.
Compare to e.g. stagex which I founded specifically because nix did not wish to adopt a strict threat model that trusts no single individual, build machine, or third party binary.
Sublime Text, for example[0]: the source is closed, so what else is there to do?
[0]: https://github.com/NixOS/nixpkgs/blob/76701a179d3a98b07653e2... (does a fetch URL against the pre built .tar.gz from https://download.sublimetext.com)
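For closed-source upstreams like that, the most the packager can do is pin the exact vendor artifact by hash, which is essentially what that fetchurl call is doing. A rough sketch of the idea in Python (the URL and hash below are placeholders, not the real Sublime values):

    import hashlib
    import urllib.request

    # Placeholders: a real package pins the exact vendor URL and its known-good hash.
    URL = "https://download.sublimetext.com/sublime_text_build_XXXX_x64.tar.xz"
    EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def fetch_pinned(url: str, expected_sha256: str, dest: str) -> str:
        """Download a prebuilt vendor tarball and refuse it unless the hash matches.

        This does not make the binary trustworthy (nobody can rebuild it from
        source); it only guarantees everyone gets the same bytes the packager
        reviewed and hashed.
        """
        data = urllib.request.urlopen(url).read()
        actual = hashlib.sha256(data).hexdigest()
        if actual != expected_sha256:
            raise RuntimeError(f"hash mismatch: expected {expected_sha256}, got {actual}")
        with open(dest, "wb") as f:
            f.write(data)
        return dest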