I run K3s on NixOS as the central piece of my homelab, and it was actually almost too easy to set up (although now that I think about it, there was a gotcha I had to manually tweak in the K3s config). This "Kubernetes on NixOS the hard way" seems very interesting and I will have a look at it via the QEMU image at some point. Thanks for sharing!
Any resources you'd recommend for the k3s+NixOS setup? Been eyeing the same
I followed the official NixOS documentation [1] which is... scarce, to say the least. But it also basically worked just like that.
[1] https://github.com/NixOS/nixpkgs/blob/master/pkgs/applicatio...
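FWIW, the whole thing is roughly this small in the config (a sketch from memory, not my exact setup; double-check the option names against the k3s NixOS module):

    # configuration.nix (sketch, not my exact config)
    { config, pkgs, ... }:
    {
      services.k3s = {
        enable = true;
        role = "server";  # this node runs the control plane
        # extraFlags = toString [ "--disable=traefik" ];  # optional tweaks
      };
      networking.firewall.allowedTCPPorts = [ 6443 ];  # Kubernetes API
      environment.systemPackages = [ pkgs.kubectl ];
    }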
Resetting k3s in NixOS is not that straightforward and requires manual input. It can't be fully automated just by removing the statement from your config, afaik, unless this has changed recently.
Now I don't recall which issue I had (I think it was something related to the CoreDNS config, or to passing through the /etc/hosts of the NixOS node), but I do remember having to touch the K3s YAML directly, and maybe also having issues persisting it. It's actually the only thing I fear would break if I had to reinstall NixOS from scratch...
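If it was the resolv.conf thing, something like this might have kept it declarative instead of hand-editing the generated YAML (a guess at the fix, not what I actually did; --kubelet-arg is a real k3s flag, but the path is an assumption):

    # Guess at a declarative fix: pin the kubelet's resolv.conf via k3s
    # flags instead of editing the generated YAML. The path is an assumption.
    services.k3s.extraFlags = toString [
      "--kubelet-arg=resolv-conf=/etc/resolv.conf"
    ];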
First off, amazing post. I learned a lot about networking, Linux and Kubernetes.
As a learning project, this is absolutely awesome.
I run Kubernetes via Kind on Docker on NixOS.
There's a ton of other ways to get a development environment on your NixOS developer PC.
I don't pretend this one is very good, I just copy what my colleagues have come up with (+ NixOS).
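The NixOS side of it is basically just this (a sketch; "alice" is a placeholder username, packages are from nixpkgs):

    # Sketch of the Kind-on-Docker host config on NixOS.
    { pkgs, ... }:
    {
      virtualisation.docker.enable = true;           # Kind nodes are containers
      users.users.alice.extraGroups = [ "docker" ];  # placeholder; user defined elsewhere
      environment.systemPackages = with pkgs; [
        kind     # runs Kubernetes clusters inside Docker containers
        kubectl
      ];
    }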
For production workloads, I wouldn't run the kubelet using this much custom wiring.
I'd run Talos. It's vastly simpler: you can run the nodes in NixOS VMs, it's declarative, and it lowers the surface area of things that need manual interaction, no SSH'ing in.
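The host side in NixOS terms would be roughly this (a sketch; the Talos VMs themselves boot the Talos image and get their declarative machine configs via talosctl, which is out of scope here):

    # Sketch: NixOS as the hypervisor host for Talos VMs.
    { pkgs, ... }:
    {
      virtualisation.libvirtd.enable = true;  # QEMU/KVM via libvirt
      environment.systemPackages = with pkgs; [
        talosctl  # applies the declarative Talos machine configs
        qemu
      ];
    }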
It seems like the author is torn about where to put control: in NixOS, or in Kubernetes?
You can move stuff, e.g. CoreDNS, out of Kubernetes for a "simpler" setup.
But the point of running workloads inside Kubernetes is that you get redundancy between nodes.
So if a single node dies, your services don't die.
Embracing Kubernetes, I certainly haven't let go of NixOS. My personal servers still just run NixOS.
It's much simpler, much cheaper, and resilient in its own way.
Selling Kubernetes and Cloud Native users on using NixOS, I'd probably go another way, e.g. via dev environments.
Author here.
You're right, it's very much a trade-off, and a matter of preference, where you put control: NixOS or Kubernetes. I'm not so much torn as convinced you always have to weigh the pros and cons.
For CoreDNS specifically, this setup adds CoreDNS to every node, and every node does DNS locally, so there's no redundancy benefit to using a Kubernetes Deployment for CoreDNS. It does become a benefit as soon as you can't have a CoreDNS per node. I guess the obvious downsides to CoreDNS per node are that the cache becomes very spread out in larger setups, and you may end up hammering your API server and upstream DNS servers more.
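In NixOS terms, the per-node CoreDNS is roughly this shape (a simplified sketch using the services.coredns module, not the post's actual Corefile):

    # Sketch: CoreDNS managed by NixOS on each node instead of a
    # Kubernetes Deployment. The Corefile here is illustrative only.
    services.coredns = {
      enable = true;
      config = ''
        . {
          cache 30           # per-node cache, hence the spread mentioned above
          forward . 1.1.1.1  # placeholder upstream
        }
      '';
    };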
I wonder what the author’s development process for all this is like. It’s fascinating seeing the end result, and I certainly learned a ton. However, I’d love to see some of the trial and error process. My own process is starting from a configuration.nix on a node and iterating with `nixos-rebuild switch`, but it still feels like there are better methodologies out there.
Author here. Yeah, unfortunately, that's kinda it: just rebuild a lot. At work we have a custom setup with a build server and agents for provisioning, which is nice for multiple nodes, but also even slower. The QEMU setup in the attached repo was added later and is also handy for testing multiple nodes. QEMU is also nice because you can just trash the disk images to get a clean start.
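One thing that helps the loop: `nixos-rebuild build-vm` builds a throwaway QEMU VM from the same configuration, and the VM variant can get its own settings (a sketch; option names from the NixOS qemu-vm module):

    # Sketch: settings that apply only to the `nixos-rebuild build-vm` VM.
    virtualisation.vmVariant = {
      virtualisation.memorySize = 4096;  # MiB
      virtualisation.cores = 4;
      virtualisation.diskSize = 8192;    # MiB; trash the qcow2 image for a clean start
    };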
Why does NixOS get to the front page?
Because it's a fundamental shift in design away from most other Linux distributions, which is exciting, and perhaps a breath of fresh air for longtime Linux users.
Nix is crazy powerful, it can have all my upvotes.
People here are interested in reading about it. Is there a reason you think it shouldn't?
Someone posts it and people upvote it.