I’ve been running an Incus cluster of 3 fairly beefy servers for about a year now. It’s my go-to recommendation for anyone wanting to set up a new virtualized environment.
One of my favorite features is how you can tag different cluster members for different architectures. In the same cluster, I can have traditional dual-socket x86 servers with a dozen DIMM slots as well as Raspberry Pis. The architecture tagging lets me pin ARM-based container workloads to the Pis, or opt to run them via QEMU on the x86 platforms if that makes more sense in a particular scenario. Since I deal with a lot of embedded firmware, this makes for a nice, flexible platform.
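If anyone wants to try it, one way to express this kind of placement is cluster groups plus launch targeting; a minimal sketch (group, member, and image names are made up):

    # group the ARM members together
    incus cluster group create pi-group
    incus cluster group assign pi1 pi-group
    # pin an instance to that group at creation time
    incus launch images:debian/12 arm-worker --target=@pi-group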
Stephen Graeber is also a long-time contributor to the LXC project, and his reasoning behind this fork and other changes is quite sound. I hope the project sees continued success. Stephen’s business model of offering consulting services for Incus systems also seems sound.
* Stéphane Graber.
What hypervisor environments don't have this?
I used Proxmox for years to run a fairly comprehensive homelab, and a few months ago replaced the entire thing with Incus (on a Debian host, haven't tried IncusOS yet). Incus is amazing and it makes so many things so much easier compared to Proxmox.
One thing in particular is permissions in unprivileged containers. In Proxmox, you have to do a bunch of somewhat confusing ID mapping. In Incus, it's as simple as setting "shift=true".
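For reference, a minimal example of the `shift=true` approach on a shared host directory (names and paths are placeholders):

    # share a host directory into an unprivileged container;
    # Incus handles the UID/GID shifting for you
    incus config device add mycontainer data disk \
        source=/srv/data path=/mnt/data shift=true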
Also the profile system in Incus is really powerful and allowed me to deduplicate a ton of config.
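As a rough sketch of the deduplication (profile name and limits are illustrative):

    # common settings live in one profile...
    incus profile create small
    incus profile set small limits.cpu 2
    incus profile set small limits.memory 2GiB
    # ...which many instances can stack on top of the default
    incus launch images:debian/12 web1 -p default -p small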
Can Incus do regular VMs too, or only LXCs? I think I looked at it before but wrote it off because I still have some workloads that have to be in VMs.
It can do VMs, "system containers" like LXC, and Docker/OCI-compatible application containers.
There was a project to implement a docker-compose compatible "incus-compose", but unfortunately it looks dead right now.
You can even set up a kubernetes cluster entirely composed of containers: https://github.com/lxc/cluster-api-provider-incus
Yes, it can do both. The image server builds both variants where possible, so you have to specify `--vm` on the command line when creating the instance (otherwise you get a container).
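e.g., assuming the standard images: remote:

    incus launch images:debian/12 c1        # container (the default)
    incus launch images:debian/12 vm1 --vm  # virtual machine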
Incus is more comparable to LXD than Proxmox. IncusOS is different though.
LXD containers also are unprivileged by default.
Incus is specifically an LXD fork.
Incus was an LXD fork in the very beginning, but it's evolved a lot since then. Incus is far superior to LXD in a number of ways.
You might be mixing up LXC and LXD
From Incus main page:
> The Incus project was created by Aleksa Sarai as a community driven alternative to Canonical's LXD. Today, it's led and maintained by many of the same people that once created LXD.
The confusion is real
Even I, having worked with this tech for a long while, would mix them up time and again; I think it's understandable.
No, LXD’s LXCs. I use it and it’s good.
The UID mappings are correctly set up in Ubuntu, so the containers run unprivileged by default.
I hear Incus, a fork of LXD, is better. It’s used in TrueNAS.
Interesting. Is there anything else that is better than proxmox? Like performance etc?
Besides VMs and LXC/Proxmox-style containers, it can also run docker containers out of the box.
I like Harvester by SUSE.
Profiles are really great. It's like cloud-init on steroids
Nice! Reminds me of Galos: https://github.com/ascension-association/galos
I've been using Incus containers (not VMs) for running tests against a "real" OS and it's been an absolute game changer for me. It's granted me the ability to simultaneously spin up and down a plethora of fresh OSes on my local dev machine, which I then use as testing targets for components of my codebase that require Docker or systemd. With traditional containers, it's tricky to mimic those capabilities as they would exist on a normal VM.
Because both my project and Incus are written in Go, orchestrating Incus resources in my test code has been pretty seamless. And with "ephemeral" containers, if things start to get out of hand, I just need to stop the container to clean it up. Much easier than the usual two-step stop-then-delete process.
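For context, the cleanup flow looks roughly like this (image and instance names are placeholders):

    # --ephemeral means the container deletes itself on stop
    incus launch images:ubuntu/22.04 testbox --ephemeral
    # ...run tests against testbox...
    incus stop testbox   # stop + delete, one step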
Looking forward to seeing what's to come in IncusOS!
I use Incus to pass a containerized Kali OS the Wayland and X11 sockets, and whatever else may be in the /run/user/1000 folder and the X11 socket folder, like PipeWire. It isn't perfect, but it's really nice spawning a shell/bar/etc. inside the container and having it go over the current Wayland desktop. Then I am able to use it to spawn other graphical apps. It works really well. Incus is amazing, as are LXC and Wayland in general.
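For the curious, the device setup is roughly the following; exact paths and names will vary with your setup:

    # pass the host's XDG runtime dir (Wayland, PipeWire sockets) into the container
    incus config device add kali xdg disk \
        source=/run/user/1000 path=/run/user/1000 shift=true
    # and the X11 socket directory
    incus config device add kali x11 disk \
        source=/tmp/.X11-unix path=/tmp/.X11-unix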
Really excited to try this out. I have a fleet of containers on Ubuntu + Incus. Not only does this do ZFS optimization, but I also look forward to having easy container-optimized backup, live cluster migration (to a different machine without downtime), and so much more.
I use Proxmox on fat servers, but for a homelab-like setup, IncusOS seems more like a sweet spot.
I was hoping for easy backup via zfs send as well, but turns out it’s not so easy atm.
IncusOS does not give you shell access; you have to figure out the IncusOS way of doing things via its CLI/API. I haven't found an easy way to do an incremental backup of the whole system yet. You can back up individual instances/volumes via incus export (which seems to use zfs send under the hood), but not the whole thing.
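Per-instance backups do work, to be fair, e.g. (instance and file names are placeholders):

    # export one instance to a tarball, using the storage
    # driver's optimized format (zfs send for ZFS pools)
    incus export myinstance backup.tar.gz --optimized-storage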
I have mixed feelings about their decision not to give you shell access. Guess those who want flexibility can always just install Incus on top of any Linux they like, but it would be nice to have an escape hatch for when IncusOS gives you almost everything you want…
I occasionally contemplate that, if I were designing an OS meant to be sort-of-immutable (like IncusOS, Fedora Silverblue, etc., or macOS), I would probably build it like this:
The main filesystem is verified and immutable. Everything that isn't configuration or the user-controlled payload is genuinely read-only, and the system will even cryptographically verify it on boot or first use. You cannot modify /bin/bash, etc.
If you want to test a modification, you can configure an overlay, and you can boot with that overlay live. You can configure the overlay to also be immutable or you can make the overlay mutable. But the choice of booting into the overlay is controlled by code that cannot be overlaid, so you can always turn the overlay off no matter how much you screw it up.
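Concretely, I'd imagine the mechanics looking like plain Linux overlayfs, with the verified image as the read-only lower layer (paths hypothetical):

    # boot-time code (itself not overlaid) mounts a writable
    # overlay on top of the verified read-only image
    mount -t overlay overlay \
        -o lowerdir=/sysroot,upperdir=/overlay/upper,workdir=/overlay/work \
        /merged
    # turning the overlay off is just not mounting it on the next boot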
The user may get root access, but if your system is remotely attested or uses a TPM or such for security, then that policy will find out if you do so before you can do anything as root. So you can shell in and attach a debugger to a system service, but you cannot do that and also pretend to your orchestration tools that you have not done so.
The default configuration is mostly empty. When you change a default, you are not modifying the middle of a giant plist where no one will ever understand what happened. You only create new configuration, and deleting it is just fine.
The result would, I think, give system owners plenty of ability to hack on their own systems, but they could also unhack their systems easily. There are very few systems out there with both of these properties right now...
Check out SmartOS. It's illumos/Solaris-based, but I think you'll find it's a nice middle ground. Not as abstracted; nice tooling that makes common tasks simple, but not so opinionated that you have to de-abstract things to get under the hood. Not painless, but what is?
Sorry to say, but that is bad advice. SmartOS is great and it was cool tech, but it is not Linux, and it doesn't act like Linux in certain scenarios.
My favourite example is OOM: Linux will kill your Docker container; SmartOS locks it up and makes it super hard to understand why it failed.
I like SmartOS, but I have painful memories from about a decade ago.
Incus, however, is what's in use now on Linux.
In case there might be people who are not familiar with Incus, it was forked from LXD to keep it open source. It's very good software.
AFAIK LXD is still open source, as are most if not all products from Canonical. I think the fork happened because, when LXD was moved under Canonical, the community grew uneasy about the way it would be integrated with the Ubuntu lifecycle and tooling.
https://github.com/canonical/lxd (it's AGPLv3)
It's indeed still open source, but it was moved from Apache 2.0 to AGPLv3, and from having no requirements on contributions to requiring that all contributors sign a CLA.
So it's definitely still open source, but the changes they made allow them to look at and import any change from Incus that they wish, whilst preventing us from looking at any LXD code without risk of tainting ourselves...
Thanks for the correction. I kind of remember this as a more hostile move by Canonical at the time, but the fork announcement from that time doesn't support that view either. Perhaps I am misremembering the little I do remember.
https://discuss.linuxcontainers.org/t/introducing-incus/1778...
My number one reason for moving away from using LXD in production after this change is that LXD is only available through snap, which caused multiple downtimes in the cluster because of the forced updates.
Exactly. And depending on whether you install it with snap or another package manager, like pacman on Arch, it'll actually use different folders for configs, so if you're writing automation to, say, automatically manage remotes without relying on the CLI, you'll have to account for that. Better to just use Incus whenever possible.
It seems to suffer from a chicken and egg problem. To get an image you are supposed to run `incus remote get-client-certificate` to put into the "image customizer", and you cannot generate an image without it. So how do you get started?
You can download the CLI client for Linux, Windows and macOS from our GitHub releases: https://github.com/lxc/incus/releases/latest/
I've filed https://github.com/lxc/incus-os/issues/551 which we should be able to sort out later today.
Perhaps add installation instructions to the README? Most people already know they need the binary to run that command. For those who don't, I don't recommend babying them, because next thing you know, they've downloaded the wrong binary and it doesn't run.
Not _technically_ a hypervisor since these are Linux (system) containers and use the same cgroup magic under the hood as docker/containerd.
But this is definitely neat. I've found Incus quite handy for development environments, and a good complement to Docker.
You can also start QEMU/KVM powered VMs with Incus, I assume that's also possible with IncusOS?
Yes. And most importantly, the Incus API and CLI client (which uses the API) present a consistent management language for system containers (the default ones, with an init/systemd-controlled userspace), OCI containers (unpacked, not layered), and VMs. Well, as consistent as makes sense for each; there are a number of options/properties specific to each type, but it feels very consistent.
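For illustration, the same launch verb covers all three (the OCI remote setup follows the Incus docs; instance names are made up):

    incus launch images:debian/12 sys1       # system container
    incus launch images:debian/12 vm1 --vm   # virtual machine
    incus remote add docker https://docker.io --protocol=oci
    incus launch docker:nginx app1           # OCI application container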
The Incus server inside IncusOS is the same software. The difference is as little userspace as possible alongside it (not even busybox).
Incus supports both QEMU and LXC.
It's a lot more than that. Clustering, storage drivers, networking, etc. make up a whole virtual machine manager. It never says it's a hypervisor; it's a VMM, as described on its GitHub: "Powerful system container and virtual machine manager".
Incus is very nice and super featured, but it suffers from a few issues, namely unintuitive/hard onboarding and bad defaults, which make giving people access annoying: it requires teaching them first, and they can't just make a VM with a few clicks immediately. Authentication and user-control options are also limited; for instance, without external auth, users must exist on the underlying system, and the limited but very strict auth options currently require a full domain and no proxying (might get partially fixed later).
And finally, it tracks upstream hard, i.e. canonical/lxd(-ui), meaning they won't really make any changes that LXD wouldn't, and are thus tied to it :(
Is there such a thing as DIMM modules with ROM chips? It would be useful for some applications to be able to burn the immutable OS into a read only memory as a form of tamper-resistance in key infrastructure.
There are CPUs with fuses that can store keys, e.g. Intel Boot Guard. The tooling for using it as an end user is not friendly, to say the least.
You can set your own Secure Boot keys. The history of outrageous security vulnerabilities that break it is long and storied. The underlying architecture is abysmal.
I guess IncusOS (and Incus) achieve similar goals to Proxmox? Has anyone used both, and have any opinions on how they perform?
I have switched to Incus and it's really great. It's lightweight, has a working Terraform provider, an easy-to-use CLI, pre-built images (LXC and VM) of major distros (while in Proxmox you have to create templates all the time for VMs), runs on any distro (on Proxmox you're stuck with Debian), nice clustering, support for a bunch of storage drivers (dir, btrfs, ceph, zfs), a simple web UI, and an active community. The project leader is also very active and helpful, while the Proxmox side is a little unresponsive. You can even install the `incus-base` package, which contains only the LXC-specific components, for running just LXC containers.
I have noticed Incus has better security configs by default. For instance, all pre-built images come with Secure Boot enabled, and there are ACLs that are easy to configure for fine-grained network rules. The only downside, I feel, is the lack of something like PBS (Proxmox Backup Server).
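The ACLs are pleasantly terse, something like this (ACL name, port, and bridge are illustrative):

    # allow only SSH into instances on the default bridge
    incus network acl create web-rules
    incus network acl rule add web-rules ingress \
        action=allow protocol=tcp destination_port=22
    incus network set incusbr0 security.acls web-rules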
I love the ability to directly run docker containers.
I think their approach to authentication / authorization is insane (not in a good way).
Though IncusOS itself is based on Debian, so for the first point against Proxmox, I guess using Incus on your OS of choice would be better?
IncusOS is different. You can use incus itself on all the major distros: https://linuxcontainers.org/incus/docs/main/installing/
It's software for a private cloud, and it can migrate your legacy VMware setup over to IncusOS.
My home lab is not a private cloud and I don't use VMWare.