I played with this a bit today. Only downside is, no easy way to update containers yet. But on the other hand, no more dealing with macvlan or custom docker networks.
By “update”, I assume you mean “recreate with new image”?
I think Docker itself doesn’t support that.
I use Docker compose to recreate containers with a new image regularly.
I'm sure you could be creative with volumes in Proxmox and build a new LXC container from a new OCI image with the old volumes attached.
> I use Docker compose to recreate containers with a new image regularly.
Try doing so without the compose file, though.
With podman it's just `podman auto-update`, which will pull the latest version of the image down.
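For anyone trying this: podman only auto-updates containers that opt in via a label and are driven by systemd units, so a rough sketch (the image name and unit handling here are just illustrative) looks like:

```sh
# Opt the container into auto-updates via a label
# (docker.io/traefik/whoami is just an example image).
podman run -d --name whoami \
  --label io.containers.autoupdate=registry \
  docker.io/traefik/whoami:latest

# auto-update needs the container to be run from a systemd unit;
# older podman versions can generate one, newer ones prefer Quadlet files.
podman generate systemd --new --files --name whoami
# ...then install and enable the generated container-whoami.service

# Preview, then apply: pulls newer images and restarts the matching units.
podman auto-update --dry-run
podman auto-update
```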
Not too hard. The original run command is stored if you inspect a running container.
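If it helps, most of it can be dug back out of `docker inspect` (the container name below is a placeholder):

```sh
# "myapp" is a placeholder container name; these template paths pull out
# the pieces you'd need to rebuild the original run command.
docker inspect myapp --format '{{.Config.Image}}'                  # image
docker inspect myapp --format '{{json .Config.Env}}'               # env vars
docker inspect myapp --format '{{json .HostConfig.Binds}}'         # volumes
docker inspect myapp --format '{{json .HostConfig.PortBindings}}'  # ports
# Tools like runlike automate turning this back into a `docker run` line.
```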
That's true, isn't it? It was one of those features you'd think they would have figured out, but no.
The idea is that your container image is the thing you want, and is (relatively) immutable, so you delete and create containers when you want things to change. If you need state you can do that with volume mounts, but the idea is that you don't need to 'update' a container, you just replace it with a new one.
That's also what docker compose does, under the hood. It doesn't 'update' a container, it just deletes it and recreates it with the new image and the same settings/name/ports/volumes/etc.
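A rough sketch of that replace-not-update flow, assuming a hypothetical compose.yaml with a bind-mounted data directory:

```sh
# compose.yaml (hypothetical):
#   services:
#     web:
#       image: nginx:1.27
#       ports: ["8080:80"]
#       volumes: ["./data:/usr/share/nginx/html"]   # state lives outside

docker compose pull      # fetch newer images
docker compose up -d     # any service whose image or config changed gets
                         # stopped, removed, and recreated; the mounted
                         # state is simply attached to the new container
```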
Isn't the ability to do blue/green deployments, canary releases, and easy rollbacks a huge incentive to use containers?
I think virtually nobody cares about being able to change the image of a container when you can so easily start a new one.
People figuring out how to use containers as pets.
It's unclear to me why running Docker directly in Proxmox (it's just Debian) and using it like any other Docker host is a bad idea, and why this extra layer of abstraction is preferable.
Docker has security issues if you're not careful, and it's frankly kind of a shitshow out of the box with defaults. Maybe that's part of the reason. But I struggle to see how a bespoke solution like this is the right answer.
Proxmox is a hypervisor OS, and its value comes from its virtualization and container-management features. These features include being able to pause, snapshot, backup/restore, and live-migrate VMs or LXCs to another server with just a couple hundred milliseconds of downtime. Once you run Docker on the hypervisor itself, you lose these features, defeating the purpose of running Proxmox in the first place.
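For concreteness, this is roughly what those host-level features look like from the Proxmox CLI (the VMIDs and node name are placeholders):

```sh
qm suspend 100                               # pause a VM
pct snapshot 101 pre-upgrade                 # snapshot an LXC container
vzdump 101 --mode snapshot --storage local   # back it up
qm migrate 100 pve2 --online                 # live-migrate to node "pve2"
```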
There's also the security angle. Containers managed by Proxmox are strongly isolated from the host, but installing Docker on the host bypasses this security. Docker is not insecure by design, but it greatly increases the attack surface. If the hypervisor gets compromised, the entire cluster of servers will also get compromised. In general, as little software as possible should be installed onto the host.
It's a kind of apples vs pears problem:
You have a bunch of tooling that deals with apples. You have a clear conceptual picture of what an apple is and what it does.
Then someone brings you a pear. It's kind of like an apple but not exactly. Their pear, however, works well with some other toolscape that's beyond the shire. You want to do things with their pears.
You invent a way to put a pear inside an apple (docker in VM). That works but you lose some functionality and break some stuff in the conversion, plus now you don't have the clean conceptual integrity of your apple-only system.
This is a way to transform a pear into an apple.
Largely management, observability, and the way that Docker mucks with firewalls. Running them this way will allow Proxmox to handle all of that in the same way (I assume) as the LXCs and VMs, so automation and all the rest can be consistent.
I've been running Docker natively on the host since Proxmox 7. The only major problem was an iptables rule that I had to add so that the containers are accessible from outside. Besides that, it runs smoothly.
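I don't know which rule that was, but the classic conflict is Docker setting the FORWARD chain policy to DROP, which can also cut off bridged guests; a common workaround looks something like this (vmbr0 is an assumption):

```sh
# Let traffic crossing the Proxmox bridge through Docker's FORWARD hook.
iptables -I DOCKER-USER -i vmbr0 -o vmbr0 -j ACCEPT
```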
This is something I've always loved about Unraid. The whole apps/containers ecosystem is so well done.
They are converted to LXC images and then run. No compose file either. Still pretty neat.
I have an "error" "I am not a teapot"
719 - I am not a teapot Espresso Web (Red Hat Enterprise Linux) at raymii.org
Looks suspicious, ... not 418, 719.
I think 418 is 'I am a teapot' so it would not be correct to use it in your case. 719 must be a typo though, perhaps it should be 419.
Haha, this was funny. https://datatracker.ietf.org/doc/html/rfc2324
Is this similar to what FlyIO is doing? Running containers as microVMs?
Perhaps in spirit? But I don't think you can call LXC a microVM, and I doubt LXC containers start anywhere near as fast as Firecracker or smolbsd and similar ilk. EDIT: it appears I am probably wrong about Firecracker being faster, as LXC containers share the host kernel and likely start up faster than microVMs.