> As an aside... Because one node didn't start, and my Proxmox cluster has only two nodes, it can't reach quorum, meaning I can't really make any changes to my other node, and I can't start any containers that are stopped. I've recently added another Zigbee dongle that supports Thread, and it happens to share the same VID:PID combo as the old dongle, so due to how these were mapped into the guest OS, all my light switches stopped working. I had to fix the issue fast.
Lesson in here somewhere. Something about a toaster representing the local intelligence maximum?
The lesson is use dumb light switches and have a shotgun ready if the printer starts to act up.
Also regularly print out sheets of electronic recycling facts to remind the printer of its place.
At least I was laughing at the Cloudflare oopsie, since all my light switches (et al.) are local. Unlike those people with a fancy smart bed that went into a W shape because it couldn't talk to AWS.
Lesson 1: clusters should have an odd number of nodes.
Originally I was planning on building the NAS with just the Minisforum MS-01, but TrueNAS and USB enclosures do not play well together.
So I went for the AOOSTAR NAS mini PC as a "proper" solution. Ended up with two machines, so why not join them into a cluster!
Probably can chuck Proxmox on a RasPi somewhere, just for quorum purposes :)
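For what it's worth, Proxmox supports exactly this via a corosync QDevice, so the Pi doesn't even need a full Proxmox install, just the qnetd daemon. A rough sketch of the setup (the Pi's address here is a placeholder):

    # on the Raspberry Pi (plain Debian/Raspberry Pi OS is fine)
    apt install corosync-qnetd

    # on each Proxmox node
    apt install corosync-qdevice

    # on one Proxmox node: register the Pi as the external tie-breaker vote
    pvecm qdevice setup 192.168.1.50

    # verify the extra vote is counted
    pvecm status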
I really, really think there are better lessons there. Maybe more like "Lesson 0. Don't put distributed clusters in control of your light switches"
Two-node / even-node clusters can work fine.
For even n>2 you define a tie-breaker node in advance, and only the partition connected to that node can form a quorum at exactly 50%. For n=2, going from no quorum to quorum requires both nodes, but losing a node doesn't lose quorum: when you lose a node you stop, shoot the other node, and continue. For split brain, the fastest draw wins the shootout.
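In corosync terms that two-node behaviour is the two_node / wait_for_all combination, roughly the following in the quorum section of corosync.conf (a sketch only; on Proxmox this file is generated and managed for you):

    quorum {
        provider: corosync_votequorum
        # two-node special case: quorum survives the loss of one node
        two_node: 1
        # but the cluster only becomes quorate once both nodes have been seen
        wait_for_all: 1
    }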
In fairness to Proxmox, that's the recommended way.
Most homelabbers ignore recommendations because if anything breaks nothing of corporate value is lost and no one's gonna lose their job.
Related ongoing thread:
Proxmox virtual environment 9.1 available - https://news.ycombinator.com/item?id=45980005 - Nov 2025 (56 comments)
I recently gave up on Proxmox for my home lab needs after a failed upgrade from 8 to 9. I also never liked the feeling of not having an easy to use API.
I've put off that upgrade as I just don't have the time to fix it if it goes sideways. What did you end up moving to?
Look ma, I'm on the TV. The merch is in the back, sub to the YouTube channel...
It seems like a lot of the pain comes from the fact that hardware passthrough behaves so differently under LXC vs VMs.
Has anyone here found a stable way to handle USB / PCIe device identity changes across updates or reboots?
That part always feels like the weak point in otherwise solid Proxmox setups.
For the most part it's okay to pass through by ID, unless it's some Chinese device, which reminds me of the scene from The Life Aquatic with Steve Zissou:
"- do interns get Glocks? - no, they all share one"
I just use UUID to make sure the mountpoint for each device stays the same across reboots.
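For USB serial dongles specifically, the /dev/serial/by-id/ symlinks, or a custom udev rule keyed on the serial number, sidestep the VID:PID collision described above. A sketch, with the serial string and container ID invented for illustration:

    # /etc/udev/rules.d/99-zigbee.rules
    SUBSYSTEM=="tty", ATTRS{serial}=="DEADBEEF1234", SYMLINK+="zigbee0"

    # /etc/pve/lxc/101.conf - bind the stable name into the container
    # (188 = ttyUSB major; use 166 instead for ttyACM devices)
    lxc.cgroup2.devices.allow: c 188:* rwm
    lxc.mount.entry: /dev/zigbee0 dev/zigbee0 none bind,optional,create=file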
Btw the issue that the author encountered is not really with Proxmox itself but with an out-of-tree kernel driver they installed.
Any Debian system (Proxmox is based on Debian) would have broken in a similar (if not the exact same) way.
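If the out-of-tree driver is packaged with DKMS (as the Coral gasket/apex driver usually is; the module names here are my assumption, not taken from the article), checking and rebuilding it after a kernel upgrade looks roughly like this:

    # list registered out-of-tree modules and their build status per kernel
    dkms status

    # rebuild everything against the currently running kernel
    dkms autoinstall -k "$(uname -r)"

    # confirm the module actually loads
    modprobe apex && lsmod | grep apex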
Not to mention, Proxmox does not officially support running Docker in an LXC (of course many users still do it); it is not a supported configuration as of now.
Man Proxmox... I love it, I use it, but I swear there has to be a more straightforward way to implement this technology.
I ran into the same issue over the weekend. The end-goal for my Proxmox setup is basically the same deployment you have. It's good to see the issue was addressed quickly by the community.
> Running docker inside LXC is weird.
Knowing when to use a vm and when to use a container is sometimes an opaque problem.
This is one of those cases where a VM is a much better choice.
I can't speak for the author, but they said they have a Coral TPU passed into the LXC and then into the Docker container inside it, which I also have on my Proxmox setup for Frigate NVR.
Depending on your hardware platform, there could be valid reasons why you wouldn't want to run Frigate NVR in a VM. Frigate NVR works best when it can leverage the GPU for video transcoding and the TPU for object detection. If you pass the GPU to the VM, then the Proxmox host no longer has video output (without a secondary GPU).
Unless you have an Intel Arc iGPU, Intel Arc B50/B60, or fancy server GPU, you won't have SR-IOV on your system, and that means you have to pass the entire GPU into the VM. This is a non-starter for systems where there is no extra PCIe slot for a graphics card, such as the many power-efficient Intel N100 systems that do a good job running Frigate.
The reason why you'd put Docker into LXC is that it's the best-supported way to get the Docker engine working on Proxmox without a VM. You'd want to do it on Proxmox because it brings other benefits like a familiar interface, clustering, Proxmox Backup Server, and a great community. You'd want to run Frigate NVR within Docker because that is the best-supported way to run it.
At least, this was the case in Proxmox 8. I haven't checked what advancements in Proxmox 9 may have changed this.
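For reference, the usual way to give such an LXC access to the iGPU and a Coral USB is a handful of lines in the container config; a sketch, where the container ID and the USB bus number are placeholders for your own system:

    # /etc/pve/lxc/105.conf
    # iGPU for VAAPI transcoding (226 = DRM character devices)
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

    # Coral USB (189 = USB character devices); the bus number depends on the port used
    lxc.cgroup2.devices.allow: c 189:* rwm
    lxc.mount.entry: /dev/bus/usb/002 dev/bus/usb/002 none bind,optional,create=dir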
At first I had the unholy abomination that is the Frigate LXC container, but since it's not trivially updatable and breaks other subtle things, I ended up going with Docker. Was debating getting it into a VM, but for the most part, Docker on LXC only gave me solvable problems.
> Unless you have an Intel Arc iGPU, Intel Arc B50/B60, or fancy server GPU, you won't have SR-IOV on your system, and that means you have to pass the entire GPU into the VM.
This is changing, specifically on QEMU with virtio-gpu, virgl, and Venus.
Virgl exposes a virtualized GPU in the guest that serializes OpenGL commands and sends them to the host for rendering. Venus is similar, but exposes Vulkan in the guest. Both of these work without dedicating the host GPU to the guest; they give mediated access to the GPU without requiring any specific hardware.
There's also another path known as vDRM/host native context that proxies the direct rendering manager (DRM) uAPI from the guest to the host over virtio-gpu, which allows the guest to use the native Mesa driver for lower overhead compared to virgl/Venus. This does, however, require a small amount of per-driver support code in virglrenderer. Patches to add this have been on the QEMU mailing list since earlier this year, while crosvm already supports it.
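For the virgl path, the QEMU side is just a paravirtual display device with GL enabled; roughly the following (exact device and display options vary by QEMU version, and Proxmox wires this up through its own VM config rather than a raw command line):

    # paravirtual GPU with virgl (OpenGL commands rendered on the host)
    qemu-system-x86_64 -enable-kvm -m 4G \
        -device virtio-vga-gl \
        -display gtk,gl=on \
        -drive file=guest.qcow2,if=virtio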
I have Frigate and a Coral USB running happily in a VM on an N97. GPU passthrough is slightly annoying (need to use a custom ROM from here: https://github.com/LongQT-sea/intel-igpu-passthru). I think SR-IOV works but haven't tried. And the Coral only works in USB3 mode if you pass through the whole PCIe controller.
I've been debating if I should move my Frigate off an aging Unraid server to a spare mini PC with Proxmox. The mini has an N97 with 16 GB RAM. How many cameras do you have in your Frigate instance on that N97? Just wondering if an N97 is capable of handling 4+ cameras. I do have a Coral TPU for inference & detection.
I have around 6 cameras, mostly 1080p, and about 8 GB RAM and 3 cores on the VM (plus Coral USB and Intel VAAPI). CPU usage is about 30 - 70% depending on how much activity there is. I also have other VMs on the machine running container services and misc stuff.
There are some camera stability issues which are probably WiFi related (2.4 GHz is overloaded) and Frigate also has its own issues (e.g. with detecting static objects as moving) but generally I’m happy with it. If I optimize my setup some more I could probably get it to a < 50% utilization.
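For anyone replicating this, wiring the Coral and VAAPI into Frigate is only a few lines of config; a minimal sketch, with the camera name and RTSP URL invented for illustration:

    # frigate config.yml (fragment)
    detectors:
      coral:
        type: edgetpu
        device: usb              # USB Coral; the M.2/PCIe variants use 'pci'

    ffmpeg:
      hwaccel_args: preset-vaapi # Intel iGPU video decode

    cameras:
      driveway:                  # placeholder camera
        ffmpeg:
          inputs:
            - path: rtsp://user:pass@192.168.1.20:554/stream1
              roles: [detect, record]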
Perfect, thanks. I'll give the N97 a go and put it to good use as a dedicated Frigate NVR box. It certainly has a much lower power draw than my Unraid server.
I'm running thingino cameras off wifi and the stability is kinda meh... Want to try a wired setup with a PoE USB Ethernet adapter...
It's not always better. Docker on LXC has a lot of advantages. I would rather use plain LXC on production systems, but I've been homelabbing on LXC+Docker for years.
It's blazing fast and I cut my RAM consumption by around 60%. It's easy to manage, boots instantly, and allows for more elastic separation while still using Docker and/or k8s. I love that it allows me to keep using Proxmox Backup Server.
I'm postponing homelab upgrade for a few years thanks to that.
Proxmox FAQ calls running Docker on LXC a tech preview and “kind of” recommends VMs. At the very bottom of the page.
https://pve.proxmox.com/wiki/FAQ
> While it can be convenient to run “Application Containers” directly as Proxmox Containers, doing so is currently a tech preview. For use cases requiring container orchestration or live migration, it is still recommended to run them inside a Proxmox QEMU virtual machine.
The way I understand it is that Docker with LXC allows for compute/resource sharing, whereas dedicated VMs will require passing through the entire discrete GPU. So, the VMs require a total passthrough of those Zigbee dongles, whereas a container wouldn't?
I'm not exactly sure how the outcome would have changed here though.
Am I crazy, or is converting a Dockerfile into LXC something that should be possible?
It should in an ideal world, but Docker is a very leaky abstraction IMHO and you will run into a number of problems.
It has improved with newer kernel and Docker versions, but there were problems (overlayfs/ZFS incompatibilities, UID mapping problems in Docker images, capabilities requested by Docker not available in LXC, rootless Docker problems, ...).
In the new Proxmox VE 9.1 release this should be possible, from the changelog:
> OCI images can now be uploaded manually or downloaded from image registries, and then be used as templates for LXC containers.
This seems like a niche issue; I've been running Docker in LXC for years with dozens of images without a problem.
Guessing you are only running a single node though, not a cluster with HA and live migration and all that.
See also "Upgrade from 8 to 9":
* https://pve.proxmox.com/wiki/Upgrade_from_8_to_9
And "Known Issues & Breaking Changes (9.1)":
* https://pve.proxmox.com/wiki/Roadmap#9.1-known-issues
Upgrading Proxmox can be tricky, but it’s rewarding to see improved stability, features, and performance after a successful upgrade.