Proxmox (and XCP-ng?) seems to be "the" (?) popular alternative to VMware after Broadcom's private-equity-fueled cash grab.
(Perhaps if you're a Microsoft shop you're looking at Hyper-V?)
Nutanix is popular with traditional larger-enterprise VMware-type customers; Proxmox is popular with the smaller shops and homelabber refugees. Exceptions exist to each, of course.
That people consolidated their business atop VMware's hypervisor, got screwed by Broadcom, and as a result are moving everything to Nutanix (from whom they need to buy the hypervisor, the compute stack, the storage stack, etc.) is insane to me.
Most don't even consider the amounts as getting screwed; it was just enough of a change that on the next refresh cycle it was worth switching to a different provider. For a lot of these places it was only 10-15 years ago that they went from 0 VMs to 80%+ VMs, so they aren't worried about needing to move around, just about the quality of the support contract, etc.
Two days ago I saw a shop that moved to Incus. It seems to be a viable alternative too.
um broadcom is publicly traded as $AVGO...?
So is $KKR:
> KKR & Co. Inc., also known as Kohlberg Kravis Roberts & Co., is an American global private equity and investment company.
* https://en.wikipedia.org/wiki/KKR_%26_Co.
You can have a public company that invests in private companies, as opposed to investing in publicly listed companies (like $BRK/Buffett does (in addition to PE stuff)).
Plenty of people describe Broadcom as "Publicly traded Private Equity"
now that is something I can totally get behind
Talking to midmarket and enterprise customers, nobody is taking Proxmox seriously quite yet, I think due to concerns around support availability and long-term viability. Hyper-V and Azure Local come up a lot in these conversations if you run a lot of Windows (Healthcare in the US is nearly entirely Windows based). Have some folks kicking tires on OpenShift, which is a HEAVY lift and not much less expensive than modern Broadcom licenses.
My personal dark-horse favorite right now is HPE VM Essentials. HPE has a terrible track record of being awesome at enterprise software, but their support org is solid and the solution checks a heck of a lot of boxes, including broad support for non-HPE servers, storage, and networking. The solution is priced to move, and I expect HPE smells blood in these waters; they're clearly dumping a lot of development resources into the product this past year.
I used it professionally back in the 0.9 days (2008), and it was already quite useful and very stable (all advertised features worked). 17 years looks pretty good to me; Proxmox will not go away (neither the product nor the company).
>(Healthcare in the US is nearly entirely Windows based).
This wasn't my experience in over a decade in the industry.
It's Windows dominant, but our environment was typically around a 70/30 split of Windows/Linux servers.
Cerner shops in particular are going to have a larger Linux footprint. Radiology, biomed, interface engines, and med records also tended to have quite a bit of *nix infrastructure.
One thing that can be said is that containerization has basically zero penetration with any vendors in the space. Pretty much everyone in the industry is still doing a pets-over-cattle model.
HPE VM Essentials and Proxmox are just UIs/wrappers on top of KVM/virsh/libvirt for the virtualization side.
You can grow out of either by just moving to self-hosted tooling, or, if you are an automation-focused company that doesn't care about the VMware-like GUI, you can avoid both for the virtualization part entirely.
If we could do it 20 years ago, once VT-x arrived, for production Oracle EBS instances at a smaller but publicly traded company with an IT team of 4, almost any midmarket enterprise could do it today, especially with modern tools.
It is culture, web-UI requirements, and FUD that cause issues, not the underlying products, which are stable today but hidden from view.
Correction: in Proxmox VE we're not using virsh/libvirt at all; rather, we have our own stack for driving QEMU at a low level. Our in-depth integration, especially live local storage migration and our Backup Server's dirty-bitmap support (known as changed block tracking in the VMware world), would not be possible in the form we have it otherwise. Same w.r.t. our own stack for managing LXC containers.
The web UI part is actually one of our smaller code bases relative to the whole API and lower level backend code.
Correct, sorry; I don't use the web UIs and was confusing it with oVirt. I forgot that you are using Perl modules to call QEMU/LXC.
I would strongly suggest more work on your NUMA/cpuset limitations. I know people have been working on it slowly, but with the rise of E and P cores you can't stick to pinning for many use cases, and while I get that hyperconvergence has its costs and platforms have to choose simplicity, the kernel's cpuset cgroup system works pretty well there and dramatically reduces latency, especially for lakehouse-style DP.
I do have customers who would be better served by a Proxmox-type solution, but need to isolate critical loads and/or avoid the problems with asymmetric cores and non-locality in the OLAP space.
IIRC lots of things that have worked for years in qemu-kvm are ignored when added to <VMID>.conf, etc.
PVE itself is still made of a lot of Perl, but nowadays we actually do almost everything new in Rust.
We already support CPU sets and pinning for containers and VMs, but this can definitely be improved, especially if you mean something more automated/guided by the PVE stack.
If you have something more specific, ideally somewhat actionable, it would be great if you could create an enhancement request at https://bugzilla.proxmox.com/ so that we can actually keep track of these requests.
There is a bit of a problem with polysemy here.
While the input for QEMU is called a "pve-cpuset" for affinity[0], it is explicitly using the taskset[1][3] command.
This is different from cpuset[2], or from how libvirt allows the creation of partitions[4], which in your case would be systemd slices.
The huge advantage is that setting up basic slices can be done when provisioning the hypervisor, and you don't have to hard-code CPU pinning numbers as you would with taskset; plus, in theory, it could be dynamic.
From the libvirt page[4]
As cpusets are hierarchical, one could use various namespace schemes, which change per hypervisor, without exposing that implementation detail to the guest configuration. Think of migrating from an old 16-core CPU to something more modern, and how, with hard pinning, all those guests would stay stuck on a fraction of the new cores absent user interaction.
Unfortunately I am deep into podman right now and don't have a Proxmox box at the moment, or I would try to submit a bug.
This page[5] covers how inter-CCD traffic, even on Ryzen, is ~5x the latency of local traffic. That is something that would break the normal affinity if you move to a chip with more cores per CCD, as an example. And you can't see CCD placement in the normal NUMA-ish tools.
To be honest, most of what I do wouldn't generalize, but you could use cpusets with a hierarchy and open up the option of improving latency without requiring each person launching a self-service VM to hard-code the core IDs.
I do wish I had the time and resources to document this well, but hopefully that helps explain more about at least the cpuset part, not even counting the hard partitioning you could do to ensure, say, Ceph is still running when you start to thrash, etc.
[0] https://git.proxmox.com/?p=qemu-server.git;a=blob;f=src/PVE/...
[1] https://git.proxmox.com/?p=qemu-server.git;a=blob;f=src/PVE/...
[2] https://docs.kernel.org/admin-guide/cgroup-v2.html#cpuset
[3] https://man7.org/linux/man-pages/man1/taskset.1.html
[4] https://libvirt.org/cgroups.html#using-custom-partitions
[5] https://kb.blockbridge.com/technote/proxmox-tuning-low-laten...
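For a concrete flavor of the difference, here is a minimal sketch of carving out a cgroup v2 cpuset partition on the hypervisor and dropping a QEMU process into it. The group name, CPU list, and $QEMU_PID are hypothetical, and this is hand-rolled cgroupfs usage for illustration, not anything PVE does today:

    # enable the cpuset controller for child groups (cgroup v2)
    echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control

    # create a "latency" partition owning cores 2-7 (hypothetical layout)
    mkdir /sys/fs/cgroup/latency
    echo 2-7  > /sys/fs/cgroup/latency/cpuset.cpus
    echo root > /sys/fs/cgroup/latency/cpuset.cpus.partition

    # move a running QEMU process into it; no core IDs live in the guest config,
    # unlike taskset-style pinning where the CPU list is baked in per VM
    echo "$QEMU_PID" > /sys/fs/cgroup/latency/cgroup.procs

The point is that the core numbers are a property of the hypervisor's hierarchy, so a guest migrated to a host with a different topology picks up that host's partition instead of its old pin list.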
KVM is awesome enough that there isn’t a lot of room left to differentiate at the hypervisor level. Now the problem is dealing with thousands of the things, so it’s the management layer where the product space is competing.
That's why libvirt was added; it works with KVM, Xen, VMware ESXi, QEMU, etc. But yes, most of the tools like Ansible only support libvirt_lxc and libvirt_qemu today, though it isn't too hard to use for any modern admin with automation experience.
Libvirt is the abstraction API that mostly hides the concrete implementation details.
I haven't tried oVirt or the other UIs on top of libvirt, but targeting libvirt directly seems less painful to me than digging through the Proxmox Perl modules when I hit a limitation of their system; most people may not feel the same.
All of those UIs have to make sacrifices to be usable; I just miss the full power of libvirt/QEMU/KVM for placement and reduced latency, especially in the era of P vs. E cores, dozens of NUMA nodes, etc.
I would argue that for long-lived machines, automation is the trick for dealing with 1000s of things, but I get that is not always true for other use cases.
I think some people may be surprised by how far just targeting libvirt gets you vs. looking for some web UI.
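As a rough illustration of that placement point, a minimal sketch of pinning an existing libvirt guest straight from the CLI; the domain name db01 and the core numbers are hypothetical:

    # pin guest vCPUs and the emulator threads to specific host cores
    virsh vcpupin db01 0 2
    virsh vcpupin db01 1 3
    virsh emulatorpin db01 0-1

    # the same placement can be baked into the domain XML, e.g.:
    #   <vcpu placement='static' cpuset='2-3'>2</vcpu>
    #   <cputune>
    #     <vcpupin vcpu='0' cpuset='2'/>
    #     <vcpupin vcpu='1' cpuset='3'/>
    #   </cputune>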
The only thing missing that makes Proxmox difficult in traditional environments is a replacement for VMware's VMFS (a cluster-aware VM filesystem).
Lots and lots of organizations already have SAN/storage-fabric networks presenting block storage over the network, which were heavily used for VMware environments.
You could use NFS if your arrays support it, but MPIO block storage via iSCSI is ubiquitous in my experience.
The Proxmox answer to this is Ceph - https://ceph.io/en/
> The Proxmox answer to this is Ceph - https://ceph.io/en/
And how does Ceph/RBD work over Fibre Channel SANs? (Speaking as someone who is running Proxmox-Ceph (and at another gig did OpenStack-Ceph).)
Not really; that works if you want to have converged storage in your hypervisors, but most large VMware deployments I've seen use external storage from remote arrays.
Proxmox works fine with iSCSI.
Shared across a cluster of multiple hosts, such that you can hot migrate VMs? I am not aware of that being possible in Proxmox the same way you can in VMware with VMFS.
It's not like VMFS (there's no cluster filesystem); for Proxmox+iSCSI you get a large LVM PV that gets sliced up into volumes for your VMs. All of your Proxmox nodes are connected to that same LVM PV, and you can live-migrate your VMs around all you wish, have HA policies so if a node dies its VMs start up right away on a surviving node, etc.
You lose snapshots (but can have your SAN doing snaps, of course) and a few other small things I can't recall right now, but overall it works great. Have had zero troubles.
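For reference, a minimal sketch of what that looks like in /etc/pve/storage.cfg; the portal address, IQN, LUN path, and volume group name here are hypothetical and your SAN details will differ:

    iscsi: san0
            portal 192.0.2.10
            target iqn.2001-05.com.example:storage.target0
            content none

    lvm: vm-store
            vgname vg_san0
            base san0:0.0.0.scsi-360000000000000000000000000000001
            shared 1
            content images

The shared 1 flag is what tells every node in the cluster that the same volume group is reachable by all of them.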
Watching hypervisors slowly improve over the last few years has been amazing. They aren't quite to the point that I will install them under any new hardware I buy and then put my daily driver OS on top, but they are very close. I think a strong focus on making 'the OS under your OS' experience seamless could open up a lot more here.
VMware has been so good and reasonably priced for so long that there hasn't been a competitive market in the enterprise virtualization space for the past two decades. In a way, I think Broadcom's moves here might be healthy for the enterprise datacenter longer term, it has created the opportunity for others to step in and broadened the ecosystem significantly.
As my main desktop computers I've been using Fedora and Windows (for gaming only), virtualised on top of a single Proxmox host with 2 GPUs passed through, for more than 10 years. Upgraded all the way to the latest versions (guests and hosts) without ever having to reinstall from scratch. I upgraded the hardware a few times (just cloned the disks), and since the desktops are virtualised, Windows always worked fine without complaining about new hardware drivers (the only thing to change was the GPU driver).
Another benefit is block-level backups of the VMs (either with qcow2 disk files or ZFS block storage, which both support snapshots and easy incremental backups of changed block data only).
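As a minimal sketch of the ZFS flavor of that (the zvol name and snapshot labels are hypothetical), an incremental send ships only the blocks changed since the previous snapshot:

    # take a new snapshot of the VM's zvol
    zfs snapshot rpool/data/vm-100-disk-0@backup-2025-11-20

    # send only the delta since the previous snapshot to another pool/host
    zfs send -i rpool/data/vm-100-disk-0@backup-2025-11-19 \
            rpool/data/vm-100-disk-0@backup-2025-11-20 \
        | ssh backuphost zfs receive -F tank/backups/vm-100-disk-0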
Proxmox is great for this, although maybe not on a laptop unless you're ready to do a lot of tweaks for sleep, etc.
I have a PC where I installed Proxmox on bare metal and put a daily-use desktop OS on top. It works surprisingly well; the trickiest part was making sure the desktop OS took control of video/audio/peripherals.
Yup my primary Windows machine is a VM and after passing through all the relevant peripherals (GPU, USB) it’s pretty seamless and you’d never know.
Cool part is I needed a more powerful Linux shell than my regular servers (NUCs, etc.) for a one off project, so I spun up a VM on it and instantly had more than enough compute.
I always thought it might be cool to be able to do this with a laptop.
I thought it might need GPU virtualization?
Do you do it with passthrough?
For many folks' workflows, I'd wager that hypervisors are there and ready. I had a nice time setting up xcp-ng before deciding microk8s fits my needs more betterer; they're just plum good, well documented, and blazing fast.
I think the possibilities are huge in this area. I'd love to see more 'manager' layers that build on top of any 'cloud' system, even a local one, to give you a standard stack that is easy to move. Imagine something that lives at the hypervisor level (that you trust and that is mature) taking control of your various cloud accounts to merge them and make it easy to migrate/leave one provider for another. I know that is the promise of Terraform, but we all want a good, consistent interface to play with and then build the automation tools on top of. Maybe that is a good direction for Proxmox: integrating with cloud providers in a seamless way. Anyway, there's a lot of promise in this area no matter the direction it takes.
I'm not sure I would want my daily driver to be a hypervisor... What's controlling audio? Do I really need audio kernel extensions on my hypervisor? Who's in charge when I shut the lid on my laptop?
But the moment you stop trying to do everything locally, Proxmox, as it is today, is a dream.
It's easy enough to spin up a VM, throw a client's docker/podman + other insanity onto it, and have a running dev instance in minutes. It's easy enough to work remotely in your favorite IDE/dev env. Do I need to "try something wild"? Clone it... build a new one... back it up and restore if it doesn't work...
Do I need to emulate production at a more fine-grained level than what docker can provide? Easy enough to build something that looks like production on my Proxmox box.
And when I'm done with all that work... my daily driver laptop and desktop remain free of cruft and clutter.
[dead]
I've always just clean-installed, then copied the containers/VMs over using vzdump and pct restore/qmrestore.
I learned stuff like this years ago with upgrades to Debian/Ubuntu/etc. - upgrading a distribution is a mess, and I've learned not to trust it.
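For anyone who hasn't done that dance, a minimal sketch of the workflow; the VMIDs, storage name, and archive paths here are hypothetical:

    # on the old install: dump a VM and a container to a backup storage
    vzdump 100 --storage backups --mode snapshot --compress zstd
    vzdump 101 --storage backups --mode snapshot --compress zstd

    # on the fresh install: restore them from the archives
    qmrestore /mnt/backups/dump/vzdump-qemu-100-2025_11_20-00_00_00.vma.zst 100
    pct restore 101 /mnt/backups/dump/vzdump-lxc-101-2025_11_20-00_00_00.tar.zst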
Somehow, their web developer managed to break scrolling on Safari, so I am unable to navigate the linked site. If anyone else was looking for a list of what has changed in recent releases, it can be found at https://pve.proxmox.com/wiki/Roadmap
There’s a cookie popup, so maybe something on your browser is removing the layer and causing all the events to be ignored.
Related ongoing thread:
Adventures in upgrading Proxmox - https://news.ycombinator.com/item?id=45981666 - Nov 2025 (10 comments)
Still no way to use cloud-init for LXCs (apparently). But upgrading from 9.0 on my four servers went fine, zero issues (including ZFS).
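For VMs, cloud-init is already wired into qm; a minimal sketch for comparison, with a hypothetical VMID, storage name, and key path:

    # attach a cloud-init drive to VM 9000 and set user, SSH key, and DHCP networking
    qm set 9000 --ide2 local-lvm:cloudinit
    qm set 9000 --ciuser admin --sshkeys /root/id_ed25519.pub
    qm set 9000 --ipconfig0 ip=dhcp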
So with support for OCI container images, does this mean I can run docker images as LXCs natively in proxmox? I guess it's an entirely manual process, no mature orchestration like portainer or even docker-compose, no easy upgrades, manually setting up bind mounts, etc. It would be a nice first step.
Also hoping that this work continues and tooling is made available. I suppose eventually someone could even make a wrapper around it that implements Docker's remote API
There is a vid showing the process on their YouTube channel:
https://youtu.be/4-u4x9L6k1s?t=21
>no mature orchestration
Seems to borrow the LXC tooling...which has a decent command line tool at least. You could in theory automate against that.
Presumably it'll mature
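In the meantime, pulling an image down as an OCI archive and handing it to the existing LXC/pct tooling is roughly the shape of it; a minimal sketch, with the image name and target path chosen for illustration, and the exact pct invocation depending on your PVE 9.1 setup:

    # pull an image from a registry into a local OCI archive on template storage
    skopeo copy docker://docker.io/library/nginx:latest \
        oci-archive:/var/lib/vz/template/cache/nginx.tar

    # from here the archive can be used like any other container template,
    # e.g. via pct create or the web UI (exact flags depend on the PVE version)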
Been waiting to update from v8. Time might be right now
15-20 years ago this wouldn't have been a company. It would have been a strong but informal open collaboration where smart and just people funded by various entities around the world kept it running.
Then the opportunity to get rich by offering an open source product combined with closed source extras+support was invented. I don't like this new world.
Edit: Somewhere along the line, we also lost the concept of having a sysadmin/developer person working at, say, a municipality contributing around 20% of their time towards maintenance of such projects. Invaluable when keeping things running.
Funny enough, Proxmox VE is 17 years old. I want to say it was ballpark 13-14 years ago I was using it to replace ESXi to get features (HA/Live migration) that only came with expensive licensing. 15-20 years ago there were definitely companies doing exactly this.
What do you find wrong with "this new world"? For context, I'm using their free offering for my home server, for 6-7 years now. Happy as a clam.
It enables Oracle-like behavior. Once you're locked in as commercial user they can do whatever they want.
Remember: Not all commercial users are FAANG rich. Counties/local municipalities count as commercial users, as an example.
But in many cases, like Proxmox, there is nothing proprietary. What they provide is the glue, the polish, the interface that ties together the technologies they build on. If they began to be nasty, you can just leave (or even, continue to use it however long you like).
In general, I don't think this is a threat. I think the problems begin with proprietary offerings, like they so often do in the cloud. That's when vendor lock-in takes its toll. But even with AWS, if you stick to open interfaces, it's easy to leave.
What is this "application containers" BS, just add native docker stack support. Most folks in the self hosting community already deploy nested dockers in LXCs, just add native support so we can cut out the middle man and squeeze out that indirection.
It makes no sense to add an extra layer, and we definitely do not want to make ourselves and our users dependent on the Docker project.
There exist many OCI runtimes, and our container toolkit already provides a (ballpark) 90% feature overlap with them. Maintaining two stacks here is just needless extra work and asking for extra pain for us devs and our users, so no, thanks.
That said, PVE is not OCI-runtime compatible yet; that's why this is marked as tech preview. But it can still be useful for many who control their OCI images themselves or have an existing automation stack that can drive the current implementation. We plan to work more on this in the future, but for the midterm it will not be that interesting for those who want a very simple hands-off approach (let's call it "casual hobby homelabber") or want to replace some more complex stack with it; I think we'll get there, though.
People stuck with Docker for a reason, even after they became user hostile. Almost every selfhosted project in existence provides a docker-compose.yml that's easy to expand and configure to immediately get started. None provide generic OCI containers to run in generic OCI runtimes.
I understand sticking with compatibility at that layer from an "ideal goal" POV, but that is unlikely to see a lot of adoption precisely because applications don't target generic OCI runtimes.
docker errors or escapes taking down the root system? Not for me....
Docker is mostly based on the same stuff that LXC uses under the hood.
Nah. Incus.
Sorry, I bought Proxmox 7, but it is not comparable. Incus does everything (and more) with a better interface, WAY better reliability, and also doesn't cost like a hundred EUR or whatever. (100 EUR is fine with me if it's better, but not if it's not...)
I've been looking at Incus, and some aspects are appealing (creating a VM/container via the CLI). But I think Proxmox has better clustering, plus built-in support for Ceph and backups (with Proxmox Backup Server)... Proxmox just has a little more maturity behind it. I'll be watching Incus though.
> aspects are appealing (creating a vm/container via cli)
Nothing is stopping you from doing this with Proxmox, right?
Incus has a pretty bare-bones web UI with nary a metric in sight, which is a bit of a pain when you're trying to track down CPU hogs. Proxmox, on the other hand, makes things very visible and easy to use.
I was looking to set up Proxmox for my homelab soon, but this comment got me interested in Incus, mostly because I've never heard of any Proxmox alternatives before this. You can try out Incus in your browser here: https://linuxcontainers.org/incus/try-it/
The demo does take ~10m to get into a working instance.
Their site might be getting hugged, even the non-demo page is taking ages to load.
Interesting. Does Incus have support for storing virtual machine assets in an NFS store so they could be easily migrated?
Looks like Incus has no GUI?
Proxmox has a nice web GUI.
It has one[1] (optional). Proxmox has a shittier, but more featureful, web UI.
[1]: https://blog.simos.info/how-to-install-and-setup-the-incus-w...
i like the proxmox web ui.
Also, looking at the link you posted, it looks like Incus can only do a fraction of what Proxmox can do. Is that the case, or is the web UI a limiting factor?
IncusOS looks pretty interesting. It’s a new immutable distribution designed for Incus.
When it matures I’ll look into switching from Proxmox.
Proxmox is free, too.
Incus looks nice, though it looks to be more API-driven, at least from the landing page. I can't attest to Proxmox in a production/cluster environment, but (barring GPU passthrough) it's very accessible for a homelab and small network.
GPU passthrough works fine? I use that for transcoding in Jellyfin.
Takes 10 minutes to set up and one reboot. Works flawlessly, Linux or Windows. vGPU is a different story though.
I don't remember if I tried and failed, or if it seemed too much for me... I have an Arc A series; if you have a verified guide I would like to take a look!
Some folks got Intel Arc working:
* https://forum.proxmox.com/threads/pci-passthrough-arc-a380-i...
This may have more to do with your underlying hardware than Proxmox itself.
It's been forever, but to do passthrough you need proper BIOS support and configuration.
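For reference, the usual shape of it on an Intel box, as a minimal sketch; the PCI address, vendor:device IDs, and VMID are hypothetical, and the details (IOMMU groups, BIOS toggles) vary by hardware:

    # 1) enable the IOMMU: add intel_iommu=on iommu=pt to the kernel cmdline
    #    (GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run update-grub)

    # 2) bind the GPU to vfio-pci by vendor:device ID
    echo "options vfio-pci ids=8086:56a0" > /etc/modprobe.d/vfio.conf
    update-initramfs -u && reboot

    # 3) pass the device at PCI address 01:00 through to VM 100 (pcie=1 needs a q35 machine)
    qm set 100 -hostpci0 01:00,pcie=1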
Dang it! I’ve just got comfortable with Proxmox, but now I have to start looking into Incus because of your comment.
I tried demo'ing Incus from their "Try it online" page but it just spins endlessly and nothing happens.
Proxmox is entirely free and very, very reliable. Personal preference is fine, but I really don't think any of your claims are true.
I thought you had to pay a fee to access their updates repository? It's been a while though so I may be mistaken.
There are community update repositories you can swap to.
I'm not really sure what the difference is.
The difference: the free repositories get the updates before the enterprise repo.
So, the software versions that go into the enterprise repo are considered stable by then.
(If we're talking about Proxmox, that is.)
Enterprise repos are better tested and somewhat guarantee updates don’t explode anything.
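Concretely, swapping to the free repo is a one-line apt source change; a minimal sketch assuming PVE 9 on Debian trixie (newer installs may ship deb822-style .sources files instead):

    # disable the enterprise repo (needs a subscription key) by commenting it out in
    # /etc/apt/sources.list.d/pve-enterprise.list:
    #   # deb https://enterprise.proxmox.com/debian/pve trixie pve-enterprise

    # enable the no-subscription repo instead
    echo "deb http://download.proxmox.com/debian/pve trixie pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-no-subscription.list
    apt update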