> systemd-networkd now implements a resolve hook for its internal DHCP server, so that the hostnames tracked in DHCP leases can be resolved locally. This is now enabled by default for the DHCP server running on the host side of local systemd-nspawn or systemd-vmspawn networks.
Hobbyist game dev here with random systemd thoughts. I’ve recently started to lean on systemd as my ‘local game server process manager’. At first I thought I’d have to write a whole slew of custom code for this myself, but then I realized the Linux distros I use have systemd. That, plus cgroups and profiling my game server’s performance, lets me dynamically pack an OS with as many game servers as it can handle (I target 80% resource utilization; funny things happen after that, things I don’t quite understand).
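For the curious, a minimal sketch of what that looks like as a systemd template unit; the binary path, port scheme, and limits here are all made up:

```ini
# /etc/systemd/system/game-server@.service (hypothetical name and paths)
[Unit]
Description=Game server instance on port %i

[Service]
ExecStart=/opt/game/server --port=%i
# cgroup-backed caps: each instance is limited independently
CPUQuota=50%
MemoryMax=512M
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl start game-server@7777 game-server@7778` spins up one instance per port, and `systemd-cgtop` shows per-instance resource usage.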
In this way I’m able to set up AWS EC2 instances or DigitalOcean droplets, spin up a bunch of game servers, and have them report their existence back to a backend game-services API. So far it’s working, but this part of my project is still in development.
I used to target containerizing my apps, which adds complexity, and in AWS I often have to care about VMs as resources anyway (e.g. AWS GameLift requires me to spin up VMs, same with AWS EKS). I’m still going back and forth between containers and systemd; having a local stack that spins up easily via docker compose is nice, but with systemd what I write locally is basically what runs in the prod environment, and there’s less waiting on container builds and such.
I share all of this in case there’s a gray beard wizard out there who can offer opinions. I have a tendency to explore and research (it’s fuuun!) so I’m not sure if I’m on a “this is cool and a great idea” path or on a “nobody does this because <reasons>” path.
This is sort of how I designed AccelByte’s managed game server system (previously called Armada).
You provide us a Docker image; we unpack it, turn it into a VM image, and run as many instances as you want side by side, with CPU affinity and NUMA awareness, bypassing the Docker network stack for latency/throughput reasons.
They had tried nomad, agones and raw k8s before that.
Checking out the website now. Looks enticing. Would a user of AccelByte multiplayer services still be in the business of knowing about the underlying VMs? I caught some copy on the website that made me wonder.
As a hobbyist, part of me wants the VM abstracted away completely (which may not be realistic). I want to say “here’s my game server process, it needs this much CPU/memory/network per unit, and I need 100 processes” and not really care about the underlying VM(s), at least until later. The closest thing I’ve found to this is AWS Fargate.
Also holy smokes if you were a part of the team that architected this solution I’d love to pick your brain.
There are a couple of providers that give you that kind of abstraction. PlayFab is _pretty close_, but it’s fairly slow to ramp up and down. There is/was Multiplay - they’ve had some changes recently and I’m not sure what their situation is right now. There’s also stuff like Hathora (they’re great but expensive).
At a previous job we used Azure Container Apps - it’s what you _want_ Fargate to be. AIUI, Google Cloud Run is pretty much the same deal, but I’ve no experience with it. In the past I’ve also considered deploying them as Lambdas, depending on session length…
Cloud Run tries to be this, but every service like this has quirks. For example, Cloud Run doesn’t let you deploy to high-CPU/high-memory instances, has lower performance due to multi-tenant hosts, etc.
That was actually the original intent. If we scale to bare-metal providers we can get much more performance.
By making it an “us” problem to run the infrastructure at a good cost, cheaper for us to run than AWS, we could take no profit on cloud VMs, making us cost-competitive as hell.
Definitely don't recommend going down this path if you're not already familiar with Nix, but if you are, a strategy that I find works really well is to package your software with Nix, then you can run it easily via systemd but also create super lightweight containers using nix-snapshotter[0] so you don't have to "build" container images if you still want the flexibility of containers. You can then run the containers on Docker or Kubernetes without having to build heavy images.
[0] https://github.com/pdtpartners/nix-snapshotter
I don't recommend getting familiar with Nix because your chances of getting nerd sniped by random HN comments increase exponentially.
This actually works really well with custom user scripts to do the initial setup. It’s also trivial to do with docker/podman if you don’t want it to take over the machine. Batching/matchmaking is the hard part of this; setting up a fleet is the fun part.
I’ve also done Microsoft Orleans clusters and still recommend the single-PID, multiple-containers/processes approach. The more you can avoid Orleans and Kubernetes and all that, the better; they just add complexity to this setup.
If you use podman quadlets, you get containers and systemd together as first-class citizens, in a config that is easily portable to Kubernetes if you need more complex features.
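For anyone who hasn’t seen quadlets: you drop a small `.container` file under /etc/containers/systemd/ and podman’s generator turns it into an ordinary systemd service. A minimal sketch, with a made-up image name:

```ini
# /etc/containers/systemd/game-server.container (hypothetical example)
[Unit]
Description=Game server in a Podman container

[Container]
Image=docker.io/example/game-server:latest
PublishPort=7777:7777/udp

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload` it behaves like any other unit: `systemctl start game-server.service`, logs in journald, and so on.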
O.O this may be the feature that gets me into podman over docker.
The shift from docker to podman was quite painful at first, but it’s much better now: very usable and quite stable.
Still, I can see the draw of docker compose for independent devs. For teams and orgs, though, it makes sense to use podman and systemd for the smaller stuff or for dev, and then literally export the config as Kubernetes YAML.
You sound like you've explored at least a few options in this space. Have you looked at https://agones.dev/ ?
Yes! It’s a great project. I’m super happy they have a coherent local development story. I kinda abandoned it, though, when I said “keeeep it simple” and stopped using containers/k8s. I think I needed to journey through understanding why multiplayer game services like Agones/GameLift/Photon were set up the way they were. Reading Multiplayer Game Programming: Architecting Networked Games by Joshua Glazer and Sanjay Madhav really helped (not to mention it let me follow GDC talks on multiplayer topics much better).
This all probably speaks to my odd prioritization: I want to understand as well as use. I’ve had to step back and realize that part of the fun I have in pursuing these projects is the research.
Did you try systemd's containers (nspawn)?
…no. TIL.
I wrote a blog post about using nspawn from an Arch Linux host. The Arch Wiki shows more information about how to get a Debian base if you want that instead. Link to the wiki is at the bottom of the blog post along with more references.
https://adamgradzki.com/lightweight-development-sandboxes-wi...
Portable services are another option.
And podman systemd quadlets are yet another.
https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
Wow, systemd can do more than I ever imagined it could.
Technically that's part of podman, not systemd. But it's the same architecture that was used to support sysvinit scripts.
(In fact, nothing prevents anyone from extracting and repackaging the sysvinit generator, now that I think of it).
> (target 80% resource utilization, funny things happen after that — things I don’t quite understand).
The closer you get to 100% resource utilization, the more regular your workload has to become. If you can queue requests and latency isn’t a problem, no problem: but then you have a batch process, not a live one (obviously not an option for games).
The reason is that live work doesn’t come in regular beats; it comes in clusters that scale in a fractal way. If your long-term mean is one request per second, what actually happens is you get five requests in one second, three seconds with one request each, one second with two requests, and five seconds with zero requests (you get my point): “fractal burstiness”.
You have to have free resources to handle the spikes at all scales.
Also, very many systems suffer from the processing time for a single request increasing as overall system load increases: “queuing latency blowup”.
So what happens? You get a spike, get behind, and never ever catch up.
https://en.wikipedia.org/wiki/Network_congestion#Congestive_...
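To put rough numbers on the blowup, here’s a sketch assuming the textbook M/M/1 model (random arrivals, a single server); real systems differ, but the shape of the curve is the point:

```python
# In an M/M/1 queue, the mean wait grows like rho / (1 - rho),
# where rho is utilization. Watch it explode past ~80%.
for rho in (0.50, 0.75, 0.80, 0.90, 0.95, 0.99):
    wait_factor = rho / (1 - rho)  # mean wait, in units of one service time
    print(f"utilization {rho:.0%}: average wait ~ {wait_factor:.1f}x service time")
```

In this model, going from 75% to 95% utilization multiplies the average wait by roughly 6x; make the arrivals bursty instead of Poisson and it gets much worse, which is the “never catch up” failure mode above.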
Yea. I realize I ought to dig into things more to understand how to push into 90-95% utilization territory. Thanks for the resource to read through.
You absolutely do not want 90-95% utilization. At that level of utilization, random variability alone is enough to cause massive whiplash in average queue lengths.
The cycle-time impact of variability on a single-server/single-queue system at 95% load is nearly 25x the impact on the same system at 75% load, and there are similar measures for other queue configurations.
As the other comment notes, you should really work from the assumption that 80% is max loading, just as you’d never size a swap file or swap partition at exactly the amount of memory overcommit you expect.
One way to think about it is 80% IS full utilization.
At some point, the engineering time, the risk of decreased performance, and the fragility of pushing the limit stop being worth the benefits of reaching some higher utilization metric. Even if it’s not where you are now, that optimum trade-off point exists somewhere.
> Support for System V service scripts is deprecated and will be removed in v260
All the services you forgot you were running for ten whole years will fail to launch someday soon.
Every release of Red Hat software makes me happy I switched to OpenBSD for my human-scale computers.
Wasn't this support listed as one of the reasons why systemd would be fine for everyone to adopt?
That was almost 15 years ago, and the support is evidently no longer as useful.
Also, it's entirely contained within a program that generates systemd .service files, so it's super easy to extract into a separate project. I bet someone will do it very quickly if there's a need.
For me it is quite a list.
However, it is not easy to figure out which of those scripts are actually SysVinit scripts and which simply wrap systemd.
As I wrote in another comment, just check out /run/systemd/system. You'll find the wrapper units that systemd creates for your sysvinit scripts.
How hard is it to just call your init.d scripts from a systemd unit?
Not only is it easy, the exact contents of the systemd unit can already be found in /run/systemd/system.
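The wrapper that systemd-sysv-generator emits looks roughly like this; a sketch for a hypothetical /etc/init.d/foo (the real output carries a few more headers):

```ini
[Unit]
Description=LSB: foo
SourcePath=/etc/init.d/foo

[Service]
Type=forking
RemainAfterExit=yes
ExecStart=/etc/init.d/foo start
ExecStop=/etc/init.d/foo stop
```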
Honestly. I'm sick of people complaining about systemd.
Were you paid to learn it?
Because the last time I wrote systemd units, it looked like a job.
Also, it's way overcomplex for anything but a multi-user, multi-service server. The kind you're paid to maintain.
Why would a server use a different init system than a desktop or embedded device?
Why wouldn't you want unit files instead of much larger init shell scripts which duplicate logic across every service?
It also enables a ton of event-driven actions that laptops/desktops/embedded devices use.
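For example (a made-up sketch), a .path unit fires a service whenever a file shows up, with no polling daemon required:

```ini
# backup.path (hypothetical): starts backup.service when a request file appears
[Path]
PathExistsGlob=/var/spool/backup-requests/*

[Install]
WantedBy=multi-user.target
```

The same pattern covers timers (.timer), sockets (.socket), and device events (.device), which is most of what a laptop or embedded box reacts to.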
> Why wouldn't you want unit files instead of much larger init shell scripts which duplicate logic across every service?
Indeed, that criticism makes no sense at all.
> It also enabled a ton of event driven actions which laptops/desktops/embedded devices use.
Don't forget VMs. Even in server space, they use hotplug/hotunplug as much as traditional desktops.
> Why would a server use a different init system than a desktop or embedded device?
The server and desktop have a lot more disk+RAM+CPU than the embedded device, to the point that running systemd on the low end of "just enough to run Linux" would be a pain.
Outside embedded, though, it probably works uniformly enough.
I think you're way overstating things. Systemd units can be complex, but for most things they are dead simple to write.
> a multi user multi service server. The kind you're paid to maintain.
TIL. Didn't know I could get paid to maintain my PC because I have a background service that does not run as my admin user.
> Because last time I wrote systemd units it looked like a job.
Fascinating. Last time I wrote a .service file I thought about how much easier it was than a SysV init script.
A systemd service can be as small as this (a minimal sketch, with a made-up binary path):
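```ini
# /etc/systemd/system/myservice.service (name and path are made up)
[Service]
ExecStart=/usr/local/bin/myservice
```

That's the whole file: `systemctl start myservice` runs it and the journal picks up its stdout/stderr. You only add an [Install] section when you want it enabled at boot.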
If this is a hard job for you, well, maybe get another career, mate. Especially now with LLMs.
The thing to me is that services sometimes do have cause to be more complex, or more secure, or better managed in various ways. Over time we might find (for example) that waiting for some other service to be up and available first helps.
And if you went to run a service in the past, you never knew what you were going to get. Each service that came with (for example) Debian was its own thing. Many forked off from one template or another, but often long ago, with their own idiosyncratic threads woven in over time. Complexity emerged, it wasn't contained, and it certainly wasn't normalized across services: there were dozens of services, each one requiring careful staring at an init script to understand, each with slightly different operational characteristics and nuance.
I find that complaints about systemd being complex almost always look at the problem in isolation: "I just want to run my (3-line) service, but I don't want to have to learn how systemd works and manages units: this is complex!" But that ignores the sprawl of what's implied: that everyone else was out there doing whatever, and that you stumbled in blind to all manner of bespoke homegrown complexity.
Systemd offers a gradient of complexity that begins extremely simple (while still offering impressive management and oversight) and lets services wade into more complexity as they need. I get that it's humbling, and to some people an affront, to see man pages with so very many options; it's natural to say: I don't need this, this is complex. But given how easy the basics are, given how much visibility into the state of the world we get that SysV never offered, given the standard shared culture, tools, and means, and given the divergent evolutionary chaos of everyone muddling through init scripts themselves, systemd feels vastly more contained, learnable, useful, concise, and less complex than the nightmares of old. And it has simple starting points, as shown at the top, that you can build on and embellish as you find cause to move further along the gradient of complexity, and you can do so in a simple way.
It's also incredible how many amazing tools systemd has for limiting process access and for sandboxing and securing services. The security wins can be enormous.
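A taste of those knobs, as a sketch with a hypothetical service; systemd.exec(5) has the full list:

```ini
[Service]
ExecStart=/usr/local/bin/myservice
DynamicUser=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
NoNewPrivileges=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
```

And `systemd-analyze security myservice` will grade how exposed the unit still is.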
> Because last time I wrote systemd units it looked like a job
Lastly, an LLM will be able to help you with systemd, since it is common knowledge with common practice. If you really dislike having to learn anything.
Yeah, I've been using Claude and Codex to create bespoke systemd services for my random tools and automation stuff and have been really impressed by how easy it is and how rock solid they are once set up. It's really nice not living in constant terror that a reboot, network connectivity loss, or gentle breeze will cause my duct-taped scripts to collapse under their own weight.
Somehow that's never enough though.
So they're finally nuking rc.local altogether.
Probably no biggie to google the necessary copypasta and launch stuff from .service files instead. Those, being custom, won't have their timeout reset to "infinity" with every update, unlike the existing rc.local wrapper service, which, between its infinity timeout and its occasional insistence that whatever rc.local launched can't be stopped, can cause shutdown hangs.
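A sketch of such a replacement, with a made-up script path and a deliberately finite stop timeout:

```ini
# /etc/systemd/system/local-startup.service (hypothetical rc.local replacement)
[Unit]
Description=Local startup commands

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/local-startup.sh
# finite stop timeout, so it can't hang shutdown the way rc-local.service could
TimeoutStopSec=30

[Install]
WantedBy=multi-user.target
```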
Despite being philosophically opposed to it, I can't deny that it is as common as it is because of how easy it seems to make the initial setup. By comparison, when I recently tried Void Linux, it simply requires (maybe even demands) more of its user.
> The cgroup2 file system is now mounted with the "memory_hugetlb_accounting" mount option, supported since kernel 6.6.
> Required minimum versions of following components are planned to be raised in v260:
> * Linux kernel >= 5.10 (recommended >= 5.14),
Don't these two statements contradict each other?
It gracefully falls back if the new option is not available at runtime
Can it read mail yet?
* https://en.wikipedia.org/wiki/Jamie_Zawinski#Zawinski's_Law
* https://www.jwz.org/hacks/
:)
Who needs to read mail when you can even make it receive mail!
Make an `smtp.socket`, which calls `smtp.service`, which receives the mail and prints it on standard output, which goes to a custom journald namespace (thanks `LogNamespace=mail` in the unit) so you can read your mail with `journalctl --namespace=mail`.
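A sketch of the joke, if anyone wants to try it (hand-waving past the actual SMTP handshake; note that with Accept=yes the service has to be a template, smtp@.service):

```ini
# smtp.socket: spawn one service instance per connection on port 25
[Socket]
ListenStream=25
Accept=yes

[Install]
WantedBy=sockets.target
```

```ini
# smtp@.service: "receives" the mail by copying the connection to stdout,
# which lands in the 'mail' journald namespace
[Service]
ExecStart=/usr/bin/cat
StandardInput=socket
StandardOutput=journal
LogNamespace=mail
```

`journalctl --namespace=mail` is your inbox.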
I find the musl support most remarkable.
systemd breaking was a thorn in the side of distributions trying to use musl.
The downside of drawing the interest of Brewsters (https://youtu.be/fwYy8R87JMA) in Linux.
v259? [cue https://youtu.be/lHomCiPFknY]
What has it taken over this time?