How do you people even keep up with this? I'm going back to cybersecurity after trying DevOps for a year; it's not for me. I miss my sysadmin days: things were simple back then, and they worked. Maybe I'm just getting old and my cognitive abilities are declining. It seems to me that the current tech scene doesn't reward simple.
This is exactly why taking a trip through the ops/infra side is so valuable - you learn why LTS-style engineering matters. You learn to pick technologies that are stable, reliable, and well supported by a large enough community of people who are conservative in their approach, for anything foundational, because the alternative is migration pain again and again.
I also feel like we as an industry should steer towards a state of "doneness" for OSS solutions. As long as it works, it's fine to keep using technologies that are only sparsely maintained.
ingress-nginx is commonly internet-facing though; I think everyone wants at least base-image and SSL upgrades on that component…
I often find myself trying to tell people that KISS is a good thing. If something is somewhat complex it will be really complex after a few years and a few rotations of personnel.
Quite often the tradeoff is not between complexity (to cover a bunch of different cases) and simplicity (do one thing simply), but rather where that complexity lies. Do you have dependency fanout? It probably makes sense to shove all that complexity into the central component and manage it centrally. Otherwise it probably makes sense to make all the components a bit more complex than they could be, but still manageable.
> It seems to me that the current tech scene doesn't reward simple.
A deal with the devil was made. The C suite gets to tell a story that k8s practices let you suck every penny out of the compute you already paid for. Modern devs get to do constant busy work adding complexity everywhere, creating job security and opportunities to use fun new toys. "Here's how we're using AI to right size our pods! Never mind the actual costs and reliability compared to traditional infrastructure, we only ever need to talk about the happy path/best case scenarios."
Mhm! And Google just sits there laughing at everyone. Mission accomplished.
This just seems like sensationalist nonsense spoken by someone who hasn’t done a second of Ops work.
Kubernetes is incredibly reliable compared to traditional infrastructure. It eliminates a ton of the configuration management dependency hellscape and inconsistent application deployments that traditional infrastructure entails.
Immutable containers provide a major benefit to development velocity and deployment reliability. They are far faster to pull and start than deploying to VMs, which end up needing either an annoying pipeline for building VM images or some complex and failure-prone deployment system.
Does Kubernetes have its downsides? Yeah, it’s complex overkill for small deployments or monolithic applications. But to be honest, there’s a lot of complexity to configuration management on traditional VMs with a lot of bad, not-so-gracefully aging tooling (cough…Chef Software)
And who is really working for a company that has a small deployment? I’d say that most medium-sized tech companies can easily justify the complexity of running a kubernetes cluster.
Networking can be complex with Kubernetes, but it’s only as complex as your service architecture.
These days there are more solutions than ever that remove a lot of the management burden but leave you with all the benefits of having a cluster, e.g., Talos Linux.
The problem is that some Kubernetes features would, in theory, have a positive impact on development velocity; in my experience (25 years of ops and DevOps), however, the cost of keeping up eats those benefits and often results in a net negative.
This is not always a problem of Kubernetes itself though, but of teams always chasing after the latest shiny thing.
It was clear they didn't know what they were talking about when they claimed the main reason for Kubernetes was to save money. Kubernetes is just easy to complain about.
Exactly, if anything, Kubernetes will require a lot more money.
> things were simple back then
If you were working in the orgs targeted by k8s, I think it was generally more of a mess. Think about managing a fleet of 100-200 servers with homemade bash scripts, crappy monitoring tools, and a handful of dashboards.
Now, k8s has engulfed a lot more than its primary target, but smaller shops go for it because they're also hoping to hit it big someday, I guess. Otherwise, there are far easier solutions at lower scale.
You can manage and reason about ~2000+ servers without Kubernetes, even with a relatively small team, say about 100-150 people, depending on what kind of business you're in. I'd recommend Puppet, Ansible (with AWX), and/or Ubuntu Landscape (assuming you're in the Ubuntu ecosystem).
Kubernetes is for rather special case environments. I am coming around to the idea of using Kubernetes more, but I still think that if you're not provisioning bare-metal worker nodes, then don't bother with Kubernetes.
The problem is that Kubernetes provides orchestration which is missing, or at least limited, in the VM and bare-metal world, so I can understand reaching for Kubernetes, because it is providing a relatively uniform interface for your infrastructure. It just comes at the cost of additional complexity.
Generally speaking, I think people need to be more comfortable building packages for their operating system of choice and installing applications that way. Then it's mostly configuration that needs to be pushed, which simplifies things somewhat.
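A minimal sketch of what that can look like with a tool like nfpm (just one option; the file paths and metadata here are hypothetical, not anything referenced above):

```yaml
# nfpm.yaml - build a .deb/.rpm from plain files (hypothetical example)
name: myapp
arch: amd64
platform: linux
version: 1.2.3
maintainer: Ops Team <ops@example.com>
description: Example service packaged for the distro's package manager
contents:
  - src: ./bin/myapp                      # locally built binary
    dst: /usr/local/bin/myapp
  - src: ./packaging/myapp.service        # systemd unit shipped with the package
    dst: /usr/lib/systemd/system/myapp.service
# Build with: nfpm package --packager deb   (or rpm, apk)
```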
> You can manage and reason about ~2000+ servers without Kubernetes, even with a relatively small team, say about 100 - 150
Oh wow, so uh... I'm managing around 1000 nodes across 6 clusters, alone. There are others who can handle things when I'm not around or on leave, and meticulously updated docs for them to do so, but in general I'm the only one touching our infra.
I also do dev work the other half of the week for our company.
Ask your boss if he needs a hand :)
IMO if you are on a cloud like AWS and using a config management system for mutable infra like Puppet, you are taking on unnecessary complexity and living in the dark ages.
> Generally speaking, I think people need to be more comfortable building packages for their operating system of choice and installing applications that way. Then it's mostly configuration that needs…
Why? It's 2025; Docker / containers make life so easy.
Meanwhile we manage over 1200 instances across multiple Kubernetes clusters with a team of 10, including complex mesh networking and everything else the team does. It might be complex, but it also gives you so much for free that you'd otherwise have to deal with yourself.
> Otherwise, there will be far easier solutions at lower scale.
Which solutions do you have in mind?
- VPS with software installed on the host
- VPS(s) with Docker (or similar) running containers built on-host
- Server(s) with Docker Swarm running containers in a registry
- Something Kubernetes-like, such as k3s?
In a way there are two problems to solve for small organisations (often 1 server per app, but up to say 3): the server (monitoring it and keeping it up to date) and the app(s) running on each server (deploying and updating them). The app side has more solutions, so I'd rather focus on the server side here.
Like the sibling commenter, I strongly dislike the configuration management landscape (with a particular dislike of Ansible and maintaining it - my takeaway is to never use third-party playbooks, always write your own). Since for me these servers are often set up, run for a while, and then replaced by a new one with the app redeployed to it (easier than an OS upgrade in production), I've gone back to a bash provisioning script, slightly templated config files, and copying them into place. It sucks, but not as much as debugging Ansible does.
I think you underestimate what can be done with actual code. The DevOps industry seems entirely code-averse and prefers an "infrastructure as data" paradigm instead, and not even with good, well-tested and well-understood formats like SQL databases or object storage; it leans towards more fragile formats like YAML.
Yes, the POSIX shell is not a good language, which is why things like Perl, Python, and even PHP or C got widely used. But there is an intermediate layer, with tools like Fabric (https://www.fabfile.org/), that solves a lot of the problems of the fully homegrown approach without locking you into the "infrastructure as (manually edited) data" paradigm. That paradigm only really works for problems of big scale and low complexity, which is exactly the opposite of what you see in many enterprise environments.
I've managed a couple of hundred virtual servers on vCenter with Ansible. It was fine. Syslog is your friend.
Even after the bash script era, I don't think the configuration management landscape gets enough criticism for how bad it is. It never stopped feeling hacked together and unreliable.
E.g., Chef Software, especially after its acquisition, is just a dumpster fire of weird anti-patterns and seemingly incomplete, buggy implementations.
Ansible is more of the gold standard but I actually moved to Chef to gain a little more capability. But now I hate both of them.
When I finally threw this all in the trash in my homelab and went to containerization, it was a major breath of fresh air and got me a lot of time back.
For organizations, one of the best parts about Kubernetes is that it's so agnostic that you can drop in replacements with a level of ease that is just about unheard of in the Ops world.
If you are a small shop you can just start with something simpler and more manageable like k3s or Talos Linux and basically get all the benefits without the full blown k8s management burden.
Would it be simpler to use plain Docker, Docker Swarm, Portainer, something like that? Yeah, but the amount of effort saved versus your ability to adapt in the future seems to favor just choosing Kubernetes as a default option.
To quote an ex-coworker: all configuration management systems are broken, in equal measure, just in different fashions. They are all trying to shoehorn fundamentally brittle, complex, and often mutually exclusive goals behind a single facade.
If you are in the position to pick a config management system, the best you can do is to chart out your current and known upcoming use cases. Then choose the tool that sucks the least for your particular needs.
And three years down the line, pray that you made the right choice.
Yes, kube is hideously complex. Yes, it comes with an enormous selection of footguns. But what it does do well is decouple host behaviour from service/container behaviour more than 98% of the time. Combined with immutable infrastructure, it is possible to isolate host configuration management to the image pre-bake stage. Leave just the absolute minimum of post-launch config to the boot/provisioning logic, and you have at least a hope of running something solid.
Distributed systems are inherently complex. And the fundamental truth is that inherent complexity can never be eliminated, only moved around.
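To make the "pre-bake plus minimal post-launch config" idea concrete, here is a rough cloud-init user-data sketch; the hostname, file path, and service name are hypothetical, and everything else is assumed to live in the pre-baked image:

```yaml
#cloud-config
# Only instance-specific details are applied at boot; the application and its
# dependencies are assumed to be baked into the image already.
hostname: worker-01
write_files:
  - path: /etc/myapp/runtime.env          # hypothetical app config file
    permissions: "0644"
    content: |
      REGION=eu-west-1
      CLUSTER=prod-a
runcmd:
  - systemctl enable --now myapp.service  # hypothetical pre-installed unit
```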
With EKS and cloud-init these days I don't find any need to even bake AMIs anymore. Scaling/autoscaling is so easy now, with Karpenter creating and destroying nodes to fit current demand. I think if you use Kubernetes in a very dumb way, to just run X copies of Y container behind an ALB with no funny business, it just works.
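A rough sketch of that "X copies of Y behind an ALB" shape, assuming the AWS Load Balancer Controller is installed; the names, image, and replica count are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                              # "X copies"
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: ghcr.io/example/web:1.0   # "Y container" (placeholder image)
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: { app: web }
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb                    # provisions the ALB
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```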
Yup. K8s is a bit of a pain to keep up with, but Chef and even Ansible are much more painful for other reasons once you have more than a handful of nodes to manage.
It's also basically a standard API that every cloud provider is forced to implement, meaning it's really easy to onboard new compute from almost anyone. Each K8s cloud provider has its own little quirks, but it's much simpler than the massive sea of differences between each cloud's unique API for VM management (and the tools that papered over that were generally very leaky abstractions in the pre-K8s world).
I have to say I hate ansible too (and puppet and cfengine that I have previously used). But it's unclear to me how containers fix the problems ansible solves.
So instead of an ansible playbook/role that installs, say, nginx from the distro package repository, and then pushes some specific configuration, I have a dockerfile that does the same thing? Woohoo?
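For concreteness, the kind of playbook being described is roughly this (a minimal sketch; the host group and template name are hypothetical):

```yaml
# Install nginx from the distro repo and push a config file (sketch)
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx from the distro package repository
      ansible.builtin.apt:
        name: nginx
        state: present
    - name: Push site configuration
      ansible.builtin.template:
        src: mysite.conf.j2                # hypothetical template
        dest: /etc/nginx/conf.d/mysite.conf
      notify: Reload nginx
  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

The Dockerfile equivalent performs the same two steps at image build time rather than against live hosts, which is where the reproducibility argument comes from.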
/r/kubernetes had this announcement up about five mins after it dropped at Kubecon. It's a huge deal. So many tutorials and products used ingress-nginx for basic ingress, so them throwing in the towel (but not really) is big news.
That said, (a) the Gateway API supersedes Ingress and provides much more functionality without much more complexity, and (b) NGINX and HAProxy both have Gateway controllers.
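For anyone who hasn't looked yet, the Gateway API split looks roughly like this (a sketch; the gatewayClassName depends on whichever controller you install, and the hostname and service names are made up):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
spec:
  gatewayClassName: nginx        # placeholder; set to your controller's class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp-route
spec:
  parentRefs:
    - name: web-gateway          # attach this route to the Gateway above
  hostnames:
    - "myapp.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: myapp            # hypothetical Service
          port: 80
```

The split between the Gateway (owned by the platform/infra team) and HTTPRoutes (owned by app teams) is the security-model improvement discussed further down the thread.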
To generally answer your question, I use HN, /r/devops and /r/kubernetes to stay current. I'm also working on a weekly blog series wherein I'll be doing an overview and quick start guide for every CNCF project in their portfolio. There's hundreds (thousands?) of projects in the collection, so it will keep me busy until I retire, probably :)
> /r/kubernetes had this announcement up about five mins after it dropped at Kubecon. It's a huge deal. So many tutorials and products used ingress-nginx for basic ingress, so them throwing in the towel (but not really) is big news.
I was one of those whose first reaction was surprise, because ingress was the most critical and hardest aspect of a kubernetes rollout to implement and get up and running on a vanilla deployment. It's what cloud providers offer out of the box as a major selling point to draw in customers.
But then I browsed through the Gateway API docs, and it is a world of difference. It turns a hard problem, one that required so many tutorials and products just to get something running, into a trivially solvable one. The improved security model is undoubtedly better and alone clearly justifies getting rid of Ingress.
Change might be inconvenient, but you need change to get rid of pain points.
I like devops. It means you get to get ahead of all the issues that you could potentially find in cybersecurity. Sure it's complicated, but at least you'll never be bored. I think the hardest part is that you always feel like you don't have enough time to do everything you need to.
DevOps teams are always running slightly behind and rarely getting ahead of technical debt because they are treated as cost centers by the business (perpetually understaffed) and as “last minute complicated requests that sound simple enough” and “oops our requirements changed” dumping grounds for engineering teams.
Plus, the ops side has a lot of challenges that can really be a different beast compared to the application side. The breadth of knowledge needed for the job is staggering and yet you also need depth in terms of knowing how operating systems and networks work.
Cybersecurity is easier? Isn't it all about constantly updating and patching obsolete, vulnerable stuff - the most annoying part of ops?
I prefer the current era, where I never have to SSH in to debug a node. If a node is misbehaving or even needs a patch, I destroy it. One command, works every time.
How can you not be interested in what took down your node???
Honestly, a lot of the Hacker News discourse every single time anything having to do with Kubernetes comes up reads like uninformed, annoyed griping from people who have barely used it, or not at all. Kubernetes itself has been around since 2014. ingress-nginx was the original example of how to implement an Ingress controller, and Ingress itself is not going away, which seems to be a misconception in a lot of the replies to your comment. A lot of tutorials use ingress-nginx because they simply copied the Kubernetes upstream documentation's own tutorials, which used toy examples of how to do things, including ingress-nginx itself, which was only ever meant to be a toy example of an Ingress controller.
Nonetheless, it was around a full decade before they finally decided to retire it. It's not like this is something they introduced, advertised as the ideal fit for all production use cases, and then promptly changed their minds. It's been over a decade.
Part of the problem here is the Kubernetes devs not really following their own advice: annotations are supposed to be metadata that doesn't implement functionality, but ingress-nginx allowed you to inject arbitrary configuration through them. That ended up being a terrible idea in the main use case Kubernetes is really meant for, which is an organization running a multi-tenant platform offering application-layer services to other organizations. Kubernetes is great for that, but Hacker News, with its "everything is either a week-one startup or a solo indie dev" mindset, is blind to it for whatever reason.
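Roughly the pattern being described, as a hypothetical example: ingress-nginx let you smuggle raw nginx directives into the generated config through an annotation, which becomes a problem as soon as tenants you don't fully trust can create Ingress objects:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp                              # hypothetical app
  annotations:
    # Injects a raw nginx directive into the generated server config.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      add_header X-Served-By edge always;
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```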
Nonetheless, they still kept it alive for over a decade. Hacker News also has the exact wrong idea about who does and should use Kubernetes. It's not FAANGs, which operate at a scale way too big for it and do this kind of thing using in-house tech they develop themselves. Even Google doesn't use it. It's more for the Home Depots and BMWs of the world, organizations which are large-scale but not primarily software companies, running thousands if not millions of applications in different physical locations run by different local teams, but not necessarily serving planet-scale web users. They can deal with changing providers once every ten years. I would invite everyone who thinks this is unmanageable complexity to try dipping their toes into the legal and accounting worlds that Fortune 500s have to deal with. They can handle some complexity.
The Ingress API has been on ice for like 5 years. The core Kubernetes API doesn't change that much, at least these days. There's an infinite number of (questionable) add-ons you can deploy in your cluster, and I think that's mostly where folks get stuck in the mud.
But the Gateway API has only been generally available for two years now. And the last time I checked, most managed K8S solutions recommend the Ingress API while Gateway support is still experimental.
We also now have multiple full-featured Ingress implementations that work better than the old ingress-nginx.
> doesn’t change that much
Yet they are retiring a core Ingress that has been around for almost as long as Kubernetes has.
They are not retiring the API. Nginx Ingress is one of the many projects that implements this API, and you are free to migrate to another implementation.
ingress-nginx is older than 5-7 years though. In that time frame you would've needed to update your Linux system anyway, which most often gets hairy as well. The sad thing is just that the replacement is not quite there yet, and the Gateway API has a lot of drawbacks that might get fixed in the next release (e.g., working with cert-manager).
In my experience, many teams keep up with this by spending a lot of time keeping up with this and less time developing the actual product. Which, you probably guessed it, results in products much shittier than what we had 10 or 20 years ago.
But hey, it keeps a lot of people busy, which means it also keeps a lot of managers and consultants and trainers busy.
Kubernetes is never maturing; it keeps moving. An installation from just a year ago will already have things that require significant planning to upgrade.
What is missing is an open source orchestrator that has a feature freeze and isn't Nomad or docker swarm.
Just out of curiosity, what's wrong with either of those two?
hear hear!
When I was choosing an ingress controller a few years ago, I think it was the most popular one by far, according to various polls. As I didn't have any specific requirements, I chose it and it worked for me. Over the years I've used a few proprietary annotations, so migrating away is going to be a bit of a pain. Not awesome news.
Reading a few blogs and forums about it today, people are talking about switching to the Gateway API (from "legacy" Ingress).
And I do not understand it:
1. Ingress still works, it's not deprecated.
2. There are a lot of controllers that support both the Gateway API and Ingress (for example, Traefik)
So how does ingress-nginx retiring relate to, or affect, the switch to the Gateway API?
1) ingress still works but is on the path to deprecation. It's a super popular API, so this process will take a lot of time. That's why service meshes have been moving to Gateway API. Retiring ingress-nginx, the most popular ingress controller, is a very loud warning shot.
2) see (1).
It doesn't, but the Kubernetes team was kind of like, "Hey, while you are switching, maybe switch away from the Ingress API?"
These are necessary steps, but the timing is bad. It looks like a Google product shutting down. Give people time to move off; 6 months is not enough.
To be fair, this is not the first time we've heard about this; https://github.com/kubernetes/ingress-nginx/issues/13002 has existed since March. However, I also thought the timeline to a complete project halt would be much longer, considering the prevalence of the nginx ingress controller. It might also mean that InGate is dead, since it's not mentioned in this post and doesn't seem to be close to any kind of stable release.
> InGate development never progressed far enough to create a mature replacement; it will also be retired
It's not a service shutting down though. It will still work fine for a while, and if a critical security patch is required, the community might still be able to add it.
No, they are going to forbid people from committing anything to the project, so even security patches will be blocked.
The chance of this not having a fork keeping security updates running is effectively zero.
RIP, end of an era. Thank you everyone who worked on this, it was an extraordinarily useful and reliable project.
Traefik has nginx annotation compatibility as well, to make it easy to switch.
The list of supported annotations is quite short though
This is terrible. Of all things k8s, ingress was the part I just did not want to have to mess with. It just worked and was stable; this Gateway thing is completely unnecessary. And it seems to me that ingress-nginx retiring is just because people were pushing for the Gateway API so much that the maintainers threw in the towel. Infra is not React; people need to leave it alone.
Does anyone know good resources on how to migrate and which gateway controllers are suitable replacements?
Ingresses with custom nginx attributes might be tricky to migrate.
Literally the second link in the article is "migrating to API Gateway" and points to https://gateway-api.sigs.k8s.io/guides/
Which has this section about migration: https://gateway-api.sigs.k8s.io/guides/migrating-from-ingres...
And this list of Gateway controllers: https://gateway-api.sigs.k8s.io/implementations/
I've been using Envoy Gateway in my homelab and have found it to be good for my modest needs (single node k3s cluster running on an old PC). I needed to configure the underlying EnvoyProxy so that it would listen on specific IPs provided by MetalLB, and their docs were good enough to find my way through that.
https://gateway.envoyproxy.io/
^ I second Envoy Gateway! It has support for HTTPRoute like all the others, but also TCPRoute, UDPRoute, TLSRoute, GRPCRoute backed by Envoy and they have worked great for me on EKS clusters I manage for work. The migration from Ingress API to Gateway API hasn’t been bad, as you can have both running side-by-side (just not using the same LB) and the EnvoyPatchPolicy has been great for making advanced changes for things not covered by the manifests
But Envoy configs are unreadable abominations; why would you choose it? How did you even learn how to configure it? Its documentation is so confusing.
Envoy is designed with the intent that a machine is dynamically reconfiguring it at runtime. It is not designed to be configured directly by a human.
The tradeoff is that you can do truly zero downtime configuration changes. Granted, this is important to a very small number of companies, but if it's important to you, Envoy is great.
You don't. Envoy is great if you programmatically configure it, or if you have very small and simple configs. It can't be maintained by a human. But if you have tools that generate it programmatically based on other config, you can read through it.
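For a sense of what "not designed to be configured directly by a human" means in practice, this is roughly the minimum hand-written static config to proxy one listener to one upstream (a sketch; names and addresses are made up):

```yaml
static_resources:
  listeners:
    - name: ingress_http
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 8080
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: app
                      domains: ["*"]
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: app
  clusters:
    - name: app                              # single upstream backend
      type: STRICT_DNS
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: app
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: app.default.svc.cluster.local
                      port_value: 80
```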
It's in beta, but HAProxy has a gateway product:
https://www.haproxy.com/blog/announcing-haproxy-unified-gate...
There are many Gateway implementations: https://gateway-api.sigs.k8s.io/implementations/
Love HAProxy, but if we're shilling projects, Istio is superior. Multi-cluster, HBONE, ambient mode.
> istio is superior
It's also eating a significant amount of your compute and memory
What is hbone? What is ambient?
Lots more moving pieces though
I have tens of clusters to maintain. Quite an advertisement for ECS!
Kubernetes behaves like a JavaScript framework. See what has been happening in React and Svelte for the past few years.
Infrastructure is the underlying fabric and it needs stability and maturity.
Incidentally, we migrated to ECS just last week.
ingress-nginx was the default ingress controller for pretty much the entire life of k8s. F5 bought nginx and made its own "nginx ingress" controller, which I've never met a user of.
Sad to see such a core component die, but I guess now everyone has to migrate to gateways.
F5 bought nginx? Isn't (wasn't?) nginx a simple open source web server?
Nginx Inc was founded by Nginx developers in 2011. They were selling commercial support. They were bought by F5 in 2019 for $670M.
And see how confusing the naming is.
ingress-nginx. nginx-ingress.
Another triumph for open source: popular project probably used by many megacorps only propped up by the weekend charity of a couple unpaid suckers over the years.
I don't think this is the https://xkcd.com/2347/ of the ops world? People will usually use the ingress controller of their cloud provider. I've been using the Tailscale ingresses for Tailscale Funnel. But the transition from Ingress to the Gateway API seems to be taking forever, so I'm just running a Caddy pod with a static config until the dust settles.
Why would you kill a thing that works so well, is so flexible, and does not have an equal yet?
I do not understand.
There are no maintainers. It was maintained by one engineer for years; he stepped down, and F5 (which bought nginx) doesn't want to contribute since it has a competing product.
The project is still active, even if it's not pushing big new features.
What's the security back-story here?
Only a single maintainer for years, and it has now fallen to best-effort.
I (and others) offered to create PRs for open issues; just point us in the right direction, we asked. The maintainer always came back with "I fixed it for you".
The maintainer had plenty of people who wanted to help, but never spent the time to teach them.
Are you blaming the maintainer? lol, lmao even
Not exactly blaming them. But saying opportunities were missed, for sure.
It wasn't the most loved part of k8s, to say the least.
Great, another deprecation to address in my EKS clusters :(