The set of core services on AWS remains amazing: EC2, S3, IAM, EKS, Route53, RDS etc.
AWS IAM is extremely well designed when you compare it with the spaghetti monster IAM systems of other clouds.
Every time I try the new cool thing supposed to replace these services on some other provider - I understand how mature and polished the AWS ones are.
With that said, the remaining 90% of AWS services, like WorkMail, Cognito, and API Gateway, are absolute hot garbage that no well-meaning AWS expert will touch with a ten-meter pole.
Slightly different but related topic: for people who work with vibe coders, what is the easiest way to enable that for non-tech users (while reducing risk)? AWS, or something like Vercel? Coolify?
Vercel and Supabase seem to be the norm around here.
The DX is simple, there are integrations between the two, and the stack is well understood by the LLMs.
Lovable uses supabase, and is surprisingly easy to eject from too; I've done the lovable to Vercel + supabase a couple of times, even managing to keep it syncing via the Git integration. You can get proper scalable infra and minimal vendor lock in whilst the vibe coder gets to play with the pretty.
+1 on the IAM over-engineering, though to AWS's credit, I suspect it evolved rather than being designed, and that's what you get when evolution has to maintain some level of backward compatibility (think humans still having to be able to lay eggs).
Another thing that happens occasionally to SaaS companies is AWS creating a copy of their product in a somewhat shady way - but it's not a technical problem, it's a business model problem.
> I am reminded why I left AWS and how I need to finish the job, get off AWS Workmail, move my domains from Route53 and never return.
Well, besides the fact that the author's account got suspended for no reason, WorkMail is being shut down in March 2027 anyway. I recommend checking out Purelymail for a budget, batteries-included option.
> Cloud computing was an absolutely mind blowing revolution - suddenly your startup could run its own computer systems in minutes without need to install and run your own systems in a data center. This was an absolute game changer, and I really drank the AWS Kool Aid down to every last drop then I licked out the cup. I was all in on AWS in a big way.
Am I the only one who remembers that VPSes and dedicated hosting services were a thing before AWS came around? Yes you had to pay for a month at a time and scaling wasn’t as instant, but it wasn’t like the only option before cloud computing was having to drive to the datacentre and install your own server.
> suddenly your startup could run its own computer systems in minutes without need to install and run your own systems in a data center.
The “in minutes” is doing a lot of the work in that sentence above.
I also used dedicated servers in the late ’90s (and they still offer great value today). But before AWS, provisioning new hardware typically took days, not minutes.
AWS changed that, and the rest of the industry eventually followed.
> I also used dedicated servers in the late ’90s (and they still offer great value today). But before AWS, provisioning new hardware typically took days, not minutes.
VPSes and non-custom configs for dedicated servers were pretty instant as far as I know. I think the advantage of AWS was more that you could scale up and down much more easily, since you weren't locked into a monthly contract, and that you could automate server provisioning through an API.
At last my quest to find the stooge has come to a bitter end!
I saw some 192 core instances on Vultr, but I haven't tried them yet. What are you doing with all them cores?
I often fantasized about spinning up hundreds of nodes for various projects that needed number crunching. Then realized "wait I can just rent one big box for an hour" haha. It's really cool that we can do that now.
> Of course I do not pay for premium support, so I have to wait the 24 hours that they said it would take them to reply. It's 3 days and AWS support has not replied.
The writing has been on the wall for a few years now, and this is particularly evident to those that have worked at AWS: Amazon is in its day-2 era.
Amazon being in its day-2 era means that most of what has been written in the past twenty years about Amazon is not valid anymore.
“Customer obsession” is literally their first leadership principle, and stellar support was their defining characteristic.
There was a time when AWS was truly innovative, but it’s long since transformed into Amazon’s cash cow and is behaving like such.
Innovation has ground to a halt, with mostly just meh "hey, us too" launches. Pricing and design patterns feel increasingly focused on locking you in. AWS folks tell me that internally they talk a lot about making sure things are "sticky" with customers. The best engineering talent no longer wants to work there and it shows, especially in places like AI, where AWS has just released wave after wave of discombobulated nonsense.
As a core “rent-a-server” concept with a few add on services there’s still a lot of utility, but AWS is gradually becoming a boring baseline utility with a ton of distracting half baked stuff jammed on top. Most companies I talk to are no longer focused on single cloud and increasingly are bringing a lot of workloads back on prem or in colos. Not everything, but for a lot of stuff that just makes more sense and is a heck of a lot cheaper.
The chips business in Annapurna is probably the most interesting thing, and it plays to AWS's strength: the boring, low-level infrastructure stuff. Nearly everything AWS tries to do beyond chips and rent-a-server is a hot mess.
AWS isn’t going away, but its future looks a lot less exciting and inspiring than the story that got us to this point.
Lambda is incredibly simple to use, it just runs a function for you.
Not sure how you could burn so much with DynamoDB. It's serverless and incredibly cheap. You must have been doing something insane, like scanning through a huge dataset over and over.
Being salty that Gary couldn’t sell enough of his paid service and AWS is competing with it isn’t a meaningful complaint. I want something in AWS, not on Gary’s servers.
I’ve a couple of apps doing a few million a day. I am using Hetzner and before that used DigitalOcean. Mind you, for close to a decade.
People are unnecessarily complicating stuff, and these clouds can get very expensive very quickly.
Recently, I came across a company that was spending $20k a month on GCP. Are you kidding me - $20k for the kind of stuff you do? If you understood how CPU, RAM, and disk actually work, you wouldn't plaster "autoscaling hyper solutions" everywhere, burning money in the cloud.
I moved their stuff out of the GCP managed solutions and ended up with a $200-400 per month bill. The CEO still can't believe it's even possible.
I suggested they move to dedicated servers, but they didn't want that; they said they must show they are on a hyperscaler cloud.
OK, fine: we'd stay on the hyperscaler but not use any of their services other than VMs.
They had racked up a ton of bills by using cloud monitoring, Datastore, autoscalers (with no proper tuning), and Kubernetes.
I replaced all of it with Prometheus, Grafana, and Loki, moved most of the data from Datastore to Postgres and Mongo with replicas, and added Redis.
I implemented a custom scaler that scales on app metrics instead of an arbitrary CPU threshold.
I implemented hot data reloads by packing the data updates into a gzip file, uploading it to GCS, and pulling it from the autoscaled units. I also moved the workloads to Spot VMs.
The complexity of stuff in cloud is high for nothing.
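The hot-reload step described above can be sketched roughly like this. This is a minimal illustration, not the commenter's actual code: `HotReloader` and its injected `fetch` callable are hypothetical names, and in production `fetch` would wrap a GCS blob download (checking the object's generation before pulling bytes); here the store is simulated so the logic stands alone.

```python
import gzip
import json
import threading

class HotReloader:
    """Polls an object store for a gzipped JSON snapshot and swaps it
    into memory atomically. `fetch` is any callable returning
    (generation, compressed_bytes); injecting it keeps the logic testable."""

    def __init__(self, fetch):
        self._fetch = fetch
        self._lock = threading.Lock()
        self._generation = None
        self._data = {}

    def refresh(self):
        generation, payload = self._fetch()
        if generation == self._generation:
            return False  # nothing new uploaded, keep current data
        data = json.loads(gzip.decompress(payload))
        # Swap the whole dict under the lock so readers never see a partial update.
        with self._lock:
            self._data = data
            self._generation = generation
        return True

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)


# Simulated "upload": version 1 of the dataset, gzipped like the GCS object would be.
snapshot = gzip.compress(json.dumps({"feature_flags": {"new_ui": True}}).encode())
reloader = HotReloader(lambda: (1, snapshot))
reloader.refresh()
print(reloader.get("feature_flags"))  # {'new_ui': True}
```

The generation check mirrors how GCS object generations let a poller skip unchanged snapshots, so the autoscaled units only pay the decompress cost when a new file actually lands.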
At my previous startup: because AWS gave us a bunch of credits and helped us design the infra. It meant we ran for free what they designed for free.
At a previous bigger company, getting procurement to sign up with a new provider required writing a business case, justifying the spend, and then getting multiple competing quotes and speaking to their sales teams. Signing up for a new service takes _months_, even for $10/mo, as they'll negotiate for bulk discounts and the best possible terms for something that will literally cost less per year than one of the meetings they hold to discuss the "value". Meanwhile, on AWS I can click a button in the marketplace and it gets thrown onto the AWS account, which is pre-approved spending.
I think AWS is liked because, when AWS started, being able to get a new VPS up in minutes was still quite unusual. Many hosts would take about 24 hours to get a new VM up, at least in my experience. But nowadays, there are many options for getting a VM instantly.
I agree that it's overcomplicated. Having a self-service portal for assigning IPs is useful, and being able to detach storage from VMs and such is quite flexible, but most of it seems like overkill.
It’s flexible but slow. We ran our C++ CI/CD on AWS at a previous company, using spot instances with volumes attached and detached dynamically. The performance was absolutely abysmal, because in effect you’re running compilation across a networked file system, no matter what AWS says your throughput is.
Our 64-core spot instances on Windows were taking 8-10x longer than our developer machines with the same core count, and a bunch of engineering went into the scaling, queue management, etc. If we’d just had a single bare-metal machine from Hetzner, we could have saved money _and_ reduced our iteration times.
The ease of getting things set up quickly and usually for free when starting up is very tempting. Later, migration is usually considered risky and not worth it because of maintenance overhead - which I would argue has become very easy.
I worked for a startup company - the founders were really nice people and had put their own money in - quite a lot of money - to get the software built for the vision they had.
By the time I joined, 18 months after development had started, a giant, complex, hideously tentacled software beast had been built that used every possible AWS service that the massive offshore team of developers could find to use.
It should have been built on a single Linux box by a single senior developer with Python and Postgres or nodejs or Ruby or whatever.
They went out of business after not too long and I couldn't help wondering if things might have been different if they hadn't spent a fortune building a giant money making machine for AWS, instead of making a web application on a Linux box.
Every AWS project I have worked on has had some significant work put into programming AWS instead of writing business functionality.
> hideously tentacled software beast had been built that used every possible AWS service that the massive offshore team of developers could find to use
To be fair, if they had an AWS Solutions Architect involved, they heavily push you down this road, and if they manage to get into management's ear they'll push the idea that serverless AWS features are vastly cheaper. No servers running overnight while you're all tucked up in bed - it has to be cheaper, right?
If you're only responding to a handful of requests that's true, but once things ramp up you get "nickel and dimed" for everything: API Gateway requests, lambda execution time, DynamoDB read/write units, CloudWatch logs, outgoing data, step function transitions, S3 requests.
I understand all those services cost money and they shouldn't be free, but I question whether paying all those micro-transactions is worse than paying for your own VMs, especially once your customers complain about the cold starts and you think you can fix it with "lambda warming".
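For a sense of how that nickel-and-diming adds up, here is a back-of-envelope sketch. All the rates are illustrative placeholders chosen for the example (not quoted AWS prices - check the pricing pages), and the model deliberately ignores free tiers, DynamoDB, CloudWatch, and data transfer:

```python
def serverless_monthly_cost(requests_per_month,
                            price_per_million_gateway=1.00,       # assumed API gateway rate
                            lambda_gb_seconds_per_req=0.25,       # assumed avg compute per request
                            price_per_gb_second=0.0000166667,     # assumed function compute rate
                            price_per_million_invocations=0.20):  # assumed invocation rate
    """Sum the per-request charges for a simple gateway + function stack."""
    millions = requests_per_month / 1_000_000
    gateway = millions * price_per_million_gateway
    invocations = millions * price_per_million_invocations
    compute = requests_per_month * lambda_gb_seconds_per_req * price_per_gb_second
    return gateway + invocations + compute

flat_vm = 30.0  # hypothetical monthly price of a small always-on instance

for reqs in (100_000, 10_000_000, 100_000_000):
    cost = serverless_monthly_cost(reqs)
    winner = "serverless" if cost < flat_vm else "flat VM"
    print(f"{reqs:>11,} req/mo -> ${cost:8.2f} serverless vs ${flat_vm:.2f} VM ({winner})")
```

Under these made-up rates the crossover lands somewhere in the single-digit millions of requests per month: below it the micro-transactions round to nothing, above it the flat VM wins, which is exactly the "fine until things ramp up" shape described above.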
To be fair, that’s an AWS problem, not a Lambda problem. If you replace Lambda with EC2, the only things you save on are Lambda and Step Functions (and maybe API Gateway, but then you need to pay for a load balancer or a public IP); the rest you need to pay for anyway.
Years ago, I joined a company, took over a dev team and was asked to launch the product in 3 months.
They were using AWS, so I logged in to the account to add a few more machines. Right there, in front of my eyes, were the signs of an adversarial, abusive relationship.
The UI to fire up a new machine did not show me the price. I had to look up the price in another table that did not have the specs.
I had to keep the two tables open and cross-check the specs against the prices.
If I had learned one thing from my past life, it was this: if you see the signs of an abusive relationship, and you have the option to walk out but don't, all that follows is your own fault.
Created a DigitalOcean account, moved everything over. Set up our CI/CDs to deploy there, and spent the next two months on the product, launching one month earlier than promised.
Some years before that, I saw a video online where a person digs a hole near a river and puts a pipe between the river and the hole. The fish push themselves hard through the pipe, right into the trap. Choosing the path of least resistance, and never backing off from a mistake: recipes for ending up like those fish. The video left a big impression on me.
> AWS stomped on open source projects - despite the clear desire of projects like Elasticsearch, Redis, and MongoDB not to be cloned and monetized, AWS pushed ahead with OpenSearch, Valkey, and DocumentDB anyway, capturing the hosted-service money after those communities and companies had built the markets; the result was a wave of defensive licenses like SSPL, Elastic License, RSAL, and other source-available models designed less to stop ordinary users than to stop AWS from stripping open-source infrastructure for parts, owning the customer relationship.
This is completely backwards, at least with OpenSearch and Valkey. AWS didn't create the forks until after the upstream projects changed their license, so it's really weird to say that the forks "resulted" in the license changes when those forks were a response to the license changes. With Valkey in particular, it was members of the former Redis core development team that created Valkey.
A lot of these projects work on a business model where they open-source their core product, and provide advanced services, installation, maintenance or fully-managed services around their product. AWS was bypassing them by providing fully-managed services. On this, I am on the side of the people behind the projects. Basically AWS was eating their lunch. They had no choice but to change the licenses.
They have a problem with their business model, then. License changes to a formerly open source project are costly. The community reacts very strongly when license terms change after they've come to depend on a product, and they should.
Why do we apply this standard to MongoDB but not to Apache, Linux, Postgres, or MariaDB? One purpose of an open source license is to allow many providers to provide the service. As I've talked about here previously, Elasticsearch wasn't able to provide the service I needed, so I had to move to AWS.
It's weird to me that the Hacker News community doesn't think that sort of competition is good. The narrative seems to be that all these businesses are somehow victims of AWS, when it seems the truth is much more straightforward: they provided open source software and people used it. The fact that their business had no working plan to actually monetize that foundation should not be taken out on the community.
Competition would mean Amazon creating their own software. Taking software others made and using your monopoly ecosystem and scale to drive the original creator out of the game kills the product.
Many support breaking up Amazon so others could compete, not killing small entities while growing Amazon.
It's not just Amazon, it's also smaller providers like Dreamhost, which I've been using for 20 years. I feel like people are in favor of killing the hosting ecosystem so that we can support businesses that didn't have a working plan to monetize their open source offering.
Walmart pulling up to a small town, opening a single business, and paying everyone minimum wage is not 'competition is good'.
Just try a little bit of understanding.
Sometimes I wonder how much it would hurt Amazon to pay the creators and maintainers of the OSS software they sell 1 cent per billing period of use (1 hr?). I also wonder how much money that would offer an OSS team to contribute, risk-free, to improving the product.
> it's really weird to say that the forks "resulted" in the license changes when those forks where a response to the license changes
But those license changes were a response to how AWS was monetizing their work in ways unsustainable for the upstream projects.
> But those license changes were a response to how AWS was monetizing their work in ways unsustainable for the upstream projects
Or, seen from the other side: in this view, these projects chose initial licenses that didn't fit their wishes for how others should use their project.
If you use a license that gives people the freedom to host your project as a service and make money that way, without paying you, and your goal was to make money that specific way, it kind of feels like you chose the wrong license here.
What was unsustainable (considering this perspective) was less that outside actors did what they were allowed to do, and more that they chose a license that was incompatible with their actual goals.
The situation changed. A license that's the right choice at one point may not be the right license a decade later.
Agree, as long as existing contributors agree the license should be changed, projects should feel free to do so, no harm, no foul.
Yes, this was my impression as well.
Of course AWS didn't create the forks until the projects changed their license to disallow AWS from making money from their code! That's the whole point here.
AWS has been systematically hollowed out of technical staff since 2023, either through mass layoffs or via two cycles of performance improvement plans. Often I find that the most skilled peers in presales or support are no longer with AWS, whilst the ones with the most ambiguous work histories have been retained and promoted.
Use AWS at your own risk, Paul Vixie is not there to save you.
I've transitioned between cloud services and self-hosting a few times:
1. Vercel Phase
My first project used Vercel. Since my project was Next.js, the experience was decent. But as my project gained some users, I found that even for projects under 100 users, I needed to pay $20 per month. Since my service didn't require high performance, this cost felt steep.
2. Self-host Phase (Hetzner + Coolify)
Later, I started setting up my own server with Hetzner and deploying with Coolify. Since Coolify is open-source and free, I only had to cover the cost of a VPS (even $5 a month was sufficient). I could deploy PostgreSQL instances and run a web server on it. But later I discovered that even this way, I still had to spend a lot of effort maintaining PostgreSQL and Redis. Even though they were containerized with Docker, managing them was still troublesome. I needed to pass various system and environment variables between services, which was very tedious.
3. Cloudflare Phase
So later I switched to Cloudflare. With Cloudflare Workers, I can deploy fullstack applications and use D1 Database and Cloudflare KV to replace Redis. These features can be called directly within the Worker without needing to pass environment variables.
Plus, the local development experience is excellent and the pricing is very reasonable, so I've been using Cloudflare's entire suite ever since.
I don't work in that area, so I only touch AWS once in a while for personal fun projects.
And every time it's a nightmare. I'm just banging out a server for my experimental card game, not setting up a new financial institution. Everything looks as if I'm preparing to scale to infinity tomorrow, with a staff of a thousand and a budget backed by VCs.
Fortunately there's Netlify and similar, who put a gloss on it so that I don't have to boil the ocean. I figure that one of these days I might actually be forced to learn IAM and VPNs and God only knows what else. Meantime, every time I touch it my eyes bug out.
You can just spin up a raw VPS on EC2 or Lightsail, give it a public IP, and call it a day. You aren't required to implement every enterprise pattern in the book.
If there is any single service I'd avoid on AWS, it's Lightsail: it'll cost you a lot more than almost anything out there, it's slow as molasses (even tiny services can need tens of minutes to deploy), and you'll experience random failures that not even AWS reps can explain to you. Avoid at all costs.
It's a ghost of its former self, but I'd probably still rather use Heroku today than be forced to use Lightsail ever again.
I sure prefer plain EC2 to Lightsail as well, and prefer Hetzner over either, but looking at these replies ... can someone tell me where the goal posts are right now?
Congrats, your raw EC2-hosted 500MB WebGL experimental card game went to the HN Front Page! You now owe AWS $30k in egress costs.
But that's costly. Speaking from my own experience: going from a webapp fully hosted on an EC2 instance to a Railway and Vercel setup reduced my costs 10x.
A t4g.nano is $3/month; a similarly specced Fargate task on ECS (just any Docker container) is $10/month.
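As a rough sanity check of those two figures, here is the arithmetic under assumed hourly rates (placeholders in the style of published us-east-1 pricing, not quoted AWS prices; the smallest Fargate task shape is taken to be 0.25 vCPU / 0.5 GB):

```python
HOURS_PER_MONTH = 730

t4g_nano_hourly = 0.0042       # assumed on-demand rate for t4g.nano
fargate_vcpu_hourly = 0.04048  # assumed Fargate per-vCPU-hour rate
fargate_gb_hourly = 0.004445   # assumed Fargate per-GB-memory-hour rate

ec2_monthly = t4g_nano_hourly * HOURS_PER_MONTH

# Fargate bills vCPU and memory separately; use the smallest task shape.
fargate_monthly = (0.25 * fargate_vcpu_hourly + 0.5 * fargate_gb_hourly) * HOURS_PER_MONTH

print(f"t4g.nano: ${ec2_monthly:.2f}/mo, smallest Fargate task: ${fargate_monthly:.2f}/mo")
```

Under these assumed rates the sketch lands around $3/month vs $9/month, consistent with the roughly 3x gap the comment describes, before counting the load balancer or public IP a Fargate service usually also needs.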
Maybe so, but it's still not the complexity nightmare that some would have us believe it is.
What amazes me is how Heroku absolutely nailed what most web apps need nearly 20 years ago.
I miss Heroku dearly. Somewhere at Salesforce there is an exec who killed the product, shifted it to enterprise, and is now looking at the vibe-coding revolution, seeing the opportunity they missed.
Render has been an excellent replacement, in my experience.
I suspect the people responsible have fully justified to themselves any decisions they made, helped along with any bonuses they got for doing it.
Why? It is still up, and working just as it used to.
That won't last. https://www.heroku.com/blog/an-update-on-heroku/
DigitalOcean is the answer. You give it a container and off you go.
Used to be. Now they're requiring 2FA for add-on domains over a certain amount.
Of all the things to be upset about, mandatory 2FA doesn't seem like one.
It’s negligent to not use 2FA for any cloud platform where credentials can be used to spin up resources.
Fly and Render are what heroku would be if they didn’t stop innovating. And neon db for Postgres.
> And neon db for Postgres.
For 90% of the time when they're up.
It's only a nightmare if you haven't had to deal with Azure.
AWS is aimed at enterprise, not personal projects. Personal projects wouldn’t give them any meaningful revenue because the only thing that matters is cost.
I switched to Cloudflare and it's been a breath of fresh air - everything I need and the pricing is reasonable.
Something that has always bothered me an outsized amount is Elasticache.
I will bite the bullet and pay for RDS because it adds a lot of value - scalability, a reasonably optimized config, backups I don’t have to worry about.
But Elasticache is exploitatively priced with almost no value add.
It is slower, less optimized, less stable, and only supports one DB compared to a vanilla redis install with zero configuration.
There are some scalability improvements, but it’s extremely rare they’re even required because vanilla redis so wildly outperforms elasticache on a similar instance.
The AI (LLM) merchants will tell you that AI is now writing software ("agentic coding", they call it), yet they can't even bill you properly, or their billing mechanism is broken.
their dashboards are trash & don't work - Google Cloud, AWS Console, Google Ads, Meta Ad manager
I won't even mention the hyped up LLM vendors.
But here we are: people being laid off due to AI, money being funneled into gigawatt datacenters.
I don't think that's the real issue. The problems with billing and dashboards at cloud vendors are not new within the past few years; they have existed far longer than LLM coding has.
The set of core services on AWS remains amazing: EC2, S3, IAM, EKS, Route53, RDS etc.
AWS IAM is extremely well designed when you compare it with the spaghetti monster IAM systems of other clouds.
Every time I try the new cool thing supposed to replace these services on some other provider - I understand how mature and polished the AWS ones are.
With that said, the remaining 90% of AWS services, like WorkMail, Cognito, and API Gateway, are absolute hot garbage which no well-meaning AWS expert will touch with a 10-meter stick.
GCP would be perfect if they didn't have a history of randomly dropping quotas on startups, causing them downtime
Slightly different but related topic - for people who work with people vibe coding, what is the easiest way to allow that for non tech users (and reducing risk)? AWS or something like vercel? Coolify?
Vercel and supabase seems to be the norm around here.
DX is simple, integrations between the two, and the stack is well understood by the LLM.
Lovable uses supabase, and is surprisingly easy to eject from too; I've done the lovable to Vercel + supabase a couple of times, even managing to keep it syncing via the Git integration. You can get proper scalable infra and minimal vendor lock in whilst the vibe coder gets to play with the pretty.
+1 on the IAM over-engineering, though to AWS's credit, I suspect it was evolved rather than designed, and that's what you get when evolution has to maintain some level of backward compatibility (think humans still having to be able to lay eggs). Another thing that happens occasionally to SaaS companies is AWS creating a copy of their product in a somewhat sus way, but that's not a technical problem, it's a business-model problem.
> I am reminded why I left AWS and how I need to finish the job, get off AWS Workmail, move my domains from Route53 and never return.
Well, besides the fact that the author got suspended for no reason, WorkMail is being shut down in March 2027 anyway. I recommend checking out Purelymail for a budget, batteries-included option.
Preach, brother.
> Cloud computing was an absolutely mind blowing revolution - suddenly your startup could run its own computer systems in minutes without need to install and run your own systems in a data center. This was an absolute game changer, and I really drank the AWS Kool Aid down to every last drop then I licked out the cup. I was all in on AWS in a big way.
Am I the only one who remembers that VPSes and dedicated hosting services were a thing before AWS came around? Yes you had to pay for a month at a time and scaling wasn’t as instant, but it wasn’t like the only option before cloud computing was having to drive to the datacentre and install your own server.
> suddenly your startup could run its own computer systems in minutes without need to install and run your own systems in a data center.
The “in minutes” is doing a lot of the work in that sentence above.
I also used dedicated servers in the late ’90s (and they still offer great value today). But before AWS, provisioning new hardware typically took days, not minutes.
AWS changed that, and the rest of the industry eventually followed.
No, you could rent virtualised servers way before AWS. AWS simply had good marketing.
> I also used dedicated servers in the late ’90s (and they still offer great value today). But before AWS, provisioning new hardware typically took days, not minutes.
VPSes and non-custom configs for dedicated servers were pretty instant as far as I know, I think the advantage of AWS was more that you could scale up and down much more easily since you weren’t locked down in a monthly contract, and that you could automate server provisioning through an API.
Not first, but it was the first with a planet-scale marketing budget.
I miss the Media Temple days.
At last my quest to find the stooge has come to a bitter end!
I saw some 192 core instances on Vultr, but I haven't tried them yet. What are you doing with all them cores?
I often fantasized about spinning up hundreds of nodes for various projects that needed number crunching. Then realized "wait I can just rent one big box for an hour" haha. It's really cool that we can do that now.
>> 192 cores What are you doing with all them cores?
The ancient forgotten art of Vertical Scaling.
It's remarkably zen and effective.
> Of course I do not pay for premium support, so I have to wait the 24 hours that they said it would take them to reply. It's 3 days and AWS support has not replied.
The writing has been on the wall for a few years now, and this is particularly evident to those that have worked at AWS: Amazon is in its day-2 era.
Amazon being in its day-2 era means that most of what has been written in the past twenty years about Amazon is not valid anymore.
“Customer obsession” is literally their first leadership principle, and stellar support was their defining characteristic.
There was a time when AWS was truly innovative, but it’s long since transformed into Amazon’s cash cow and is behaving like such.
Innovation has ground to a halt, with mostly just meh "hey, us too" launches. Pricing and design patterns feel increasingly focused on locking you in. AWS folks tell me that internally they talk a lot about making sure things are "sticky" with customers. The best engineering talent no longer wants to work there, and it shows, especially in places like AI, where AWS has just released wave after wave of discombobulated nonsense.
As a core “rent-a-server” concept with a few add on services there’s still a lot of utility, but AWS is gradually becoming a boring baseline utility with a ton of distracting half baked stuff jammed on top. Most companies I talk to are no longer focused on single cloud and increasingly are bringing a lot of workloads back on prem or in colos. Not everything, but for a lot of stuff that just makes more sense and is a heck of a lot cheaper.
The chips business in Annapurna is probably the most interesting thing and that plays to its strength of the boring low level infrastructure stuff. Nearly everything AWS tries to do beyond chips and rent-a-server plays is a hot mess.
AWS isn’t going away, but its future looks a lot less exciting and inspiring than the story that got us to this point.
These complaints are very weak.
Lambda is incredibly simple to use, it just runs a function for you.
Not sure how you could burn so much with DynamoDB. It's serverless and incredibly cheap. You must have been doing something insane, like scanning through a huge dataset over and over.
Being salty that Gary couldn’t sell enough of his paid service and AWS is competing with it isn’t a meaningful complaint. I want something in AWS, not on Gary’s servers.
Why do people even bother with cloud?
I’ve a couple of apps doing a few million a day. I am using Hetzner and before that used DigitalOcean. Mind you, for close to a decade.
People are unnecessarily complicating stuff, and these clouds can get very expensive very quickly.
Recently I came across a company that was spending $20k a month on GCP. I was like, are you kidding me, $20k for the kind of stuff you do??? They seemingly did not understand how CPU, RAM, and disk work, plastering "autoscaling hyper solutions" everywhere and burning money in the cloud.
I moved their stuff out of the GCP managed solutions and ended up with a $200-400 per month bill. The CEO still can't believe how that's even possible.
I suggested they move to dedicated servers, but they didn't want that; they said they must show they are on a hyperscaler cloud.
OK, fine: we stay on the hyperscaler but don't use any of their services other than VMs.
They had racked up a ton of bills using cloud monitoring, Datastore, autoscalers (with no proper tuning), and Kubernetes.
I replaced all of it with Prometheus, Grafana, and Loki, and moved most data from Datastore to Postgres and Mongo with replicas. I added Redis.
I implemented a custom scaler so they can scale off of app metrics, not just an arbitrary CPU threshold.
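A scaler keyed to an application metric can be surprisingly small. Here's a minimal sketch of the idea (the function name, queue-backlog metric, and thresholds are illustrative assumptions, not the actual scaler described above):

```python
import math

def desired_replicas(queue_depth, per_worker_rate, current,
                     min_replicas=2, max_replicas=50):
    """Pick a replica count from an app metric (queue backlog per worker
    drain rate) instead of a raw CPU-percentage threshold."""
    if per_worker_rate <= 0:
        return current
    # Workers needed to drain the backlog in roughly one interval.
    target = math.ceil(queue_depth / per_worker_rate)
    # Clamp to sane bounds.
    target = max(min_replicas, min(max_replicas, target))
    # Dead band: ignore one-replica deltas to avoid flapping.
    if abs(target - current) <= 1:
        return current
    return target
```

The clamp and the one-replica dead band keep the fleet from flapping when the backlog hovers around a scaling boundary, which is where naive CPU-pegged autoscalers tend to thrash.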
I implemented hot data reloads by packing the data updates into a gzip file, uploading it to GCS, and pulling it from the autoscaled units. I also moved the workloads to Spot VMs.
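The hot-reload mechanism boils down to: pack a batch of updates into a gzip blob, publish it, and have each worker pull and merge it without restarting. A minimal sketch, with a local file standing in for the GCS object and hypothetical function names:

```python
import gzip
import json

def pack_updates(updates, path):
    # Publisher side: write a batch of data updates as gzipped JSON.
    # In the setup above this file would then be uploaded to GCS;
    # here a local path stands in for the bucket object.
    with gzip.open(path, "wt", encoding="utf-8") as f:
        json.dump(updates, f)

def apply_updates(store, path):
    # Worker side: each autoscaled unit pulls the blob on a timer and
    # merges it into its in-memory store -- no restart, hence "hot".
    with gzip.open(path, "rt", encoding="utf-8") as f:
        store.update(json.load(f))
    return store
```

Workers polling the bucket on a short interval pick up new data within seconds of an upload, with no redeploy and no extra managed service on the bill.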
The complexity of stuff in cloud is high for nothing.
At my previous startup: because AWS gave us a bunch of credits and helped us design the infra. It meant we ran for free what they designed for free.
At a previous bigger company, getting procurement to sign up with a new provider required writing a business case, justifying the spend, and then getting multiple competing quotes and speaking to their sales teams. Signing up to a new service takes _months_, even for $10/mo, as they'll negotiate for bulk discounts and the best possible terms on something that will literally cost less per year than one of the meetings they hold to discuss the "value". Meanwhile on AWS I can click a button in the marketplace and it gets thrown onto the AWS account, which is pre-approved spending.
At my current team at a "bigcorp" I have noticed a similar pattern. We use AWS, and not because it's efficient in any way.
We use it because we don’t want to deal with slow procurement process. It kills all the momentum.
Have seen this repeatedly also.
Watched one company end up with a $250k AWS bill when their credits expired (which they could not pay).
I think AWS is liked because, when AWS started, being able to get a new VPS up in minutes was still quite unusual. Many hosts would take around 24 hours, I suspect, to get a new VM up; at least that was my experience. But nowadays there are many options for getting a VM instantly.
I agree that it's overcomplicated. The self-service portal for assigning IPs is useful, and being able to detach storage from VMs and such is quite flexible. But most of it still seems overkill.
It's flexible but slow. We ran our C++ CI/CD on AWS at a previous company, using spot instances with volumes attached and detached dynamically. The performance was absolutely abysmal, because in effect you're running compilation across a networked filesystem, no matter what AWS says your throughput is.
Our 64-core spot instances on Windows were taking 8-10x longer than our developer machines with the same core count, and a bunch of engineering went into the scaling, queue management, etc. If we'd just had a single bare-metal machine from Hetzner we could have saved money _and_ reduced our iteration times.
The ease of getting things set up quickly and usually for free when starting up is very tempting. Later, migration is usually considered risky and not worth it because of maintenance overhead - which I would argue has become very easy.
I worked for a startup company - the founders were really nice people and had put their own money in - quite a lot of money - to get the software built for the vision they had.
By the time I joined, 18 months after development had started, a giant, complex, hideously tentacled software beast had been built that used every possible AWS service that the massive offshore team of developers could find to use.
It should have been built on a single Linux box by a single senior developer with Python and Postgres or nodejs or Ruby or whatever.
They went out of business after not too long and I couldn't help wondering if things might have been different if they hadn't spent a fortune building a giant money making machine for AWS, instead of making a web application on a Linux box.
Every AWS project I have worked on has had some significant work put into programming AWS instead of writing business functionality.
> hideously tentacled software beast had been built that used every possible AWS service that the massive offshore team of developers could find to use
To be fair, if they had an AWS Solutions Architect involved, they'll heavily push you down this road, and if they manage to get into management's ear they'll push the idea that serverless AWS features are vastly cheaper. No servers running overnight while you're all tucked up in bed: it has to be cheaper, right?
If you're only responding to a handful of requests, that's true, but once things ramp up you get "nickel and dimed" for everything: API Gateway requests, Lambda execution time, DynamoDB read/write units, CloudWatch logs, outgoing data, Step Function transitions, S3 requests.
I understand all those services cost money and shouldn't be free, but I question whether all those micro-transactions really end up cheaper than paying for your own VMs, especially once your customers complain about the cold starts and you think you can fix it with "lambda warming".
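The nickel-and-dime effect is easy to see with back-of-envelope arithmetic. The unit prices and line items below are illustrative placeholders, not current AWS list prices:

```python
REQUESTS = 10_000_000  # requests per month

# Illustrative per-million-request fees (placeholders, not AWS list prices):
fees_per_million = {
    "api_gateway": 1.00,      # per-request charge
    "lambda_compute": 2.10,   # invocations plus GB-seconds
    "dynamodb_rw": 1.50,      # read/write units
    "cloudwatch_logs": 0.60,  # ingest for per-request log lines
}

serverless_monthly = sum(fees_per_million.values()) * REQUESTS / 1_000_000
vm_monthly = 40.0  # e.g. two small always-on VMs, also a placeholder

print(f"serverless: ${serverless_monthly:.2f}/mo vs VMs: ${vm_monthly:.2f}/mo")
```

Each line item looks negligible on its own, but they all multiply by the same request count, which is exactly the ramp-up effect described above.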
To be fair, that's an AWS problem, not a Lambda problem. If you replace Lambda with EC2, the only things you save on are Lambda and Step Functions (and maybe API Gateway, but then you need to pay for a load balancer or a public IP); the rest you need to pay for anyway.
AWS IAM is hot garbage; GCP might not be the coolest kid on the block, but its IAM rocks.
The AWS CLI??? Holy guacamole, what a mess. Using the AWS CLI now feels like going through digital identity verification just to get the basics done.
While GCP CLI is like "sure, here"!
It's a shame GCP's console and their CLI are both so painfully slow.
You're also putting your business at risk with Google randomly banning accounts and not providing timely appeals. [1]
[1]: https://news.ycombinator.com/item?id=45798827
I mean this article is about AWS doing the exact same thing.
it's funny how being used to something makes it easier to use
I love you baby, I need you! I'd never cheat on you! Come back!
Hey good lookin'
Looks like a blog post written to get attention and resolve the author's personal problem.