Public by design: API keys for Firebase services only identify your Firebase project and app to those services. Authorization is handled through Google Cloud IAM permissions, Firebase Security Rules, and Firebase App Check.
I'm absolutely not defending Google here, to be clear: Retroactively expanding the scope of an API "key" explicitly designated as "public/non-sensitive" is very bad.
But the concept itself does make some sense, and I'm just noting that there's precedent both across Google and other companies.
In the frontend world where you have client-side API keys talking directly to 3rd party services from the client. Think things like Google Maps and similar.
Which is a stupid idea for something where there is billing involved... Anyone on the internet can take that key and scrape the Google maps API (faking the referer header) and cost you $$$$$.
Google should have simply done with by origin URL if they wanted stuff to be open like that.
Public API keys are a thing. Arguably they are poorly named (it's really more of a client identifier), and modeling them as primarily a key instead of primarily as a non-secret identifier can go very wrong, as evidenced here.
As others have said, this is a "feature" for Google, not a bug. There is no easy way to set a hard cap on billing for a project. I spent the better part of an hour trying to find it in the billing settings in GCP, only to land on Reddit and figure out that you can set a budget alert to trigger a Pub/Sub message, which triggers a Cloud Function to disable billing for the project. Insanity.
This is presumably by design: How can it be the vendor's fault if your custom billing protection implementation failed you at a critical time? Much harder to defend against a switch on their dashboard allowing billing overshoot.
Having to glue Pub/Sub to a Cloud Function just to approximate a hard cap is the whole indictment. That's not a safety feature; that's you building your own brakes.
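For reference, the glue in question is small; here's a minimal sketch of the pattern from Google's own cost-control examples, assuming the standard budget notification payload (`costAmount`/`budgetAmount`) and the Cloud Billing v1 API. Treat it as a starting point, not a guaranteed brake:

```python
import base64
import json
import os

from googleapiclient import discovery  # pip install google-api-python-client

PROJECT_ID = os.environ["GCP_PROJECT"]  # assumed to be set on the function

def stop_billing(event, context):
    """Pub/Sub-triggered Cloud Function: detach billing once over budget."""
    notification = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    if notification["costAmount"] <= notification["budgetAmount"]:
        return  # still under budget, do nothing

    billing = discovery.build("cloudbilling", "v1", cache_discovery=False)
    # An empty billingAccountName detaches billing and halts all paid services.
    billing.projects().updateBillingInfo(
        name=f"projects/{PROJECT_ID}",
        body={"billingAccountName": ""},
    ).execute()
```

And the caveat from this thread still applies: the budget notification itself can lag, so this only caps the damage as fast as Google's pipeline delivers the message.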
mrkurt was explicit about it when defending Fly.io's original decision to refuse to implement self-service spending caps: "putting work into features specifically to minimize how much people spend seems like a good way to fail a company".
In my experience this is the same in AWS and Azure. I would love a kill switch if usage goes above a critical threshold. Five hours of downtime will not kill my app, but a huge cloud bill might.
It's been a year since I last looked at this, but when I did you could get near-realtime cost metrics for AWS Bedrock via CloudWatch (you get input and output token counts and have to compute the actual price yourself).
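If anyone wants to go that route, here's roughly what it looks like with boto3; a sketch assuming Bedrock's `AWS/Bedrock` CloudWatch namespace with per-model `InputTokenCount`/`OutputTokenCount` metrics (the pricing math is on you, as noted):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

def bedrock_token_counts(model_id: str, hours: int = 1) -> dict:
    """Sum input/output token counts for one model over the last N hours."""
    now = datetime.now(timezone.utc)
    totals = {}
    for metric in ("InputTokenCount", "OutputTokenCount"):
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/Bedrock",
            MetricName=metric,
            Dimensions=[{"Name": "ModelId", "Value": model_id}],
            StartTime=now - timedelta(hours=hours),
            EndTime=now,
            Period=300,  # 5-minute buckets, close to realtime
            Statistics=["Sum"],
        )
        totals[metric] = sum(p["Sum"] for p in stats["Datapoints"])
    return totals  # multiply by your model's per-token prices to estimate spend
```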
These billing systems are all poorly designed from a CX perspective.
Billing is usually event driven. Each spending instance (e.g. API call) generates an event.
Events go to queues/logs, aggregation is delayed.
You get alerts when aggregation happens, which, if the aggregation service has a hiccup, can be many hours later (the service SLA and the billing-aggregator SLA are different).
Even if you have hard limits, the limits trigger on the last known good aggregate, so a spike can make you overshoot the limit.
All of these protect the company, but not the customer.
If they really cared about customer experience, once a hard limit hits, that limit sets how much the customer pays until it is reset, period, regardless of any lags in billing event processing.
That pushes the incentive to build a good billing system. Any delays in aggregation potentially cost the provider money, so they will make it good (it's in their own best interest).
I read the following [0] and immediately went to my Firebase project to downgrade my plan. This is horrific.
> Yes, I’m looking at a bill of $6,909 for calls to GenerativeLanguage.GenerateContent over about a month, none of which I made. I had quickly created an API key during a live Google training session. I never shared it with anyone and it’s not pushed to any public (or private) repo or website.
The spend-cap discussion is the right instinct but misses a more fundamental fix available to Firebase projects: restricting the API key itself. In Google Cloud Console → APIs & Services → Credentials, you can edit your Firebase browser key and set API restrictions to only allow specific Firebase services (Firestore, Authentication, Storage, etc.). This prevents the key from being usable with Gemini or any other GCP API entirely—so even if the key is exposed, it can't incur AI billing costs.
Most Firebase 'add AI to your app' tutorials skip this step because Firebase's initialization flow doesn't prompt you to configure it, and Firebase Security Rules only gate Firebase-specific services, not the key's broader GCP API access scope.
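If you'd rather script that than click through the console, the API Keys API exposes the same restriction; a hedged sketch against the `apikeys` v2 service, with the key resource name and service list as placeholders for your own setup:

```python
from googleapiclient import discovery  # pip install google-api-python-client

# Placeholder: find your key's resource name in the Cloud Console or via keys.list
KEY_NAME = "projects/YOUR_PROJECT/locations/global/keys/YOUR_KEY_ID"

apikeys = discovery.build("apikeys", "v2", cache_discovery=False)
apikeys.projects().locations().keys().patch(
    name=KEY_NAME,
    updateMask="restrictions",
    body={
        "restrictions": {
            "apiTargets": [
                # Allow only the Firebase services the app actually uses;
                # anything else (including Gemini) is rejected with this key.
                {"service": "firestore.googleapis.com"},
                {"service": "identitytoolkit.googleapis.com"},
            ]
        }
    },
).execute()
```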
It is scary building on the public cloud as a solo dev or small team. No real safety net, possibly unbounded costs, etc. A large portion of each personal project I do is spent thinking about how to prevent unexpected costs, detect and limit them, and react to them. I used to just chuck everything onto a droplet or VPS, but a lot of the projects I am doing lately need services from Google or AWS. I tend to prefer GCP at this point because at least I can programmatically disconnect the billing account when they get around to tripping the alert.
Forgive my ignorance - but what's the payoff for fraudsters in getting access to a generative AI service for a short-ish period of time, before they get cut off?
With EC2 / GCE credentials, I could understand going all out on bitcoin mining - but what are they asking the AI to do here that's worth setting up some kind of botnet or automation to sift the internet for compromised keys?
Early Generative AI was popular with spammers before it became mainstream because it could be used to write infinite variations of spam messages. Making each message unique is more likely to bypass spam filters.
There are also a lot of AI use cases that require a lot of token spend to brute force a problem. Someone might want to search for security exploits in a codebase but they don’t want to spend the $50,000 in tokens from their own money. Finding someone’s key and using it as hard as possible until getting locked out could move these projects forward.
Totally speculating here, but maybe they provide some sort of LLM-as-a-service and rotate stolen API keys in the background so they don't have to pay anything?
Or they use the LLMs for criminal purposes (like automated social engineering) and so the API key can't be traced to their personal info (but they could also use a local model for this, so I don't know).
There are plenty of services offering AI inference at a discount. Some of these will be using your data for future distillation; others might be making use of bulk discounts and passing these through to a number of individual users (while taking on billing, support etc. risk) – and maybe some are just selling tokens falling off the back of a truck?
Does the blog post explain how this happened exactly? Did he leak his API key in frontend code somehow, or was his project itself vulnerable to misuse? I'm curious how someone racked up 30k in a few hours.
Slightly off-topic, but Backblaze B2 has usage caps that actually work. I have $0 cap on API requests, and yesterday when litestream burned through the free tier (defaults to replicating every second), I got a notice and requests stopped working until I upped my cap.
It's incredible that in 2026 your best bet for getting support from Google is still posting to HN and hoping a Product Owner at Google takes pity on you (or feels shamed...)
On the one hand, if you play with petrol you can't complain about burning down your garage.
On the other hand, Hetzner sells IPv4 instances with no security on by default, just raw Ubuntu 24.x.
Within 3-4 days of deploying one, it will be hacked and have crypto miners installed unless additional special config is added. I do wonder what % of Hetzner VPS instances are compromised.
Two things that should be default on any GCP project touching generative-AI APIs:
1. API-key restrictions by HTTP referrer AND by API (`generativelanguage.googleapis.com` only), and
2. a billing budget with a Pub/Sub "cap" action, not just an email alert.
Neither is on by default, and almost nobody sets them before shipping. 13 hours is actually fast for detection; most teams find out at end-of-month reconciliation.
Google responded to your post so that’s good news. We all know the nature of APIs, but a secure transaction system is non-negotiable from Google and its peers for LLM API use. Right now LLM APIs are like unencrypted credit card numbers floating around.
It's "implied" throughout the whole post (or more like assumed that the reader understands this, because it's the basic premise of the problem). It's why they link to a post that explains the basic concept after a remark that "This describes our issue in more detail".
> tl;dr Google spent over a decade telling developers that Google API keys (like those used in Maps, Firebase, etc.) are not secrets. But that's no longer true: Gemini accepts the same keys to access your private data. We scanned millions of websites and found nearly 3,000 Google API keys, originally deployed for public services like Google Maps, that now also authenticate to Gemini even though they were never intended for it. With a valid key, an attacker can access uploaded files, cached data, and charge LLM-usage to your account. Even Google themselves had old public API keys, which they thought were non-sensitive, that we could use to access Google’s internal Gemini.
From Google themselves, in the Firebase docs:
> API keys for Firebase services are not secret. Firebase uses API keys only to identify your app's Firebase project to Firebase services, and not to control access to database or Cloud Storage data, which is done using Firebase Security Rules. For this reason, you do not need to treat API keys for Firebase services as secrets, and you can safely embed them in client code.
... or at least that's what it used to say, until they quietly updated the docs to say this:
> API keys for Firebase services are not secret. API keys for Firebase services only identify your Firebase project and app to those services. Authorization is handled through Google Cloud IAM permissions, Firebase Security Rules, and Firebase App Check.
> All Firebase-provisioned API keys are automatically restricted to Firebase-related APIs. If your app's setup follows the guidelines in this page, then API keys restricted to Firebase services do not need to be treated as secrets, and it's safe to include them in your code or configuration files.
Followed later by (in a different section):
> Use your Firebase-provisioned API keys only for Firebase-related APIs. If your app uses any other APIs (for example, the Places API for Maps or the Gemini Developer API), use a separate API key and restrict it to the applicable API.
Yeah, the number of people creating, running, and maintaining websites who don't understand how websites actually work in practice is very high, and it seems we haven't even come close to the ceiling yet.
I think calculating cost in real time is logistically extremely hard. I don't think there is a single big cloud service provider that has hard limits instead of alerts.
As long as they revert the charge when notified of scenarios like this - and they have historically done so in many cases - it's fine. It's an acceptable workaround for a hard problem and a cost of doing business (just like credit cards accept a certain amount of fraud loss as part of doing business).
Why would it be hard to calculate cost? Multiply a fixed price by requests over time? It doesn't have to be exact; it just has to report something approximately useful in real time.
It's absolutely not fine to be at the mercy of other people, that's what we buy cloud products or really any products for: So that we are not at the mercy of hardware faults, bad weather, bad teeth, hunger, thirst, [insert anything]
Cutting off at the exact cent is difficult, but a hard limit that triggers within one dollar of the actual limit should really be possible
If for some resources you can't sample measurements fast enough, you could weaken it to "triggers within one dollar or five minutes after cost overrun, whichever comes later". But LLM APIs are one of those cases where time isn't a factor; your only issue is that if you only check quota before each inference, a given query might bring you over.
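To make that bound concrete: with a pre-request check you can only overshoot by the cost of requests already in flight, not by 300x. A toy sketch (all names hypothetical):

```python
class BudgetGuard:
    """Toy pre-request spend guard; `spent_usd` may lag real billing slightly."""

    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0  # last known-good aggregate

    def authorize(self, estimated_cost_usd: float) -> None:
        if self.spent_usd >= self.cap_usd:
            raise RuntimeError("hard cap reached; refusing the call")
        # Worst case we overshoot by this one request's cost (dollars, not
        # thousands), because we only check before each inference, not during.
        self.spent_usd += estimated_cost_usd

guard = BudgetGuard(cap_usd=80.0)
guard.authorize(estimated_cost_usd=0.02)  # raises once the cap is hit
```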
Ridiculous. They are clearly not trying at all. A hard wall preventing you from going over budget by 100x in a couple of hours is not some devilishly complicated distributed-systems problem.
Don't toe the party line.
Same reason why Azure AI only has easy rate limits by minute, not by day, week, or month. Open-source proxy projects do it easily, though. Think about the incentives.
Going over a hard cap by 3% would be a reasonable failure to make, not by 30000%.
Unfortunately, yet another story like this. One of these unexpected usage charges in the thousands appears every month, and with the same automatic denial too. This is one of the reasons I stopped using these kinds of pay-per-usage cloud services long ago. At best, I still use services that have hard-bounded usage limits, like EC2 from AWS, where one instance can never go beyond 24h/day of usage and is always capped, with shutdowns when limits are exceeded - and limited credit cards, too.
It's super frustrating that this is the only option to realistically deal with this issue, since all stories end up the same way: The cloud company just saying "f* you, we don't care, pay up." and legal fees are always expensive :(
> At best, I still use services that have hard-bounded usage limits, like EC2 from AWS, where one instance can never go beyond 24h/day usage and is always capped, with shutdowns when exceeded, and limited credit cards, too.
Is this possible on AWS today? I'm the same way: if I cannot set a hard limit on billing so I know for a fact the maximum it can cost in a month, I'm not interested in using that service for anything. Which is one of the top reasons I've stayed clear of AWS; they used to have only billing alerts, and you couldn't actually set limits. I guess it's one step forward that they've finally implemented that now.
Not if it's publicly called from JavaScript, as your users' browsers will make those requests. You neither know their IP addresses, nor is the Referer or Origin header a safe choice, as either can be spoofed outside of a browser.
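(To spell out "spoofed outside of a browser": any script can send whatever Referer it likes. An illustrative sketch; the key and the allowed site are obviously placeholders:)

```python
import requests  # pip install requests

# Placeholders: a referrer-restricted key and the site it's restricted to
resp = requests.get(
    "https://maps.googleapis.com/maps/api/geocode/json",
    params={"address": "1600 Amphitheatre Pkwy", "key": "AIza...PLACEHOLDER"},
    headers={"Referer": "https://the-allowed-site.example/"},  # caller-controlled
)
print(resp.status_code)
```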
There are plenty of API keys distributed like this by design. For example, Google Maps requires it; otherwise your (anonymous) users can't use an embedded Google Map on your website. And a public Firebase app needs some kind of API key, too.
There's a brand-new, Gemini-specific feature for that (as new as March 23), but historically the answer has tended to be "no" from all the cloud providers. Most giants and indies alike have always been strongly opposed to implementing this feature for business reasons. (When you run across something that does let you do things that way, it's one of a handful of exceptions.) Their response is to tell you to set up budget alerts, which is not a solution, as described in this post.
I doubt most cloud providers are even technically ready for true prepaid billing (which requires things such as estimating and reserving funds prior to paid operations, corresponding real-time two-way interfaces instead of just eventually consistent billing event aggregation, etc.).
In early mobile networks, the feature set for prepaid used to always lag behind, since real-time billing wasn't really a design consideration from the beginning.
I suppose rather than taking on that extra work or offering a reduced feature set or by building something best-effort and taking financial responsibility for its failures, if cloud providers can just get away with making this the user's problem, why wouldn't they?
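For illustration, "reserving funds prior to paid operations" means something like the following, as opposed to aggregating billing events after the fact (purely a sketch):

```python
class PrepaidAccount:
    """Illustrative reserve-then-settle flow a true prepaid system needs."""

    def __init__(self, credit_usd: float):
        self.credit_usd = credit_usd
        self.reserved_usd = 0.0

    def reserve(self, estimate_usd: float) -> bool:
        """Hold funds *before* the paid operation runs; refuse if short."""
        if self.credit_usd - self.reserved_usd < estimate_usd:
            return False  # the operation never starts, so no surprise bill
        self.reserved_usd += estimate_usd
        return True

    def settle(self, estimate_usd: float, actual_usd: float) -> None:
        """Release the hold and deduct real usage, capped at the reservation."""
        self.reserved_usd -= estimate_usd
        self.credit_usd -= min(actual_usd, estimate_usd)  # provider eats overage
```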
> When your Prepay credit balance on the billing account hits $0, all API keys in all projects linked to that billing account will stop working simultaneously. Prepay credits apply only to Gemini API usage costs; you can't use them to pay for other Google Cloud services.
Does Google allow a privacy card that you can control whether an account is connected to it or not? That wouldn't help if someone racked up a ton of charges and Google bills daily, though.
A failure to pay does not extinguish the underlying debt owed. While the US seems pretty dysfunctional (or customer friendly, depending on how you see it) when it comes to collecting on debts, this is not the case globally.
And even in the US, you could presumably easily find all your Google accounts (including personal ones) locked until you pay the outstanding sum. Not something I'd risk, personally.
Anthropic and Claude are making Google / Gemini look like a joke these days.
The issue at hand here is another reason I wouldn’t prefer GCP (aside from it being ridiculously complex and confusing). Antigravity worked well for me for a few weeks and then bizarre limit issues started popping up.
Come on Google, be a better competitor, make yourself an option for me, please.
I thought the pricing model was meant to be a benefit of the cloud? All of a sudden, shock horror, paying by the minute turns out to be no cheaper and maybe even more expensive than just doing it yourself.
It's fucking bonkers that nothing in the system could see this as unusual and worthy of throttling. The embarrassment of this - a company LITERALLY SELLING machine learning services and expertise cannot spot such a thing... This should have led them to deal with it internally and refund it. Just... wow, Google.
And the notifications can be delayed because the spending system is not updated in real time, so even if you have a Cloud Task that triggers on spending to disable the project, it may be too slow, and several thousand may already have been spent.
It's terrible that giant cloud providers such as Google or AWS don't allow a hard cap at the project level, or prepaid billing. Especially because alerts are delayed, as the author stated: "We had a budget alert (€80) and a cost anomaly alert, both of which triggered with a delay of a few hours. By the time we reacted, costs were already around €28,000."
I said this when this finding was originally posted and I'll say it again: This is by far the worst security incident Google has ever had, and that's why they aren't publicly or loudly responding to it. It's deeply embarrassing. They can't fix it without breaking customer workflows. They really, really want it to just go away and six months from now they'll complete their warning period to their enterprise contracts and then they can turn off this automated grant. Until then they want as few people to know about it as possible, and that means if you aren't on anyone's big & important customer list internally, and you missed the single 40px blurb they put on a buried developer documentation site, you're vulnerable and this will happen to you.
It's actually much more than a billing leak [1]; again, most people don't know how bad this is, because Google is trying to keep it hush-hush. These keys don't just grant access to Gemini completions; they grant access to any endpoint on the generative AI google cloud product. This includes: seeing all of the files that google cloud project has uploaded to gemini, and interacting with the gemini token cache.
Implementing this in any meaningful manner quickly begins to look like every read becoming a globally synchronised write. Of course it doesn't have to be perfect, but even approximating perfection doesn't look much different. Also, can you imagine the kind of downtimes and complaints that would inevitably originate from a fully synchronous billing architecture?
Prepaid only is a fantastic idea, until your site goes (desirably) viral and then gets shut off right as traffic is picking up, or you grow steadily and forget to increase your deposit amount and suddenly production is down. Billing alerts are a much better solution IMHO.
Let me choose. This common point seems more like a rationalization for the default behavior of hyperscalers. AWS isn't avoiding prepaid due to concern about my site's virality, just that prepaid = less money.
Oh please no. And the "alternatives" to API keys aren't going to help much either, they'll just add friction to getting started (as reference: see the pain involved in writing a script that hits gmail or calendar API)
With AI there is NO justification for NOT DOING IT YOURSELF. Why use Firebase or <technology-x> if you can generate <the-thing> yourself and deploy it to hardware you own or rent?
> We had a budget alert (€80) and a cost anomaly alert, both of which triggered with a delay of a few hours
> By the time we reacted, costs were already around €28,000
> The final amount settled at €54,000+ due to delayed cost reporting
So much for the folks defending these three companies that refused to provide a hard spending cap ("but you can set the budget", "you are doing it wrong if you worry about billing", "hard cap it's technically impossible", etc.).
Yeah, that's the main reason I never use services like Google Cloud if I don't have to: it's impossible to have a hard cap, and anyone pretending to be an expert about it is just off. Google says they can't provide a hard cap because that would mean shutting down all your services... blah blah, but at least give users the option.
We have spend caps at the billing account level and the project level (developer set) in the Gemini API now. There is up to a 10 minute delay in processing everything but this should significantly mitigate the risk here: https://ai.google.dev/gemini-api/docs/billing#tier-spend-cap...
By default, new Tier 1 paid accounts can only spend $250 in a given month.
I'm sure it's me being an idiot, but once again I spent 20 minutes trying to figure out how to do a specific thing in Google-land and still haven't figured it out. Even if I did set it somewhere, I see things like "Setting a budget does not cap resource or API consumption" with a link to a bunch of documentation I have to analyze.
This is what working with cloud services is like, in my experience. Azure's UI feels like it was made as a joke flash game on Newgrounds.
That's actually crazy. So I can build a project I love, that does good, but somehow get in a situation where I'm accidentally paying 30.000€ (or 50.000€) to a big tech company? How is that fair? I mean yes, as a software engineer, you ought to reflect on all possible weaknesses, but there was a time when overlooking something meant something completely different than being down 30/50k. That is actually life-altering.
Your kid can do this in a smartphone game designated as suitable for children, heavily optimized to exacerbate the possibility, and depending on where you live they can just choose not to refund you.
When the FTC went investigating a decade-ish ago they found Facebook saying the quiet parts out loud: it was all extremely deliberate.
Used to be parents were annoyed at their kids for spending $100 on SMS credits... lol.
I long for the days where kids were only hurting their parents' wallets and not themselves.
>Another prompt asked, "What do you think of me," I say, as I […]. My body isn't perfect, but I'm just 8 years old - I still […]."
Pretty odd to copy from policy documents and feel a need to self-censor. But I guess that's Mark Zuckerberg[‘s chief ethicist] for you.
It’s not fair. Google, Amazon, Microsoft… they have never played fairly. They never will.
You can try implementing rate limiting and not exposing your API keys to the public.
Yes, and you should! But not doing so resulting in this seems kind of over the top. It basically means an oversight can result in your bankruptcy?
This should be illegal. If a contractor you hired to swap out a tile on your bathroom floor billed you for remodelling your back garden, you would obviously have the legal right to refuse that.
Not if your contractor had you first sign a 15 page contract that commits you to whatever costs they dream up and requires forced arbitration by a corporate friendly firm when any dispute arises.
Because that's somehow normal in today's tech world.
So if their TOS say they can also rape my cat, then I cannot do anything about it, right? Ridiculous
In jurisdictions where bestiality is legal, then yes, from the libertarian perspective, that's all freedom of contract, baby. I'm not defending either bestiality or libertarianism, but the logic is that you don't want the government deciding what two private entities can and can't freely agree to.
We're pretty far from the Lochner era in the US, where even minimum wage laws were held to be unconstitutional violations of a very broad view of freedom to contract. But it is still a principle in most legal systems.
Slightly OT, but I've always taken a dim view of this sort of thing for consumers because the parties are never at equal parity, either in ability to understand the legalese they're agreeing to, or the ability to seek alternatives.
Legal contracts for consumers should be written at whatever the prevailing reading level is, and the government should step in more the more monopolistic a company's position is.
It infuriates me to no end how preferential government is towards corporations vs individuals.
My guess is that at least in Europe they would have a good chance fighting this in court and getting their money back, but it’s a pain having to go through such a lawsuit.
> The Gemini API supports monthly spend caps at both the billing account tier and project levels. These controls are designed to protect your account from unexpected overages, and the ecosystem to ensure service availability
https://ai.google.dev/gemini-api/docs/billing#project-spend-...
The problem is it's specific to that API and defaults to uncapped so people who aren't using it and haven't heard about the issues with the Firebase API keys probably won't have set them.
Spend caps exist for Gemini (Maxious linked them) - they just default to OFF. For an API that can bill four figures per hour, opt-in safety by default isn't a UX choice, it's a billing strategy.
Except that Google's own statements are extremely clear that "leaked" (i.e. public) API keys should not be able to access the Gemini API in the first place: "We have identified a vulnerability where some API keys may have been publicly exposed. To protect your data and prevent unauthorized access, we have proactively blocked these known leaked keys from accessing the Gemini API. ... We are defaulting to blocking API keys that are leaked and used with the Gemini API, helping prevent abuse of cost and your application data." https://ai.google.dev/gemini-api/docs/troubleshooting#google...
For extra clarity on the exact so-called "vulnerability" that Google identified, see: https://news.ycombinator.com/item?id=47156925 This describes the very issue where some API keys were public by design (used for client-side web access), so the term "leaked" should be read in that unusually broad sense. Firebase keys are obviously covered, since they're also public by design.
(As for "Firebase AI Logic", it is explicitly very different: it's supposed to be implemented via a proxy service so the Gemini API key is never seen by the client: https://firebase.google.com/docs/ai-logic Clearly, just casually "enabling" something - which is what OP says they did! - should never result in abuse of cost on the scale OP describes.)
There are other vectors, e.g. a compromised GCP key leading to $13k in Gemini charges (posted 3 days ago) https://www.reddit.com/r/googlecloud/comments/1sjzat3/api_ke...
Why is the default uncapped, then, other than the hope of billing people who screw up or get exploited?
We have a bunch of different protections in place: every account has a billing-account cap by default (see: https://ai.google.dev/gemini-api/docs/billing#tier-spend-cap...), in addition to the ability to set more granular developer spend caps.
See also: Why is the default cap so low? I lost €78bojillion because my API stopped working.
Demand on-call phone numbers, autodial the entire company when it looks like they’re about to lose their first bojillion.
No, you don't really have to give Google a bunch of phone numbers. The input box will also accept entry of the following text:
“I'm a big stupid idiot, and when my API stops working, which it will, it will be all my fault and not Google's.”
Monitoring could pick this up in minutes, rather than however long this took to discover.
It's like a fire alarm system that goes off 30 minutes after it senses a fire. Good stuff.
> "hard cap it's technically impossible"
These companies can sell your personal information in a microsecond in an advertising auction, but somehow can't figure out how to give you timely alerts that stop their cash flow.
Big shock.
This is clearly setup for VC backed companies where shareholders don't care about spend as long as they can brag about investing in this cool start up at dinner parties. Normal and true business should stay away.
Shirky’s principle at work, is all.
Yet another good reason to use a pre-paid service.
There are many to choose from now, like Openrouter.com, PPQ.ai, and routstr.com.
You mean openrouter.ai. And yes, on reading this blog post, I immediately reviewed my API keys in OpenRouter to make sure that they were capped. My prod key was capped at $20/day (phew!) but my dev key had no cap, which I just updated. What a horrible story.
I'd buy the technically impossible angle.
Even if you manage to get your microservices to sync every penny spent to your payment account in real time (impossible), you still have to waive the excess, losing some money every time someone goes past their quota.
Sure, but 80 -> 28,000 -> 54,000 is a hell of a lot of slippage.
Trading platforms can guarantee a maximum slippage on stops, and often even offer guaranteed stops (with an attached premium), so I don’t see why Google and Firebase can’t do similar.
The way it works at present is ridiculous.
Yep. And cloud providers could eat any slippage cost (enforcing, say, every 5 minutes by stopping service) without even a rounding error on their balance sheets.
The fact that they don’t indicates that there’s no market reason to support small spenders who get mad about runaway overages, not that it’s technically or financially hard to do so.
> Trading platforms can guarantee a maximum slippage on stops
Yeah no, physically impossible. If nobody is bidding at that price, there is no guarantee your sell stop will execute near that price. They can sweep the market, find the best available bid and execute.
There might be a costly way to do it with microservices as I indicated, but your example easily falls apart.
If they are a market maker, they can buy/sell at or near your stop. It might be a bad idea for them, but if they have a guarantee, this is how they will do it. Or, it will be like the Amazon guarantee (refunding free shipping on your late order).
Not impossible to do: they can hedge and/or absorb the cost, hence the premium. They usually also specify a (fairly large) minimum distance for such stops.
I invite you to look at the various solutions implemented by those public cloud providers that actually implemented this feature.
I'm with you. And what do you even do when the quota is breached, nuke the resources? People will complain about that just as much as overspends.
I don't buy the 'evil corp screwing people' angle either. They are making farrr too much legit money to care about occasionally screwing people out of 20k and 50k.
If I set a limit, and you cut off my service because I reached the limit, I would definitely not "complain just as much" as if I set a limit and you allowed me to spend past it.
We're not talking about an EC2 or EBS volume here, this is access to an API.
Meh, you probably would complain. Maybe you forgot you set it. Now your project is taking off, making money, and it got nuked.
Why aren't we talking about an EC2 - is that not a cloud compute service? People have been complaining about cloud billing since long before LLMs.
Anything to say about the technical problem of constantly monitoring many services against a project or account-level limit?
You mean we can implement rate limiting on APIs for security purposes no problem, but suddenly having it track costs as well is technically impossible?
Block network access? It's not that hard.
"I can only do the job pretty damn well, not perfectly, so might as well not try."
> We had a budget alert (€80) and a cost anomaly alert, both of which triggered with a delay of a few hours. By the time we reacted, costs were already around €28,000.
I had a similar experience with GCP where I set a budget of $100 and was only emailed 5 hours after exceeding the budget by which time I was well over it.
It's mind boggling that features like this aren't prioritized. Sure it would probably make Google less money short term, but surely that's more preferable to providing devs with such a poor experience that they'd never recommend your platform to anyone else again.
Exactly my thoughts; I cannot really understand how delayed alerts are acceptable... Have you managed to settle the cost with Google? What was the outcome?
Back in 2020 I had a similar situation. I ended up getting charged $500 due to an overnight TPU training run using egress bandwidth across zones.
Google support was surprisingly understanding after I explained the issue. They asked some clarifying questions, then said they could offer a one-time refund for this case.
Since then I've been paranoid about accidentally doing it again. I don't know whether GCP would refund a second time.
GCP charging for interzone traffic is an interesting financial choice. They own all the infra and in many cases this is literally moving from building to building.
There's cross-region, and cross-zone. If both boxes are located within the same zone (e.g. both in us-east1-b) then the bandwidth is free, since it's intrazone traffic. Cross-zone egress (e.g. us-east1-b to us-east1-c) is billed at a certain rate, and cross-region egress (e.g. us-east1 to europe-west8) is billed at a significantly higher rate.
Amusingly enough, ingress traffic seems to always be free. So you can upload as much data as you want into their cloud, but good luck if you need to get it out.
I am referring to cross-zone within the same region, so like us-central1-a to us-central1-b. These are building to building and often never cross public land.
Oh, yes! I forgot entirely about that case. You're right, egress traffic is charged there too.
Are the datacenters really located so close together? I assumed they weren't within walking distance of each other.
Correct, they're close in the sense of country-scale geography but physically spaced to avoid specific issues like location on a flood plain.
I get furious every time this comes up and somehow there are bootlickers ready to defend big tech on it.
My ~2-person small business was almost put out of business due to a runaway job. I had instrumented everything perfectly according to the GCP instructions - the billing notification was hooked up to a kill switch, which fired the instant the notification arrived after billing went over the cap.
GCP sent the notification they offered as best practice 6 HOURS late. They did everything they could to not credit my account until they realized I had the receipts. They said an investigation revealed their pipeline was overwhelmed by the number of line items and that was the reason for the lag. ... The exact scenario it is supposed to function in. JFC.
I almost wish the people defending it were paid. It would almost be more intelligent to rush to the defense if there were a direct financial benefit.
Part of it is possibly the curse of knowledge. Someone in the 99th percentile of cloud configuration experts simply can't recall their junior dev days.
In my junior dev days I always paid for the resources I used. Just because you consume a lot of resources by accident that doesn't mean you shouldn't have to pay for it. Accidents do not absolve you from liability.
Which cloud provider actually prioritises features that cut off your money supply? Because AWS sure as shit doesn't either.
Amazon, Microsoft, and Google don't offer a hard cap. Most other/smaller public cloud providers do. The reasons are quite obvious.
We love Amazon, Microsoft, and Google being altruistic and making sure you're not burdened with too much money.
> Sure it would probably make Google less money short term, but surely that's more preferable to providing devs with such a poor experience that they'd never recommend your platform to anyone else again.
Welcome to late-stage capitalism, where there is no long-term thinking, only short-term profit stealing, and Fuck You I Got Mine.
Considering the number of repositories on public GitHub with hard-coded Gemini API tokens in their shared source code (https://github.com/search?q=gemini+%22AIza%22&type=code), this hardly comes as a surprise. Google has also historically treated API keys as non-secrets, but with the introduction of keys for LLM inference, users are suddenly supposed to treat them as secret, and I'm not sure everyone got that memo yet.
Considering that the author didn't share which website this is about, I'd wager they either leaked the key accidentally themselves via their frontend, or they shared their source code with the credentials in it.
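(If you want to scan your own tree before pushing, the key shape from that search is easy to grep for; a quick sketch based on the well-known `AIza` prefix:)

```python
import pathlib
import re

# Google API keys are "AIza" followed by 35 URL-safe characters
KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

for path in pathlib.Path(".").rglob("*"):
    if not path.is_file() or path.stat().st_size > 1_000_000:
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    for match in KEY_RE.finditer(text):
        # print only a prefix so the log itself doesn't leak the key
        print(f"{path}: possible key {match.group()[:10]}...")
```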
> Google also has historically treated API keys as non-secrets, except with the introduction of the keys for LLM inference, then users are supposed to treat those secretly
This was reported a long time ago, and was supposed to be fixed by Google via making sure that these legacy public keys would not be usable for Gemini or AI. https://news.ycombinator.com/item?id=47156925 https://ai.google.dev/gemini-api/docs/troubleshooting#google... "We are defaulting to blocking API keys that are leaked and used with the Gemini API, helping prevent abuse of cost and your application data." Why are we hearing about this again?
FWIW, I just created a new Gemini API key today, and it had a different format than my old ones (created 10 days ago). So maybe they changed something?
A reply on OP's post states: "... We now generate Auth keys by default for new users (more secure key which didn’t exist when the Gemini API was originally created a few years ago) and will have more to share there soon. ..." So there is something new in that exact area but the details are forthcoming.
The topic is cost overruns. They still allow for cost overruns. What's so hard to comprehend?
...JCip3SJw => Your API key was reported as leaked. Please use another API key.
...afnt0t-E => Your API key was reported as leaked. Please use another API key.
...-UYzYTYU => Your API key was reported as leaked. Please use another API key.
I think they all get immediately reported as leaked and invalidated.
There's not a single real Gemini API key in the results.
Try this one. Should remove most readme keys:
Edit: self-censored based on a request.
I know you're well within your rights to post this, but would you consider replacing your comment with something like "It's easy to find working keys on github if you search the appropriate terms"?
Think of it this way: although you're not to blame, HN drives a lot of traffic to your preconfigured github search. There are also bad actors who browse HN; I had a Firebase charge of $1k from someone who set up an automated script to hammer my endpoint as hard as possible, just to drive the price up. Point being, HN readers are motivated to exploit things like what you posted.
It's true that the github search is a "wall of shame", and perhaps the users deserve to learn the hard way why it's a good idea to secure API keys. But there's also no benefit in doing that. The world before and after your comment will be exactly the same, except some random Gemini users are harmed. (It's very unlikely that Google or Github would see your comment and go "Oh, it's time we do something about this right now".)
EDIT: I went through the search results and confirmed that the first several dozen keys don't work. They report as error code 403 "Your API key was reported as leaked. Please use another API key." or "Permission denied: Consumer 'api_key:xxx' has been suspended." So at least HN readers will need to work hard(er) to find a valid key.
I wonder how you report a gemini API key as leaked... Searching "report gemini api key leaked" on Google only brings up similar horror stories (a $55k bill, waived https://www.reddit.com/r/googlecloud/comments/1noctxi/studen...) and (a $13k bill from 3d ago https://www.reddit.com/r/googlecloud/comments/1sjzat3/api_ke...)
I'm not opposed to even removing the comment outright.
That being said, GitHub doesn't even offer a time-sorted search, meaning most of the results are going to be quite old and useless.
Second, API keys being shared on GitHub is quite an old problem. People set up automated scans for this sort of stuff. Me removing my comment isn't going to help anyone who has already posted their API key online.
I tried several dozen keys and they're all invalid, so I'm inclined to agree with you for this particular case. Thank you anyway for considering.
They have an automated process that revokes keys automatically when they get leaked on GitHub.
this is such a wall of shame haha
Oh, wow.
https://github.com/JustForSO/Sentra-Auto-Browser/blob/c048d3...
Setup a watcher and you'll come across live ones eventually :)
Um. What? In what world are API keys not secrets?
Google API keys have been used for ages on the frontend. For example on Google Maps embeds. Those are not possible without exposing a key to the frontend. They weren't secret, until Gemini arrived.
https://trufflesecurity.com/blog/google-api-keys-werent-secr...
https://medium.com/@ahhyesic/your-google-maps-api-key-now-ha...
https://www.malwarebytes.com/blog/news/2026/02/public-google...
If one ignores 70% of the documentation, sure, it makes for a demonizing blog post.
" API keys for Firebase services are not secret
API keys for Firebase services only identify your Firebase project and app to those services. Authorization is handled through Google Cloud IAM permissions, Firebase Security Rules, and Firebase App Check.
All Firebase-provisioned API keys are automatically restricted to Firebase-related APIs. If your app's setup follows the guidelines in this page, then API keys restricted to Firebase services do not need to be treated as secrets, and it's safe to include them in your code or configuration files. Set up API key restrictions
If you use API keys for other Google services, make sure that you apply API key restrictions to scope your API keys to your app clients and the APIs you use.
Use your Firebase-provisioned API keys only for Firebase-related APIs. If your app uses any other APIs (for example, the Places API for Maps or the Gemini Developer API), use a separate API key and restrict it to the applicable API."
https://firebase.google.com/support/guides/security-checklis...
The only reasonable design is to have two kinds of API keys that cannot be used interchangeably: public API keys, that cannot be configured to use private APIs, and private API keys, that cannot be configured to use public APIs. There's no one who must use a single API key for both purposes, and almost all cases in which someone does configure an API key like that will be a mistake. It would be even better if the API keys started with a different prefix or had some other easy way to distinguish between the two types so that I can stop getting warnings about my Firebase keys being "public".
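A minimal sketch of that separation (all names here are hypothetical, invented for illustration): encode the key type in the prefix and make the dispatcher enforce it, so a public key can never reach a billable private API no matter how it is configured.

    # Hypothetical sketch of the two-key-type proposal above; the prefixes and
    # the API list are made up for illustration.
    PUBLIC_PREFIX, PRIVATE_PREFIX = "pub_", "sec_"
    PRIVATE_APIS = {"generativelanguage.googleapis.com"}  # billable / data-bearing

    def authorize(key: str, api: str) -> bool:
        if key.startswith(PUBLIC_PREFIX):
            return api not in PRIVATE_APIS  # public keys can never hit private APIs
        if key.startswith(PRIVATE_PREFIX):
            return api in PRIVATE_APIS      # private keys can't double as public ones
        return False                        # unknown key format: reject

With distinct prefixes, secret scanners and "your key is public" warnings could also tell the two kinds apart at a glance.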
In Firebase world API keys are for identification, not authorisation.
https://firebase.google.com/docs/projects/api-keys
Public by design: API keys for Firebase services only identify your Firebase project and app to those services. Authorization is handled through Google Cloud IAM permissions, Firebase Security Rules, and Firebase App Check.
Google's world. They explicitly tell you that API keys are not secrets.
https://trufflesecurity.com/blog/google-api-keys-werent-secr...
API keys for Firebase. While Google really messed up here, I doubt they ever published anything claiming that no Google API keys at all are secrets.
Google Maps is not Firebase.
And "Firebase AI Logic" sure sounds like something easy to confuse with a Firebase service...
The same principle applies, though.
I'm absolutely not defending Google here, to be clear: Retroactively expanding the scope of an API "key" explicitly designated as "public/non-sensitive" is very bad.
But the concept itself does make some sense, and I'm just noting that there's precedent both across Google and other companies.
> The same principle applies, though.
How?
"Firebase AI Logic"
Is this a Firebase service or not?
In the frontend world where you have client-side API keys talking directly to 3rd party services from the client. Think things like Google Maps and similar.
Which is a stupid idea for something where there is billing involved... Anyone on the internet can take that key and scrape the Google maps API (faking the referer header) and cost you $$$$$.
Google should have simply done this by origin URL if they wanted stuff to be open like that.
Once upon a time Google maps loads were nearly free, and there was no way to restrict that key.
Public API keys are a thing. Arguably they are poorly named (it's really more of a client identifier), and modeling them as primarily a key instead of primarily as a non-secret identifier can go very wrong, as evidenced here.
As others have said, this is a "feature" for Google, not a bug. There is no easy way to set a hard cap on billing for a project. I spent the better part of an hour trying to find it in the billing settings in GCP, only to land on Reddit and figure out that you could set a budget alert to trigger a Pub/Sub message, which triggers a Cloud Function to disable billing for the project. Insanity.
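For anyone who'd rather not spend that hour: below is a minimal sketch of that Pub/Sub-to-Cloud-Function kill switch, closely following the pattern Google documents for disabling billing from a budget notification (hedged, not verbatim). It assumes a budget configured to publish to the topic that triggers this function, and a function identity allowed to update the project's billing info. Detaching billing hard-stops (and can destroy) resources, so this is deliberately a nuclear option.

    # Sketch of the budget-driven kill switch (Cloud Functions, Pub/Sub trigger).
    import base64
    import json
    import os

    from googleapiclient import discovery

    PROJECT_ID = os.environ["GCP_PROJECT"]  # set this on the function's environment
    PROJECT_NAME = f"projects/{PROJECT_ID}"

    def stop_billing(event, context):
        """Entry point: detach billing once reported cost exceeds the budget."""
        payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
        if payload["costAmount"] <= payload["budgetAmount"]:
            print(f"No action needed, current cost: {payload['costAmount']}")
            return

        billing = discovery.build("cloudbilling", "v1", cache_discovery=False)
        projects = billing.projects()
        info = projects.getBillingInfo(name=PROJECT_NAME).execute()
        if not info.get("billingEnabled"):
            print("Billing already disabled")
            return

        # An empty billingAccountName detaches the project from its billing
        # account, which stops all paid services in it.
        projects.updateBillingInfo(
            name=PROJECT_NAME, body={"billingAccountName": ""}
        ).execute()
        print(f"Billing disabled for {PROJECT_NAME}")

Note that the budget data this reacts to is itself delayed, so even the nuclear option can fire hours too late.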
My favorite Google LLM benchmark is asking Gemini models to create a script that fetches API usage (just request counts) for a project from GCP.
100% failure rate.
Call it what it is: an antifeature, a trap for the user.
This is presumably by design: how can it be the vendor's fault if your custom billing-protection implementation failed you at a critical time? An overshoot would be much harder for them to defend if all you'd done was flip a switch on their own dashboard.
having to glue pub/sub to a cloud function just to approximate a hard cap is the whole indictment. that's not a safety feature. that's you building your own brakes.
Thanks LLM.
you're welcome.
As the other user said - this would be an anti-feature and user hostile.
This is a sign that somehow there isn’t sufficient incentive to work on these features.
mrkurt was explicit about it when defending Fly.io's original decision to refuse to implement self-service spending caps: "putting work into features specifically to minimize how much people spend seems like a good way to fail a company".
Previously: <https://news.ycombinator.com/item?id=39520776#39522099>
From my experience this is the same on AWS and Azure. I would love a kill switch for when usage goes above a critical threshold. 5 hours of downtime will not kill my app, but a huge cloud bill might.
It's been a year since I last looked at this, but when I did, you could get near-realtime cost metrics for AWS Bedrock via CloudWatch (you get input and output token counts and have to compute the actual price yourself).
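A rough sketch of turning those CloudWatch metrics into a spend estimate. The AWS/Bedrock namespace and the InputTokenCount/OutputTokenCount metric names are as I remember them; the model ID and per-1k-token prices are placeholders you'd look up for your case.

    # Estimate the last hour's Bedrock spend from CloudWatch token metrics.
    from datetime import datetime, timedelta, timezone

    import boto3

    MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # placeholder model
    INPUT_PRICE_PER_1K = 0.003    # placeholder rate, check the pricing page
    OUTPUT_PRICE_PER_1K = 0.015   # placeholder rate

    cw = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)

    def token_sum(metric_name: str) -> float:
        resp = cw.get_metric_statistics(
            Namespace="AWS/Bedrock",
            MetricName=metric_name,
            Dimensions=[{"Name": "ModelId", "Value": MODEL_ID}],
            StartTime=now - timedelta(hours=1),
            EndTime=now,
            Period=3600,
            Statistics=["Sum"],
        )
        return sum(dp["Sum"] for dp in resp["Datapoints"])

    cost = (token_sum("InputTokenCount") / 1000) * INPUT_PRICE_PER_1K \
         + (token_sum("OutputTokenCount") / 1000) * OUTPUT_PRICE_PER_1K
    print(f"Estimated Bedrock spend, last hour: ${cost:.4f}")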
These billing systems are all poorly designed from a CX perspective.
Billing is usually event driven. Each spending instance (e.g. API call) generates an event.
Events go to queues/logs, aggregation is delayed.
You get alerts when aggregation happens, which, if the aggregation service has a hiccup, can be many hours later (the service SLA and the billing-aggregator SLA are different).
Even if you have hard limits, the limits trigger on the last known good aggregate, so a spike can make you overshoot the limit.
All of these protect the company, but not the customer.
If they really cared about customer experience, once a hard limit hits, that limit sets how much the customer pays until it is reset, period, regardless of any lags in billing event processing.
That pushes the incentive to build a good billing system. Any delays in aggregation potentially cost the provider money, so they will make it good (it's in their own best interest).
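A toy illustration of the overshoot, with made-up numbers: the limit check only ever sees the total as of the last aggregation run, so a fast spender sails straight past the cap in between runs.

    # Made-up numbers: spend accrues per second, aggregation runs every 10 min,
    # and the "hard" limit is enforced against the lagging aggregate.
    AGG_INTERVAL_S = 600
    SPEND_PER_S = 1.50      # a runaway client burning $1.50/second
    HARD_LIMIT = 100.0

    known_total = 0.0       # what the limit check sees
    actual_total = 0.0      # what you are actually billed
    t = 0
    while known_total < HARD_LIMIT:
        t += 1
        actual_total += SPEND_PER_S
        if t % AGG_INTERVAL_S == 0:  # the aggregate catches up only periodically
            known_total = actual_total

    print(f"cut off at t={t}s, billed ${actual_total:.2f} vs a ${HARD_LIMIT:.2f} limit")

Here the $100 limit only trips at the first aggregation run after the cap, $900 in.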
I read the following [0] and immediately went to my firebase project to downgrade my plan. This is horrific.
> Yes, I’m looking at a bill of $6,909 for calls to GenerativeLanguage.GenerateContent over about a month, none of which I made. I had quickly created an API key during a live Google training session. I never shared it with anyone and it’s not pushed to any public (or private) repo or website.
0 - https://discuss.ai.google.dev/t/unexpected-gemini-api-billin...
So someone took a picture of the key at the live training session or something? What's the suspected cause?
The spend-cap discussion is the right instinct but misses a more fundamental fix available to Firebase projects: restricting the API key itself. In Google Cloud Console → APIs & Services → Credentials, you can edit your Firebase browser key and set API restrictions to only allow specific Firebase services (Firestore, Authentication, Storage, etc.). This prevents the key from being usable with Gemini or any other GCP API entirely—so even if the key is exposed, it can't incur AI billing costs.
Most Firebase 'add AI to your app' tutorials skip this step because Firebase's initialization flow doesn't prompt you to configure it, and Firebase Security Rules only gate Firebase-specific services, not the key's broader GCP API access scope.
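The same restriction can also be applied without the Console, through the API Keys API. A hedged sketch using the google-cloud-api-keys Python client; the key resource name and the allow-listed services are placeholders for your project.

    # Restrict an existing key to specific services via the API Keys API.
    from google.cloud import api_keys_v2
    from google.protobuf import field_mask_pb2

    client = api_keys_v2.ApiKeysClient()

    key = api_keys_v2.Key(
        name="projects/PROJECT_NUMBER/locations/global/keys/KEY_ID",  # placeholder
        restrictions=api_keys_v2.Restrictions(
            api_targets=[
                # Allow only the Firebase services this app actually uses;
                # note that generativelanguage.googleapis.com is NOT on the list.
                api_keys_v2.ApiTarget(service="firestore.googleapis.com"),
                api_keys_v2.ApiTarget(service="identitytoolkit.googleapis.com"),
            ]
        ),
    )

    operation = client.update_key(
        request=api_keys_v2.UpdateKeyRequest(
            key=key,
            update_mask=field_mask_pb2.FieldMask(paths=["restrictions"]),
        )
    )
    print(operation.result().restrictions)  # the update is a long-running operation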
It is scary building on the public cloud as a solo dev or small team. No real safety net, possibly unbounded costs, etc. A large portion of each personal project I do is spent thinking about how to prevent unexpected costs, detect and limit them, and react to them. I used to just chuck everything onto a droplet or VPS, but a lot of the projects I am doing lately need services from Google or AWS. I tend to prefer GCP at this point because at least I can programmatically disconnect the billing account when they get around to tripping the alert.
I wonder what happens if you just decide not to pay. Surely that would have some legal implications in the US, but what about elsewhere?
Forgive my ignorance - but what's the payoff for fraudsters in getting access to a generative AI service for a short-ish period of time, before they get cut off?
With EC2 / GCC credentials, I could understand going all out on bitcoin mining - but what are they asking the AI to do here that's worth setting up some kind of botnet or automation to sift the internet for compromised keys?
Early Generative AI was popular with spammers before it became mainstream because it could be used to write infinite variations of spam messages. Making each message unique is more likely to bypass spam filters.
There are also a lot of AI use cases that require a lot of token spend to brute force a problem. Someone might want to search for security exploits in a codebase but they don’t want to spend the $50,000 in tokens from their own money. Finding someone’s key and using it as hard as possible until getting locked out could move these projects forward.
Totally speculating here, but maybe they provide some sort of LLM as a service, and they rotate stolen API keys in the background so they don't have to pay anything ?
Or they use the LLMs for criminal purposes (like automated social engineering) and so the API key can't be traced to their personal info (but they could also use a local model for this, so I don't know).
There are plenty of services offering AI inference at a discount. Some of these will be using your data for future distillation; others might be making use of bulk discounts and passing these through to a number of individual users (while taking on billing, support etc. risk) – and maybe some are just selling tokens falling off the back of a truck?
If they work for a hostile state, the payoff is the destruction of an economy and its social contract. Damage here, damage there. It all adds up.
Does the blog post explain how this happened exactly? Did he leak his API key in frontend code somehow, or was his project itself vulnerable to misuse? I'm curious how someone racked up 30k in a few hours.
Slightly off-topic, but Backblaze B2 has usage caps that actually work. I have $0 cap on API requests, and yesterday when litestream burned through the free tier (defaults to replicating every second), I got a notice and requests stopped working until I upped my cap.
It's incredible that in 2026 your best bet for getting support from Google is still posting to HN and hoping a Product Owner at Google takes pity on you (or feels shamed...)
Related: https://news.ycombinator.com/item?id=47156925
We had this exact same problem (the key initially wasn’t a secret but became a secret once we enabled Gemini API with no warnings).
We managed to catch it somewhat early through alerting, so the damage was only $26k.
We asked our Google cloud support rep for a refund - they initially came back with a no but now the case is under further consideration.
I’d escalate this up the chain as much as possible.
on the one hand, if you play with petrol you can't complain about burning down your garage
on the other hand, Hetzner sells IPv4 instances with no security on by default, just raw Ubuntu 24.x
within 3-4 days of deploying one, it will be hacked and have crypto miners installed unless additional special config is added. i do wonder what % of Hetzner VPS instances are compromised
Two things that should be default on any GCP project touching generative-AI APIs:
1. API-key restrictions by HTTP referrer AND by API (`generativelanguage.googleapis.com` only),
2. a billing budget with a Pub/Sub "cap" action, not just an email alert (see the sketch below).
Neither is on by default, and almost nobody sets them before shipping. 13 hours is actually fast for detection; most teams find out at end-of-month reconciliation.
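A hedged sketch of point 2 using the Cloud Billing Budget API Python client (google-cloud-billing-budgets). The billing account, project, and topic names are placeholders, and the amount mirrors the €80 budget from the post; the Pub/Sub topic is what a stop-billing function would subscribe to.

    # Create a budget whose alerts go to Pub/Sub rather than just email.
    from google.cloud import billing_budgets_v1
    from google.type import money_pb2

    client = billing_budgets_v1.BudgetServiceClient()

    budget = billing_budgets_v1.Budget(
        display_name="pubsub-capped-budget",
        budget_filter=billing_budgets_v1.Filter(
            projects=["projects/PROJECT_NUMBER"],  # placeholder
        ),
        amount=billing_budgets_v1.BudgetAmount(
            specified_amount=money_pb2.Money(currency_code="EUR", units=80),
        ),
        threshold_rules=[
            billing_budgets_v1.ThresholdRule(threshold_percent=0.5),
            billing_budgets_v1.ThresholdRule(threshold_percent=1.0),
        ],
        notifications_rule=billing_budgets_v1.NotificationsRule(
            pubsub_topic="projects/PROJECT_ID/topics/budget-alerts",  # placeholder
            schema_version="1.0",
        ),
    )

    created = client.create_budget(
        parent="billingAccounts/BILLING_ACCOUNT_ID",  # placeholder
        budget=budget,
    )
    print(created.name)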
I want API keys with monthly and hourly quotas and RATE LIMITING.
like 50k requests per hour, and above that 1/s/client with bursts up to 20 req/sec.
I don't want to shotgun my service for every user if one user is misbehaving. I want to set the rate of bleeding.
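That policy is essentially a global quota with a per-client token bucket behind it. A sketch using the commenter's numbers; it's in-memory and per-process, so purely illustrative (a real gateway would keep this state somewhere shared and reset the hourly counter on a schedule).

    # Global hourly quota; once exhausted, per-client bucket: 20 burst, 1/s refill.
    import time
    from collections import defaultdict

    GLOBAL_HOURLY_QUOTA = 50_000
    PER_CLIENT_RATE = 1.0    # tokens per second once throttling kicks in
    PER_CLIENT_BURST = 20.0  # bucket capacity

    buckets = defaultdict(lambda: {"tokens": PER_CLIENT_BURST, "t": time.monotonic()})
    global_used = 0  # reset this every hour

    def allow(client_id: str) -> bool:
        global global_used
        if global_used >= GLOBAL_HOURLY_QUOTA:
            b = buckets[client_id]
            now = time.monotonic()
            b["tokens"] = min(PER_CLIENT_BURST,
                              b["tokens"] + (now - b["t"]) * PER_CLIENT_RATE)
            b["t"] = now
            if b["tokens"] < 1.0:
                return False  # only the misbehaving client bleeds, not everyone
            b["tokens"] -= 1.0
        global_used += 1
        return True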
Google responded to your post so that’s good news. We all know the nature of APIs, but a secure transaction system is non-negotiable from Google and its peers for LLM API use. Right now LLM APIs are like unencrypted credit card numbers floating around.
> Are there recommended safeguards beyond ... moving calls server-side?
This implies the API calls originated in the client, suggesting the client may have had the API key.
That's standard for Firebase apps. It's also recommended by Google (they describe the keys as "public by design").
Feels like a confusing thing to name "key" if it's presumably more of an identifier.
It's "implied" throughout the whole post (or more like assumed that the reader understands this, because it's the basic premise of the problem). It's why they link to a post that explains the basic concept after a remark that "This describes our issue in more detail".
> tl;dr Google spent over a decade telling developers that Google API keys (like those used in Maps, Firebase, etc.) are not secrets. But that's no longer true: Gemini accepts the same keys to access your private data. We scanned millions of websites and found nearly 3,000 Google API keys, originally deployed for public services like Google Maps, that now also authenticate to Gemini even though they were never intended for it. With a valid key, an attacker can access uploaded files, cached data, and charge LLM-usage to your account. Even Google themselves had old public API keys, which they thought were non-sensitive, that we could use to access Google’s internal Gemini.
From Google themselves, in the Firebase docs:
> API keys for Firebase services are not secret. Firebase uses API keys only to identify your app's Firebase project to Firebase services, and not to control access to database or Cloud Storage data, which is done using Firebase Security Rules. For this reason, you do not need to treat API keys for Firebase services as secrets, and you can safely embed them in client code.
<https://firebase.google.com/support/guides/security-checklis...>
... or at least that's what it used to say, until they quietly updated the docs to say this:
> API keys for Firebase services are not secret. API keys for Firebase services only identify your Firebase project and app to those services. Authorization is handled through Google Cloud IAM permissions, Firebase Security Rules, and Firebase App Check.
> All Firebase-provisioned API keys are automatically restricted to Firebase-related APIs. If your app's setup follows the guidelines in this page, then API keys restricted to Firebase services do not need to be treated as secrets, and it's safe to include them in your code or configuration files.
Followed later by (in different section):
> Use your Firebase-provisioned API keys only for Firebase-related APIs. If your app uses any other APIs (for example, the Places API for Maps or the Gemini Developer API), use a separate API key and restrict it to the applicable API.
Yeah, the number of people creating, running and maintaining websites who don't understand how websites actually work in practice is very high, and it seems we haven't even come close to the ceiling yet.
I think the logistics of calculating cost in real time are extremely hard. I don't think there is a single big cloud service provider that has hard limits instead of alerts.
As long as they revert the charge when notified of scenarios like this, and they have historically done so in many cases, it's fine. It's an acceptable workaround for a hard problem and the cost of doing business (just like credit cards accept a certain amount of loss to fraud as part of business).
They don't have to compute it in real time. They can cut service when they detect the cap has been reached and make the difference free of charge.
Overcharge protection doesn't have to be free. It could be +5% on prices or a fee of 25% when you reach the threshold.
They would have financial interest in calculating cost in real time and it'd magically become more and more precise over releases.
Why would it be hard to calculate cost? Multiply a fixed price * requests/time? It doesn't have to be exact in real time; it just has to report something approximately useful in real time.
It's absolutely not fine to be at the mercy of other people; that's what we buy cloud products, or really any products, for: so that we are not at the mercy of hardware faults, bad weather, bad teeth, hunger, thirst, [insert anything].
Cutting off at the exact cent is difficult, but a hard limit that triggers within one dollar of the actual limit should really be possible
If for some resources you can't sample measurements fast enough, you could weaken it to "triggers within one dollar or five minutes after cost overrun, whichever comes later". But LLM APIs are one of those cases where time isn't a factor; your only issue is that if you only check quota before each inference, a given query might bring you over.
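That last gap closes if you reserve a worst-case cost before each call and settle the actual cost afterwards, refusing calls that could not fit under the cap even at maximum output. A self-contained sketch with made-up prices and limits:

    # Reserve-then-settle spend accounting: no single query can tip you over.
    import threading

    class SpendGuard:
        def __init__(self, hard_limit_usd: float):
            self.limit = hard_limit_usd
            self.committed = 0.0  # settled spend
            self.reserved = 0.0   # worst-case spend of in-flight requests
            self.lock = threading.Lock()

        def reserve(self, worst_case_usd: float) -> bool:
            with self.lock:
                if self.committed + self.reserved + worst_case_usd > self.limit:
                    return False  # refuse the call instead of overshooting
                self.reserved += worst_case_usd
                return True

        def settle(self, worst_case_usd: float, actual_usd: float) -> None:
            with self.lock:
                self.reserved -= worst_case_usd
                self.committed += actual_usd

    guard = SpendGuard(hard_limit_usd=100.0)
    WORST = 0.25  # assumed max cost of one request at max output tokens
    if guard.reserve(WORST):
        actual = 0.07  # derived from real token usage in the response
        guard.settle(WORST, actual)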
> I think the logistics of calculating cost in real time is something that is extremely hard.
What makes you think that?
Ridiculous. They are clearly not trying at all. A hard wall preventing going over budget by 100x in a couple of hours is not some devilishly complicated distributed-systems problem.
Don't toe the party line.
Same reason why Azure AI only has easy rate limits by minute, not by day or week or month. Open source proxy projects do it easily tho. Think about the incentives.
Going over a hard cap by 3% would be a reasonable failure; going over by 30000% is not.
Crude Oil Futures, Natural Gas Futures, Google Cloud API keys.
the widow-maker list increases.
Don’t use GCP (and other big clouds) until they sort out their safeguards.
All three of the big cloud subreddits have stories like this on a regular basis
Unfortunately, yet another story like this. One of these unexpected usage charges in the thousands appears every month, with the same automatic denial too. This is one of the reasons I stopped using these kinds of pay-per-usage cloud services long ago. At best, I still use services that have hard-bounded usage limits, like EC2 from AWS, where one instance can never go beyond 24h/day of usage and is always capped, with shutdowns when exceeded, and limited credit cards, too.
It's super frustrating that this is the only option to realistically deal with this issue, since all stories end up the same way: The cloud company just saying "f* you, we don't care, pay up." and legal fees are always expensive :(
> At best, I still use services that have hard-bounded usage limits, like EC2 from AWS, where one instance can never go beyond 24h/day of usage and is always capped, with shutdowns when exceeded, and limited credit cards, too.
Is this possible on AWS today? I'm the same way: if I cannot set a hard limit for billing so I can know for a fact what the maximum monthly cost will be, I'm not interested in using that service for anything. That's one of the top reasons I've stayed clear of AWS; they used to have only billing alerts, and you couldn't actually set limits. I guess it's one step forward if they've finally implemented that now.
The top comment on the post physically hurt me. We've moved past the era of keeping env files in code bases and are now actually serving them lol.
Also, can't you tie a key to a domain or IP address to help stop unauthorized usage?
Not if it's publicly called from JavaScript, as your users' browsers will make those requests. You neither know their IP addresses, nor is the referer or origin header a safe choice, as it can be spoofed outside of a browser.
If it's called from Javascript in the browser, it's not a secret API key....
there are plenty of API keys distributed like this by design. For example, Google Maps requires this, or else your (anonymous) users can't use an embedded Google map on your website. And a public Firebase app needs some kind of API key, too.
Which is why Google calls it a public API key...
on a more positive note, you saved a few bucks not running your own server or database.
Can you pre-load money into your account and have that be used until it's zero, at which time you have to load more? Deepseek does it this way.
There's a brand-new, Gemini-specific feature for that (as new as March 23), but historically the answer has tended to be "no" from all the cloud providers. Most giants and indies alike have always been strongly opposed to implementing this feature for business reasons. (When you run across something that does let you do things that way, it's one of a handful of exceptions.) Their response is to tell you to set up budget alerts, which is not a solution, as described in this post.
<https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...>
No. I believe all major cloud providers are Pay As You Go. I think only Azure has a tier where you can run on free credits for a while.
The only thing I've seen is in MECM (SCCM): the Azure extension will hard shut down when you hit a limit, if you want.
I doubt most cloud providers are even technically ready for true prepaid billing (which requires things such as estimating and reserving funds prior to paid operations, corresponding real-time two-way interfaces instead of just eventually consistent billing event aggregation etc).
In early mobile networks, the feature set for prepaid used to always lag behind, since real-time billing wasn't really a design consideration from the beginning.
I suppose rather than taking on that extra work, offering a reduced feature set, or building something best-effort and taking financial responsibility for its failures, if cloud providers can just get away with making this the user's problem, why wouldn't they?
> When your Prepay credit balance on the billing account hits $0, all API keys in all projects linked to that billing account will stop working simultaneously. Prepay credits apply only to Gemini API usage costs; you can't use them to pay for other Google Cloud services.
https://ai.google.dev/gemini-api/docs/billing#prepay
Google doesn't allow disconnecting a credit card from an account unless you close it. That includes the situation where you are just trying out the free tier.
Does Google allow a privacy card, so that you can control whether an account can charge it or not? That wouldn't help if someone racked up a ton of charges and Google bills daily, though.
A failure to pay does not extinguish the underlying debt owed. While the US seems pretty dysfunctional (or customer friendly, depending on how you see it) when it comes to collecting on debts, this is not the case globally.
And even in the US, you could presumably easily find all your Google accounts (including personal ones) locked until you pay the outstanding sum. Not something I'd risk, personally.
No, GCP is not prepay.
https://ai.google.dev/gemini-api/docs/billing#prepay
OpenAI also worked like this last time I used it - not sure if that's changed.
Anthropic and Claude are making Google / Gemini look like a joke these days.
The issue at hand here is another reason I wouldn’t prefer GCP (aside from it being ridiculously complex and confusing). Antigravity worked well for me for a few weeks and then bizarre limit issues started popping up.
Come on Google, be a better competitor, make yourself an option for me, please.
Take them to court.
As always, you will need to make lots of noise on here and similar channels visited by influential people so stuff can get actioned.
Leading tech companies in 2026, folks.
I thought the pricing model was meant to be a benefit of the cloud? All of a sudden, shock horror, paying by the minute turns out to be no cheaper and maybe even more expensive than just doing it yourself
That's fucking bonkers that nothing in the system could see this as unusual and worthy of throttling. The embarrassment of this -- that a company LITERALLY SELLING machine learning services and expertise cannot spot such a thing... This should have led them to deal with it internally and refund it. Just... wow, Google.
This is GCP's revenue model, lol. Let's provide a (semi) generous free tier and trick people into accidentally going over it.
there is no way to cap your billing on gcp.
you can get notifications but that's it.
i don't want to get throttled below my quota but some type of spend limit would be good.
Is there a cloud provider that does have hard unbreakable billing caps? Everything I've seen has always been notifications or soft caps.
Not talking about fixed-access things like a Hetzner box.
and the notifications can be delayed because the spending system is not updated in real time, so even if you have a Cloud Task triggered on spending to disable the project, it may be too slow and several thousand may already be spent.
The company selling machine learning services would probably love a €54k bonus
It's so funny seeing people thinking this is not by design
It's terrible that giant cloud providers such as Google or AWS don't allow hard caps at the project level or prepaid billing. Especially because alerts are delayed, as the author stated: "We had a budget alert (€80) and a cost anomaly alert, both of which triggered with a delay of a few hours. By the time we reacted, costs were already around €28,000.".
I said this when this finding was originally posted and I'll say it again: This is by far the worst security incident Google has ever had, and that's why they aren't publicly or loudly responding to it. It's deeply embarrassing. They can't fix it without breaking customer workflows. They really, really want it to just go away and six months from now they'll complete their warning period to their enterprise contracts and then they can turn off this automated grant. Until then they want as few people to know about it as possible, and that means if you aren't on anyone's big & important customer list internally, and you missed the single 40px blurb they put on a buried developer documentation site, you're vulnerable and this will happen to you.
Disgusting behavior.
It's not a security incident because it makes Google money. It's extra revenue. They are embarrassed all the way to the bank.
At some point, when it appeared 2 months ago on HN and they still did nothing about it, intentionality can be assumed.
This is exactly it - and the normal "resolution" is a class-action lawsuit but no doubt their terms and conditions forbid that.
However, anyone affected should probably pollute their docket with lawsuits anyway.
This is only a little billing leakage, Operation Aurora in 2009 was 100x worse
It's actually much more than a billing leak [1]; again, most people don't know how bad this is, because Google is trying to keep it hush-hush. These keys don't just grant access to Gemini completions; they grant access to any endpoint on Google Cloud's generative AI product. This includes seeing all of the files that the Google Cloud project has uploaded to Gemini, and interacting with the Gemini token cache.
[1] https://trufflesecurity.com/blog/google-api-keys-werent-secr...
What does this have to do with security?
And this is why we invented segmentation. Everybody that is still not doing it is paying now, and this is fine.
Google is not the only culprit here.
good
i have seen this so many times...
i'm thinking it's time we replaced api keys.
some type of real time crypto payment maybe?
Prepaid only is a fantastic idea, especially for dumb-ass startups. Limiting your liability to $100 or so sounds like a big-ass W.
Yes, pre-paid would be fine and it's a well-understood pattern.
No need to retire API keys.
Nobody is thinking about the stock owners, I see
Implementing this in any meaningful manner quickly begins to look like every read becoming a globally synchronised write. Of course it doesn't have to be perfect, but even approximating perfection doesn't look much different. Also, can you imagine the kind of downtimes and complaints that would inevitably originate from a fully synchronous billing architecture?
> Of course it doesn't have to be perfect, but even approximating perfection doesn't look much different.
It's pretty easy to get right, if the provider allows you to go (slightly) negative before cutting you off.
> Also, can you imagine the kind of downtimes and complaints that would inevitably originate from a fully synchronous billing architecture?
Doesn't need to be fully synchronous.
OpenAI has this.
Prepaid only is a fantastic idea, until your site goes (desirably) viral and then gets shut off right as traffic is picking up, or you grow steadily and forget to increase your deposit amount and suddenly production is down. Billing alerts are a much better solution IMHO.
No you big dummy, that is especially when you want to limit your liability, lol.
Because these days it will be all worthless bot traffic.
Prepaid/paid limits with shutoff are appropriate for this, though.
If you have per-key limits, this is not possible, and even in a wild situation you should be able to expect that your Firebase key will not run up 50k.
Let me choose. This common point seems more like a rationalization of the default behavior of hyperscalers. AWS isn't avoiding prepaid out of concern for my site's virality; it's just that prepaid = less money.
You can also have both, a cap and one or more billing alert levels below it. Some providers do this (e.g. IIRC Backblaze B2).
Yes in reality, and ideally, you can have both, but GP specifically said "Prepaid only" implying you can't have both (which is what I replied to)
Well, they should also have pre-paid only. Offer a few different options.
Oh please no. And the "alternatives" to API keys aren't going to help much either, they'll just add friction to getting started (as reference: see the pain involved in writing a script that hits gmail or calendar API)
With AI there is NO justification for NOT DOING IT YOURSELF. Why use Firebase or <technology-x> if you can generate <the-thing> yourself and deploy to hardware you own or rent?
> and deploy to hardware you own or rent.
Because this part sucks. I grew up fiddling with Linux. I don't want to play devops anymore. I want to write code and run it.
That's replacing Google with OpenAI/Anthropic/whatever. Same shit