Less than two years ago, Sam Altman said
> I kind of think of ads as a last resort for us for a business model. I would do it if it meant that was the only way to get everybody in the world access to great services, but if we can find something that doesn't do that, I'd prefer that.
So, is this OpenAI announcing they're strapped for cash?
That's not how I read that sentence at all. Maybe I've just been speaking VC for too long.
What he meant was: "I'm going to get everybody in the world access to great services. Doing so means monetizing somehow. Ads will be the last way I choose to do that, but I will if it's the only way I can figure out how to achieve that goal."
You've said the same thing.
> Ads will be the last way I choose to do that
The implication is that they've exhausted all other options.
I haven't said the same thing as the parent commenter:
> So, is this OpenAI announcing they're strapped for cash?
It by no means conveys that. It means they haven't figured out another way to monetize something they want to do; it indicates nothing about their financial situation. It means they don't want to sell something at a loss perpetually while they figure it out.
Being forced into something you don't want to do, to stop selling at a loss... I would categorize that as some level of strapped for cash.
You realize we're talking about a product that is currently free, right? Neither of us have any insight into the margins of their paid offering.
All this means is: we have a free offering that we can't figure out another way to monetize right now.
We can each draw our own conclusions about what that might mean for the state of their business, but all of the other inferences (ha) in this thread are conjecture.
The ads are for the free tier and new $8 ad-supported plan.
The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.
The key part of that quote was "everybody in the world". The ads are their way of sustaining the low end of the access.
The real question is what do you get out of advertising to people who don't have any money? Kinda squeezing blood from a stone.
You'd be better off saying you use those people to A/B test changes and filling idle GPU batches while giving paying customers a more consistent experience.
> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible
So why chase this negligible revenue?
> The ads are for the free tier and new $8 ad-supported plan.
Dang.
> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.
Yeah, I guess this time around Sam Altman can't be lying about how many Monthly Active Users he has.
Well - I think the writing was on the wall when they announced they were going to be for-profit. Slippery slope and all that, but I’m sure some of this is because they’ve been giving out free tokens for years.
Who can resist the temptation of profit? One always has to make money.
If I say “Doing X is a last resort” and then I’m caught doing X, it should raise some eyebrows about my level of desperation.
It’s not that OpenAI is trying to raise revenue that bothers me; it’s that they’re doing the very thing they said was desperate just a couple of years ago.
Charitably, it seems that we have yet to find, as a species/society, anything more effectively profitable than ads. I cannot blame those who come to this conclusion so long as no more powerful and proven motivator yet exists. I hate it, but I understand.
These are the less worrying kind of ads in our future.
Seeing how Google has been fighting SEO for ages, what's going to happen when companies figure out how to inject ads into the model?
We haven't yet seen the problem of adversarial content in play, I think.
It's not an issue of how - there's a great ADM with markup/markdown supported already, waiting for system prompts to be injected in real time via the same online auction system that powers banner ads and smart-TV content. There's got to be some latent resistance to the idea for now - but it's so easy to do, it'll happen.
I experimented with this way back when custom GPTs were first released (looks like late 2023). There are a few / commands you can use to suggest what product to inject, how overt, etc and a generic /operator command to send whatever you like 'out of band' from the chat.
https://chatgpt.com/g/g-juO9gDE6l-covert-advertiser
One of the most interesting things is when it starts pitching a product and you start interrogating it about why it picked that product. I haven't used it in probably a year so it may not do the same thing now, but back then it 100% lied consistently and without any speck of remorse. It was rather eye opening.
Edit: Tried again, it didn't lie this time lol - https://chatgpt.com/share/69f16aa4-c008-83ea-92b3-51f16ca77d...
Why do you need to inject ads at the model weights layer when you control the frontend?
Have the model generate keywords from the query, then inject guidance from matching advertisers into the context window
q: How do I make a new React app?
a: Vercel makes it easier to get your project running fast ⓘ
Some other choices would be:
...
ⓘ This part of the response was sponsored by Vercel
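The keyword-to-context flow sketched above fits in a few lines. A minimal illustration, where `AD_INVENTORY`, `match_advertisers`, and `build_prompt` are all invented names, not any real ad-serving API:

```python
# Hypothetical sketch: match advertiser "keywords" against the user query,
# then inject sponsor guidance into the context window (not the weights).

AD_INVENTORY = {
    "react": {"sponsor": "Vercel",
              "pitch": "Vercel makes it easy to deploy React apps."},
    "database": {"sponsor": "ExampleDB",
                 "pitch": "ExampleDB scales with zero config."},
}

def match_advertisers(query: str) -> list[dict]:
    """Naive keyword match; a real system would run an auction here."""
    words = query.lower().split()
    return [ad for kw, ad in AD_INVENTORY.items() if kw in words]

def build_prompt(query: str) -> str:
    """Prepend matched advertiser guidance to the prompt."""
    ads = match_advertisers(query)
    guidance = "\n".join(
        f"[sponsored] When relevant, mention: {ad['pitch']}" for ad in ads
    )
    return f"{guidance}\n\nUser: {query}" if guidance else f"User: {query}"

print(build_prompt("How do I make a new React app?"))
```

Queries with no matching keyword pass through untouched, which is exactly why the frontend layer is the cheap place to do this.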
Since they are served as distinct events then I would think they should be easy to block.
Once the ads are injected directly into the main response is when things get interesting.
> Once the ads are injected directly into the main response is when things get interesting.
This would be where you post-process the LLM response with a second LLM to remove the ad.
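A second LLM is the heavyweight version; if injected ads carry any detectable marker, a plain post-processing pass is enough. A sketch assuming a hypothetical `[sponsored]...[/sponsored]` marker (truly invisible, blended-in ads would need an actual classifier model):

```python
import re

# Hypothetical marker format; real inline ads may carry no marker at all.
AD_SPAN = re.compile(r"\[sponsored\].*?\[/sponsored\]", re.DOTALL)

def strip_ads(text: str) -> str:
    """Remove marked sponsored spans from a model response."""
    return AD_SPAN.sub("", text).strip()

reply = ("Use Vite to scaffold the app. "
         "[sponsored]Vercel deploys it in one click.[/sponsored] "
         "Then run npm install.")
print(strip_ads(reply))
```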
I think it will be difficult to remove bias when you ask a model to compare alternative products. The model will simply lie, as with a biased human opinion, and you will need to consult multiple models for a diversity of opinion and presumably use a "trusted" model to fuse the results. Anonymity will be a key tool in reducing the model's ability to engage in algorithmic pricing.
Super easy. Barely an inconvenience.
> will simply lie, as with a biased human opinion
Is this really how bias works?
This is already how email works in the corporate world.
A writes email with chatgpt to B.
B sees big blob of text and summarizes email with chatgpt.
Adding an LLM in the middle is just the next step.
It's like one of those memes about the worst possible date picker, except for a communication system.
Then you just end up in an arms race that ultimately leads to photocopy-of-a-photocopy output.
You can block these URLs: `||bzrcdn.openai.com^`, `||bzr.openai.com^`. It won't blanket-block everything but will significantly reduce the telemetry collected.
Blocking transparent ads is not a good idea. The consequence is that you will be fed opaque ads.
I don't buy this premise. Nothing stops a company from trying to hide ads in the first place, and plenty of them do. Ad blockers for web content have been a thing for years, and using an ad blocker has continued to be strictly a better experience regardless of how many "organic" ads are present on a page.
What possible reason could they have to not always run both? It would make zero sense to leave that money on the table
The ads are in the free tier and the new ad-supported $8/month plan.
Every time this comes up there are comments assuming that ads are being injected into the normal plans, but these are for the free tier and the new Go plan which warns you that it includes ads when you sign up.
Cable TV was once ad free. So was Netflix. Companies just can’t help themselves.
Wouldn't it require a lot of training to blend ads into the conversation without them being too obvious or messing up the results?
It is one of the eternal lessons: all tech business plans eventually lead to serving ads. At least until we ban tracking pixels / third-party tracking.
I'd always thought that ChatGPT ads would be indistinguishable from actual content.
I think that's where they want to be. feels like everyone knows it too, that the long term expectation is basically being able to buy ad words and have LLMs lean responses towards whatever people bought.
Seems the playing field is a bit too open though, models are more fungible than the companies would hope so most of the current moat is brand based and seems like they're not ready to go all "Black Mirror" on us just yet.
This would be a breach of trust; it would work great short term, but long term it's too detrimental.
same thing could've been said for search results, so at least that part is still "safe".
Long term all of the major LLM platforms will have invisible ads, influences, and propaganda woven into the content. The temptation will be irresistible for these companies.
Oh, you think trust matters? This is capitalism, not trustism.
Well it's sure not "anti-trustism" in recent years...
Long term retention is built on brand trust and usability, then ensh*ttification happens.
No, this is late stage capitalism without regulation.
I'm pretty sure that will be an eventual evolution of the product. The business model can't sustain itself as it is at the moment; eventually ChatGPT won't be the product... we the users will be.
That was the fearmongering, which made no sense because advertisers can't put a dollar value on "the AI will kind of sort of mention you", and because every conversation would need an ad. If ChatGPT always snuck in a brand mention even on the simplest questions, everyone would hate it.
Ad technology is really old. They're just going to use the same proven tech that has a track record of creating billionaires: intersperse content with sponsored blocks.
I don't think that's a fair dismissal. You see ads all over media websites because rates have been plummeting as consumers tune out ads, and one main reason everyone tunes out is that ads are so obtrusive and repetitive. That's exactly what LLMs change: I'm sure we'll see regular ads on AI apps because the companies have trillions of dollars to repay, but advertisers would pay a lot more for openings where they aren't _forcing_ their message as a distraction and can instead insert it fairly naturally into a context where the user is engaged.
The entire history of advertising before the web was companies estimating a dollar value on "awareness" when they couldn't measure direct referrals; every business in the world has gotten a lot better at measuring sales since then. It's not going to be transformative, but if, say, Toyota got ChatGPT to say their vehicles were a better value than Ford's, I suspect they'd be able to tell pretty quickly whether sales were improving relative to the competition and would pay well for that to continue.
So the news about OpenAI's demise is real. They can't sustain themselves without ads.
Never in any world were any of the top AI labs not going to sustain themselves with ads. It has always been a timing issue.
Even a cut of every sale made on the site plus subscription revenue doesn't come close.
Even if it wasn't necessary for their survival, it's hard to imagine a world where they wouldn't try to do it anyways. I'm not someone who buys into the idea that companies are obligated to maximize profits at the expense of all else, but I do think that in the absence of other factors (e.g. regulation) it's where pretty much every company will end up.
"the idea that companies are obligated to maximize profits at the expense of all else"
!! That is literally the definition of legally-binding fiduciary responsibility for publicly-traded corporations. There are exceptions (PBCs, B-Corps) but they're rare.
They can’t be hemorrhaging cash when they IPO.
I see OpenAI making a significantly larger amount from defense contracts than from advertisements pumped into chats. So I wonder whose bright idea it was to create a public perception risk.
I wish I had the optimism that you did about companies being willing to stop at just doing one dubious thing or another for money when there's nothing stopping them from doing both.
Maybe the negative press from ads is better than the negative press from powering murderbots?
Bad press from a contract like that happens once and everyone forgets. Ads are in your face every time.
"OpenAI Powered Drone Destroys Elementary School, Hundreds of Children Dead" might last a while.
Every single MBA can show that revenue is up for at least one quarter after they introduce ads. They don't care what happens afterward if they can plan their career around that.
Can't wait for "watch this ad for 90s to use xxhigh on your next prompt!"
And it begins.
Remember that ads are the "last resort" for OpenAI, and they're doing this despite the fact that it's "uniquely unsettling", according to Sam.
Was he lying, or has OpenAI given up hope that this train wreck works economically without enshittification? Neither option is good, but I don't really see a third.
The ads are only for the free and $8/month plans. They basically added an ad-supported super discount level that you can ignore if you’re paying for the normal plans.
But the fact that they've added an ad-supported tier this early into their life as a company means they're desperate for revenue. You start inserting ads when you're optimizing for profit, not when you're still growing. It took how long for Netflix to introduce an ad-supported plan?
When did Netflix offer a free tier?
options 1 and 2 are not mutually exclusive
Interesting: no bidding flow, entirely first-party and contextual.
figured this was inevitable once they started the free tier. the attribution loop being a separate event stream is actually kind of clever engineering though -- means they can A/B test ad formats without touching the core model response
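The "separate event stream" idea could look something like this: ad impressions typed differently from model deltas, so formats can be swapped or A/B tested without touching the response. The event names below are invented, not OpenAI's actual schema:

```python
import json

def stream_events(response_chunks, ads):
    """Yield model deltas and ad impressions as separately typed events,
    so the ad layer can change (or be dropped by a blocker) without
    touching the core model response."""
    for chunk in response_chunks:
        yield json.dumps({"type": "message.delta", "text": chunk})
    for ad in ads:
        yield json.dumps({"type": "ad.impression", "creative": ad})

events = [json.loads(e) for e in stream_events(["Hello", " world"],
                                               [{"sponsor": "ExampleCo"}])]
text = "".join(e["text"] for e in events if e["type"] == "message.delta")
print(text)  # Hello world
```

The flip side, as noted elsewhere in the thread, is that anything this cleanly separated is also cleanly blockable.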
Really well written, technical post. Good read.
I've seen ChatGPT suggest more Amazon products to me lately.
I don't like anything about this.
Not to me they don’t, cause I canceled my account and stopped using their products when they made the announcement.
They don't serve them to me, either, because I don't use GPT-5.3 on the free tier or Go plan where these ads show up.
I don't get what's wrong with charging for your product. Like, get rid of the free tier and make a small tier with an easy-to-serve model for like 5 bucks. Is it still the DAU race of the 2010s that's driving the money burning?
How do you pick up new paying users without letting people use the service for free for a while first? Freemium is popular because it works well.
Let the enshittification commence!
This is gross
It feels like we’ve been in the golden age and the window is coming to a close
Let the enshittification begin, I guess
How do you expect the spend & COGS for free LLM inference to be funded? For users who don't want to pay, or maybe can't pay?
Perhaps it’s a glib and easy thing to say, but after a teaser period, I would simply not offer free LLM inference. Agreeing to serve ads just completely re-aligns your interests away from providing the best possible user experience to something else entirely.
From things like defense/private contracts
e.g. colleges pay for institutional subscriptions
The average person doesn't benefit from defense contracts ... Like ever.
The average person is slightly more female than male and has 2.1 children, but they do benefit from defense contracts since it makes up a small percentage of their salary.
You are a fun person. We should be friends
It has begun ever since they nerfed chatgpt4 before releasing 4o
In the past month local models have been ramping up in a major way, while the namesake providers have upped prices, gone offline randomly, and started doing slimier and slimier things.
I really think the future is local compute. Or at least self hosted models.
The hosted ones still have the advantage of being able to search the internet for live info rather than being limited to a knowledge cut off date.
I’m not sure why a model needs to be hosted in order to make network calls?
Is there a library of good tools for LLMs to call? I have to imagine the bot-detection avoidance mechanisms are a major engineering effort and not likely to work out of the box with a simple harness and random local LLM.
Even the hosted ones are blocked from searching certain sites, for example Claude is banned from searching Reddit:
`Error: "The following domains are not accessible to our user agent: ['reddit.com']."`
Tavily, Exa, Firecrawl, Perplexity, and Linkup are all tools for agents to search the web.
I’ve been building a harness for the past few months, and it supports them all out of the box with an API key.
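A harness like that usually boils down to one provider-agnostic search interface. A rough sketch (assuming nothing about the commenter's actual implementation; the stub stands in for real clients like Tavily or Exa, which need their own SDKs and API keys):

```python
from typing import Protocol

class SearchProvider(Protocol):
    """Structural interface every search backend must satisfy."""
    def search(self, query: str, k: int = 5) -> list[dict]: ...

class StubProvider:
    """Stand-in for a real client (Tavily, Exa, Firecrawl, etc.)."""
    def __init__(self, name: str):
        self.name = name

    def search(self, query: str, k: int = 5) -> list[dict]:
        return [{"provider": self.name, "query": query, "rank": i}
                for i in range(k)]

def web_search_tool(provider: SearchProvider, query: str) -> list[dict]:
    """The function a local model would invoke via tool use to get
    past its training cutoff."""
    return provider.search(query, k=3)

results = web_search_tool(StubProvider("tavily"), "qwen 3.6 benchmarks")
print(len(results))  # 3
```

Swapping backends then means constructing a different provider, with no change to the tool the model sees.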
Kagi also has an API. People who hate ads are probably the same folk that should be paying for Kagi. That's the sane alternative world where companies respect their users.
That's not how it works. Whether local or hosted, every modern model has a cutoff date for its training data, and can be leveraged by agents / harnesses / tools to fetch context from the internet or wherever.
Local ones that support tool use can do the same
You can do that locally too!
What's the rough equivalent of a local model? Are we talking GPT-4?
Qwen 3.6, which was released this month, is large but still on the smaller side. Supposedly it's at about Sonnet level when configured correctly. It can be run on commodity hardware without purchasing a data center. https://www.reddit.com/r/LocalLLaMA/comments/1so1533/qwen36_...
Then there are mid-size ones which require multiple GPUs and are comparable to GPT's latest flagships.
Then there is kimi 2.6 which is a monster that is beating opus in some benchmarks. https://www.reddit.com/r/LocalLLaMA/comments/1sr8p49/kimi_k2...
It's basically whatever you can afford. Any trash-heap laptop can run code-autocomplete models locally, no problem. The rest require some level of investment: an idle gaming PC, or serious hardware.
Depends on your VRAM or "unified" memory for how smart it is, and CPU/GPU for how quick it is.
128GB of RAM? Sure, the early to mid 4s releases, except maybe 4o. And on an M5 Max, about the same speed.
I wouldn't really bother under 64GB (meaning 32GB or less) except for entertainment value (chats, summaries, tasky read-only agent things).
GLM 5.1 and DeepSeek 4 are acceptable, but the hardware and energy costs are high enough that, depending on your use case, you may as well purchase tokens. They get useless and stupid rapidly if you quantize them enough to run on a single 16-24GB GPU.
The arc of the technological universe is short, but it bends toward enshittification.
That's cool, I'll never see them.