Longcat-flash-thinking is not super popular right now; it doesn't appear in the top 20 on OpenRouter. I haven't used it, but the market seems to like it a lot less than Grok, Anthropic, or even oAI's open model, gpt-oss-20b.
And to your point, once models are released open, they will be used in DPO post-training / fine-tuning scenarios, guaranteed, so it's hard to tell who's ahead by looking at an older open model vs a newer one.
Where are the wins coming from? It seems to me like there's a race to get efficient good-enough stuff in traditional form factors out the door; emphasis on efficiency. For the big companies it's likely maxing inference margins and speeding up response. For last year's Chinese companies it was dealing with being compute poor - similar drivers though. If you look at DeepSeek's released stuff, there were some architectural innovations, thinking mode, and a lottt of engineering improvements, all of which moved the needle.
On treadmills: I posit the oAI team is one of the top 4 AI teams in the world, and it has the best fundraiser and lowest cost of capital. My oAI bull story is this: if capital dries up, it will dry up everywhere, or at the least it will dry up last for a great fundraiser. In that world, pausing might make sense, and if so, they will be able to increase their cash from operations faster than any other company. While a productive research race is on, I agree they shouldn't pause. So far they haven't had to make any truly hard decisions though -- each successive model has been profitable and Sam has been successful scaling up their training budget geometrically -- at some point the questions about operating cashflow being deployed back to R&D and at what pace are going to be challenging. But that day is not right now.
That's disappointing to hear. I've generally liked Ed's writing but all his posts on AI / OAI specifically feel like they come from a place of seething animosity more than an interest in being critical or objective.
At least one of his posts repeated the claim that essentially all AI breakthroughs in the last few years are completely useless, which is just trainwreck hyperbole no matter where you lie on the spectrum as far as its utility or potential. I regularly use it for things now that feel like genuine magic, in wonder to the point of annoying my non-technical spouse, for whom it's just all stuff computers can do. I don't know if OpenAI is going to be a gazillion dollar business in ten years but they've certainly locked in enough customers - who are getting value out of it - to sustain for a while.
If you spend more money training the model and offering it as a service (with all the costs that that entails) than you earn back directly from that model's usage, it can only be profitable if you use voodoo economics to fudge it.
Luckily we live in a time period where voodoo economics is the norm, though eventually it will all come crashing down.
You're right, but that's not what's happening. Every major model trained at Anthropic and oAI has been profitable. Inference margins are on the order of 80%.
That’s true, but OpenAI and its proponents say each model is individually profitable so if R&D ever stops then the company as a whole will be profitable.
The problem with this argument is that if R&D ever stops, OpenAI will not be differentiated (because everyone else will be able to catch up), so their pricing power will disappear, and they won't be able to charge much more than the inference costs.
You're missing that they're pricing the value of models progressing them towards AGI, and their own use of that model for research and development. You can argue the first one, and the second is probably not fully spun up yet (though you can see it's building steam fast), but it's not total fantasy economics, it's just highly optimistic because investors aren't going to buy the story from people who're hedging.
If you've been around, imgur, basically. All the image hosting solutions before it (imageshack) and all the file hosting solutions before them. Yahoo.
"Crashing" in this context doesn't mean something goes completely away, just that its userbase dwindles to 1-5-10% of what it once was and it's no longer part of the zeitgeist (again, Yahoo).
It's a rude response from someone whose public persona is famously rude and abrasive. It's also worth considering the difference between publishing 10000 words to an audience of subscribers, and sending 10000 words unsolicited to a stranger.
Especially rude given, if he was feeling it was too long, he could've had an AI summarize it.
But this shows a certain intellectual laziness/dishonesty and immaturity in the response.
Someone's taken the time to write a response to your article; you can choose to learn from it (assuming it's not an angry rant), or you can just ignore it.
In fact, that completely dismisses this stupid article for me.
I wouldn't write them off yet - but if their funding dries up and there's no more money to support their spending habits, this will seem like a great prediction. Giving away stuff that's usually expensive for free is a great way to get numbers - it worked for Facebook, Uber, and many others, but it doesn't mean you'll become a profitable company.
Anyone with enough money can buy users - example they could start an airline tomorrow where flights are free and get a lot of riders - but if they don't figure out how to monetize, it'll be a very short experiment.
I don't believe this for a second. Inference margins are huge; if they stopped R&D tomorrow they would be making an incredible amount of money, but they can't stop investing because they have competitors.
It's this, and it's really funny to see users here argue about how the revenue is really good and whatnot.
OpenAI is only alive because it's heavily subsidizing the actual cost of the service they provide using investor money. The moment investor money dries up, or the tech industry stops trading money to artificially pump the market or people realize they've hit a dead end it crashes and burns with the intensity of a large bomb.
> The moment investor money dries up, or the tech industry stops trading money to artificially pump the market or people realize they've hit a dead end it crashes and burns with the intensity of a large bomb.
You have hit the nail on the head
To me, it is natural for investor money to dry up, as nobody should believe that things will always go the right way, yet it seems that OpenAI and many others are just on the edge... so really it's a matter of when and not if
So in essence this is a time bomb - tick tock, the time starts now - and they might be desperate because of it, as the article notes.
Inevitably it will get jammed with ads until barely profitable. Instead of being able to just cut and paste the output into your term paper, you're going to have to comb through it to remove all the instances of "Mountain Dew is for me and you!" from the output.
I thought it was starting when Ilya said, about a year ago, that scaling had plateaued. Now it's confirmed with GPT-5. Now they'll need to sell a pivot from AGI to productization of what they already have, at a valuation that implies reaching AGI?
Reminds me of MoviePass or early stage Uber. Everything is revolutionary and amazing when VCs are footing the bill. Once you have to contend with market pricing things tend to change.
One consequence of this: is there actual economic growth here? If it becomes a browser / social network / workplace-productivity etc. company, it's basically becoming another Google/Microsoft. While great for OpenAI, is there a lot of space for them to find new money rather than just take it from Google/Microsoft/Facebook?
Most things written about this subject are already polarizing.
I'd believe this if there were more internal company data than just some outsider using the same secondary data that OpenAI seemingly manipulates, to draw conclusions that have so many logical holes in them that they won't hold half a litre of water for five minutes.
- Ed has no insider information on the accounting or strategy of these AI companies and primarily reacts to public rumors and/or public announcements. He has no education in the field or any special credentials relating to it
- The people with full information are intelligent, and are continually pouring a shit-tonne of money into it at huge valuations
To agree with his arguments you have to explain how the people investing are being fooled... which is never brought up
> To agree with his arguments you have to explain how the people investing are being fooled
The people with insider knowledge are also the people who are financially invested in AI companies, and therefore incentivized to convince everyone else that growth will continue.
The “arguments” I see about that are always some variation of “they were wrong about WeWork!”, and leave it at that. Obviously smart people can be wrong, obviously dumb money exists, but the entire VC model is that the vast majority of your ultra-risky investments will fail, so pointing to failures proves nothing.
Their only moat is that they started the 'AI revolution'. More shock waves like the DeepSeek release are still to come. Not to mention that an LLM-to-AGI transition in the near future is a moot point. They're riding the wave, but for how much longer?
>the only real difference is the amount of money backing it
Judging by how often Sam Altman makes appearances in DC, it's not just money that sets OpenAI apart. It's likely also a strategically important research and development vehicle with implicit state backing, like Intel or Boeing or Palantir or SpaceX. The losses don't matter, they can be covered by a keystroke at the Fed if necessary.
> Edward Benjamin Zitron (born 1986 or 1987) is an English technology writer, podcaster, and public relations specialist. He is a critic of the technology industry, particularly of artificial intelligence companies and the 2020s AI boom.
GPT5 completely demolishes every other AI model for me. With minimal prompting it repeatedly and correctly makes massive refactors for me. All the other models pump out garbage on similar tasks.
What would be a balanced perspective? Perhaps that oAI may now be another "boring" startup in that it is no longer primarily about moving the technology frontier, but about further scaling while keeping churn low, with margins (in the broader sense, i.e. for now prospective margins) becoming increasingly important?
This is not a direct response to this piece, but I wrote a short post about the egregious errors Zitron is comfortable pushing in order to make things sound as bad as possible.
Three inline subscription CTAs, a subscription pop-up, and a subscribe-wall a few paragraphs in.
Oof!
Reacting to what I could read without subscribing: turns out profitably applying AI to status-quo reality is way less exciting than exploring the edges of its capabilities. Go figure!
It's a shame; I generally agree with most of what Ed has to say and I think his arguments come from a good place, but the website is pretty irritating and I find his delivery breathless and melodramatic to the point of cliche (not befitting the serious nature of the topics he argues). I had to stop listening to his podcast because of the delivery; it's not an uncommon situation for other CZM podcasts, but at least some of them handle their editorial content with a little more maturity (shout out to Molly Conger's Weird Little Guys podcast).
I hate to make the comparison between two left-ish people who yell for a living just because they're both British, but it kinda feels like Ed is going for a John Oliver type of delivery, which only really works well when you have a whole team of writers behind you.
I love almost all of the CZM shows, and even I have a hard time making it all the way through a full-on rant from Ed :/ and I agree with him. Sorry, Ed.
Ads are the obvious path; they just haven't had time to pull it off yet. Plus it's going to be hard to pull off without weakening the experience, so they'd like to push that out as much as possible, similar to how Google has only eroded its experience slowly over time. Their biggest competitor is Google.
They don’t really need to make much money on ads. They just need to weaken the free user experience to convert as many as possible into paid subscribers, then shake off the rest.
The intensely negative reaction to GPT-5 is a bit weird to me. It tops the charts in an elaborate new third-party evaluation of model performance at law/medicine, etc. [0] It's true that it was a bit of an incremental improvement over o3, but o3 was a huge leap in capabilities over GPT-4, a completely worthy claimant to be the next generation of models.
I will be the first person to say that AI models have not yet realized the economic impact they promised - not even close. Still, there are reasons to think that there's at least one more impressive leap in capabilities coming, based on both frontier model performance in high-level math and CS competitions, and the current focus of training models on more complex real-world tasks that take longer to do and require using more tools.
I agree with the article that OpenAI seems a bit unfocused and I would be very surprised if all of these product bets play out. But all they need is one or two more ChatGPT-level successes for all these bets to be worth it.
I think a lot of it is a reaction to the hype before the launch of GPT-5. People were sold on, and expecting, a noticeably big step (akin to GPT-3.5 to 4), but in reality it's not that much noticeably better for the majority of use cases.
Don't get me wrong, I actually quite like GPT-5, but this is how I understand the backlash it has received.
Yeah, that is fair. I admit to being a bit bummed out as well. One might almost say that if o3 was effectively GPT-5 in terms of performance improvement, then we were all really hoping for a GPT-6, and that's not here yet. I am pretty optimistic, based on the information I have, that we will see GPT-6-class models which are correspondingly impressive. Not sure about GPT-7 though.
Honestly, I’m skeptical of that narrative. I think AI skeptics were always going to be shrill about how it was overhyped and thus this proves how right they were! Seriously, how good would GPT5 have had to be in order for Ed to NOT write this exact post?
I’m very happy with GPT5, especially as a heavy API user. It’s very cost effective for its capabilities. I’m sure GPT6 will be even better, and I’m sure Ed and all the other people who hate AI will call it a nothing burger too. So it goes.
> based on both frontier model performance in high-level math and CS competitions
IMO the only takeaway from those successes is that RL for reasoning works when you have a clear reward signal. Whether this RL-based approach to reasoning can be made to work in more general cases remains to be seen.
There is also a big disconnect between how these models do so well in benchmark tasks like these that they've been specifically trained for, and how easily they still fail in everyday tasks. Yesterday I had the just released Sonnet 4.5 fail to properly do a units conversion from radians to arcsec as part of a simple problem - it was off by a factor of 3. Not exactly a PhD level math performance!
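For reference, the conversion itself is a one-liner; a quick sketch (the 1.0-radian input is just an example value):

    import math

    def rad_to_arcsec(rad: float) -> float:
        # 1 radian = 180/pi degrees, and 1 degree = 3600 arcseconds
        return rad * (180.0 / math.pi) * 3600.0

    print(rad_to_arcsec(1.0))  # ~206264.8 arcseconds per radian

Being off by a factor of 3 on something this mechanical is exactly the kind of everyday failure the benchmarks don't capture.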
I mean, I agree. There is not yet a clear path/story as to how a model can provide a consistently expert-performance on real-world tasks, and the various breakthroughs we hear about don't address that. I think the industry consensus is more just that we haven't correctly measured/targeted those abilities yet, and there is now a big push to do so. We'll see if that works out.
I agree. I mean, I can get o3 right from the API if I choose, but 5-Thinking is better than o3, and 5-Research is definitely better than o3-pro in both ergonomics and output quality. If you read Reddit about 4o, the group that formed a parasocial relationship with 4o and relied on its sycophancy seems to be the main group complaining. Interesting from a product-market-fit perspective, but not worrying as to "Is 5 on the whole significantly better than 4 / o1 / o3?" It is. Well, 5-mini is a dumpster fire, and awful. But I do not use it. I'm sure it's super cheap to run.
Another way to think about oAI's business situation: are customers using more inference minutes than a year ago? I definitely am. Most definitely. For multiple reasons: agent round-trip interactions, multimodal parsing, parallel Codex runs...
I'm not even saying he's "wrong"; I wouldn't want to be long OpenAI (I don't think they're doomed but that's too much risk for my blood). But I would bet all my money that Zitron has no idea what he's talking about here.
Yeah, I'm also not saying that, I'm not "OMG AGI tomorrow" either. I think he was one of the first to voice concerns about the financial situation of AI companies and that was valuable, but if you look at his blog he's basically written the same post nonstop for two years. How many times do you need to say that?
Mind sharing why you think that (genuinely curious)?
I think Ed hit some broad points, mostly (i) there were some breathless predictions (human level intelligence) that aren't panning out; (ii) oh wow they burn a ton of cash. A ton; (iii) and they're very Musky: lots of hype, way less product. Buttressed with lots of people saying that if AI did a thing, then that would be super useful; much less showing of the thing being done or evidence that it's likely to happen soon.
None of which says these tools aren't super useful for coding. But I'm missing the link between super useful for coding and a business making $100B / year or more which is what these investments need. And my experience is more like... a 20% speed improvement? Which, again, yes please... but not a fundamental rewriting of software economics.
I have been strongly tempted to make Zitron and Marcus GPTs... But every time I think about getting started I realize a simple shell script would work better.
Oh wait, Claude did a better job than I would have:
How long until disciples of Zitron realize that he is just feeding them sensationalist doomer slop in order to drive his own subscription business. Maybe never!
I find he exhibits the same characteristics that drove people like Red Letter Media in the early aughts to be "successful": make something so long and tedious that arguing with its points would require something twice as long, and so a motion toward an uncontested 40-minute longread becomes a surrogate for any actual argument. Said differently, it's easy for AI skeptics to share this as a way of backing up their own point. It's 40 minutes long, how could it be wrong!
I, personally, think that OpenAI is way overhyped and will never deliver on that promise. But... it might not matter.
There is going to be an awful lot of disruption to the economy caused by displacing workers with AI. That's going to be a massive political problem. If these people get their way, in the future AI will do all the work, but there'll be no one to buy their products because nobody is employed or has money.
But I just don't see one company dominating that space. As soon as you have an AI, you can duplicate it. We've seen with efforts like DeepSeek that replicating it once it's done is going to require significantly less effort. So that means you just don't have the moat you think you do.
Imagine the training costs get to $100M and require thousands of machines. Well, within a few years it's going to be $1M or less.
So the question is: can OpenAI (or any other company) keep advancing to outpace Moore's Law? I'm not convinced.
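To put rough numbers on that "within a few years" claim, here's a back-of-the-envelope sketch; the annual cost-decline rates are assumptions for illustration, not measured figures:

    import math

    # Years for a $100M training run to fall to $1M at an assumed annual
    # decline in the cost of reaching the same capability.
    start, target = 100e6, 1e6
    for annual_decline in (0.50, 0.70):  # hypothetical rates
        years = math.log(target / start) / math.log(1.0 - annual_decline)
        print(f"{annual_decline:.0%}/yr decline -> {years:.1f} years")
    # 50%/yr decline -> ~6.6 years; 70%/yr decline -> ~3.8 years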
But here's why it might not matter: Tesla. Tesla should not be a trillion dollar company. No matter how you value it on fundamentals, it should be a fraction of that. Value it as a car maker, an energy company or whatever and it gets nowhere near $1T. Yet, it has defied gravity for years.
Why? IMHO because it's become too large to fail and, in part, it's now an investment in the wealth transfer that is going on and will continue from the government to the already wealthy. The government will make sure Tesla won't fail as long as it's friendly to the administration.
As much as AI is hyped, it's still incredibly stupid and limited. We may get to the point where it's "smart" enough to displace a ton of people but it's so expensive to run it's cheaper to employ humans.
Yup, what a pack of desperate losers. They should already be at $50B revenue, 90% gross margin, 60% operating margin, no capex. Unit economics can't possibly change; they have eked out every last ounce of efficiency in training, inference, and caching, and hit every use case possible. It's all just really terrible.
I don't disagree that OpenAI is desperate; this is a fierce competition and Google has a pretty huge head start in a lot of ways. But I wonder at what point the people who constantly dismiss LLMs and AI will change their tune. I understand hating it and wishing we could all agree to stop things - I do too - but if you can't find any uses for it at this point, it's clear you're not trying.
Ed's newsletter, on HN, unflagged?! Maybe the bubble really is about to pop.
I've read pretty much all his posts on AI. The economics of it are worrying, to say the least. What's even more worrying is how much the media isn't talking about it. One thing Ed's spot on about: the media loved parroting everything Sam and Dario and Jensen had to say.
Speaking of boring and desperate, if you browse the posts on this "newsletter" for more than 2 minutes it's clear that the sole author is a giant bozo who also happens to be in love with himself.
It would be awesome if this blog post were made by an OpenAI [investor / stakeholder / whatever that nonprofit has] in order to drive up engagement by defending or hyping up OpenAI's efforts.
Do people really buy this nonsense? I mean just this week Sora 2 is creating videos that were unimaginable a few months ago. People writing these screeds at this point to me seem like they’re going through some kind of coping mechanism that has nothing to do with the financials of AI companies and everything to do with their own personal fears around what’s happening with machine intelligence.
So, wait, you're saying that these guys just aren't impressed by the AI technology, and that is blinding them to the fact that the AI companies' economics look really good?
That is a laughable take.
The AI technology is very very impressive. But that doesn't mean you can recover the hundreds of billions of dollars that you invested in it.
World-changing new technology excites everyone and leads to overinvestment. It's a tale as old as time.
I’m saying that seeing dubious economics is blinding people from accepting what’s actually going on with neural networks, and it leads to them having a profoundly miscalibrated mental model. This is not like analyzing a typical tech cycle. We are dealing with something here that we don’t really understand and transcends basic models like “it’s just a really good tool.”
It's not a religious angle; we literally don't know how or why these models work.
Yes we know how to grow them, but we don’t know what is actually going on inside of them. This is why Anthropic’s CEO wrote the post he did about the need for massive investment in interpretability.
It should rattle you that deep learning has these emergent capabilities. I don’t see any reason to think we will see another winter.
(To be clear, I do agree that AI is going to drastically change the world, but I don't agree that that means the economics of it magically make sense. The internet drastically changed the world but we still had a dotcom bubble.)
I feel like people speculating on the unsustainability of their losses probably value what they know more than what they don't know. In this case, however, what you don't know is more relevant than what you do know.
Despite the author's knowledge of publicly available information, I believe there is more the author is not aware of that might sway their arguments. Most firms keep a lot of things under wraps. Sure they are making lots of noise - everyone does.
The numbers don't add up, and there are typical signs of the Magnificent 7 engaging in behavior to hide financials/economics from their official balance sheets and investors.
PE firms and the M7 are teaming up to create SPVs, which then build and operate the data centers.
Through the wonders of regulation and financial alchemy, that debt/expenditure then doesn't need to be reported as infrastructure investment on their books.
It's like the subprime mortgage mix all over again, just this time it's about selling lofty future promises to enterprises who're gonna be left holding the bag on outdated chips or compute capacity without a path to ROI.
And there are multiple financial industry analysts besides Ed Zitron who raise the same topics.
Enterprises are always selling lofty future promises.
And your subprime mortgage reference - suggesting they are manipulating information to inflate the value of the firm - doesn't cleanly apply here. For once, here is a company that seems to have faithfully represented its obscene losses, and here we are already comparing it to the likes of Enron. Enron never reported financial data that could be categorized as losses.
I see lots of people speculating about these losses, and I really wish someone investing in OpenAI could come out and say something, however vague, about why they are investing.
Once again, I need not tell you, the information available to the general public is not the same as that which is available to anyone who has invested a significant amount into OpenAI.
So once again, rein in your tendency to draw conclusions from the obscene losses they have reported - especially since I'm positive you do not have the right context to be able to properly evaluate whether these losses make sense or not.
Fully disagree. OpenAI has 800 million active users and has effectively democratized cutting-edge AI for an amazing number of people everywhere. It took much longer for the Internet or the mobile Internet to have such an impact.
So "boring" ? Definitely not.
And it's up to a $1bn+ monthly revenue run rate, with no ads turned on. It's the first major consumer tech brand to launch since Facebook. It's an incredible business.
> first major consumer tech brand to launch since Facebook
from my recollection, post-FB $75B+ market cap consumer tech companies (excluding financial ones like Robinhood and Coinbase) include:
Uber, Airbnb, Doordash, Spotify (all also have ~$1bn+ monthly revenue run rate)
Fair comment, I will not fight you on it.
I propose oAI is the first one likely to enter the ranks of Apple, Google, Facebook, though. But it's just a proposal. FWIW they are already 3x Uber's MAU.
They aren't making money, so they haven't entered any ranks. They have users and revenue but cannot last with the current setup. Ticking time bomb.
Spotify goes back and forth from barely profitable to losing money every quarter. They have to give 70% of their revenue to the record labels and that doesn’t count operating expenses.
As Jobs said about Dropbox, music streaming is a feature, not a product.
I listed multiple candidates so disputing one wouldn't dispute my main point ;)
Hyperbole to say no major consumer tech brands have launched for decades
So so so happy about the "no ads" part, and I really do hope there is a paid option to keep no ads forever. And hopefully the paid subscriptions keep the ads off the free plans for those who aren't privileged enough to pay for it.
My hot take is that it will probably follow the Netflix model of pricing once the VC money wants to turn on the profit switch.
Originally Netflix was a single tier at $9.99 with no ads. As ZIRP ended and investors told Netflix its VC-like honeymoon period was over, ads were introduced at $6.99, the basic no-ads tier went to $15.99, and Premium went to $19.99.
Currently Netflix ad-supported is $7.99, ad-free is $17.99, and Premium is $24.99.
Mapping that onto OpenAI pricing: ChatGPT will be ~$17.99 for ad-supported, ~$49.99 for ad-free, and ~$599 for Pro.
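One back-of-the-envelope way to sanity-check a mapping like that: take Netflix's current tiers as multiples of its original $9.99 price and apply the same multiples to ChatGPT Plus at $20. (The multiplier approach is my assumption, not necessarily how the parent got those figures.)

    # Netflix tier prices as multiples of its $9.99 launch price,
    # applied to ChatGPT Plus at $20/month.
    netflix_launch = 9.99
    netflix_tiers = {"ad-supported": 7.99, "ad-free": 17.99, "premium": 24.99}
    chatgpt_plus = 20.00

    for tier, price in netflix_tiers.items():
        mult = price / netflix_launch
        print(f"{tier}: {mult:.2f}x -> ~${chatgpt_plus * mult:.2f}")
    # ad-supported: 0.80x -> ~$16.00; ad-free: 1.80x -> ~$36.02; premium: 2.50x -> ~$50.03

Netflix's own multiples put the ad tier near the ~$17.99 guess; the ~$49.99 and ~$599 figures imply considerably steeper multiples than Netflix ever charged.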
Zero moat.
Huge brand moat. Consumers around the world equate AI with ChatGPT. That kind of recognition is an extremely difficult thing to pull off, and also hard to unseat as long as they play their cards right.
"Brand moat" is not an actual economic concept. Moats indicate how easy/hard it is to switch to a competitor. If OpenAI does something user-adversarial, it takes two seconds to switch to Anthropic/Gemini (the exception being Enterprise contracts/lock-in, which is exactly why AI companies prioritize that). The entire reason that there are race-to-the-bottom price wars among LLM companies is that it's trivial for most people to switch to whatever's cheapest.
Brand loyalty and users not having sufficient incentive by default to switch to a competitor is something else. OpenAI has lost a lot of money to ensure no such incentive forms.
McDonald's has a brand moat. So does Coca-Cola. And many more products do. The switching cost is nil, but the brand does it all.
Again, that's brand loyalty, not a brand moat.
Moats, as noted in Google's "We Have No Moat, and Neither Does OpenAI" memo that made the discussion of moats relevant in AI circles, have a specific economic definition.
The concept of a 'moat' comes out of marketing - it was a marketing concept for decades before Warren Buffett coined the term 'economic moat'. The brand moat has been part of marketing for years and is a fully recognized and researched concept. It's even been researched with fMRIs.
You may not see it, but OpenAI’s brand has value. To a large portion of the less technical world, ChatGPT is AI.
Still not a moat tho
I don't completely agree. Brand value is huge. Product culture matters.
But say you're correct, and follow the reasoning from there: posit "All frontier model companies are in a Red Queen's race."
If it's a true Red Queen's race, then some firms (those with the worst capital structures / costs) will drop out. The remaining firms will trend toward 10%-ish net income - just over the cost of capital, basically.
Do you think inference demand and spend will stay stable, or grow? Raw profits could increase from here: if inference demand grows 8x, then oAI, as margins fall from 80% to 10%, would keep making $10bn or so a year in FCF at current spend; they'd decide if they wanted that to go into R&D, or just enjoy it, or acquire smaller competitors.
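The arithmetic behind that claim, with the revenue figure as an illustrative assumption rather than oAI's actual number:

    # Gross profit is flat when demand growth exactly offsets margin
    # compression: 8x demand at a 10% margin = 1x demand at an 80% margin.
    revenue = 12e9  # assumed annualized inference revenue
    profit_now = revenue * 0.80
    profit_commoditized = (revenue * 8) * 0.10
    print(f"${profit_now/1e9:.1f}bn vs ${profit_commoditized/1e9:.1f}bn")  # $9.6bn vs $9.6bn

Any demand growth beyond 8x would leave gross profit higher than today even at commodity margins.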
Things you'd have to believe for it to be a true Red Queen's race:
* There is no liftoff - AGI and ASI will not happen; instead we'll just incrementally get logarithmically better.
* There is no efficiency edge possible for R&D teams to create/discover that would make for a training / inference breakaway in terms of economics
* All product delivery will become truly commoditized, and customers will not care what brand AI they are delivered
* The world's inference demand will not be a case of Jevons paradox as competition and innovation drive inference costs down, and therefore we are close to peak inference demand.
Anyway, based on my answers to the above questions, oAI seems like a nice bet, and I'd make it if I could. Even the most "inference doomerish" scenario (capital markets dry up, inference demand stabilizes, R&D progress stops) still leaves oAI in a very, very good position in the US, in my opinion.
The moat, imo, is mostly the tooling on top of the model. ChatGPT's thinking and deep-research modes are still superior to the competition. But as the models themselves get more and more efficient to run, you won't necessarily need to rent them or rent a data center to run them. Alibaba's Qwen mixture-of-experts models are living proof that you can have GPT levels of raw inference on a gaming computer right now. How are these AI firms going to adapt once someone is able to run about 90% of raw OpenAI capability on a quad-core laptop at 250-300 watts max power consumption?
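As a concrete (hypothetical) sketch of that kind of local setup, using the Ollama Python client - the `qwen3:30b-a3b` model tag is my assumption, so substitute whatever Qwen MoE build you actually have pulled:

    # pip install ollama; assumes a local Ollama server with the model pulled,
    # e.g. via `ollama pull qwen3:30b-a3b`. Only a few billion parameters are
    # active per token in the MoE, which is why consumer hardware keeps up.
    import ollama

    response = ollama.chat(
        model="qwen3:30b-a3b",
        messages=[{"role": "user", "content": "Summarize mixture-of-experts in two sentences."}],
    )
    print(response["message"]["content"])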
I think one answer is that they'll have moved farther up the chain; agent training is this year, agent-managing-agents training is next year. The bottom of the chain inference could be Qwen or whatever for certain tasks, but you're going to have a hard and delayed time getting the open models to manage this stuff.
Futures like that are why Anthropic and oAI put out stats like how long the agents can code unattended. The dream is "infinite time".
Having sticky 800M WAU is a moat.
I wouldn't necessarily say so. I guess that's why they are trying to "pulse" people and "learn" from you instead of just providing decent, unbiased answers.
In Europe, most companies and governments are pushing for either Mistral or open-source models.
Most devs, who, if I understand correctly, are pretty much the only customers willing to pay $100+ a month, will switch in a matter of minutes if a better model comes along.
And they lose money on pretty much all usage.
To me a company like Anthropic, which mostly focuses on a target audience and does research on bias, equity, and such (very leading research, but still), has a much better moat.
I am with you, and they still have so many dials to tweak. Ads are one of the big dials.
Training costs can be brought down. New algorithms can still be invented. So much headroom.
And this is not just for OpenAI. I think Anthropic and Gemini also have similar room to grow.
productized != democratized
Using inventions from other people or those who have now left the company.
They have no moat, their competitors are building equivalent or better products.
The point of the article is that they are a bad business because the economics don't pan out long term if they follow the same path.
It's going to be one of the most consequential companies in human history.
Yeah, because of the global warming trend it started and its epic collapse.
I fully agree. People are going to be pointing to OpenAI as a warning of the dangers of the tech hype cycle for decades after its collapse.
You mean it's thanks to the incredible invention known as the Internet that they were able to "democratize cutting-edge AI to an amazing number of people".
OpenAI didn't build the delivery system; they built a chat app.
They moved the industry, that's for sure.
But at this point - there's nothing really THAT special about them compared to their competition.
They changed the video game Dota 2 permanently. Their bots could not control a shared unit (the courier) among themselves, so matches against their AI had special rules, like every player having their own courier. Not long after, the game itself was changed forever.
As a player for over 20 years this will be a core memory of OpenAI. Along with not living up to the name.
You can say the same thing about any number of hugely popular and profitable companies in the world.
You could say the same about Apple, word for word.
Apple has physical stores that will provide you timely, top-notch customer service. While not perfect, their mobile App Store is the best available in terms of curation and quality. Their hardware lineup is not very diverse, so it is stable for long-term use. And they have mindshare in a way that is hard to move off of.
Let's say Google or Anthropic release a new model that is significantly cheaper and/or smarter than an OpenAI one; nobody would stick to OpenAI. There is nearly zero cost to switching, and it is a commodity product.
Let's say Google releases a new phone that is significantly cheaper and/or smarter than an Apple one. Nobody would stick to Apple. There is nearly zero cost to switching, and it is a commodity product.
The AI market, much like the phone market, is not a winner take all. There's plenty of room for multiple $100B/$T companies to "win" together.
> Let's say Google releases a new phone that is significantly cheaper and/or smarter than an Apple one. Nobody would stick to Apple.
I don't think this is true over the short to mid term. Apple is a status symbol to the point that Android users are bullied over it in schools and on dating apps. It would take years to reverse the perception.
> Let's say Google releases a new phone that is significantly cheaper and/or smarter than an Apple one. Nobody would stick to Apple.
This is not at all how the consumer phone market works. Price and "smarts" are not the only factors that go into phone decisions. There are ecosystem factors and messaging networks that add significant friction to switching. The deeper you are into one ecosystem, the harder it is to switch.
e.g. I am on iPhone and the rest of my family is on Android. The group chat experience is significantly degraded; my videos look like 2003 flip-phone videos. With my iPhone-using friends, everything is high resolution.
There's a huge "cost" in switching when you are tied to one ecosystem (iOS vs. Android). How will you transfer all your data?
Pixel phones exist (and have for some time!) yet people still buy iPhones
There is only a zero cost to switching if a company is so perfectly run that everyone involved comes to the same conclusion at the same time, there are no meetings and no egos.
The human side is impossible to cost ahead of time because it’s unpredictable and when it goes bad, it goes very bad. It’s kind of like pork - you’ll likely be okay but if you’re not, you’re going to have a shitty time.
Idk, it's a company with $4.5B in revenue in H1 2025.
Those aren't insane numbers, but they're not bad either. YouTube had that revenue in... 2018, 12 years after launching.
There's definitely huge upside potential in OpenAI. Of course they are burning money at crazy rates, but it's not hard to see why investors are pouring money into it.
The insane numbers are the ones you find when you look at their promises, like reaching $125 billion in revenue by 2029 (which they predict will be the first year they are profitable) https://www.reuters.com/technology/artificial-intelligence/o...
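For scale, the growth rate implied by that projection; annualizing the H1 figure to roughly $9B is my own rough assumption:

    # Compound annual growth required to reach the reported 2029 projection.
    rev_2025, rev_2029, years = 9e9, 125e9, 4
    cagr = (rev_2029 / rev_2025) ** (1 / years) - 1
    print(f"{cagr:.0%} per year")  # ~93% per year, sustained for four years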
How credible would you have found their claims in 2021 that by 2025 they’d be doing north of ten billion in revenue?
> Idk, it's a company with $4.5B in revenue in H1 2025.
giving away dollar bills for a nickel each is not particularly impressive
I would be pretty impressed by anyone who managed to do that nominally. Moving a dozen billion dollars alone seems nontrivial.
Blowing a giant hole in Hoover Dam while somebody pees in Lake Mead would also be impressive. It just won't stay impressive for very long.
Even if the guy peeing is a world champion urinator named Sam.
It's an insane number considering how little they monetize it. Free users are not even seeing ads right now, and they already have $4.5B revenue. I think $100B by 2029 is a very conservative number.
I'm in awe they are still allowing free users at all. And I'm one of them. The free tier is enough for me to use it as a helper at work, and I'd probably pay for it tomorrow if they cut off the free tier.
> I'm in awe they are still allowing free users at all.
I am not.
> The free tier is enough for me to use it as a helper at work, and I'd probably pay for it tomorrow if they cut off the free tier.
You are sort of proving the point that this isn't crazy. They want to be the dealer of choice, and they can afford to give you the hit now for free.
...not monetized yet: Can't find the post, but a previous HN post had a link to an article showing that OpenAI had hired someone from Meta's ad-service leadership - so I took that to mean it's a matter of time.
edit: believe it was Fidji Simo et al.
https://www.pymnts.com/artificial-intelligence-2/2025/openai...
> it's a company with $4.5B in revenue in H1 2025
That's a lot of money to be getting from a subscription business and no ads for the free tier
Not hard to see upside here
It is when they are losing four times as much. Is their margin per subscriber even positive?
Yeah, how much profit will they make if they're able to go for-profit? Revenue doesn't tell me anything.
It's not hard to make $4.5B when you lose $13.5B. If you give me $18B, I would bet I could lose $13.5B no problem.
It is hard though. Getting people to hand $4.5B to a company is difficult no matter how much money you are losing in the process.
I mean sure, you can get there instantly if you say "click here to buy $100 for $50", but that's not what's happening here - at least not that blatantly.
Didn't they post a loss of $5 billion last year, and aren't they on track for a loss of $8-9 billion this year?
No: they're on track to lose $30B or so (they lost $13.5B in H1 2025).
Even if that's the case, they have eaten multiple times that amount of other companies' lunch. Companies that currently use ads, whereas ChatGPT does not (but will).
Have they?
GOOG is at record highs, FB is at record highs, MSFT is at record highs
Seems very clickbaity, with a "get off my lawn" vibe.
OpenAI has been incredibly valuable to me, and far from boring. They are the new Google to me. I learn so much faster thanks to OpenAI.
Haters gonna hate.
Ed... I wrote him a long note about how wrong his analysis of oAI was earlier this year. He wrote back and said "LOL, too long." I was like "Sir, have you read your posts? Here's a short version of why you're wrong." (In brief, if you depreciate models over even, say, 12 months, they are already profitable. Given they still offer 3.5, three years is probably a fairer depreciation schedule. On those terms, they're super profitable.)
No answer.
The depreciation schedule doesn't affect long term profitability. It just shifts the profits/loss in time. It's a tool to make it appear like you paid for something while it's generating revenue. Any company would look really profitable for a while if it chose long enough depreciation schedules (e.g. 1000 years), but that's just deferring losses until later.
No, it would in fact be appropriate to match the cost of the model training (incurred over a few months) to the lifetime of its revenue. That's not some weird shifting - it helps you understand where the business is at. In this case, on a per-model basis: very profitable.
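A toy illustration of the two accounting views; every number here is hypothetical, chosen only to show the mechanics:

    # Expensing a training run upfront vs. amortizing it over the model's
    # serving life. Cash flows are identical; the monthly P&L is not.
    training_cost = 600e6    # hypothetical one-time training cost
    monthly_revenue = 100e6  # hypothetical revenue while the model is served
    monthly_serving = 20e6   # hypothetical inference cost (80% gross margin)
    life_months = 36         # assumed life (cf. 3.5 still being offered)

    for month in (1, 2, 3):
        expensed = monthly_revenue - monthly_serving - (training_cost if month == 1 else 0)
        amortized = monthly_revenue - monthly_serving - training_cost / life_months
        print(f"month {month}: expensed {expensed/1e6:+.0f}M, amortized {amortized/1e6:+.0f}M")
    # month 1: expensed -520M vs amortized +63M; every later month: +80M vs +63M

Under the expensing view the model looks disastrous in month one and great afterward; under amortization you see the per-model margin directly.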
> I wrote him a long note about how wrong his analysis on oAI was earlier this year.
Why don't you consider posting it on HN, either as a response in this thread or as its own post? There's clearly interest in teasing out how much of OAI's unprecedented valuation is hype and/or justified.
I agree with you. Even Anthropic's CEO said EXACTLY this. He said, if you actually look at the lifecycle of each model as its own business, then they are all very profitable. It's just that while we're making money from Model A, we've started spending 10x on Model B.
Exponential cost growth with, at best, linear improvement is not a promising business projection.
Perhaps at some point we'll say "this model is profitable and we're just gonna stick with that".
Well, have they actually come up with data supporting this?
Yes, and it's furnished to some of the world's best investors, who are happy to pay valuations between $150B and $500B to access it
Among those are the guys that gave money to WeWork.
Make of that what you will.
I am not really betting on it.
the logic of a happy WeWork investor
> Given they still offer 3.5, three years is probably a more fair depreciation schedule.
But usage should drop considerably as soon as the next model is released. Many startups down the line are holding on in hope of a better model; many others can switch to a better/cheaper model quite easily. I'd be very surprised if usage of 3.5 is anywhere near what it was before the release of the next generation, even given all the growth. New users just use the new models.
Probably true! If revs shift to a new cheaper model that’s not bad though.
OpenAI expects multi-year losses before turning consistently profitable, so saying they are already profitable based solely on an aggressive depreciation assumption overstates the case.
This perspective ignores headwinds from copyright infringement lawsuits and increasingly popular LLM scraping protections.
Can you share your note to him and where he responded that way? Is it public?
Why 3.5, or three years, for depreciation? Models have been retrained much faster than that. I would guess something more in the three-month range.
Because businesses and people rely on consistent-ish responses, which you can get from models you've already validated your prompts on.
Dropping old models means breaking paying customers, which is bad for business.
The problem with this "depreciation" rationale is that it presumes all the cost is in training, ignoring that actually serving the models is also very expensive. I certainly don't believe they would be profitable, and vague gestures at some hypothetical depreciation schedule sound like accounting shenanigans.
Also, the whole LLM industry is mostly trying to generate hype about a possible future in which it is vastly more capable than it currently is. It's unclear whether they would still be generating as much revenue without that promise.
Your brief doesn't make sense, maybe you need to expand?
They're only offering 3.5 for legacy reasons: pre-DeepSeek, 3.5 did legitimately have some things that open source hadn't caught up on (like world knowledge, even as an old model), but that's done.
Now the wins come from relatively cheap post-training, and a random Chinese food-delivery company can spit out a 500B-parameter LLM that beats what OpenAI released a year ago, for free, under an MIT license.
Also, as you release models you're enabling both distillation of your own models and more efficient creation of new models (as the capabilities of the LLMs themselves are increasingly useful for building, data labeling, etc.).
I think the title is inflammatory, but the reality is if AGI is really around the corner, none of OpenAI's actions are consistent with that.
Utilizing compute that should be catapulting you towards the imminent AGI to run AI TikTok and extract $20 from people doesn't add up.
They're on a treadmill with more competent competitors than anyone probably expected grabbing at their ankles, and I don't think any model that relies on them pausing to cash in on their progress actually works out.
> expand brief please
OK!
LongCat-Flash-Thinking is not super popular right now; it doesn't appear in the top 20 on OpenRouter. I haven't used it myself, but the market seems to like it a lot less than Grok, Anthropic, or even oAI's open model, oss-20b.
And to your point, once models are released open, they will be used in DPO post-training / fine-tuning scenarios, guaranteed, so it's hard to tell who's ahead by looking at an older open model vs a newer one.
Where are the wins coming from? It seems to me like there's a race to get efficient, good-enough stuff in traditional form factors out the door, with the emphasis on efficiency. For the big companies it's likely maxing inference margins and speeding up responses. For last year's Chinese companies it was dealing with being compute-poor - similar drivers, though. If you look at DeepSeek's released stuff, there were some architectural innovations, a thinking mode, and a lot of engineering improvements, all of which moved the needle.
On treadmills: I posit the oAI team is one of the top four AI teams in the world, and it has the best fundraiser and the lowest cost of capital. My oAI bull story is this: if capital dries up, it will dry up everywhere, or at the least it will dry up last for a great fundraiser. In that world, pausing might make sense, and if so, they will be able to increase their cash from operations faster than any other company. While a productive research race is on, I agree they shouldn't pause. So far they haven't had to make any truly hard decisions, though -- each successive model has been profitable, and Sam has been successful at scaling up their training budget geometrically -- but at some point the questions about how much operating cash flow gets deployed back into R&D, and at what pace, are going to be challenging. That day is not right now.
I mean they are clearly trying to get some attention here. I wouldn't bite.
OpenAI is many things but I don't think I would call it boring or desperate. The title seems more desperate to me.
The lad doth continue to protest as valuations reach $1T. I wonder if he passed on some early stock and just can't get over it
That's disappointing to hear. I've generally liked Ed's writing, but all his posts on AI / OAI specifically feel like they come from a place of seething animosity more than an interest in being critical or objective. At least one of his posts repeated the claim that essentially all AI breakthroughs in the last few years are completely useless, which is trainwreck hyperbole no matter where you fall on the spectrum as far as its utility or potential. I regularly use it for things now that feel like genuine magic, in wonder to the point of annoying my non-technical spouse, for whom it's all just stuff computers can do. I don't know if OpenAI is going to be a gazillion-dollar business in ten years, but they've certainly locked in enough customers - who are getting value out of it - to sustain for a while.
If you spend more money training the model and offering it as a service (with all the costs that that entails) than you earn back directly from that model's usage, it can only be profitable if you use voodoo economics to fudge it.
Luckily we live in a time period where voodoo economics is the norm, though eventually it will all come crashing down.
You’re right, but that’s not what’s happening. Every major model trained at Anthropic and oAI has been profitable. Inference margins are on the order of 80%.
> Inference margins are on the order of 80%.
Source, please?
That’s true, but OpenAI and its proponents say each model is individually profitable so if R&D ever stops then the company as a whole will be profitable.
The problem with this argument is that if R&D ever stops, OpenAI will not be differentiated (because everyone else will be able to catch up), so their pricing power will disappear, and they won't be able to charge much more than the inference costs.
You're missing that they're pricing the value of models progressing them towards AGI, and their own use of that model for research and development. You can argue the first one, and the second is probably not fully spun up yet (though you can see it's building steam fast), but it's not total fantasy economics, it's just highly optimistic because investors aren't going to buy the story from people who're hedging.
> progressing them towards AGI
I don't see any reason to believe that LLMs, as useful as they can be, ever lead to AGI.
Believing this is an eventuality is frankly a religious belief.
What exactly does “come crashing down” mean? A service with 700 million users will cease to exist? Close shop, oops, our bad?
The same “come crashing down” arguments permeated HN when Uber and Meta were monetizing mobile, and …
Nothing is crashing down at this type of “volume”/user base…
If you've been around: Imgur, basically. All the image-hosting solutions before it (ImageShack), and all the file-hosting solutions before them. Yahoo.
"Crashing" in this context doesn't mean something goes completely away, just that its userbase dwindles to 1-5-10% of what it once was and it's no longer part of the zeitgeist (again, Yahoo).
> He wrote back and said "LOL, too long."
Some nerve
Seriously. It took more time to respond with disrespect than to just ignore it.
I meant more the nerve to say it was "too long". Ed Zitron may be right but he's got no right to accuse anyone else of writing too many words.
It's a rude response from someone whose public persona is famously rude and abrasive. It's also worth considering the difference between publishing 10000 words to an audience of subscribers, and sending 10000 words unsolicited to a stranger.
Especially rude given, if he was feeling it was too long, he could've had an AI summarize it.
But this shows a certain intellectual laziness/dishonesty and immaturity in the response.
Someone's taken the time to write a response to your article; you can choose to learn from it (assuming it's not an angry rant), or you can just ignore it.
In fact, that completely discredits this stupid article for me.
I mean, using AI to summarise someone's arguments and then using it in any way would be considerably more dishonest, lazy, and rude.
Like, the expectation that he will treat an unsolicited email with all seriousness is absurd in the first place, but AI-summarizing it would be wtf.
Not responding was a perfectly good neutral action.
Instead they chose to respond with a "LOL" and say it was too long, like they're a pretty unintellectual person.
Let's agree to disagree.
I wouldn't write them off yet - but if their funding dries up and there's no more money to support their spending habits, this will seem like a great prediction. Giving away stuff that's usually expensive is a great way to get numbers - it worked for Facebook, Uber, and many others, but it doesn't mean you'll become a profitable company.
Anyone with enough money can buy users - for example, they could start an airline tomorrow where flights are free and get a lot of riders - but if they don't figure out how to monetize, it'll be a very short experiment.
A bunch of this week’s OpenAI announcements address monetization, actually.
If they charged users what it actually costs to run their service, almost nobody would use it.
I don't believe this for a second. Inference margins are huge; if they stopped R&D tomorrow they would be making an incredible amount of money, but they can't stop investing because they have competitors.
It's all pretty simple.
It's this, and it's really funny to see users here argue about how the revenue is really good and whatnot.
OpenAI is only alive because it's heavily subsidizing the actual cost of the service they provide using investor money. The moment investor money dries up, or the tech industry stops trading money to artificially pump the market or people realize they've hit a dead end it crashes and burns with the intensity of a large bomb.
> The moment investor money dries up, or the tech industry stops trading money to artificially pump the market or people realize they've hit a dead end it crashes and burns with the intensity of a large bomb.
You have hit the nail on the head. To me, it is natural for investor money to dry up, as nobody should believe things will always go the right way, yet it seems that OpenAI and many others are right on the edge... so really it's a matter of when, not if.
So in essence this is a time bomb - tick tock, the clock starts now - and they might be desperate because of it, as the article notes.
Inevitably it will get jammed with ads until barely profitable. Instead of being able to just cut and paste the output into your term paper, you're going to have to comb through it to remove all the instances of "Mountain Dew is for me and you!" from the output.
I thought it started when Ilya said, about a year ago, that scaling had plateaued. Now confirmed with GPT-5. Now they'll need to sell a pivot from AGI to productization of what they already have, at a valuation that implies reaching AGI?
Reminds me of MoviePass or early stage Uber. Everything is revolutionary and amazing when VCs are footing the bill. Once you have to contend with market pricing things tend to change.
One of the consequences of this: is there actual economic growth here? If it becomes a browser / social network / workplace-productivity etc. company, it's basically becoming another Google/Microsoft. While great for OpenAI, is there a lot of space for them to find new money, rather than just taking it from Google/Microsoft/Facebook?
Most things written about this subject are already polarizing. I'd believe this if there were more internal company data, rather than just some outsider using the same secondary data that OpenAI seemingly manipulates, drawing conclusions with so many logical holes in them that they won't hold half a litre of water for five minutes.
I'm filing this under clickbait.
Sounds like they've realised AGI isn't just round the corner, and are retrenching to productise the things their tech can do well. (which is a lot).
Here's what I know:
- Ed has no insider information on the accounting or strategy of these AI companies and primarily reacts to public rumors and/or public announcements. He has no education in the field or any special credentials relating to it
- The people with full information are intelligent, and are continually pouring a shit-tonne of money into it at huge valuations
To agree with his arguments you have to explain how the people investing are being fooled.. which is never brought up
> To agree with his arguments you have to explain how the people investing are being fooled
The people with insider knowledge are also the people who are financially invested in AI companies, and therefore incentivized to convince everyone else that growth will continue.
The “arguments” I see about that are always some variation of “they were wrong about WeWork!”, and leave it at that. Obviously smart people can be wrong, obviously dumb money exists, but the entire VC model is that the vast majority of your ultra-risky investments will fail, so pointing to failures proves nothing.
Their only moat is that they started the 'AI revolution'. More shock waves like the DeepSeek release are still to come. Not to mention that an LLM-to-AGI transition in the near future is a moot point. They're riding the wave, but for how much longer?
>the only real difference is the amount of money backing it
Judging by how often Sam Altman makes appearances in DC, it's not just money that sets OpenAI apart. It's likely also a strategically important research and development vehicle with implicit state backing, like Intel or Boeing or Palantir or SpaceX. The losses don't matter, they can be covered by a keystroke at the Fed if necessary.
> Edward Benjamin Zitron (born 1986 or 1987) is an English technology writer, podcaster, and public relations specialist. He is a critic of the technology industry, particularly of artificial intelligence companies and the 2020s AI boom.
I usually like his analysis but this one smacks of extreme & unnecessary bias.
Describing GPT5 as underwhelming seems subjective and somehow also wrong. It's won me and many other devs over. And Sora 2 is also clearly impressive.
GPT5 completely demolishes every other AI model for me. With minimal prompting it repeatedly and correctly makes massive refactors for me. All the other models pump out garbage on similar tasks.
I liked the broader article, "Why Everybody is Losing Money on AI", more for the overhead perspective:
https://www.wheresyoured.at/why-everybody-is-losing-money-on...
What would be a balanced perspective? Perhaps that oAI may now be another "boring" startup in that it is no longer primarily about moving the technology frontier, but about further scaling while keeping churn low, with margins (in the broader sense, i.e. for now prospective margins) becoming increasingly important?
This is not a direct response to this piece, but I wrote a short post about the egregious errors Zitron is comfortable pushing in order to make things sound as bad as possible.
https://crespo.business/posts/cost-of-inference/
>Post ragebaiting bearish article on AI to hackernews
>Make front page
“Salesforce is just a database wrapper!”
Boring take
OpenAI has 700m WAU which is definitely not nothing
Three inline subscription CTAs, a subscription pop-up, and a subscribe-wall a few paragraphs in.
Oof!
Reacting to what I could read without subscribing: turns out profitably applying AI to status-quo reality is way less exciting than exploring the edges of its capabilities. Go figure!
It's a shame; I generally agree with most of what Ed has to say and I think his arguments come from a good place, but the website is pretty irritating and I find his delivery to be breathless and melodramatic to the point of cliche (not befitting the serious nature of the topics he argues). I had to stop listening to his podcast because of the delivery; it's not an uncommon situation for other CZM podcasts, but at least some of them handle their editorial content with a little more maturity (shout out to Molly Conger's Weird Little Guys podcast).
I hate to make the comparison between two left-ish people who yell for a living just because they're both British, but it kinda feels like Ed is going for a John Oliver type of delivery, which only really works well when you have a whole team of writers behind you.
I love almost all of the CZM shows, and even I have a hard time making it all the way through a full-on rant from Ed :/ and I agree with him. Sorry, Ed.
He's a performative contrarian. His arguments are... fine, but not worth the spittle that comes along with them.
Ads are the obvious path; they just haven't had time to pull it off yet. Plus, it's going to be hard to pull off without weakening the experience, so they'd like to push that out as much as possible, similar to how Google has only eroded its experience slowly over time. Their biggest competitor is Google.
They don’t really need to make much money on ads. They just need to weaken the free user experience to convert as many as possible into paid subscribers, then shake off the rest.
Why do you think they are starting these video sites? Ads are most certainly going to blow up on all the AI video platforms.
I feel like this guy is getting a bit too big for his britches.
Which is ironic since he’s been wrong about almost everything for years, and just keeps doubling down.
Is GPT-5 so bad? I often find it pretty impressive, if slow.
The intensely negative reaction to GPT-5 is a bit weird to me. It tops the charts in an elaborate new third-party evaluation of model performance at law/medicine, etc. [0]. It's true that it was a bit of an incremental improvement over o3, but o3 was a huge leap in capabilities over GPT-4 - a completely worthy claimant to be the next generation of models.
I will be the first person to say that AI models have not yet realized the economic impact they promised - not even close. Still, there are reasons to think that there's at least one more impressive leap in capabilities coming, based on both frontier model performance in high-level math and CS competitions, and the current focus of training models on more complex real-world tasks that take longer to do and require using more tools.
I agree with the article that OpenAI seems a bit unfocused and I would be very surprised if all of these product bets play out. But all they need is one or two more ChatGPT-level successes for all these bets to be worth it.
[0] https://mercor.com/apex/
I think a lot of it is a reaction to the hype before the launch of GPT-5. People were sold on, and expecting, a noticeably big step (akin to GPT-3.5 to GPT-4), but in reality it's not that much noticeably better for the majority of use cases.
Don't get me wrong, I actually quite like GPT-5, but this is how I understand the backlash it has received.
Yeah, that is fair. I admit to being a bit bummed out as well. One might almost say that if o3 was effectively GPT-5 in terms of performance improvement, then we were all really hoping for a GPT-6, and that's not here yet. I am pretty optimistic, based on the information I have, that we will see GPT-6-class models which are correspondingly impressive. Not sure about GPT-7, though.
Honestly, I’m skeptical of that narrative. I think AI skeptics were always going to be shrill about how it was overhyped and thus this proves how right they were! Seriously, how good would GPT5 have had to be in order for Ed to NOT write this exact post?
I’m very happy with GPT5, especially as a heavy API user. It’s very cost effective for its capabilities. I’m sure GPT6 will be even better, and I’m sure Ed and all the other people who hate AI will call it a nothing burger too. So it goes.
> based on both frontier model performance in high-level math and CS competitions
IMO the only takeaway from those successes is that RL for reasoning works when you have a clear reward signal. Whether this RL-based approach to reasoning can be made to work in more general cases remains to be seen.
There is also a big disconnect between how these models do so well in benchmark tasks like these that they've been specifically trained for, and how easily they still fail in everyday tasks. Yesterday I had the just released Sonnet 4.5 fail to properly do a units conversion from radians to arcsec as part of a simple problem - it was off by a factor of 3. Not exactly a PhD level math performance!
I mean, I agree. There is not yet a clear path/story as to how a model can provide a consistently expert-performance on real-world tasks, and the various breakthroughs we hear about don't address that. I think the industry consensus is more just that we haven't correctly measured/targeted those abilities yet, and there is now a big push to do so. We'll see if that works out.
I agree. I mean, I can get o3 right from the API if I choose, but 5-Thinking is better than o3, and 5-Research is definitely better than o3 pro in both ergonomics and output quality. If you read reddit about 4o, the group that formed a parasocial relationship with 4o and relied on its sycophancy seems to be the main group complaining. Interesting from a product market fit perspective, but not worrying as to "Is 5 on the whole significantly better than 4 / o1 / o3?" It is. Well, 5-mini is a dumpster fire, and awful. But I do not use it. I'm sure it's super cheap to run.
Another way to think about oAI's business situation: are customers using more inference minutes than a year ago? I definitely am. Most definitely. For multiple reasons: agent round-trip interactions, multimodal parsing, parallel Codex runs..
40 minute read.
The problem with this guy is that it's always the same 40 minutes on loop.
I'm not even saying he's "wrong"; I wouldn't want to be long OpenAI (I don't think they're doomed but that's too much risk for my blood). But I would bet all my money that Zitron has no idea what he's talking about here.
Yeah, I'm also not saying that, I'm not "OMG AGI tomorrow" either. I think he was one of the first to voice concerns about the financial situation of AI companies and that was valuable, but if you look at his blog he's basically written the same post nonstop for two years. How many times do you need to say that?
(I also flatly disbelieve in AGI).
Mind sharing why you think that (genuinely curious)?
I think Ed hit some broad points, mostly (i) there were some breathless predictions (human level intelligence) that aren't panning out; (ii) oh wow they burn a ton of cash. A ton; (iii) and they're very Musky: lots of hype, way less product. Buttressed with lots of people saying that if AI did a thing, then that would be super useful; much less showing of the thing being done or evidence that it's likely to happen soon.
None of which says these tools aren't super useful for coding. But I'm missing the link between super useful for coding and a business making $100B / year or more which is what these investments need. And my experience is more like... a 20% speed improvement? Which, again, yes please... but not a fundamental rewriting of software economics.
I have been strongly tempted to make Zitron and Marcus GPTs... But every time I think about getting started, I realize a simple shell script would work better.
Oh wait Claude did a better job than I would have:
https://claude.ai/share/32c5967a-1acc-450a-945a-04f6c554f752
Wow, Claude gave them both pretty scathing descriptions and you didn't even provide much context lol.
Maybe Claude is funny.
Maybe if you click on all the links and go down rabbit holes. It doesn't take more than a few minutes to get through otherwise.
Read time estimated by AI*
They failed to estimate that when they create a popup, I will close the website, so it's a 0-minute read.
That can’t be right. It is not that long at all.
The rest is paywalled
I didn't even realize that; I had to double-check, and then saw the "read full story" banner.
How long until disciples of Zitron realize that he is just feeding them sensationalist doomer slop in order to drive his own subscription business. Maybe never!
I find he exhibits the same characteristics that drove people like Red Letter Media in the early aughts to be "successful": make something so long and tedious that arguing with its actual points would require something twice as long, so that simply gesturing at an uncontested 40-minute longread becomes a surrogate for any actual argument. Said differently, it's easy for AI skeptics to share this as a way of backing up their own point. It's 40 minutes long, how could it be wrong!
> The GPT-5 upgrade for ChatGPT was a dud
All Claude Code users are moving to Codex as a result. I don't call that a dud
It also has no moat. I started using Gemini this month and honestly haven't missed OpenAI for a second.
I, personally, think that OpenAI is way overhyped and will never deliver on that promise. But... it might not matter.
There is going to be an awful lot of disruption to the economy caused by displacing workers with AI. That's going to be a massive political problem. If these people get their way, in the future AI will do all the work but there'll be no one to buy their products because nobody is employed and has money.
But I just don't see one company dominating that space. As soon as you have an AI, you can duplicate it. We've seen with efforts like DeepSeek that replicating it once it's done is going to require significantly less effort. So that means you just don't have the moat you think you do.
Imagine the training costs get to $100M and require thousands of machines. Well, within a few years it's going to be $1M or less.
So the question is: can OpenAI (or any other company) keep advancing to outpace Moore's Law? I'm not convinced.
But here's why it might not matter: Tesla. Tesla should not be a trillion dollar company. No matter how you value it on fundamentals, it should be a fraction of that. Value it as a car maker, an energy company or whatever and it gets nowhere near $1T. Yet, it has defied gravity for years.
Why? IMHO because it's become too large to fail and, in part, it's now an investment in the wealth transfer that is going on and will continue from the government to the already wealthy. The government will make sure Tesla won't fail as long as it's friendly to the administration.
As much as AI is hyped, it's still incredibly stupid and limited. We may get to the point where it's "smart" enough to displace a ton of people but it's so expensive to run it's cheaper to employ humans.
Yup, what a pack of desperate losers. They should already be at $50B revenue, 90% gross margin, 60% operating margin, no capex. Unit economics can't possibly change; they have eked out every last ounce of efficiency in training, inference, and caching, and hit every use case possible. It's all just really terrible.
I don't disagree that OpenAI is desperate - this is fierce competition, and Google has a pretty huge head start in a lot of ways - but I wonder at what point the people who constantly dismiss LLMs and AI will change their tune. I understand hating it and wishing we could all agree to stop; I do too. But if you can't find any uses for it at this point, it's clear you're not trying.
Ed's newsletter, on HN, unflagged?! Maybe the bubble really is about to pop.
I've read pretty much all his posts on AI. The economics of it are worrying, to say the least. What's even more worrying is how much the media isn't talking about it. One thing Ed's spot on about: the media loved parroting everything Sam and Dario and Jensen had to say.
Speaking of boring and desperate, if you browse the posts on this "newsletter" for more than 2 minutes it's clear that the sole author is a giant bozo who also happens to be in love with himself.
I'd rather read a trillion lines of AI slop.
Not surprising, given that the boring execs won the internal fight and the excitingly brilliant researchers lost and left the company.
It would be awesome if this blog post was made by an OpenAI [investor / stakeholder / whatever that non profit has] in order to drive up engagement for defending or hyping up OpenAI's efforts.
Epic ragebait dude.
Do people really buy this nonsense? I mean just this week Sora 2 is creating videos that were unimaginable a few months ago. People writing these screeds at this point to me seem like they’re going through some kind of coping mechanism that has nothing to do with the financials of AI companies and everything to do with their own personal fears around what’s happening with machine intelligence.
So, wait, you're saying that these guys just aren't impressed by the AI technology, and that is blinding them to the fact that the AI companies' economics look really good?
That is a laughable take.
The AI technology is very very impressive. But that doesn't mean you can recover the hundreds of billions of dollars that you invested in it.
World-changing new technology excites everyone and leads to overinvestment. It's a tale as old as time.
I’m saying that seeing dubious economics is blinding people from accepting what’s actually going on with neural networks, and it leads to them having a profoundly miscalibrated mental model. This is not like analyzing a typical tech cycle. We are dealing with something here that we don’t really understand and transcends basic models like “it’s just a really good tool.”
That pseudo-religious angle that seems to have infected a lot of the tech industry is part of what "doomers" like myself or Zitron criticise.
It is just a really good tool. And that's fine. Really good tools are awesome!
But they're not AGI - which is basically the tech-religious equivalent to the Second Coming of Christ and about as real.
The fear isn't about the practicability of the tool. It's about the mania caused by the religious component.
It’s not a religious angle, we literally don’t know how or why these models work.
Yes we know how to grow them, but we don’t know what is actually going on inside of them. This is why Anthropic’s CEO wrote the post he did about the need for massive investment in interpretability.
It should rattle you that deep learning has these emergent capabilities. I don’t see any reason to think we will see another winter.
100% agreed, but that is not a reason to spend this much money on it.
We don't know how acetaminophen works exactly either, but it's still just a really good tool...
Ah, this time it's different. Understood.
(To be clear, I do agree that AI is going to drastically change the world, but I don't agree that that means the economics of it magically make sense. The internet drastically changed the world but we still had a dotcom bubble.)
Yep, this time it’s different.
lol "unimaginable" aka boring creepy slop that drives engagement on facebook for old people.
I buy it. I perceive you and people who talk like you (read: LLM Boosters) as literal cult members.
Yeah, I know. It’s weird to admit this kind of obvious error in public though, speaks to a very big epistemological hole on your part.
There's no point wasting time on a blatantly biased opinion, even if there are some truths to some extent somewhere in the tirade.
And no, we didn't need a subscription reminder every 10 seconds of interaction.
oh god no please, not this guy again
I feel like people speculating on the unsustainability of their losses probably value what they know more than what they don't know.
In this case however, what you don't know is more relevant than what you do know. Despite the author's knowledge of publicly available information, I believe there is more the author is not aware of that might sway their arguments. Most firms keep a lot of things under wraps. Sure they are making lots of noise - everyone does.
I guess rather they value... math?
The numbers don't add up, and there are typical signs of the Magnificent 7 engaging in behavior to hide the financials/economics from their official balance sheets and investors.
PE firms and the M7 are teaming up to create SPACs, which then build and operate the data centers.
By the wonders of regulation and financial alchemy, that debt/expenditure then doesn't need to be reported as infrastructure investment on their books.
It's like the subprime mortgage mix all over again, except this time it's about selling lofty future promises to enterprises who're going to be left holding the bag on outdated chips or compute capacity without a path to ROI.
And there are multiple financial industry analysts besides Ed Zitron who raise the same topics.
Worthwhile listen: https://www.theringer.com/podcasts/plain-english-with-derek-...
Enterprises are always selling lofty future promises.
And your subprime mortgage reference - suggesting they are manipulating information to inflate the value of the firm - doesn't cleanly apply here. For once, here is a company that seems to have faithfully represented its obscene losses, and here we are already comparing it to the likes of Enron. Enron never reported financial data that could be categorized as losses.
I see lots of people speculating about these losses, and I really wish someone investing in OpenAI could come out and say something, however vague, about why they are investing.
Once again - I need not tell you - the information available to the general public is not the same as that available to anyone who has invested a significant amount in OpenAI.
So once again, rein in your tendency to draw conclusions from the obscene losses they have reported - especially since I'm positive you do not have the right context to properly evaluate whether these losses make sense or not.