Here is a charitable perspective on what's happening:
- Nvidia has too much cash because of massive profits and has nowhere to reinvest them internally.
- Nvidia instead invests in other companies that use their gpus by providing them deals that must be spent on nvidia products.
- This accelerates the growth of these companies, drives further lock in to nvidia's platform, and gives nvidia an equity stake in these companies.
- Since growth for these companies is accelerated, future revenue will be brought forward for nvidia and since these investments must be spent on nvidia gpus it drives further lock in to their platform.
- Nvidia also benefits from growth due to the equity they own.
This is all dependent on token economics being or becoming profitable. Everything seems to indicate that once the models are trained, they are extremely profitable and that training is the big money drain. If these models become massively profitable (or at least break even) then I don't see how this doesn't benefit Nvidia massively.
> Nvidia has too much cash because of massive profits and has nowhere to reinvest them internally.
Here's an idea: they could make actual GPUs used for games affordable again, and not have Jensen Huang lie on stage about their performance to justify their astronomical prices. Sure, companies might want to buy them for ML/AI and crash the market again but I'm sure a company of their caliber could solve that if they _really_ wanted to.
I also just don’t understand, as someone with no business experience, how they aren’t just pouring all of that money into enhancing their production capacity. That’s very clearly their bottleneck here.
Yes, I’m certain they are spending an astronomical amount on that already, but why not more? Surely paying more money for construction of more facilities still nets gain even if you run into diminishing returns?
Instead they set up this whacko tax laundering scheme? Just seems like more corporate pocket filling to me, an idiot with no business knowledge.
The bottleneck is TSMC, who also make chips for almost every other hardware vendor.
TSMC is indeed increasing their production capability as fast as possible, but it's not easy... chip foundries are extremely expensive, complex, and take serious expertise to operate.
It’s called seeding the market. If they can accelerate the growth of potential customers, it will be more profitable than just increasing production to serve existing customers.
Think of exponential growth — would you rather increase the base or the exponent?
Hedging their bets against a potential sudden downturn in consumption of their product, e.g., an AI bubble exploding? If they invest heavily in production capacity only to find that there is not commensurate consumption, then they'll have lost badly.
Why would they want to do that? The only sector that matter to nvidia is datacenter, its where 90%+ of their profits are. Making their consumer sector even less profitable just seems like a waste of time
Yup. Not just Nvidia. Just look at the quarterly results reported by Amazon, Google, Meta, Microsoft and Apple. Each one is reporting revenues never before seen in history. If you make 100 Billion a quarter you have to spend it on something.
These guys are running hyper optimized cash extraction mega machines. There is no comparison to previous bubbles, cause so no such companies ever existed in the past.
What's shocking is the gulf between those companies and corporate 'normality'.
Eastern Airways, a UK airline, has just gone bust due to accumulated debts of £26 million. That's not even a rounding error for Google, yet was enough to put a 47-year-old company into bankruptcy and its staff out of work.
I think the only historical parallel to this disparity was the era of the East India Company.
100 billion a quarter is Alphabet, right? Given how much click fraud there is, and that every org and business under the sun is held to ransom to feature on the SERP for their own name even — it’s tempting to say Google’s become a private tax on everything.
They're "massively profitable" because they're laying off large portions of a major cost center - labor - and backloading uncoming data center construction costs. As those come due, and labor needs rise again, that profit disappears.
So many such profitable companies are the best possible evidence for the need for drastic antitrust intervention. The lack of competition and regulation is leading to a massive drain on every other sector.
This bubble is caused by excess competition. There are 4 large companies who believe that a large new market is being created so each is investing large amounts without any evidence that there will be a single winner that dominates the future market. None of these companies has anything remotely resembling a monopoly except for Amazon in online retail.
Your conclusion about training being the cost factor that will eventually align with profitability in the inference phases relies on training new models not being an endless arms race.
I'm just confused why people think token-based computing is going to be in such demand in the future. It's such a tiny slice of problems worth solving.
Yep. Same vibes as “ha ha who needs internet connected appliances” (pretty much all appliances are internet connected now). And the apocryphal “there is a worldwide market for maybe 5 computers”.
Right. As far as I can tell, OpenAI, Grok, etc sell me tokens at a loss.
But I am having a hard time figuring out how to turn tokens into money (i.e. increased productivity). I can justify $40-$200 per developer per month on tokens but not more than that.
There’s about 5M software devs in the US so even at $1000/year/person spend, that’s only $5B of revenue to go around. Theres plenty of other uses cases but focusing on pure tech usage, it’s hard to see how the net present value of that equates to multiple trillions of dollars across the ecosystem.
It's the first new way of interacting with computers since the iPhone. It's going to be massively valuable and OpenAI is essentially guaranteed to be one of the players.
It's not windows mobile because OpenAI was first and is the clear leader in the market. Windows mobile was late to the party and missed their window.
Palm is closer but it's a different world. It's established that Internet advertising companies are worth trillions. It's only in retrospect that what Palm could have been is obvious.
Barring something very unexpected OpenAI is coming out on top. They're prepaying for a good 5-10 years of compute. That means their inference and training for that time are "free" because they've been paid for. They're going to be able to bury their competition in money or buy them out.
Windows mobile by the time it looked like the iPhone was late to the party. But windows had been releasing a mobile os for a long time before that. Microsoft was first, they just didn’t make as good of a product as Apple despite their money.
OpenAI is also first, but it is absolutely not a given that they are the Apple in this situation. Microsoft too had money to bury the competition, they even staged a fake funeral when they shipped windows phone 7.
Yep. It would have to be something that dramatic to render all the technology and infrastructure OpenAI has obsolete. But if it's anything like massive data training on a huge number of GPUs then OpenAI is one of the winners.
This is where the money is. Anthropic just released claude for excel. If it replaces half of the spreadsheet pushers in the country theyre looking at massive revenue. They just started with coding because theres so much training data and the employees know a lot about coding
I'm not trying to be annoying, but surely if you'd justify spending $200/developer/month, you could afford $250/month...
The reason I wonder about that is because that also seems to be the dynamic with all these deals and valuations. Surely if OpenAI would pay $30 billion on data centers, they could pay $40 billion, right? I'm not exactly sure where the price escalations actually top out.
why would they sell you at a loss when they have been decreasing prices by 2x every year or so for the last 3 years? people wanted to purchase the product at price "X" in 2023 and now the same product costs X costs 10 times less over the years.. do you think they were always selling at a loss?
I can't read your hyperbolically titled paywalled medium post, so idk if it has data I'm not aware of or is just rehashing the same stats about OpenAI & co currently losing money (mostly due to training and free users) but here's a non paywalled blog post that I personally found convincing: https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...
These kinds of deals were very much a la mode just prior to the .com crash. Companies would buy advertising, then the websites and ad agencies would buy their services and they'd spend it again on advertising. The end result is immense revenues without profits.
Circular investments were also a compounding factor in the Japanese asset price bubble.
The practice was known as “zaitech”
> zaitech - financial engineering
> In 1984, Japan’s Ministry of Finance permitted companies to operate special accounts for their shareholdings, known as tokkin accounts. These accounts allowed companies to trade securities without paying capital gains tax on their profits.
> At the same time, Japanese companies were allowed to access the Eurobond market in London. Companies issued warrant bonds, a combination of traditional corporate bonds with an option (the “warrant") to purchase shares in the company at a specified price before expiry. Since Japanese shares were rising, the warrants became more valuable, allowing companies to issue bonds with low-interest payments.
> The companies, in turn, placed the money they raised into their tokkin accounts that invested in the stock market. Note the circularity: companies raised money by selling warrants that relied on increasing stock prices, which was used to buy more shares, thus increasing their gains from investing in the stock market.
There’s one key difference in my opinion: pre-.com deals were buying revenue with equity and nothing else. It was growth for growth’s sake. All that scale delivered mostly nothing.
OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.
> they’re using their equity to buy compute that is critical to improving their core technology
But we know that growth in the models is not exponential, its much closer to logarithmic. So they spend =equity to get >results.
The ad spend was a merry go round, this is a flywheel where the turning grinds its gears until its a smooth burr. The math of the rising stock prices only begins to make sense if there is a possible breakthrough that changes the flywheel into a rocket, but as it stands its running a lemonade stand where you reinvest profits into lemons that give out less juice
There is something about an argument made almost entirely out of metaphors that amuses me to the point of not being able to take it seriously, even if I actually agree with it.
OpenAI invests heavily into integration with other products. If model development stalls they just need to be not worse than other stalled models while taking advantage of brand recognition and momentum to stay ahead in other areas.
In that sense it makes sense to keep spending billions even f model development is nearing diminishing return - it forces competition to do the same and in that game victory belongs to the guy with deeper pockets.
Investors know that, too. A lot of startup business is a popularity contents - number one is more attractive for the sheer fact of being number one. If you’re a very rational investor and don’t believe in the product you still have to play this game because others are playing it, making it true. The vortex will not stop unless limited partners start pushing back.
But, if model development stalls, and everyone else is stalled as well, then what happens to turn the current wildly-unprofitable industry into something that "it makes sense to keep spending billions" on?
I suspect if model development stalls we may start to see more incremental releases to models, perhaps with specific fixes or improvements, updates to a certain cutoff date, etc. So less fanfare, but still some progress. Worth spending billions on? Probably not, but the next best avenue would be to continue developing deeper and deeper LLM integrations to stay relevant and in the news.
The new OpenAI browser integration would be an example. Mostly the same model, but with a whole new channel of potential customers and lock in.
Because they’re not that wildly unprofitable. Yes, obviously the companies spend a ton of money on training, but several have said that each model is independently “profitable” - the income from selling access to the model has overcome the costs of training it. It’s just that revenues haven’t overcome the cost of training the next one, which gets bigger every time.
> the income from selling access to the model has overcome the costs of training it.
Citation needed. This is completely untrue AFAIK. They've claimed that inference is profitable, but not that they are making a profit when training costs are included.
The bigger threat is if their models "stall", while a new up-start discovers an even better model/training method.
What _could_ prevent this from happening is the lack of available data today - everybody and their dog is trying to keep crawlers off, or make sure their data is no longer "safe"/"easy" to be used to train with.
Well, the thing is that that kind of hardware chips quickly decrease in value. It's not like the billions spend in past bubbles like the 2000s where internet infrastructure was build (copper, fibre) or even during 1950s where transport infrastructure (roads) were build.
Data centers are massive infrastructural investments similar to roads and rails. They are not just a bunch of chips duct taped together, but large buildings with huge power and networking requirements.
Power companies are even constructing or recommissioning power plants specifically to meet the needs of these data centers.
All of these investments have significant benefits over a long period of time. You can keep on upgrading GPUs as needed once you have the data center built.
They are clearly quite profitable as well, even if the chips inside are quickly depreciating assets. AWS and Azure make massive profits for Amazon and Microsoft.
I think that, at best, that description boils down to Nvidia, Oracle, etc inventing fake wealth to build something and OpenAI building their own fake wealth by getting to use that new compute effectively for free.
There are physical products involved, but the situation otherwise feels very similar to ads prior to dotcom.
The same way the stock market invents a trillion dollars of fake wealth on a strong up day?
That's capital markets working as intended. It's not necessarily doomed to end in a fiery crash, although corrections along the way are a natural part of the process.
It seems very bubbly to me, but not dotcom level bubbly. Not yet anyway. Maybe we're in 1998 right now.
The stock market isn't inventing money. Those investing in the stock market might be, those buying on leverage for example.
Capital markets weren't intended for round trip schemes. If a company on paper hands 100B to another company who gives it back to the first company, that money never existed and that is capital markets being defrauded rather than working as expected.
I think it's worse. The US market feels like a casino to me right now and grift is at an all time high. We're not getting good economic data, it's super unpredictable, and private equity is a disaster waiting to happen IMO. For sure there are smart people able to make money on the gamble, but it's not my jam.
I don't tend to benefit from my predictions as things always take longer to unfold than I think they will, but I'm beyond bearish at present. I'd rather play blackjack.
> It seems very bubbly to me, but not dotcom level bubbly.
Not? Money is thrown after people without really looking at the details, just trying to get in on the hype train? That's exactly how the dotcom bubble felt like.
We shouldn't judge whether an indicator is stable or okay only by looking to see if its the highest historical value.
PE ratios of 50 make no sense, there is no justification for such a ratio. At best we can ignore the ratio and say PE ratios are only useful in certain situations and this isn't one of them.
Imagine if we applied similar logic to other potential concerns. Is a genocide of 500,000 people okay because others have done drastically more?
I’m not asking if it makes sense, I’m simply pointing out that by that measure this is much less extreme than 2000. As I stated, I think we’re in a bubble, so valuations won’t make much sense.
If you have a better measure, share it. I trust data more than your or my feelings on the matter.
I sell you a cat for $1B and you sell me a dog for $1B and now we’re both billionaires! Whether the capital markets “want” that or not it’s still silly.
> OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.
I'm commenting here in case a large crash occurs, to have a nice relic of the zeitgeist of the time.
Happy to have provided. I’m not an AI bull and not in any way invested in the U.S. economy besides a little money in funds, but I do try to think about the war of today vs the war of yesterday. Hopefully that’s always en vogue.
Eventually when ChatGPT replaces Google Search, they will run ads, and so have that whole revenue stream. Still isn't enough money to buy the trillions worth of infrastructure they want, but it might be enough to keep the lights on.
That's an insightful point! Making insightful points like that one is taxing on the brain, you should consider an electolyte drink like Brawndo™ (it's got what plants crave) to keep yourself sharp!
Ugh I hate it so much, but you're right, it's coming.
One thing I've been contemplating lately is that from a business perspective, when your competitors expand their revenue avenues (generally through ads) you have three options: copy them to catch up, do nothing and perish, and lobby the government for increased consumer protections.
I've started to wonder why we see so few companies do this. It's always "evil company lobbying to harm the its customers and the nation." Companies are made up of people, and for myself, if I was at a company I would be pushing to lobby on behalf of consumers to be able to keep a moral center and sleep at night. I am strongly for making money, but there are certain things I am not willing to do for it.
Targeted advertising is one of these things that I believe deserves to fully die. I have nothing against general analytics, nor gathering data about trends etc, but stalking every single person on the internet 24/7 is something people are put in jail for if they do it in person.
The customers bought real equipment that was claimed to be required for the "exponential growth" of the Internet. It is very much like building data centers.
If they don't then they're spending a ton of money to level up models and tech now, but others will eventually catch up and their margins will vanish.
This will be true if (as I believe) AI will plateau as we run out of training data. As this happens, CPU process improvements and increased competition in the AI chip / GPU space will make it progressively cheaper to train and run large models. Eventually the cost of making models equivalent in power to OpenAI's models drops geometrically to the point that many organizations can do it... maybe even eventually groups of individuals with crowdfunding.
OpenAI's current big spending is helping bootstrap this by creating huge demand for silicon, and that is deflationary in terms of the cost of compute. The more money gets dumped into making faster cheaper AI chips the cheaper it gets for someone else to train GPT-5+ competitors.
The question is whether there is a network effect moat similar to the strong network effect moats around OSes, social media, and platforms. I'm not convinced this will be the case with AI because AI is good at dealing with imprecision. Switching out OpenAI for Anthropic or Mistral or Google or an open model hosted on commodity cloud is potentially quite easy because you can just prompt the other model to behave the same way... assuming it's similar in power.
As much ChatGPT says I’m basically a genius for asking it a good Vegan cake recipes, I don’t think that is providing it any data it doesn’t already have that makes it anyway better. Also at this point the massive increases in data and computing power seem to bring ever decreasing improvements (and sometimes just decline), so it seems we are simply hitting a limit this kind of architecture can achieve no matter what you throw at it.
ChatGPT chat logs contain massive amount of data teased out of people’s brains. But much of it is lore, biases, misconceptions, memes. There are nuggets of gold in there but it’s not at all clear if there’s a good way to extract them, and until then chat logs will make things worse, not better.
I’m thinking they eventually figure out who is the source of good data for a given domain, maybe.
Even if that is solved, models are terrible at long tail.
When I say models will plateau I don't mean there will be no progress. I mean progress will slow down since we'll be scraping the bottom of the barrel for training data. We might never quite run out but once we've sampled every novel, web site, scientific paper, chat log, broadcast transcript, and so on, we've exhausted the rich sources for easy gains.
Chat logs don’t run out. We may run out of novelty in those logs, at which point we may have ran out of human knowledge.
Or not - there still knowledge in people heads that is not bleeding into ai chat.
One implication here is that chats will morph to elicit more conversation to keep mining that mine. Which may lead to the need to enrage users to keep engagement.
Apple new M5 can run models over 10B parametres and if they give their new Studio next year enough juice, it can run maybe 30B local model. How long is it that you can run a full GPT-5 on your laptop or homeserver with few grands worth of hardware? What is going to happen to all these GPU farms, since as I understood they are fairly useless for anything else?
Very few people own top of the line Macs and most interactions are on phones these days. We are many generations of phones away from running GPT-5 on a phone without murdering your battery.
Even if that weren't true having your software be cheaper to run is not a bad thing. It makes the software more valuable in the long run.
Wasn’t there also a bunch of telecom infrastructure created in the dot-com bubble, tangible products created, etc? Things like servers, telephone wires, underwater internet cables, tech-storefronts, internet satellites, etc.
The other difference (besides Sam's deal making ability) is, willing investors: Nvidia's stock rally leaves it with a LOT of room to fund big bets right now. While in Oracle's case, they probably see GenAI as a way to go big in the Enterprise Cloud business.
The original "Tech" boom was an infrastructure boom by the telecoms funded by leveraged debt. It was an overbuild mismatch with the market timing. If you brought forward the timeline to when that infrastructure was used (late 2000s) you probably would never have had the crash.
This boom is a data center boom with AI being the software layer/driver. This one potentially has a lot longer to run even though everyone is freaking out now. If you believe the AI is rebuilding compute then this changes our compute paradigm in the future. As well as long as we don't get an over leveraged build out without revenue coming in the door. I think we are seeing a lot of revenue come in for certain applications.
The companies that are all smoke and mirrors built on chatGPT with little defensibility are probably the same as the ones you are referring to in the current era. Or the AI tooling companies.
To be clear circular deal flow is not a good look.
I can see the both sides of bull and bear at this moment.
One interesting aspect of this is that, with the exception of OpenAI, all of the companies leading this boom generate massive amounts of income from other arms of their buinesses. I think this is one reason for the potentially longer run, since they can subsidize AI CapEx with these cash flows for quite a while.
I'd gander a guess that there's nothing tech specific here and that fraudulent schemes are well defined for the SEC and commercial courts to take action if something is not kosher
It's usually not actually fraud. It's the amazon reinvesting back into growth, except the unit economics don't work if everyone cashes out at the same time, and if anyone starts cashing out the growth stops and everyone cashes out before it's too late.
* Rise of AI is one of the biggest “transfers” of IP-generated wealth.
* It is also a dramatic increase in the “software is eating the world” trend, or at least an anticipation of such. It kinda turned from everyone dragging their feet through software andoptin over the course of 30 years into a massive stampede.
Edit: the following is incorrect. I didn't know that the change to IRC § 174 was cancelled this summer.
------
What's crazy is that with the
2021 changes to IRC § 174 most software r&d spending is considered capital investment and can't be immediately expensed. Has to be amortized over 5 years.
I don't know how that 11.5B number was derived, but I would wager that the net loss on income statement is a lot lower than the net negative cash flow on cash flow statement.
If that 11.5B is net profit/loss, then whatever the portion of the expense part of the calculation that's software R&D could be 5x larger if it weren't for the new amortization rule.
Real question -- how else is OpenAI supposed to fund itself? It has capital requirements that the most moneyed business companies can't provide. So it has to come up with ways to get access to money while de-risking the terms. Not saying the circularity works but I don't know how else you raise at their scale.
This money is well beyond VC capability.
Either this lets them build to net positive without dying from painful financing terms or they explode spectacularly. Their rate of adoption it seems to be the former.
The tentacles seem a bit limp and disorientated on this one. There are lots of them but they just seem to flop wetly against the windows. I hope they're not going to start decomposing and stink the place up.
If you can only continue to fund a venture using scam-like structures, then maybe it's time to re-evaluate what the goals and value prop of the unfundable venture is.
It's incredible how Tesla used to lose a few hundred million a year and analysis shows would freak out claiming they'd never be profitable. Now Rivian can lose 5 billion a year and I don't hear anything about it, and OpenAI can lose 11 billion in a quarter and Microsoft still backs them.
I do think this is going to be a deeply profitable industry, but this feels a little like the WeWork CEO flying couches to offices in private jets
> Now Rivian can lose 5 billion a year and I don't hear anything about it, and OpenAI can lose 11 billion in a quarter and Microsoft
Rivian stock is down 90%, and I fairly regularly read financial news about it having bad earnings, stock going even lower, worst-in-industry reliability, etc etc.
I don't know why you don't hear about it, but it might be because it's already looking dead in the water so there's no additional news juice to squeeze out of it.
That's true, I shouldn't have written it off and was too eager to make the analogy.
There was a point where because of Tesla's enormous profits, it was seen as ok for Rivian to lose that much in a year, which was incredible because it's about the same amount of money Tesla lost during its entire tenure as a public company. You're right though they've been criticized for it and have paid the (stock) price for it.
Rivian lost something like $5B in 2024, but they're on track to only lose $2.25B in 2025. That trend line is clear. In 2026 they release a much lower cost model, and a lot of that loss has been development of that model. They probably won't achieve profitability in 2026, but if they get their loss down to $1B in 2026, in 2027 we'll likely see them go net positive.
We had an impressive new technology (the Web), and everyone could see it was going to change the world, which fueled a huge gold rush that turned into a speculative bubble. And yes, ultimately the Web did change the world and a lot of people made a lot of money off of it. But that largely happened later, after the bubble burst, and in ways that people didn't quite anticipate. Many of the companies people were making big bets on at the time are now fertile fodder for YouTube video essays on spectacular corporate failures, and many of the ones that are dominant now were either non-existent or had very little mindshare back in the late '90s.
For example, the same year the .com bubble burst, Google was a small new startup that failed to sell their search engine to Excite, one of the major Web portal sites at the time. Excite turned them down because they thought $750,000 was too high a price. 2 years later, after the dust had started to settle, Excite was bankrupt and Google was Google.
And things today sure do strike me as being very similar to things 25, 30 years ago. We've got an exciting new technology, we've got lots of hype and exuberant investment, we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal. And neither side really wants to listen to the more sober voices pointing out that both these things have been true at the same time many times in the past, so maybe it's possible for them to both be true at the same time in the present, too. And, as always, the people who are most confident in their ability to predict the future ultimately prove to be no more clairvoyant than the rest of us.
> we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal.
Um I think nobody is really denying that we are in a bubble. It's normal for new tech and the hype around it. Eventually the bad apples are weeded out and some things survive, others die out.
The first disagreement is how big the bubble is, i.e. how much air is in it that could vanish. And that's because of the second disagreement, which is about how useful this tech is and how much potential it has. It's clear that it has some undeniable usefulness. But some people think we'll soon have AGI replacing everybody and the opposite is that's all useless crap beyond a few niche applications. Most people fall somewhere in between, with a somewhat bimodal split between optimists and skeptics. But nobody really contends that it's a bubble.
>and OpenAI can lose 11 billion in a quarter and Microsoft still backs them.
For Microsoft, and the other hyperscalers supporting OpenAI, they're all absolutely dependent on OpenAI's success. They can realistically survive through the difficult times, if the bubble bursts because of a minor player - for example if Coreweave or Mistral shuts down. But if the bubble bursts because the most visible symbol of AI's future collapses, the value-destruction for Microsoft's shareholders will be 100x larger than OpenAI's quarterly losses. The question for Microsoft is literally as fundamental as "do we want to wipe $1tn off our market cap, or eat $11bn losses per quarter for a few years?" and the answer is pretty straightforward.
Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.
> Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.
Yeah true, the whole pivot from non-profit to Too Big to Fail is pretty amazing tbh.
They’re dependent on usage of their cloud. I don’t agree that they are as dependent on OAI as you suggest. Ultimately, we’ve unlocked a new paradigm and people need GPUs to do things - regardless of whether that GPU is running OAI branded software or not.
Why? Microsoft has permanent, royalty free access to the frontier models. If OpenAI went under, MSFT would continue hosting GPT-5 on Azure, GitHub Copilot, etc. and not be affected in the slightest.
The couch fascinate me the most because it's almost justifiable. Like offices need furniture and grand openings should be nice; however the cost could never be recovered and the company was way too big to be doing things that don't scale.
In a similar vein, LLM's/AI are clearly impressive technologies that can be done profitably. Spending billions on a model however may not be economically feasible. It's a great example of runaway spending, whereas the weed thing feels more along the lines of a drug problem to me.
Don't forget the perfectly legal use of legislation and bureaucratic precedent that gives them "soft/lossy monopoly" power or all but forces people do to business with them.
And as we saw, once a model is trained you need very little compute to run it and there is very little advantage in begin the 1st model and the 10th model.
Monopoly in this field is impossible, your product won't ever be so good that the competition does not make sense
I’m not so sure. Look for more gov regulations that make it hard for startups. Look for stricter enforcement of copyright (or even updates to laws) once the big players have secured licensing deals, to cut off the supply of cheap training data.
Investors are trying to bet on OpenAI being the first to replace all human skilled labor. Of course, this is foolish for a few reasons:
1. Performance of AI tools improving but marginally so in practice
2. If human labor was replaced, it's the start of global societal collapse so any winnings would be moot.
The one what? What is the secret sauce that will distinguish one LLM from another? Is it patentable? What's going to prevent all of the free LLMs from winning the prize? An AI crash seems inevitable.
Then they're doing it backwards. Google first built a far superior product, then pursued all the tricks to maintain their monopoly. OpenAI at best has the illusion of a superior product, and even that is a stretch.
I don't believe google won the search engine wars because they had the best product, while it may be true, the won because the of the tools they provided to their users. Email, cloud storage, docs/sheets/drive, Chrome, etc
They were already pretty dominant in search by the time they released most if not all of those. They got into that position by being the better search engine - better results and nicer to use (clean design, faster loading times).
The moment properly self-improving AI (that doesn't run into some logistic upper bound of performance) is released, the economy breaks.
The AI, having theoretically the capacity to do anything better than everyone else, will not need support (in resources or otherwise) from any other business except perhaps once to kickstart its exponential growth. If it's guarded, every other company becomes instantly worthless on the long term, and if not anyone with a bootstrap-level of compute will be able to also, do anything ever on a long enough time frame.
It's not a race for ROI, it's to have your name go in the book as one of the guys that first obsoleted the relationship between effort, willpower, intelligence, etc. and the ability to bring arbitrary change to the world.
The machine god would still need resources provided by humans on their terms to run; the AI wouldn’t sweat having to run, for instance, 5 years straight of its immortality just to figure out a 10 years plan to eventually run at 5% less power than now, but humans may not be willing to foot the bill for this.
There’s no guarantee that the singularity makes economic sense for humans.
Your logic might make intuitive sense, but I don't think it is as ironclad as you portray it.
The fact is, there is no law of physics that prevents the existence of a system that can decrease its internal entropy (complexity) on its own, provided you constantly supply it with energy (negative entropy). Evolutionary algorithm (or "life") is an example of such a system. It is conceivable that there is a point when a LLM is smart enough to be useful for improving its own training data, which then can be used to train a slightly smarter version, which can be used to improve the data even more etc... Every time you inference to edit the training data and train, you are supplying a large amount of energy into the system (both inferencing and training consumes a lot of energy). This is where the decrease in entropy (increase in internal model complexity and intelligence) can come from.
Silicon valley capital investment firms have always exploited regulatory capture to "compete". The public simply has a ridiculously short memory of the losers pushed out of the market during the loss-leader to exploit transition phase.
Currently, the trend is not whether one technology will outpace the other in the "AI" hype-cycle ( https://en.wikipedia.org/wiki/Gartner_hype_cycle ), but it does create perceived asymmetry with skilled-labor pools. That alone is valuable leverage to a corporation, and people are getting fired or ripped off anticipating the rise of real "AI".
One day real "AI" may exist, but a LLM or current reasoning model is unlikely going to make that happen. It is absolutely hilarious there is a cult-like devotion to the AstroTurf marketing.
The question is never whether this is right or wrong... but simply how one may personally capture revenue before the Trough of disillusionment. =3
I don't really believe that, and I thought it was interesting on Meta's earnings call that Zuck (or the COO) said that it seems unlikely at this point that a single company will dominate every use of LLMs/image models, and that we should expect to see specialization going forward.
As I understand the argument, it's that AI will reach a level where it's smart enough to improve itself, leading to a feedback loop where it takes off like a rocket. In this scenario, whoever is in second place is left so far in the dust that it doesn't matter. Whichever model is number one is so smart that it's able to absorb all economic demand, and all the other models will be completely obsolete.
This would be a terrifyingly dystopian outcome. Whoever owns this super intelligence is not going to use it for the good of humanity, they're going to use it for personal enrichment. Sam Altman says OpenAI will cure cancer, but in practice they're rolling out porn. There's more immediate profit to be made from preying on loneliness and delusion than there is from empowering everyone. If you doubt the other CEOs would do the same, just look at them kissing the ass of America's wannabe dictator in the White House.
Another possible outcome is that no single model or company wins the AI race. Consumers will choose the AI models that best suit their varying needs, and suppliers will compete on pricing and capability in a competitive free market. In this future, the winners will be companies and individuals who make best use of AI to provide value. This wouldn't justify the valuations of the largest AI companies, and it's absolutely not the future that they want.
I agree this is a reasonable bet though but for different reason, I believe this is a large scale exploitation where money is systematically siphoned away from workers and into billionaires via e.g. hedgefunds, bailouts, dividend payouts, underpay, wagetheft, etc. And the more they blow out this bubble the more money they can exploit out from workers. As such it is not really a bet, but rather the cost of business. Profits are guaranteed as long as workers are willing to work for yours.
I was at a bitcoin conference in 2018. One guy in the booth told me that the company had set up a $100M fund to fund startups that agreed to build apps on their blockchain. I wonder where they are now?
Okay, that article is a little bit shallow. I just summarises the headlines of the last weeks of circular deals. But is there also a more in depth article that sheds a little more light onto what this actually means? From a financial perspective?
He also has a podcast called Better Offline, which is slightly too ad heavy for my taste. Nevertheless, with my meagre understanding of the large corporate finances I was not able to find any errors in his core argument regardless of his somewhat sensationalist style of writing.
My complaint about Ed Zitron is that he's _always_ shouting into the void about something. A lot of the issues he covers are legitimate and deserve the scorn he gives them but at some point it became hard for me to sort the signal from the noise.
Ed Zitron sucks because he constantly spitballs on easy to confirm topics and keeps being wrong in ways that should be trivial to check and fix. Case in point:
It’s probably hard to do that in a news context because the real rationales are pretty tight.
Depending on your POV OpenAI and the surrounding AI hype machine is at the extremes either the dawn of a new era, or a metastasized financial cancer that’s going to implode the economy. Reality lies in the middle, and nobody really knows how the story is going to end.
In my personal opinion, “financial innovation” (see: the weird opaque deals funding the frantic data center construction) and bullshit like these circular deals driving speculation is a story we’ve seen time and time again, and it generally ends the same way.
An organization that I’m familiar with is betting on the latter - putting off a $200M data center replacement, figuring they’ll acquire one or two in 2-3 years for $0.20 on the dollar when the PE/private debt market implodes.
Not really. The idea that reality lies _in_ the middle is fairly coherent. It's not, on it's face, absolutely true but there are and infinite number of options between two outcomes so the odds are overwhelmingly in the favor that the truth lies somewhere in between. Is either side totally right about every single point of contention between them? Probably not, so the answer is likely in the middle. The fallacy is a lot easier to see when you're arguing about one precise point. In that case, someone is probably right and wrong. But, in cases where a side is talking about a complex event with a multitude of data points, both extremes are likely not completely correct and the answer does, indeed, lie in between the extremes.
The fallacy is that the true lies _at_ the middle, not in the middle.
You're thinking in one dimension. Truth. Add another dimension, time, and now we're talking about reality.
Ultimately, if both sides have a true argument, the real issue is which will happen first in time? Will AI change the world before the whole circular investment vehicle implode? Or after, like happened with the dotcom boom?
"Round" does not mean spherical and both of these claims are falsifiable and mutually exclusive.
The AI situation doesn't not have two mutually exclusive claims, it has two claims on the opposite sides of economic and cultural impact that are differences of magnitude and direction.
AI can both be a bubble and revolutionary, just like the internet.
"AI is a bubble" and "AI is going to replace all human jobs" is, essentially, the two extremes I'm seeing. AI replacing some jobs (even if partially) and the bubble-ness of the boom are both things that exist on a line between two points. Both can be partially true and exist anywhere on the line between true and false.
No jobs replaced<-------------------------------------->All jobs replaced
Bubble crashes the economy and we all end up dead in a ditch from famine<---------------------------------------->We all end up super rich in the post scarcity economy
For one, in higher dimensions, most of the volume of a hypersphere is concentrated near the border.
Secondly, and it is somewhat related, you are implicitly assuming some sort of convexity argument (X is maybe true, Y is maybe true, 0.5X + 0.5 Y is maybe true). Why?
I agree there is a large continuum of possibilities, but that does not mean that something in the middle is more likely, that is the fallacious step in the reasoning.
> Depending on your POV OpenAI and the surrounding AI hype machine is at the extremes either the dawn of a new era
Eh, in a way they're not mutually exclusive. Look back at the dot com crash: it was all about things like online shopping, which we absolutely take for granted and use every day in 2025. Same for the video game crash in the 80s. They are both an overhyped bubble and and the dawn of a new era.
Exactly. I think the difference is that we've developed a cadre of people are think 24x7 about capturing value in a way that makes dotcom era moguls look naive.
AI is a powerful and compelling technology, full stop. The sausage making process where the entire financial economy is pivoting around it is a different matter, and can only end in disaster.
The fact that it is private equity that is going to evaporate when the bubble bursts is the only silver lining I can see. However, my natual cynicism makes me bet they'll spend whatever they've got left over on their pet politicians to use government (ie, public funding) to bail themselves back out.
OpenAI is raising funding based on its own forecasts for AI demand growth, and sending most of it to Oracle, MSFT, Nvidia as well as paying insiders enormous salaries.
There are some interesting parallels here with the business model described in the book Confessions of an Economic Hitman. Developing countries take out huge loans from US lenders to build an electric grid, based on inflated forecasts from US consultancies they hired. The countries take on the debt, but the money mostly bypasses them and lands in the pockets of US engineering firms doing the construction, and government insiders taking kickbacks for greasing the wheels.
When the forecasted growth in industrial production fails to materialize, the countries are unable to repay the debt and have no option but to offer the US access to their resources, ports and votes in the UN.
What happens when OpenAI's forecasts of gargantuan growth fail to materialize and they're unable to sell more stock to pay off lenders? Does Uncle Sam step in with a bailout for "national security" reasons?
I can understand how someone's approach can be "hack all the things", however, at some point you run into the fundamental boundaries of the box you are in and you can't hack your way around those.
That doesn't really matter: as long as there are idiots who will buy your inflated stock you've externalized the problem for yourself whilst staying within the box.
Given that AI is a national security matter now, I'd expect the U.S.A to step in and rescue certain companies in the event of a crash. However, I'd give higher chances to NVIDIA than OpenAI. Weights are easily transferrable and the expertise is in the engineers, but ability to continue making advanced chips is not as easily transferred.
I’m curious if those of you calling for nationalization have worked for the government or a state-owned enterprise like Amtrak. People should witness the effects of long-term public sector ownership on productivity and effectiveness in a workplace.
Yeah, like IBM and Intel and GE and GM are shining examples of how effectively the private sector runs companies. Maybe large enterprises are by their nature inefficient. Maybe productivity isn't the best metric for a utility. We could, for instance, prioritize resiliency, longevity, accessibility, and environmental concerns.
The US government just allocated $10b towards Intel, and bailed out GM in the past. So what you said is clearly not the case. Now we have publicly-funded private management that is failing. At least if they were publicly owned and managed outright, they wouldn't be gutted by executives prioritizing quarterly profits.
Executives should prioritize producing things people are willing to pay money for cheaply. If there is a bias towards short-termism, that is a governance problem that should be addressed.
I agree that the US taking stakes or picking winners is bad, I don't think it follows that nationalization is the solution.
The USPS does more for its workers and customers than FedEx. There are addresses FedEx won't service due to "inefficiencies", hand over packages to the USPS for delivery.
Fwiw, this is a facile argument. You make no attenpt to demonstrate that after major reorganization (breakup / nationalization) that the firm will continue to have the desirable attributes (innovation, efficincy, ability to build) that made them too important to fail.
Read up a bit on the effort needed to get a fab going, and the yield rates. While engineers are crucial in the setup, the fab itself is not as 'fungible' as the employees involved.
I can spin up a strong ML team through hiring in probably 6-12 months with the right funding. Building a chip fab and getting it to a sensible yield would take 3-5 years, significantly more funding, strong supply lines, etc.
> I can spin up a strong ML team through hiring in probably 6-12 months with the right funding
Not sure what to call this except "HN hubris" or something.
There are hundreds of companies who thought (and still think) the exact same thing, and even after 24 months or more of "the right funding" they still haven't delivered the results.
I think you're misunderstanding how difficult all of this is, if you think it's merely a money problem. Otherwise we'd see SOTA models from new groups every month, which we obviously aren't, we have a few big labs iteratively progressing SOTA, with some upstarts appearing sometimes (DeepSeek, Kimi et al) but it isn't as easy as you're trying to make it out to be.
There’s a lot in LLM training that is pretty commodity at this point. The difficulty is in data - and a large part of why it has gotten more challenging is simply that some of the best sources of data have locked down against scraping post-2022 and it is less permissible to use copyrighted data than the “move fast and break things” pre-2023 era.
As you mentioned, multiple no name chinese companies have done it and published many of their results. There is a commodity recipe for dense transformer training. The difference between Chinese and US is that they have less data restrictions.
I think people overindex on the Meta example. It’s hard to fully understand why Meta/llama have failed as hard as they have - but they are an outlier case. Microsoft AI only just started their efforts in earnest and are already beating Meta shockingly.
Fully agree. I also think we are deep into the diminishing returns territory.
If I have to guess OAI and others pay top dollars for talent that has a higher probability of discovering the next "attention" mechanism and investors are betting this is coming soon (hence the hige capitalizations and willing to loive with 11B losses/quarter). If they lose patience in throwing money at the problem I see only few players remaining in the race because they have other revenue streams
> It's just that startups don't go after the frontier models but niche spaces
But both of "New SOTA models every month" and "Startups don't go for SOTA" cannot be true at the same time. Either we get new SOTA models from new groups every month (not true today at least) or we don't, maybe because the labs are focusing on non-SOTA instead.
I've always taken that term literally, basically "top of the top". If you're not getting the best responses from that LLM, then it's not "top of the top" anymore, regardless of size.
Then something could be "SOTA in it's class" I suppose, but personally that's less interesting and also not what the parent commentator claimed, which was basically "anyone with money can get SOTA models up and running".
Edit: Wikipedia seems to agree with me too:
> The state of the art (SOTA or SotA, sometimes cutting edge, leading edge, or bleeding edge) refers to the highest level of general development, as of a device, technique, or scientific field achieved at a particular time
I haven't heard of anyone using SOTA to not mean "at the front of the pack", but maybe people outside of ML use the word differently.
Right. I could spin up a strong ML team, an AI startup, build a foundational model, etc give a reasonable amount of seed capital.
Build a chip fab? I’ve got no idea where to start, where to even find people to hire, and i know the equipment we’d need to acquire would be also quite difficult to get at any price.
But the fabs don't belong to NVIDIA, they belong to TSMC. I have no doubt that Taiwan and maybe even the US government would step in to save TSMC if for some reason it got existential problems, but that doesn't provide an argument for saving NVIDIA
First-order: because of the capex and lead times. If you grab a bunch of world-class ML folks and put them in a room together, they're going to be able to start producing world-class work together. If you grab a bunch of world-class chip designers in the same scenario but don't have world-class fabs for them to use, they're not going to be able to ship competitive designs.
> If you grab a bunch of world-class chip designers in the same scenario but don't have world-class fabs for them to use, they're not going to be able to ship competitive designs.
But why such an unfair comparison?
Instead of comparing "skilled people with hardware VS skilled people without hardware", why not compare it to "a bunch of world-class ML folks" without any computers to do the work, how could they produce world-class work then?
- For the ML team, you need money. Money to pay them and money to get access to GPUs. You might buy the GPUs and make your own server farm (which also takes time) or you might just burn all that money with AWS and use their GPUs. You can trade off money vs. time.
- For the chip design team, you need money and time. There's no workaround for the time aspect of it. You can't spend more money and get a fab quicker.
> - For the ML team, you need money. Money to pay them and money to get access to GPUs. You might buy the GPUs and make your own server farm (which also takes time) or you might just burn all that money with AWS and use their GPUs. You can trade off money vs. time.
Even if you do those things though, it doesn't guarantee success or you'll be able to train something bigger. For that you need knowledge, hard work and expertise, regardless of how much money you have. It's not a problem you can solve by throwing money at it, although many are trying. You can increase the chances of hopefully discovering something novel that helps you build something SOTA, but as current history tells us, it isn't as easy as "ML Team + Money == SOTA model in a few months".
The start-up costs of creating a new chip manufacture are significantly higher (you can't just SAAS your way into factories) and the chips themselves more subject to IP and patents owned by that company.
One person can implement a transformer model from scratch in a weekend. Hardware is not the valuable part of machine learning. Data and how it is used are.
The "magic of AI" doesn't live inside an Nvidia GPU. There are billions of dollars of marketing being deployed to convince you it does. As soon as the market realizes that nvidia != magic AI box, the music should stop pretty quickly.
That's true, but without the kind of horsepower provided by modern hardware, even though I'm skeptical that it's all needed, especially given DeepSeek's amazing results, AI would be nearly impossible.
There are some important innovations on the algorithm / network structure side, but all these ideas are only able to be tried because the hardware supports it. This stuff has been around for decades.
AI models do not. Sure you can't just copy the exact floating point values without permission. But with enough capital you can train a model just as good, as the training and inference techniques are well known.
> But with enough capital you can train a model just as good, as the training and inference techniques are well known
You're not alone in believing just money can train a good model, and I've already answered elsewhere why things aren't so easy as you believe, but besides this, where are y'all getting that from? Is there some popular social media influencer that keeps parroting this or where it comes from? Clearly you're not involved in those processes/workflows yourself, then you wouldn't claim it's just a money problem, so where are you all getting this from?
Even if/when the bubble pops, I don't think NVIDIA is even close to need rescuing or being in trouble. They might end being worth 2 trillion instead of 5 but they're still selling GPUs nobody else knows how to make that power one of the most important technologies in the world. Also, all their other divisions.
The .com bubble didn't stop the internet or e-commerce, they still won, revolutioned everything, etc. etc. Just because there's a bubble it doesn't mean AI won't be successful. It will be, almost for sure. We've all used it, it's truly useful and transformative. Let's not miss the forest for the trees.
This comment is pretty depressing but it seems to be the path we're headed to:
> It's bad enough that people think fake videos are real, but they also now think real videos are fake. My channel is all wildlife that I filmed myself in my own yard, and I've had people leaving comments that it's AI, because the lighting is too pretty or the bird is too cute. The real world is pretty and cute all the time, guys! That's why I'm filming it!
Combine this with selecting only what you want to believe in and you can say that video/image that goes against your "facts" is "fake AI". We already have some people in pretty powerful positions doing this to manipulate their bases.
> We already have some people in pretty powerful positions doing this to manipulate their bases.
You don't have to be vague. Let's be specific. The President of the United States implied a very real voiceover of President Reagan was AI. Reagan was talking about the fallacy of tariffs as engines of economic growth, and it was used in an ad by the government of Ontario to sow divide within Republicans. It worked, and the President was nakedly mad at being told by daddy Reagan.
We are heading to an apocalyptic level of psychosis where human beings won't even believe the things they see with their own eyes are real anymore because of being flooded with AI slop 24/7/365.
There was a discussion on here recently about a new camera that could prove images taken with it weren't AI fakes, and most of the comments were skeptical anyone would care about such things.
This is an example of how people viscerally hate anyone passing off AI generated images and video as real.
>> figure out how to innovate on the financial model
Does it feel rather Orwellian that the original geeks now seem to be the same people who - forget about claiming technological innovation of as their own - completely discount it and apparently the important thing is now the creativity in funding an enterprise? We don't hear about the breakthroughs from the technologists, but the funding announcements from th investors and CEOs. It's not about the benefits of the technology, but how they're going to pay for it. Seems like a wildly perverse version of wag the dog...
No, they aren't. Locally, the person with the most esoteric knowledge is probably a weird nerd. It's mostly an accident that they chose to invest time in things typically associated with smarts. But globally, the best wizards got there by making it their profession. So maybe at your middling university, the people who could land a job at a frontier lab were nerdy wannabe frats, but at decent universities like MIT or Tsinghua, they're usually just better in every aspect of their lives. E.g. MIT has "math olympiad fraternities" all the cool kids join.
I went to a top 5 ranked school globally (~these lists fluctuate) and have been in elite circles since then. I can promise you that even there the autistic nerd fully outcompetes the renaissaince man.
Only if you defraud investors in hole-digging corp and hole-filling corp by that by doing this you will be able to extract Unobtanium, which will make both companies 1000x profitable.
This is just starting to sound more and more like "we're almost at AGI I promise bro just need one more round of investment bro please just one trillion more dollars please bro".
They might be using the GPUs, but is that use providing real value? You can run a while loop and max out any processor.
And, well, nobody knows if it is providing real value. We know it's doing something and has some value WE attached to it. We don't know what the real value is, we're just speculating.
Ok. Maybe use it better? Or don't use it at all. Doesn't mean it's not being used to some end, unlike a hole.
Keep in mind also that the models are going to continue improving, if only on cost. Just a significant cost reduction allows for more "thinking" mode use.
Most of the reports about how useless LLMs are were from older models being used by people that don't know how to use LLMs. I'm not someone that thinks they're perfect or even great yet, but their not dirt.
The increase in value of the companies outweighs the transactional costs and then you borrow against the value of the company and make new circular deals. It works really well for a very long time and then at some point it doesn’t. The trick of the game is to get big corps involved and key decision makers so that the government bails out everyone in the end.
> The trick of the game is to get big corps involved and key decision makers so that the government bails out everyone in the end.
This is bad. We should not shrug our shoulders and go "Oh ho, this is how the game is played" as though we can substitute cynicism for wisdom. We should say "this is bad, this is a moral hazard, and we should imprison and impoverish those who keep trying it".
* stock prices increasing more than the non-existent money being burnt
* they are now too big to fail - turn on the real money printers and feed it directly into their bank accounts so the Chinese/Russians/Iranians/Boogeymen don't kill us all
Well, I've got great news then 92% of GDP growth in the first half of 2025 was hole filling companies paying hole digging companies to dig holes and in-kind pay them to fill them up again
A lose lose situation for most people. Either the stock market crashes or AI progress meets expectations in the coming years and people start losing jobs.
As an aside does anyone get the feeling that NYT is also training its fire on all California tech companies these days? I know that NYT really doesn't like California (always hasn't - from restaurants to culture to business) but curious if other people see that as well?
This is such a strange article -- there's nothing particularly unusual going on here.
The first example basically stands in for all of them -- Microsoft invests $13B in OpenAI, and OpenAI spends $13B on Azure. This is literally just OpenAI purchasing Microsoft cloud usage with OpenAI's stock rather than its cash. There is nothing unusual, illicit, or deceptive about this. This is entirely normal. You can finance your spending through debt or equity. They're financing through equity, as most startups do, and they presumably get a better deal (better rates, more guaranteed access) via Microsoft than via other random investors and then buying the cloud compute retail from Microsoft.
This isn't deceiving any investors. This is all out in the open. And it's entirely normal business practice. Nothing of this is an indicator of a bubble or anything.
Or take the deal with Oracle -- Oracle is building data centers for OpenAI, with the guarantee that OpenAI will use them. That's just... a regular business deal. What is even newsworthy about this? NYT thinks these are "circular" deals, but by this logic every deal is a "circular" deal, because both sides benefit. This is just... normal capitalism.
I remember the same argument being used before the 2008 crash.
Point is that all of this companies need to start making real profits and pretty damn big ones, otherwise all of this will collapse. Problem is that unless Altman has some super-intelligent super-AI hidden in his closet, it is very unlikely that it will.
And whose gonna take the bill when it falls? Let me guess… Where have I seen this before…?
> Point is that all of this companies need to start making real profits and pretty damn big ones
MS, Meta, Google, Apple, Nvidia make enormous profits. I think part of this AI push we're seeing is that all of these companies have so much money they don't know how to spend it all. Meta is a great case where they bounced from blowing excess cash on the metaverse and now to AI.
That's fine, but that's a separate conversation. Maybe this is a bubble, maybe it isn't.
My point is that the way it's all being financed is just regular financing. This article is trying to present the way it's being funded as novel, as "complex and circular", when it's not. This is how funding and investment works 365 days a year in all sectors. Nothing about the funding arrangements is a bubble indicator.
So this is a strange article from the NYT, because it's trying to present normal everyday financing deals as uniquely "complex and circular".
I don’t know financial world well enough to say whether that’s here nor there, but can you give me examples from other companies or sectors where a company X funds the company Y with tens to hundreds of billions that the company Y uses to buy a service from the company X.
Furthermore, yes it might be business as usual but so is fraud and god knows what else in this particular political era. In order to strengthen your argument you have to not only show that the phenomenon is not only common, but good for the overall economy.
Circularly passing around tens to hundreds of billions of dollars for things which don't exist and may never exist to fund a technology that hasn't A. lived up to the hype they've marketed and B. proven any strategy to breakeven is fundamentally not that much different than the way in which Enron strategically boasted their revenue numbers by passing the money between shell corporations that their CFO created.
The main difference of course being that these are actual companies as opposed to just entities intently designed to inflate the apparent financials. While it seems like that difference means this situation is perfectly fine as compared with the fraudulent case of Enron, the net effect is still the same; these companies are posting crazy quarter over quarter revenue growth, sending their stock prices to crazy highs and P/E multiples, while the insiders are cashing out to the tunes of hundreds of millions of dollars.
I don't really see how exactly you're trying to make the argument that it may or may not be a bubble, it objectively meets the definition of a bubble in the traditional economic sense (when an asset's market price surges significantly above its intrinsic value, driven by speculative behavior rather than fundamental factors). These companies are massively overvalued on the speculative value of AI, despite AI having not yet shown much economic viability for actual profit (not just revenue).
Worse yet, it's not just one company with inflated numbers, it's pretty much the entire top end of the market. To compare it to the dot com bubble wouldn't be a stretch, it'd basically be apples to apples as far as I see it.
Microsoft isn't selling any stock. It's using its cash.
And an increase in revenue isn't the point. Microsoft isn't doing this to try to bump its short-term stock price or anything -- investors know where revenue is coming from. Microsoft is doing it because it thinks OpenAI is a good investment and wants to make money with that investment and have greater control.
No, that's not the last time this hit the news. This happens literally all the time. Again, this is just business as usual. It's not specific to AI, it's not specific to tech, and it's nothing to do with bubbles.
Then it would be great to have that context that shows criminality. Because that's an extraordinary claim you're suggesting, which is going to require actual evidence.
As for "bad ideas", businesses make tons of decisions every day that turn out to be good or bad in hindsight. So again, more specifics are needed here.
So what exactly are you suggesting? What context do you think the NYT chose to omit, and why would they omit it if it was meaningful?
The bubble part is that nvidia is getting revenue from people investing money in their hardware in order to sell something that has not yet been shown to be profitable. If it turns out no one can make enough money selling AI generated data to justify the costs spent on the compute needed to generate it at the current rate, then what nvidia are selling becomes much less valuable, and the whole thing collapses. We haven't figured out yet whether or not that will be the case.
But that has nothing to do with the arrangement of deals here.
If it's a bubble, then it will pop. If it's not a bubble, then all these investments will turn out to be great. But that's a different question.
The point is, all these deals happen all the time. They're not some kind of sign of a bubble. They happen just as much in non-bubbles. They're just capitalism working as usual.
These deals happen all the time. The case for a bubble is the following.
When Microsoft offers cloud-credits in exchange for openai equity, what it has effectively done is to purchase its own azure revenues. ie, a company uses its own cash to purchase its own revenues. This produces an illusion of revenue growth which is not economically sustainable. This is happening for all clouds right now wherein their revenues are inflated by uneconomic ai purchases. This is also happening for the gpu chip vendors as well, wherein they are offering cash or warrants to fund their own chip sales.
But nobody is falling for the "illusion of revenue growth". This is out in the open. This isn't a scam. Investors know this and are pricing accordingly. They see the revenue growth but also see the decrease in cash.
What Microsoft is actually doing is taking the large profits it would have otherwise made on its cloud compute with retail customers, losing much/all of those profits as it sells the compute more cheaply to OpenAI, and converting those lost profits into ownership of OpenAI because Microsoft's goal is to own more of OpenAI.
There is nothing "bubble" about this. Microsoft isn't some opaque startup investors don't understand. All of this is incredibly transparent.
There will be increased transparency since microsoft will now have to report on the performance of its openai equity [1]. The concern is that while chatgpt is a great app, the economic benefits of the current investments are being questioned. There is starting to be skepticism of ai as the public starts to get jaded. This happens in all fads. That explains why the media is buzzing with articles like these which are becoming increasingly critical while earlier they were all aboard the ai-train.
I’ve been listening to “The Smartest Guys In The Room” (the definitive book on Enron and their scandal) and one of the ways Enron continued to grow and grow is by setting up a really complicated system of moving debt onto equities off of their balance sheet.
While it was sorta legal (at the time) it was not ethical and led to a massive collapse of the #1 company at the time.
Makes you wonder if AI is in such a bubble. (It is).
There will be a bunch of layoffs and slowly they'll rehire back to pre-hysteria levels. I think the world is still going to need software engineers no matter what but companies will slow down on new features etc in an economic crunch.
The ripple effect will be felt hard, as American engineers are squeezed between offshoring and more engineers with Big Tech resumes being released into the market, and returnees go push back wages in their home countries in turn
They'll have to come in and redo all the work that people put onto LLMs as actual engineering software. The number of features I've worked on that could have been done with normal computing practices but instead shoehorned in bad AI to make decisions/routing logic is too high.
If it pops, some ai engineers will need to start doing some normal work again, and rest of us... we just continue doing what we were doing for past decades.
Or maybe not, nobody knows the future any more then next guy in line.
When I was 16 I started working at a startup buying and reselling used electronics.
There were like 5 competitors all trying to become the winner takes it all. Afaik after 10 years some closed, restructured but most of them burnt a lot of money. One lets call him indie dev made a lot of money building a simple comparison platform and getting 10-20% on all deals.
This is n=1, but I think it still made me really averse to raising money.
Speedrunning to "too big to fail". Turn on the infinite money printers and feed them directly into Sam Altman's bank account or the Chinese/Russians/Iranians/Boogeymen will destroy us all.
Everyone loves to compare AI with the dot com bubble. My question is, were there any policies put in place after the dot com bubble to mitigate a similar crash? Or did we learn nothing?
Weird angle, but isn't "believing there will be a crash" sort of framing it as if this were still normal market dynamics?
OpenAI and AI in general has posed itself as an existential threat and tightly integrated itself (how well? let's argue later) with so many facts of society, especially government, that like, realistically there just can't be a crash, no?
Or is this too doomsday / conspiratorial?
I just find it weird that we're framing it as crash/not crash when it seems pretty clear to me they really genuinely believe in AGI, and if you can get basically all facets of society to buy in... well, airlines don't "crash" anymore, do they?
If OpenAI were to shut down today, would anything in society really change? It seems all valuations are based on future integration into society and our daily lives. I don't think it has really happened yet.
A crash in the stock market doesn't necessarily mean a crash in the real market, The AI bubble burst being dot com style vs a gfc debacle depends on how much critical financial infrastructure is at risk during the debt deleveraging. If you look at the gdp growth during those two periods, the dot com era was a mild stagnation compared to the gfc's actual gdp decline.
Many here now didn't live through the dot-com bubble as an adult so can't really appreciate what it was like. The hype was something hard to describe. Financial analysts and journalists struggled to come up with ways to describe the health of these "companies". My favorite was what revenue multiple companies would trade it.
But the major takeaway was that almost none of these companies were real businesses. This is why I laughed at dot-com comparisons in the 2010s around the tech giants because Apple, Google, Microsoft, etc were money-printing machines on a scale we have trouble comprehending. That doesn't make them immune to economic struggles. Ad spending with Google will rise and fall with the economy.
OpenAI has a paper valuation in the hundreds of billions of dollars now and no prospect of a revenue model that will justify that for many, many years.
Currently, the hardware is a barrier to entry but that won't last. It has parallels in the dot-com era too when servers were expensive. The cost of training LLMs is (at least) halving every year. We're probably reaching the limits of what these transformers can do and we'll need another big breakthrough to improve.
OpenAI's moat is tenuous. Their value is in the model they don't release. But DeepSeek is a warning shot that it will be in somebody's geopolitical interest, probably China's, to prevent a US tech monopoly on AI.
If you look at these AI companies, so many of them are basically scams. I saw a video about a household humanoid robot that was, surprise surprise, just someone in a VR suit. Many cities have delivery drones now but somebody is remotely driving them.
I saw somebody float the theory that the super-profitable big tech companies are engaging in layoffs not because they don't need people but to pay for the GPUs. It's an interesting idea. A lot of these NVidia deals are just moving money around where NVidia comes out on top with a bunch of equity in these companies should they become trillion dollar companies.
Oh and take out data center building from the US economy and we're in recession. I do think this is a bubble and it will burst sooner rather than later.
this seems like a fake circular economy, ms invests in openai which uses the money in azure, amazon invests in anthropic which pays aws for hardware and infra, nvidia invests in openai which uses the money to buy nvidia hardware, etc
"You give me a million GPUs for free, I'll announce that you have sacrificed a million GPUs to the machine gods, and your stock price will spike 200 times the value of those GPUs."
You can do a lot of money in swindles and bubbles if you time your exit well. There is a fair bit of opportunistic investors who did well in the NFT craze, who speculated knowing fully well that NFT is a craze that will go to zero.
everything will eventually go to zero. we look at some of these things and laugh because we're pretty sure they're going to go to zero within weeks or months vs years. but by the end of all of our lifetimes, most the companies on the stock market will be replaced. the few that won't are probably investment banks like goldman sachs
these deals are made as part of a market so it's more like musical chairs where every time you change a chair you get a ton of money but you don't want to be the one that's stuck without a chair at the end
Central banks don't print money[1] but investment banks do. Think about it like this: Someone deposits $100. The bank pays interest, to make money on to pay that interest, ~$90 is loaned out to someone.
Now, I still have a bank slip that says $100 in the account, and the bank has given $90 of that to someone else. We now have $190 in the economy! The catch is, that money needs to be paid back, so when people need to call in that cash, suddenly the economy only has $10, because the loan needed to be paid back, causing a cash vacuum.
But that paying back is also where the profit is, because you sell off the loan book, and you can get all your money back, including future interest. So you have lent out $90, sold the right to collect the repayments to someone else as a bond, so you now have $120, a profit of $30
That $30 comes pretty much from nowhere. (there are caveats....)
Now we have my bank account, after say a year with $104 in it, the bank has $26 pure profit AND someone has a bond "worth" $90 which pays $8 a year. but guess what, that bond is also a store of value. So even though its debt, it acts as money/value/whatever.
Now, the numbers are made up, so are the percentages. but the broad thrust is there.
https://archive.is/tSrC8
Here is a charitable perspective on what's happening:
- Nvidia has too much cash because of massive profits and has nowhere to reinvest them internally.
- Nvidia instead invests in other companies that use their gpus by providing them deals that must be spent on nvidia products.
- This accelerates the growth of these companies, drives further lock in to nvidia's platform, and gives nvidia an equity stake in these companies.
- Since growth for these companies is accelerated, future revenue will be brought forward for nvidia and since these investments must be spent on nvidia gpus it drives further lock in to their platform.
- Nvidia also benefits from growth due to the equity they own.
This is all dependent on token economics being or becoming profitable. Everything seems to indicate that once the models are trained, they are extremely profitable and that training is the big money drain. If these models become massively profitable (or at least break even) then I don't see how this doesn't benefit Nvidia massively.
> Nvidia has too much cash because of massive profits and has nowhere to reinvest them internally.
Here's an idea: they could make actual GPUs used for games affordable again, and not have Jensen Huang lie on stage about their performance to justify their astronomical prices. Sure, companies might want to buy them for ML/AI and crash the market again but I'm sure a company of their caliber could solve that if they _really_ wanted to.
I also just don’t understand, as someone with no business experience, how they aren’t just pouring all of that money into enhancing their production capacity. That’s very clearly their bottleneck here.
Yes, I’m certain they are spending an astronomical amount on that already, but why not more? Surely paying more money for construction of more facilities still nets gain even if you run into diminishing returns?
Instead they set up this whacko tax laundering scheme? Just seems like more corporate pocket filling to me, an idiot with no business knowledge.
The bottleneck is TSMC, who also make chips for almost every other hardware vendor.
TSMC is indeed increasing their production capability as fast as possible, but it's not easy... chip foundries are extremely expensive, complex, and take serious expertise to operate.
It’s called seeding the market. If they can accelerate the growth of potential customers, it will be more profitable than just increasing production to serve existing customers.
Think of exponential growth — would you rather increase the base or the exponent?
Hedging their bets against a potential sudden downturn in consumption of their product, e.g., an AI bubble exploding? If they invest heavily in production capacity only to find that there is not commensurate consumption, then they'll have lost badly.
Why would they want to do that? The only sector that matter to nvidia is datacenter, its where 90%+ of their profits are. Making their consumer sector even less profitable just seems like a waste of time
Yup. Not just Nvidia. Just look at the quarterly results reported by Amazon, Google, Meta, Microsoft and Apple. Each one is reporting revenues never before seen in history. If you make 100 Billion a quarter you have to spend it on something.
These guys are running hyper optimized cash extraction mega machines. There is no comparison to previous bubbles, cause so no such companies ever existed in the past.
What's shocking is the gulf between those companies and corporate 'normality'.
Eastern Airways, a UK airline, has just gone bust due to accumulated debts of £26 million. That's not even a rounding error for Google, yet was enough to put a 47-year-old company into bankruptcy and its staff out of work.
I think the only historical parallel to this disparity was the era of the East India Company.
100 billion a quarter is Alphabet, right? Given how much click fraud there is, and that every org and business under the sun is held to ransom to feature on the SERP for their own name even — it’s tempting to say Google’s become a private tax on everything.
No, Apple also has 100 billion dollars in revenue despite floundering AI and running a very hardware dependent business.
Odd how they are simultaneously having large layoffs even as reporting record revenues.
The question is where the profits are.
Amazon - 14,000 layoffs; significant
Microsoft - 14,000 (multiple rounds); significant
Meta - 600 layoffs; insignificant for company size
Google - "Several hundred layoffs"; insignificant for a company size
Apple - No layoffs
Source: https://techcrunch.com/2025/10/24/tech-layoffs-2025-list/
Also every one of them has hundreds of thousands of external contractors which are not reported anywhere.
And offshoring is also a huge cost-cutting effort everywhere.
Google's Youtube unit is doing soft layoffs.
https://news.ycombinator.com/item?id=45766368
The layoffs at Amazon and Microsoft are not due to lack of profits. They’re massively profitable right now.
https://www.macrotrends.net/stocks/charts/MSFT/microsoft/ebi...
https://www.macrotrends.net/stocks/charts/AMZN/amazon/ebitda
They're "massively profitable" because they're laying off large portions of a major cost center - labor - and backloading uncoming data center construction costs. As those come due, and labor needs rise again, that profit disappears.
So many such profitable companies are the best possible evidence for the need for drastic antitrust intervention. The lack of competition and regulation is leading to a massive drain on every other sector.
This bubble is caused by excess competition. There are 4 large companies who believe that a large new market is being created so each is investing large amounts without any evidence that there will be a single winner that dominates the future market. None of these companies has anything remotely resembling a monopoly except for Amazon in online retail.
Google: search, chrome, youtube
Microsoft: desktop software
Meta: social media
Maybe on some technical definitions of "monopoly" these aren't monopolies, but nothing remotely resembling a monopoly? come on maan
How much more was worth USD at the beginning of the year?
It is'nt just nvidia though.
Your conclusion about training being the cost factor that will eventually align with profitability in the inference phases relies on training new models not being an endless arms race.
If the inference is profitable and training new models is actually an endless arms race that's actually the best outcome for nvidia specifically.
Only in the short term.
I'm just confused why people think token-based computing is going to be in such demand in the future. It's such a tiny slice of problems worth solving.
It's like how every big co these days is ML. It will transition to LLMs as well.
Just give it a few years.
Yep. Same vibes as “ha ha who needs internet connected appliances” (pretty much all appliances are internet connected now). And the apocryphal “there is a worldwide market for maybe 5 computers”.
> Everything seems to indicate that once the models are trained, they are extremely profitable
Some data would reinforce your case. Do you have it?
Here is my data point: "You Have No Idea How Screwed OpenAI Actually Is" - https://wlockett.medium.com/you-have-no-idea-how-screwed-ope...
Right. As far as I can tell, OpenAI, Grok, etc sell me tokens at a loss. But I am having a hard time figuring out how to turn tokens into money (i.e. increased productivity). I can justify $40-$200 per developer per month on tokens but not more than that.
There’s about 5M software devs in the US so even at $1000/year/person spend, that’s only $5B of revenue to go around. Theres plenty of other uses cases but focusing on pure tech usage, it’s hard to see how the net present value of that equates to multiple trillions of dollars across the ecosystem.
It's the first new way of interacting with computers since the iPhone. It's going to be massively valuable and OpenAI is essentially guaranteed to be one of the players.
I'm waiting for my Google Glass smart glasses to be useful for anything other then annihilating the privacy of everyone around me
Blackberry was a big deal for a while, too
Why is their product not palm? Or windows mobile?
It's not windows mobile because OpenAI was first and is the clear leader in the market. Windows mobile was late to the party and missed their window.
Palm is closer but it's a different world. It's established that Internet advertising companies are worth trillions. It's only in retrospect that what Palm could have been is obvious.
Barring something very unexpected OpenAI is coming out on top. They're prepaying for a good 5-10 years of compute. That means their inference and training for that time are "free" because they've been paid for. They're going to be able to bury their competition in money or buy them out.
Windows mobile by the time it looked like the iPhone was late to the party. But windows had been releasing a mobile os for a long time before that. Microsoft was first, they just didn’t make as good of a product as Apple despite their money.
OpenAI is also first, but it is absolutely not a given that they are the Apple in this situation. Microsoft too had money to bury the competition, they even staged a fake funeral when they shipped windows phone 7.
> Barring something very unexpected
Like the release of an iPhone?
Yep. It would have to be something that dramatic to render all the technology and infrastructure OpenAI has obsolete. But if it's anything like massive data training on a huge number of GPUs then OpenAI is one of the winners.
> Theres plenty of other uses cases
This is where the money is. Anthropic just released claude for excel. If it replaces half of the spreadsheet pushers in the country theyre looking at massive revenue. They just started with coding because theres so much training data and the employees know a lot about coding
I'm not trying to be annoying, but surely if you'd justify spending $200/developer/month, you could afford $250/month...
The reason I wonder about that is because that also seems to be the dynamic with all these deals and valuations. Surely if OpenAI would pay $30 billion on data centers, they could pay $40 billion, right? I'm not exactly sure where the price escalations actually top out.
No? That's a 25% expense increase. You just ate the margins on my product/service, and then some.
why would they sell you at a loss when they have been decreasing prices by 2x every year or so for the last 3 years? people wanted to purchase the product at price "X" in 2023 and now the same product costs X costs 10 times less over the years.. do you think they were always selling at a loss?
Inference cost has been going down for a while now. At what point do you think it will be profitable? When cost goes down by 2x? 5x?
I can't read your hyperbolically titled paywalled medium post, so idk if it has data I'm not aware of or is just rehashing the same stats about OpenAI & co currently losing money (mostly due to training and free users) but here's a non paywalled blog post that I personally found convincing: https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...
This is behind a paywall. Is there a free link you can share ?
https://archive.is/fnzB4
Would love to, and its normally what I do, but archive.is is currently down. At least here from the outer belt.
These kinds of deals were very much a la mode just prior to the .com crash. Companies would buy advertising, then the websites and ad agencies would buy their services and they'd spend it again on advertising. The end result is immense revenues without profits.
Circular investments were also a compounding factor in the Japanese asset price bubble.
The practice was known as “zaitech”
> zaitech - financial engineering
> In 1984, Japan’s Ministry of Finance permitted companies to operate special accounts for their shareholdings, known as tokkin accounts. These accounts allowed companies to trade securities without paying capital gains tax on their profits.
> At the same time, Japanese companies were allowed to access the Eurobond market in London. Companies issued warrant bonds, a combination of traditional corporate bonds with an option (the “warrant") to purchase shares in the company at a specified price before expiry. Since Japanese shares were rising, the warrants became more valuable, allowing companies to issue bonds with low-interest payments.
> The companies, in turn, placed the money they raised into their tokkin accounts that invested in the stock market. Note the circularity: companies raised money by selling warrants that relied on increasing stock prices, which was used to buy more shares, thus increasing their gains from investing in the stock market.
https://www.capitalmind.in/insights/lost-decades-japan-1980s...
And I guess no one of the people who were doing that paid but the community paid the price for this scam ?
There’s one key difference in my opinion: pre-.com deals were buying revenue with equity and nothing else. It was growth for growth’s sake. All that scale delivered mostly nothing.
OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.
> they’re using their equity to buy compute that is critical to improving their core technology
But we know that growth in the models is not exponential, its much closer to logarithmic. So they spend =equity to get >results.
The ad spend was a merry go round, this is a flywheel where the turning grinds its gears until its a smooth burr. The math of the rising stock prices only begins to make sense if there is a possible breakthrough that changes the flywheel into a rocket, but as it stands its running a lemonade stand where you reinvest profits into lemons that give out less juice
There is something about an argument made almost entirely out of metaphors that amuses me to the point of not being able to take it seriously, even if I actually agree with it.
As much as I dislike metaphors, this sounded reasonable to me. Just don't go poking holes in the metaphor instead of the real argument.
Indeed, poking holes in the metaphor is like putting a pin in a balloon, rather than knocking it out of the park by addressing the real argument.
OpenAI invests heavily into integration with other products. If model development stalls they just need to be not worse than other stalled models while taking advantage of brand recognition and momentum to stay ahead in other areas.
In that sense it makes sense to keep spending billions even f model development is nearing diminishing return - it forces competition to do the same and in that game victory belongs to the guy with deeper pockets.
Investors know that, too. A lot of startup business is a popularity contents - number one is more attractive for the sheer fact of being number one. If you’re a very rational investor and don’t believe in the product you still have to play this game because others are playing it, making it true. The vortex will not stop unless limited partners start pushing back.
But, if model development stalls, and everyone else is stalled as well, then what happens to turn the current wildly-unprofitable industry into something that "it makes sense to keep spending billions" on?
I suspect if model development stalls we may start to see more incremental releases to models, perhaps with specific fixes or improvements, updates to a certain cutoff date, etc. So less fanfare, but still some progress. Worth spending billions on? Probably not, but the next best avenue would be to continue developing deeper and deeper LLM integrations to stay relevant and in the news.
The new OpenAI browser integration would be an example. Mostly the same model, but with a whole new channel of potential customers and lock in.
Because they’re not that wildly unprofitable. Yes, obviously the companies spend a ton of money on training, but several have said that each model is independently “profitable” - the income from selling access to the model has overcome the costs of training it. It’s just that revenues haven’t overcome the cost of training the next one, which gets bigger every time.
> the income from selling access to the model has overcome the costs of training it.
Citation needed. This is completely untrue AFAIK. They've claimed that inference is profitable, but not that they are making a profit when training costs are included.
If model development stalls, then the open weight free models will eventually totally catch up. The model itself will become a complete commodity.
It very well might. The ones with most smooth integrations and applications will win.
This can go either way. For databases open source integration tools prevailed, the commercial activity left hosting those tools.
But enterprise software integration that might end up mostly proprietary.
The bigger threat is if their models "stall", while a new up-start discovers an even better model/training method.
What _could_ prevent this from happening is the lack of available data today - everybody and their dog is trying to keep crawlers off, or make sure their data is no longer "safe"/"easy" to be used to train with.
They can also buy out the startup or match the development by hiring more people. Their comp packages are very competitive.
There's at least one contributor here on HN that believes growth in models is strictly exponential: https://www.julian.ac/blog/2025/09/27/failing-to-understand-...
Yeah, except you can keep on squeezing these lemons for a long time before they run out of juice.
Even if the model training part becomes less worthwhile, you can still use the data centers for serving API calls from customers.
The models are already useful for many applications, and they are being integrated into more business and consumer products every day.
Adoption is what will turn the flywheel into a rocket.
Well, the thing is that that kind of hardware chips quickly decrease in value. It's not like the billions spend in past bubbles like the 2000s where internet infrastructure was build (copper, fibre) or even during 1950s where transport infrastructure (roads) were build.
Data centers are massive infrastructural investments similar to roads and rails. They are not just a bunch of chips duct taped together, but large buildings with huge power and networking requirements.
Power companies are even constructing or recommissioning power plants specifically to meet the needs of these data centers.
All of these investments have significant benefits over a long period of time. You can keep on upgrading GPUs as needed once you have the data center built.
They are clearly quite profitable as well, even if the chips inside are quickly depreciating assets. AWS and Azure make massive profits for Amazon and Microsoft.
I think that, at best, that description boils down to Nvidia, Oracle, etc inventing fake wealth to build something and OpenAI building their own fake wealth by getting to use that new compute effectively for free.
There are physical products involved, but the situation otherwise feels very similar to ads prior to dotcom.
The same way the stock market invents a trillion dollars of fake wealth on a strong up day?
That's capital markets working as intended. It's not necessarily doomed to end in a fiery crash, although corrections along the way are a natural part of the process.
It seems very bubbly to me, but not dotcom level bubbly. Not yet anyway. Maybe we're in 1998 right now.
The stock market isn't inventing money. Those investing in the stock market might be, those buying on leverage for example.
Capital markets weren't intended for round trip schemes. If a company on paper hands 100B to another company who gives it back to the first company, that money never existed and that is capital markets being defrauded rather than working as expected.
I think it's worse. The US market feels like a casino to me right now and grift is at an all time high. We're not getting good economic data, it's super unpredictable, and private equity is a disaster waiting to happen IMO. For sure there are smart people able to make money on the gamble, but it's not my jam.
I don't tend to benefit from my predictions as things always take longer to unfold than I think they will, but I'm beyond bearish at present. I'd rather play blackjack.
More money is lost by bears fighting a bull market, than in actual bear market crashes.
I’ve made that mistake already.
I’m nervous about the economic data and the sky high valuations, but I’ll invest with the trend until the trend changes.
> It seems very bubbly to me, but not dotcom level bubbly.
Not? Money is thrown after people without really looking at the details, just trying to get in on the hype train? That's exactly how the dotcom bubble felt like.
Nvidia has a trailing PE of 50. Cisco was 200 At the height of the dotcom bubble.
Nowhere near that level. There’s real demand and real revenue this time.
It won’t grow as fast as investors expect, which makes it a bubble if I’m right about that. But not comparable to the dotcom bubble. Not yet anyway.
We shouldn't judge whether an indicator is stable or okay only by looking to see if its the highest historical value.
PE ratios of 50 make no sense, there is no justification for such a ratio. At best we can ignore the ratio and say PE ratios are only useful in certain situations and this isn't one of them.
Imagine if we applied similar logic to other potential concerns. Is a genocide of 500,000 people okay because others have done drastically more?
I’m not asking if it makes sense, I’m simply pointing out that by that measure this is much less extreme than 2000. As I stated, I think we’re in a bubble, so valuations won’t make much sense.
If you have a better measure, share it. I trust data more than your or my feelings on the matter.
Unless you have evidence that this measure of yours is a reliable predictor of how big a bubble is, it's on par with my gut feeling.
I sell you a cat for $1B and you sell me a dog for $1B and now we’re both billionaires! Whether the capital markets “want” that or not it’s still silly.
If we’re both willing to pay that in a free market economy, then we both leave the deal happy.
Things are worth what people are willing to pay for them. And that can change over time.
Sentiment matters more than fundamental value in the short term.
Long term, on a timescale of a decade or more, it’s different.
> If we’re both willing to pay that in a free market economy
The thing is: you've paid nothing - all you did was trade pets and played an accounting trick to make them seem more valuable than they are.
Is that not fraud?
> OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.
I'm commenting here in case a large crash occurs, to have a nice relic of the zeitgeist of the time.
Happy to have provided. I’m not an AI bull and not in any way invested in the U.S. economy besides a little money in funds, but I do try to think about the war of today vs the war of yesterday. Hopefully that’s always en vogue.
Eventually when ChatGPT replaces Google Search, they will run ads, and so have that whole revenue stream. Still isn't enough money to buy the trillions worth of infrastructure they want, but it might be enough to keep the lights on.
That's an insightful point! Making insightful points like that one is taxing on the brain, you should consider an electolyte drink like Brawndo™ (it's got what plants crave) to keep yourself sharp!
Ugh I hate it so much, but you're right, it's coming.
One thing I've been contemplating lately is that from a business perspective, when your competitors expand their revenue avenues (generally through ads) you have three options: copy them to catch up, do nothing and perish, and lobby the government for increased consumer protections.
I've started to wonder why we see so few companies do this. It's always "evil company lobbying to harm the its customers and the nation." Companies are made up of people, and for myself, if I was at a company I would be pushing to lobby on behalf of consumers to be able to keep a moral center and sleep at night. I am strongly for making money, but there are certain things I am not willing to do for it.
Targeted advertising is one of these things that I believe deserves to fully die. I have nothing against general analytics, nor gathering data about trends etc, but stalking every single person on the internet 24/7 is something people are put in jail for if they do it in person.
> critical to improving their core technology
It is at the very least highly debatable how much their core technology is improving from generation to generation despite the ballooning costs.
Dotcom scams included "vendor financing", where telecom equipment providers invested in their customers who built infrastructure:
https://time.com/archive/6931645/how-the-once-luminous-lucen...
The customers bought real equipment that was claimed to be required for the "exponential growth" of the Internet. It is very much like building data centers.
The assumption is that they have a large moat.
If they don't then they're spending a ton of money to level up models and tech now, but others will eventually catch up and their margins will vanish.
This will be true if (as I believe) AI will plateau as we run out of training data. As this happens, CPU process improvements and increased competition in the AI chip / GPU space will make it progressively cheaper to train and run large models. Eventually the cost of making models equivalent in power to OpenAI's models drops geometrically to the point that many organizations can do it... maybe even eventually groups of individuals with crowdfunding.
OpenAI's current big spending is helping bootstrap this by creating huge demand for silicon, and that is deflationary in terms of the cost of compute. The more money gets dumped into making faster cheaper AI chips the cheaper it gets for someone else to train GPT-5+ competitors.
The question is whether there is a network effect moat similar to the strong network effect moats around OSes, social media, and platforms. I'm not convinced this will be the case with AI because AI is good at dealing with imprecision. Switching out OpenAI for Anthropic or Mistral or Google or an open model hosted on commodity cloud is potentially quite easy because you can just prompt the other model to behave the same way... assuming it's similar in power.
> This will be true if (as I believe) AI will plateau as we run out of training data.
Why would they run out of training data? They needed external data to bootstrap, now it's going directly to them through chatgpt or codex.
As much ChatGPT says I’m basically a genius for asking it a good Vegan cake recipes, I don’t think that is providing it any data it doesn’t already have that makes it anyway better. Also at this point the massive increases in data and computing power seem to bring ever decreasing improvements (and sometimes just decline), so it seems we are simply hitting a limit this kind of architecture can achieve no matter what you throw at it.
ChatGPT chat logs contain massive amount of data teased out of people’s brains. But much of it is lore, biases, misconceptions, memes. There are nuggets of gold in there but it’s not at all clear if there’s a good way to extract them, and until then chat logs will make things worse, not better.
I’m thinking they eventually figure out who is the source of good data for a given domain, maybe.
Even if that is solved, models are terrible at long tail.
When I say models will plateau I don't mean there will be no progress. I mean progress will slow down since we'll be scraping the bottom of the barrel for training data. We might never quite run out but once we've sampled every novel, web site, scientific paper, chat log, broadcast transcript, and so on, we've exhausted the rich sources for easy gains.
Chat logs don’t run out. We may run out of novelty in those logs, at which point we may have ran out of human knowledge.
Or not - there still knowledge in people heads that is not bleeding into ai chat.
One implication here is that chats will morph to elicit more conversation to keep mining that mine. Which may lead to the need to enrage users to keep engagement.
Apple new M5 can run models over 10B parametres and if they give their new Studio next year enough juice, it can run maybe 30B local model. How long is it that you can run a full GPT-5 on your laptop or homeserver with few grands worth of hardware? What is going to happen to all these GPU farms, since as I understood they are fairly useless for anything else?
Very few people own top of the line Macs and most interactions are on phones these days. We are many generations of phones away from running GPT-5 on a phone without murdering your battery.
Even if that weren't true having your software be cheaper to run is not a bad thing. It makes the software more valuable in the long run.
Quantized, a top-end Mac can run models up to about 200B (with 128GiB of unified RAM). They'll run a little slow but they're usable.
This is a pricey machine though. But 5-10 years from now I can imagine a mid-range machine running 200-400B models at a usable speed.
Wasn’t there also a bunch of telecom infrastructure created in the dot-com bubble, tangible products created, etc? Things like servers, telephone wires, underwater internet cables, tech-storefronts, internet satellites, etc.
so much fiber was run that in the US over 90% of it wasn't even used
>they’re using their equity to buy compute that is critical to improving their core technology
That's only like 1/8th of the flywheel, though.
> There’s one key difference in my opinion
The other difference (besides Sam's deal making ability) is, willing investors: Nvidia's stock rally leaves it with a LOT of room to fund big bets right now. While in Oracle's case, they probably see GenAI as a way to go big in the Enterprise Cloud business.
> Nvidia's stock rally leaves it with a LOT of room to fund big bets right now
And then what happens if the stock collapses?
Hence the emphasis on right now.
> I have some faith it could go another way.
I wonder how they felt during the .com era.
Yes, this time is different, trust big bro sama.
Gita Gopinath (imf’s economist) sounded the alarm on the scale of this - https://www.afr.com/wealth/investing/the-crash-that-could-to...
Stopped clock…
2020: https://www.youtube.com/watch?v=rpiZ0DkHeGE 2019: https://www.cadtm.org/spip.php?page=imprimer&id_article=1732...
The original "Tech" boom was an infrastructure boom by the telecoms funded by leveraged debt. It was an overbuild mismatch with the market timing. If you brought forward the timeline to when that infrastructure was used (late 2000s) you probably would never have had the crash.
This boom is a data center boom with AI being the software layer/driver. This one potentially has a lot longer to run even though everyone is freaking out now. If you believe the AI is rebuilding compute then this changes our compute paradigm in the future. As well as long as we don't get an over leveraged build out without revenue coming in the door. I think we are seeing a lot of revenue come in for certain applications.
The companies that are all smoke and mirrors built on chatGPT with little defensibility are probably the same as the ones you are referring to in the current era. Or the AI tooling companies.
To be clear circular deal flow is not a good look.
I can see the both sides of bull and bear at this moment.
One interesting aspect of this is that, with the exception of OpenAI, all of the companies leading this boom generate massive amounts of income from other arms of their buinesses. I think this is one reason for the potentially longer run, since they can subsidize AI CapEx with these cash flows for quite a while.
I'd gander a guess that there's nothing tech specific here and that fraudulent schemes are well defined for the SEC and commercial courts to take action if something is not kosher
It's usually not actually fraud. It's the amazon reinvesting back into growth, except the unit economics don't work if everyone cashes out at the same time, and if anyone starts cashing out the growth stops and everyone cashes out before it's too late.
Exactly, everything old is new again. This was one of the drivers of the original dot-com bubble.
A couple of thoughts on the big picture:
* Rise of AI is one of the biggest “transfers” of IP-generated wealth.
* It is also a dramatic increase in the “software is eating the world” trend, or at least an anticipation of such. It kinda turned from everyone dragging their feet through software andoptin over the course of 30 years into a massive stampede.
Reminds me of Iceland pre 2008 - lot of circular & complex deals - but now it's different
Related
https://www.theregister.com/2025/10/29/microsoft_earnings_q1...
Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter
Edit: the following is incorrect. I didn't know that the change to IRC § 174 was cancelled this summer.
------
What's crazy is that with the 2021 changes to IRC § 174 most software r&d spending is considered capital investment and can't be immediately expensed. Has to be amortized over 5 years.
I don't know how that 11.5B number was derived, but I would wager that the net loss on income statement is a lot lower than the net negative cash flow on cash flow statement.
If that 11.5B is net profit/loss, then whatever the portion of the expense part of the calculation that's software R&D could be 5x larger if it weren't for the new amortization rule.
Wasn't that change cancelled this summer?
It was
Real question -- how else is OpenAI supposed to fund itself? It has capital requirements that the most moneyed business companies can't provide. So it has to come up with ways to get access to money while de-risking the terms. Not saying the circularity works but I don't know how else you raise at their scale.
This money is well beyond VC capability.
Either this lets them build to net positive without dying from painful financing terms or they explode spectacularly. Their rate of adoption it seems to be the former.
They could try selling their services above cost.
It's highway robbery how cheap tokens are right now. Enjoy the free lunch while it lasts
Exactly - its like every VC company in the history of VC subsidized costs for growth. Once those tentacles have latched beware the exit costs though!
The tentacles seem a bit limp and disorientated on this one. There are lots of them but they just seem to flop wetly against the windows. I hope they're not going to start decomposing and stink the place up.
If you can only continue to fund a venture using scam-like structures, then maybe it's time to re-evaluate what the goals and value prop of the unfundable venture is.
I don't think you understand how ventures are funded.
Maybe, or maybe you think all venture funding schemes are scams. I am not totally there just yet.
It's incredible how Tesla used to lose a few hundred million a year and analysis shows would freak out claiming they'd never be profitable. Now Rivian can lose 5 billion a year and I don't hear anything about it, and OpenAI can lose 11 billion in a quarter and Microsoft still backs them.
I do think this is going to be a deeply profitable industry, but this feels a little like the WeWork CEO flying couches to offices in private jets
> Now Rivian can lose 5 billion a year and I don't hear anything about it, and OpenAI can lose 11 billion in a quarter and Microsoft
Rivian stock is down 90%, and I fairly regularly read financial news about it having bad earnings, stock going even lower, worst-in-industry reliability, etc etc.
I don't know why you don't hear about it, but it might be because it's already looking dead in the water so there's no additional news juice to squeeze out of it.
That's true, I shouldn't have written it off and was too eager to make the analogy.
There was a point where because of Tesla's enormous profits, it was seen as ok for Rivian to lose that much in a year, which was incredible because it's about the same amount of money Tesla lost during its entire tenure as a public company. You're right though they've been criticized for it and have paid the (stock) price for it.
Rivian lost something like $5B in 2024, but they're on track to only lose $2.25B in 2025. That trend line is clear. In 2026 they release a much lower cost model, and a lot of that loss has been development of that model. They probably won't achieve profitability in 2026, but if they get their loss down to $1B in 2026, in 2027 we'll likely see them go net positive.
> like the WeWork CEO flying couches to offices in private jets
I found there was more than just couches on the WeWork private jets:
https://www.inverse.com/input/tech/weworks-adam-neumann-got-...
How this guy is not in jail is beyond me.
It reminds me a lot of the late 1990s.
We had an impressive new technology (the Web), and everyone could see it was going to change the world, which fueled a huge gold rush that turned into a speculative bubble. And yes, ultimately the Web did change the world and a lot of people made a lot of money off of it. But that largely happened later, after the bubble burst, and in ways that people didn't quite anticipate. Many of the companies people were making big bets on at the time are now fertile fodder for YouTube video essays on spectacular corporate failures, and many of the ones that are dominant now were either non-existent or had very little mindshare back in the late '90s.
For example, the same year the .com bubble burst, Google was a small new startup that failed to sell their search engine to Excite, one of the major Web portal sites at the time. Excite turned them down because they thought $750,000 was too high a price. 2 years later, after the dust had started to settle, Excite was bankrupt and Google was Google.
And things today sure do strike me as being very similar to things 25, 30 years ago. We've got an exciting new technology, we've got lots of hype and exuberant investment, we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal. And neither side really wants to listen to the more sober voices pointing out that both these things have been true at the same time many times in the past, so maybe it's possible for them to both be true at the same time in the present, too. And, as always, the people who are most confident in their ability to predict the future ultimately prove to be no more clairvoyant than the rest of us.
> we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal.
Um I think nobody is really denying that we are in a bubble. It's normal for new tech and the hype around it. Eventually the bad apples are weeded out and some things survive, others die out.
The first disagreement is how big the bubble is, i.e. how much air is in it that could vanish. And that's because of the second disagreement, which is about how useful this tech is and how much potential it has. It's clear that it has some undeniable usefulness. But some people think we'll soon have AGI replacing everybody and the opposite is that's all useless crap beyond a few niche applications. Most people fall somewhere in between, with a somewhat bimodal split between optimists and skeptics. But nobody really contends that it's a bubble.
>and OpenAI can lose 11 billion in a quarter and Microsoft still backs them.
For Microsoft, and the other hyperscalers supporting OpenAI, they're all absolutely dependent on OpenAI's success. They can realistically survive through the difficult times, if the bubble bursts because of a minor player - for example if Coreweave or Mistral shuts down. But if the bubble bursts because the most visible symbol of AI's future collapses, the value-destruction for Microsoft's shareholders will be 100x larger than OpenAI's quarterly losses. The question for Microsoft is literally as fundamental as "do we want to wipe $1tn off our market cap, or eat $11bn losses per quarter for a few years?" and the answer is pretty straightforward.
Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.
> Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.
Yeah true, the whole pivot from non-profit to Too Big to Fail is pretty amazing tbh.
They’re dependent on usage of their cloud. I don’t agree that they are as dependent on OAI as you suggest. Ultimately, we’ve unlocked a new paradigm and people need GPUs to do things - regardless of whether that GPU is running OAI branded software or not.
Why? Microsoft has permanent, royalty free access to the frontier models. If OpenAI went under, MSFT would continue hosting GPT-5 on Azure, GitHub Copilot, etc. and not be affected in the slightest.
> this feels a little like the WeWork CEO flying couches to offices in private jets
Fascinating! I unearthed the TL;DR for anyone else interested:
* WeWork purchased a $60 million Gulfstream G650ER private jet for Neumann's use.
* The G650ER was customized with two bedrooms and a conference table.
* Neumann used the jet extensively for global travel, meetings, and family trips.
* The jet was also used to transport items like a "sizable chunk" of marijuana in a cereal box, which might be worse and more negligent than couches.
Sources:
https://www.vanityfair.com/hollywood/2022/03/adam-neumann-re...
https://nypost.com/2021/07/17/the-shocking-ways-weworks-ex-c...
The couch fascinate me the most because it's almost justifiable. Like offices need furniture and grand openings should be nice; however the cost could never be recovered and the company was way too big to be doing things that don't scale.
In a similar vein, LLM's/AI are clearly impressive technologies that can be done profitably. Spending billions on a model however may not be economically feasible. It's a great example of runaway spending, whereas the weed thing feels more along the lines of a drug problem to me.
Very few industries are “deeply profitable” absent the illegal abuse of monopoly power
Don't forget the perfectly legal use of legislation and bureaucratic precedent that gives them "soft/lossy monopoly" power or all but forces people do to business with them.
OpenAI is pretty clearly pushing for complex government regulation as a way to protect their lead and prevent new entrants in the market.
And as we saw, once a model is trained you need very little compute to run it and there is very little advantage in begin the 1st model and the 10th model.
Monopoly in this field is impossible, your product won't ever be so good that the competition does not make sense
Add to this that AGI is impossible with LLMs...
I’m not so sure. Look for more gov regulations that make it hard for startups. Look for stricter enforcement of copyright (or even updates to laws) once the big players have secured licensing deals, to cut off the supply of cheap training data.
And did people listen to those "analyses" and dump Tesla, or its stock kept skyrocketing?
Tesla is a meme stock.
not true.
That was back in the mid 2010s right? Companies had yet to reach 1T valuation. 5bil against 1T is a drop in the bucket
Investors are trying to bet on OpenAI being the first to replace all human skilled labor. Of course, this is foolish for a few reasons:
1. Performance of AI tools improving but marginally so in practice 2. If human labor was replaced, it's the start of global societal collapse so any winnings would be moot.
You can't honestly be comparing a shitty real estate play like WeWork to the real functional benefits people get out of ChatGPT.
ChatGPT was mind blowing when you first used it. WeWork is a real estate play fronted by a self aggrandizing self dealing CEO.
The winner takes it all, so it is reasonable to bet big to be the one.
The one what? What is the secret sauce that will distinguish one LLM from another? Is it patentable? What's going to prevent all of the free LLMs from winning the prize? An AI crash seems inevitable.
It could end up like Search did, at first you had Lycos, AskJeeves, Altavista etc. and then Google became absolutely dominant.
They want to be the Google in this scenario.
Then they're doing it backwards. Google first built a far superior product, then pursued all the tricks to maintain their monopoly. OpenAI at best has the illusion of a superior product, and even that is a stretch.
Google was by far the best product. Maybe an LLM provider will emerge in that way, but it seems they are all very similar in capability right now.
I don't believe google won the search engine wars because they had the best product, while it may be true, the won because the of the tools they provided to their users. Email, cloud storage, docs/sheets/drive, Chrome, etc
They were already pretty dominant in search by the time they released most if not all of those. They got into that position by being the better search engine - better results and nicer to use (clean design, faster loading times).
You need the infrastructure, not just the model.
The model can be free, but the infrastructure (data center) ain't.
The goal isn't to be the best LLM, the goal is to be the first self-improving LLM.
On paper, whoever gets there first, along with the needed compute to hand over to the AI, wins the race.
Maybe in paper, but only on paper. There are so many half baked assumptions in that self-improvement logic.
The moment properly self-improving AI (that doesn't run into some logistic upper bound of performance) is released, the economy breaks.
The AI, having theoretically the capacity to do anything better than everyone else, will not need support (in resources or otherwise) from any other business except perhaps once to kickstart its exponential growth. If it's guarded, every other company becomes instantly worthless on the long term, and if not anyone with a bootstrap-level of compute will be able to also, do anything ever on a long enough time frame.
It's not a race for ROI, it's to have your name go in the book as one of the guys that first obsoleted the relationship between effort, willpower, intelligence, etc. and the ability to bring arbitrary change to the world.
The machine god would still need resources provided by humans on their terms to run; the AI wouldn’t sweat having to run, for instance, 5 years straight of its immortality just to figure out a 10 years plan to eventually run at 5% less power than now, but humans may not be willing to foot the bill for this.
There’s no guarantee that the singularity makes economic sense for humans.
Presuming the kind of runaway superintelligence people usually discuss, the sort with agency, this just turns into a boxing problem.
Are we /confident/ a machine god with `curl` can't gain its own resilient foothold on the world?
Self-improving LLM is as probable as a perpetual motion machine.
Practically, LLMs train on data. Any output of an LLM is a derivative of the training data and can't teach it anything new.
Conceptually, if a stupid AI can build a smart AI, it would mean that the stupid AI is actually smart, otherwise it wouldn't have been able too.
Your logic might make intuitive sense, but I don't think it is as ironclad as you portray it.
The fact is, there is no law of physics that prevents the existence of a system that can decrease its internal entropy (complexity) on its own, provided you constantly supply it with energy (negative entropy). Evolutionary algorithm (or "life") is an example of such a system. It is conceivable that there is a point when a LLM is smart enough to be useful for improving its own training data, which then can be used to train a slightly smarter version, which can be used to improve the data even more etc... Every time you inference to edit the training data and train, you are supplying a large amount of energy into the system (both inferencing and training consumes a lot of energy). This is where the decrease in entropy (increase in internal model complexity and intelligence) can come from.
Silicon valley capital investment firms have always exploited regulatory capture to "compete". The public simply has a ridiculously short memory of the losers pushed out of the market during the loss-leader to exploit transition phase.
Currently, the trend is not whether one technology will outpace the other in the "AI" hype-cycle ( https://en.wikipedia.org/wiki/Gartner_hype_cycle ), but it does create perceived asymmetry with skilled-labor pools. That alone is valuable leverage to a corporation, and people are getting fired or ripped off anticipating the rise of real "AI".
https://www.youtube.com/watch?v=_zfN9wnPvU0
One day real "AI" may exist, but a LLM or current reasoning model is unlikely going to make that happen. It is absolutely hilarious there is a cult-like devotion to the AstroTurf marketing.
The question is never whether this is right or wrong... but simply how one may personally capture revenue before the Trough of disillusionment. =3
I don't really believe that, and I thought it was interesting on Meta's earnings call that Zuck (or the COO) said that it seems unlikely at this point that a single company will dominate every use of LLMs/image models, and that we should expect to see specialization going forward.
Do you have any reasoning to support the notion that this market is winner takes all?
With enough money to lobby, they can make it a winner takes all market (ala, a regulated monopoly).
Want to bet? I see this claim all over the internet and do not believe it for a moment.
But then you get stuff like Deepseek R1.
As I understand the argument, it's that AI will reach a level where it's smart enough to improve itself, leading to a feedback loop where it takes off like a rocket. In this scenario, whoever is in second place is left so far in the dust that it doesn't matter. Whichever model is number one is so smart that it's able to absorb all economic demand, and all the other models will be completely obsolete.
This would be a terrifyingly dystopian outcome. Whoever owns this super intelligence is not going to use it for the good of humanity, they're going to use it for personal enrichment. Sam Altman says OpenAI will cure cancer, but in practice they're rolling out porn. There's more immediate profit to be made from preying on loneliness and delusion than there is from empowering everyone. If you doubt the other CEOs would do the same, just look at them kissing the ass of America's wannabe dictator in the White House.
Another possible outcome is that no single model or company wins the AI race. Consumers will choose the AI models that best suit their varying needs, and suppliers will compete on pricing and capability in a competitive free market. In this future, the winners will be companies and individuals who make best use of AI to provide value. This wouldn't justify the valuations of the largest AI companies, and it's absolutely not the future that they want.
Does the winner take it all?
I agree this is a reasonable bet though but for different reason, I believe this is a large scale exploitation where money is systematically siphoned away from workers and into billionaires via e.g. hedgefunds, bailouts, dividend payouts, underpay, wagetheft, etc. And the more they blow out this bubble the more money they can exploit out from workers. As such it is not really a bet, but rather the cost of business. Profits are guaranteed as long as workers are willing to work for yours.
Will Sam Altman's fall be as legendary as Sam Bankman-Fried's?
I'm assuming Altman wasn't screwing his CFO and letting her post to 4chan about it, so probably not that bad.
Altman was screwing / raping his sister so not quite sure who is worse.
Itll be worse. He is doing this for ego, not money from what I see.
SBF’s fall is almost forgotten already.
Hopefully moreso
ressentiment: the forum
Most of the funds lost to SBF were recovered. And CPZ has a pardon. Crypto has evaporated about $2 trillion in assets since then.
The funds in USD were recovered because bitcoins value is 5x higher than it was when he got arrested.
And a set of fundamentally sound investments, including Anthropic iirc.
This is painting a target around the arrow. AFAIK, they had so much money to throw around for a spray and pray, similar to VC firms
I don't understand why you think it's OK to flagrantly violate financial laws for consumer protection, just because the bet got lucky?
I was at a bitcoin conference in 2018. One guy in the booth told me that the company had set up a $100M fund to fund startups that agreed to build apps on their blockchain. I wonder where they are now?
As long as they kept another $100M in coins then fairly happy.
Okay, that article is a little bit shallow. I just summarises the headlines of the last weeks of circular deals. But is there also a more in depth article that sheds a little more light onto what this actually means? From a financial perspective?
Ed Zitron has been shouting into the void about this for quite some time: https://www.wheresyoured.at/the-case-against-generative-ai/
He also has a podcast called Better Offline, which is slightly too ad heavy for my taste. Nevertheless, with my meagre understanding of the large corporate finances I was not able to find any errors in his core argument regardless of his somewhat sensationalist style of writing.
My complaint about Ed Zitron is that he's _always_ shouting into the void about something. A lot of the issues he covers are legitimate and deserve the scorn he gives them but at some point it became hard for me to sort the signal from the noise.
Ed Zitron sucks because he constantly spitballs on easy to confirm topics and keeps being wrong in ways that should be trivial to check and fix. Case in point:
https://bsky.app/profile/notalawyer.bsky.social/post/3ltkami...
It’s probably hard to do that in a news context because the real rationales are pretty tight.
Depending on your POV OpenAI and the surrounding AI hype machine is at the extremes either the dawn of a new era, or a metastasized financial cancer that’s going to implode the economy. Reality lies in the middle, and nobody really knows how the story is going to end.
In my personal opinion, “financial innovation” (see: the weird opaque deals funding the frantic data center construction) and bullshit like these circular deals driving speculation is a story we’ve seen time and time again, and it generally ends the same way.
An organization that I’m familiar with is betting on the latter - putting off a $200M data center replacement, figuring they’ll acquire one or two in 2-3 years for $0.20 on the dollar when the PE/private debt market implodes.
> Reality lies in the middle
The argument to moderation/middle ground fallacy is a fallacy.
https://en.wikipedia.org/wiki/Argument_to_moderation
Not really. The idea that reality lies _in_ the middle is fairly coherent. It's not, on it's face, absolutely true but there are and infinite number of options between two outcomes so the odds are overwhelmingly in the favor that the truth lies somewhere in between. Is either side totally right about every single point of contention between them? Probably not, so the answer is likely in the middle. The fallacy is a lot easier to see when you're arguing about one precise point. In that case, someone is probably right and wrong. But, in cases where a side is talking about a complex event with a multitude of data points, both extremes are likely not completely correct and the answer does, indeed, lie in between the extremes.
The fallacy is that the true lies _at_ the middle, not in the middle.
You're thinking in one dimension. Truth. Add another dimension, time, and now we're talking about reality.
Ultimately, if both sides have a true argument, the real issue is which will happen first in time? Will AI change the world before the whole circular investment vehicle implode? Or after, like happened with the dotcom boom?
Flat-earthers: The earth is flat.
Round-earthers: The earth is round.
"Reality lies in the middle" argument: The earth is oblong, not a perfect sphere, so both sides were right.
If we're gonna be pendantic about fallacies, you're using argument by analogy and it's not in any way comparable to the claims GP made about OpenAI.
"Round" does not mean spherical and both of these claims are falsifiable and mutually exclusive.
The AI situation doesn't not have two mutually exclusive claims, it has two claims on the opposite sides of economic and cultural impact that are differences of magnitude and direction.
AI can both be a bubble and revolutionary, just like the internet.
>infinite number of options between two outcomes so the odds are overwhelmingly in the favor that the truth lies somewhere in between
This is totally fallacious.
It isn't.
"AI is a bubble" and "AI is going to replace all human jobs" is, essentially, the two extremes I'm seeing. AI replacing some jobs (even if partially) and the bubble-ness of the boom are both things that exist on a line between two points. Both can be partially true and exist anywhere on the line between true and false.
No jobs replaced<-------------------------------------->All jobs replaced
Bubble crashes the economy and we all end up dead in a ditch from famine<---------------------------------------->We all end up super rich in the post scarcity economy
It is completely fallacious.
For one, in higher dimensions, most of the volume of a hypersphere is concentrated near the border.
Secondly, and it is somewhat related, you are implicitly assuming some sort of convexity argument (X is maybe true, Y is maybe true, 0.5X + 0.5 Y is maybe true). Why?
I agree there is a large continuum of possibilities, but that does not mean that something in the middle is more likely, that is the fallacious step in the reasoning.
> Depending on your POV OpenAI and the surrounding AI hype machine is at the extremes either the dawn of a new era
Eh, in a way they're not mutually exclusive. Look back at the dot com crash: it was all about things like online shopping, which we absolutely take for granted and use every day in 2025. Same for the video game crash in the 80s. They are both an overhyped bubble and and the dawn of a new era.
Exactly. I think the difference is that we've developed a cadre of people are think 24x7 about capturing value in a way that makes dotcom era moguls look naive.
AI is a powerful and compelling technology, full stop. The sausage making process where the entire financial economy is pivoting around it is a different matter, and can only end in disaster.
IMF chief economist has thoughts: https://www.afr.com/wealth/investing/the-crash-that-could-to...
Sam Conman.
They are fueling rich family office money into the bank accounts of their personell. Not bad not bad.
The fact that it is private equity that is going to evaporate when the bubble bursts is the only silver lining I can see. However, my natual cynicism makes me bet they'll spend whatever they've got left over on their pet politicians to use government (ie, public funding) to bail themselves back out.
OpenAI is raising funding based on its own forecasts for AI demand growth, and sending most of it to Oracle, MSFT, Nvidia as well as paying insiders enormous salaries.
There are some interesting parallels here with the business model described in the book Confessions of an Economic Hitman. Developing countries take out huge loans from US lenders to build an electric grid, based on inflated forecasts from US consultancies they hired. The countries take on the debt, but the money mostly bypasses them and lands in the pockets of US engineering firms doing the construction, and government insiders taking kickbacks for greasing the wheels.
When the forecasted growth in industrial production fails to materialize, the countries are unable to repay the debt and have no option but to offer the US access to their resources, ports and votes in the UN.
What happens when OpenAI's forecasts of gargantuan growth fail to materialize and they're unable to sell more stock to pay off lenders? Does Uncle Sam step in with a bailout for "national security" reasons?
I can understand how someone's approach can be "hack all the things", however, at some point you run into the fundamental boundaries of the box you are in and you can't hack your way around those.
That doesn't really matter: as long as there are idiots who will buy your inflated stock you've externalized the problem for yourself whilst staying within the box.
Lazy Susan is not a hack - it’s a scam.
Here's a diagram from Morgan Stanley showing OpenAI's hair-ball of deals:
https://x.com/akcakmak/status/1976204708655079840/photo/1
It really is a hair-ball: purchase-sale relationships, revenue share agreements, investments, vendor loans, and repurchase agreements, etc.
The most interesting thing here is that it's now reached the NY Times.
The numbers are quite historic in size.
“I’m not hearing any music.”
Given that AI is a national security matter now, I'd expect the U.S.A to step in and rescue certain companies in the event of a crash. However, I'd give higher chances to NVIDIA than OpenAI. Weights are easily transferrable and the expertise is in the engineers, but ability to continue making advanced chips is not as easily transferred.
If they're too-important-to-fail they're too important not to be broken up or nationalised.
While that is a sensible opinion the 2008 crash showed that it is not the opinion of decision makers in the US.
I’m curious if those of you calling for nationalization have worked for the government or a state-owned enterprise like Amtrak. People should witness the effects of long-term public sector ownership on productivity and effectiveness in a workplace.
Yeah, like IBM and Intel and GE and GM are shining examples of how effectively the private sector runs companies. Maybe large enterprises are by their nature inefficient. Maybe productivity isn't the best metric for a utility. We could, for instance, prioritize resiliency, longevity, accessibility, and environmental concerns.
Even those problematic companies exemplify the difference: when enterprises are mismanaged and fail, capital is reallocated away from them.
The US government just allocated $10b towards Intel, and bailed out GM in the past. So what you said is clearly not the case. Now we have publicly-funded private management that is failing. At least if they were publicly owned and managed outright, they wouldn't be gutted by executives prioritizing quarterly profits.
Executives should prioritize producing things people are willing to pay money for cheaply. If there is a bias towards short-termism, that is a governance problem that should be addressed.
I agree that the US taking stakes or picking winners is bad, I don't think it follows that nationalization is the solution.
The USPS does more for its workers and customers than FedEx. There are addresses FedEx won't service due to "inefficiencies", hand over packages to the USPS for delivery.
Fwiw, this is a facile argument. You make no attenpt to demonstrate that after major reorganization (breakup / nationalization) that the firm will continue to have the desirable attributes (innovation, efficincy, ability to build) that made them too important to fail.
Why is ML knowledge "in the engineers" while chip manufacturing apparently sits in the company/hardware/something else than the engineers/humans?
Read up a bit on the effort needed to get a fab going, and the yield rates. While engineers are crucial in the setup, the fab itself is not as 'fungible' as the employees involved.
I can spin up a strong ML team through hiring in probably 6-12 months with the right funding. Building a chip fab and getting it to a sensible yield would take 3-5 years, significantly more funding, strong supply lines, etc.
> I can spin up a strong ML team through hiring in probably 6-12 months with the right funding
Not sure what to call this except "HN hubris" or something.
There are hundreds of companies who thought (and still think) the exact same thing, and even after 24 months or more of "the right funding" they still haven't delivered the results.
I think you're misunderstanding how difficult all of this is, if you think it's merely a money problem. Otherwise we'd see SOTA models from new groups every month, which we obviously aren't, we have a few big labs iteratively progressing SOTA, with some upstarts appearing sometimes (DeepSeek, Kimi et al) but it isn't as easy as you're trying to make it out to be.
There’s a lot in LLM training that is pretty commodity at this point. The difficulty is in data - and a large part of why it has gotten more challenging is simply that some of the best sources of data have locked down against scraping post-2022 and it is less permissible to use copyrighted data than the “move fast and break things” pre-2023 era.
As you mentioned, multiple no name chinese companies have done it and published many of their results. There is a commodity recipe for dense transformer training. The difference between Chinese and US is that they have less data restrictions.
I think people overindex on the Meta example. It’s hard to fully understand why Meta/llama have failed as hard as they have - but they are an outlier case. Microsoft AI only just started their efforts in earnest and are already beating Meta shockingly.
Fully agree. I also think we are deep into the diminishing returns territory.
If I have to guess OAI and others pay top dollars for talent that has a higher probability of discovering the next "attention" mechanism and investors are betting this is coming soon (hence the hige capitalizations and willing to loive with 11B losses/quarter). If they lose patience in throwing money at the problem I see only few players remaining in the race because they have other revenue streams
>Otherwise we'd see SOTA models from new groups every month
We do.
It's just that startups don't go after the frontier models but niche spaces which are under served and can be explored with a few million in hardware.
Just like how open AI made gpt2 before they made gpt3.
> We do.
> It's just that startups don't go after the frontier models but niche spaces
But both of "New SOTA models every month" and "Startups don't go for SOTA" cannot be true at the same time. Either we get new SOTA models from new groups every month (not true today at least) or we don't, maybe because the labs are focusing on non-SOTA instead.
State of the art doesn't mean frontier.
I've always taken that term literally, basically "top of the top". If you're not getting the best responses from that LLM, then it's not "top of the top" anymore, regardless of size.
Then something could be "SOTA in it's class" I suppose, but personally that's less interesting and also not what the parent commentator claimed, which was basically "anyone with money can get SOTA models up and running".
Edit: Wikipedia seems to agree with me too:
> The state of the art (SOTA or SotA, sometimes cutting edge, leading edge, or bleeding edge) refers to the highest level of general development, as of a device, technique, or scientific field achieved at a particular time
I haven't heard of anyone using SOTA to not mean "at the front of the pack", but maybe people outside of ML use the word differently.
A sota decoder model is a bigger deal than yet another trillion parameter encoder only model trained on benchmarks.
I don't get why you think that the only way that you can beat the big guys is by having more parameters than them.
Right. I could spin up a strong ML team, an AI startup, build a foundational model, etc give a reasonable amount of seed capital.
Build a chip fab? I’ve got no idea where to start, where to even find people to hire, and i know the equipment we’d need to acquire would be also quite difficult to get at any price.
But the fabs don't belong to NVIDIA, they belong to TSMC. I have no doubt that Taiwan and maybe even the US government would step in to save TSMC if for some reason it got existential problems, but that doesn't provide an argument for saving NVIDIA
> I can spin up a strong ML team through hiring in probably 6-12 months with the right funding.
Mark Zuckerberg would like a word with you
Nvidia isn't a fab.
First-order: because of the capex and lead times. If you grab a bunch of world-class ML folks and put them in a room together, they're going to be able to start producing world-class work together. If you grab a bunch of world-class chip designers in the same scenario but don't have world-class fabs for them to use, they're not going to be able to ship competitive designs.
> If you grab a bunch of world-class chip designers in the same scenario but don't have world-class fabs for them to use, they're not going to be able to ship competitive designs.
But why such an unfair comparison?
Instead of comparing "skilled people with hardware VS skilled people without hardware", why not compare it to "a bunch of world-class ML folks" without any computers to do the work, how could they produce world-class work then?
Much easier and cheaper to source computers than a fab.
Right, but to source a fab you need experience as well, nothing you can just hire a random person to do exactly.
To simplify it down even more:
- For the ML team, you need money. Money to pay them and money to get access to GPUs. You might buy the GPUs and make your own server farm (which also takes time) or you might just burn all that money with AWS and use their GPUs. You can trade off money vs. time.
- For the chip design team, you need money and time. There's no workaround for the time aspect of it. You can't spend more money and get a fab quicker.
> - For the ML team, you need money. Money to pay them and money to get access to GPUs. You might buy the GPUs and make your own server farm (which also takes time) or you might just burn all that money with AWS and use their GPUs. You can trade off money vs. time.
Even if you do those things though, it doesn't guarantee success or you'll be able to train something bigger. For that you need knowledge, hard work and expertise, regardless of how much money you have. It's not a problem you can solve by throwing money at it, although many are trying. You can increase the chances of hopefully discovering something novel that helps you build something SOTA, but as current history tells us, it isn't as easy as "ML Team + Money == SOTA model in a few months".
Sure. No guarantees that you could throw money at putting an ML team together and have a new SOTA model in a few months. You might, you might not.
You know what I can guarantee? No matter how much money you throw at it, you will not have a new SOTA fab in a few months.
The start-up costs of creating a new chip manufacture are significantly higher (you can't just SAAS your way into factories) and the chips themselves more subject to IP and patents owned by that company.
One person can implement a transformer model from scratch in a weekend. Hardware is not the valuable part of machine learning. Data and how it is used are.
The "magic of AI" doesn't live inside an Nvidia GPU. There are billions of dollars of marketing being deployed to convince you it does. As soon as the market realizes that nvidia != magic AI box, the music should stop pretty quickly.
Umm, part of it does. It necessary but not sufficient, at least to achieve it on the timescales we've seen. Scale is part of the "magic".
That's true, but without the kind of horsepower provided by modern hardware, even though I'm skeptical that it's all needed, especially given DeepSeek's amazing results, AI would be nearly impossible.
There are some important innovations on the algorithm / network structure side, but all these ideas are only able to be tried because the hardware supports it. This stuff has been around for decades.
Deepseek required existing models that required the horsepower.
Chip designs have strong IP protections.
AI models do not. Sure you can't just copy the exact floating point values without permission. But with enough capital you can train a model just as good, as the training and inference techniques are well known.
> But with enough capital you can train a model just as good, as the training and inference techniques are well known
You're not alone in believing just money can train a good model, and I've already answered elsewhere why things aren't so easy as you believe, but besides this, where are y'all getting that from? Is there some popular social media influencer that keeps parroting this or where it comes from? Clearly you're not involved in those processes/workflows yourself, then you wouldn't claim it's just a money problem, so where are you all getting this from?
Chip manufacturing is extremely time consuming, especially when we are talking about masks for lithography.
The rights on masks for chips and their parts (IPs) belong to companies.
And one definitely does not want these masks to be sold during bankruptcy process to (arbitrary) higher bidders.
Even if/when the bubble pops, I don't think NVIDIA is even close to need rescuing or being in trouble. They might end being worth 2 trillion instead of 5 but they're still selling GPUs nobody else knows how to make that power one of the most important technologies in the world. Also, all their other divisions.
The .com bubble didn't stop the internet or e-commerce, they still won, revolutioned everything, etc. etc. Just because there's a bubble it doesn't mean AI won't be successful. It will be, almost for sure. We've all used it, it's truly useful and transformative. Let's not miss the forest for the trees.
Hank green did a vlog on this a few weeks ago and it’s a great explainer.
This one? https://www.youtube.com/watch?v=Vz0oQ0v0W10
This comment is pretty depressing but it seems to be the path we're headed to:
> It's bad enough that people think fake videos are real, but they also now think real videos are fake. My channel is all wildlife that I filmed myself in my own yard, and I've had people leaving comments that it's AI, because the lighting is too pretty or the bird is too cute. The real world is pretty and cute all the time, guys! That's why I'm filming it!
Combine this with selecting only what you want to believe in and you can say that video/image that goes against your "facts" is "fake AI". We already have some people in pretty powerful positions doing this to manipulate their bases.
> We already have some people in pretty powerful positions doing this to manipulate their bases.
You don't have to be vague. Let's be specific. The President of the United States implied a very real voiceover of President Reagan was AI. Reagan was talking about the fallacy of tariffs as engines of economic growth, and it was used in an ad by the government of Ontario to sow divide within Republicans. It worked, and the President was nakedly mad at being told by daddy Reagan.
We are heading to an apocalyptic level of psychosis where human beings won't even believe the things they see with their own eyes are real anymore because of being flooded with AI slop 24/7/365.
We desperately need a technological solution to be able to somehow "sign" images and videos as being real and not generated or manipulated by AI.
I have no idea how such a thing would work.
It won't work, because most people do not understand what a digital signature is and they will just say that has been faked as well.
Journalists will know how to check it in high profile cases.
And annoyed and suspicious techies can use it to check other people's content and report them as fake.
Yeah, there are a lot of dumb people who want to be deceived. But would be good for the rest of us to have some tools.
I feel bad for the guy but I think this confusion will be extraordinary and get people off the internet.
There was a discussion on here recently about a new camera that could prove images taken with it weren't AI fakes, and most of the comments were skeptical anyone would care about such things.
This is an example of how people viscerally hate anyone passing off AI generated images and video as real.
>> figure out how to innovate on the financial model
Does it feel rather Orwellian that the original geeks now seem to be the same people who - forget about claiming technological innovation of as their own - completely discount it and apparently the important thing is now the creativity in funding an enterprise? We don't hear about the breakthroughs from the technologists, but the funding announcements from th investors and CEOs. It's not about the benefits of the technology, but how they're going to pay for it. Seems like a wildly perverse version of wag the dog...
this is all a function of the media reporting, the change in ‘nerd culture’ has been vastly overreported.
these companies are staffed by spectrum-y nerds that we are being desperately propagandized into thinking are actually frat ‘bros’.
No, they aren't. Locally, the person with the most esoteric knowledge is probably a weird nerd. It's mostly an accident that they chose to invest time in things typically associated with smarts. But globally, the best wizards got there by making it their profession. So maybe at your middling university, the people who could land a job at a frontier lab were nerdy wannabe frats, but at decent universities like MIT or Tsinghua, they're usually just better in every aspect of their lives. E.g. MIT has "math olympiad fraternities" all the cool kids join.
I went to a top 5 ranked school globally (~these lists fluctuate) and have been in elite circles since then. I can promise you that even there the autistic nerd fully outcompetes the renaissaince man.
GamersNexus Consumer Advocacy Channel did a piece on this [0]
[0]: https://www.youtube.com/watch?v=h3JfOxx6Hh4
If you want to dig deeper on this - check out https://www.wheresyoured.at/
His podcast Better Offline is a treat too.
Isn't paying a company to dig a hole who then pays you the same amount to fill said hole illegal?
Even worse in VAT countries where such carousels make you eligible for a tax return on technically zero added value
In a fair and just system with appropriate oversight, yes. So in this instance, no.
Only if you defraud investors in hole-digging corp and hole-filling corp by that by doing this you will be able to extract Unobtanium, which will make both companies 1000x profitable.
This is just starting to sound more and more like "we're almost at AGI I promise bro just need one more round of investment bro please just one trillion more dollars please bro".
Yes, but what does that have to do with this situation? The hole served no purpose. The companies are using the GPUs.
I suppose we could use a few pennies to hire some security guards to protect the filled holes.
Now we’re creating jobs!
They might be using the GPUs, but is that use providing real value? You can run a while loop and max out any processor.
And, well, nobody knows if it is providing real value. We know it's doing something and has some value WE attached to it. We don't know what the real value is, we're just speculating.
99% of code I generated using genAI served no purpose at the end of the day
Ok. Maybe use it better? Or don't use it at all. Doesn't mean it's not being used to some end, unlike a hole.
Keep in mind also that the models are going to continue improving, if only on cost. Just a significant cost reduction allows for more "thinking" mode use.
Most of the reports about how useless LLMs are were from older models being used by people that don't know how to use LLMs. I'm not someone that thinks they're perfect or even great yet, but their not dirt.
Seems like a net loss due to transactional costs.
The increase in value of the companies outweighs the transactional costs and then you borrow against the value of the company and make new circular deals. It works really well for a very long time and then at some point it doesn’t. The trick of the game is to get big corps involved and key decision makers so that the government bails out everyone in the end.
> The trick of the game is to get big corps involved and key decision makers so that the government bails out everyone in the end.
This is bad. We should not shrug our shoulders and go "Oh ho, this is how the game is played" as though we can substitute cynicism for wisdom. We should say "this is bad, this is a moral hazard, and we should imprison and impoverish those who keep trying it".
Or we'll get more.
They are banking on:
* stock prices increasing more than the non-existent money being burnt
* they are now too big to fail - turn on the real money printers and feed it directly into their bank accounts so the Chinese/Russians/Iranians/Boogeymen don't kill us all
Not if it increases the GDP.
Well, I've got great news then 92% of GDP growth in the first half of 2025 was hole filling companies paying hole digging companies to dig holes and in-kind pay them to fill them up again
what could possibly go wrong
How's this legal? Smaller businesses get in trouble for creative deals leading to inflated earnings.
Sort of like Bitcoin...
A lose lose situation for most people. Either the stock market crashes or AI progress meets expectations in the coming years and people start losing jobs.
So real estate it is after all?!
As an aside does anyone get the feeling that NYT is also training its fire on all California tech companies these days? I know that NYT really doesn't like California (always hasn't - from restaurants to culture to business) but curious if other people see that as well?
This is such a strange article -- there's nothing particularly unusual going on here.
The first example basically stands in for all of them -- Microsoft invests $13B in OpenAI, and OpenAI spends $13B on Azure. This is literally just OpenAI purchasing Microsoft cloud usage with OpenAI's stock rather than its cash. There is nothing unusual, illicit, or deceptive about this. This is entirely normal. You can finance your spending through debt or equity. They're financing through equity, as most startups do, and they presumably get a better deal (better rates, more guaranteed access) via Microsoft than via other random investors and then buying the cloud compute retail from Microsoft.
This isn't deceiving any investors. This is all out in the open. And it's entirely normal business practice. Nothing of this is an indicator of a bubble or anything.
Or take the deal with Oracle -- Oracle is building data centers for OpenAI, with the guarantee that OpenAI will use them. That's just... a regular business deal. What is even newsworthy about this? NYT thinks these are "circular" deals, but by this logic every deal is a "circular" deal, because both sides benefit. This is just... normal capitalism.
I remember the same argument being used before the 2008 crash.
Point is that all of this companies need to start making real profits and pretty damn big ones, otherwise all of this will collapse. Problem is that unless Altman has some super-intelligent super-AI hidden in his closet, it is very unlikely that it will.
And whose gonna take the bill when it falls? Let me guess… Where have I seen this before…?
> Point is that all of this companies need to start making real profits and pretty damn big ones
MS, Meta, Google, Apple, Nvidia make enormous profits. I think part of this AI push we're seeing is that all of these companies have so much money they don't know how to spend it all. Meta is a great case where they bounced from blowing excess cash on the metaverse and now to AI.
That's fine, but that's a separate conversation. Maybe this is a bubble, maybe it isn't.
My point is that the way it's all being financed is just regular financing. This article is trying to present the way it's being funded as novel, as "complex and circular", when it's not. This is how funding and investment works 365 days a year in all sectors. Nothing about the funding arrangements is a bubble indicator.
So this is a strange article from the NYT, because it's trying to present normal everyday financing deals as uniquely "complex and circular".
I don’t know financial world well enough to say whether that’s here nor there, but can you give me examples from other companies or sectors where a company X funds the company Y with tens to hundreds of billions that the company Y uses to buy a service from the company X.
Furthermore, yes it might be business as usual but so is fraud and god knows what else in this particular political era. In order to strengthen your argument you have to not only show that the phenomenon is not only common, but good for the overall economy.
Circularly passing around tens to hundreds of billions of dollars for things which don't exist and may never exist to fund a technology that hasn't A. lived up to the hype they've marketed and B. proven any strategy to breakeven is fundamentally not that much different than the way in which Enron strategically boasted their revenue numbers by passing the money between shell corporations that their CFO created.
The main difference of course being that these are actual companies as opposed to just entities intently designed to inflate the apparent financials. While it seems like that difference means this situation is perfectly fine as compared with the fraudulent case of Enron, the net effect is still the same; these companies are posting crazy quarter over quarter revenue growth, sending their stock prices to crazy highs and P/E multiples, while the insiders are cashing out to the tunes of hundreds of millions of dollars.
I don't really see how exactly you're trying to make the argument that it may or may not be a bubble, it objectively meets the definition of a bubble in the traditional economic sense (when an asset's market price surges significantly above its intrinsic value, driven by speculative behavior rather than fundamental factors). These companies are massively overvalued on the speculative value of AI, despite AI having not yet shown much economic viability for actual profit (not just revenue).
Worse yet, it's not just one company with inflated numbers, it's pretty much the entire top end of the market. To compare it to the dot com bubble wouldn't be a stretch, it'd basically be apples to apples as far as I see it.
> Microsoft invests $13B in OpenAI, and OpenAI spends $13B on Azure.
This isn't deceiving any investors.
It's Microsoft increasing its revenue by selling its stock.
Microsoft isn't selling any stock. It's using its cash.
And an increase in revenue isn't the point. Microsoft isn't doing this to try to bump its short-term stock price or anything -- investors know where revenue is coming from. Microsoft is doing it because it thinks OpenAI is a good investment and wants to make money with that investment and have greater control.
The last time this hit the news, it was the dotcom bubble, and Nortel was in a similar position with startups, taking equity for equipment.
No, that's not the last time this hit the news. This happens literally all the time. Again, this is just business as usual. It's not specific to AI, it's not specific to tech, and it's nothing to do with bubbles.
Sometimes additional context can take the same action that looks harmless in a vacuum and turn it into a bad idea or even a crime!
Then it would be great to have that context that shows criminality. Because that's an extraordinary claim you're suggesting, which is going to require actual evidence.
As for "bad ideas", businesses make tons of decisions every day that turn out to be good or bad in hindsight. So again, more specifics are needed here.
So what exactly are you suggesting? What context do you think the NYT chose to omit, and why would they omit it if it was meaningful?
The bubble part is that nvidia is getting revenue from people investing money in their hardware in order to sell something that has not yet been shown to be profitable. If it turns out no one can make enough money selling AI generated data to justify the costs spent on the compute needed to generate it at the current rate, then what nvidia are selling becomes much less valuable, and the whole thing collapses. We haven't figured out yet whether or not that will be the case.
But that has nothing to do with the arrangement of deals here.
If it's a bubble, then it will pop. If it's not a bubble, then all these investments will turn out to be great. But that's a different question.
The point is, all these deals happen all the time. They're not some kind of sign of a bubble. They happen just as much in non-bubbles. They're just capitalism working as usual.
These deals happen all the time. The case for a bubble is the following.
When Microsoft offers cloud-credits in exchange for openai equity, what it has effectively done is to purchase its own azure revenues. ie, a company uses its own cash to purchase its own revenues. This produces an illusion of revenue growth which is not economically sustainable. This is happening for all clouds right now wherein their revenues are inflated by uneconomic ai purchases. This is also happening for the gpu chip vendors as well, wherein they are offering cash or warrants to fund their own chip sales.
But nobody is falling for the "illusion of revenue growth". This is out in the open. This isn't a scam. Investors know this and are pricing accordingly. They see the revenue growth but also see the decrease in cash.
What Microsoft is actually doing is taking the large profits it would have otherwise made on its cloud compute with retail customers, losing much/all of those profits as it sells the compute more cheaply to OpenAI, and converting those lost profits into ownership of OpenAI because Microsoft's goal is to own more of OpenAI.
There is nothing "bubble" about this. Microsoft isn't some opaque startup investors don't understand. All of this is incredibly transparent.
There will be increased transparency since microsoft will now have to report on the performance of its openai equity [1]. The concern is that while chatgpt is a great app, the economic benefits of the current investments are being questioned. There is starting to be skepticism of ai as the public starts to get jaded. This happens in all fads. That explains why the media is buzzing with articles like these which are becoming increasingly critical while earlier they were all aboard the ai-train.
[1] https://news.ycombinator.com/item?id=45719669
I’ve been listening to “The Smartest Guys In The Room” (the definitive book on Enron and their scandal) and one of the ways Enron continued to grow and grow is by setting up a really complicated system of moving debt onto equities off of their balance sheet.
While it was sorta legal (at the time) it was not ethical and led to a massive collapse of the #1 company at the time.
Makes you wonder if AI is in such a bubble. (It is).
When the AI bubble pops, what will happen to the software engineering jobs?
There will be a bunch of layoffs and slowly they'll rehire back to pre-hysteria levels. I think the world is still going to need software engineers no matter what but companies will slow down on new features etc in an economic crunch.
The ripple effect will be felt hard, as American engineers are squeezed between offshoring and more engineers with Big Tech resumes being released into the market, and returnees go push back wages in their home countries in turn
They'll have to come in and redo all the work that people put onto LLMs as actual engineering software. The number of features I've worked on that could have been done with normal computing practices but instead shoehorned in bad AI to make decisions/routing logic is too high.
If it pops, some ai engineers will need to start doing some normal work again, and rest of us... we just continue doing what we were doing for past decades.
Or maybe not, nobody knows the future any more then next guy in line.
free AI credits will be a thing of the past, "productivity" (real or not) will dive and real software engineering will become a moat again.
When I was 16 I started working at a startup buying and reselling used electronics.
There were like 5 competitors all trying to become the winner takes it all. Afaik after 10 years some closed, restructured but most of them burnt a lot of money. One lets call him indie dev made a lot of money building a simple comparison platform and getting 10-20% on all deals.
This is n=1, but I think it still made me really averse to raising money.
Speedrunning to "too big to fail". Turn on the infinite money printers and feed them directly into Sam Altman's bank account or the Chinese/Russians/Iranians/Boogeymen will destroy us all.
Everyone loves to compare AI with the dot com bubble. My question is, were there any policies put in place after the dot com bubble to mitigate a similar crash? Or did we learn nothing?
Weird angle, but isn't "believing there will be a crash" sort of framing it as if this were still normal market dynamics?
OpenAI and AI in general has posed itself as an existential threat and tightly integrated itself (how well? let's argue later) with so many facts of society, especially government, that like, realistically there just can't be a crash, no?
Or is this too doomsday / conspiratorial?
I just find it weird that we're framing it as crash/not crash when it seems pretty clear to me they really genuinely believe in AGI, and if you can get basically all facets of society to buy in... well, airlines don't "crash" anymore, do they?
If OpenAI were to shut down today, would anything in society really change? It seems all valuations are based on future integration into society and our daily lives. I don't think it has really happened yet.
A crash in the stock market doesn't necessarily mean a crash in the real market, The AI bubble burst being dot com style vs a gfc debacle depends on how much critical financial infrastructure is at risk during the debt deleveraging. If you look at the gdp growth during those two periods, the dot com era was a mild stagnation compared to the gfc's actual gdp decline.
related https://news.ycombinator.com/item?id=45766138
Complex and circular deals lead to the downfall of Enron. Just saying...
seems like it's like a mix of enron, subprime mortgages, and .com boom all in one
Many here now didn't live through the dot-com bubble as an adult so can't really appreciate what it was like. The hype was something hard to describe. Financial analysts and journalists struggled to come up with ways to describe the health of these "companies". My favorite was what revenue multiple companies would trade it.
But the major takeaway was that almost none of these companies were real businesses. This is why I laughed at dot-com comparisons in the 2010s around the tech giants because Apple, Google, Microsoft, etc were money-printing machines on a scale we have trouble comprehending. That doesn't make them immune to economic struggles. Ad spending with Google will rise and fall with the economy.
OpenAI has a paper valuation in the hundreds of billions of dollars now and no prospect of a revenue model that will justify that for many, many years.
Currently, the hardware is a barrier to entry but that won't last. It has parallels in the dot-com era too when servers were expensive. The cost of training LLMs is (at least) halving every year. We're probably reaching the limits of what these transformers can do and we'll need another big breakthrough to improve.
OpenAI's moat is tenuous. Their value is in the model they don't release. But DeepSeek is a warning shot that it will be in somebody's geopolitical interest, probably China's, to prevent a US tech monopoly on AI.
If you look at these AI companies, so many of them are basically scams. I saw a video about a household humanoid robot that was, surprise surprise, just someone in a VR suit. Many cities have delivery drones now but somebody is remotely driving them.
I saw somebody float the theory that the super-profitable big tech companies are engaging in layoffs not because they don't need people but to pay for the GPUs. It's an interesting idea. A lot of these NVidia deals are just moving money around where NVidia comes out on top with a bunch of equity in these companies should they become trillion dollar companies.
Oh and take out data center building from the US economy and we're in recession. I do think this is a bubble and it will burst sooner rather than later.
"Circular deals" feels like an awfully cute way to say "fraud"
this seems like a fake circular economy, ms invests in openai which uses the money in azure, amazon invests in anthropic which pays aws for hardware and infra, nvidia invests in openai which uses the money to buy nvidia hardware, etc
It's a bubble
"You give me a million GPUs for free, I'll announce that you have sacrificed a million GPUs to the machine gods, and your stock price will spike 200 times the value of those GPUs."
I honestly don’t get it. People love being swindled? Or people have enough cash to throw into the swindling machine even for no gain? Must be nice.
You can do a lot of money in swindles and bubbles if you time your exit well. There is a fair bit of opportunistic investors who did well in the NFT craze, who speculated knowing fully well that NFT is a craze that will go to zero.
The Greater Fool theory of investing strikes again.
everything will eventually go to zero. we look at some of these things and laugh because we're pretty sure they're going to go to zero within weeks or months vs years. but by the end of all of our lifetimes, most the companies on the stock market will be replaced. the few that won't are probably investment banks like goldman sachs
these deals are made as part of a market so it's more like musical chairs where every time you change a chair you get a ton of money but you don't want to be the one that's stuck without a chair at the end
They've all realized the guy without the chair can be the taxpayer.
Modern finance is all about debt.
Central banks don't print money[1] but investment banks do. Think about it like this: Someone deposits $100. The bank pays interest, to make money on to pay that interest, ~$90 is loaned out to someone.
Now, I still have a bank slip that says $100 in the account, and the bank has given $90 of that to someone else. We now have $190 in the economy! The catch is, that money needs to be paid back, so when people need to call in that cash, suddenly the economy only has $10, because the loan needed to be paid back, causing a cash vacuum.
But that paying back is also where the profit is, because you sell off the loan book, and you can get all your money back, including future interest. So you have lent out $90, sold the right to collect the repayments to someone else as a bond, so you now have $120, a profit of $30
That $30 comes pretty much from nowhere. (there are caveats....)
Now we have my bank account, after say a year with $104 in it, the bank has $26 pure profit AND someone has a bond "worth" $90 which pays $8 a year. but guess what, that bond is also a store of value. So even though its debt, it acts as money/value/whatever.
Now, the numbers are made up, so are the percentages. but the broad thrust is there.
[1] they do
it's a bubble