It's not really even a question. It's an obvious boondoggle. The forecasted net new energy requirements for the AI buildout over the next couple of years are roughly equivalent to all of Western Europe's power demand today.
That's absurd. It's a physical impossibility to bring that much power online that quickly. And the cost to get even close would make AI more expensive than just hiring knowledge workers to do the same tasks.
And it's all predicated on a tower of wobbly or broken assumptions -- chief among them that increasing the size of these models yields better performance.
We're going to look back on this era and wonder why anybody took any of the outrageous claims of tech CEOs seriously.
They could get lucky, make a breakthrough in robotics, and vertically integrate power generation into their business model with minimal human labor.
> It's a physical impossibility to bring that much power online that quickly.
China begs to differ.
I played a role in China's shift to renewables. It's been decades in the making.
> Wobbly assumption that increasing the size of these models yields better performance.
I'm assuming you disagree that larger models are better? Can you expand on what indicates that AI will hit a wall in scaling given the evidence of the last 9 years of scaling transformers (or other models)? Where on the plot does the line go from exponential to flat?
Why do you believe progress is currently exponential? There’s one dubious chart showing “exponential growth” in a single narrow domain, and otherwise zero evidence to suggest exponential improvement.
In my experience the models haven't gotten any better, just the hype.
People really really don’t understand the implications of AGI.
Whether or not you believe we will reach it in a few years, we are certainly way closer today than we were even two years ago.
The possibility of genuine AGI obliterates all the financial and energy-related worries; they pale in comparison to the ultimate impact of such a technology.
However, yes, if you believe AGI is not possible or won’t arrive in the coming decade then all the data center buildup seems foolish.
"any day now"
Gotta love this argument. Top it off by saying anyone skeptical is a fool, because of course.
I'm sure my argument would have no merit if you ignored the thousands of advancements made in AI over the past few years.
Yeah... The AI industry will die in the shadow of the Iran war, and some people will forever claim it was healthy all along and would have changed the world if the rest of the economy hadn't blown up.
https://archive.is/ad64x
I hope so. The reckless statements from the major players deserve a WRECKONING.
If it does, there's gonna be a lot of cheap second-hand hardware out there for those who want to build something cool.
Already got my 440 3-phase hookup scheduled. That NVL72 rack ain't gonna run on sunshine and pixie dust
Will anybody be able to power it?
I'm gonna scoop my own /8 and lock a 100 year colo lease
Better hope not. If the AI bubble pops it’s going to make the dotcom bubble look like a tiny divot in the road.
The .COM bubble was more than a divot because .COMs in so many industries employed so many people. There was amazon.com, but also pets.com, lowermybills.com, gateway.com. But if our economy somehow loses access to AI (rationing due to wartime efforts? sabotage by a foreign nation? simply not enough grid power to turn them on at the price people are willing to pay?) I would probably need to hire more coders to get the equivalent work done.
AI is driving trades, materials, real estate, all sorts of downstream stuff.
The rest of the economy is dead. Oracle is dead without OpenAI. Remember that unlike the dot-com era, none of these companies are public. So when it pops, you'll see private credit and PE funds implode, which could bring down banks with unhedged exposure. The headlines talk about JP Morgan (which likely has the risk managed), but regional banks got into that business in a big way over the last couple of years.
Did amazon.com go bust? Seems like I heard they were still in business as of a couple of years ago at least.
How would this be a bad thing?
Because everyone's retirement depends on the stock market. If you're unlucky and your portfolio vests the day after a massive crash, that can have a very meaningful impact on the rest of your life.
If it’s a bubble, it has to pop sooner or later. It’s better if it pops now before growing even larger.
Like New Caledonia, the wealth of our nation has been pumped into a get-rich scheme looking for a new world.
I want to see these overly powerful tech oligarchs fail too. But one issue is that all of us are tied to their performance to some extent. Our investments are exposed to them. Your 401k probably has funds that include them. When they fail, it hurts others too.
It's also why SpaceX wants to be included in index funds as soon as possible after going public. I recall the rules may be revised to support this, meaning everyone with money saved in those funds will automatically be tied to the fate of SpaceX.
We already have our jobs on the line with AI, right? How will a crash be worse in personal terms?
There's a lot of people who think the only thing keeping us out of a serious recession or even depression is AI investment.
Why? It's all private money funding AI at the moment. Sure, some data centre realty companies would go belly up, but you'd be surprised to find out how much of it is big headlines and very little action. There are a lot of installations and GPUs not yet brought online but marked as sold because of... well, physical-world delays.
Let me put it this way: IF AI is a bubble, then I'd like it to go bust ASAP instead of dragging along, going public, and then us discovering BS/creative-accounting revenue in S-1 filings. By then it would be much worse. Right now VCs and PE firms will absorb it all.
The thing with dot-com was that there was actual public market corruption & euphoria. That made the bust painful for everybody. Right now it's big tech & PE, who have heavy cash reserves and margins to burn through. I'd much rather have them take it than the average 401k.
> The thing with dot-com was that there was actual public market corruption & euphoria.
Just like now, the financial cup game is insane. Committing money to a company that plans on doing a thing if it can get another company to do a thing, while that company leverages those cumulative possibilities in its own wager. The speculation is out of control.
Of course it's going to pop!
I hope so!
No, probably not.
I replaced a $22/hr worker entirely with AI. And it costs me about $0.18/hr instead. The AI does a better job, is more reliable and consistent. The human was constantly behind schedule, made frequent mistakes, and also humans get sick, or call off work for other reasons.
So yes, AI is a bubble, but this bubble has generated value, it’s not at all like 2008.
$0.18/hr is the (massively) subsidized price of AI services. Once these companies are required to turn a profit for their investors, they'll raise the price. Then the math doesn't look so lopsided. We're already seeing this process unfold with token windows and ad rollout.
It's not that subsidised, this is just wishful thinking. You can run a local model like Qwen for equivalent prices. You might see it go up to $0.50/hr but you're definitely not going to see it at $22
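The break-even arithmetic implied by these comments can be made explicit. The hourly rates come from the thread; the multipliers are hypothetical:

```python
# Break-even sketch using the rates quoted in the thread.
human_rate = 22.00   # $/hr for the replaced worker (from the thread)
ai_rate = 0.18       # $/hr at current, possibly subsidized, pricing (from the thread)

# How much could the AI price rise before hiring the human is cheaper again?
breakeven_multiplier = human_rate / ai_rate
print(f"price could rise ~{breakeven_multiplier:.0f}x before breaking even")

# Even a hypothetical 10x un-subsidized repricing leaves AI far below the human rate.
repriced_ai_rate = ai_rate * 10
print(f"10x repricing: ${repriced_ai_rate:.2f}/hr vs ${human_rate:.2f}/hr")
```

The point of the sketch is that the gap is so wide (~122x) that even aggressive repricing would not by itself flip the comparison, though it says nothing about capability differences.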
I do run open models locally, but let's not fool ourselves into thinking that they're functionally competitive. I'm extremely skeptical of anybody claiming they've obviated a $22/hr job with an open model. Qwen is a big step down in capability. I can play with something like k2.5 for awhile, but if I want real work done I'm going back to a frontier model, which has significant runtime requirements for inference.
You're also ignoring the cost of purchasing and amortizing dedicated hardware in your local model example.
It's not an apples-to-apples comparison.
Inference isn't really that expensive; it's the training of new foundation models that is. With whatever highly optimized setup the big providers are using, they should be able to pack quite a lot of concurrent users onto a single deployment of a model. Consider too that their use case might be served just fine by a 100B-parameter model deployed to a $4,000 DGX Spark.
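The amortization argument above can be sketched numerically. Every figure here is an assumption chosen for illustration, not provider data:

```python
# Rough serving-cost sketch (all figures are illustrative assumptions).
node_cost_per_hour = 10.0       # assumed $/hr to run an inference node
tokens_per_second = 5000        # assumed aggregate throughput with batching
tokens_per_user_hour = 20000    # assumed tokens one active user consumes per hour

# Batched serving amortizes the node across every concurrent user.
node_tokens_per_hour = tokens_per_second * 3600
concurrent_users = node_tokens_per_hour / tokens_per_user_hour
cost_per_user_hour = node_cost_per_hour / concurrent_users

print(f"~{concurrent_users:.0f} concurrent users, ~${cost_per_user_hour:.3f}/user-hr")
```

Under these assumptions a single node serves hundreds of users at around a penny per user-hour, which is why per-request inference pricing can plausibly sit far below training costs.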
Just curious, what did your human worker do that you were able to entirely automate?
My bet is something administrative, like reminding people to approve their timesheets for payroll. AI wouldn't be needed to replace that job though, just a recurring calendar event.
I too, want to hear details about what this person did that they could be replaced completely with LLMs.
The value is negative to that worker, apparently.