It's hard for me to reconcile this piece with my personal experience as someone who works in AI and knows many others who do.
The demand for AI is currently overwhelming. As in, can't build data centers and GPUs melting overwhelming, companies growing 3x in a month while already at multi-billion revenues.
The models get better and better, and Chinese open source is falling further and further behind American companies. The productivity gains are, at this point, obvious. The best talent works (or wants to work) in America and gets compensated obscene amounts, the most capital flows through America, and this is still by far the best place in the world to start a technology business.
I think American technology was on the decline for the past few years before LLMs, but for the foreseeable future as long as American companies control the talent flywheel I think the new world of tech is going to be much more American than before.
I don't have any empirical evidence, but the reality is that China's semiconductor capabilities are not yet on par with Taiwan's, and the US is able to influence Nvidia's sales to China as well as access to other vendors (TSMC) and technologies, giving the West an unfair advantage.
Just as Chinese EVs and Chinese renewables eventually beat the West, China can probably pull ahead eventually, but I think it is accurate to say that China is currently still behind (how far is hard to say) because of the slight technology handicap imposed by the US.
Your comment is responding to an issue that is different from what GP said. GP was talking about Chinese open source in particular, i.e. their open-source models, which AFAIK have consistently been keeping up with (albeit a few steps behind) the closed-source OpenAI and Anthropic models.
> have consistently been keeping up with (albeit a few steps behind)
I mean, this sentence is self-contradictory, no?
> Hardware capacity is a separate issue entirely.
It seems like hardware capabilities are at the very heart of both training and inference which is why Nvidia, TSMC are hitting record income and capitalization. Feels like divorcing hardware from the equation is discounting a big part of winning this race.
> I mean, this sentence is self-contradictory, no?
By benchmarks, the Chinese models are ahead of where the proprietary US models were ... something like 6 or 12 months ago. And all the benchmarks are a bit fuzzy anyway on whether a small gap is trivial or significant. The Chinese aren't having any problems keeping up on model quality. The gap isn't going to lead to any difference that matters unless the US pulls a rabbit out of its hat.
Plus, dollar-for-performance they might be leading in practice; it is hard to compete with self-hosted.
What's often understated is how much of an advantage the US has because it speaks the language of global commerce and technology, which for the entire 20th century and the first quarter of the 21st has been English. That's huge. It means teenagers reading man pages are reading fluently.
At some point, though, the balance could tip. It's impossible to say, and it'd be irresponsible to try to predict it, but there isn't any reason English is natively superior, any more than French was 150 years ago, or Latin 600 years ago. But it's a major advantage the US has that isn't acknowledged often enough.
It's an advantage, but I don't see that changing for a very long time:
1. English became the lingua franca right when the world really became globalized. So everyone from Europe to Asia to Africa has wanted to learn English as a second language for decades. So even if American power went away, I still don't see English falling from its perch. I often say it's really hard for Americans to learn another language because if you go to another country hoping to learn that language, so often you'll find many/most people just want to speak to you in English.
2. The only other power I could see surpassing the US in the mid term is China (and that's in no way guaranteed), but the Chinese language (Mandarin), and especially Chinese writing, is inherently more difficult for foreigners to learn. I'd also argue the Chinese writing system is inherently less well suited to the digital world.
> It's an advantage, but I don't see that changing for a very long time:
It’s an interesting question: for how long will it remain important to know multiple languages in the age of LLMs? Of course, it’s better to know foreign language(s) — no doubt about that — but for day-to-day work, unless you’re living abroad, it seems that their practical utility will slowly decrease. And speech-to-speech translation will likely continue to improve as well.
I know it's a common pop science factoid, but there's actually no evidence that language difficulty has much to do with becoming a lingua franca.
Russian is commonly viewed as a difficult language, but it became a regional lingua franca in their sphere of influence. The only reason we aren't speaking Russian is that they lost the Cold War.
I do agree that Mandarin speakers might become more open to Pinyin if more foreigners started learning the language. I'd also point out that English and Romance speakers find Mandarin difficult. For Mandarin speakers, is their own spoken language actually difficult for them? They might find English to be a difficult language.
English is one of the most difficult languages to learn, because there are so many irregular sentence/word constructions, plus irregular pronunciations due to the vowel shift, plus French/Latin loan words that must be pronounced differently.
I think English is definitely a factor that I took for granted. To add to that from my experience:
- The culture is, I think, the root of the flywheel. The entrepreneurship and competitive intensity is unlike anywhere else I've lived (not an American). It's okay to go bankrupt. It's okay to fail multiple times and burn millions in VC money, in fact it's encouraged! Take a break and raise another round and go again, VCs like second time founders. In my home country having one business go under is the worst thing imaginable.
- The capital markets: even YC (one of the lower-tier accelerators by now) gives you $500k for 7%, sometimes pre-revenue. That is an absurd proposition elsewhere.
- Surrounding yourself with top talent raises the ceiling for what you think is possible and accelerates your career really fast. It's inspiring for me to be around so many smart and successful people.
> but there isn't any reason English is natively superior, any more than French was 150 years ago, or Latin 600 years ago.
Actually, there is. English is relatively unusual in its ability to incorporate loan words and features of other languages. This is in part due to its history as a merger of some 10k French (thus Latinate) words into an otherwise Germanic language. But it's also due to the unfortunate history of the British Empire, followed by American hegemony, which spread English to many other cultures who freely adapted it.
Whether this is enough to justify a continuing status as "the international language" is obviously debatable. But English is different from almost all other human languages, not because it is better, but because it is just ... more
I’m on a motorhome holiday in Norway right now. The younger people I’ve spoken to, from the Netherlands, through Germany and Denmark, and into Norway, speak English as well as I do. As with most American exceptionalism, you ain’t that special. On previous holidays in France, often held up as “never-willingly-speak-English”, we’ve had similar experiences.
Older people here in Northern Europe often seem to speak English quite well, in France less so.
I'm English, my Danish friends have less of an English accent and are considerably more literate than the average of the people I interact with at work over most days.
It isn't a moat. My partner's written English surpasses mine, and it is her third language.
Do you need a study for when a trading firm reports PnL? Likewise when labs report 80x growth?
There are applied-AI companies making $100-400M+ just a few years after incorporation; does that count as financial gain?
Academia is currently 6-12mo behind the frontier of the industry, so any "long term" study, even for a year, would have to be published with ancient models.
The majority of AI revenue is probably VC money sloshing around in a closed system, e.g. a VC funds some AI company and they pay OpenAI/Claude. These startups also pay for other AI startup products and make it mandatory for their employees to use them. I would venture a guess that 50-80% of the AI revenue would dry up if VCs stopped funding AI startups.
I'll push back against most of the points in your comment.
> The demand for AI is currently overwhelming. As in, can't build data centers and GPUs melting overwhelming, companies growing 3x in a month while already at multi-billion revenues.
This isn't a sign of a successful, sustainable business; it's what a bubble looks like. Between the aggressive marketing (including astroturfing!) that LLM companies are engaged in, the perceived stock market advantage companies can gain by shoving LLMs into their offering, and the missile-gap-style approach that many businesses are taking around this, this centre cannot possibly hold.
> The models get better and better, Chinese open source is falling further and further behind American companies
American companies are, to be fair, flouting safety and ignoring the wider social impacts of this technology, and both the US federal and state governments seem to be more than willing to go with the flow on that, probably at least partly because of a recognition that the LLM industry is propping up a significant part of the US economy.
> The productivity gains are, at this point, obvious
They are, emphatically, not. For me and my peers (most of us individual contributors in software -- and emphatically, those of us working at companies that haven't fully leaned into vibe coding), our jobs have become babysitting Claude agents and spending most of our time cleaning up their messes and doing code review. Short-term, sure, this might lead to some productivity gains, but long-term, this is going to lead to mass burnout.
> The best talent works (or wants to work) in America and get compensated obscene amounts, the most capital flows through America, this is still by far the best place to start a technology business in the world
Unfortunately, the US is in the midst of cracking down on immigration, and the international perception of the country is increasingly that it is an unsafe one.
> I think American technology was on the decline for the past few years before LLMs, but for the foreseeable future as long as American companies control the talent flywheel I think the new world of tech is going to be much more American than before.
What I see in the US's LLM-backed economy is what I see in many businesses in this same economy, increasingly: the blanket of AI is being used to paper over serious, systemic issues in the organization, but this clearly won't hold. In a world where we have an ounce of responsibility for what we produce, and where customers care about the quality (notably, quality as in correctness) of what's being delivered, this will eventually collapse.
I think it's obvious that demand is overwhelming supply right now. I agree that we don't know how much of the demand is due to perception, perverse incentives, or poor management, and how much of the demand is 'real'. I personally believe that the demand is mostly real and will continue to go up, but I don't have a crystal ball.
I also acknowledge that the productivity gains are highly dependent on your specific company's implementation and the work that you're doing. I think the role of a technical IC (which I am as well) is going to be managing fleets of agents, and many people who aren't suited to that type of work will leave the industry (and many people who are will join).
I generally agree with you on the points about American politics; I don't think the way they are cracking down on immigration is very wise.
As for correctness - it's a nontrivial problem to deploy AI in prod that works and doesn't blow up over millions of runs. Hence the initial value has accrued to the intelligence layer (the labs), but the bulk of the remaining value will accrue to the applied layer, in my opinion.
Wait until they charge the real price; if I sold a dollar for 10 cents I'd also have a lot of demand.
I'm burning billions of tokens on ChatGPT "deepresearch Pro extended" for things I wouldn't even bother googling; the second I have to pay even 2x the price I won't use it anymore.
The estimates I've seen are that running inference at scale on a Deepseek V3 sized model (so 700B parameters) costs roughly $0.70/mtok given current H100 rental costs. Sonnet charges $15/mtok on the API, so the delta between the true cost and the API price is quite large, to the point where even many subscription users are likely profitable.
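If those numbers are anywhere near right, the implied margin is easy to sketch. A back-of-envelope, taking the comment's figures at face value (both are assumptions, not verified):

```python
# Rough margin estimate from the figures above (assumed, not verified):
# ~$0.70 per million tokens to serve a ~700B-parameter model on rented H100s,
# vs. ~$15 per million tokens charged on the API.

serving_cost_per_mtok = 0.70   # assumed self-hosting cost, $/mtok
api_price_per_mtok = 15.00     # assumed API list price, $/mtok

markup = api_price_per_mtok / serving_cost_per_mtok
gross_margin = 1 - serving_cost_per_mtok / api_price_per_mtok

print(f"markup ~{markup:.0f}x, gross margin ~{gross_margin:.0%}")
# → markup ~21x, gross margin ~95%
```

At a ~95% gross margin on API traffic there is plenty of room for even heavy subscription users to be profitable, which is the claim being made.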
It's hard for me to reconcile your post as being authentic. From what I see, current "AI" is simply a geo-political tool, and a tool for governments to maintain power and authority. It is not real AI, since it cannot learn.
Real AI is being suppressed and it seems that it will not be allowed to exist in the mainstream, especially in the US.
He's not denying that there is demand, he just has a different view on what's happening:
When developers say that LLMs make them more productive, you need to keep in mind that this is what they’re automating: dysfunction, tampering as a design strategy, superstition-driven coding, and software whose quality genuinely doesn’t matter, all in an environment where rigour is completely absent.
They are right. LLMs make work that doesn’t matter easier – it’s all monopolies, subscriptions, VCs, and lock-in anyway – in an industry that doesn’t care, where the only thing that’s measured is some bullshit productivity measure that’s completely disconnected from outcomes.
...
One group thinks this will make the world ten times richer. The other thinks it’ll be a catastrophe.
Reasonable conclusion: if you think the entire software industry is rotten, then accelerating the rot won't do much.
I personally disagree with that worldview. (I read the article and the guy's tone is lowkey salty)
The reality is it's insanely hard to convince people (/especially/ consumers. //especially// technical consumers) to pay up to use software. Anyone who has tried to sell software as a startup knows: customers are laser-focused on outcomes and value, and anything that raises an eyebrow means you're toast.
Ofc there are perverse incentives and I think those are bad
> The reality is it's insanely hard to convince people (/especially/ consumers. //especially// technical consumers) to pay up to use software.
The industry is in an extremely bimodal situation, which drives most of that rot.
You have the startups and small businesses who can't get businesses or customers to pay up. And you have the SaaS giants, who already have their customers and can charge whatever they want.
And this is where the "rotten software industry" and doubts about AI feasibility intersect: Both of these business archetypes lack a clear use case for AI.
If you're small, congratulations: you can now spend thousands a month on tokens and still have $0 of revenue. AI doesn't really help you "catch up" to customer expectations, as now you're also having to compete with the myriad of slop-shops and in-house AI software development.
If you're a giant, well... why bother? Why give OpenAI or Anthropic a million dollars in tokens? They don't need to make the software better, nor do they need any "AI efficiency" to do layoffs.
I'm curious as to where your perspective comes from.
My view is they both have a clear use case for AI, because every business has a use case for more intelligence on tap. Enterprises big and small already shell out billions upon billions for AI so I'm not sure how your premise holds
In fact AI has resulted in more startups than ever starting to take market share from the incumbent software companies (and the market has started to price that in)
I wonder if this is a sign of bad value. Long ago you'd be willing to pay. The relationship was clearer, simpler, stabler. No sudden change of price or rules, no constant false improvement. It was less flexible, and riskier in a way, but it cut out the noise.
In the longer term, companies won’t be able to build AI infrastructure fast enough to keep up. The construction capacity isn’t there. The hardware production capacity isn’t there. Raw materials, energy, water—not enough of any of it. The supply chain is a fragile, grotesque joke.
> as long as American companies control the talent flywheel
The companies are eating their seed corn. Senior devs are going to age out and there won’t be enough juniors coming up the ranks to replace them. The oncoming demographic crisis multiplies this problem.
Americans decided to sabotage their own public education system for generations. They were able to bridge the gap with foreign undergrad/grad students for a while but that well has been poisoned, probably for good.
Thank you for sharing the article, it's an interesting perspective and I'm inclined to agree with the point about prior restraint.
I'm sad that America is making it more difficult for foreign talent to come in. But with the flip-flops between D/R in the White House, it's really hard to predict what immigration will look like even 5 years from now.
What are you talking about even. Chinese models are what pretty much every AI company in the US is using now because you can run them on prem and customize them, and because hosted versions cost a fraction of US ones. https://www.youtube.com/watch?v=9baDOfwUzHQ
And that's in the US, the rest of the world is all using Chinese models as well. Which means these models get far more collaboration from the global research community being developed in the open. They will set the standards in terms of how APIs work. And they will be what everyone uses going forward.
The closed approach simply can't compete with that. The same way Linux destroyed Windows on servers, open AI models will destroy proprietary solutions as well.
Can this be backed up with any numbers, especially in the US? Every company I've seen using an AI something has obviously been using the API of one of the bigger companies. If this is a valid approach, with proof that it's basically as good, it would be something I would recommend to my company.
Indeed! China is leaning heavily into AI as state policy, as the solution to its looming demographic crisis. Any advantage the US has is going to be brief. It'll be like comparing the high speed trains in China with the high speed trains in California...
"Chinese models are what pretty much every AI company in the US is using now"
- just untrue. you think people inside Cursor use composer for most of their work? haha
The talent at the labs far surpasses the global research community; it's just not comparable.
I'm not saying I prefer it this way, I want open source to do well but it's just not happening at the current pace
> Regulation that’s defined entirely in terms of the technology it regulates, as opposed to in terms of the effects it has on society or imposing boundaries and limits on the technology itself, is a core component of the technopolistic political and legislative environment.
Incredible article, a lot to unpack here, but I found this particular offhand tidbit interesting. It does seem like any attempt at tech industry regulation over the past decade or two (that isn't somewhat in the interests of big tech anyway, i.e. age verification and so on) has been either overly vague, or overly specific, leading to easy workarounds.
It seems like a microcosm of a wider trend in regulation; the disconnect between intentions and results. On the rare occasions that consumer-friendly legislation does go through, there is no working mechanism for evaluating its effectiveness and refining the rules as quickly as big corporations can adapt to them. I like how the article frames this, of how the regulations are targeting the wrong thing, how they're defined by the problem rather than the desired end state.
For more thoughts along these lines I'd highly recommend checking out Jennifer Pahlka's blog Eating Policy: https://www.eatingpolicy.com/
There was someone who said ten or fifteen years ago that these trillion-dollar companies weren't technology companies but technology control companies. It's been in my mind ever since.
Technology has politics, and it often serves to reproduce terrible modes of operation instead of something that could be described as "good progress" for humanity. The renewable energy landscape is the best example of a space that has had to fight against the old world's financial interests, even in the face of obvious monetary and technological supremacy.
The software world unfortunately has followed adtech + social media companies' operational structures, and we lost decades of "good progress" to attention-funded software.
I have a feeling this is why very few novel companies are springing up from this LLM shift: the relationship of a) lines of code b) solving problems to achieve progress c) getting paid for it has been decoupled for so long, because attention has been the main currency online.
Unsurprisingly, the Chinese technology market leap is fueled by a focus towards the "physical" (raw materials, manufacturing) and it's no surprise that a highly educated population is beating many Western economies in the electronics market (from small gadgets all the way to cars and energy). It's not impossible to try catching up by educating our people to reorient money to industry that brings "good progress", instead of industry that brings virtual money in the form of stocks or tech that mainly serves vices and/or entertainment.
We just finished watching a 90s Dennis Potter TV series, Lipstick on your Collar. Strange and mannered, and about in part the preparation for Suez at the end of empire, by an elderly leadership that hadn't realised that the British empire was already done (and at a time when the young were only interested in America, the new power). More stupidity than malice there. What we're getting today looks like both.
I think "the new world cannot be born" is larger than just AI. Even before LLMs we had a whole generation of people going through the CS curriculum who call the Internet "wifi" and don't know what a file is. Even if LLMs disappeared tomorrow, my faith that we'll ever have the same curious and brilliant minds in our field as yesteryear is fading. I hope I'm wrong.
That many people don't know what a file is, is most probably down to the very explicit war of one company, namely Apple, on the very concept of a file. And I fully agree that it is a terrible idea that makes people completely forget that what they're handling is actually a computer that could be doing so much more than what Apple allows them to do.
> Sitting in on a talk on autism diagnoses, one of a series of scientific talks, watching an animation they used as a diagnostic aid, hearing everybody around me laugh as if the shapes on the screen made sense, only then truly understanding myself, and feeling more alone than I have ever felt before or since.
Anybody have any idea what diagnostic shapes he's talking about?
> All that was needed was a tacit understanding that there were rules, that the US set those rules, and that those who followed the rules would benefit from the trade that came with being a part of the global hegemony.
This has been so overwhelmingly obvious in third-world countries (viz. India's "non-alignment" foreign policy), but, still, Europe, Canada, Japan and Australia didn't fully get it: the concept of a "rules-based world order" is just a layer of makeup over "American Imperialism". Americans make rules the same way Tony Soprano made rules: strictly for self-advantage. We should be thankful to Trump for finally wiping off that makeup.
True, Mark Carney explained that in Davos. But I am not sure Canadians got it.
The whole Gramsci quote goes further than the part being quoted here: “il vecchio muore e il nuovo non può nascere: in questo interregno si verificano i fenomeni morbosi più svariati” (“the old is dying and the new cannot be born: in this interregnum the most varied morbid phenomena appear”).
I knew the old world was lying to us when I saw what happened to Michael O. Church. Freedom of expression, unless you challenge the people at the top of the ladder. Then they erase and try to murder you.
And now there's evidence that Epstein was behind the prosecution of Swartz. He knew the man was onto something.
The authoritarianism is only more obvious. No one bothers to hide it. The social irresponsibility ramps up and up. Genocide in Burma? The cost of social connection. The cost of freedom.
At some point, it all breaks. No one knows what happens next. Models smooth reality, but reality, at some point, detests smoothness enough to become pointed.
I like to quip that any sufficiently sized US company eventually becomes a bank, a landlord, a defense contractor or some combination thereof. Another way to put this, in the author's framing, is a tool of empire. We've seen how quickly and easily these large companies have fallen in line with the administration. The era of the tech company as an antiestablishment upstart is long over.
I call the Hormuz crisis the biggest strategic blunder in US history and it's not even close. It's such a blunder it will probably be written about in history books as the end of the post-1945 era. It's not lost on people that the US would rather let the world burn than split with its attack dog in the region, even slightly. We're also seeing that, as the author notes, a tiny power can strategically defeat a military that over $1 trillion a year is spent on.
The author rightly points out the lawlessness of everything going on and the destruction of trust in financial markets. All of this is correct. But I don't think the author really identifies the reasons for the push for AI. And that is: labor displacement and wage suppression. Or, to put it another way, further wealth concentration into the hands of the "oligarchs". I guess it's another version of "whatever our oligarchs want to steal this month, they get."
This has to be trolling. You can't claim in one sentence that language is suffering, then in the next claim that only living beings can die or be born. How is abstract concepts suffering fine, but abstract concepts dying isn't?
Things can die and be born. The usage of those terms in relation to non-living entities and absent a description of biological progeny and senescence has been commonplace in English for centuries. For instance, the "birth" of a new era, or the "death" of disco.
You may find it easier to function in modern society without having such a strictly literal view of language. Idioms and metaphors do exist.
There are no switching costs for users to move to a new model.
> Chinese open source is falling further and further behind American companies
This is simply not true?
Hardware capacity is a separate issue entirely.
China would probably be very confused that you're asserting they're not on par with Taiwan.
In semiconductors.
Otherwise, no one would need to buy from Nvidia or contract with TSMC.
> There are no switching costs for users to move to a new model.
This depends on how many proprietary APIs are in the way of the model itself.
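A minimal sketch of that point (the endpoint URLs and model names below are made up, and the payload is the OpenAI-style chat-completions shape that many providers now mirror): when providers share the wire format, "switching" is a one-line URL change, and lock-in only appears once you depend on provider-specific extensions.

```python
# Sketch: one OpenAI-style chat-completions request body, many endpoints.
# URLs and model names are illustrative placeholders, not real services.

payload = {
    "model": "some-model-name",  # the only provider-specific field here
    "messages": [{"role": "user", "content": "Summarize this contract."}],
    "temperature": 0.2,
}

providers = {
    "incumbent":  "https://api.incumbent.example/v1/chat/completions",
    "challenger": "https://api.challenger.example/v1/chat/completions",
}

# Switching providers means re-targeting the same request,
# not rewriting the client:
requests_to_send = {name: (url, payload) for name, url in providers.items()}
```

The switching cost only materializes when the request body itself diverges: provider-specific tool-calling fields, proprietary response formats, and the like.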
> It's an advantage, but I don't see that changing for a very long time:
It’s an interesting question: for how long will it remain important to know multiple languages in the age of LLMs? Of course, it’s better to know foreign language(s) — no doubt about that — but for day-to-day work, unless you’re living abroad, it seems that their practical utility will slowly decrease. And speech-to-speech translation will likely continue to improve as well.
I know it's a common pop science factoid, but there's actually no evidence that language difficulty has much to do with becoming a lingua franca.
Russian is commonly viewed as a difficult language, but it became a regional lingua franca in Russia's sphere of influence. The only reason we aren't speaking Russian is that they lost the Cold War.
I do agree that Mandarin speakers might become more open to Pinyin if more foreigners started learning the language. I'd also point out that it's English and Romance speakers who find Mandarin difficult. Do Mandarin speakers actually find their own language hard? They might well find English the difficult one.
English is one of the most difficult languages to learn: there are so many irregular sentence and word constructions, irregular pronunciations left over from the Great Vowel Shift, and French and Latin loan words that must be pronounced differently.
Are you including Pinyin in your writing-system analysis?
I think English is definitely a reason that I took for granted. To add to that from my experience:
- The culture is, I think, the root of the flywheel. The entrepreneurship and competitive intensity is unlike anywhere else I've lived (not an American). It's okay to go bankrupt. It's okay to fail multiple times and burn millions in VC money, in fact it's encouraged! Take a break and raise another round and go again, VCs like second time founders. In my home country having one business go under is the worst thing imaginable.
- The capital markets: even YC (one of the lower-tier accelerators by now) gives you $500k for 7%, sometimes pre-revenue. That is an absurd proposition elsewhere.
- Surrounding yourself with top talent raises the ceiling for what you think is possible and accelerates your career really fast. It's inspiring for me to be around so many smart and successful people.
What's your definition of "successful people"?
Self actualized, high optionality
Rich, rich
> but there isn't any reason English is natively superior, any more than French was 150 years ago, or Latin 600 years ago.
Actually, there is. English is unusual in how readily it absorbs loan words and features of other languages. This is partly due to its history as a merger of some 10,000 French (thus Latinate) words into an otherwise Germanic language. But it's also due to the unfortunate history of the British Empire, followed by American hegemony, which spread English to many other cultures that freely adapted it.
Whether this is enough to justify a continuing status as "the international language" is obviously debatable. But English is different from almost all other human languages, not because it is better, but because it is just ... more
I’m on a motorhome holiday in Norway right now. The younger people I’ve spoken to, from the Netherlands through Germany and Denmark and into Norway, speak English as well as I do. As with most American exceptionalism, you ain’t that special. On previous holidays in France, often held up as “never-willingly-speaks-English”, we’ve had similar experiences.
Older people here in Northern Europe often seem to speak English quite well, in France less so.
I'm English; my Danish friends have less of an English accent and are considerably more literate than the average person I interact with at work most days.
It isn't a moat. My partner's written English surpasses mine, and it is her third language.
The language of global commerce and technology is not, and has never been, English.
It is money.
Specifically, right now, petro-dollars. For a while before that, it was pounds.
The writer is asking how much longer it will continue to be petro-dollars.
https://en.wikipedia.org/wiki/World_currency
right, it was english, but then it became english
The gains are so obvious that nobody can cite a source proving them
source: revenue, people opening their wallets
Tokenmaxing?
Source: trust me bro
Okay.
Just cite me any sort of study or financial data showing that AI provides long-term financial gains for any company besides small startups
Do you need a study for when a trading firm reports PnL? Likewise when labs report 80x growth?
There are applied AI companies making $100-400M+ within just a few years of incorporation; does that count as financial gain?
Academia is currently 6-12 months behind the industry frontier, so any "long-term" study, even over a single year, would have to be published using ancient models.
The majority of AI revenue is probably VC money sloshing around in a closed system, e.g. a VC funds some AI company and they pay OpenAI/Claude. These startups also pay for other AI startup products and make it mandatory for their employees to use them. I would venture a guess that 50-80% of the AI revenue would dry up if VCs stopped funding AI startups.
I'll push back against most of the points in your comment.
> The demand for AI is currently overwhelming.

This isn't a sign of a successful, sustainable business; it's what a bubble looks like. Between the aggressive marketing (including astroturfing!) that LLM companies are engaged in, the perceived stock-market advantage companies can gain by shoving LLMs into their offerings, and the missile-gap-style approach that many businesses are taking around this, this centre cannot possibly hold. American companies are, to be fair, flouting safety and ignoring the wider social impacts of this technology, and both the US federal and state governments seem more than willing to go with the flow on that, probably at least partly because they recognize that the LLM industry is propping up a significant part of the US economy.

> The productivity gains are, at this point, obvious.

They are, emphatically, not. For me and my peers (most of us individual contributors in software, and emphatically those of us at companies that haven't fully leaned into vibe coding), our jobs have become babysitting Claude agents and spending most of our time cleaning up their messes and doing code review. Short-term, sure, this might lead to some productivity gains, but long-term it is going to lead to mass burnout.

> The best talent works (or wants to work) in America

Unfortunately, the US is in the midst of cracking down on immigration, and the international perception of the country is increasingly that it is an unsafe one.

What I see in the US's LLM-backed economy is what I see in many businesses in this same economy, increasingly: the blanket of AI being used to paper over serious, systemic issues in the organization. This clearly won't hold. In a world where we have an ounce of responsibility for what we produce, and where customers care about the quality (notably, quality as in correctness) of what's being delivered, this will eventually collapse.

Thank you for your perspective!
I think it's obvious that demand is overwhelming supply right now. I agree that we don't know how much of the demand is due to perception, perverse incentives, or poor management, and how much of the demand is 'real'. I personally believe that the demand is mostly real and will continue to go up, but I don't have a crystal ball.
I also acknowledge that the productivity gains are highly dependent on your specific company's implementation and the work that you're doing. I think the role of a technical IC (which I am as well) is going to be managing fleets of agents, and many people who aren't suited to that type of work will leave the industry (and many people who are will join).
I generally agree with you on the points about American politics, I don't think the way they are cracking down on immigration is very wise.
As for correctness: it's a nontrivial problem to deploy AI in prod that works and doesn't blow up over millions of runs. Hence the initial value has accrued to the intelligence layer (the labs), but the bulk of the remaining value will accrue to the applied layer, in my opinion.
I will buy your entire supply of money for $0.50 per dollar.
Our demand for compute and software is infinite, but our price sensitivity is also high.
> The demand for AI is currently overwhelming.
Wait until they charge the real price. If I sold a dollar for 10 cents, I'd also have a lot of demand.
I'm burning billions of tokens on ChatGPT "deepresearch Pro extended" for things I wouldn't even bother googling; the second I have to pay even 2x the price, I won't use it anymore.
I hear this analogy (selling a dollar for 10 cents), but it's unclear to me how we can cleanly map intelligence to cents.
If the LLM was GPT-1, most people wouldn't even use it for free. So clearly there's another axis here?
The estimates I've seen are that running inference at scale on a Deepseek V3 sized model (so 700B parameters) costs roughly $0.70/mtok or so given current H100 rental costs. Sonnet charges $15/mtok on the API so the delta between the true cost and the API cost is quite large, to the point where even many subscription users are likely profitable.
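A quick back-of-the-envelope check of the comment's own figures (the $0.70/mtok serving cost and $15/mtok API price are this comment's estimates, not verified numbers):

```python
# Rough margin math using the comment's assumed figures.
cost_per_mtok = 0.70    # assumed $/million tokens to serve a ~700B model on rented H100s
price_per_mtok = 15.00  # assumed API price charged per million tokens

# Fraction of each API dollar left after serving costs, and the raw markup.
gross_margin = (price_per_mtok - cost_per_mtok) / price_per_mtok
markup = price_per_mtok / cost_per_mtok

print(f"gross margin: {gross_margin:.0%}")  # ~95%
print(f"markup: {markup:.1f}x")             # ~21.4x
```

If those assumptions are anywhere near right, even heavy subscription users can be served profitably at list prices.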
It's hard for me to reconcile your post as being authentic. From what I see, current "AI" is simply a geo-political tool, and a tool for governments to maintain power and authority. It is not real AI, since it cannot learn.
Real AI is being suppressed and it seems that it will not be allowed to exist in the mainstream, especially in the US.
This is the ultimate "no true Scotsman" post.
He's not denying that there is demand; he just has a different view of what's happening:
When developers say that LLMs make them more productive, you need to keep in mind that this is what they’re automating: dysfunction, tampering as a design strategy, superstition-driven coding, and software whose quality genuinely doesn’t matter, all in an environment where rigour is completely absent.
They are right. LLMs make work that doesn’t matter easier – it’s all monopolies, subscriptions, VCs, and lock-in anyway – in an industry that doesn’t care, where the only thing that’s measured is some bullshit productivity measure that’s completely disconnected from outcomes.
...
One group thinks this will make the world ten times richer. The other thinks it’ll be a catastrophe.
(from an earlier post, https://www.baldurbjarnason.com/2026/the-two-worlds-of-progr...)
Reasonable conclusion, if you think the entire software industry is rotten then accelerating rot won't do much
I personally disagree with that worldview. (I read the article and the guy's tone is lowkey salty)
The reality is it's insanely hard to convince people (*especially* consumers, *especially* technical consumers) to pay up to use software. Anyone who has tried to sell software as a startup knows customers are laser-focused on outcomes and value, and anything that raises an eyebrow means you're toast.
Of course there are perverse incentives, and I think those are bad.
> The reality is it's insanely hard to convince people (/especially/ consumers. //especially// technical consumers) to pay up to use software.
The industry is in an extremely bimodal situation, which drives most of that rot.
You have the startups and small businesses who can't get businesses or customers to pay up. And you have the SaaS giants, who already have their customers and can charge whatever they want.
And this is where the "rotten software industry" and doubts about AI feasibility intersect: Both of these business archetypes lack a clear use case for AI.
If you're small, congratulations you can now spend thousands a month on tokens and still have $0 of revenue. AI doesn't really help you "catch up" to customer expectations as now you're also having to compete with the myriad of slop-shops and in-house AI software development.
If you're a giant, well... why bother? Why give OpenAI or Anthropic a million dollars in tokens? They don't need to make the software better, nor do they need any "AI efficiency" to do layoffs.
I'm curious as to where your perspective comes from.
My view is they both have a clear use case for AI, because every business has a use case for more intelligence on tap. Enterprises big and small already shell out billions upon billions for AI so I'm not sure how your premise holds
In fact AI has resulted in more startups than ever starting to take market share from the incumbent software companies (and the market has started to price that in)
I wonder if this is a sign of bad value. Long ago you'd be willing to pay. The relationship was clearer, simpler, stabler: no sudden changes of price or rules, no constant false improvement. It was less flexible, and riskier in a way, but it cut out the noise.
My 2cts
> Anyone who has tried to sell software as a startup knows, customers are laser focused on outcomes and value
So the solution is to reduce the cost to zero, instead of competing to provide the best outcome and highest value?
If you've ever tried to start your own company in the US, you know it's a grueling, insane warzone of competition.
That results in the winners providing insane value to both customers and equity holders
> The models get better and better, Chinese open source is falling further and further behind American companies.
Prior restraint is going to put a damper on American state of the art for the foreseeable future.
https://thezvi.substack.com/p/the-ai-ad-hoc-prior-restraint-...
In the longer term, companies won’t be able to build AI infrastructure fast enough to keep up. The construction capacity isn’t there. The hardware production capacity isn’t there. Raw materials, energy, water—not enough of any of it. The supply chain is a fragile, grotesque joke.
> as long as American companies control the talent flywheel
The companies are eating their seed corn. Senior devs are going to age out and there won’t be enough juniors coming up the ranks to replace them. The oncoming demographic crisis multiplies this problem.
Americans decided to sabotage their own public education system for generations. They were able to bridge the gap with foreign undergrad/grad students for a while but that well has been poisoned, probably for good.
Thank you for sharing the article, it's an interesting perspective and I'm inclined to agree with the point about prior restraint.
I'm sad that America is making it more difficult for foreign talent to come in. But with the flip-flops between D/R in the White House, it's really hard to predict what immigration looks like even 5 years from now.
What are you even talking about? Chinese models are what pretty much every AI company in the US is using now, because you can run them on-prem and customize them, and because hosted versions cost a fraction of US ones. https://www.youtube.com/watch?v=9baDOfwUzHQ
And that's in the US, the rest of the world is all using Chinese models as well. Which means these models get far more collaboration from the global research community being developed in the open. They will set the standards in terms of how APIs work. And they will be what everyone uses going forward.
The closed approach simply can't compete with that. The same way Linux destroyed Windows on servers, open AI models will destroy proprietary solutions as well.
Can this be backed up with any numbers, especially in the US? Every company I've seen using AI for something has obviously been using the API of one of the bigger companies. If this is a valid approach, with proof it's basically as good, it's something I would recommend to my company.
Indeed! China is leaning heavily into AI as state policy, as the solution to its looming demographic crisis. Any advantage the US has is going to be brief. It'll be like comparing the high speed trains in China with the high speed trains in California...
ai generated video script
"Chinese models are what pretty much every AI company in the US is using now" - just untrue. You think people inside Cursor use Composer for most of their work? haha
The talent at the labs far surpasses the global research community; it's just not comparable.
I'm not saying I prefer it this way, I want open source to do well but it's just not happening at the current pace
Who cares how the script was generated. What he says is entirely factual. He cites plenty of concrete examples too.
The idea that the talent in the US surpasses the global research community is laughable. China already tops the world in artificial intelligence publications. https://www.science.org/content/article/china-tops-world-art...
China also has a population of 1.4 billion people, and an excellent education system. Pretty much all top universities are Chinese. https://www.nature.com/nature-index/institution-outputs/gene...
And let's not forget that top AI researchers from US are now fleeing to China. https://www.scmp.com/news/china/science/article/3353398/lead...
Publications != talent anymore. The top talent works at labs that keep most of their research secret. And Microsoft AI is not in that circle.
Not denying that China is a close #2 btw.
> Regulation that’s defined entirely in terms of the technology it regulates, as opposed to in terms of the effects it has on society or imposing boundaries and limits on the technology itself, is a core component of the technopolistic political and legislative environment.
Incredible article, a lot to unpack here, but I found this particular offhand tidbit interesting. It does seem like any attempt at tech industry regulation over the past decade or two (that isn't somewhat in the interests of big tech anyway, i.e. age verification and so on) has been either overly vague, or overly specific, leading to easy workarounds.
It seems like a microcosm of a wider trend in regulation; the disconnect between intentions and results. On the rare occasions that consumer-friendly legislation does go through, there is no working mechanism for evaluating its effectiveness and refining the rules as quickly as big corporations can adapt to them. I like how the article frames this, of how the regulations are targeting the wrong thing, how they're defined by the problem rather than the desired end state.
For more thoughts along these lines I'd highly recommend checking out Jennifer Pahlka's blog Eating Policy: https://www.eatingpolicy.com/
I haven’t seen something on HN so well written and insightful in many, many years. Everyone here should read this.
There was someone who said, ten or fifteen years ago, that these trillion-dollar firms weren't technology companies but technology-control companies. It's been in my mind ever since.
LOL! (And I haven't written that in a very long time...)
The article is delusional. In particular, these claims:
- The Iran war is over.
- Iran has "won" the war.
- The US has lost influence with Asian allies.
- The petrodollar is over.
- The US economy is weaker due to billionaires and the stock market.
It's especially laughable given the recent diplomacy with China.
I also predict a secular government is running Iran before the fall...
Nice read!
Technology has politics, and it often serves to reproduce terrible modes of operation instead of something that could be described as "good progress" for humanity. The renewable energy landscape is the best example of a space that has had to fight against the old world's financial interests, even in the face of obvious monetary and technological supremacy.
The software world unfortunately has followed adtech + social media companies' operational structures, and we lost decades of "good progress" to attention-funded software.
I have a feeling this is why very few novel companies are springing up from this LLM shift: the relationship of a) lines of code b) solving problems to achieve progress c) getting paid for it has been decoupled for so long, because attention has been the main currency online.
Unsurprisingly, the Chinese technology leap is fueled by a focus on the "physical" (raw materials, manufacturing), and a highly educated population is beating many Western economies in the electronics market (from small gadgets all the way to cars and energy). It's not impossible to catch up by educating our people and reorienting money toward industry that brings "good progress", instead of industry that brings virtual money in the form of stocks, or tech that mainly serves vices and/or entertainment.
Worth reading, well written.
We just finished watching a 90s Dennis Potter TV series, Lipstick on your Collar. Strange and mannered, and about in part the preparation for Suez at the end of empire, by an elderly leadership that hadn't realised that the British empire was already done (and at a time when the young were only interested in America, the new power). More stupidity than malice there. What we're getting today looks like both.
I think "the new world cannot be born" applies to more than just AI. Even before LLMs we had a whole generation of people going through CS curricula who call the Internet "wifi" and don't know what a file is. Even if LLMs disappeared tomorrow, my faith that our field will ever again have the curious and brilliant minds of yesteryear is fading. I hope I'm wrong.
Hey, welcome on my favorite soapbox!
That many people don't know what a file is, is most probably down to the very explicit war of one company, namely Apple, on the very concept of a file. And I fully agree that it is a terrible idea that makes people completely forget that what they're handling is actually a computer that could be doing so much more than what Apple allows them to do.
> Sitting in on a talk on autism diagnoses, one of a series of scientific talks, watching an animation they used as a diagnostic aid, hearing everybody around me laugh as if the shapes on the screen made sense, only then truly understanding myself, and feeling more alone than I have ever felt before or since.
Anybody have any idea what diagnostic shapes he's talking about?
Social Shapes Test https://www.cmu.edu/corecompetencies/collaboration/resources...
Web version here, if you want to see what it's like https://psytests.org/arc/ssten.html
> All that was needed was a tacit understanding that there were rules, that the US set those rules, and that those who followed the rules would benefit from the trade that came with being a part of the global hegemony.
This has been so overwhelmingly obvious in third-world countries (viz. India's "non-alignment" foreign policy), but Europe, Canada, Japan, and Australia still didn't fully get it: the concept of a "rules-based world order" is just a layer of makeup over "American imperialism". Americans make rules the same way Tony Soprano made rules: strictly for self-advantage. We should be thankful to Trump for finally wiping off that makeup.
True, Mark Carney explained that in Davos. But I am not sure Canadians got it.
The whole Gramsci quote goes further than the part being quoted here: “il vecchio muore e il nuovo non può nascere: in questo interregno si verificano i fenomeni morbosi più svariati” (“the old is dying and the new cannot be born: in this interregnum, the most varied morbid phenomena appear”).
thank you
I knew the old world was lying to us when I saw what happened to Michael O. Church. Freedom of expression, unless you challenge the people at the top of the ladder. Then they erase and try to murder you.
And now there's evidence that Epstein was behind the prosecution of Swartz. He knew the man was onto something.
The authoritarianism is only more obvious. No one bothers to hide it. The social irresponsibility ramps up and up. Genocide in Burma? The cost of social connection. The cost of freedom.
At some point, it all breaks. No one knows what happens next. Models smooth reality, but reality, at some point, detests smoothness enough to become pointed.
I like to quip that any sufficiently sized US company eventually becomes a bank, a landlord, a defense contractor or some combination thereof. Another way to put this, in the author's framing, is a tool of empire. We've seen how quickly and easily these large companies have fallen in line with the administration. The era of the tech company as an antiestablishment upstart is long over.
I call the Hormuz crisis the biggest strategic blunder in US history and it's not even close. It's such a blunder it will probably be written about in history books as the end of the post-1945 era. It's not lost on people that the US would rather let the world burn than split with its attack dog in the region, even slightly. We're also seeing that, as the author notes, a tiny power can strategically defeat a military that over $1 trillion a year is spent on.
The author rightly points out the lawlessness of everything going on and the destruction of trust in financial markets. All of this is correct. But I don't think the author really identifies the reason for the push for AI, and that is labor displacement and wage suppression. Or, to put it another way, further wealth concentration into the hands of the "oligarchs". I guess it's another version of "whatever our oligarchs want to steal this month, they get."
> I call the Hormuz crisis the biggest strategic blunder in US history and it's not even close.
This crisis created billions in arms sales, which is a success for some, especially as it made the other scandal go away.
A reminder that this author also believes Typescript to be a capitalist conspiracy from Microsoft.
Reminder that the article was good and there is a lot of truth in it.
Also, never trust Microsoft.
The lack of proper integer types might as well be a conspiracy.
the only thing suffering here is language. things, no matter how vigorously anthropomorphized, can neither die nor be born.
This has to be trolling. You can't claim in one sentence that language is suffering, then in the next claim that only living beings can die or be born. How is abstract concepts suffering fine, but abstract concepts dying isn't?
Capital letters are apparently suffering a little.
I really wonder how much time and effort people think they are saving by avoiding the Shift key.
It's not about saving time. It's a poseur thing.
Things can die and be born. The usage of those terms in relation to non-living entities and absent a description of biological progeny and senescence has been commonplace in English for centuries. For instance, the "birth" of a new era, or the "death" of disco.
You may find it easier to function in modern society without having such a strictly literal view of language. Idioms and metaphors do exist.