The universal theme with general purpose technologies is 1) they start out lagging behind current practices in every context 2) they improve rapidly, but 3) they break through and surpass current practices in different contexts at different times.
What that means is that if you work in a certain context, for a while you keep seeing AI get a 0 because it is worse than the current process. Behind the scenes the underlying technology is improving rapidly, but because it hasn’t cusped the viability threshold you don’t feel it at all. From this vantage point, it is easy to dismiss the whole thing and forget about the slope, because the whole line is under the surface of usefulness in your context. The author has identified two cases where current AI is below the cusp of viability: design and large scale changes to a codebase (though Codex is cracking the second one quickly).
The hard and useful thing is not to find contexts where the general purpose technology gets a 0, but to surf the cusp of viability by finding incrementally harder problems that are newly solvable as the underlying technology improves. A very clear example of this is early Tesla surfing the reduction in Li-ion battery prices by starting with expensive sports cars, then luxury sedans, then normal cars. You can be sure that throughout the first two phases, everyone at GM and Toyota was saying: Li-ion batteries are totally infeasible for the consumers we prioritize who want affordable cars. By the time the technology is ready for sedans, Tesla has a 5 year lead.
> The universal theme with general purpose technologies is 1) they start out lagging behind current practices in every context 2) they improve rapidly, but 3) they break through and surpass current practices in different contexts at different times.
I think you should say successful "general purpose technologies". What you describe is what happens when things work out. Sometimes things stall at step 1, and the technology gets relegated to a footnote in the history books.
We don’t argue that microwaves will be ubiquitous (which they aren’t, but close enough). We argue that microwaves are not an artificial general barbecue, as the makers might wish were true.
And we argue that microwaves will indeed never replace your grill as the makers, again, would love you to believe.
Your reasoning would be fine if there were a clear distinction, like between a microwave and a grill.
What we actually have is a physical system (the brain) that somehow implements the only instance of general intelligence we know of, and artificial systems of various architectures (mostly transformers) that are intended to capture the essence of general intelligence.
We are not at the microwave and the grill stage. We are at the birds and the heavier-than-air contraptions stage, when it's not yet clear whether those particular models will fly, or whether they need more power, more control surfaces, or something else.
There was a lot of hubris around microwaves. I remember a lot of images of full chickens being roasted in them. I've never once seen that "in the wild" as it were. They are good for reheating something that was produced earlier. Hey the metaphor is even better than I thought!
IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.
Claude Code is incredible. Where I work, there are an incredible number of custom agents that integrate with our internal tooling. Many make me very productive and are worthwhile.
I find it hard to buy into the opinions of non-SWEs on the uselessness of AI, solely because I think the innovation is lagging in other areas. I don't doubt that they don't yet have compelling AI tooling.
I'm a SWE, DBA, SysAdmin, I work up and down the stack as needed. I'm not using LLMs at all. I really haven't tried them. I'm waiting for the dust to settle and clear "best practices" to emerge. I am sure that these tools are here to stay but I am also confident they are not in their final form today. I've seen too many hype trains in my career to still be jumping on them at the first stop.
It's time to jump on the train. I'm a cranky, old, embedded SWE and Claude 4.5 is changing how I work. Before that I laughed off LLMs. They were trash. Claude still has issues, but damn, I think if I don't integrate it into my workflow I'll be out of work or relegated to working in QA or devops (where I'd likely be forced to use it).
No, it's not going to write all your code for you. Yes, your skills are still needed to design, debug, perform teamwork (selling your designs, building consensus, etc.), and so on. But it's time to get on the train.
I'm a SWE and also an art director. I have tried these tools and, the way I've also tried Vue and React, I think they're good enough for simple-minded applications. It's worth the penny to try them and look through the binoculars, if only to see how unoriginal and creatively limited the work most people in your field are actually doing must be, if they find this something that saves them time.
You don’t have to jump on the hype train to get anything out of it. I started using claude code about 4 months back and I find it really hard to imagine developing without now. Sure I’m more of a manager, but the tedious busywork, the most annoying part of programming, is entirely gone. I love it.
The tools have reached the point where no special knowledge is required to get started. You can get going in 5 minutes. Try Claude Code with an API key (no subscription required). Run it in the terminal in a repo and ask how something works. Then ask it to make a straightforward but tedious change. Etc.
I'm surprised these pockets of job security still exist.
Know this: someone is coming after this already.
One day someone from management will hear about a cost-saving story at a dinner table, and the words GPT, Cursor, Antigravity, reasoning, AGI will cause a buzzing in their ear. Waking up with tinnitus the next morning, they'll instantly schedule a 1:1 to discuss "the degree of AI use and automation"
> Know this: someone is coming after this already.
Yesterday, GitHub Copilot declared that my less AI-wary friend's new Laravel project was following all industry best practices for database design as it stored entities as denormalized JSON blobs in a MySQL 8.x database with no FKs, indexes, or constraints, all NULL columns (and using root@mysql as the login, of course); while all the Laravel controller actions' DB queries were RBAR loops that loaded all rows into memory before doing JSON deserialisation in order to filter rows.
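For the non-DBAs reading, here is a minimal sketch of the "RBAR" anti-pattern being described, in Python/SQLite rather than Laravel/PHP purely for illustration (the table and column names are made up), next to what a normalized, indexed schema lets the database do for you:

```python
import json
import sqlite3

conn = sqlite3.connect("shop.db")  # hypothetical database

# RBAR-style: pull every row, deserialize JSON blobs in app code, filter in memory
def active_users_slow():
    rows = conn.execute("SELECT payload FROM users").fetchall()
    return [json.loads(payload) for (payload,) in rows
            if json.loads(payload).get("active")]

# With a normalized schema the filter runs inside the database and can use an index
def active_users_fast():
    return conn.execute(
        "SELECT id, name FROM users WHERE active = 1"
    ).fetchall()
```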
I can’t reconcile your attitude with my own personal lived experience of LLMs being utterly wrong 40% of the time; while 50% of the time being no better or faster than if I did things myself; another 5% of the time it gets stuck in a loop debating the existence of the seahorse emoji; and the last 5% of the time genuinely utterly scaring me with a profoundly accurate answer or solution that it produced instantly.
Also, LLMs have yet to demonstrate an ability to tackle other real-world DBA problems… like physically installing a new SSD into the SAN unit in the rack.
Why would you wait for the dust to settle? Just curious. Productivity gains are real with the current form of LLMs. Guardrails and best practices can be learnt and self-imposed.
I'm in the same position, but I use AI to get a second opinion. Try it using the proper models, like the just-released Gemini 3 Pro, and include grounding. Don't use the free models; you'll be surprised at how valuable it can be.
I think the question is whether those AI tools make you produce more value. Anecdotally, the AI tools have changed my workflow and allowed me to produce more tools, etc.
They have not necessarily changed the rate at which I produce valuable outputs (yet).
There are a thousand "nuisance" problems which matter to me and me alone. AI allows me to bang these out faster, and put nice UIs on them. When I'm making an internal tool - there really is no reason not to put a high quality UX on top. The high quality UX, or the existence of a tool that only I use, does not mean my value went up - just that I can do work that my boss would otherwise tell me not to do.
A personal increase in satisfaction (such as "work that my boss would otherwise tell me not to do") is valuable - even if only to you.
The fact is, value is produced when something can be produced at a fraction of the resources required previously, as long as the cost is borne by the person receiving the end result.
No - this is a lesson an engineer learns early on. The time spent making the tool may still dwarf the time savings you gain from the tool. I may make tools for problems that only ever occurred or will occur once. That single incident may have occurred before I made the tool.
This also makes it harder to prioritize work in an organization. If work is perceived as "cheap" then it's easy to demand teams prioritize features that will simply never be used. Or to polish single user experiences far beyond what is necessary.
One thing I learned from this is to disregard all attempts at prioritizing based on the output's expected value for the users/business.
We prioritize now based on time complexity and, omg, it changes everything: if we have 10 easy bugfixes and one giant feature to do (random bad-faith example), we do 5 bugfixes and half the feature within a month and get an enormous satisfaction output from the users, who would never have agreed to do it that way in the first place. If we had listened, we would have done 75% of the feature and zero bug fixes and have angry users/clients whining that we did nothing all month...
The time spent on dev stuff absolutely matters, and churning quick stuff quickly provides more joy to the people who pay us. It's a delicate balance.
As for AI, for now, it just wastes our time. It always craps out half-correct stuff, so we optimized our time by refusing to use it, and we beat the teams that do use it that way.
I'm curious if you could share something about custom agents. I love Claude Code and I'm trying to get it into more places in my workflow, so ideas like that would probably be useful.
I think that's also because Claude Code (and LLMs in general) is built by engineers who think of their target audience as engineers; they can only think of the world through their own lenses.
Kind of like how, for the longest time, Google used to be best at finding solutions to programming problems and programming documentation: say, a Google built by librarians would have a totally different slant.
Perhaps that's why designers don't see it yet, no designers have built Claude's 'world-view'.
If you read a little further in the article, the main point is _not_ that AI is useless, but rather that it's not AGI god-building; it's a regular technology. A valuable one, but not infinite growth.
> But rather that it's not AGI god-building; it's a regular technology. A valuable one, but not infinite growth.
AGI is a lot of things, a lot of ever-moving targets, but it's never (under any sane definition) "infinite growth". That's already ASI territory / singularity and all that stuff. I see more and more people mixing the two, and arguing against ASI being a thing, when talking about AGI. "Human-level competence" is AGI. Super-human, ever-improving, infinite growth - that's ASI.
Whether and when we reach AGI is left for everyone to decide. I sometimes like to think about it this way: how many decades would you have to go back for people from that time to say that what we have today is "AGI"?
Once you have AGI, you can presumably automate AI R&D, and it seems to me that the recursive self-improvement that begets ASI isn't that far away from that point.
We already have AGI - it's called humans - and frankly it's no magic bullet for AI progress.
Meta just laid 600 of them off.
All this talk of AGI, ASI, super-intelligence, and recursive self-improvement etc is just undefined masturbatory pipe dreams.
For now it's all about LLMs and agents, and you will not see anything fundamentally new until this approach has been accepted as having reached the point of diminishing returns.
The snake oil salesmen will soon tell you that they've cracked continual learning, but it'll just be memory, and still won't be the AI intern that learns on the job.
Maybe in 5 years we'll see "AlphaThought" that does a better job of reasoning.
Humans aren't really being put to work upgrading the underlying design of their own brains, though. And 5 years is a blink of an eye. My five-year-old will barely even be turning ten years old by then.
All I see it doing, as a SWE, is limiting the speed at which my co-workers learn and worsening the quality of their output. Finally many are noticing this and using it less...
Your bosses probably think it's worth it if the outcome is getting rid of the whole host of y'all and replacing you with AWS Elastic-SWE instances. Which is why it's imperative that you maximize AI usage.
No one's switching to AI cold turkey. Think of it as training your own, cheaper replacement. SWEs & their line managers develop & test AI workflows, while giving the bosses time to evaluate AI capabilities, then hopefully shrink the headcount as close to 0 as possible without shrinking profits. Right now, it's juniors who're getting squeezed.
Where are the products? This site and everywhere else around the internet, on X, LinkedIn and so on, is full of crazy claims, and I have yet to see a product that people need and that actually works. What I'm experiencing is a gigantic enshittification everywhere: Windows sucks, web apps are bloated, slow and uninteresting. Infrastructure goes down even with "memory safe rust", burning millions and millions of compute for scaffolding stupid stuff. Such a disappointment.
Citing AI software as the only example of how AI benefits developing software has a bit of the touch of self-help books describing how to attain success and fulfillment by taking the example of writing self-help books.
I don’t disagree that these are useful tools, by the way. I just haven’t seen any discernible uptick in general software quality and utility either, nor any economic uptick that should presumably follow from being able to develop software more efficiently.
ChatGPT is... a chat with some "augmentation" feature, aka outputting rich HTML responses; nothing new except the generative side. Cursor is a VSCode fork with a custom model and a very good autocomplete integration. Again, where are the products? Where the heck is a Windows without the bloat that works reliably, before it becomes totally agentic? And therefore idiotic, since it doesn't work reliably.
I agree with everyone else, where is the Microsoft Office competitor created by 2 geeks in a garage with Claude Code? Where is the Exchange replacement created by a company of 20 people?
There are many really lucrative markets that need a fresh approach, and AI doesn't seem to have caused a huge explosion of new software created by upstarts.
Or am I missing something? Where are the consumer-facing software apps developed primarily with AI by smaller companies? I'm excluding big companies because in their case it's impossible to prove the productivity; they could be throwing more bodies at the problem and we'd never know.
The challenge in competing with these products is not code. The challenge competing in lucrative markets that need a fresh approach is also generally not code. So I’m not sure that is a good metric to evaluate LLMs for code generation.
I think the point remains, if someone armed with Claude Code could whip out a feature complete clone of Microsoft Office over the weekend (and by all accounts, even a novice programmer could do this, because of the magnificent greatness of Claude), then why don't they just go ahead and do it? Maybe do a bunch of them: release one under GPL, one under MIT, one under BSD, and a few more sold as proprietary software. Wow, I mean, this should be trivial.
Cool. So we established that it's not code alone that's needed, it's something else. This means that the people who already had that something else can now bootstrap the coding part much faster than ever before, spend less time looking for capable people, and truly focus on that other part.
So where are they?
We're not asking to evaluate LLM's for code. We're asking to evaluate them as product generators or improvers.
We had upstarts in the 80s, the 90s, the 2000s and the 2010s. Some game, some website, some social network, some mobile app that blew up. We had many. Not funded by billions.
So, where is that in the 2020s?
Yes, code is a detail (ideas too). It's a platform. It positions itself as the new thing. Does that platform allow upstarts? Or does it consolidate power?
Fine, where's the slop then? I expected hundreds of scammy apps to show up imitating larger competitors to grab a few bucks, but that isn't happening either. At least not any more than before AI.
This. Design tends to explore a latent space that isn't well documented. There is no Stack Overflow or Github for design. The closest we have are open sourced design systems like Material Design, and portfolio sites like Behance. These are not legible reference implementations for most use cases.
If LLMs only disrupt software engineering and content slop, the economy is going to undergo rapid changes. Every car wash will have a forward deployed engineer maintaining their mobile app, website, backend, and LLM-augmented customer service. That happens even if LLMs plateau in six months.
If you want to steal code, you can take it from GitHub and strip the license. That is what the Markov chains (https://arxiv.org/abs/2410.02724) do.
It's a code laundering machine. Software engineering has a higher number of people who have never created anything by themselves and have no issues with copyright infringement. Other professions still tend to take a broader view. Even unproductive people in other professions may have compunctions about stealing other people's work.
Did you read the essay? It never claimed that AI was useless, nor was the ultimate point of the article even about AI's utility—it was about the political and monetary power shifts it has enabled and their concomitant risks, along with the risks the technology might pose for society.
This ignorance or failure to address these aspects of the issue, and the sole focus on its utility in a vacuum, is precisely the blinkered perspective that will enable the consolidations of power the essay is worried about. The people pushing this stuff are overjoyed that so few people seem to be paying any attention to the more significant shifts they are enacting (as the article states, land purchases, political/capital power accumulation, reduction of workforces, operating costs and labor power... the list goes on).
Software engineers have been automating our own work since we built the first assembler. So far it's just made us more productive and valuable, because the demand for software has been effectively unlimited.
Maybe that will continue with AI, or maybe our long-standing habit will finally turn against us.
> Software engineers have been automating our own work since we built the first assembler.
The declared goal of AI is to automate software engineering entirely. This is in no way comparable to building an assembler. So the question is mostly about whether or not this goal will be achieved.
Still, nobody is building these systems _for_ me. They're building them to replace me, because my living is too much for them to pay.
Automating away software engineering entirely is nothing new. It goes all the way back to BASIC and COBOL, and later visual programming tools, Microsoft Access, etc. There have been innumerable attempts to somehow get by without needing those pedantic and difficult programmers and all their annoying questions and nit-picking.
But here's the thing: the hard part of programming was never really syntax; it was about having the clarity of thought and conceptual precision to build a system that normal humans find useful despite the fact that they will never have the patience to understand it, let alone debug failures. Modern AI tools are just the next step in abstracting away syntax as a gatekeeper function, but the need for precise systemic thinking is as glaringly necessary as ever.
I won't say AI will never get there—it already surpasses human programmers in much of the mechanical and rote knowledge of programming language arcana—but it still is orders of magnitude away from being able to produce a useful system when specified by someone who does not think like a programmer. Perhaps it will get there. But I think the barrier at that point will be the age-old human need to have a throat to choke when things go sideways. Those in power know how to control and manipulate humans through well-understood incentives, and this applies all the way to the highest levels of leadership. No matter how smart or competent AI is, you can't just drop it into those scenarios. Business leaders can't replace human accountability with an SLA from OpenAI; it just doesn't work. Never say never, I suppose, but I'd be willing to bet the wheels come off modern civilization long before the skillset of senior software engineers becomes obsolete.
"The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe."
This is the crux of the OP's argument, adding in that (in the meantime) the incumbents and/or bad actors will use it as a path to intensify their political and economic power.
But to me the article fails to:
(1) actually make the case that AI is not going to be 'valuable enough', which is a sweeping and bold claim (especially in light of its speed of improvement); and
(2) quantify AI's true value versus the crazy overhyped valuation, which is admittedly hard to do - but it matters whether we're talking 10% or 100x overvalued.
If all of my direct evidence (from my own work and life) is that AI is absolutely transformative and multiplies my output substantially, AND I see that that trend seems to be continuing - then it's going to be a hard argument for me to agree with #1 just because image generation isn't great (and OP really cares about that).
Higher Ed is in crisis; VC has bet their entire asset class on AI; non-trivial amounts of code are being written by AI at every startup; tech cos are paying crazy amounts for top AI talent... in other words, just because it can't one-shot some complex visual design workflow does not mean (a) it's limited in its potential, or (b) that we fully understand how valuable it will become given the rate of change.
As for #2 - well, that's the whole rub isn't it? Knowing how much something is overvalued or undervalued is the whole game. If you believe it's waaaay overvalued with only a limited time before the music stops, then go make your fortune! "The Big Short 2: The AI Boogaloo".
My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain.
This might be the money quote, encapsulating the difference between people who say their work benefits from LLMs and those who don't. Expecting it to one-shot your entire module will leave you disappointed, using it for code completion, generating documentation, and small-scale agentic tasks frees you up from a lot of little trivial distractions.
Worth what? I probably agree, the greenfield rote mechanical tasks of putting together something like a basic interface, somewhat thorough unit tests, or a basic state container that maps to a complicated typed endpoint are things I'd procrastinate on or would otherwise drain my energy before I get started.
But that real tangible value does need to have an agreeable *price* and *cost* depending on the context. For me, that price ceiling depends on how often and to what extent it's able to contribute to generating maximum overall value, but in terms of personal economic value (the proportion of my fixed time I'm spending on which work), if it's on an upward trend of practical utility, that means I'm actually increasing the proportion of dull tasks I'm spending my time on... potentially.
Kind of like how having a car makes it so comfortable and easy and ostensibly fast to get somewhere for an individual—theoretically freeing up time to do all kinds of other activities—that some people justify endless amounts of debt to acquire them, allowing the parameters of where they're willing to live to shift further and further to the point where nearly all of their free time, energy, and money is spent on driving, all of their kids depend on driving, and society accepts it as an unavoidable necessity; all the deaths, environmental damage, side-effects of decreased physical activity and increased stress along for the ride. Likewise how various chat platforms tried to make communication so friction-less that I actually now want to exchange messages with people far less than ever before, effectively a foot gun
Maybe America is once again demolishing its cities so they can plow through a freeway, and before we know it, every city will be Dallas, and every road will be like commuting from San Jose to anywhere else—metaphorically of course, but also literally in the case of infrastructure build—when will it be too late to realize that we should have just accepted the tiny bit of hardship of walking to the grocery store?
------
All of that might be a bit excessive lol, but I guess we'll find out
I'm in the sciences, but at my first college I took a programming course for science majors. We were partnered up for an end of semester project. I didn't quite know how to get started, but my partner came to me with a bunch of pieces of the project and it was easy to put them together and then tinker with them to make it work.
An agentic git interface might be nice, though hallucinations seem like they could create a really messy problem. Still, you could just roll back in that case, I suppose. Anyways, it would be nice to tell it where I'm trying to get to and let it figure out how to get there.
"This lump of code is producing this behaviour when I don't want to"
Is a quick way to find/fix bugs (IME)
BUT it requires me to understand the response (sometimes the AI hits the nail on the head, sometimes it says something that makes my brain go: that's not it, but now I know exactly what it is).
Honestly one the best use cases I've found for it is creating configs. It used to be that I was able to spend a week fiddling around with, say, nvim settings. Now I tell an LLM what I want and it basically gives it to me without having to do trial and error, or locating some obscure comment from 2005 that tells me what I need to know.
If it's a less trodden path expect it to hallucinate some settings.
Also a regular thing I see is that it adds some random other settings without comment and then when you ask it about them it goes, whoops, yeah, those aren't necessary.
This seems pretty close to my own view, although I'm not sure the secret power grab is about land and water so much as just getting into everyone's brains. What I'd add is that it's not just a front for consolidation of resources and power, it's also a result of a preexisting consolidation of resources and power that we've ignored for too long. If we had had a healthier society 5 or 10 years ago we could have withstood this. Now, it's not so clear.
The safety regulations for every new technology are written in blood. Every time. AI won’t be any different, we’ll push it as hard & fast as our tolerance for human suffering allows.
The casualty is functional literacy, and at least half of the murder was handing smartphones out to children and then putting our lives on the web (social)
Very good article. With regards to the guy being a designer, he is IMHO still correct with regards to layouts. Currently LLMs are still pretty clueless about layouts. Also, SWE is much more than coding. Even in the coding area there is much more room for improvement. The idea that you would not need Software Engineers soon is brain dead.
I'd like more people to talk about AI and surveillance. I think that is going to be one of its biggest impacts on society(ies).
We are a decade or two into having massive video coverage, such that you are probably on someone's camera much of your day out in the world, and video feeds are increasingly cloud hosted.
But nobody could possibly watch all that video. Even for cameras specifically controlled by the police, the footage has already outstripped the ability to have humans monitoring it. At best you could refer to it when you had reason to think there'd be something on it, and even that was hugely expensive in human time.
Enter AI. "Find where Joe Schmoe was at 3:30pm yesterday and show me the video" "Give me a written summary of all the cars which crossed into the city from east to west yesterday afternoon." "Give me the names of everyone who entered the convenience store at 2323 Monument St last week." "Give me a written summary of Sue Brown's known activities in November."
The total surveillance society is coming.
I think it will be the biggest impact AI has on society in retrospect. I, for one, am not looking forward to it.
> I'd like more people to talk about AI and surveillance. I think that is going to be one of its biggest impacts on society(ies).
We lost that fight when literally no one fought back against LPR (license plate recognition). LPR cameras were later enabled for facial rec. That data is actually super easy to trace. No LLMs necessary.
Funny story: in my city, when we moved to ticketless public transport, a few people were worried about surveillance. "Police won't have access to the data," they said. The first request for data from the police came less than 7 days into the system's operation, and an arrest was made on that basis. It's basically impossible to travel near any major metro, by any means, and not be tracked and deanonymised later.
Now if you have no understanding of history or politics, this might not shock you. But I find it hard to imagine a popular uprising, even a peaceable one, being effective in this environment.
Actually LLMs introducing a compounding 3% error in reviewing and collating this data might be the best thing to ever happen.
I think you're describing technology that has existed for 15+ years and is already pretty accurate. It's not even necessarily "AI"/ML. For example, I think OpenALPR (automated license plate recognition) is all "classical" computer vision. The most accurate facial/gait/etc. recognition is most likely ML-based with a state-of-the-art model, admittedly, and perhaps the threshold of accuracy for large-scale usefulness was only crossed recently.
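For the curious, "classical" here means roughly this kind of pipeline. Below is a toy OpenCV sketch (not OpenALPR's actual code, just an illustration of the idea): edge detection plus contour filtering to find plate-shaped rectangles, with no learned model involved.

```python
import cv2

def find_plate_candidates(image_path: str):
    """Return bounding boxes of regions that are roughly license-plate shaped."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 11, 17, 17)   # smooth noise while keeping edges
    edges = cv2.Canny(gray, 30, 200)               # edge map
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h) if h else 0
        if 2.0 < aspect < 6.0 and w > 60:          # plates are roughly wide rectangles
            candidates.append((x, y, w, h))
    return candidates
```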
The guard rails IMHO are not technological but who owns the cameras/video storage backend, when/if a warrant is needed, and the criteria for granting one.
What about surveillance? Lately I've been feeling that is what it's really for. Because our data can be queried in a much more powerful way when it has all been used to train LLMs.
- Companies had to go in that direction. You cannot just fall behind in the AI gold rush.
- These solutions are easier for people, and therefore will win in the long run.
- These solutions benefit companies because of the surveillance data they now have access to. They always had some data, but now they collect and process even more.
- Those who control AI will be the kings of the future, so naturally everyone will be running toward this goal.
The AI race is presumably won by whomever can automate AI R&D first, thus everyone who is in an adjacent field will see the incremental benefits sooner than those further away. The further removed, the harder the takeoff once it happens.
This notion of a hard takeoff, or singularity, based on self-improving AI, is based on the implicit assumption that what's holding AI progress back is lack of AI researchers/developers, which is false.
Ideas are a penny a dozen - the bottleneck is the money/compute to test them at scale.
What exactly is the scenario you are imagining where more developers at a company like OpenAI (or maybe Meta, which has just laid off 600 of them) would accelerate progress?
It's not hard to believe that adding AI researchers to an AI company marginally increases the rate of progress; otherwise why would the companies be clamouring for talent with eye-watering salaries? In any case, I'm not just talking about AI researchers—AGI will not only help with algorithmic efficiency improvements, but will probably make spinning up chip fabs that much easier.
The eye-watering salary you probably have in mind is for a manager at Meta, the same company that just laid off 600 actual developers. Why just Meta, not other companies - because they are blaming poor Llama performance on the manager, it seems.
Algorithmic efficiency improvements are being made all the time, and will only serve to reduce inference cost, which is already happening. This isn't going to accelerate AI advance. It just makes ChatGPT more profitable.
Why would human level AGI help spin up chip fabs faster, when we already have actual humans who know how to spin them up, and the bottleneck is raising the billions of dollars to build them?
All of these hard take-off fantasies seem to come down to: We get human-level AGI, then magic happens, and we get hard take-off. Why isn't the magic happening when we already have real live humans on the job?
To me the hard take off won't happen until a humanoid robot can assemble another humanoid robot from parts, as well as slot in anywhere in the supply chain where a human would be required to make those parts.
Once you have that you functionally have a self-replicating machine which can then also build more data centers or semi fabs.
Another major use case for it is enabling students to more easily cheat on their homework. Which is why it is probably going to end up putting Chegg out of business.
Many people use AI as their source for knowledge. Even though it is often wrong or misleading, its advice is better on average than their own judgement or the judgement of people they know. An AI that is "smarter" than 95%(?) of the population, even if it does not reach superintelligence, will be a very big deal.
This means to me AI is rocket fuel for our post-truth reality.
Post-truth is a big deal and it was already happening pre-AI. AGI, post-scarcity, post-humanity are nerd snipes.
Post-truth on the other hand is just a mundane and nasty sociological problem that we ran head-first into and don't know how to deal with. I don't have any answers. Seems like it'll get worse before it gets better.
AI can interpolate in the space of search results, yielding results in between the hits that a simple text index would return.
It is also a fuzzy index with the unique ability to match on multiple poorly specified axes at once in a very high-dimensional search space. This is notoriously difficult to code with traditional computer science techniques. Large language models are in some sense optimal at it, instead of "just a little bit better than a total failure", which is what we had before.
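A rough sketch of that idea: represent the query and the documents as vectors and rank by cosine similarity. The embed() function below is a placeholder for whatever embedding model you have access to; everything here is illustrative, not a reference implementation.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return an embedding vector for `text` from your model of choice."""
    raise NotImplementedError

def fuzzy_search(query: str, documents: list[str], top_k: int = 5):
    q = embed(query)
    doc_vecs = np.stack([embed(d) for d in documents])
    # cosine similarity between the query and every document
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    best = np.argsort(-sims)[:top_k]
    return [(documents[i], float(sims[i])) for i in best]
```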
Just today I needed to find a library I only vaguely remembered from years ago. Gemini found it in seconds based on the loosest description of what it does.
That is a technology that is getting difficult to distinguish from magic.
1. Yes Capitalism
2. Just waiting for the bubble to pop, when investors wake up to only Nvidia making money, and all that money flows somewhere else.
> To think that with enough compute we can code consciousness is like thinking that with enough rainbows one of them will have a pot of gold at its end.
What does consciousness have to do with AGI or the point(s) the article is trying to make? This is a distraction imo.
It's a funny analogy, because what's missing for the rainbows with pots of gold is magic and fairytales... so what's missing for consciousness is also magic and fairytales? I've yet to see any compelling argument for believing enough compute wouldn't allow us to code consciousness.
Yes, that's just it though, it's a logic argument. "Tell me why we aren't just stochastic parrots!" is more logically sound than "God made us", but that doesn't de facto make it "the correct model of reality".
I am suspicious of the idea that the world can be modeled linearly. That physical reality is non-linear is also more logically sound, so why is there such a clear straight line from compute to consciousness?
I've had *very* much the opposite experience. Very nearly every AI skeptic take I read has exactly this opinion, if not always so well-articulated (until the last section, which lost me). But counterarguments always attack the complete strawman of "AI is utterly useless," which very few people, at least within the confines of the tech and business commentariat, are making.
Measuring productivity in software development, or even white collar jobs in general, let alone the specific productivity gains of even things like the introduction of digital technology and the internet at all, let alone stuff like static vs dynamic types, or the productivity difference of various user interface modalities, is notoriously extremely difficult. Why would we expect to be able to do it here?
I found the last section to be the most exciting part of the article: describing a conspiracy around AI development that is not about the AI, but about the power that a few individuals will gain by building data centers that rival the size, power, and water consumption of small cities, which will be used to gain political power.
> It can take enormous amounts of time to replicate existing imagery with prompt engineering, only to have your tool of choice hiccup every now and again or just not get some specific aspect of what a person had created previously.
Yes... I don't think the current process of using a diffusion model to generate an image is the way to go. We need AI that integrates fully within existing image and design tools, so it can do things like rendering SVG, generating layers and manipulating them, the same as we would with the tool, rather than one-shot generating the full image via diffusion.
Same with code -- right now, so much AI code gen and modification, as well as code understanding, is done via the raw LLM. But we have great static analysis tools available (i.e. what IDEs do to understand code). LLMs that have access to those tools will be more precise and efficient.
It's going to take time to integrate LLMs properly with tools. And train LLMs to use the tools the best way. Until we get there, the potential is still more limited. But I think the potential is there.
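As a toy illustration of the kind of "static analysis tool" an LLM could call instead of re-reading raw source: a hypothetical sketch using Python's ast module to hand back structured facts (function names, arguments, line numbers). This is not any particular product's API, just the general shape of the idea.

```python
import ast

def summarize_module(source: str) -> list[str]:
    """Return a compact, structured summary of the functions in a module."""
    tree = ast.parse(source)
    summary = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            summary.append(f"line {node.lineno}: def {node.name}({args})")
    return summary

print(summarize_module("def add(a, b):\n    return a + b\n"))
```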
> But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?
> There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust them.
It doesn't matter that they murdered software engineering and destroyed tens of thousands of careers; once it bursts, it will be an "oops" and on to the next hype.
I like to reduce things to absurdity to put them into perspective.
The hype of AI is to sell illusion to naive people.
It is like creating a hammer that drives nails by itself... like cars that choose the path by themselves.
So stop thinking AI is intelligent... it is merely an advanced tool that demands skill and creativity like any other. Its output is limited by the ability of its user.
The worry should be the amount of resources wasted on vanity (hammers in newborn hands) or nails in the wrong place (viral fake content targeted at unaware people).
Like in the Industrial Revolution, when people got reduced to screw tighteners, minds will be reduced to bad prompters expecting wonders and producing bad content, or the same. A step back in civilization, except for the money makers and thinkers, until the AI revolution gives birth to its Karl Marx.
Could have been, or rather, they thought it would, but the open models from China that you can just run locally are changing the game. Distributing power instead of concentrating it.
It's pretty clear that the financialization aspect of AI is a bubble. There's way too much market cap created by trading debt back and forth. How well AI will work remains an open question at this point.
Also, it may be true that these companies theoretically have the cash flow to cover the spending, but that doesn't mean that they will be comfortable with that risk, especially as that risk becomes more likely in some kind of mass extinction event amongst AI startups. To concretize that a bit: the remote possibility of having to give up all your profits for 2 years to pay off DC investment is fine at a 1% chance of happening, but maybe not so OK at a 40% chance.
My new thing with articles like these: just search for the word "water".
> I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water. Datacenters that outburn cities to keep the data churning are big, expensive, and have to be built somewhere. The deals made to develop this kind of property are political — they affect cities and states more than just about any other business run within their borders.
> I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water.
If you just wanted land, water, and electricity, you could buy them directly instead of buying $100 million of computer hardware bundled with $2 million worth of land and water rights. Why are high end GPUs selling in record numbers if AI is just a cover story for the acquisition of land, electricity, and water?
But with this play they can inflate their company holdings and cash out in new rounds. It’s the ultimate self enrichment scheme! Nobody wants that crappy piece of land but now it’s got GPUs and we can leverage that into a loan for more GPUs and cash out along the way.
Valid question. What the OP talks about though is that these things were not for sale normally. My takeaway from his essay is that a few oligarchs get a pass to take over all energy, by means of a manufactured crisis.
> When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style, you have a sudden imbalance of power that looks like a cancer spreading within a national body.
He could have explained that better. Try not to look at the media drama the political actors give you each day, but look at the agenda the real powers lay bare:
- Trump is threatening an oil-rich neighbor with war. A complete, expensive-as-hell army blowing up 'drug boats' (so they claim) to help the press sell it as a war on drugs. Yeah, right.
- Green energy projects, even running ones, get cancelled. Energy from oil and nuclear is capital intensive and at the same time completely outshone by solar and battery tech. So the energy card is a strong one for directing policy towards your interests.
If you can turn the USA into a resource economy like Russia, then you can rule like a Russian oligarch. That is also why the admin sees no problem in destroying academia or other industries via tariffs; controlling resources is easier and more predictable than having to rely on an educated populace that might start to doubt the promise of the American Dream.
I did not think about it that way, but it makes perfect sense. And it is really scary. It hasn't even been a year since Trump's second term started. We still have three more years left.
I believe it’s a bubble. Every app interface is becoming similar to ChatGPT, claiming they’ll “help you automate,” while drifting away from the app’s original purpose.
Most of this feels like people trying to get rich off VC money — and VCs trying to get rich off someone else’s money.
Best case is hardly a bubble. I definitely think this is a new paradigm that'll lead to something, even if the current iteration won't be the final version and we've probably overinvested a slight bit.
Same as the dot-com bubble. Fundamentals were wildly off for some businesses, but you can also find almost every business that failed then running successfully today.
Personally I don't think sticking AI in every software is where the real value is, it's improving understanding of huge sets of data already out there. Maybe OpenAI challenges Google for search, maybe they fail, I'm still pretty sure the infrastructure is going to get used because the amount of data we collect and try to extract value from isn't going anywhere.
> There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have.
> Again, I think that AI is probably just a normal technology, riding a normal hype wave. And here’s where I nurse a particular conspiracy theory: I think the makers of AI know that.
I think those committing billions towards AI know it too. It's not a conspiracy theory. All the talk about AGI is marketing fluff that makes for good quotes. All the investment in data centers and GPU's is for regular AI. It doesn't need AGI to justify it.
I don't know if there's a bubble. Nobody knows. But what if it turns out that normal AI (not AGI) will ultimately provide so much value over the next couple decades that all the data centers being built will be used to max capacity and we need to build even more? A lot of people think the current level of investment is entirely economically rational, without any requirement for AGI at all. Maybe it's overshooting, maybe it's undershooting, but that's just regular resource usage modeling. It's not dependent on "coding consciousness" as the author describes.
What is the value of technology which allows people to communicate clearly with other people of any language? That is what these large language models have achieved. We can now translate pretty much perfectly between all the languages in the world. The curse from the Tower of Babel has been lifted.
There will be a time in the future, when people will not be able to comprehend that you couldn't exchange information regardless of personal language skills.
So what is the value of that? Economically, culturally, politically, spiritually?
Language is a lot deeper than that. It's like if I say "we speak the same language", it means a lot more than just the ability to translate. It's talking about a shared past and worldview and hopefully future which I/we intend to invest in.
You could make the same argument about video conferencing: Yes, you can now talk to anyone anywhere anytime, and it's amazing. But somehow all big companies are convinced that in-person office work is more productive.
Machine translation was horrible and completely unreliable before LLMs. And human translators are very expensive and slow in comparison.
LLM is for translation as computers were for calculating. Sure, you could do without them before. They used to have entire buildings with office workers whose job it was to compute.
I don't think you understand how off that statement is. It's also pretty ignorant considering Google Translate barely worked at all for many languages. So no, it didn't work great and even for the best possible language pair Google Translate is not in the same ballpark.
Not really long before, although I suppose it's relative. Google translate was pretty garbage until around 2016-2017 and then it started really improving
It really didn't. There were many languages which it couldn't handle at all, just making completely garbled output. It wasn't possible to use Google Translate professionally.
We could communicate with people before LLMs just fine though? We have hand gestures and some people learn multiple languages and google translate was pretty solid. I got by just fine in countries where I didn’t know the language because hand gestures work or someone speaks English.
What is the value of losing our uniqueness to a computer that lies and makes us all talk the same?
Incredible that we happen to be alive at the exact moment humanity peaked in its interlingual communication. With Google Translate and hand gestures there is no need to evolve it any further.
You can maybe order in a restaurant or ask the way with hand gestures. But surely you must be able to take a higher perspective than your own, and realize that there's enormous amounts of exchange between nations with differing language, and all of this relies on some form of translation. Hundreds of millions of people all over the world have to deal with language barriers.
Google Translate was far from solid; the quality of translations was so bad before LLMs that it simply wasn't an option for most languages. It would sometimes even translate numbers incorrectly.
LLMs are here and Google Translate is still bad (surely, if it were as easy as just plugging the miraculous, perfect LLMs into it, it would be perfect now?). I don't think people who believe we've somehow solved translation actually understand how much it still deals extremely poorly with.
And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)
What argument are you making? LLM translating is available to anybody to try and use right now, and you can use services like Kagi Translate or DeepL to see the evidence for yourself that they make excellent translations. I honestly don't care what Google Translate does, because nobody who is serious about translation uses it.
> And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)
The kind of deeply understood communication you are demanding is usually impossible even between people who have the same native tongue, from the same town, and even within the same family. And people can misunderstand each other just fine without the help of AI. But is it really better to understand nothing at all than to not understand every nuance?
>That’s quite a contradiction. A datacenter takes years to construct. How will today’s plans ever enable a company like OpenAI to catch up with what they already claim is a computational deficit that demands more datacenters?
It's difficult to steelman such a weird argument. If a deficit can't be remedied immediately, it should never be remedied?
This is literally how capex works. You purchase capacity now, based on receiving it, and the rewards of having it, in the future.
>And yet, these deals are made. There’s a logic hole here that’s easily filled by the possibility that AI is a fitting front for consolidation of resources and power.
No you just made some stuff up, and then suggested that your own self inflicted confusion might be better explained with some other stuff you made up.
>Globalism eroded borders by crossing them, this new thing — this Privatism — erodes them from within.
What? It's called Capitalism. You don't need a new word for it every 12 months. Emotive words like "erosion" say nothing but are just targeted at, like, stirring people up. Demonstrate the erosion.
>Remember, datacenters are built on large pieces of land, drawing more heavily from existing infrastructure and natural resources than they give back to the immediately surrounding community
How did you calculate this? Show your work. I'm pretty sure that if someone made EQ SY1, SY2 and SY3 disappear, the local community, the distant community, and communities all over the planet would be negatively affected.
>When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style
To take the overwrought, disproportionate, emotive language out of this:
"How are private entities allowed to build big things I don't like, including power sources I don't like?"
The answer is that many people are allowed to do things you don't approve of. This is normal. This is society. Not everything needs the approval of the blogerati. Such a world would be horrific.
>when the infrastructure that powers AI becomes more valuable than the AI itself, when the people who control that infrastructure hold more sway over policy and resources than elected governments.
Show your working. How are the infrastructure providers going to run the government? I believe historically big infrastructure projects tend to die, require some government inducements and then go away. People had similar misgivings about the railroads in the US; in fact it was a big bugbear for Henry George, I believe. Is Amtrak secretly pulling the strings of the US Deep State? If the US Government is weak to private interests, that's up to the good burghers of yankistan to correct at the polls. If electoral politics don't work, then other means seppos find scary might be required. Freaking out about AI investment seems like a weird place to suddenly be concerned about this.
See also: AT&T Long Lines, hydroelectric dams, nuclear energy, submarine cable infrastructure. If political power comes from owning infrastructure we should be more worried about, like, Hurricane Electric. It's demonstrable that people who build big infra don't run the planet. Heck, Richest Man and Weird Person Darling Elon Musk doesn't honestly command much infrastructure; he mostly just lives on hype and speculation.
>but I’m really just following the money and the power to their logical conclusion.
The more you need to invoke a "logical conclusion", the less genuine and logical the piece reads.
>Maybe AI will do everything humans do. Maybe it will usher in a new society defined by something other than the balancing of labor units and wealth units. Maybe AGI — these days defined as a general intelligence that exceeds human kind in all contexts — will emerge and “justify” all of this. Maybe.
Probably things will continue on as they always have, but the planet will have more datacenter capacity. Likely, if the AI bubble does burst, datacenter capacity will be cheaper.
>The market concentration and incestuous investment shell game is real.
Yes? And that will probably explode and we will see AI investors jumping out of buildings. Nvidia is in a position right now to underwrite big AI datacentre loans, which could completely offset the huge gains they have made. What about it? Again, you demonstrate nothing.
>The infrastructure is real. The land deals are real.
Yes. Remember to put 2 truths before your lie.
>The resulting shifts in power are real.
So far they exist in your mind.
>we will find ourselves citizens of a very new kind of place that no longer feels like home.
Reminds me of an old argument that a raving white supremacist used to push on me: that "justice", as he defined it, was that society not change so old people won't be scared by it. That having a new (possibly browner) person running the local store was tantamount to, and justification for, genocide.
Change is a constant. That change making you sad is not in and of itself a bad thing. Please adjust accordingly.
AI is not overhyped. It's like saying going to the moon is overhyped.
First of all this AI stuff is next level. It's as great, if not greater than going to space or going to the moon.
Second, the rate at which it is improving makes the hype relevant and realistic.
I think what's throwing people off are two things. First, people are just overexposed to AI, and the overexposure is causing people to feel AI is boring and useless slop. Investments are heavy into AI, but the people who throw that money around are a minority; overall the general public is actually UNDER-hyping AI. Look at everyone on this thread. Everyone, and I mean everyone, isn't overly optimistic about AI. Instead the irony is... everyone, and I mean everyone again, strangely thinks the world is overhyped about AI, and they are wrong. This thread and practically every thread on HN is a microcosm of the world, and the sentiment is decidedly against AI. Think about it like this: if Elon Musk invented a car that cost $1 and could travel at FTL speeds to anywhere in the universe, then interstellar travel would be routine and boring within a year. People would call it overhyped.
Second, the investment and money spent on AI is definitely overhyped. Right? Think about it. If we quantify the utility and achievement of what AI can currently do and what it's projected to achieve, the math works out. If you quantify the profitability of AI, the math suddenly doesn't work out.
Seems like an apt comparison; it was a massive money sink and a regular person gained absolutely nothing from the moon landing, it's just the big organization (NASA, US government) that got the bragging rights.
The best AI is the one that is hidden, silent, and ubiquitous; it works and you don't feel it's there. Apple devices, and really many modern devices before the LLM hype era, had a lot of AI we didn't know about. Today if I read that a product has AI I feel let down, because most of the time it's a poorly integrated chatbot that, if you're willing to spend some time, will sooner or later impersonate Adolf Hitler and, who knows, maybe leak sensitive data or API metadata. The bubble needs to burst so we can go back to silently packing products with useful AI features without telling the world.
This is what I wonder too: what is the end game?
Advance technology so that we can have anything that we want, whenever we want it.
Fly to distant galaxies.
Increase the options available to us and our offspring.
But ultimately, what will we gain from that?
Is it to say that we did it or is it for the pleasure of the process?
If it's for pleasure, then why have we made our processes so miserable for everyone involved? If it's to say that we did it, couldn't we not and say that we did? That's the whole point of fantasy.
Is Elon using AI to supplement his own lack of imagination?
I could be wrong, this could be nonsense. I just can't make sense of it.
I see, Fly was perhaps the wrong word to use here.
Phase-Shift to new galaxies is probably the right term.
Where you change your entire system's resonant frequency, to match what exists in the distant galaxy.
Less of transportation, and more of a change of focus.
Like the way we can daydream about a galaxy, then snap-back to work.
It's the same mechanism, but with enhanced focus you go from not just visualising > feeling > embodying > grounding in the new location.
We do it all the time, however because we require belief that it's possible in order to maintain our location, whenever we question where we are - we're pulled back into the reality that questions things (it's a very Earth centric way of seeing reality)
If things were left to their own devices, the end game would be a civilization like Stroggos: the remaining humans will choose to fuse with machines, as it would give them an advantage. The first tactical step will be to nudge people to give up more and more agency to AI companions. I doubt this future will materialise, though.
There are some flavors of AI doomerism that I'm unwilling to fight - the proliferation of AI slop, the inability of our current capital paradigm to adjust such that loads of people don't become overnight-poor, those sorts of things.
If you tell me, though, that "We installed AI in a place that wasn't designed around it and it didn't work" you're essentially complaining that your horse-drawn cart broke when you hooked it up to your HEMI. Of course it didn't work. The value proposition built around the concept of long dev cycles with huge teams and multiple-9s reliability deliverables is not what this stuff excels at.
I have churned out perfectly functional MVPs for tens of projects in a matter of weeks. I've created robust frameworks with >90% test coverage for fringe projects that would never have otherwise gotten the time budget allotted to them. The boundaries of what can be done aren't being pushed up higher or down deeper, they're being pushed out laterally. This is very good in a distributed sense, but not so great for business as usual - we've had megacorps consolidating and building vertically forever and we've forgotten what it was like to have a robust hacker culture with loads of scrappy teams forging unbeaten paths.
Ironically, VCs have completely missed the point in trying to all build pickaxes - there's a ton of mining to do in this new space (but the risk profile makes the finance-pilled queasy). We need both.
AI is already very good at some things, they just don't look like the things people were expecting.
The coming of AI seems one of those things, like the agricultural revolution or industrial revolution, that is kind of inevitable once it starts. All the business of who pays how much for which stock and what price is sensible and which algorithm seems kind of secondary.
The universal theme with general purpose technologies is 1) they start out lagging behind current practices in every context 2) they improve rapidly, but 3) they break through and surpass current practices in different contexts at different times.
What that means is that if you work in a certain context, for a while you keep seeing AI get a 0 because it is worse than the current process. Behind the scenes the underlying technology is improving rapidly, but because it hasn’t cusped the viability threshold you don’t feel it at all. From this vantage point, it is easy to dismiss the whole thing and forget about the slope, because the whole line is under the surface of usefulness in your context. The author has identified two cases where current AI is below the cusp of viability: design and large scale changes to a codebase (though Codex is cracking the second one quickly).
The hard and useful thing is not to find contexts where the general purpose technology gets a 0, but to surf the cusp of viability by finding incrementally harder problems that are newly solvable as the underlying technology improves. A very clear example of this is early Tesla surfing the reduction in Li-ion battery prices by starting with expensive sports cars, then luxury sedans, then normal cars. You can be sure that throughout the first two phases, everyone at GM and Toyota was saying: Li-ion batteries are totally infeasible for the consumers we prioritize who want affordable cars. By the time the technology is ready for sedans, Tesla has a 5 year lead.
> The universal theme with general purpose technologies is 1) they start out lagging behind current practices in every context 2) they improve rapidly, but 3) they break through and surpass current practices in different contexts at different times.
I think you should say successful "general purpose technologies". What you describe is what happens when things work out. Sometimes things stall at step 1, and the technology gets relegated to a footnote in the history books.
Yeah, that comment is heavy on survivor bias. The universal theme is that things go the way they go.
We don’t argue that microwaves will be ubiquitous (which they aren’t, but close enough). We argue that microwaves are not an artificial general barbecue, as the makers might wish were true.
And we argue that microwaves will indeed never replace your grill as the makers, again, would love you to believe.
Your reasoning would be fine if there were a clear distinction, like between a microwave and a grill.
What we actually have is a physical system (the brain) that somehow implements what we know as the only approximation of general intelligence and artificial systems of various architectures (mostly transformers) that are intended to capture the essence of general intelligence.
We are not at the microwave and the grill stage. We are at the birds and the heavier-than-air contraptions stage, when it's not yet clear whether those particular models will fly, or whether they need more power, more control surfaces, or something else.
There was a lot of hubris around microwaves. I remember a lot of images of full chickens being roasted in them. I've never once seen that "in the wild" as it were. They are good for reheating something that was produced earlier. Hey the metaphor is even better than I thought!
“As a designer…”
IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.
Claude code is incredible. Where I work, there are an incredible number of custom agents that integrate with our internal tooling. Many make me very productive and are worthwhile.
I find it hard to buy into opinions of non-SWEs on the uselessness of AI, solely because I think the innovation is lagging in other areas. I don't doubt they don't yet have compelling AI tooling.
I'm a SWE, DBA, SysAdmin, I work up and down the stack as needed. I'm not using LLMs at all. I really haven't tried them. I'm waiting for the dust to settle and clear "best practices" to emerge. I am sure that these tools are here to stay but I am also confident they are not in their final form today. I've seen too many hype trains in my career to still be jumping on them at the first stop.
It's time to jump on the train. I'm a cranky, old, embedded SWE and Claude 4.5 is changing how I work. Before that I laughed off LLMs. They were trash. Claude still has issues, but damn, I think if I don't integrate it into my workflow I'll be out of work or relegated to work in QA or devops (where I'd likely be forced to use it).
No, it's not going to write all your code for you. Yes, your skills are still needed to design, debug, and perform teamwork (selling your designs, building consensus, etc.). But it's time to get on the train.
I'm a SWE and also an art director. I have tried these tools and, the way I've also tried Vue and React, I think they're good enough for simple-minded applications. It's worth the penny to try them and look through the binoculars, if only to see how unoriginal and creatively limited the work most people in your field are doing actually is, if they find this something that saves them time.
You don’t have to jump on the hype train to get anything out of it. I started using claude code about 4 months back and I find it really hard to imagine developing without now. Sure I’m more of a manager, but the tedious busywork, the most annoying part of programming, is entirely gone. I love it.
The tools have reached the point where no special knowledge is required to get started. You can get going in 5 minutes. Try Claude Code with an API key (no subscription required). Run it in the terminal in a repo and ask how something works. Then ask it to make a straightforward but tedious change. Etc.
Just download Gemini (no API key) and use it.
I'm surprised these pockets of job security still exist.
Know this: someone is coming after this already.
One day someone from management will hear about a cost-saving story at a dinner table; the words GPT, Cursor, Antigravity, reasoning, AGI will cause a buzzing in their ear. Waking up with tinnitus the next morning, they'll instantly schedule a 1:1 to discuss "the degree of AI use and automation"
> Know this: someone is coming after this already.
Yesterday, GitHub Copilot declared that my less-AI-wary friend's new Laravel project was following all industry best practices for database design, even as it stored entities as denormalized JSON blobs in a MySQL 8.x database with no FKs, indexes, or constraints and all-NULL columns (and using root@mysql as the login, of course); while all the Laravel controller actions' DB queries were RBAR loops that loaded all rows into memory before doing JSON deserialisation in order to filter rows.
I can’t reconcile your attitude with my own personal lived experience of LLMs being utterly wrong 40% of the time; while 50% of the time being no better or faster than if I did things myself; another 5% of the time it gets stuck in a loop debating the existence of the seahorse emoji; and the last 5% of the time genuinely utterly scaring me with a profoundly accurate answer or solution that it produced instantly.
Also, LLMs have yet to demonstrate an ability to tackle other real-world DBA problems… like physically installing a new SSD into the SAN unit in the rack.
Why would you wait for the dust to settle? Just curious. Productivity gains are real in the current form of LLMs. Guardrails and best practices can be learnt and self-imposed.
> Productivity gains are real in the current form of LLMs
I haven't found that to be true
I'm of the opinion that anyone who is impressed by the code these things produce is a hack
I’m in the same position, but I use AI to get a second opinion. Try it using the proper models, like Gemini 3 Pro, which was just released, and include grounding. Don’t use the free models; you’ll be surprised at how valuable it can be.
How could you not at least try?
I think the question is whether those ai tools make you produce more value. Anecdotally, the ai tools have changed the workflow and allowed me to produce more tools etc.
They have not necessarily changed the rate at which I produce valuable outputs (yet).
Can you say more about this? What do you mean when you say 'more tools' is not the same as 'valuable outputs'?
There are a thousand "nuisance" problems which matter to me and me alone. AI allows me to bang these out faster, and put nice UIs on it. When I'm making an internal tool - there really is no reason not to put a high quality UX on top. The high quality UX, or existence of a tool that only I use does not mean my value went up - just that I can do work that my boss would otherwise tell me not to do.
personal increase in satisfaction (such as "work that my boss would otherwise tell me not to do") is valuable - even if only to you.
The fact is, value is produced when something can be produced at a fraction of the resources required previously, as long as the cost is borne by the person receiving the end result.
Under this definition, could any tool at all be considered to produce more value?
no - this is a lesson an engineer learns early on. The time spent making the tool may still dwarf the time savings you gain from the tool. I may make tools for problems that only ever occurred or will occur once. That single incident may have occurred before I made the tool.
This also makes it harder to prioritize work in an organization. If work is perceived as "cheap" then it's easy to demand teams prioritize features that will simply never be used. Or to polish single user experiences far beyond what is necessary.
One thing I learned from this is to disregard all attempts at prioritizing based on the output's expected value for the users/business.
We prioritize now based on time complexity and, omg, it changes everything: if we have 10 easy bugfixes and one giant feature to do (random bad-faith example), we do 5 bugfixes and half the feature within a month and have an enormous satisfaction output from the users, who would never have accepted doing it that way in the first place. If we had listened, we would have done 75% of the feature and zero bug fixes and had angry users/clients whining that we did nothing all month...
The time spent on dev stuff absolutely matters, and churning quick stuff quickly provides more joy to the people who pay us. It's a delicate balance.
As for AI, for now, it just wastes our time. It always craps out half-correct stuff, so we optimized our time by refusing to use it, and we beat the teams that do use it that way.
Does using the tools increase ROI?
> IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.
the jury is still out on that...
Yeah, I'll gladly AI-gen code, but I still write docs by hand. Have yet to see one good AI generated doc, they're all garbage.
I'm curious if you could share something about custom agents. I love Claude Code and I'm trying to get it into more places in my workflow, so ideas like that would probably be useful.
I've been using Google ADK to create custom agents (fantastic SDK).
With subagents and A2A generally, you should be able to hook any of them into your preferred agentic interface
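To make the ADK suggestion above a bit more concrete, here is a minimal sketch of a custom agent using the google-adk Python package. The agent name, instruction, tool function, and model identifier are all made up for illustration; swap in whatever your internal tooling actually exposes.

```python
# Minimal custom-agent sketch with Google ADK (assumes `pip install google-adk`).
# The tool below is hypothetical; in practice it would wrap an internal API.
from google.adk.agents import Agent


def lookup_ticket(ticket_id: str) -> dict:
    """Hypothetical internal tool: return a summary of a ticket by its ID."""
    return {"id": ticket_id, "status": "open", "summary": "Example ticket"}


root_agent = Agent(
    name="ticket_helper",            # hypothetical agent name
    model="gemini-2.0-flash",        # assumed model identifier
    description="Answers questions about internal tickets.",
    instruction="Use the lookup_ticket tool to answer questions about tickets.",
    tools=[lookup_ticket],
)
```

From there, the ADK's dev server (or an A2A endpoint, if your setup exposes one) is what you would point your preferred agentic interface at.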
I’m struggling to see how somebody who’s looking for inspiration in using agents in their coding workflow would glean any value from this comment.
I think that's also because Claude Code (and LLMs in general) is built by engineers who think of their target audience as engineers; they can only think of the world through their own lenses.
Kind of how for the longest time, Google used to be best at finding solutions to programming problems and programming documentation: say, a Google built by librarians would have a totally different slant.
Perhaps that's why designers don't see it yet, no designers have built Claude's 'world-view'.
If you read a little further in the article, the main point is _not_ that AI is useless, but rather that it is not AGI god-building but a regular technology. A valuable one, but not infinite growth.
> not AGI god-building but a regular technology. A valuable one, but not infinite growth.
AGI is a lot of things, a lot of ever moving targets, but it's never (under any sane definition) "infinite growth". That's already ASI territory / singularity and all that stuff. I see more and more people mixing the two, and arguing against ASI being a thing, when talking about AGI. "Human level competences" is AGI. Super-human, ever improving, infinite growth - that's ASI.
If and when we reach AGI is left for everyone to decide. I sometimes like to think about it this way: how many decades would you have to go back, and ask people from that time if what we have today is "AGI".
Sam Altman has been drumming[1] the ASI drum for a while now. I don't think it's a stretch to say that this is the vision he is selling.
[1] - https://ia.samaltman.com/#:~:text=we%20will%20have-,superint...
Once you have AGI, you can presumably automate AI R&D, and it seems to me that the recursive self-improvement that begets ASI isn't that far away from that point.
We already have AGI - it's called humans - and frankly it's no magic bullet for AI progress.
Meta just laid 600 of them off.
All this talk of AGI, ASI, super-intelligence, and recursive self-improvement etc is just undefined masturbatory pipe dreams.
For now it's all about LLMs and agents, and you will not see anything fundamentally new until this approach has been accepted as having reached the point of diminishing returns.
The snake oil salesmen will soon tell you that they've cracked continual learning, but it'll just be memory, and still won't be the AI intern that learns on the job.
Maybe in 5 years we'll see "AlphaThought" that does a better job of reasoning.
Humans aren't really being put to work upgrading the underlying design of their own brains, though. And 5 years is a blink of an eye. My five-year-old will barely even be turning ten years old by then.
All I see it doing, as a SWE, is limiting the speed at which my co-workers learn and worsening the quality of their output. Finally many are noticing this and using it less...
Your bosses probably think it's worth it if the outcome is getting rid of the whole host of y'all and replacing you with AWS Elastic-SWE instances. Which is why it's imperative that you maximize AI usage.
So instead of firing and replacing me with AI my boss will pay me to use AI he would've used..?
No one's switching to AI cold turkey. Think of it as training your own, cheaper replacement. SWEs & their line managers develop & test AI workflows, while giving the bosses time to evaluate AI capabilities, then hopefully shrink the headcount as close to 0 as possible without shrinking profits. Right now, it's juniors who're getting squeezed.
Where are the products? This site and everywhere else around the internet, on X, LinkedIn and so on, is full of crazy claims, and I have yet to see a product that people need and that actually works. What I'm experiencing is a gigantic enshittification everywhere: Windows sucks, web apps are bloated, slow and uninteresting. Infrastructure goes down even with "memory safe Rust", burning millions and millions of compute for scaffolding stupid stuff. Such a disappointment.
I think ChatGPT itself is an epic product, and Cursor has insane growth and usage. I also think they are both over-hyped and have too high a valuation.
Citing AI software as the only examples of how AI benefits developing software, has a bit of a touch of self-help books describing how to attain success and fulfillment by taking the example of writing self-help books.
I don’t disagree that these are useful tools, by the way. I just haven’t seen any discernible uptick in general software quality and utility either, nor any economic uptick that should presumably follow from being able to develop software more efficiently.
I made 1500 USD speculating on NVidia earnings, that's economic uptick for me !
It doesn’t matter what you think. Where’s all the data proving that AI is actually valuable? All we have are anecdotes and promises.
ChatGPT is... a chat with some "augmentation" feature aka outputting rich html responses, nothing new except the generative side. Cursor is a VSCode fork with a custom model and a very good autocomplete integration. Again where are the products? Where the heck is Windows without the bloat that works reliably before becoming totally agentic? And therefore idiotic since it doesn't work reliably
I agree with everyone else, where is the Microsoft Office competitor created by 2 geeks in a garage with Claude Code? Where is the Exchange replacement created by a company of 20 people?
There are many really lucrative markets that need a fresh approach, and AI doesn't seem to have caused a huge explosion of new software created by upstarts.
Or am I missing something? Where are the consumer-facing software apps developed primarily with AI by smaller companies? I'm excluding big companies because in their case it's impossible to prove the productivity; they could be throwing more bodies at the problem and we'd never know.
> Office…Exchange
The challenge in competing with these products is not code. The challenge competing in lucrative markets that need a fresh approach is also generally not code. So I’m not sure that is a good metric to evaluate LLMs for code generation.
I think the point remains, if someone armed with Claude Code could whip out a feature complete clone of Microsoft Office over the weekend (and by all accounts, even a novice programmer could do this, because of the magnificent greatness of Claude), then why don't they just go ahead and do it? Maybe do a bunch of them: release one under GPL, one under MIT, one under BSD, and a few more sold as proprietary software. Wow, I mean, this should be trivial.
Cool. So we established that it's not code alone that's needed, it's something else. This means that the people who already had that something else can now bootstrap the coding part much faster than ever before, spend less time looking for capable people, and truly focus on that other part.
So where are they?
We're not asking to evaluate LLM's for code. We're asking to evaluate them as product generators or improvers.
It's not that they failed to compete on other metrics, it's that they don't even have a product to fail to sell.
We had upstarts in the 80s, the 90s, the 2000s and the 2010s. Some game, some website, some social network, some mobile app that blew up. We had many. Not funded by billions.
So, where is that in the 2020s?
Yes, code is a detail (ideas too). It's a platform. It positions itself as the new thing. Does that platform allow upstarts? Or does it consolidate power?
Fine, where's the slop then? I expected hundreds of scammy apps to show up imitating larger competitors to get a few bucks, but those aren't happening either. At least not any more than before AI.
This. Design tends to explore a latent space that isn't well documented. There is no Stack Overflow or Github for design. The closest we have are open sourced design systems like Material Design, and portfolio sites like Behance. These are not legible reference implementations for most use cases.
If LLMs only disrupt software engineering and content slop, the economy is going to undergo rapid changes. Every car wash will have a forward deployed engineer maintaining their mobile app, website, backend, and LLM-augmented customer service. That happens even if LLMs plateau in six months.
If you want to steal code, you can take it from GitHub and strip the license. That is what the Markov chains (https://arxiv.org/abs/2410.02724) do.
It's a code laundering machine. Software engineering has a higher number of people who have never created anything by themselves and have no issues with copyright infringement. Other professions still tend to take a broader view. Even unproductive people in other professions may have compunctions about stealing other people's work.
Did you read the essay? It never claimed that AI was useless, nor was the ultimate point of the article even about AI's utility—it was about the political and monetary power shifts it has enabled and their concomitant risks, along with the risks the technology might impose for society.
This ignorance or failure to address these aspects of the issue and solely focus on its utility in a vacuum is precisely the blinkered perspective that will enable the consolidations of power the essay is worried about...the people pushing this stuff are overjoyed that so few people seem to be paying any attention to the more significant shifts they are enacting (as the article states, land purchase, political/capital power accumulation, reduction of workforces and operating costs and labor power... the list goes on)
> IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.
How are we building _for_ ourselves when we literally automate away our jobs? This is probably one of the _worst_ things someone could do to me.
Software engineers have been automating our own work since we built the first assembler. So far it's just made us more productive and valuable, because the demand for software has been effectively unlimited.
Maybe that will continue with AI, or maybe our long-standing habit will finally turn against us.
> Software engineers have been automating our own work since we built the first assembler.
The declared goal of AI is to automate software engineering entirely. This is in no way comparable to building an assembler. So the question is mostly about whether or not this goal will be achieved.
Still, nobody is building these systems _for_ me. They're building them to replace me, because my living is too much for them to pay.
Automating away software engineering entirely is nothing new. It goes all the way back to BASIC and COBOL, and later visual programming tools, Microsoft Access, etc. There have been innumerable attempts to somehow get by without needing those pedantic and difficult programmers and all their annoying questions and nit-picking.
But here's the thing: the hard part of programming was never really syntax, it was about having the clarity of thought and conceptual precision to build a system that normal humans find useful despite the fact they will never have the patience to understand let alone debug failures. Modern AI tools are just the next step to abstracting away syntax as a gatekeeper function, but the need for precise systemic thinking is as glaringly necessary as ever.
I won't say AI will never get there—it already surpasses human programmers in much of the mechanical and rote knowledge of programming language arcana—but it still is orders of magnitude away from being able to produce a useful system when specified by someone who does not think like a programmer. Perhaps it will get there. But I think the barrier at that point will be the age-old human need to have a throat to choke when things go sideways. Those in power know how to control and manipulate humans through well-understood incentives, and this applies all the way to the highest levels of leadership. No matter how smart or competent AI is, you can't just drop it into those scenarios. Business leaders can't replace human accountability with an SLA from OpenAI; it just doesn't work. Never say never I suppose, but I'd be willing to bet the wheels come off modern civilization long before the skillset of senior software engineers becomes obsolete.
"The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe."
This is the crux of the OP's argument, adding in that (in the meantime) the incumbents and/or bad actors will use it as a path to intensify their political and economic power.
But to me the article fails to:
(1) actually make the case that AI's not going to be 'valuable enough', which is a sweeping and bold claim (especially in light of its speed), and
(2) quantify AI's true value versus the crazy overhyped valuation, which is admittedly hard to do - but matters if we're talking 10% or 100x overvalued.
If all of my direct evidence (from my own work and life) is that AI is absolutely transformative and multiplies my output substantially, AND I see that that trend seems to be continuing - then it's going to be a hard argument for me to agree with #1 just because image generation isn't great (and OP really cares about that).
Higher Ed is in crisis; VC has bet their entire asset class on AI; non-trivial amounts of code are being written by AI at every startup; tech co's are paying crazy amounts for top AI talents... in other words, just because it can't one-shot some complex visual design workflow does not mean (a) it's limited in its potential, or (b) that we fully understand how valuable it will become given the rate of change.
As for #2 - well, that's the whole rub isn't it? Knowing how much something is overvalued or undervalued is the whole game. If you believe it's waaaay overvalued with only a limited time before the music stop, then go make your fortune! "The Big Short 2: The AI Boogaloo".
>If you believe it's waaaay overvalued with only a limited time before the music stop, then go make your fortune! "The Big Short 2: The AI Boogaloo".
The market can remain irrational longer than you can remain solvent.
> frees you up from a lot of little trivial distractions.
I think one huge issue in my life has been: getting started
If AI helps with this, I think it is worth it.
Even if getting started is incorrect, it sparks outrage and an "I'll fix this" momentum.
> If AI helps with this, I think it is worth it.
Worth what? I probably agree, the greenfield rote mechanical tasks of putting together something like a basic interface, somewhat thorough unit tests, or a basic state container that maps to a complicated typed endpoint are things I'd procrastinate on or would otherwise drain my energy before I get started.
But that real tangible value does need to have an agreeable *price* and *cost* depending on the context. For me, that price ceiling depends on how often and to what extent it's able to contribute to generating maximum overall value, but in terms of personal economic value (the proportion of my fixed time I'm spending on which work), if it's on an upward trend of practical utility, that means I'm actually increasing the proportion of dull tasks I'm spending my time on... potentially.
Kind of like how having a car makes it so comfortable and easy and ostensibly fast to get somewhere for an individual—theoretically freeing up time to do all kinds of other activities—that some people justify endless amounts of debt to acquire them, allowing the parameters of where they're willing to live to shift further and further to the point where nearly all of their free time, energy, and money is spent on driving, all of their kids depend on driving, and society accepts it as an unavoidable necessity; all the deaths, environmental damage, side-effects of decreased physical activity and increased stress along for the ride. Likewise how various chat platforms tried to make communication so friction-less that I actually now want to exchange messages with people far less than ever before, effectively a foot gun
Maybe America is once again demolishing its cities so they can plow through a freeway, and before we know it, every city will be Dallas, and every road will be like commuting between San Jose to anywhere else—metaphorically of course, but also literally in the case of infrastructure build— when will it be too late to realize that we should have just accepted the tiny bit of hardship of walking to the grocery store.
------
All of that might be a bit excessive lol, but I guess we'll find out
I'm in the sciences, but at my first college I took a programming course for science majors. We were partnered up for an end of semester project. I didn't quite know how to get started, but my partner came to me with a bunch of pieces of the project and it was easy to put them together and then tinker with them to make it work.
Perhaps a human coworker or colleague would help?
I think AI is “worth it” in that sense as long as it stays free :D
Nothing is free, especially not AI, which accounted for 92% of U.S. GDP growth in the first half of 2025.
If? Shouldn't you know by now whether AI does or doesn't help with that? ;D
An agentic git interface might be nice, though hallucinations seem like they could create a really messy problem. Still, you could just roll back in that case, I suppose. Anyways, it would be nice to tell it where I'm trying to get to and let it figure out how to get there.
Lots of things might be nice when the expenditure accounts for 92% of GDP growth.
What I am finding is that the size of the "small" use case is becoming larger and larger as time goes by and the models improve.
And bug fixes
"This lump of code is producing this behaviour when I don't want to"
Is a quick way to find/fix bugs (IME)
BUT it requires me to understand the response (sometimes the AI hits the nail on the head; sometimes it says something that makes my brain go "that's not it, but now I know exactly what it is").
Honestly one the best use cases I've found for it is creating configs. It used to be that I was able to spend a week fiddling around with, say, nvim settings. Now I tell an LLM what I want and it basically gives it to me without having to do trial and error, or locating some obscure comment from 2005 that tells me what I need to know.
Depends what you're doing.
If it's a less trodden path expect it to hallucinate some settings.
Also a regular thing I see is that it adds some random other settings without comment and then when you ask it about them it goes, whoops, yeah, those aren't necessary.
This seems pretty close to my own view, although I'm not sure the secret power grab is about land and water so much as just getting into everyone's brains. What I'd add is that it's not just a front for consolidation of resources and power, it's also a result of a preexisting consolidation of resources and power that we've ignored for too long. If we had had a healthier society 5 or 10 years ago we could have withstood this. Now, it's not so clear.
The safety regulations for every new technology are written in blood. Every time. AI won’t be any different, we’ll push it as hard & fast as our tolerance for human suffering allows.
The casualty is functional literacy, and at least half of the murder was handing smartphones out to children and then putting our lives on the web (social)
But AI is certainly going to be the death knell
Very good article. With regards to the guy being a designer, he is IMHO still correct with regards to layouts. Currently LLMs are still pretty clueless about layouts. Also, SWE is much more than coding. Even in the coding area there is much more room for improvement. The idea that you would not need software engineers soon is brain dead.
LLMs raise the floor for confidence for near/offshoring in the executive class. That’s the actual sell.
Edit: in the context of SWE at least
Good article. Now would anyone tell me how do I short AI?
I'd like more people to talk about AI and surveillance. I think that is going to be one of its biggest impacts on society(ies).
We are a decade or two into having massive video coverage, such that you are probably on someone's camera much of your day in the world, and video feeds that are increasingly cloud hosted.
But nobody could possibly watch all that video. Even for cameras specifically controlled by the police, the volume had already outstripped the ability to have humans monitoring it. At best you could refer to it when you had reason to think there'd be something on it, and even that was hugely expensive in human time.
Enter AI. "Find where Joe Schmoe was at 3:30pm yesterday and show me the video" "Give me a written summary of all the cars which crossed into the city from east to west yesterday afternoon." "Give me the names of everyone who entered the convenience store at 2323 Monument St last week." "Give me a written summary of Sue Brown's known activities in November."
The total surveillance society is coming.
I think it will be the biggest impact AI has on society in retrospect. I, for one, am not looking forward to it.
> I'd like more people to talk about AI and surveillance. I think that is going to be one of its biggest impacts on society(ies).
We lost that fight when literally no one fought back against LPR (license plate recognition). LPR cameras were later enabled for facial rec. That data is actually super easy to trace. No LLMs necessary.
Funny story: in my city, when we moved to ticketless public transport, a few people were worried about surveillance. "Police won't have access to the data," they said. The first request for data from the police came < 7 days into the system's operation, and an arrest was made on that basis. It's basically impossible to travel near, by any means, any major metro, and not be tracked and deanonymised later.
Now if you have no understanding of history or politics, this might not shock you. But I find it hard to imagine a popular uprising, even a peaceable one, being effective in this environment.
Actually LLMs introducing a compounding 3% error in reviewing and collating this data might be the best thing to ever happen.
I think you're describing technology that has existed for 15+ years and is already pretty accurate. It's not even necessarily "AI"/ML. For example, I think OpenALPR (automated license plate recognition) is all "classical" computer vision. The most accurate facial/gait/etc. recognition is most likely ML-based with a state-of-the-art model, admittedly, and perhaps the threshold of accuracy for large-scale usefulness was only crossed recently.
The guard rails IMHO are not technological but who owns the cameras/video storage backend, when/if a warrant is needed, and the criteria for granting one.
The difference is that AI makes annotating/combing through all that data much more feasible.
What about surveillance? Lately I've been feeling that is what it's really for. Because our data can be queried in a much more powerful way when it has all been used to train LLMs.
- Companies had to go in that direction. You cannot just fall behind AI gold rush
- These solutions are easier for people, and therefore will win in the long run
- these solutions benefit companies because of the surveillance data they have access to now. They always had some data, but now they collect and process even more
- those who control AI will be the kings of the future, so naturally everyone will be running toward this goal
AI was good for this before LLMs. LLMs would only introduce compounding errors to the dataset (I hope they do it)
A bit of sarcasm, but I think it's porn.
It’s at least about stimulating you to give richer data. Which isn’t quite porn.
The AI race is presumably won by whomever can automate AI R&D first, thus everyone who is in an adjacent field will see the incremental benefits sooner than those further away. The further removed, the harder the takeoff once it happens.
This notion of a hard takeoff, or singularity, based on self-improving AI, is based on the implicit assumption that what's holding AI progress back is lack of AI researchers/developers, which is false.
Ideas are a penny a dozen - the bottleneck is the money/compute to test them at scale.
What exactly is the scenario you are imagining where more developers at a company like OpenAI (or maybe Meta, which has just laid off 600 of them) would accelerate progress?
It's not hard to believe that adding AI researchers to an AI company marginally increases the rate of progress, otherwise why would the companies be clamouring for talent with eye-watering salaries? In any case, I'm not just talking about AI researchers—AGI will not only help with algorithmic efficiency improvements, but will probably make spinning up chip fabs that much easier.
The eye-watering salary you probably have in mind is for a manager at Meta, the same company that just laid off 600 actual developers. Why just Meta, not other companies? Because they are blaming poor Llama performance on the manager, it seems.
Algorithmic efficiency improvements are being made all the time, and will only serve to reduce inference cost, which is already happening. This isn't going to accelerate AI advance. It just makes ChatGPT more profitable.
Why would human level AGI help spin up chip fabs faster, when we already have actual humans who know how to spin them up, and the bottleneck is raising the billions of dollars to build them?
All of these hard take-off fantasies seem to come down to: We get human-level AGI, then magic happens, and we get hard take-off. Why isn't the magic happening when we already have real live humans on the job?
There's no path from LLMs to AGI.
> spinning up chip fabs that much easier
AI already accounts for 92% of U.S. GDP growth. This is a path to disaster.
Agreed.
To me the hard take off won't happen until a humanoid robot can assemble another humanoid robot from parts, as well as slot in anywhere in the supply chain where a human would be required to make those parts.
Once you have that you functionally have a self-replicating machine which can then also build more data centers or semi fabs.
The use case for AI is spam.
It's the reverse printing press, drowning all purposeful human communication in noise.
Another major use case for it is enabling students to more easily cheat on their homework. Which is why it is probably going to end up putting Chegg out of business.
I am shocked when I talk to college kids about AI these days.
I try to explain stuff to them like regurgitating the training data, context window limits, and confabulation.
They stick their fingers in their ears and say "LA LA LA LA it does my homework for me nothing else matters LA LA LA LA i can't hear you"
They really do not care about the Turing Test. Today's LLMs pass the "snowed my teaching assistant test" and nothing else matters.
Academic fraud really is the killer app for this technology. At least if you're a 19-year-old.
Maybe AI will finally skewer the myth that an undergraduate degree means anything.
Many people use AI as the source for knowledge. Even though it is often wrong or misleading, its advice is better on average than their own judgement or the judgement of people they know. When an AI is "smarter" than 95%(?) of the population, even if it does not reach superintelligence, it will be a very big deal.
This means to me AI is rocket fuel for our post-truth reality.
Post-truth is a big deal and it was already happening pre-AI. AGI, post-scarcity, post-humanity are nerd snipes.
Post-truth on the other hand is just a mundane and nasty sociologically problem that we ran head-first into and we don't know how to deal with. I don't have any answers. Seems like it'll get worse before it gets better.
What "gets better"? Rapid global warming will lead to societal collapse this century.
How would you define post-truth? It's not like people haven't been spouting incorrect facts or total bs since forever.
The 95th percentile IQ is 125, which is about average in my circle. (Several of my friends are verified triple nines.)
How is this different from a less reliable search engine?
AI can interpolate in the space of search results, yielding results in between the hits that a simple text index would return.
It is also a fuzzy index with the unique ability to match on multiple poorly specified axes at once in a very high-dimensional search space. This is notoriously difficult to code with traditional computer science techniques. Large language models are in some sense optimal at it, instead of "just a little bit better than a total failure", which is what we had before.
Just today I needed to find a library I only vaguely remembered from years ago. Gemini found it in seconds based on the loosest description of what it does.
That is a technology that is getting difficult to distinguish from magic.
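A minimal sketch of that "fuzzy index" behavior, assuming the sentence-transformers package and a small local embedding model; the documents and the vague query are made up for illustration.

```python
# Toy semantic search: embed a few documents and a half-remembered query,
# then rank by cosine similarity (a dot product on normalized vectors).
# Assumes `pip install sentence-transformers`.
from sentence_transformers import SentenceTransformer

docs = [
    "A C library for parsing ELF binaries",
    "A Python package that diffs JSON documents structurally",
    "A Rust crate for incremental computation graphs",
]
query = "that tool I half-remember that compares two JSON blobs and tells me what changed"

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed model choice
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ query_vec
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.3f}  {doc}")
```

The query shares little exact wording with the best match, which is the "match on poorly specified axes" property described above; an LLM with search grounding layers generation on top of this kind of retrieval.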
1. Yes, capitalism. 2. Just waiting for the bubble to pop, when investors wake up to only Nvidia making money, and all that money will flow somewhere else.
> To think that with enough compute we can code consciousness is like thinking that with enough rainbows one of them will have a pot of gold at its end.
What does consciousness have to do with AGI or the point(s) the article is trying to make? This is a distraction imo.
It’s a funny analogy, because what’s missing for the rainbows with pots of gold is magic and fairytales…so what’s missing for consciousness is also magic and fairytales? I’ve yet to see any compelling argument for believing enough compute wouldn’t allow us to code consciousness.
Yes, that's just it though, it's a logic argument. "Tell me why we aren't just stochastic parrots!" is more logically sound than "God made us", but that doesn't de facto make it "the correct model of reality".
I am skeptical that the world can be modeled linearly. That physical reality is non-linear is also more logically sound, so why is there such a clear straight line from compute to consciousness?
Consciousness is a physical phenomenon; rainbows, their ends, and pots of gold at them are not.
> it’s a useful technology that is very likely overhyped to the point of catastrophe
I wish more AI skeptics would take this position but no, it's imperative to claim that it's completely useless.
I've had *very* much the opposite experience. Very nearly every AI skeptic take I read has exactly this opinion, if not always so well-articulated (until the last section, which lost me). But counterarguments always attack the complete strawman of "AI is utterly useless," which very few people, at least within the confines of the tech and business commentariat, are making.
Maybe I'm focusing too much in the hardliners but I see it everywhere, especially in tech.
If you’re talking about forums and social media, or anything attention-driven, then the prevalence of hyperbole is normal.
Where’s all the data showing productivity increases from AI adoption? If AI is so useful, it shouldn’t be hard to prove it.
Measuring productivity in software development, or even white collar jobs in general, let alone the specific productivity gains of even things like the introduction of digital technology and the internet at all, let alone stuff like static vs dynamic types, or the productivity difference of various user interface modalities, is notoriously extremely difficult. Why would we expect to be able to do it here?
https://en.wikipedia.org/wiki/Productivity_paradox
https://danluu.com/keyboard-v-mouse/
https://danluu.com/empirical-pl/
https://facetation.blogspot.com/2015/03/white-collar-product...
https://newsletter.getdx.com/p/difficult-to-measure
I found the last section to be the most exciting part of the article. It describes a conspiracy around AI development that is not about the AI itself, but about the power a few individuals will gain by building data centers that rival the size, power, and water consumption of small cities, and which will then be used to gain political power.
> It can take enormous amounts of time to replicate existing imagery with prompt engineering, only to have your tool of choice hiccup every now and again or just not get some specific aspect of what a person had created previously.
Yes... I don't think the current process of using a diffusion model to generate an image is the way to go. We need AI that integrates fully within existing image and design tools, so it can do things like rendering SVG, generating layers and manipulating them, the same as we would with the tool, rather than one-shot generating the full image via diffusion.
Same with code -- right now, so much AI code gen and modification, as well as code understanding, is done via raw LLM. But we have great static analysis tools available (ie what IDES do to understand code). LLMs that have access to those tools will be more precise and efficient.
It's going to take time to integrate LLMs properly with tools. And train LLMs to use the tools the best way. Until we get there, the potential is still more limited. But I think the potential is there.
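As a sketch of the "LLM with access to static analysis tools" idea: Python's standard-library ast module can answer structural questions about code exactly, and an agent can call a function like this as a tool instead of re-deriving the structure from raw text. Only the tool itself is shown here; the function-calling wiring and the example source are illustrative assumptions, not any particular product's API.

```python
# A tiny static-analysis "tool" an agent could call: list every function
# definition in a module with its line number and argument names.
import ast


def list_functions(source: str) -> list[dict]:
    """Return name, line number, and argument names for each function definition."""
    tree = ast.parse(source)
    found = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            found.append({
                "name": node.name,
                "lineno": node.lineno,
                "args": [a.arg for a in node.args.args],
            })
    return found


example = """
def load_config(path):
    ...

async def fetch_user(session, user_id):
    ...
"""
print(list_functions(example))
```

Handing the model this kind of structured answer is cheaper and more reliable than pasting the whole file into the context window and hoping it reads it correctly.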
I think this is the best part of the essay:
It says absolutely nothing about anything. It's like 10 fearmongering tweets in a blender.
It doesn’t matter that they murdered software engineering and destroyed tens of thousands of careers; once it bursts it will be an "oops" and on to the next hype.
I like to reduce things to absurdity to put them into perspective.
The hype of AI is to sell illusion to naive people.
It is like creating a hammer that nails by itself... like cars that choose the path by themselves.
So stop thinking AI is intelligent... it is merely an advanced tool that demands skill and creativity like any other. Its output is limited to the ability of its user.
The worry should be the amount of resources devoted to vanity (hammers put into newborns' hands) or the nails in the wrong place (viral fake content targeted at unaware people).
Like in the Industrial Revolution people got reduced to screw tighteners, minds will be reduced to bad prompters expecting wonders and producing bad content or the same. A step back in civilization, except for the money makers and thinkers, until the AI revolution gives birth to its Karl Marx.
Ever heard of a nail gun?
Interesting perspective
In the past every company had strict controls to prevent source code from being leaked to a third party.
And yet here we are.
Could have been, or rather, they thought it would, but the open models from China that you can just run locally are changing the game. Distributing power instead of concentrating it.
"A datacenter takes years to construct."
Not for Elon, apparently.
See https://en.wikipedia.org/wiki/Colossus_(supercomputer)
A small catch
> Using an existing space rather than building one from the ground up allowed the company to begin working on the computer immediately.
It's pretty clear that the financialization aspect of AI is a bubble. There's way too much market cap created by trading debt back and forth. How well AI will work remains an open question at this point.
It's a big number - but still less than tech industry profits.
That is true, but not evenly distributed. Oracle for example: https://arstechnica.com/information-technology/2025/11/oracl...
Also, it may be true that these companies theoretically have the cash flow to cover the spending, but that doesn't mean that they will be comfortable with that risk, especially as that risk becomes more likely in some kind of mass extinction event amongst AI startups. To concretize that a bit, the remote possibility of having to give up all your profits for 2 years to pay off DC investment is fine at a 1% chance of happening, but maybe not so OK at a 40% chance.
My new thing with articles like these: just search for the word "water".
I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water. Datacenters that outburn cities to keep the data churning are big, expensive, and have to be built somewhere. The deals made to develop this kind of property are political — they affect cities and states more than just about any other business run within their borders.
> I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water.
If you just wanted land, water, and electricity, you could buy them directly instead of buying $100 million of computer hardware bundled with $2 million worth of land and water rights. Why are high end GPUs selling in record numbers if AI is just a cover story for the acquisition of land, electricity, and water?
But with this play they can inflate their company holdings and cash out in new rounds. It’s the ultimate self enrichment scheme! Nobody wants that crappy piece of land but now it’s got GPUs and we can leverage that into a loan for more GPUs and cash out along the way.
Valid question. What the OP talks about though is that these things were not for sale normally. My takeaway from his essay is that a few oligarchs get a pass to take over all energy, by means of a manufactured crisis.
He could have explained that better. Try not to look at the media drama the political actors give you each day, but look at the agenda the real powers laid bare: Trump is threatening an oil-rich neighbor with war, with a complete, expensive-as-hell army blowing up 'drug boats' (so they claim) to help the press sell it as a war on drugs. Yeah, right.
- Green energy projects, even running ones, get cancelled. Energy from oil and nuclear are both capital intensive and at the same time completely out-shined by solar and battery tech. So the energy card is a strong one to direct policy towards your interests.
If you can turn the USA into a resource economy like Russia, then you can rule like a Russian oligarch. That is also why the admin sees no problem in destroying academia or other industries via tariffs; controlling resources is easier and more predictable than having to rely on an educated populace that might start to doubt the promise of the American Dream.
I did not think about it that way, but it makes perfect sense. And it is really scary. It hasn't even been a year since Trump's second term started. We still have three more years left.
Because then you can buy calls on the GPU companies
I believe it’s a bubble. Every app interface is becoming similar to ChatGPT, claiming they’ll “help you automate,” while drifting away from the app’s original purpose.
Most of this feels like people trying to get rich off VC money — and VCs trying to get rich off someone else’s money.
Best case is hardly a bubble. I definitely think this is a new paradigm that'll lead to something, even if the current iteration won't be the final version and we've probably overinvested a slight bit.
The author thinks that the bubble is a given (and doesn’t have to spell doom), and the best case is that there isn’t anything worse in addition.
Same as the dot-com bubble. Fundamentals were wildly off for some businesses, but you can also find almost every business that failed then running successfully today. Personally I don't think sticking AI in every software is where the real value is, it's improving understanding of huge sets of data already out there. Maybe OpenAI challenges Google for search, maybe they fail, I'm still pretty sure the infrastructure is going to get used because the amount of data we collect and try to extract value from isn't going anywhere.
Something notable: Pets.com is literally Chewy, just 20 years earlier.
> There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have.
> Again, I think that AI is probably just a normal technology, riding a normal hype wave. And here’s where I nurse a particular conspiracy theory: I think the makers of AI know that.
I think those committing billions towards AI know it too. It's not a conspiracy theory. All the talk about AGI is marketing fluff that makes for good quotes. All the investment in data centers and GPUs is for regular AI. It doesn't need AGI to justify it.
I don't know if there's a bubble. Nobody knows. But what if it turns out that normal AI (not AGI) will ultimately provide so much value over the next couple decades that all the data centers being built will be used to max capacity and we need to build even more? A lot of people think the current level of investment is entirely economically rational, without any requirement for AGI at all. Maybe it's overshooting, maybe it's undershooting, but that's just regular resource usage modeling. It's not dependent on "coding consciousness" as the author describes.
Let's take the highest perspective possible:
What is the value of a technology which allows people to communicate clearly with other people of any language? That is what these large language models have achieved. We can now translate pretty much perfectly between all the languages in the world. The curse of the Tower of Babel has been lifted.
There will be a time in the future, when people will not be able to comprehend that you couldn't exchange information regardless of personal language skills.
So what is the value of that? Economically, culturally, politically, spiritually?
Language is a lot deeper than that. It's like if I say "we speak the same language", it means a lot more than just the ability to translate. It's talking about a shared past and worldview and hopefully future which I/we intend to invest in.
Then are you better off by not being able to communicate anything?
You could make the same argument about video conferencing: Yes, you can now talk to anyone anywhere anytime, and it's amazing. But somehow all big companies are convinced that in-person office work is more productive.
Which languages couldn't we translate before? Not you, the individual. We, humanity?
Machine translation was horrible and completely unreliable before LLMs. And human translators are very expensive and slow in comparison.
LLMs are to translation what computers were to calculating. Sure, you could do without them before. They used to have entire buildings with office workers whose job it was to compute.
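For concreteness, here is a minimal sketch of what LLM-based translation looks like in practice, assuming the `openai` Python client and an OpenAI-compatible endpoint; the model name, prompt, and example sentence are illustrative, not anything the commenters used.

```python
# Minimal sketch: translation via a chat-style LLM API.
# Assumes the `openai` Python client and an OPENAI_API_KEY in the environment;
# model name and prompt wording are illustrative choices, not prescriptions.
from openai import OpenAI

client = OpenAI()

def translate(text: str, target_language: str = "English") -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do here
        messages=[
            {"role": "system",
             "content": f"Translate the user's message into {target_language}. "
                        "Preserve tone and meaning; output only the translation."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # keep output stable for a translation task
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Hypothetical usage: German in, English out.
    print(translate("Maschinelle Übersetzung war früher unzuverlässig.", "English"))
```

The point is only that a general chat model, given a plain instruction, now serves as the translation engine; no language-pair-specific system needs to be built.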
Google translate worked great long before LLMs.
The only reason to think that is not knowing when Google switched to using LLMs. The radical change is well documented.
I disagree. It worked passably and was better than no translation. The depth, correctness, and nuance is much better with LLMs.
LLMs are not the only "AI".
I don't think you understand how off that statement is. It's also pretty ignorant considering Google Translate barely worked at all for many languages. So no, it didn't work great and even for the best possible language pair Google Translate is not in the same ballpark.
Not really long before, although I suppose it's relative. Google Translate was pretty garbage until around 2016-2017, and then it started really improving.
It really didn't. There were many languages which it couldn't handle at all, just making completely garbled output. It wasn't possible to use Google Translate professionally.
We could communicate with people before LLMs just fine though? We have hand gestures, some people learn multiple languages, and Google Translate was pretty solid. I got by just fine in countries where I didn’t know the language because hand gestures work or someone speaks English.
What is the value of losing our uniqueness to a computer that lies and makes us all talk the same?
Incredible that we happen to be alive at the exact moment humanity peaked in its interlingual communication. With Google Translate and hand gestures there is no need to evolve it any further.
You can maybe order in a restaurant or ask the way with hand gestures. But surely you must be able to take a higher perspective than your own, and realize that there are enormous amounts of exchange between nations with differing languages, and all of this relies on some form of translation. Hundreds of millions of people all over the world have to deal with language barriers.
Google Translate was far from solid; the quality of translations was so bad before LLMs that it simply wasn't an option for most languages. It would sometimes even translate numbers incorrectly.
LLMs are here and Google Translate is still bad (surely, if it were as easy as just plugging the miraculous, perfect LLMs into it, it would be perfect now?). I don't think people who believe we've somehow solved translation actually understand how much it still handles extremely poorly.
And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)
What argument are you making? LLM translating is available to anybody to try and use right now, and you can use services like Kagi Translate or DeepL to see the evidence for yourself that they make excellent translations. I honestly don't care what Google Translate does, because nobody who is serious about translation uses it.
> And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)
The kind of deeply understood communication you are demanding is usually impossible even between people who have the same native tongue, from the same town, and even within the same family. And people can misunderstand each other just fine without the help of AI. However, is it better to understand nothing at all than to not understand every nuance?
No it isn't, lol.
>I’m more than open to being wrong;
Doubtful.
>That’s quite a contradiction. A datacenter takes years to construct. How will today’s plans ever enable a company like OpenAI to catch up with what they already claim is a computational deficit that demands more datacenters?
It's difficult to steelman such a weird argument. If a deficit can't be remedied immediately, it should never be remedied?
This is literally how capex works. You purchase capacity now, based on receiving it, and the rewards of having it, in the future.
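A toy sketch of that logic, with entirely hypothetical numbers: pay for the capacity up front, discount the later rewards back to today, and check whether the net present value is positive.

```python
# Toy capex sketch with made-up numbers: buy capacity now,
# collect the rewards of having it over later years.
# A positive NPV is the standard justification for building ahead of demand.
def npv(cash_flows, discount_rate):
    """Net present value of cash_flows[t] received at the end of year t."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: build the datacenter; years 1-5: hypothetical net revenue from the capacity.
flows = [-1_000, 150, 300, 400, 400, 350]   # in millions, purely illustrative
print(f"NPV at 10%: {npv(flows, 0.10):.0f}M")  # ~175M positive -> worth buying now
```

None of these figures come from the thread; the only point is that spending today against returns that arrive years later is ordinary investment arithmetic, not a logic hole.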
>And yet, these deals are made. There’s a logic hole here that’s easily filled by the possibility that AI is a fitting front for consolidation of resources and power.
No, you just made some stuff up, and then suggested that your own self-inflicted confusion might be better explained with some other stuff you made up.
>Globalism eroded borders by crossing them, this new thing — this Privatism — erodes them from within.
What? It's called Capitalism. You don't need a new word for it every 12 months. Emotive words like "erosion" say nothing; they're just aimed at, like, stirring people up. Demonstrate the erosion.
>Remember, datacenters are built on large pieces of land, drawing more heavily from existing infrastructure and natural resources than they give back to the immediately surrounding community
How did you calculate this? Show your work. Pretty sure that if someone made EQ SY1, SY2, and SY3 disappear, the local community, the distant community, and communities all over the planet would be negatively affected.
>When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style
To take the overwrought, disproportionate, emotive language out of this:
"How are private entities allowed to build big things I don't like, including power sources I don't like?"
The answer is that many people are allowed to do things you don't approve of. This is normal. This is society. Not everything needs the approval of the blogerati. Such a world would be horrific.
>when the infrastructure that powers AI becomes more valuable than the AI itself, when the people who control that infrastructure hold more sway over policy and resources than elected governments.
Show your working. How are the infrastructure providers going to run the government? I believe that, historically, big infrastructure projects tend to die, require some government inducements, and then go away. People had similar misgivings about the railroads in the US; in fact, it was a big bugbear for Henry George, I believe. Is Amtrak secretly pulling the strings of the US Deep State? If the US Government is weak to private interests, that's up to the good burghers of Yankistan to correct at the polls. If electoral politics don't work, then other means that seppos find scary might be required. Freaking out about AI investment seems like a weird place to suddenly be concerned about this.
See also: AT&T Long Lines, hydroelectric dams, nuclear energy, submarine cable infrastructure. If political power comes from owning infrastructure, we should be more worried about, like, Hurricane Electric. It's demonstrable that people who build big infra don't run the planet. Heck, Richest Man and Weird Person Darling Elon Musk doesn't honestly command much infrastructure; he mostly just lives on hype and speculation.
>but I’m really just following the money and the power to their logical conclusion.
The more you need to invoke a "logical conclusion", the less genuine and logical the piece reads.
>Maybe AI will do everything humans do. Maybe it will usher in a new society defined by something other than the balancing of labor units and wealth units. Maybe AGI — these days defined as a general intelligence that exceeds human kind in all contexts — will emerge and “justify” all of this. Maybe.
Probably things will continue on as they always have, but the planet will have more datacenter capacity. Likely, if the AI bubble does burst, datacenter capacity will be cheaper.
>The market concentration and incestuous investment shell game is real.
Yes? And that will probably explode, and we will see AI investors jumping out of buildings. Nvidia is in a position right now to underwrite big AI datacentre loans, which could completely offset the huge gains they have made. What about it? Again, you demonstrate nothing.
>The infrastructure is real. The land deals are real.
Yes. Remember to put 2 truths before your lie.
>The resulting shifts in power are real.
So far they exist in your mind.
>we will find ourselves citizens of a very new kind of place that no longer feels like home.
Reminds me of an old argument that a raving white supremacist used to push on me: that "justice", as he defined it, was that society not change so old people won't be scared by it. That having a new (possibly browner) person running the local store was tantamount to, and justification for, genocide.
Change is a constant. That change making you sad is not in and of itself a bad thing. Please adjust accordingly.
AI is not overhyped. It's like saying going to the moon is overhyped.
First of all, this AI stuff is next level. It's as great as, if not greater than, going to space or going to the moon.
Second, the rate at which it is improving makes it such that the hype is relevant and realistic.
I think two things are throwing people off. First, people are just overexposed to AI, and the overexposure is causing people to feel AI is boring and useless slop. Investments in AI are heavy, but the people who throw that money around are a minority; overall, the general public is actually UNDER-hyping AI. Look at everyone on this thread. Everyone, and I mean everyone, isn't overly optimistic about AI. Instead, the irony is that everyone, and I mean everyone again, strangely thinks the world is overhyping AI, and they are wrong. This thread, and practically every thread on HN, is a microcosm of the world, and the sentiment is decidedly against AI. Think about it like this: if Elon Musk invented a car that cost $1 and this car could travel at FTL speeds to anywhere in the universe, then interstellar travel would be routine and boring within a year. People would call it overhyped.
Second, the investment and money spent on AI is definitely overhyped. Right? Think about it. If we quantify the utility and achievement of what AI can currently do and what it's projected to achieve, the math works out. If you quantify the profitability of AI, the math suddenly doesn't work out.
Seems like an apt comparison; it was a massive money sink and a regular person gained absolutely nothing from the moon landing; it's just the big organizations (NASA, the US government) that got the bragging rights.
The best AI is the one that is hidden, silent, and ubiquitous: it works and you feel it's not there. Apple devices, and really many modern devices before the LLM hype era, had a lot of AI we didn't know about. Today, if I read that a product has AI, I feel let down, because most of the time it is a poorly integrated chatbot that, if you're willing to spend some time, will sooner or later impersonate Adolf Hitler and, who knows, maybe leak sensitive data or API metadata. The bubble needs to burst so we can go back to silently packing products with useful AI features without telling the world.
Seamless OCR from every iOS photo and screenshot has been magical in utility, reliability and usability.
This is what I wonder too: what is the end game? Advance technology so that we can have anything we want, whenever we want it. Fly to distant galaxies. Increase the options available to us and our offspring. But ultimately, what will we gain from that? Is it to say that we did it, or is it for the pleasure of the process? If it's for pleasure, then why have we made our processes so miserable for everyone involved? If it's to say that we did it, couldn't we not and say that we did? That's the whole point of fantasy. Is Elon using AI to supplement his own lack of imagination?
I could be wrong, this could be nonsense. I just can't make sense of it.
> Fly to distant galaxies
Unless AI can change the laws of physics, extremely unlikely.
I see, Fly was perhaps the wrong word to use here. Phase-Shift to new galaxies is probably the right term. Where you change your entire system's resonant frequency, to match what exists in the distant galaxy. Less of transportation, and more of a change of focus.
Like the way we can daydream about a galaxy, then snap-back to work. It's the same mechanism, but with enhanced focus you go from not just visualising > feeling > embodying > grounding in the new location.
We do it all the time, however because we require belief that it's possible in order to maintain our location, whenever we question where we are - we're pulled back into the reality that questions things (it's a very Earth centric way of seeing reality)
You missed the point ... going to distant galaxies is physically impossible.
> Where you change your entire system's resonant frequency, to match what exists in the distant galaxy.
This collection of words does not describe a physical reality.
Any favorite movies or TV episodes on the above themes?
If things were left to their own devices, the end game would be a civilization like Stroggos: the remaining humans will choose to fuse with machines, as it would give them an advantage. The first tactical step will be to nudge people to give up more and more agency to AI companions. I doubt this future will materialise, though.
There are some flavors of AI doomerism that I'm unwilling to fight - the proliferation of AI slop, the inability of our current capital paradigm to adjust such that loads of people don't become overnight-poor, those sorts of things.
If you tell me, though, that "We installed AI in a place that wasn't designed around it and it didn't work" you're essentially complaining that your horse-drawn cart broke when you hooked it up to your HEMI. Of course it didn't work. The value proposition built around the concept of long dev cycles with huge teams and multiple-9s reliability deliverables is not what this stuff excels at.
I have churned out perfectly functional MVPs for tens of projects in a matter of weeks. I've created robust frameworks with >90% test coverage for fringe projects that would never have otherwise gotten the time budget allotted to them. The boundaries of what can be done aren't being pushed up higher or down deeper, they're being pushed out laterally. This is very good in a distributed sense, but not so great for business as usual - we've had megacorps consolidating and building vertically forever and we've forgotten what it was like to have a robust hacker culture with loads of scrappy teams forging unbeaten paths.
Ironically, VCs have completely missed the point in trying to all build pickaxes - there's a ton of mining to do in this new space (but the risk profile makes the finance-pilled queasy). We need both.
AI is already very good at some things, they just don't look like the things people were expecting.
The coming of AI seems like one of those things, like the agricultural revolution or the industrial revolution, that is kind of inevitable once it starts. All the business of who pays how much for which stock, what price is sensible, and which algorithm wins seems kind of secondary.