Not a day goes by that a fellow engineer doesn't text me a screenshot of something stupid an AI did in their codebase. But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
The catch about the "guided" piece is that it requires an already-good engineer. I work with engineers around the world and the skill level varies a lot - AI has not been able to bridge the gap. I am generalizing, but I can see how AI can 10x the work of the typical engineer working in startups in California. Even your comment about curiosity highlights this. It's the beginning of an even more K-shaped engineering workforce.
Even people who were previously not great engineers - if they are curious and always enjoyed the learning part, they are now supercharged to learn new ways of building, and they can try things out and learn from their mistakes at an accelerated pace.
Unfortunately, this group, the curious ones, IMHO is a minority.
I am solidly in this "curious" camp. I've read HN for the past 15(?) years. I dropped out of CS and got an art degree instead. My career is elsewhere, but along the way, understanding systems was a hobby.
I always kind of wanted to stop everything else and learn "real engineering," but I didn't. Instead, I just read hundreds (thousands?) of arcane articles about enterprise software architecture, programming language design, compiler optimization, and open source politics in my free time.
There are many bits of tacit knowledge I don't have. I know I don't have them, because I have that knowledge in other domains. I know that I don't know what I don't know about being a "real engineer."
But I also know what taste is. I know what questions to ask. I know the magic words, and where to look for answers.
For people like me, this feels like an insane golden age. I have no shortage of ideas, and now the only thing I have is a shortage of hands, eyes, and on a good week, tokens.
You think you know what taste is. Have you been cranking on real systems all these years, or have you been on the sidelines armchairing the theoretics? I'm not trying to come across as rude, but some of that may be unavoidable when indirect criticism becomes involved.

A laboring engineer has precious little choice in the type of systems available to work on. Fundamentally, it's all going to be some variant of a system that makes money for someone else somehow, or a system that burns money but ensures necessary work gets done somehow. That's it. That's the extent of the optimization function as defined by capitalism. Taste falls by the wayside; what matters these days is whether you are in the context of the optimizers who count, because they sit at the center of the capital centralization machine, making the primary decisions about where capital gets allocated. So you make what they want or you don't get paid. As an Arts person, you should understand that no matter how sublime the piece is to the artist, a rumbling belly is all that currently awaits you if your taste does not align with the holders of the fattest purses.

I'm not speaking from a place of contempt here; I have a Philosophy background, and I'm reaching out as one individual of the Humanities to another. We've lost sight of the "why we do things" and let ourselves become enslaved by the balance sheets. The economy was supposed to serve the people; it's now the other way around. All we do is feed more bodies to the wood chipper. Until we wake up from that, not even desperate hope in the matter of taste will save us. We'll just keep following the capital gradient until we end up selling the world out from under ourselves, because it's the only thing we have left, and the only buyers are the usual suspects.
I know it's not anyone's fault exactly, but the current state of systems in general is an absolute shit show. If you care about what you do, I'd expect you to be cheering that we just might have an opportunity for a renaissance.
Moreover, this kind of thinking is incredibly backward. If you were better than me then, you can easily become much better than I'll ever be in the future.
The K-shaped workforce point is sharp and I think you're right. The curious ones are a minority, but they've always been the ones who moved things forward. AI just made the gap more visible :)
Your Codex case study with the content creators is fascinating. A PhD in Biology and a masters in writing building internal tools... that's exactly the kind of thing I meant by "you can learn anything now." I'm surrounded by PhDs and professors at my workplace and I'm genuinely positive about how things are progressing. These are people with deep domain expertise who can now build the tools they need. It's an interesting time. Please write that up...
Engineers will go back in and fix it when they notice a problem. Or find someone who can. AI will send happy little emoji while it continues to trash your codebase and brings it to a state of total unmaintainability.
But that's the problem. Something that can be so reliable at times, can also fail miserably at others. I've seen this in myself and colleagues of mine, where LLM use leads to faster burnout and higher cognitive load. You're not just coding anymore, you're thinking about what needs to be done, and then reviewing it as if someone else wrote the code.
LLMs are great for rapid prototyping, boilerplate, that kind of thing. I myself use them daily. But the amount of mistakes Claude makes is not negligible in my experience.
This is a fair point. The cognitive load is real. Reviewing AI output is a different kind of exhausting than writing code yourself.
Even when the output is "guided," I don't trust it. I still review every single line. Every statement. I need to understand what the hell is going on before it goes anywhere. That's non-negotiable. I think it gets better as you build tighter feedback loops and better testing around it, but I won't pretend it's effortless.
This is a fair observation, and I think it actually reinforces the argument. The burnout you're describing comes from treating AI output as "your code that happens to need review." It's not. It's a hypothesis. Once you reframe it that way, the workflow shifts: you invest more in tests, validation scenarios, acceptance criteria, clear specs. Less time writing code, more time defining what correct looks like. That's not extra work on top of engineering. That is the engineering now. The teams I've seen adapt best are the ones that made this shift explicit: the deliverable isn't the code, it's the proof that the code is right.
One issue is that developers have been trained for the past few decades to look for solutions to problems online by just dumping a few relevant keywords into Google. But to get the most out of AI you should really be prompting as if you were writing a formal letter to the British throne explaining the background of your request. Basic English writing skills, and the ability to formulate your thoughts in a clear manner, have become essential skills for engineering (and something many developers simply lack).
I agree on the curiosity part. I have a non-CS background but learned to program just out of curiosity. This led me to build production applications which companies actually use, and this was before the AI era.
Now, with AI I feel like I have an assistant engineer with me who can help me build exciting things.
I'm currently teaching a group of very curious non-technical content creators at one of the firms I consult at. I set up Codex for them, created the repo to have lots of hand-holding built in - and they took off. It's been 4 weeks and we already have 3 internal tools deployed, one of which eliminated the busy work of another team so much that they now have twice the capacity. These are all things 'real' engineers and product managers could have done, but just empowering people to solve their own problems is way faster. Today, several of them came to me and asked me to explain what APIs are (they want to use the Google Workspace APIs for something).
I wrote out a list of topics/keywords to ask AI about and teach themselves. I've already set up the integration in an example app I will give them, and I literally have no idea what they are going to build next, but I'm... thrilled. Today was the first moment I realized, maybe these are the junior engineers of the future. The fact that they have nontechnical backgrounds is a huge bonus - one has a PhD in Biology, one a masters in writing - they bring so much to the process that a typical engineering team lacks. Thinking of writing up this case study/experience because it's been a highlight of my career.
Maybe. The reality of software engineering is that there's a lot of mediocre developers on the market and a lot of mediocre code being written; that's part of the industry, and the jobs of engineers working with other engineers and/or LLMs is that of quality control, through e.g. static analysis, code reviews, teaching, studying, etc.
And those mediocre engineers put their work online, as do top-tier developers. In fact, I would say that the scale is likely tilted towards mediocre engineers putting more stuff online than really good ones.
So statistically speaking, when the "AI" consumes all of that as its training data and returns the most likely answer when prompted, what percentage of developers will it be better than?
Doubt it's sustainable. These big models keep improving at a fast pace, and any progress like this made in a niche would likely get caught up to very quickly.
The claim is "most engineers", not "most engineers we've hired".
But also "most engineers" aren't very good. AIs know tricks that the average "I write code for my dayjob" person doesn't know or frankly won't bother to learn.
Even speaking from a pure statistical perspective, it is quite literally impossible for "AI" that outputs the world's-most-average-answer to be better than "most engineers".
In fact, it's pretty easy to conclude what percentage of engineers it's better than: all it does is consume as much data as possible and return the statistically most probable answer, therefore it's gonna be better than roughly 50% of engineers. Maybe you can claim it's better than 60% of engineers because bottom-of-the-barrel engineers tend not to publish their work online for it to be used as training data, but for every one of those you have a bunch of non-engineers who don't do this for a living putting their shitty attempts at getting stuff done with code online. So I'm actually gonna correct myself immediately and say that it's about 40%.
The same goes for every other output: it's gonna make the world's most average article, the most average song in a genre and so on. You can nudge it to be slightly better than the average with great effort, but no, you absolutely cannot make it better than most.
The kid can learn and become better over time, while "AI" can only be retrained using better training data.
I'm not against using AI by any means, but I know what to use it for: stuff where I can only do worse than half the population because I can't be bothered to learn it properly. I don't want to toot my own horn, but I'd say I'm definitely better at my niche than 50% of the people. There are plenty of other niches where I'm not.
The thing that separates AI Agents from normal programmers is that agents don't get bored or tired.
For most engineers the ability might be there, but the motivation or willingness to write, for example, 20 different test cases checking that the 3-line bug you just fixed is fixed FOR SURE usually isn't there. You add maybe 1-2 tests because they're annoying boilerplate crap to write, and create the PR. CI passes, you added new tests, someone will approve it. (Yes, your specific company is of course better than this and requires rigorous testing, but the vast majority isn't. Most don't even add the two tests as long as the issue is fixed.)
An AI Agent will happily and without complaining use Red/Green TDD on the issue, create the 20 tests first, make sure they fail (as they should), fix the issue and then again check that all tests pass. And it'll do it in 30 minutes while you do something else.
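To make that concrete, here's the shape of it with a made-up example (a hypothetical `clamp()` with a boundary bug; none of this is from a real codebase). The agent writes the table of cases first, watches them fail against the old code, then makes them green:

```python
# Hypothetical "3-line fix": clamp() used to return the wrong bound
# when the value sat exactly on an edge of the range. (Made up.)
def clamp(value, lo, hi):
    """Restrict value to the inclusive range [lo, hi]."""
    return max(lo, min(value, hi))

# The battery an agent grinds out without complaint: written first
# (red), verified to fail against the old code, then made green.
def test_clamp():
    cases = [
        (5, 0, 10, 5),         # in range: unchanged
        (-1, 0, 10, 0),        # below range: pinned to lo
        (11, 0, 10, 10),       # above range: pinned to hi
        (0, 0, 10, 0),         # exactly at lo (the old bug)
        (10, 0, 10, 10),       # exactly at hi
        (7, 7, 7, 7),          # degenerate range lo == hi
        (-5, -10, -1, -5),     # all-negative range
        (0.5, 0.0, 1.0, 0.5),  # floats
    ]
    for value, lo, hi, expected in cases:
        assert clamp(value, lo, hi) == expected

test_clamp()
```

Eight cases instead of twenty, but you get the idea: the marginal cost of each extra case is near zero for the agent and very much nonzero for you.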
>But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
Are you serious? I've been hearing this constantly since mid-2025.
The gaslighting over AI is really something else.
I've also never seen jobs advertised before whose purpose was to lobby skeptical engineers about how to engage in technical work. This is entirely new. There is a priesthood developing around this.
I wrote code by hand for 20 years. Now I use AI for nearly all code. I just can’t compete in speed and thoroughness. As the post says, you must guide the AI still. But if you think you can continue working without AI in a competitive industry, I am absolutely sure you will eventually have a very bad time.
I certainly know engineers for which this is true but unfortunately they were never particularly thorough or fast to begin with.
I believe you can tell which way the wind is blowing by looking at open source.
Other than being flooded with PRs, high-profile projects have not seen a notable difference - certainly no accelerated enhancements. There has definitely been an explosion of new projects, though, most of dubious quality.
They will never admit it, but many are scared of losing their jobs.
This threat, while not yet realized, is very real from a strictly economic perspective.
AI or not, any tool that improves productivity can lead to workforce reduction.
Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.
You have a choice: fire 5 people, or produce 2,000 loaves per month. But does the city really need that many loaves?
To make matters worse, all your competitors also have the same semi-automatic ovens...
A bit simplistic. The bakery can just expand its product range or do various other things to add work. In fact that's exactly what I would expect to happen at a tech company, ceteris paribus.
This is what I find interesting - the response from most companies is "we will need fewer engineers because of AI", not "we can build more things because of AI".
What is driving companies to want to get rid of people, rather than do more? Is it just short-term investor-driven thinking?
How much more productive are we supposed to be in engineering? Are we 10x'ing our testing capability at the same time? QA is already a massive bottleneck at my $DAYJOB. I'm not sure what benefits the company at-large derives from having the typing machine type faster.
I think it's an excuse to do needed lay offs without saying as much. So yes, preserving signals, essentially. I've never met a tech company that didn't love expanding work to fill capacity, even if the work is of little value.
> Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.
That is actually the case with a lot of bakeries these days. But the one major difference is that the baker can rely, with almost 100% certainty, on the form, shape, and ingredients being exact to a rounding error. Each time. No matter how many times they use the oven. And they don't have to invent strategies for how to "best use the ovens"; they don't claim to "vibe-bake" 10x more than what they used to bake before, etc... The semi-automated ovens just effing work!
Now show me an LLM that even remotely provides this kind of experience.
On another note, if you have 100 engineers and you lay almost all of them off, keeping 5 super-AI-accelerated engineers, while your competitor keeps 50 of such engineers, your competitor is still able to iterate 10x as fast. So laying people off comes with the risk of falling behind.
Fair enough. I know how that reads. But when anyone with a laptop and a subscription can ship production software in a weekend, the architecture and the idea start to matter a lot more. The technical details in the post are real. I just can't share the what yet. Take it or leave it.
It’s kind of funny seeing all the AI hype guys talking about their 10 OpenClaw instances all running, doing work, and when you ask what it is, you can never get a straight answer...
For the record though, I love agentic coding. It deals with the accumulated cruft of software for me.
The issue is that you become lazy after a while and stop “leading the design”. And I think that’s ok because most of the code is just throwaway code.
You would rewrite your project/app several times by the time it’s worth it to pay attention to “proper” architecture. I wish I had these AIs 10 years ago so that I could focus on everything I wanted to build instead to become a framework developer/engineer.
I agree. I've gotten lazier over time too. But the cost of creating code is so cheap... it's now less important to be perfect the first time the code hits prod (application dependent). It can be rewritten from scratch in no time. The bar for 'maintainability' is a lot lower now, because the AI has more capacity and persistence to maintain terrible code.
I'm sure plenty of people disagree with me. But I'm a good hand programmer, and I just don't feel the need to do that any more. I got into this to build things for other people, and AI is letting me do that more efficiently. Yes, I've had to give up a puritan approach to code quality.
> But guided? The models can write better code than most developers. That’s the part people don’t want to sit with. When guided.
Where do you draw the line between just enough guidance vs. too much hand-holding of an agent? At some point, wouldn't it be better to just do it yourself and be done with the project (while also building your muscle memory, experience, and the mental model for future projects, just like tons of regular devs have done in the past)?
I'm not asking an agent to build me a full-stack app. That's where you end up babysitting it like a kindergartener and honestly you'd be faster doing it yourself. The way I use agents is focused, context-driven, one small task at a time.
For example: I need a function that takes a dependency graph, topologically sorts it, and returns the affected nodes when a given node changes. That's well-scoped. The agent writes it, I review it, done.
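A sketch of what I'd expect back for that prompt (plain Python, Kahn's algorithm; the function name and the edge representation are my own assumptions, not from any real codebase):

```python
from collections import defaultdict, deque

def affected_nodes(edges, changed):
    """edges is a list of (dependency, dependent) pairs.
    Returns every node downstream of `changed`, in topological
    order, i.e. the order you'd need to rebuild them in."""
    graph = defaultdict(set)
    indegree = defaultdict(int)
    nodes = set()
    for dep, dependent in edges:
        nodes.update((dep, dependent))
        if dependent not in graph[dep]:
            graph[dep].add(dependent)
            indegree[dependent] += 1

    # Kahn's algorithm over the whole graph (sorted for stable output).
    queue = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in sorted(graph[n]):
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("dependency graph has a cycle")

    # Everything reachable from `changed` is affected by the change.
    affected = set()
    stack = [changed]
    while stack:
        for m in graph[stack.pop()]:
            if m not in affected:
                affected.add(m)
                stack.append(m)
    return [n for n in order if n in affected]
```

Easy to eyeball against a toy graph before trusting it: `affected_nodes([("a", "b"), ("b", "c")], "a")` gives `["b", "c"]`. That's what makes it a good agent task - the output is cheap to verify.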
But say I'm debugging a connection pool leak in Postgres where connections aren't being released back under load because a transaction is left open inside a retry loop. I'm not handing that to an agent. I already know our system. I know which service is misbehaving, I know the ORM layer, I know where the connection lifecycle is managed. The context needed to guide the agent properly would take longer to write than just opening the code and tracing it myself.
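For illustration only, a toy version of that failure mode (made-up names, no real driver, pool, or ORM API): the missing rollback in the except branch is the whole bug.

```python
class Conn:
    def __init__(self):
        self.in_txn = False  # True while a transaction is open

class Pool:
    """Toy fixed-size pool: a conn stuck mid-transaction never comes back."""
    def __init__(self, size):
        self.free = [Conn() for _ in range(size)]
    def getconn(self):
        if not self.free:
            raise RuntimeError("pool exhausted")
        return self.free.pop()
    def putconn(self, conn):
        if not conn.in_txn:       # a conn left mid-transaction is leaked
            self.free.append(conn)

def flaky_update(conn, attempt):
    conn.in_txn = True            # BEGIN
    if attempt < 2:
        raise TimeoutError        # dies before COMMIT or ROLLBACK
    conn.in_txn = False           # COMMIT

def update_with_retries(pool):
    for attempt in range(3):
        conn = pool.getconn()     # fresh conn per attempt
        try:
            flaky_update(conn, attempt)
            pool.putconn(conn)
            return
        except TimeoutError:
            pool.putconn(conn)    # no rollback first, so the conn leaks
```

Run it against a pool of 5 and the free list shrinks by two connections per call until getconn() starts throwing "pool exhausted". Rolling the transaction back (here, `conn.in_txn = False`) before putconn() in the except branch makes the leak disappear.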
That's the line. If the context you'd need to provide is larger than the task itself, just do it. If the task is well-defined and the output is easy to verify, let the agent rip.
The muscle memory point is real though. I still hand-write code when I'm learning something new or exploring a space I don't understand yet. AI is terrible for building intuition in unfamiliar territory because you can't evaluate output you don't understand. But for mundane scaffolding, boilerplate, things that repeat? I don't. Life's too short to hand-write your 50th REST handler.
Very much on the same page as the author, I think AI is a phenomenal accelerant.
If you're going in the right direction, acceleration is very useful. It rewards those who know what they're doing, certainly. What's maybe being left out is that, over a large enough distribution, it's going to accelerate people who are accidentally going in the right direction, too.
Maybe to the people writing the invoices for the infra you're renting, sure. Or to the people who get paid to dig you out of the consequences you inevitably bring about. Remember, the faster the timescale, the worse we are wired to effectively handle it as human beings. We're playing with a fire that catches and spreads so fast, by the time anyone realizes the forest is catching and starting to react, the entire forest is already well on the way to joining in the blaze.
> We're playing with a fire that catches and spreads so fast, by the time anyone realizes the forest is catching and starting to react, the entire forest is already well on the way to joining in the blaze.
I suspect this has been said in one form or another since the discovery of fire itself.
The only way I see out of this crisis (yes I'm not on the token-using side of this) is strict liability for companies making software products (just like in the physical world). Then it doesn't matter if the token-generator spits out code or a software engineer spits out code - the company's incentives are aligned such that if something breaks it's on them to fix it and sort out any externalities caused. This will probably mean no vibe-coded side hustles but I personally am OK with that.
> The problem is: you can’t justify this throughput to someone who doesn’t understand real software engineering. They see the output and think “well the AI did it.” No. The AI executed it. I designed it. I knew what to ask for, how to decompose the problem, what patterns to use, when the model was going off track, and how to correct it. That’s not prompting. That’s engineering.
That’s the “money quote,” for me. Often, I’m the one that causes the problem, because of errors in prompting. Sometimes, the AI catches it, sometimes, it goes into the ditch, and I need to call for a tow.
The big deal, is that I can considerably “up my game,” and get a lot done, alone. The velocity is kind of jaw-dropping.
I’m not [yet] at the level of the author, and tend to follow a more “synchronous” path, but I’m seeing similar results (and enjoying myself).
> Building systems that supervise AI agents, training models, wiring up pipelines where the AI does the heavy lifting and I do the thinking. Honestly? I’m having more fun than ever.
I'm sure some people are having fun that way.
But I'm also sure some people don't like to play with systems that produce fuzzy outputs and break in unexpected moments, even though overall they are a net win.
It's almost as if you're dealing with humans. Some people just prefer to sit in a room and think, and they now feel this is taken away from them.
I get this. I don't think either of you is wrong. There's a real loss in not writing something from scratch and feeling it come together under your hands. I'm not dismissing that.
I have immense respect for the senior engineers who came before me. They built the systems and the thinking that everything I do now sits on top of. I learned from people. Not from AI. The engineers who reviewed my terrible pull requests, the ones who sat with me and explained why my approach was wrong. That's irreplaceable. The article is about where I think things are going, not about what everyone should enjoy.
This essay somehow sounds worse than AI slop, like ChatGPT did a line of coke before writing this out.
I use AI everyday for coding. But if someone so obviously puts this little effort into their work that they put out into the world, I don’t think I trust them to do it properly when they’re writing code.
I agree wholeheartedly with all that is said in this article. When guided, AI amplifies the productivity of experts immensely.
There are two problems left, though.
One is, laypersons don't understand the difference between "guided" and "vibe coded". This shouldn't matter, but it does, because in most organizations managers are laypersons who don't know anything about coding whatsoever, aren't interested by the topic at all, and think developers are interchangeable.
The other problem is, how do you develop those instincts when you're starting up, now that AI is a better junior coder than most junior coders? This is something one needs to think about hard as a society. We old farts are going to be fine, but we're eventually going to die (retire first, if we're lucky; then die).
What comes after? How do we produce experts in the age of AI?
This is the question I keep coming back to. I don't have a clean answer yet.
The foundation I built came from years of writing bad code and understanding why it was bad. I look at code I wrote 10 years ago and it's genuinely terrible. But that's the point. It took time, feedback, reading books, reviewing other people's work, failing, and slowly building the instinct for what good looks like. That process can't be skipped.
If AI shortens the path to output, educators have to double down on the fundamentals. Data structures, systems thinking, understanding why things break. Not because everyone needs to hand-write a linked list forever, but because without that foundation you can't tell when the AI is wrong. You can't course-correct what you don't understand.
Anyone can break into tech. That's a good thing. But if someone becomes a purely vibe-coding engineer with no depth, that's not on them. That's on the companies and institutions that didn't evaluate for the right things. We studied these fundamentals for a reason. That reason didn't go away just because the tools got better.
People always learn the things they need to learn.
Were people clutching their pearls about how programmers were going to lack the fundamentals of assembly language after compilers came along? Probably, but it turned out fine.
People who need to program in assembly language still do. People who need to touch low-level things probably understand some of it but not as deeply. Most of us never need to worry about it.
I don't think the comparison (that's often made) between AI and compilers is valid though.
A compiler is deterministic. It's a function; it transforms input into output and validates it in the process. If the input is incorrect it simply throws an error.
AI doesn't validate anything, and transforms a vague input into a vague output, in a non-deterministic way.
A compiler can be declared bug-free, at least in theory.
But it doesn't mean anything to say that the chain 'prompt-LLM-code' is or isn't "correct". It's undecidable.
>People always learn the things they need to learn.
No, they don't. Which is why a huge % of people are functionally illiterate at the moment, know nothing about finance and statistics, are making horrendous decisions for their future and their bottom line, and so on.
There is also such a thing as technical knowledge loss between generations.
I find it really sad how people are so stubborn about dismissing AI as a slop generator.
I completely agree with the author. Once you spend the time building a good enough harness, oh boy, you start getting those sweet gains. It takes a lot of time and effort, but it is absolutely worth it.
what about the environmental impact of AI, especially agentic AI? I keep reading praise for AI on the orange site, but its environmental impact is rarely discussed. It seems that everyone has already adopted this technology, which is destroying our world a little more.
If you read the book, my point should be crystal clear - that environmental impact which aligns with The Party goals (shareholder profits) the best, is painted the least concerning of all.
I believe the orange site's consensus was that it's approximately one additional mini fridge or dish washer worth of consumption on average. You've got users who use these tools barely 1k tokens per week. Assuming it's all batched ideally that's like running an LED floodlight for a minute or so. The other end of the spectrum can be pretty extreme in consumption but it's also rare. Most people just use the adhoc stuff.
The environmental impact of AI replacing a human programmer is orders of magnitude lower than the environmental impact of that programmer. Look up average US water consumption and CO2 emissions per capita.
And then add on top the environmental impact of all of the money that programmer gets from programming - travels around the world, buying large houses, ...
If you care about the environment, you should want AIs replacing humans at most jobs so that they can no longer afford traveling around the world and buying extravagant stuff.
Yes the environmental impact of an AI agent performing a given task is lower. However we will not simply replace every programmer with an agent: in the process we will use more agents exceeding the previous environmental impact of humans. This is the rebound effect [0].
Your reasoning could be effective if we bounded the computing resources usable by all AI in order to meet carbon reduction goals.
>The environmental impact of AI replacing a human programmer is orders of magnitude lower than the environmental impact of that programmer. Look up average US water consumption and CO2 emissions per capita.
The programmer will continue to exist as a consumer of those things even if they get replaced by AI in their job.
On top of that, sure, all the programmers AI is replacing are extravagantly traveling around the world (especially the ones in America who make the most dough, 90% of whom don't even have a passport).
The phrase "shape up or ship out" is an apt one I've heard. Agentic AI is a core part of software engineering. Either you are learning and using these tools, or you're not a professional and don't belong in the field.
Seems strange, for decades we allowed developers to use what made them comfortable, you like notepad? go ahead and use it. Don't want an LSP? that's fine disable it.
So long as their productivity was on par with the rest of the team there was no issue.
Suddenly, everyone needs to use this new tool (which we haven't proven to actually be effective) and if you don't you don't belong in the industry.
> So long as their productivity was on par with the rest of the team there was no issue.
Emphasis added. And anyway, for most software dev in most shops it wasn't true; most development takes place in whatever IDE the group/organization standardized on for the task, to make sure everyone gets proper tooling and to make collaboration and information sharing easier. Think of all the Java enterprise software developed by legions of drones in the 2000s and 2010s. They all used Eclipse, because Eclipse is what they were given.
It's only with the emergence of whiny, persnickety Unix devs who refused to leave the comforting embrace of their editor of choice that shops in the internet/dotcom/startup tradition embraced a "use whatever tools you want" philosophy. They had uncharacteristically enormous leverage over the tech stack being deployed in such businesses and could force employers to make that concession. And anyway, what some of them could do with vi blew the boss's mind.
It is true that we don't have a whole lot of hard data from large organizations that show AI productivity improvements. But absence of evidence is not evidence of absence. Turns out, most large organizations just haven't adopted AI in the amount and ways that could make a big impact.
But we have enough anecdata from competent developers to suggest that the productivity gains are huge. So big, AI not only lets you do your normal tasks many times faster, it puts projects within reach that you would not have countenanced before because they were too complex or tedious to be worth the payoff.
So no. Refusing to use AI is just pure bloodymindedness at this point—like insisting on using a keypunch while everyone around you discovers the virtues of CRT terminals and timesharing. There were people like this even in the 1970s when IBM finally came around and made timesharing available in their mainframes. Those people either got up to speed or moved on to a different profession. They couldn't keep working the way they'd been working because the productivity expectations changed with the availability of new technology.
Not a day goes by that a fellow engineer doesn't text me a screenshot of something stupid an AI did in their codebase. But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
The catch about the "guided" piece is that it requires an already-good engineer. I work with engineers around the world and the skill level varies a lot - AI has not been able to bridge the gap. I am generalizing, but I can see how AI can 10x the work of the typical engineer working in Startups in California. Even your comment about curiosity highlights this. It's the beginning of an even more K-shaped engineering workforce.
Even people who were previously not great engineers, if they are curious and always enjoyed the learning part - they are now supercharged to learn new ways of building, and they are able to try it out, learn from their mistakes at an accelerated pace.
Unfortunately, this group, the curious ones, IMHO is a minority.
I am solidly in this "curious" camp. I've read HN for the past 15(?) years. I dropped out of CS and got an art degree instead. My career is elsewhere, but along the way, understanding systems was a hobby.
I always kind of wanted to stop everything else and learn "real engineering," but I didn't. Instead, I just read hundreds (thousands?) of arcane articles about enterprise software architecture, programming language design, compiler optimization, and open source politics in my free time.
There are many bits of tacit knowledge I don't have. I know I don't have them, because I have that knowledge in other domains. I know that I don't know what I don't know about being a "real engineer."
But I also know what taste is. I know what questions to ask. I know the magic words, and where to look for answers.
For people like me, this feels like an insane golden age. I have no shortage of ideas, and now the only thing I have is a shortage of hands, eyes, and on a good week, tokens.
You think you know what taste is. Have you been cranking on real systems all these years, or have you been on the sidelines armchairing the theoretics? I'm not trying to come across as rude, but it may be unavoidable to some degree when indirect criticism becomes involved. A laboring engineer has precious little choice in the type of systems available to work on. Fundamentally, it's all going to be some variant of a system that makes money for someone else somehow, or a system that burns money but ensures necessary work gets done somehow. That's it. That's the extent of the optimization function as defined by capitalism. Taste falls by the wayside; all that matters these days is whether you are in the good graces of the optimizers who matter, because they sit at the center of the capital centralization machine making the primary decisions about where capital gets allocated. So you make what they want or you don't get paid. As an Arts person, you should understand that no matter how sublime the piece is to the artist, a rumbling belly is all that currently awaits you if your taste does not align with the holders of the fattest purses to lighten. I'm not speaking from a place of contempt here; I have a Philosophy background, and I'm reaching out as one individual of the Humanities to another. We've lost sight of the "why we do things" and let ourselves become enslaved by the balance sheets. The economy was supposed to serve the people; it's now the other way around. All we do is feed more bodies to the wood chipper. Until we wake up from that, not even the desperate hope in the matter of taste will save us. We'll just keep following the capital gradient until we end up selling the world from under ourselves, because it's the only thing we have left, and there are only the usual suspects as buyers.
Yet another wannabe systems engineer cheers the robbery and loss of job of real systems engineers.
Calling somebody a wannabe systems engineer is unnecessarily antagonistic.
I know it's not anyone's fault exactly, but the current state of systems in general is an absolute shit show. If you care about what you do, I'd expect you to be cheering that we just might have an opportunity for a renaissance.
Moreover, this kind of thinking is incredibly backward. If you were better than me then, you can easily become much better than I'll ever be in the future.
The K-shaped workforce point is sharp and I think you're right. The curious ones are a minority, but they've always been the ones who moved things forward. AI just made the gap more visible :)
Your Codex case study with the content creators is fascinating. A PhD in Biology and a masters in writing building internal tools... that's exactly the kind of thing I meant by "you can learn anything now." I'm surrounded by PhDs and professors at my workplace and I'm genuinely positive about how things are progressing. These are people with deep domain expertise who can now build the tools they need. It's an interesting time. Please write that up...
Engineers will go back in and fix it when they notice a problem. Or find someone who can. AI will send happy little emoji while it continues to trash your codebase and brings it to a state of total unmaintainability.
But that's the problem. Something that can be so reliable at times, can also fail miserably at others. I've seen this in myself and colleagues of mine, where LLM use leads to faster burnout and higher cognitive load. You're not just coding anymore, you're thinking about what needs to be done, and then reviewing it as if someone else wrote the code.
LLMs are great for rapid prototyping, boilerplate, that kind of thing. I myself use them daily. But the amount of mistakes Claude makes is not negligible in my experience.
This is a fair point. The cognitive load is real. Reviewing AI output is a different kind of exhausting than writing code yourself.
Even when the output is "guided," I don't trust it. I still review every single line. Every statement. I need to understand what the hell is going on before it goes anywhere. That's non-negotiable. I think it gets better as you build tighter feedback loops and better testing around it, but I won't pretend it's effortless.
This is a fair observation, and I think it actually reinforces the argument. The burnout you're describing comes from treating AI output as "your code that happens to need review." It's not. It's a hypothesis. Once you reframe it that way, the workflow shifts: you invest more in tests, validation scenarios, acceptance criteria, clear specs. Less time writing code, more time defining what correct looks like. That's not extra work on top of engineering. That is the engineering now. The teams I've seen adapt best are the ones that made this shift explicit: the deliverable isn't the code, it's the proof that the code is right.
One issue is that developers have been trained for the past few decades to look for solutions to problems online by just dumping a few relevant keywords into Google. But to get the most out of AI you should really be prompting as if you were writing a formal letter to the British throne explaining the background of your request. Basic English writing skills, and the ability to formulate your thoughts in a clear manner, have become essential skills for engineering (and something many developers simply lack).
> But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
Because the instances of this happening are a) random and b) vanishingly rare?
I agree on the curiosity part, I have a non CS background but I have learned to program just out of curiosity. This led me to build production applications which companies actually use and this is before the AI era.
Now, with AI I feel like I have an assistant engineer with me who can help me build exciting things.
I'm currently teaching a group of very curious non-technical content creators at one of the firms I consult at. I set up Codex for them, created the repo to have lots of hand-holding built in - and they took off. It's been 4 weeks and we already have 3 internal tools deployed, one of which eliminated the busy work of another team so much that they now have twice the capacity. These are all things 'real' engineers and product managers could have done, but just empowering people to solve their own problems is way faster. Today, several of them came to me and asked me to explain what APIs are (they want to use the Google Workspace APIs for something).
I wrote out a list of topics/key words to ask AI about and teach themselves. I've already set up the integration in an example app I will give them, and I literally have no idea what they are going to build next, but I'm... thrilled. Today was the first moment I realized, maybe these are the junior engineers of the future. The fact that they have nontechnical backgrounds is a huge bonus - one has a PhD in Biology, one a masters in writing - they bring so much to the process that a typical engineering team lacks. Thinking of writing up this case study/experience because it's been a highlight of my career.
Quite frankly, if AI can write better code than most of your engineers "hundreds of times", then your hiring team is doing something terribly wrong.
Maybe. The reality of software engineering is that there are a lot of mediocre developers on the market and a lot of mediocre code being written; that's part of the industry, and the job of engineers working with other engineers and/or LLMs is quality control, through e.g. static analysis, code reviews, teaching, studying, etc.
And those mediocre engineers put their work online, as do top-tier developers. In fact, I would say that the scale is likely tilted towards mediocre engineers putting more stuff online than really good ones.
So statistically speaking, when the "AI" consumes all of that as its training data and returns the most likely answer when prompted, what percentage of developers will it be better than?
In other words, there's probably a market for a model trained on a curated collection of high-quality code.
Doubt it's sustainable. These big models keep improving at a fast pace, and any progress like this made in a niche would likely get caught up to very quickly.
These people also prefer plastic averaged-out images of AI girls to real ones.
The Average is their top-tier.
The "most engineers" not "most engineers we've hired".
But also "most engineers" aren't very good. AIs know tricks that the average "I write code for my dayjob" person doesn't know or frankly won't bother to learn.
Even speaking from a pure statistical perspective, it is quite literally impossible for "AI" that outputs world's-most-average-answer to be better than "most engineers".
In fact, it's pretty easy to conclude what percentage of engineers it's better than: all it does is it consumes as much data as possible and returns the statistically most probable answer, therefore it's gonna be better than roughly 50% of engineers. Maybe you can claim that it's better than 60% of engineers because bottom-of-the-barrel engineers tend to not publish their works online for it to be used as training data, but for every one of those you have a bunch of non-engineers that don't do this for a living putting their shitty attempts at getting stuff done using code online, so I'm actually gonna correct myself immediately and say that it's about 40%.
The same goes for every other output: it's gonna make the world's most average article, the most average song in a genre and so on. You can nudge it to be slightly better than the average with great effort, but no, you absolutely cannot make it better than most.
This is kind of like saying a kid can never become a better programmer than the average of his teachers.
IMHO, the reasons not to use AI are social, not logical.
The kid can learn and become better over time, while "AI" can only be retrained using better training data.
I'm not against using AI by any means, but I know what to use it for: for stuff where I can only do a worse than half the population because I can't be bothered to learn it properly. I don't want to toot my own horn, but I'd say I'm definitely better at my niche than 50% of the people. There are plenty of other niches where I'm not.
The thing that separates AI Agents from normal programmers is that agents don't get bored or tired.
For most engineers the ability might be there, but the motivation or willingness to write, for example, 20 different test cases checking that the 3-line bug you just fixed is fixed FOR SURE usually isn't there. You add maybe 1-2 tests because they're annoying boilerplate crap to write and create the PR. CI passes, you added new tests, someone will approve it. (Yes, your specific company is of course better than this and requires rigorous testing, but the vast majority isn't. Most don't even add the two tests as long as the issue is fixed.)
An AI Agent will happily and without complaining use Red/Green TDD on the issue, create the 20 tests first, make sure they fail (as they should), fix the issue and then again check that all tests pass. And it'll do it in 30 minutes while you do something else.
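The red/green battery described above can be sketched with plain asserts. This is an illustrative example, not from the thread: `clamp` stands in for the "3-line fix," and the case list stands in for the tests an agent would happily grind out first.

```python
# Red/green TDD sketch: a battery of small cases pinning down a tiny fix.
# `clamp` is a hypothetical function; imagine the pre-fix version returned
# `hi` for values equal to `lo` because of an inverted comparison.

def clamp(value, lo, hi):
    """Clamp value into the inclusive range [lo, hi] (post-fix version)."""
    if value < lo:
        return lo
    if value > hi:
        return hi
    return value

# The agent-style test battery: edge cases a human would usually skip.
cases = [
    ((5, 0, 10), 5),      # in range
    ((0, 0, 10), 0),      # exactly at lower bound (the old bug)
    ((10, 0, 10), 10),    # exactly at upper bound
    ((-1, 0, 10), 0),     # below range
    ((11, 0, 10), 10),    # above range
    ((0, 0, 0), 0),       # degenerate range
    ((-5, -10, -1), -5),  # negative range
]

for args, expected in cases:
    assert clamp(*args) == expected, (args, expected)
print("all cases pass")
```

The point isn't that these seven asserts are hard to write; it's that an agent writes all of them, runs them against the broken version to watch them fail, then runs them again after the fix, without getting bored.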
>But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
Are you serious? I've been hearing this constantly since mid-2025.
The gaslighting over AI is really something else.
I've also never seen jobs advertised before whose purpose was to lobby skeptical engineers about how to engage in technical work. This is entirely new. There is a priesthood developing over this.
I wrote code by hand for 20 years. Now I use AI for nearly all code. I just can’t compete in speed and thoroughness. As the post says, you must guide the AI still. But if you think you can continue working without AI in a competitive industry, I am absolutely sure you will eventually have a very bad time.
>I just can’t compete in speed and thoroughness
I certainly know engineers for which this is true but unfortunately they were never particularly thorough or fast to begin with.
I believe you can tell which way the wind is blowing by looking at open source.
Other than being flooded with PRs, high-profile projects have not seen a notable difference - certainly no accelerated enhancements. There has definitely been an explosion of new projects, though, most of dubious quality.
Spikes and research are definitely cheaper now.
You've been hearing that since mid-2025 because that's when it became true.
They will never admit it, but many are scared of losing their jobs.
This threat, while not yet realized, is very real from a strictly economic perspective.
AI or not, any tool that improves productivity can lead to workforce reduction.
Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.
You have a choice: fire 5 people, or produce 2,000 loaves per month. But does the city really need that many loaves?
To make matters worse, all your competitors also have the same semi-automatic ovens...
A bit simplistic. The bakery can just expand its product range or do various other things to add work. In fact that's exactly what I would expect to happen at a tech company, ceteris paribus.
This is what I find interesting - the response from most companies is "we will need fewer engineers because of AI", not "we can build more things because of AI".
What is driving companies to want to get rid of people, rather than do more? Is it just short-term investor-driven thinking?
How much more productive are we supposed to be in engineering? Are we 10x'ing our testing capability at the same time? QA is already a massive bottleneck at my $DAYJOB. I'm not sure what benefits the company at-large derives from having the typing machine type faster.
I think it's an excuse to do needed lay offs without saying as much. So yes, preserving signals, essentially. I've never met a tech company that didn't love expanding work to fill capacity, even if the work is of little value.
The optimization function of capitalism and its instrumental convergence. The AI Alignment problem is already here, and it is us.
> Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.
That is actually the case with a lot of bakeries these days. But there is one major difference: the baker can rely, with almost 100% confidence, that the form, shape and ingredients used will be exact to the rounding error. Each time. No matter how many times they use the oven. And they don't have to invent strategies on how to "best use the ovens", they don't claim to "vibe-bake" 10x more than what they used to bake before, etc... The semi-automated ovens just effing work!
Now show me an LLM that even remotely provides this kind of experience.
On another note, if you had 100 engineers and you lay almost all of them off and keep 5 super-AI-accelerated engineers, while your competitor keeps 50 such engineers, your competitor can still iterate 10x as fast. So laying people off still comes with the risk of falling behind.
Maybe the bakery expands to make more than just loaves of bread, maybe different cakes, sandwiches, maybe expand delivery to nearby towns.
Lost me at "I’m building something right now. I won’t get into the details. You don’t give away the idea."
Fair enough. I know how that reads. But when anyone with a laptop and a subscription can ship production software in a weekend, the architecture and the idea start to matter a lot more. The technical details in the post are real. I just can't share the what yet. Take it or leave it.
Perhaps execution is cheap now and ideas aren't?
Personally I'm quite pleased with this inversion.
It’s kind of funny seeing all the AI hype guys talking about their 10 OpenClaw instances all running doing work and when you ask what it is, you can never get a straight answer..
For the record though, I love agentic coding. It deals with the accumulated cruft of software for me.
The work is mysterious and important.
The issue is that you become lazy after a while and stop "leading the design". And I think that's ok, because most of the code is just throwaway code. You would rewrite your project/app several times by the time it's worth it to pay attention to "proper" architecture. I wish I had these AIs 10 years ago so that I could focus on everything I wanted to build instead of becoming a framework developer/engineer.
I agree. I've gotten lazier over time too. But the cost of creating code is so cheap... it's now less important to be perfect the first time the code hits prod (application dependent). It can be rewritten from scratch in no time. The bar for 'maintainability' is a lot lower now, because the AI has more capacity and persistence to maintain terrible code.
I'm sure plenty of people disagree with me. But I'm a good hand programmer, and I just don't feel the need to do that any more. I got into this to build things for other people, and AI is letting me do that more efficiently. Yes, I've had to give up a puritan approach to code quality.
> But guided? The models can write better code than most developers. That’s the part people don’t want to sit with. When guided.
Where do you draw the line between just enough guidance vs too much hand-holding of an agent? At some point, wouldn't it be better to just do it yourself and be done with the project (while also building your muscle memory, experience, and the mental model for future projects, just like tons of regular devs have done in the past)?
The line is scope.
I'm not asking an agent to build me a full-stack app. That's where you end up babysitting it like a kindergartener and honestly you'd be faster doing it yourself. The way I use agents is focused, context-driven, one small task at a time.
For example: I need a function that takes a dependency graph, topologically sorts it, and returns the affected nodes when a given node changes. That's well-scoped. The agent writes it, I review it, done.
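A task that well-scoped might come back looking something like this. A minimal sketch, assuming the graph is a dict mapping each node to the nodes that depend on it; the names and the example graph are illustrative.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def affected_nodes(dependents, changed):
    """Given a map node -> nodes that depend on it, return every node
    affected by a change to `changed`, in topological (rebuild) order."""
    # 1. Collect everything reachable from the changed node.
    reachable, stack = {changed}, [changed]
    while stack:
        for dep in dependents.get(stack.pop(), ()):
            if dep not in reachable:
                reachable.add(dep)
                stack.append(dep)
    # 2. Topologically sort just the affected subgraph.
    #    TopologicalSorter takes node -> predecessors, so invert the map.
    preds = {node: set() for node in reachable}
    for node in reachable:
        for dep in dependents.get(node, ()):
            preds[dep].add(node)
    return list(TopologicalSorter(preds).static_order())

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(affected_nodes(graph, "a"))  # 'a' first, 'd' last, 'b'/'c' between
```

Easy to verify by eye, trivially testable, no system context needed: exactly the kind of task where review is cheaper than writing.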
But say I'm debugging a connection pool leak in Postgres where connections aren't being released back under load because a transaction is left open inside a retry loop. I'm not handing that to an agent. I already know our system. I know which service is misbehaving, I know the ORM layer, I know where the connection lifecycle is managed. The context needed to guide the agent properly would take longer to write than just opening the code and tracing it myself.
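For what it's worth, the failure mode described here reduces to a small pattern: a retry loop that checks a connection out of the pool but skips the release on the error path. The pool, the flaky query, and the names below are all illustrative stand-ins, not any particular ORM.

```python
# Illustrative sketch of a connection-pool leak inside a retry loop.

class Pool:
    """Toy pool that just counts available connections."""
    def __init__(self, size):
        self.available = size
    def acquire(self):
        if self.available == 0:
            raise RuntimeError("pool exhausted")
        self.available -= 1
        return object()
    def release(self, conn):
        self.available += 1

def flaky_query(conn, attempt):
    """Fails transiently on the first two attempts."""
    if attempt < 2:
        raise TimeoutError("transient failure")
    return "ok"

def leaky(pool):
    # BUG: on TimeoutError the connection is never released,
    # so every retry checks out a fresh one.
    for attempt in range(3):
        conn = pool.acquire()
        try:
            result = flaky_query(conn, attempt)
            pool.release(conn)
            return result
        except TimeoutError:
            continue  # conn leaked here

def fixed(pool):
    for attempt in range(3):
        conn = pool.acquire()
        try:
            return flaky_query(conn, attempt)
        except TimeoutError:
            continue  # transient; retry with a fresh connection
        finally:
            pool.release(conn)  # runs on every path, even continue/return

p = Pool(size=5)
leaky(p)
print("after leaky:", p.available)   # 3: two connections leaked
p2 = Pool(size=5)
fixed(p2)
print("after fixed:", p2.available)  # 5: all returned
```

The fix is a one-liner once you can see it, but spotting *where* the release is skipped takes exactly the system context the comment is talking about.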
That's the line. If the context you'd need to provide is larger than the task itself, just do it. If the task is well-defined and the output is easy to verify, let the agent rip.
The muscle memory point is real though. I still hand-write code when I'm learning something new or exploring a space I don't understand yet. AI is terrible for building intuition in unfamiliar territory because you can't evaluate output you don't understand. But for mundane scaffolding, boilerplate, things that repeat? I don't. Life's too short to hand-write your 50th REST handler.
Very much on the same page as the author, I think AI is a phenomenal accelerant.
If you're going in the right direction, acceleration is very useful. It rewards those who know what they're doing, certainly. What's maybe being left out is that, over a large enough distribution, it's going to accelerate people who are accidentally going in the right direction, too.
There's a baseline value in going fast.
>There's a baseline value in going fast.
Maybe to the people writing the invoices for the infra you're renting, sure. Or to the people who get paid to dig you out of the consequences you inevitably bring about. Remember, the faster the timescale, the worse we are wired to effectively handle it as human beings. We're playing with a fire that catches and spreads so fast, by the time anyone realizes the forest is catching and starting to react, the entire forest is already well on the way to joining in the blaze.
> We're playing with a fire that catches and spreads so fast, by the time anyone realizes the forest is catching and starting to react, the entire forest is already well on the way to joining in the blaze.
I suspect this has been said in one form or another since the discovery of fire itself.
The only way I see out of this crisis (yes I'm not on the token-using side of this) is strict liability for companies making software products (just like in the physical world). Then it doesn't matter if the token-generator spits out code or a software engineer spits out code - the company's incentives are aligned such that if something breaks it's on them to fix it and sort out any externalities caused. This will probably mean no vibe-coded side hustles but I personally am OK with that.
> The problem is: you can’t justify this throughput to someone who doesn’t understand real software engineering. They see the output and think “well the AI did it.” No. The AI executed it. I designed it. I knew what to ask for, how to decompose the problem, what patterns to use, when the model was going off track, and how to correct it. That’s not prompting. That’s engineering.
That’s the “money quote,” for me. Often, I’m the one that causes the problem, because of errors in prompting. Sometimes, the AI catches it, sometimes, it goes into the ditch, and I need to call for a tow.
The big deal, is that I can considerably “up my game,” and get a lot done, alone. The velocity is kind of jaw-dropping.
I’m not [yet] at the level of the author, and tend to follow a more “synchronous” path, but I’m seeing similar results (and enjoying myself).
There are two types of engineers who use AI:
- Ones who see it generated something bad, and blame the AI.
- Ones who see it generated something bad, and revert it and try to prompt better, with more clarity and guidance.
- Ones who see it generated something bad, and realise it'd be faster to just hand fix the issues than babysit an LLM
That's a PEBKAC issue.
Three types:
- Ones that use it as a “pair partner,” as opposed to an employee.
Thanks for the implicit insult. That was helpful.
> Building systems that supervise AI agents, training models, wiring up pipelines where the AI does the heavy lifting and I do the thinking. Honestly? I’m having more fun than ever.
I'm sure some people are having fun that way.
But I'm also sure some people don't like to play with systems that produce fuzzy outputs and break in unexpected moments, even though overall they are a net win. It's almost as if you're dealing with humans. Some people just prefer to sit in a room and think, and they now feel this is taken away from them.
I'm just an old school programmer who loves writing code, and the recent AI developments have just taken the most fun part away from me.
I get this. I don't think either of you is wrong. There's a real loss in not writing something from scratch and feeling it come together under your hands. I'm not dismissing that.
I have immense respect for the senior engineers who came before me. They built the systems and the thinking that everything I do now sits on top of. I learned from people. Not from AI. The engineers who reviewed my terrible pull requests, the ones who sat with me and explained why my approach was wrong. That's irreplaceable. The article is about where I think things are going, not about what everyone should enjoy.
And "taking the fun out" is one thing. Making 50% or more of coders redundant is a whole other can of worms.
Fr, like I started learning to program in C/C++ in 2020 at age 9, and in 2023 when the AI bubble took off, it felt like I did it all for nothing.
This essay somehow sounds worse than AI slop, like ChatGPT did a line of coke before writing this out.
I use AI everyday for coding. But if someone so obviously puts this little effort into their work that they put out into the world, I don’t think I trust them to do it properly when they’re writing code.
I wrote it myself. But the irony isn't lost on me. "Who did what" is kind of the whole point of the article. Appreciate the feedback.
It sounds a bit no-true-scotsman to me.
I agree wholeheartedly with all that is said in this article. When guided, AI amplifies the productivity of experts immensely.
There are two problems left, though.
One is, laypersons don't understand the difference between "guided" and "vibe coded". This shouldn't matter, but it does, because in most organizations managers are laypersons who don't know anything about coding whatsoever, aren't interested by the topic at all, and think developers are interchangeable.
The other problem is, how do you develop those instincts when you're starting up, now that AI is a better junior coder than most junior coders? This is something one needs to think about hard as a society. We old farts are going to be fine, but we're eventually going to die (retire first, if we're lucky; then die).
What comes after? How do we produce experts in the age of AI?
This is the question I keep coming back to. I don't have a clean answer yet.
The foundation I built came from years of writing bad code and understanding why it was bad. I look at code I wrote 10 years ago and it's genuinely terrible. But that's the point. It took time, feedback, reading books, reviewing other people's work, failing, and slowly building the instinct for what good looks like. That process can't be skipped.
If AI shortens the path to output, educators have to double down on the fundamentals. Data structures, systems thinking, understanding why things break. Not because everyone needs to hand-write a linked list forever, but because without that foundation you can't tell when the AI is wrong. You can't course-correct what you don't understand.
Anyone can break into tech. That's a good thing. But if someone becomes a purely vibe-coding engineer with no depth, that's not on them. That's on the companies and institutions that didn't evaluate for the right things. We studied these fundamentals for a reason. That reason didn't go away just because the tools got better.
I think the problem is overstated.
People always learn the things they need to learn.
Were people clutching their pearls about how programmers were going to lack the fundamentals of assembly language after compilers came along? Probably, but it turned out fine.
People who need to program in assembly language still do. People who need to touch low-level things probably understand some of it but not as deeply. Most of us never need to worry about it.
I don't think the comparison (that's often made) between AI and compilers is valid though.
A compiler is deterministic. It's a function; it transforms input into output and validates it in the process. If the input is incorrect it simply throws an error.
AI doesn't validate anything, and transforms a vague input into a vague output, in a non-deterministic way.
A compiler can be declared bug-free, at least in theory.
But it doesn't mean anything to say that the chain 'prompt-LLM-code' is or isn't "correct". It's undecidable.
>People always learn the things they need to learn.
No, they don't. Which is why a huge % of people are functionally illiterate at the moment, know nothing about finance and statistics and such, and are making horrendous decisions for their future and their bottom line, and so on.
There is also such a thing as technical knowledge loss between generations.
Finally a take that I can agree with.
I find it really sad how stubbornly people dismiss AI as a slop generator. I completely agree with the author: once you spend the time building a good enough harness, oh boy, you start getting those sweet gains. It takes a lot of time and effort, but it's absolutely worth it.
Personally, I dismiss AI, mainly agentic AI, because of its environmental impact. I hope that one day everyone will be held accountable for it.
What about the environmental impact of AI, especially agentic AI? I keep reading praise for AI on the orange site, but its environmental impact is rarely discussed. It seems that everyone has already adopted this technology, which is destroying our world a little more.
All environmental impacts are equal, but some of them are more equal than the others!
This comes from a dystopian book (Animal Farm). What is your point?
If you read the book, my point should be crystal clear - that environmental impact which aligns with The Party goals (shareholder profits) the best, is painted the least concerning of all.
I believe the orange site's consensus was that it's approximately one additional mini fridge or dish washer worth of consumption on average. You've got users who use these tools barely 1k tokens per week. Assuming it's all batched ideally that's like running an LED floodlight for a minute or so. The other end of the spectrum can be pretty extreme in consumption but it's also rare. Most people just use the adhoc stuff.
The environmental impact of AI replacing a human programmer is orders of magnitude lower than the environmental impact of that programmer. Look up average US water consumption and CO2 emissions per capita.
And then add on top the environmental impact of all of the money that programmer gets from programming - travels around the world, buying large houses, ...
If you care about the environment, you should want AI's replacing humans at most jobs so that they can no longer afford traveling around the world and buying extravagant stuff.
Yes, the environmental impact of an AI agent performing a given task is lower. However, we will not simply replace every programmer with an agent: in the process we will use ever more agents, exceeding the previous environmental impact of the humans. This is the rebound effect [0].
Your reasoning could be effective if we bounded the computing resources usable by all AI in order to meet carbon reduction goals.
[0] https://en.wikipedia.org/wiki/Rebound_effect_(conservation)
>The environmental impact of AI replacing a human programmer is orders of magnitude lower than the environmental impact of that programmer. Look up average US water consumption and CO2 emissions per capita.
The programmer will continue to exist as a consumer of those things even if they get replaced by AI in their job.
But he will no longer have that much money to spend on environment-damaging products.
So you mean that human programmers who were replaced by AI are dead by now?
"You'll be fine digging trenches, programmer", they said.
Seriously, though:
...so that they can no longer afford traveling around the world...
This is either sarcasm I failed to parse, or pure technofascism.
On top of that, the idea that all the programmers AI is replacing are extravagantly traveling around the world is dubious, especially for the ones in America who make the most dough; 90% don't even have a passport.
"Shape up or ship out" is an apt phrase I've heard. Agentic AI is a core part of software engineering. Either you are learning and using these tools, or you're not a professional and don't belong in the field.
Seems strange. For decades we allowed developers to use what made them comfortable: you like Notepad? Go ahead and use it. Don't want an LSP? That's fine, disable it.
So long as their productivity was on par with the rest of the team there was no issue.
Suddenly, everyone needs to use this new tool (whose effectiveness we haven't actually proven), and if you don't, you don't belong in the industry.
> So long as their productivity *was on par* with the rest of the team there was no issue.
Emphasis added. And anyway, for most software development in most shops it wasn't true; most development takes place in whatever IDE the group/organization standardized on for the task, to make sure everyone gets proper tooling and to make collaboration and information sharing easier. Think of all the Java enterprise software developed by legions of drones in the 2000s and 2010s. They all used Eclipse, because Eclipse is what they were given.
It's only with the emergence of whiny, persnickety Unix devs who refused to leave the comforting embrace of their editor of choice that shops in the internet/dotcom/startup tradition embraced a "use whatever tools you want" philosophy. They had uncharacteristically enormous leverage over the tech stack being deployed in such businesses and could force employers to make that concession. And anyway, what some of them could do with vi blew the boss's mind.
It is true that we don't have a whole lot of hard data from large organizations that show AI productivity improvements. But absence of evidence is not evidence of absence. Turns out, most large organizations just haven't adopted AI in the amount and ways that could make a big impact.
But we have enough anecdata from competent developers to suggest that the productivity gains are huge. So big, AI not only lets you do your normal tasks many times faster, it puts projects within reach that you would not have countenanced before because they were too complex or tedious to be worth the payoff.
So no. Refusing to use AI is just pure bloodymindedness at this point, like insisting on using a keypunch while everyone around you discovers the virtues of CRT terminals and timesharing. There were people like this even in the 1970s, when IBM finally came around and made timesharing available on their mainframes. Those people either got up to speed or moved on to a different profession. They couldn't keep working the way they'd been working, because the productivity expectations changed with the availability of new technology.