I think executives are excited about AI because it confirms their worldview: that the work is a commodity and the real value lies in orchestration and strategy.
It doesn't help that the West has a clear bias wherein moving "up" means moving away from the work. Many executives don't know what good looks like at the detail level, so they can't evaluate AI output quality.
MD here, of a really small company (and I'm not a doctor).
I'm (mildly) excited by LLMs because I love a new shiny tool that does appear to have quite some utility.
My analogy these days is a screwdriver. Let's ignore screw development for now.
The first screwdrivers, which we still use, are slotted and have a habit of slipping sideways and jumping out (camming out). That's, err, before LLMs ... something ... something.
Fast forward and we have Phillips and Pozi and electric drivers. Yes, there were ratchet jobs, and I still have one, but the cordless electric drill driver is nearly as magical as the Dr Who sonic effort! That's your modern LLM, that is.
Now a modern drill driver can wrench your wrist if you are not careful and brace properly. A modern LLM will hallucinate like a nineties raver on ecstasy, but if you listen carefully, phrase your prompts carefully, ignore the chomping teeth, and keep them hydrated, you may get something remarkable out of the creature 8)
Now I only use Chat at the totally free level but I do run several on-prem models using ollama and llama.cpp (all compiled from source ... obviously).
I love a chat with the snappily named "Qwen3.5-35B-A3B-UD-Q4_K_XL" but I'm well aware that it is like an old school Black and Decker off of the noughties and not like my modern DeWalt wrist-knackerers. I've still managed to get it to assist me in getting PowerDNS running with DNSSEC and Lua, and in configuring LACP and port channel/trunking and that on several switch brands.
> I'm (mildly) excited by LLMs because I love a new shiny tool that does appear to have quite some utility.
I really think a lot of folks were conned by a smooth operator and a polished demo, so now everyone has to suffer through having this nebulous thing rammed down our throats regardless of its real utility, because people at higher pay grades believe it has utility.
It feels like a lot of “AI is inevitable; you are failing to make this abundant future inevitable by your skepticism.”
>A modern LLM will hallucinate like a nineties raver on ecstasy but if you listen carefully and phrase your prompts carefully and ignore the chomping teeth and keep them hydrated, you may get something remarkable out of the creature 8)
I think another part of it is that AI tools demo really well, easily hiding how imperfect and limited they are when people see a contrived or cherry-picked example. Not a lot of people have a good intuition for this yet. Many people understand "a functional prototype is not a production app," but far fewer people understand "an AI that can be demonstrated to write functional code is not a software engineer," because this reality is rapidly evolving. In that rapidly evolving reality, people are seeing a lot of conflicting information, especially if you consider that a lot of that information is motivated (e.g., "AI is bad because it's bad to fire engineers," which, frankly, will not be compelling to some executives out there). Whatever the new reality is going to be, we're not going to find out one step at a time. A lot of lessons are going to be learned the hard way.
Yes, and they work really well for the kind of small side project an exec probably used to try out the LLM.
But writing code in one clean discrete repo is (esp. at a large org) only a part of shipping something.
Over time, I think tooling will get better at the pieces surrounding writing the code though. But the human coordination / dependency pieces are still tricky to automate.
Jeff Bezos famously said “your margin is my opportunity,” I feel like Steve Jobs could’ve just as easily said “your slop is my opportunity.” (And he sort of did with “insanely great”)
Indeed. Even the ur-craftsman, John Carmack, says that delivering value to customers is pretty much the only thing that matters in development. If AI lets you do that faster, cheaper, you'd be a fool not to use it. There's a reason why it's virtually a must in professional software engineering now.
As someone who's both an IC and leads other developers I disagree with the explanation. As a technical lead, with people I can much better predict the quality of the outcome than with LLMs, and the "failure modes" are much more manageable. As a programmer, I am actually more impressed with AI agents but in an informed and qualified way. Their debugging ability wows me; their coding ability disappoints and frustrates me.
I think that the simple explanation for why executives are so hyped about AI is simply that they're not familiar with its severe current limitations. For example, Garry Tan seems to really believe he's generating 10KLOC of working code per day; if he'd been a working developer he would have known he isn't.
I lead a team of Data Engineers, DevOps Engineers, and Data Scientists. I write code and have done so literally my entire life. AI-assisted codegen is incredible, especially over the last 3-4 months.
I understand that developers feel their code is an art form and are pissed off that their life’s work is now a commodity; but, it’s time to either accept it and move on with what has happened, specialize as an actual artist, or potentially find yourself in a very rough spot.
I wonder if your background just has you fooled. I worked on a data science team and code was always a commodity. Most data scientists know how to code in a fairly trivial way, just enough to get their models built and served. Even data engineers largely know how to just take that and deploy to Spark. They don't really do much software engineering beyond that.
I'm not being precious here or protective of my "art" or whatever. But I do find it sort of hilarious and obvious that someone on a data science team might not understand the aesthetic value of code, and I suspect anyone else who has worked on or with such a team can probably laugh about the same thing - we've uh... we've seen your code. We know you don't value aesthetic code lol. Single-letter variable names, `df1`, `df2`, `df3`.
I'm not particularly uncomfortable at the moment, because understanding computers, understanding how to solve problems, understanding how to map between problems and solutions, what will or won't meet a customer's expectations, etc., is still core to the job as it always has been. Code quality is still critical as well - anyone who's vibe-coded >15KLOC projects will know that models simply cannot handle that scale unless you're diligent about how it should be structured.
My job has barely changed semantically, despite rapid adoption of AI.
I understand that you’re trying to apply your experience to what we do as a team, and that makes sense; but we’re many, many stddevs beyond the 15K LOC target you identified and have no issues, because we do indeed take care to ensure we’re building these things the right way.
I have worked at many places and have seen the work of DEs and DSs that is borderline psychotic; but it got the job done, sorta. I have suffered through QA of 10000 lines that I ended up rewriting in less than 100.
So, yes; I understand where you’re coming from. But; that’s not what we do.
Yes, but then you said that you do what I'm suggesting is still critical to do, which is maintain the codebase even if you heavily leverage models: "we do indeed take care to ensure we’re building these things the right way."
lol this is not why people do "df1", "df2", etc, nor are those polymorphic names but okay.
> it's coming... some places move slower than others but it's coming
What is coming, exactly? Again, as said, I work at a company that has rapidly adopted AI, and I have been a long time user. My job was never about rapidly producing code so the ability to rapidly produce code is strictly just a boon.
My problem is that the C-suite equates this with “vibe coding,” when what you need is spec-driven dev.
Spec-driven dev is good software engineering practice. It’s been cast aside in the name of “agile” (which has nothing to do with not doing docs - but that’s another discussion).
My problem is that writing good specs takes time. Reviewing code and coaxing the codegen to use specific methods (async, critical sections, rwlocks, etc.) draws on previous dev experience. The general perception in the C-suite is that neither is important now, since “vibing” is what’s in.
Which parts of it exactly? I've considered for loops and if branches "commodities" for a while. The way you organize code, the design, is still pretty much open and not a solved problem, including by AI-based tools. Yes we can now deal with it at a higher level (e.g. in prompts, in English), but it's not something I can fully delegate to an agent and expect good results (although I keep trying, as tools improve).
LLM-based codegen in the hands of good engineers is a multiplier, but you still need a good engineer to begin with.
My problem with the code the agents produce has nothing to do with style or art. The clearest example of how bad it is was shown by Anthropic's experiments where agents failed to write a C compiler, which is not a very hard programming job to begin with if you know compilers, as the models do, but they failed even with a practically unrealistic level of assistance (a complete spec, thousands of human-written tests, and a reference implementation used as an oracle, not to mention that the models were trained on both the spec and reference implementation).
If you look at the evolution of agent-written code you see that it may start out fine, but as you add more and more features, things go horribly wrong. Let's say the model runs into a wall. Sometimes the right thing to do is go back into the architecture and put a door in that spot; other times the right thing to do is ask why you hit that wall in the first place, maybe you've taken a wrong turn. The models seem to pick one or the other almost at random, and sometimes they just blast a hole through the wall. After enough features, it's clear there's no convergence, just like what happened in Anthropic's experiment. The agents ultimately can't fix one problem without breaking something else.
You can also see how they shoot themselves in the foot by adding layers upon layers of defensive coding that get so thick that even they can't think through them. I once asked an agent to write a data structure that maintains an invariant in subroutine A and uses it in subroutine B. It wrote A fine, but B ignored the invariant and did a brute-force search over the data, the very thing the data structure was meant to avoid. As it was writing B, the agent explained that it didn't want to trust the invariant established in A because it might be buggy... Another thing you frequently see is that the code they write is so intent on success that it has a plan A, plan B, and plan C for everything. It tries to do something one way and adds contingencies for failure.
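The invariant anecdote above can be sketched in a few lines. This is a minimal, hypothetical reconstruction (the names `SortedBag`, `contains_trusting`, and `contains_defensive` are mine, not from any real agent transcript): subroutine A keeps the data sorted, a trusting B exploits that, and the "defensive" B the agent produced falls back to a linear scan, the very cost the structure was meant to avoid.

```python
import bisect

class SortedBag:
    """Illustrative data structure: add() maintains a sorted-order invariant."""

    def __init__(self):
        self._items = []

    def add(self, x):
        # "Subroutine A": insert while keeping _items sorted.
        bisect.insort(self._items, x)

    def contains_trusting(self, x):
        # What a human writes: trust the invariant, binary search in O(log n).
        i = bisect.bisect_left(self._items, x)
        return i < len(self._items) and self._items[i] == x

    def contains_defensive(self, x):
        # What the agent wrote: distrust add(), brute-force scan in O(n),
        # "in case the invariant is buggy."
        return any(item == x for item in self._items)
```

Both versions return the same answers; the defensive one just quietly throws away the whole point of the data structure, which is exactly why this failure mode is easy to miss in review.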
And so the code and the complexity compound until nothing and no one can save you. If you're lucky, your program is "finished" before that happens. My experience is mostly with gpt5.4 and 5.3-codex, although Anthropic's failed experiment shows that the Claude models suffer from similar problems. What does it say when a compiler expert that knows multiple compilers pretty much by heart, with access to thousands of tests, can't even write a C compiler? Most important software is more complex than a C compiler, isn't as well specified, and the models haven't trained on it.
I wish they could write working code; they just don't.[1] But man, can they debug (mostly because they're tenacious and tireless).
[1]: By which I don't mean they never do, but you really can't trust them to do it the way you can a programmer. Knowing how to code, like knowing how to fly a plane, doesn't mean sometimes getting the right result. It means always getting the right result (within capabilities that, in the case of humans, are usually known in advance).
The thing is for most places the kind of code they write is good enough. You have painted an awfully pessimistic picture that frankly does not mirror reality of many enterprises.
> What does it say when a compiler expert that knows multiple compilers pretty much by heart, with access to thousands of tests, can't even write a C compiler?
It does not know compilers by heart. That's just not true. The point of the experiment was to see how big of a codebase it can handle without human intervention and now we know the limits. The limitation has always been context size.
>By which I don't mean they never do, but you really can't trust them to do it as you can a programmer. Knowing to code, like knowing to fly a plane, doesn't mean sometimes getting the right result. It means always getting the right result (within your capabilities that are usually known in advance in the case of humans).
Getting things right ~90% of the time still saves me a lot of time. In fact I would assume this is how autopilot also works in that it does 90% of a job and the pilot is required to supervise it.
A friend of mine works at a place whose CEO has been completely one-shotted; he vibe-coded an app and decided this could multiply their productivity like a hundredfold. And now he's implementing an AI mandate for every employee, replete with tracking and metrics and the threat of being fired if you don't play ball.
I was explaining this to my wife, who asked, why doesn't the CEO understand the limitations and the drawbacks the programmers are experiencing. And I said: he doesn't care, because he's looking at what other businesses are doing, what they're writing about in Bloomberg and WSJ, what "industry best practice" is, and where the money is going. Trillions of dollars are going into revolutionizing every industry with AI. If you're a CEO and you're not angling to capture a piece of that, then the board is going to have some serious questions about your capability to lead the company. Executives are often ignorant of the problems faced by line workers in a way perhaps best explained by a particular scene from Swordfish (2001): "He lives in a world beyond your world..." https://www.youtube.com/watch?v=jOV6YelKJ-A The complaints of a few programmers just don't matter when you have millions or billions in capital at your command and business experts are saying you can tenfold your output with half the engineering workforce.
Right now there are only two choices for programmers: embrace generative AI fully and become proficient at it. Instead of surfacing problems with it, offer solutions: how can we use AI to make this better? Or have a very, very hard time working in the field.
The biggest differentiating factor today is engineers and/or decision makers willing to say no to a certain feature or implementation.
It's too easy to add bloat and complexity that can never go away, and with the tooling we now have, a significant portion of engineers are now an active risk to the projects they are working on.
AI allows executives to spend R&D budget to create a flywheel that builds more, faster, without hiring more. It makes every individual employee able to deliver more.
ICs dislike this because it raises expectations and puts the spotlight on delivery velocity. In a manufacturing analogy, it’s the same as adding robots that enable workers to pack twice as many pallets per day. You work the same hours, but you’re more tired, and the company pockets the profits.
Software Engineers are experiencing, many for the first time in their careers, what happens when they lose individual bargaining power. Their jobs are being redefined, and they have no say in the matter - especially in the US where “Union” is a forbidden word.
ICs dislike this because executives haven't been shy that their goal in increasing productivity with LLMs is to reduce headcount. Additionally, we have 50 years of data showing that increased productivity only marginally increases pay, if at all - all the gains are captured by the executives.
The more appropriate tools for ICs are torches and pitchforks.
No, they are captured disproportionately by the haut bourgeois capitalists. The two groups overlap to an extent (when major capitalists are nominally employed by a firm they invest in, it is usually as executives), but executives qua executives (that is, in their role as top-level managerial employees) are not the main beneficiaries of increased productivity.
You must be living in a different universe if you think ICs aren't enamored by AI. Every developer I know basically can't operate now without Claude Code (or equivalent).
Since the November/December Opus and Claude Code releases, I've found I don't need to read the code any more. Architecture overview, sure, and testing, yes, but not reading the code directly any more.
I (and my friends, similarly) inspect code indirectly now - telling agents to write reports about certain aspects of the code and architecture, etc.
I do regularly read the code that Claude outputs. And about 25% of the time the tests it writes will reimplement the code under test in the test.
Another 25% of the time the tests are wrong in some other way. Usually mocking something in a way that doesn't match reality.
And maybe 5% of the time Claude does some testing that requires a database, it will find some other database lying around and try to use that instead of what it's supposed to be doing.
And even if Claude writes a correct test, it will generally have it skip the test if a dependency isn't there - no matter how fervently I tell it not to.
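The anti-patterns described above are easy to show concretely. This is an illustrative sketch, not actual Claude output; `tax()` is a made-up function under test, and `psycopg2` stands in for any missing dependency:

```python
import importlib.util
import unittest

def tax(amount, rate):
    # Hypothetical function under test.
    return round(amount * rate, 2)

class TaxTests(unittest.TestCase):
    # Anti-pattern 1: the test re-derives the answer with the same formula
    # as the implementation, so it can never disagree with the code.
    def test_reimplements_code_under_test(self):
        amount, rate = 100.0, 0.175
        self.assertEqual(tax(amount, rate), round(amount * rate, 2))  # tautology

    # Anti-pattern 2: silently skip when a dependency is missing, so a
    # broken environment still produces a green build.
    @unittest.skipIf(importlib.util.find_spec("psycopg2") is None,
                     "db driver not installed")
    def test_persists_to_db(self):
        pass

    # What you actually want: pin an independently computed value, and
    # let a missing dependency fail loudly instead of skipping.
    def test_known_value(self):
        self.assertEqual(tax(100.0, 0.175), 17.5)
```

The tautological test passes no matter what bug you introduce into `tax()`, and the skip decorator means the database path may never be exercised at all - both look fine in a coverage report.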
If you're not looking at the code at all, you're building a house of cards. If you're not reading the tests, you're not even building; you're just covering the floor in a big sloppy pile of runny shit.
I'd understand not reading the code of the system under test, but you don't even read the tests? I'd do that if my architecture and design were very precise, but at this point I'd have spent too much time designing rather than implementing (and possibly uncovering unknown unknowns in the process).
> Me (and my friends similarly) inspect code indirectly now - telling agents to write reports about certain aspects of the code and architecture etc.
Doesn't this take longer than reading the code?
I can see how some of this is part of the future (I remember an article talking about Python modules having a big docstring at the top fully describing the public functions, with the author describing how they just update this doc and then regenerate the code fully, never reading it - I find this quite convincing), but in the end I just want the most concise language for what I'm trying to express. If I need an edge case covered, I'd rather have a very simple test making that explicit than more verbose forms. Until we have formal specifications everywhere, I guess.
But maybe I'm just not picturing what you mean exactly by "reports".
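The docstring-as-spec workflow mentioned above might look something like this. Everything here is a hypothetical sketch (the `slugify` spec and the marker convention are mine, not from the article being recalled): humans edit only the docstring, and the agent regenerates everything below the marker.

```python
"""Module spec (the only part a human edits or reads).

Public API:
    slugify(title: str) -> str
        Lowercase the input, replace each run of non-alphanumeric
        characters with a single '-', and strip leading/trailing '-'.
        Empty or all-punctuation input returns "".
"""
# --- everything below is regenerated by the agent from the spec above ---
import re

def slugify(title: str) -> str:
    # Collapse runs of anything outside [a-z0-9] into a single dash.
    s = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return s.strip("-")
```

The appeal is that the spec stays the single source of truth; the catch, as noted above, is that without explicit tests the edge cases ("all-punctuation input returns empty") live only in prose.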
I've seen the code these models produce without a human programmer going over the results with care. It's still slop. Better slop than in the past, but slop none the less. If you aren't at minimum reading the code yourself and you're shipping a significant amount of it, you're either effectively the first person to figure out the magic prompt to get the models to produce better code, or you're shipping slop. Personally, I wouldn't bet on the former.
Yeah, these models have definitely become more useful in the last months, but statements like "I don't need to read the code any more" still say more about the person writing that than about agents.
How is it not? It reads to me as them saying that all these devs have deskilled from "barely competent" to "completely helpless". Or is your claim that they were actually really good devs, and the deskilling has been even more intense than I'm picturing?
I find people tend to omit that on HN and folks dealing with different roles end up yelling at each other because those details are missing. Being an embedded sw engineer writing straight C/ASM is, for instance, quite different from being a frontend engineer. AI will perform quite differently in each case.
My experience is that it gets the syntax right but constantly hallucinates APIs and functions that don't exist but sound like they should. It also seems to be tricked by variable names that don't line up with their usage.
If those devs can't operate without an LLM, they weren't worth their salt to begin with. I find that most competent devs are skeptical of the tech, because it doesn't help them. But even among those who embrace it, they would get by just fine if it was gone tomorrow.
IC here, enamored with LLMs - my implementation speed used to be the bottleneck of what I can do both professionally and personally, and now only thoughts and ideas are the limit. That is incredibly exciting.
Thoughts and ideas as in "I will implement this in this structure, with these tradeoffs, and it will work with these 4 APIs and have no extra features, and here's how I (or the LLM with tools) am going to run it and test it".
Thoughts and ideas not as in "build Facebook" - a lot of people think AI can do that; it won't (but might pretend to), and it will just lead to failure.
My competitive edge did not diminish, it expanded.
It's absolutely replacing their work, but not their positions. They use it extensively to create all the paperwork, communications, emails, translations... and it works fine for these tasks, so they think it's equally useful for everything.
I believe that it's pretty close to the article thesis, just more prosaic.
And yes, the AI works great for some programming tasks, just not for everything or completely unsupervised.
It’s not a mystery… I can tell you what I do most days, and probably 80% of it is communication. An AI could do that. That communication is to learn what is going on up, down, and across the org. I mostly want to make sure we aren’t doing redundant work — though sometimes that is useful, and making sure timelines aren’t slipping. Oh, and dealing with conflicts.
The other 20% is writing: policies, SOPs, audits, grants, performance reviews, etc.
I could probably automate over half my job in n8n in a weekend… hmm… actually might try that.
No, execs aren't owners, but... if an exec can deliver the same or better results with fewer employees, aren't they a better exec? And if so, aren't they worth more money?
(Yeah, I know, there's lots of instances of execs who got paid huge amounts of money and delivered abysmal results...)
Boards aren't exactly dummies either. If they can see their exec isn't necessary I think they'd make moves to eliminate the positions. But that's in a world where reality meets the hype, and I don't think we're there yet. It gets weirder to think that then anyone with access to the tools and some capital could reasonably make their own company to battle it out with the big guys, but that future is a lot hazier.
Not really, not unless you're C-suite or your org size is in the thousands. When Google's looking for a VP to run a 100 person department, they care about your experience running similarly sized orgs as much as they care about your ability to achieve business results. People make fun of empire building but it's absolutely rational on the individual level.
In addition to the reason in the article, one thing I’ve noticed among some executives and product managers is that their experience using LLM coding tools causes them to lose respect for human software engineers. I’ve seen managers lose all respect for engineering excellence and assume anything they want can be shat out by an LLM on a short deadline. Or assume that because they were able to vibe-code something trivial like a blog, they don’t need to involve engineers in the design of anything; rather, engineers should just be code monkeys that follow whatever design the product managers vibed up. It is really demoralizing to be talked to as if the speaker were prompting an LLM.
I'm an IC and I love it. Executives have the wrong concept of AI. For them it's chat + magic, and then it does everything. You can't work with people who have incorrect concepts about how the world works. Best ignore them.
You're right, execs keep trying to fit the LLM square peg into the "intelligent agent" round hole.
Developers use it for grokking a codebase, for implementing boilerplate, for debugging. They don't need juniors to do the grunt work anymore; they can build and throw away, and the language and technology moats get smaller.
The value of low level managers, whose power came from having warm bodies to do the grunt work, diminishes.
The bean counters will be like when does it pay for itself. Will it? IDK, IDC.
Validation efforts likely become more necessary, so costs rise in another area. And product managers find they still need someone to translate the requirements well because LLMs are too agreeable. Cost optimization still needs someone to intervene as well.
I know there's an attempt to shift the development part from developers to other laypeople, but I think that's just going to frustrate everyone involved and probably settle back down into technical roles again. Well paid? Unclear.
I've seen literally every company in the world launch a chat bot as their AI strategy. I've also had clients who "wanted to do something with AI", and they would only be happy when they saw a chat UI. I built Semantic Search. Improves business performance significantly without changing the UI. Nobody is impressed because you cannot show it around and talk about it. It looks the same. It has to look like real AI, they say.
The bigger question is: if AI helps cut down development time by 10x (assume so for this conversation) and products are released immediately, will companies keep pushing products/functionalities out every week/month? They still have to wait and see adoption, feedback, etc. to see if it works or not. Sure, AI speeds up development, but to what end? It’s not like Meta is going to compress 5 years of Instagram features into 1! No one has the pipeline built up. So I'm not sure how it fits into the overall company strategy. It’s only helping to fire people now, is that it?
Iterating an existing product takes time, but creating a clean room clone of an existing product could be accelerated significantly with AI. We could be moving towards an environment where bigtech falls back on one of its core competencies (scale) and hoards infra while small startups pay them compute and inference costs to undercut existing consumer-facing software on price.
I can’t say who will win or lose. The value of a social network has as much to do with its userbase as its tech, so maybe Meta has a different path. Alphabet and Microsoft are who I really have in mind here.
For the non-technical, the current meteoric rise of AI is due to the fact that AI is generally synonymous with "it can talk". It has never _really_ registered with the wider audience that image recognition, or various filters, or whatever classifiers they stumbled upon are AI as well. What we have now is AI in the truest sense. And executives are primarily non-technical.
As for the technical people, we know how it works, we know how it doesn't work, and we're not particularly amused.
I don't buy it.
Executives worry about labor costs, ARR, RoI, etc. The grandest promises of AI are that executives will make a lot more money with a lot fewer employees. Of course they are pushing it!
ICs worry about doing their job (either doing it well because they care about their craft, or doing it good enough because they need to pay bills). AI doesn't really promise them anything. Maybe they automate some of their tasks away, but that just means they will take on more tasks. For practically any IC, there is no increase in wealth nor reduction in labor time. There is only a new quiet lingering threat that they might be laid off if an executive determines they're not needed anymore.
The premise is wrong. Plenty of ICs are 'enamored' with AI.
If you are not, you either have a boring job or do not have any ideas that are worth prototyping asynchronously. Or haven't tried AI in the last ~3 months.
They don’t have to use the tech, except maybe superficially. They are either being explicitly misled by salespeople or, as others have mentioned, it simply is a vehicle to confirm their own biases or annoyance at having to pay peons. It’s up to the grunts to actually make this work.
It’s like Marc Andreessen bloviating about how AI will replace everyone except him.
To be fair, some of this is understandable. At some level, you’re just going to see some things as a bullet point in a daily/monthly/quarterly report and possibly a 10 minute presentation. You’re implicitly assuming that the folks under you have condensed this information into something meaningful.
I do not think most executives are particularly enamored with AI. They are being mostly driven by the fear of missing out. More precisely, their thought process is: if they bet on AI and fail, they can plausibly claim that it was the technology's fault (not good enough, poorly suited for the business, etc). But if they skip on AI by choice, and their competition succeeds, they will be blamed personally. The more hyped a technology is, the stronger this calculus is for the managers. It's like Pascal's wager in a way.
Look at any of the large developer surveys out there, AI adoption is up to 80 - 90%; ICs absolutely are enamored with AI too. HN, and social media in general, is largely an echo chamber of the loudest voices that tend to skew negative, but does not reflect the broader reality. If HN were to be believed, most of Big Tech would be dead instead of thriving more than ever.
That said, the central point of the TFA is spot-on, though it could be made more generally, as it applies to engineering as well as management: uncertainty rises sharply the higher you climb the corporate and/or seniority ladder. In fact, the most important responsibility at higher levels is to take increasing ambiguity and transform it into much more deterministic roles and tasks that can be farmed out to many more people lower on the ladder.
The biggest impact of AI is that most deterministic tasks (and even some surprisingly ambiguous ones) are now spoken for. This happens to be the bread and butter of the junior levels, and is where most of the job displacement will happen.
I would say the most essential skill now is critical thinking, and the most essential personality trait is being comfortable with uncertainty (or as the LinkedInfluencers call it, "having a growth mindset.") Unfortunately, most of our current educational and training processes fail to adequately prepare us for this (see: "grade inflation") so at a minimum the fix needs to start there.
An executive's job is to increase profit. Reduction in employees is a primary way to do that. AI is the most promising way to reduce the need for employees.
Executives do not need actively functional systems from AI to help with their own daily work. Nothing falls over if their report is not quite right. So they are seeing AI output that is more complete for their own purposes.
But also, AI is good enough to accelerate software engineering. To the degree that there are problems with the output, well, that's why they haven't fired all the engineers yet. And executives never really cared about code quality - that is the engineers' problem.
What I'm trying to build for my small business client right now is not engineering, but it still requires some remaining employees. He's already automated a lot of it. But I'm trying to make a full version of his little call center that can run on one box like an H200. Which we can rent for like $3.59/hr. Which, if I remember correctly, is approximately the cost of one of his Filipino employees.
Where we are headed is that the executives are themselves pretty quickly going to be targeted for replacement. Especially those that do not have firm upper class social status that puts them in the same social group as ownership.
> individual contributors are evaluated by their execution on deterministic tasks.
Ha! Apparently the author hasn't been asked "how long will it take to code this?" yet... And isn't a common developer complaint that management does not know how to evaluate them, and substitutes things like how quickly a task gets completed, with the result that some guy looks amazing while his coworkers get stuck with all his technical debt?
I think ICs are threatened because they're told from day one how they are at will employees that can be terminated at any time with or without cause.
On top of that, places like Amazon extol the virtues of only working on projects that can be completed with entirely fungible staffing and Google tries ever so hard to electroplate this steaming turd of an ideology with iron pyrite calling fungibles "generalists."
So along comes AI coding agents, which I love as an IC because it excels at tedious work I'd rather not have to do in the first place, yet I get why others see it as a threat. But I really think it's no more of a threat than any other empty promise to cut costs with the silver bullet of the month and we just have to let the loudmouths insist otherwise until the industry figures out this isn't a magic black box. They never learn, do they? Maybe their jobs depend on never learning.
Because like everything else in technology, executives don’t understand it beyond a first-order level and assign their own value system to it. It seems like magic TO THEM because they’ve never been able to orchestrate such capability without friction until now, and that is the shadow of 20 years of search and semantic-search stagnation, mostly due to Google.
ICs see teams being reduced: as individual productivity goes up, the number of FTEs per project goes down, and superfluous folks are shown the door.
Meanwhile executives see the money related numbers go up.
- You check their work and they made some mistakes, but it's good enough to use
- You ultimately don't know if they're doing the best at their job but you have regular performance check-ins to be safe
As ICs we can complain all we want about the quality of AI, but as far as your manager goes, you using AI is not that much different, to them, from having an employee.
I’d posit that AI is good at tasks that managers have to do: theirs is a world composed primarily of processes and procedures set up by humans, about other humans. In other words, it is just like an AI trained on text. At the worker level you have to interact with the real, outside world in some way. If I could have AI take the wheel for every SharePoint tracker management manages to cook up, I’d be raving about it too.
I think one reason for the excitement is that the “software crisis” is real, painful, and costly. Thus it’s tempting to grasp for a shiny new silver bullet that might have a chance of solving it.
I’m neither a developer nor an executive, but from my vantage point the software crisis has to do with the fact that software development presents an existential risk to any organization that engages in it. It seems to be utterly resilient to estimation, and projects can run late by months or even years with no good explanation except “it’s management’s fault.” This has been discussed at length. If I had a good answer, “I wouldn’t still be working here” as the saying goes. But half a century after The Mythical Man Month, it still reads like it was written yesterday, and “no silver bullets” seems to ring true.
In my view, the software crisis will be resilient. Throwing more code, or more code per day, at a late project will make it later. There will be a grace period while the pace of coding seems exciting, but then the reality will set in: “We haven’t shipped a product.” And it will be management’s fault.
In my systems programming job ICs have mostly avoided it because we don't have time to learn a new thing with questionable benefits. A lot of my team are really, really good programmers and like that aspect of the job. They don't want to turn any part of it over to a machine. Now if a machine could save us from ever dealing with Jira...
That said, I have begun using AI for some things and it is starting to be useful. It's still 50/50 though, with many hallucinations that waste time but some cases where it caught very simple bugs (syntax or copy/paste errors). I think the experience of, say, systems programmers is very different vs python/web folks though. AI does a great job for my helper scripts in Python.
Management needs to take their own medicine though. They continue to refuse to leverage AI to do things it could actually be good at. I give a duplicate status to management 3x/week now. Why? AI could handle tracking and summarizing it just fine. It could also produce my monthly status for me.
Admitting to having drunk the Kool-Aid is the first step.
I wrote an entire system with tech I barely understand (duckdb, next.js, etc.), made 7 to 10 iterations per day, and added multiple new functions and integrations in hours, all while doing my main job. What does the code look like? It works; I do not care. Can the AI modify it in under 5 minutes? Yes. New features that would take a week minimum got done in 2 to 3 minutes. Did the AI ever complain? No, it did not. Anyone who thinks they will be hand coding going forward is completely fooling themselves.
The AI tests better than most engineers. When asked it builds flawless test harnesses and even suggests better solutions.
Never going back.
I liked the article but it misses one point. ICs take pride in some types of expertise they have accumulated over the years. AI kinda nullifies this. For instance, if I worked with Python/Django for ~5-10 years I might have become a sort of expert in Django. I know exactly the utility methods, conventions to use etc. But there's little need for such expertise with AI.
What? Doesn't this boil down to "people like people who reliably get results", e.g., we live in a complicated nondeterministic world but we try and make it as deterministic as possible, except for some reason you focus on the nondeterministic part for managers, and "deterministic" part for engineers?
Not even sure if determinism is a good axis to analyze this problem. Also smells extremely like concept creep - do you mean "moving up the abstraction stack" as "non determinism" too?
When you analyze this as "Management loves AI" and "workers hate it", it goes completely back to 'who owns the means of production?', and can be clearly seen through Marx's critique.
> When you analyze this as "Management loves AI" and "workers hate it", it goes completely back to 'who owns the means of production?', and can be clearly seen through Marx's critique.
AI has freed me from a vicious cycle that I had been corralled into as an explicit attrition tactic, and which almost ended with me being used non-consensually for reproductive purposes on at least one occasion.
It accomplished this not simply by eliminating my overpaid bullshit job as parasite attractor; but by putting an end to its pathetic semblance of a premise: building software to be used by, uh, someone? for, uh, something?
The various entities requesting the work (or, in later years, the layers of barely-sentient intermediaries between me and said entities) were hardly if ever clear on how exactly this was supposed to produce value; but now they're free, too! Free from having to even try to understand how answering that question is relevant - so in the end it worked out for them as well!
I am finally at liberty to do something worthwhile with my life, and while at this point I realize it'll take me some time to remember what "worthwhile" even was (or whether such a thing still exists in your imaginary world of personalized sensory bubbles), I do sleep a rich REM sleep knowing society is now capable of digging its own grave without my assistance. Seriously, I was looking at my bank account and getting a little worried.
I am told that mine is a minority position: if you happen to be the kind of person who believes that more is better, no matter more of what, rest assured you and your eventual progeny will be quite safe - for a while, anyway - in your new role as AI trainer (or is it AI fodder, let's let the market decide!)
Well, turns out when we are all busy looking the part, it becomes impossible for anyone to actually play the part; but also nobody notices, so this is fine too!
Just one request on my part: if possible, do shut up while figuring out how to better turn yourself and our world into paperclips, alright? Besides the ones that you recognize as people, a whole bunch of other people do live on this here planetation - and I hear they find all the AI blather to be mighty annoying.
Reads like an extended slop LinkedIn post. The author poses a question with an obvious answer yet answers with the most galaxy brain take possible while dropping in some academic concepts to make themselves sound like a thought leader despite probably only taking an intro class in college 10+ years ago.
I think executives are excited about AI because it confirms their worldview: that the work is a commodity and the real value lies in orchestration and strategy.
It doesn't help that the west has a clear bias wherein moving "up" is moving away from the work. Many executives often don't know what good looks like at the detail level, so they can't evaluate AI output quality.
MD here, of a really small company (and I'm not a doctor).
I'm (mildly) excited by LLMs because I love a new shiny tool that does appear to have quite some utility.
My analogy these days is a screwdriver. Let's ignore screw development for now.
The first screwdrivers, which we still use, are slotted and have a habit of slipping sideways and jumping (camming out). That's err before LLMs ... something ... something.
Fast forward and we have Phillips and Pozi and electric drivers. Yes there were ratchet jobs, and I still have one, but the cordless electric drilldriver is nearly as magical as the Dr Who sonic effort! That's your modern LLM, that is.
Now a modern drilldriver can wrench your wrist if you are not careful and brace properly. A modern LLM will hallucinate like a nineties raver on ecstasy but if you listen carefully and phrase your prompts carefully and ignore the chomping teeth and keep them hydrated, you may get something remarkable out of the creature 8)
Now I only use Chat at the totally free level but I do run several on-prem models using ollama and llama.cpp (all compiled from source ... obviously).
I love a chat with the snappily named "Qwen3.5-35B-A3B-UD-Q4_K_XL" but I'm well aware that it is like an old school Black and Decker off of the noughties and not like my modern DeWalt wrist knackerers. I've still managed to get it to assist me in getting PowerDNS running with DNSSEC and LUA, and in configuring LACP and port channel/trunking and that on several switch brands.
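For the curious, the PowerDNS side of that boils down to a few `pdnsutil` commands plus one config switch (zone name is a placeholder; check `pdnsutil --help` on your version before trusting this sketch):

```shell
# Sign a zone and surface the DS records for the registrar
pdnsutil secure-zone example.org     # generate keys and sign the zone
pdnsutil rectify-zone example.org    # fix up ordering/NSEC metadata
pdnsutil show-zone example.org       # prints keys and DS records

# LUA records are off by default; enable them in pdns.conf:
#   enable-lua-records=yes
```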
You?
> I'm (mildly) excited by LLMs because I love a new shiny tool that does appear to have quite some utility.
I really think a lot of folks were conned by a smooth operator and a polished demo, so now everyone has to suffer through having this nebulous thing rammed down our throats regardless of its real utility, because people with higher pay grades believe it has utility.
It feels like a lot of “AI is inevitable; you are failing to make this abundant future inevitable by your skepticism.”
>A modern LLM will hallucinate like a nineties raver on ecstasy but if you listen carefully and phrase your prompts carefully and ignore the chomping teeth and keep them hydrated, you may get something remarkable out of the creature 8)
Like what - the world's most advanced blowjob?
Perhaps I should have gone for Sherlock Holmes doing morphine as an analogy. Mind you the '90s raver fits for some models or is it the prompter ...
This is definitely part of it.
I think another part of it is that AI tools demo really well, easily hiding how imperfect and limited they are when people see a contrived or cherry-picked example. Not a lot of people have a good intuition for this yet. Many people understand "a functional prototype is not a production app" but far fewer people understand "an AI that can be demonstrated to write functional code is not a software engineer" because this reality is rapidly evolving. In that rapidly evolving reality, people are seeing a lot of conflicting information, especially if you consider that a lot of that information is motivated (eg, "ai is bad because it's bad to fire engineers" which, frankly, will not be compelling to some executives out there). Whatever the new reality is going to be, we're not going to find out one step at a time. A lot of lessons are going to be learned the hard way.
> AI tools demo really well
Yes, and they work really well for small side projects that an exec probably used to try out the LLM.
But writing code in one clean discrete repo is (esp. at a large org) only a part of shipping something.
Over time, I think tooling will get better at the pieces surrounding writing the code though. But the human coordination / dependency pieces are still tricky to automate.
Jeff Bezos famously said “your margin is my opportunity,” I feel like Steve Jobs could’ve just as easily said “your slop is my opportunity.” (And he sort of did with “insanely great”)
The reasons given in the article are much more compelling.
Work is delivering value.
Yes, we have craftsmanship, but at the end of the day everything is ephemeral and impermanent and the world continues on without remembering us.
I think both the IC and executive are correct in superposition.
Indeed. Even the ur-craftsman, John Carmack, says that delivering value to customers is pretty much the only thing that matters in development. If AI lets you do that faster, cheaper, you'd be a fool not to use it. There's a reason why it's virtually a must in professional software engineering now.
As someone who's both an IC and leads other developers I disagree with the explanation. As a technical lead, with people I can much better predict the quality of the outcome than with LLMs, and the "failure modes" are much more manageable. As a programmer, I am actually more impressed with AI agents but in an informed and qualified way. Their debugging ability wows me; their coding ability disappoints and frustrates me.
I think that the simple explanation for why executives are so hyped about AI is simply that they're not familiar with its severe current limitations. For example, Garry Tan seems to really believe he's generating 10KLOC of working code per day; if he'd been a working developer he would have known he isn't.
And one executive talks to other executives, not to their engineers. I think this is more peer pressure than anything else.
I lead a team of Data Engineers, DevOps Engineers, and Data Scientists. I write code and have done so literally for my entire life. AI-assisted codegen is incredible; especially over the last 3-4m.
I understand that developers feel their code is an art form and are pissed off that their life’s work is now a commodity; but, it’s time to either accept it and move on with what has happened, specialize as an actual artist, or potentially find yourself in a very rough spot.
I wonder if your background just has you fooled. I worked on a data science team and code was always a commodity. Most data scientists know how to code in a fairly trivial way, just enough to get their models built and served. Even data engineers largely know how to just take that and deploy to Spark. They don't really do much software engineering beyond that.
I'm not being precious here or protective of my "art" or whatever. But I do find it sort of hilarious and obvious that someone on a data science team might not understand the aesthetic value of code, and I suspect anyone else who has worked on such a team/ with such a team can probably laugh about the same thing - we've uh... we've seen your code. We know you don't value aesthetic code lol. Single variable names, `df1`, `df2`, `df3`.
I'm not particularly uncomfortable at the moment because understanding computers, understanding how to solve problems, understanding how to map between problems and solutions, what will or won't meet a customer's expectations, etc, is still core to the job as it always has been. Code quality is still critical as well - anyone who's vibe-coded >15KLOC projects will know that models simply can not handle that scale unless you're diligent about how it should be structured.
My job has barely changed semantically, despite rapid adoption of AI.
I understand that you’re trying to apply your experience to what we do as a team and that makes sense; but, we’re many many stddev beyond the 15K LOC target you identified and have no issues because we do indeed take care to ensure we’re building these things the right way.
So you understand and you agree and confirm my experience?
I have worked at many places and have seen the work of DEs and DSs that is borderline psychotic; but it got the job done, sorta. I have suffered through QA of 10000 lines that I ended up rewriting in less than 100.
So, yes; I understand where you’re coming from. But; that’s not what we do.
Yes, but then you said that you do what I'm suggesting is still critical to do, which is maintain the codebase even if you heavily leverage models. " we do indeed take care to ensure we’re building these things the right way."
> We know you don't value aesthetic code lol. Single variable names, `df1`, `df2`, `df3`.
https://degoes.net/articles/insufficiently-polymorphic
> My job has barely changed semantically, despite rapid adoption of AI.
it's coming... some places move slower than other but it's coming
> https://degoes.net/articles/insufficiently-polymorphic
lol this is not why people do "df1", "df2", etc, nor are those polymorphic names but okay.
> it's coming... some places move slower than other but it's coming
What is coming, exactly? Again, as said, I work at a company that has rapidly adopted AI, and I have been a long time user. My job was never about rapidly producing code so the ability to rapidly produce code is strictly just a boon.
My problem is that c suite equates “vibe coding” and what you need is spec driven dev.
Spec driven dev is good software engineering practice. It’s been cast aside in the name of “agile” (which has nothing to do with not doing docs - but that’s another discussion).
My problem is writing good specs takes time. Reviewing code and coaxing the codegen to use specific methods (async, critical sections, rwlocks, etc) is based on previous dev experience. The general perception with c suite is that neither is important now since “vibing” is what’s in.
> their life’s work is now a commodity
Which parts of it exactly? I've considered for loops and if branches "commodities" for a while. The way you organize code, the design, is still pretty much open and not a solved problem, including by AI-based tools. Yes we can now deal with it at a higher level (e.g. in prompts, in English), but it's not something I can fully delegate to an agent and expect good results (although I keep trying, as tools improve).
LLM-based codegen in the hands of good engineers is a multiplier, but you still need a good engineer to begin with.
My problem with the code the agents produce has nothing to do with style or art. The clearest example of how bad it is was shown by Anthropic's experiments where agents failed to write a C compiler, which is not a very hard programming job to begin with if you know compilers, as the models do, but they failed even with a practically unrealistic level of assistance (a complete spec, thousands of human-written tests, and a reference implementation used as an oracle, not to mention that the models were trained on both the spec and reference implementation).
If you look at the evolution of agent-written code you see that it may start out fine, but as you add more and more features, things go horribly wrong. Let's say the model runs into a wall. Sometimes the right thing to do is go back into the architecture and put a door in that spot; other times the right thing to do is ask why you hit that wall in the first place, maybe you've taken a wrong turn. The models seem to pick one or the other almost at random, and sometimes they just blast a hole through the wall. After enough features, it's clear there's no convergence, just like what happened in Anthropic's experiment. The agents ultimately can't fix one problem without breaking something else.
You can also see how they shoot themselves in the foot by adding layers upon layers of defensive coding that get so thick that they themselves can't think through them. I once asked an agent to write a data structure that maintains an invariant in subroutine A and uses it in subroutine B. It wrote A fine, but B ignored the invariant and did a brute-force search over the data, the very thing the data structure was meant to avoid. As it was writing B, the agent explained that it didn't want to trust the invariant established in A because it might be buggy... Another thing you frequently see is that the code they write is so intent on success that it has a plan A, plan B, and plan C for everything. It tries to do something one way and adds contingencies for failure.
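The ignored-invariant failure mode can be sketched like this (all names are made up for illustration, not the commenter's actual code): `insert()` maintains a sorted-order invariant, but the generated lookup does a defensive linear scan as if the invariant didn't exist, instead of trusting it with a binary search.

```python
import bisect

class SortedBag:
    """Toy data structure whose insert() maintains a sorted-order invariant."""

    def __init__(self):
        self._items = []

    def insert(self, x):
        # Subroutine A: correctly maintains the invariant.
        bisect.insort(self._items, x)

    def contains_generated(self, x):
        # What the agent wrote: an O(n) scan that ignores the invariant
        # it just established, "in case insert is buggy".
        return any(item == x for item in self._items)

    def contains_intended(self, x):
        # What was asked for: trust the invariant, O(log n) lookup.
        i = bisect.bisect_left(self._items, x)
        return i < len(self._items) and self._items[i] == x
```

Both lookups return the same answers, which is exactly why this kind of defensive layering survives testing while quietly defeating the point of the data structure.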
And so the code and the complexity compound until nothing and no one can save you. If you're lucky, your program is "finished" before that happens. My experience is mostly with gpt5.4 and 5.3-codex, although Anthropic's failed experiment shows that the Claude models suffer from similar problems. What does it say when a compiler expert that knows multiple compilers pretty much by heart, with access to thousands of tests, can't even write a C compiler? Most important software is more complex than a C compiler, isn't as well specified, and the models haven't trained on it.
I wish they could write working code; they just don't.[1] But man, can they debug (mostly because they're tenacious and tireless).
[1]: By which I don't mean they never do, but you really can't trust them to do it as you can a programmer. Knowing to code, like knowing to fly a plane, doesn't mean sometimes getting the right result. It means always getting the right result (within your capabilities that are usually known in advance in the case of humans).
The thing is, for most places the kind of code they write is good enough. You have painted an awfully pessimistic picture that frankly does not mirror the reality of many enterprises.
> What does it say when a compiler expert that knows multiple compilers pretty much by heart, with access to thousands of tests, can't even write a C compiler?
It does not know compilers by heart. That's just not true. The point of the experiment was to see how big of a codebase it can handle without human intervention and now we know the limits. The limitation has always been context size.
>By which I don't mean they never do, but you really can't trust them to do it as you can a programmer. Knowing to code, like knowing to fly a plane, doesn't mean sometimes getting the right result. It means always getting the right result (within your capabilities that are usually known in advance in the case of humans).
Getting things right ~90% of the time still saves me a lot of time. In fact I would assume this is how autopilot also works in that it does 90% of a job and the pilot is required to supervise it.
> literally
You were either a very talented baby or we’re justified in questioning your ability to assess the correctness of nitpicky formalisms.
Funny.
They're oddly credulous of the shovel salesmen in the gold rush, too.
E.g. when Jensen Huang said that you need to pair your $250k engineer with $250k of tokens.
A friend of mine works at a place whose CEO has been completely one-shotted; he vibe-coded an app and decided this could multiply their productivity like a hundredfold. And now he's implementing an AI mandate for every employee, replete with tracking and metrics and the threat of being fired if you don't play ball.
I was explaining this to my wife, who asked, why doesn't the CEO understand the limitations and the drawbacks the programmers are experiencing. And I said—he doesn't care, because he's looking at what other businesses are doing, what they're writing about in Bloomberg and WSJ, what "industry best practice is", and where the money is going. Trillions of dollars are going in to revolutionizing every industry with AI. If you're a CEO and you're not angling to capture a piece of that, then the board is going to have some serious questions about your capability to lead the company. Executives are often ignorant of the problems faced by line workers in a way perhaps best explained by a particular scene from Swordfish (2001): "He lives in a world beyond your world..." https://www.youtube.com/watch?v=jOV6YelKJ-A The complaints of a few programmers just don't matter when you have millions or billions of capital at your command, and business experts are saying you can tenfold your output with half the engineering workforce.
Right now there are only two choices for programmers: embrace generative AI fully and become proficient at it. Instead of surfacing problems with it, offer solutions: how can we use AI to make this better? Or have a very, very hard time working in the field.
Past a certain LOC number, the utility becomes negative (unless the lines were pure tests).
Even the utility of too many tests is negative. More upkeep and harder to change the code.
And with LLMs also more context and token usage and cost.
The biggest differentiating factor today is engineers and/or decision maker willing to say no to a certain feature or implementation.
It's too easy to add bloat and complexity that can never go away, and with the tooling we now have, a significant portion of engineers are now an active risk to the projects they are working on.
I disagree because I use it in a pretty huge codebase and it definitely saves time.
I'm sure an IC is not an integrated circuit or independent contractor. So what is it?
Individual Contributor, usually as opposed to any kind of management role.
It is silicon valley speech for "programmer".
AI allows executives to spend R&D to create a flywheel which builds more, faster, without hiring more. It makes every individual employee able to deliver more.
ICs dislike this because it raises expectations and puts the spotlight on delivery velocity. In a manufacturing analogy, it’s the same as adding robots that enables workers to pack twice as many pallets per day. You work the same hours, but you’re more tired, and the company pockets the profits.
Software Engineers are experiencing, many for the first time in their careers, what happens when they lose individual bargaining power. Their jobs are being redefined, and they have no say in the matter - especially in the US where “Union” is a forbidden word.
ICs dislike this because executives haven't been shy that their goal in increasing productivity with LLMs is to reduce headcount. Additionally, we have 50 years of data showing that increased productivity only marginally increases pay, if at all - all the gains are captured by the executives.
The more appropriate tools for ICs are torches and pitchforks.
ICs can hardly cry about it, because they did the exact same thing to the people they replaced.
> all the gains are captured by the executives.
No, they are captured disproportionately by the haut bourgeois capitalists. The two groups overlap to an extent (when major capitalist are nominally employed by a firm they invest in, it is usually as an executive), but executives qua executives (that is, in their role as top level managerial employees) are not the main beneficiaries of increased productivity.
> it’s the same as adding robots that enables workers to pack twice as many pallets per day.
It isn't this. This is the executive's misinterpretation.
You must be living in a different universe if you think ICs aren't enamored by AI. Every developer I know basically can't operate now without Claude Code (or equivalent).
I hope that’s exaggeration because being unable to operate without it means you’re going to do a terrible job of reviewing the code it’s producing.
Since the November/December Opus and Claude Code releases, I've found I don't need to read the code any more. Architecture overview, sure, and testing, yes, but not reading the code directly any more.
Me (and my friends similarly) inspect code indirectly now - telling agents to write reports about certain aspects of the code and architecture etc.
I do regularly read the code that Claude outputs. And about 25% of the time the tests it writes will reimplement the code under test in the test.
Another 25% of the time the tests are wrong in some other way. Usually mocking something in a way that doesn't match reality.
And maybe 5% of the time Claude does some testing that requires a database, it will find some other database lying around and try to use that instead of what it's supposed to be doing.
And even if Claude writes a correct test, it will generally have it skip the test if a dependency isn't there--no matter how fervently I tell it not to.
If you're not looking at the code at all, you're building a house of cards. If you're not reading the tests, you're not even building; you're just covering the floor in a big sloppy pile of runny shit.
I'd understand not reading the code of the system under test, but you don't even read the tests? I'd do that if my architecture and design were very precise, but at this point I'd have spent too much time designing rather than implementing (and possibly uncovering unknown unknowns in the process).
> Me (and my friends similarly) inspect code indirectly now - telling agents to write reports about certain aspects of the code and architecture etc.
Doesn't this take longer than reading the code?
I can see how some of this is part of the future (I remember this article talking about python modules having a big docstring at the top fully describing the public functions, and the author describing how they just update this doc, then regenerate the code fully, never reading it, and I find this quite convincing), but in the end I just want the most concise language for what I'm trying to express. If I need an edge case covered, I'd rather have a very simple test making that explicit than more verbose forms. Until we have formal specifications everywhere I guess.
But maybe I'm just not picturing what you mean exactly by "reports".
If I were you I’d be very worried about getting laid off. That kind of work isn’t going to keep earning a software engineer salary.
I've seen the code these models produce without a human programmer going over the results with care. It's still slop. Better slop than in the past, but slop none the less. If you aren't at minimum reading the code yourself and you're shipping a significant amount of it, you're either effectively the first person to figure out the magic prompt to get the models to produce better code, or you're shipping slop. Personally, I wouldn't bet on the former.
Yeah, these models have definitely become more useful in the last months, but statements like "I don't need to read the code any more" still say more about the person writing that than about agents.
It's not. Most developers are pretty bad at their job, and already can't review code very effectively.
They just create even more slop currently, which will be the case until someone realizes they aren't needed to produce slop at all.
So what you're saying is that now the worst devs can produce code faster and their velocity is no longer limited by their incompetence.
Why is this supposed to be a good thing?
Come on man, that’s not what GP is saying.
How is it not? It reads to me as them saying that all these devs have deskilled from "barely competent" to "completely helpless". Or is your claim that they were actually really good devs, and the deskilling has been even more intense than I'm picturing?
Because that also sounds real bad!
What do you work on?
I find people tend to omit that on HN and folks dealing with different roles end up yelling at each other because those details are missing. Being an embedded sw engineer writing straight C/ASM is, for instance, quite different from being a frontend engineer. AI will perform quite differently in each case.
AI is very good at writing C and asm. It even writes good Verilog. Unfortunately.
My experience is that it gets the syntax right but constantly hallucinates APIs and functions that don't exist but sound like they should. It also seems to be tricked by variable names that don't line up with their usage.
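One cheap guard against those hallucinated names (a rough sketch of my own; the helper name is invented, and it only catches names that fail to resolve in importable modules, not subtler misuse):

```python
import importlib

def symbol_exists(dotted: str) -> bool:
    """Return True if a dotted name like 'os.path.join' resolves.

    A quick sanity check before trusting an LLM-suggested call;
    hallucinated names return False instead of raising."""
    module_name, _, attr_path = dotted.partition(".")
    try:
        obj = importlib.import_module(module_name)
    except ImportError:
        return False
    for part in attr_path.split(".") if attr_path else []:
        if not hasattr(obj, part):
            return False
        obj = getattr(obj, part)
    return True
```

So `symbol_exists("os.path.join")` passes, while a plausible-sounding invention like `json.parse` gets flagged before it hits a code review.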
If those devs can't operate without an LLM, they weren't worth their salt to begin with. I find that most competent devs are skeptical of the tech, because it doesn't help them. But even among those who embrace it, they would get by just fine if it was gone tomorrow.
The industry is filled with people who just want to close their tickets and sign off.
And plenty of prolific programmers are writing publicly about their Ai use.
IC here, enamored with LLMs - my implementation speed used to be the bottleneck on what I can do, both professionally and personally, and now only thoughts and ideas are the limit. That is incredibly exciting.
Thoughts and ideas as in "I will implement this in this structure, with these tradeoffs, and it will work with these 4 APIs and have no extra features, and here's how I'm (or the LLM with tools is) going to run it and test it".
Thoughts and ideas not as in "build facebook" - a lot of people think AI can do that; it won't (but might pretend to), and it will just lead to failure.
My competitive edge did not diminish, it expanded.
>My competitive edge did not diminish, it expanded.
Reality check: LLMs are available to everyone, dev or otherwise, so your 'competitive edge' is indeed diminished if you believe LLMs are all that.
People who will get paid more if AI eliminates jobs (in theory, anyway — execs aren't necessarily owners) versus people whose jobs will be eliminated.
The funny thing is that AI can probably replace the exec’s job before it can replace a dev’s job.
It's absolutely replacing their jobs, but not their positions. They use it extensively to create all the paperwork, communications, emails, translations... and they work fine for these tasks so they think it's equally useful for everything.
I believe that's pretty close to the article's thesis, just more prosaic.
And yes, the AI works great for some programming tasks, just not for everything or completely unsupervised.
What do you think the exec job is? What do they do every day, every working hour? And how will AI replace that?
It’s not a mystery… I can tell you what I do most days, and probably 80% of it is communication. An AI could do that. That communication is to learn what is going on up, down, and across the org. I mostly want to make sure we aren’t doing redundant work — though sometimes that is useful, and making sure timelines aren’t slipping. Oh, and dealing with conflicts.
The other 20% is writing: policies, SOPs, audits, grants, performance reviews, etc.
I could probably automate over half my job in n8n in a weekend… hmm… actually might try that.
No, execs aren't owners, but... if an exec can deliver the same or better results with fewer employees, aren't they a better exec? And if so, aren't they worth more money?
(Yeah, I know, there's lots of instances of execs who got paid huge amounts of money and delivered abysmal results...)
Boards aren't exactly dummies either. If they can see their exec isn't necessary I think they'd make moves to eliminate the positions. But that's in a world where reality meets the hype, and I don't think we're there yet. It gets weirder to think that then anyone with access to the tools and some capital could reasonably make their own company to battle it out with the big guys, but that future is a lot hazier.
Not really, not unless you're C-suite or your org size is in the thousands. When Google's looking for a VP to run a 100 person department, they care about your experience running similarly sized orgs as much as they care about your ability to achieve business results. People make fun of empire building but it's absolutely rational on the individual level.
In addition to the reasons in the article, one thing I’ve noticed among some executives and product managers is that their experience using LLM coding tools causes them to lose respect for human software engineers. I’ve seen managers lose all respect for engineering excellence and assume anything they want can be shat out by an LLM on a short deadline. Or assume that because they were able to vibe code something trivial like a blog, they don’t need to involve engineers in the design of anything; rather, engineers should just be code monkeys who follow whatever design the product managers vibed up. It is really demoralizing to be talked to as if the speaker is prompting an LLM.
I'm an IC and I love it. Executives have the wrong concept of AI. For them it's chat + magic, and then it does everything. You can't work with people who have incorrect concepts about how the world works. Best ignore them.
You're right, execs keep trying to fit the LLM square peg into the "intelligent agent" round hole.
Developers use it for grokking a codebase, for implementing boilerplate, for debugging. They don't need juniors to do the grunt work anymore, they can build and throw away, and the language and technology moats get smaller.
The value of low level managers, whose power came from having warm bodies to do the grunt work, diminishes.
The bean counters will be like when does it pay for itself. Will it? IDK, IDC.
Validation efforts likely become more necessary, so costs rise in another area. And product managers find they still need someone to translate the requirements well because LLMs are too agreeable. Cost optimization still needs someone to intervene as well.
I know there's an attempt to shift the development part from developers to other laypeople, but I think that's just going to frustrate everyone involved and probably settle back down into technical roles again. Well paid? Unclear.
> For them it's chat + magic, and then it does everything
Look, I know that we like poking fun at some people but generally I haven't seen execs saying this.
I've seen literally every company in the world launch a chat bot as their AI strategy. I've also had clients who "wanted to do something with AI", and they would only be happy when they saw a chat UI. I built Semantic Search. Improves business performance significantly without changing the UI. Nobody is impressed because you cannot show it around and talk about it. It looks the same. It has to look like real AI, they say.
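For what it's worth, the core retrieval idea behind a semantic search like that can be surprisingly small. A toy sketch (not the system described above; a bag-of-words count stands in for a real learned embedding model, which is my simplifying assumption):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words token counts. A real system would
    # use a learned embedding model; this just illustrates retrieval.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: list[str]) -> str:
    # Return the document most similar to the query.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))
```

Swap `embed` for a real embedding model and this ranks by meaning rather than keyword overlap - which is exactly the kind of improvement that works great and demos terribly, because the UI looks identical.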
The bigger question is: if AI cuts development time by 10x (assume that for this conversation) and products are released immediately, will companies keep pushing products/functionalities out every week/month? They still have to wait and see adoption, feedback, etc. to know whether something works. Sure, AI speeds up development, but to what end? It’s not like Meta is going to compress 5 years of Instagram features into 1! No one has the pipeline built up. So I'm not sure how it fits into overall company strategy. It’s only helping to fire people for now, that’s it?
Iterating an existing product takes time, but creating a clean room clone of an existing product could be accelerated significantly with AI. We could be moving towards an environment where bigtech falls back on one of its core competencies (scale) and hoards infra while small startups pay them compute and inference costs to undercut existing consumer-facing software on price.
I don’t follow, do you mean to say that meta will become an infra provider instead of the full experience?
I can’t say who will win or lose. The value of a social network has as much to do with its userbase than its tech, so maybe Meta has a different path. Alphabet and Microsoft are who I really have in mind here.
There are two views, non-technical and technical.
For non-technical people, the current meteoric rise of AI is due to the fact that AI is generally synonymous with "it can talk". It has never _really_ registered with the wider audience that image recognition, or various filters, or whatever classifiers they stumbled upon were AI as well. What we have now is AI in the truest sense. And executives are primarily non-technical.
As for the technical people, we know how it works, we know how it doesn't work, and we're not particularly amused.
I don't buy it. Executives worry about labor costs, ARR, RoI, etc. The grandest promises of AI are that executives will make a lot more money with a lot fewer employees. Of course they are pushing it!
ICs worry about doing their job (either doing it well because they care about their craft, or doing it good enough because they need to pay bills). AI doesn't really promise them anything. Maybe they automate some of their tasks away, but that just means they will take on more tasks. For practically any IC, there is no increase in wealth nor reduction in labor time. There is only a new quiet lingering threat that they might be laid off if an executive determines they're not needed anymore.
That's the difference in enthusiasm about AI.
I'm not gonna say "incorrect" like the absolutists. It's an interesting hypothesis, at least.
But I will insist that executives are more driven by FOMO than a teenager.
The premise is incorrect. Plenty of ICs are enamored with AI. And plenty of executives are skeptical of it.
The premise is wrong. Plenty of ICs are 'enamored' with AI.
If you are not, you either have a boring job or do not have any ideas that are worth prototyping asynchronously. Or haven't tried AI in the last ~3 months.
They are, until HR comes for them.
They don’t have to use the tech, except maybe superficially. They are either being explicitly misled by salespeople or, like others have mentioned, it simply is a vehicle to confirm their own biases or annoyance at having to pay peons. It’s up to the grunts to actually make this work.
It’s like Marc Andreessen bloviating about how AI will replace everyone except him.
To be fair, some of this is understandable. At some level, you’re just going to see some things as a bullet point in a daily/monthly/quarterly report and possibly a 10 minute presentation. You’re implicitly assuming that the folks under you have condensed this information into something meaningful.
I do not think most executives are particularly enamored with AI. They are being mostly driven by the fear of missing out. More precisely, their thought process is: if they bet on AI and fail, they can plausibly claim that it was the technology's fault (not good enough, poorly suited for the business, etc). But if they skip on AI by choice, and their competition succeeds, they will be blamed personally. The more hyped a technology is, the stronger this calculus is for the managers. It's like Pascal's wager in a way.
Look at any of the large developer surveys out there, AI adoption is up to 80 - 90%; ICs absolutely are enamored with AI too. HN, and social media in general, is largely an echo chamber of the loudest voices that tend to skew negative, but does not reflect the broader reality. If HN were to be believed, most of Big Tech would be dead instead of thriving more than ever.
That said, the central point of the TFA is spot-on, though it could be made more generally, as it applies to engineering as well as management: uncertainty rises sharply the higher you climb the corporate and/or seniority ladder. In fact, the most important responsibility at higher levels is to take increasing ambiguity and transform it into much more deterministic roles and tasks that can be farmed out to many more people lower on the ladder.
The biggest impact of AI is that most deterministic tasks (and even some surprisingly ambiguous ones) are now spoken for. This happens to be the bread and butter of the junior levels, and is where most of the job displacement will happen.
I would say the most essential skill now is critical thinking, and the most essential personality trait is being comfortable with uncertainty (or as the LinkedInfluencers call it, "having a growth mindset.") Unfortunately, most of our current educational and training processes fail to adequately prepare us for this (see: "grade inflation") so at a minimum the fix needs to start there.
An executive's job is to increase profit. Reducing employees is a primary way to do that. AI is the most promising way to reduce the need for employees.
Executives do not need actively functional systems from AI to help with their own daily work. Nothing falls over if their report is not quite right. So they are seeing AI output that is more complete for their own purposes.
But also, AI is good enough to accelerate software engineering. To the degree that there are problems with the output, well, that's why they haven't fired all the engineers yet. And executives never really cared about code quality -- that is the engineers' problem.
What I'm trying to build for my small business client right now is not engineering, but it still requires some remaining employees. He's already automated a lot of it. But I'm trying to make a full version of his little call center that can run on one box like an H200, which we can rent for about $3.59/hr - which, if I remember correctly, is approximately the cost of one of his Filipino employees.
Where we are headed is that the executives are themselves pretty quickly going to be targeted for replacement. Especially those that do not have firm upper class social status that puts them in the same social group as ownership.
> individual contributors are evaluated by their execution on deterministic tasks.
Ha! Apparently the author hasn't been asked "how long will it take to code this?" yet... And isn't a common developer complaint that management does not know how to evaluate them, and substitutes things like how quickly a task gets completed, with the result that some guy looks amazing while his coworkers get stuck with all his technical debt?
I think ICs are threatened because they're told from day one that they are at-will employees who can be terminated at any time, with or without cause.
On top of that, places like Amazon extol the virtues of only working on projects that can be completed with entirely fungible staffing and Google tries ever so hard to electroplate this steaming turd of an ideology with iron pyrite calling fungibles "generalists."
So along comes AI coding agents, which I love as an IC because it excels at tedious work I'd rather not have to do in the first place, yet I get why others see it as a threat. But I really think it's no more of a threat than any other empty promise to cut costs with the silver bullet of the month and we just have to let the loudmouths insist otherwise until the industry figures out this isn't a magic black box. They never learn, do they? Maybe their jobs depend on never learning.
Because, like everything else in technology, executives don’t understand it beyond a first-order level and assign their own value system to it. It seems like magic TO THEM because they’ve never been able to orchestrate such capability without friction until now, and that is the shadow of 20 years of search and semantic search stagnation, mostly due to Google.
ICs see teams being reduced: as individual productivity increases, the number of FTEs per project goes down, and superfluous folks are shown the door.
Meanwhile executives see the money related numbers go up.
Because the majority of executives think AI is a magical black box, I’d reckon.
- You need something done
- You ask someone to do it
- You check their work and they made some mistakes, but it's good enough to use
- You ultimately don't know if they're doing the best at their job but you have regular performance check-ins to be safe
As ICs we can complain all we want about the quality of AI, but as far as your manager goes, you using AI is not that much different from them having an employee.
Because it’s a mythical silver bullet of increased output combined with reduced costs.
It makes me think of an executive I once reported to who “increased velocity” by changing the utilization rate on a spreadsheet from 75% to 80%.
It's part of the standard technology buzzword rotation:
embedded/cloud/IoT --> AI --> quantum…
When the company originally known as C3 Energy changes its name to C3.quantum, you'll know we're on to the next buzzword.
I’d posit that AI is good at the tasks managers have to do: theirs is a world composed primarily of processes and procedures set up by humans, about other humans. In other words, it is just like an AI trained on text. At the worker level, you have to interact with the real, outside world in some way. If I could have AI take the wheel for every SharePoint tracker management manages to cook up, I’d be raving about it too.
AI is the much-hoped-for MBA's Stone, the magical substance which transmutes engineering work (costly) into managerial work (valuable).
I think one reason for the excitement is that the “software crisis” is real, painful, and costly. Thus it’s tempting to grasp for a shiny new silver bullet that might have a chance of solving it.
I’m neither a developer nor an executive, but from my vantage point the software crisis has to do with the fact that software development presents an existential risk to any organization that engages in it. It seems to be utterly resilient to estimation, and projects can run late by months or even years with no good explanation except “it’s management’s fault.” This has been discussed at length. If I had a good answer, “I wouldn’t still be working here” as the saying goes. But half a century after The Mythical Man Month, it still reads like it was written yesterday, and “no silver bullets” seems to ring true.
In my view, the software crisis will be resilient. Throwing more code, or more code per day, at a late project will make it later. There will be a grace period while the pace of coding seems exciting, but then the reality will set in: “We haven’t shipped a product.” And it will be management’s fault.
Well I'm not looking forward to being out of a job and health insurance.
The implementation is harder than watching a few YouTube videos on it.
Eh...
In my systems programming job ICs have mostly avoided it because we don't have time to learn a new thing with questionable benefits. A lot of my team are really, really good programmers and like that aspect of the job. They don't want to turn any part of it over to a machine. Now if a machine could save us from ever dealing with Jira...
That said, I have begun using AI for some things and it is starting to be useful. It's still 50/50 though, with many hallucinations that waste time but some cases where it caught very simple bugs (syntax or copy/paste errors). I think the experience of, say, systems programmers is very different from that of python/web folks though. AI does a great job on my helper scripts in Python.
Management needs to take their own medicine though. They continue to refuse to leverage AI to do things it could actually be good at. I give a duplicate status to management 3x/week now. Why? AI could handle tracking and summarizing it just fine. It could also produce my monthly status for me.
I admit to having drunk the Kool-Aid; admitting it is the first step. I wrote an entire system with tech I barely understand (DuckDB, Next.js, etc.), made 7 to 10 iterations per day, and added multiple new functions and integrations in hours, all while doing my main job. What does the code look like? It works; I do not care. Can the AI modify it in under 5 minutes? Yes. New features that would have taken a week minimum got done in 2 to 3 minutes. Did the AI ever complain? No, it did not. Anyone who thinks they will be hand coding going forward is completely fooling themselves. The AI tests better than most engineers. When asked, it builds flawless test harnesses and even suggests better solutions. Never going back.
What's an IC?
An individual contributor. Someone who delivers technical work without managing people. It is an alternative to being promoted into a management role.
Individual Contributor. Directly creates work, not a supervisor.
I liked the article but it misses one point. ICs take pride in some types of expertise they have accumulated over the years. AI kinda nullifies this. For instance, if I worked with Python/Django for ~5-10 years I might have become a sort of expert in Django. I know exactly the utility methods, conventions to use etc. But there's little need for such expertise with AI.
What? Doesn't this boil down to "people like people who reliably get results", e.g., we live in a complicated nondeterministic world but we try and make it as deterministic as possible, except for some reason you focus on the nondeterministic part for managers, and "deterministic" part for engineers?
Not even sure if determinism is a good axis to analyze this problem. Also smells extremely like concept creep - do you mean "moving up the abstraction stack" as "non determinism" too?
Real answer - it's because an LLM is better than you at the things you suck at.
For executives, that's writing code. For ICs, it's other stuff.
Devs think it will save time and execs think it will save money.
But because time is money, I think all the benefits go to the dev. The exec still needs the dev regardless
ICs are too.
Why would ICs be enamored with something quite literally designed to replace them?
Because now you need fewer programmers. It is self-explanatory.
Is it really a mystery? A hot take?
Executives see this as way to replace labor.
The labor sees themselves being replaced.
This is a story as old as the hills.
IC’s aren’t? Really?
IC is a strange relabeling of a "worker".
When you analyze this as "management loves AI" and "workers hate it", it goes completely back to 'who owns the means of production?', and can be clearly seen through Marx's critique.
> When you analyze this as "Management loves AI" and "workers hate it" goes completely back to 'who owns the means of production?', and can be clearly seen within Marx's critique.
How? Marx's critique doesn't land here at all.
IC differentiates a lot more than just worker vs. non-worker. Middle management, and even a level or two above, aren't anything special.
IC can refer to people leading without direct reports, making $500k+ in comp.
> I think there’s pretty clearly a divide in AI perception between executives and individual contributors (ICs).
Narrator: there is not
AI has freed me from a vicious cycle that I had been corralled into as an explicit attrition tactic, and which almost ended with me being used non-consensually for reproductive purposes on at least one occasion.
It accomplished this not simply by eliminating my overpaid bullshit job as parasite attractor; but by putting an end to its pathetic semblance of a premise: building software to be used by, uh, someone? for, uh, something?
The various entities requesting the work (or, in later years, the layers of barely-sentient intermediaries between me and said entities) were hardly if ever clear on how exactly this was supposed to produce value; but now they're free, too! Free from having to even try to understand how answering that question is relevant - so in the end it worked out for them as well!
I am finally at liberty to do something worthwhile with my life, and while at this point I realize it'll take me some time to even remember what "worthwhile" even was (or whether such a thing still exists in your imaginary world of personalized sensory bubbles), I do sleep a rich REM sleep knowing society is now capable of digging its own grave without my assistance. Seriously, I was looking at my bank account and getting a little worried.
I am told that mine is a minority position: if you happen to be the kind of person who believes that more is better, no matter more of what, rest assured you and your eventual progeny will be quite safe - for a while, anyway - in your new role as AI trainer (or is it AI fodder, let's let the market decide!)
Well, turns out when we are all busy looking the part, it becomes impossible for anyone to actually play the part; but also nobody notices, so this is fine too!
Just one request on my part: if possible, do shut up while figuring out how to better turn yourself and our world into paperclips, alright? Besides the ones that you recognize as people, a whole bunch of other people do live on this here planetation - and I hear they find all the AI blather to be mighty annoying.
Reads like an extended slop LinkedIn post. The author poses a question with an obvious answer yet answers with the most galaxy brain take possible while dropping in some academic concepts to make themselves sound like a thought leader despite probably only taking an intro class in college 10+ years ago.