> Software engineers are scared of designing things themselves.
When I use a framework, it's because I believe that the designers of that framework are i) probably better at software engineering than I am, and ii) have encountered all sorts of problems and scaling issues (both in terms of usage and actual codebase size) that I haven't encountered yet, and have designed the framework to ameliorate those problems.
Those beliefs aren't always true, but they're often true.
Starting projects is easy. You often don't get to the really thorny problems until you're already operating at scale and under considerable pressure. Trying to rearchitect things at that point sucks.
And there was a time when using libraries and frameworks was the right thing to do, for that very reason. But LLMs have the equivalent of way more experience than any single programmer, and can generate just the bit of code that you actually need, without having to include the whole framework.
It's strange to me when articles like this describe the 'pain of writing code'. I've always found that the easy part.
Anyway, this stuff makes me think of what it would be like if you had Tolkien around today using AI to assist him in his writing.
'Claude, generate me a paragraph describing Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.'
'Revise that so that Sam is harsher and Frodo more stubborn.'
Sooner or later I look at that and think he'd be better off just writing the damned book instead of wasting so much time writing prompts.
Your last sentence describes my thoughts exactly. I try to incorporate Claude into my workflow, just to see what it can do, and the best I’ve ended up with is - if I had written it completely by myself from the start, I would have finished the project in the same amount of time but I’d understand the details far better.
Even just some AI-assisted development in the trickier parts of my code bases completely robs me of understanding. And those are the parts that need my understanding the most!
I don't really understand how this is possible. I've built some very large applications, and even a full LLM data curation, tokenizer, pretrain, post-train SFT/DPO pipeline, with LLMs, and it most certainly took far less time than if I had done it manually. Sure, it isn't all optimal... but it most certainly isn't subpar, and it is fully functional.
> if I had written it completely by myself from the start, I would have finished the project in the same amount of time but I’d understand the details far better.
I believe the argument from the other camp is that you don't need to understand the code anymore, just like you don't need to understand the assembly language.
Of all the points the other side makes, this one seems the most incoherent. Code is deterministic, AI isn’t. We don’t have to look at assembly, because a compiler produces the same result every time.
If your only understanding of the code comes from talking to AI, you could ask the AI "how do we do a business feature" and it would spit out a detailed answer even for a codebase that just says "pretend there is a codebase here". This is of course an extreme example, and you would probably notice it, but the same applies at every level.
Any detail, anywhere, cannot be fully trusted. I believe everyone's goal should be to prompt AI such that the code is the source of truth, and to keep the code super readable.
If AI is so capable, it's also capable of producing clean, readable code. And we should be reading all of it.
> We don’t have to look at assembly, because a compiler produces the same result every time.
This is technically true in the narrowest possible sense and practically misleading in almost every way that matters. Anyone who's had a bug that only manifests at -O2, or fought undefined behavior in C that two compilers handle differently, or watched MSVC and GCC produce meaningfully different codegen from identical source, or hit a Heisenbug that disappears when you add a printf, knows that the "deterministic compiler" is doing a LOT of work in that sentence that actual compilers don't deliver on.
Also what's with the "sides" and "camps?" ... why would you not keep your identity small here? Why define yourself as a {pro, anti} AI person so early? So weird!
You just described deterministic behavior. Bugs are also deterministic. You don’t get different bugs every time you compile the same code the same way. With LLMs you do.
Re: “other side” - I’m quoting the grandparent’s framing.
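To make the contrast concrete, here is a toy Python sketch (illustrative only, not any real model's decoding loop; the logits are made up) of why sampled LLM output varies run to run while a compiler's output doesn't:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float) -> str:
    """Pick the next token via temperature-scaled softmax sampling."""
    if temperature == 0:
        # Greedy decoding: deterministic, the same token every run.
        return max(logits, key=logits.get)
    # Scale, exponentiate, and sample proportionally to the resulting weights.
    weights = [math.exp(v / temperature) for v in logits.values()]
    return random.choices(list(logits), weights=weights, k=1)[0]

fake_logits = {"foo": 2.0, "bar": 1.8, "baz": 0.5}
print([sample_next_token(fake_logits, temperature=0.8) for _ in range(5)])
# e.g. ['foo', 'bar', 'foo', 'foo', 'bar'] -- different on every run, whereas
# compiling the same source with the same compiler and flags gives one output.
```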
People who really care about performance still do look at the assembly. Very few people write assembly anymore, a larger number do look at assembly every so often. It’s still a minority of people though.
I guess it would be similar here: a small few people will hand write key parts of code, a larger group will inspect the code that’s generated, and a far larger group won’t do either. At least if AI goes the way that the “other side” says.
>I believe the argument from the other camp is that you don't need to understand the code anymore
Then what stops anyone who can type in their native language from, once LLMs are perfected, just ordering their own software instead of using anybody else's (speaking about native apps like video games, mobile phones, desktop, etc.)?
Do they actually believe we'll need a bachelor's degree to prompt-program in a world where nobody cares about technical details, because the LLMs will be taking care of them? Actually, scratch that. Why would the companies pouring gorrilions of dollars of investment into this even give access to such power in an affordable way?
The deeper I look into the rabbit hole they think we're walking towards, the more issues I see.
At least for me, the game-changer was realizing I could (with the help of AI) write a detailed plan up front for exactly what the code would be, and then have the AI implement it in incremental steps.
Gave me way more control/understanding over what the AI would do, and the ability to iterate on it before actually implementing.
Sorry for being blunt, but if you have tried once or twice and came to this conclusion, it is definitely a skill issue. I never got comfortable in Java, Python, Go or any other language by writing 3 lines of it; it took me hundreds of hours spent doing nonsense, failing miserably, and finding out that I was building things which already existed in the std lib.
CI is failing. It passed yesterday. Is there a flaky API being called somewhere? Did a recent commit introduce a breaking change? Maybe one of my third-party dependencies shipped a breaking change?
I was going to work on new code, but now I have to spend between 5 minutes and an hour+ - impossible to predict - solving this new frustration that just cropped up.
I love building things and solving new problems. I'd rather not have that time stolen from me by tedious issues like this... especially now I can outsource the CI debugging to an agent.
These days if something flakes out in CI I point Claude Code at it and 90% of the time I have the solution a couple of minutes later.
> I point Claude Code at it and 90% of the time I have the solution a couple of minutes later.
Same experience. I don't know why people keep saying code was the easy part; that's only true when you are writing boilerplate that is easy and where expectations are clear.
I agree code is easier than some other parts, but it's not the easiest: the industry employed millions of us to write that easy thing.
When working on large codebases or building something in the flow, I just don't want to read all the OAuth2 scopes Google requires me to obtain. My experience was never: "now I will integrate Gmail, let me do gmail.FetchEmails(), cool it works, on to the next thing".
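For concreteness, the "just fetch my emails" task looks roughly like this with google-api-python-client (a sketch from memory, so verify the scope and call shapes against Google's current docs; credentials.json is a placeholder for your OAuth client file):

```python
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

# The scopes are the part you have to research up front; read-only here.
SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
creds = flow.run_local_server(port=0)  # pops a browser for user consent

service = build("gmail", "v1", credentials=creds)
# There is no one-call gmail.FetchEmails(): list message IDs, then fetch each.
resp = service.users().messages().list(userId="me", maxResults=10).execute()
for ref in resp.get("messages", []):
    msg = service.users().messages().get(userId="me", id=ref["id"]).execute()
    print(msg["snippet"])
```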
> It's strange to me when articles like this describe the 'pain of writing code'.
I find it strange to compare the comment sections for AI articles with those about vim/emacs etc.
In the vim/emacs comments, people always state that typing in code hardly takes any time, and thinking hard is where they spend their time, so it's not worth learning to type fast. Then in the AI comments, they say that with AI writing the code, they are freed up to spend more time thinking and less time coding. If writing the code was the easy part in the first place, and wasn't even worth learning to type faster for, then how much value can AI be adding?
Now, these might be disjoint sets of people, but I suspect (with no evidence of course) there's a fairly large overlap between them.
What I never understand is that people seem to think the conception of the idea and the syntactical nitty gritty of the code are completely independent domains. When I think about “how software works” I am at some level thinking about how the code works too, not just high level architecture. So if I no longer concern myself with the code, I really lose a lot of understanding about how the software works too.
Writing the code is where I discover the complexity I missed while planning. I don't truly understand my creation until I've gone through a few iterations of this. Maybe I'm just bad at planning.
At first I thought you were referring to the debates over using vim or using emacs, but I think you mean to refer to the discussions about learning to use/switching to powerful editors like vim or emacs. If you learn and use a sharp, powerful editor and learn to type fast, the "burden" of editing and typing goes away.
People are different. Some are painters and some are sculptors. Andy Warhol was a master draftsman but he didn't get famous off of his drawings. He got famous off of screen printing other people's art that he often didn't own. He just pioneered the technique and because it was new, people got excited, and today he's widely considered to be a generational artistic genius.
I tend to believe that, in all things, the quality of the output and how it is received is what matters and not the process that leads to producing the output.
If you use an LLM assisted workflow to write something that a lot of people love, then you have created art and you are a great artist. It's probable that if Tolkien was born in our time instead of his, he'd be using modern tools while still creating great art, because his creative mind and his work ethic are the most important factors in the creative process.
I'm not of the opinion that any LLM will ever provide quality that comes close to a master work by itself, but I do think they will be valuable tools for a lot of creative people in the grueling and unrewarding "just make it exist first" stage of the creative process, while genius will still shine as it always has in the "you can make it good later" stage.
> I tend to believe that, in all things, the quality of the output and how it is received is what matters and not the process that leads to producing the output.
Whether the ends justify the means is a well-worn debate, and I think the only solid conclusion we've come to as a society is that it depends.
That's a moral debate, not suitable for this discussion.
The discussion at hand is about purity and efficiency. Some people are process oriented, perfectionists, purists that take great pride in how they made something. Even if the thing they made isn't useful at all to anyone except to stroke their own ego.
Others are more practical and see a tool as a tool, not every hammer you make needs to be beautiful and made from the best materials money can buy.
Depending on the context either approach can be correct. For some things being a detail oriented perfectionist is good. Things like a web framework or a programming language or an OS. But for most things, just being practical and finding a cheap and clever way to get to where you want to go will outperform most over engineering.
Current models won't write anything new; they are "just" great at matching, qualifying, and copying patterns. They bring a lot of value right now, but there is no creativity.
I was talking to a coworker who really likes AI tooling, and it came up that they feel stronger at reading unfamiliar code than at writing code.
I wonder how much it comes down to that divide. I also wonder how true that is, or if they’re just more trusting that the function does what its name implies the way they think it should.
I suspect you, like me, feel more comfortable with code we’ve written than having to review totally foreign code. The rate limit is in the high level design, not in how fast I can throw code at a file.
It might be a difference in cognition, or maybe we just have a greater need to know precisely how something works instead of accepting a hand wavey “it appears to work, which is good enough”.
Tolkien's book is art; programs are supposed to do something.
Now, some programs may be considered art (e.g. code golf) or considered art by their creator. I consider my programs and code to be only the means to get the computer to do what I want, and there are also easy ways to ensure that they do what we want.
> Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.'
Is exactly what programs are. Not the minutiae of the language within.
I agree with your point. My concern is more about the tedious aspects. You could argue that tedium is part of what makes the craft valuable, and there's truth to that. But it comes down to trade-offs, what could I accomplish with that saved time, and would I get more value from those other pursuits?
If you're gonna take this track, at least be honest with yourself. Does your boss get more value out of you? You aren't going to get a kickback from being more productive, but your boss sure will.
I honestly think the stuff AI is really good at is the stuff around the programming that keeps you from the actual programming.
Take a tool like Gradle. Bigger pain in the ass than using an actual cactus as a desk chair. It has a staggering rate of syntax and feature churn with every version upgrade, sprawling documentation that is clearly written by space aliens, and every problem is completely ungoogleable because every single release does things differently and no advice stays valid for more than 25 minutes.
It's a comically torturous DevEx. You can literally spend days trying to get your code to compile again, and not a second of that time will be put toward anything productive. Sheer frustration. Just tears. Mad laughter. Rocking back and forth.
"Hey Claude, I've upgraded to this week's Gradle and now I'm getting this error I wasn't getting with last week's version, what could be going wrong?" makes all that go away in 10 minutes.
I had this moment recently with implementing facebook oauth. I don’t need to spend mental cycles figuring that out, doing the back and forth with their API, pulling my hair out at their docs, etc. I just want it to work and build my app. AI just did that part for me and could move on.
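The shape of that work is the standard OAuth2 authorization-code flow. A compressed Python sketch of its two steps; the Facebook endpoints and API version below are best-effort assumptions (Facebook revs them regularly), and the client IDs are placeholders:

```python
import secrets
import urllib.parse

import requests

AUTH_URL = "https://www.facebook.com/v19.0/dialog/oauth"
TOKEN_URL = "https://graph.facebook.com/v19.0/oauth/access_token"

def build_login_url(client_id: str, redirect_uri: str) -> tuple[str, str]:
    """Step 1: redirect the user to Facebook, carrying a CSRF 'state' value."""
    state = secrets.token_urlsafe(16)
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,  # must be checked when the callback comes in
        "scope": "public_profile,email",
    }
    return f"{AUTH_URL}?{urllib.parse.urlencode(params)}", state

def exchange_code(client_id: str, client_secret: str,
                  redirect_uri: str, code: str) -> str:
    """Step 2: swap the callback's ?code=... for an access token."""
    resp = requests.get(TOKEN_URL, params={
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,  # must match step 1 exactly
        "code": code,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]
```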
The absence of evidence is evidence in its own way. I don’t understand how there haven’t been more studies on this yet. The one from last year that showed AI made people think they were faster but were actually slower gets cited a lot, and I know that was a small study with older tools, but it’s amazing that that hasn’t been repeated. Or maybe it has and we don’t know because the results got buried.
One thing I’ve noticed is that effort may be saved but not as much time. The agent can certainly type faster than me but I have to sit there and watch it work and then check its work when done. There’s certainly some time savings but not what you think.
Another thing I've noticed is that using AI, I'm less likely to give existing code another look to see if there's already something in it that does what I need. It's so simple to get the AI to spin up a new class / method that gets close to what I want, so sometimes I end up "giving orders first, asking questions later" and only later realizing that I've duplicated functionality.
If Tolkien had not lived an entire life, fought in a war, been buddies with other authors, and also been a decent writer, the story doesn’t exist. And an LLM won’t come up with it.
An LLM isn’t coming up with the eye of Sauron, or the entire backstory of the ring, or gollum, etc etc
The LLM can’t know Tolkien had a whole universe built in his head that he worked for decades to get on to paper.
I’m so tired of this whole “an LLM just does what humans already do!” And then conflating that with “fuck all this LLM slop!”
I can only speak for myself but for me, it's all about the syntax. I am terrible at recalling the exact name of all the functions in a library or parameters in an API, which really slows me down when writing code. I've also explored all kinds of programming languages in different paradigms, which makes it hard to recall the exact syntax of operators (is comparison '=' or '==' in this language? Comments are // or /*? How many parameters does this function take, and in what order...) or control structures. But I'm good at high level programming concepts, so it's easy to say what I want in technical language and let the LLM find the exact syntax and command names for me.
I guess if you specialise in maintaining a code base with a single language and a fixed set of libraries then it becomes easier to remember all the details, but for me it will always be less effort to just search the names for whatever tools I want to include in a program at any point.
I agree with a bunch of this (I'm almost exclusively doing python and bash; bash is the one I can never remember more than the basics of). I will give the caveat that I historically haven't made use of fancy IDEs with easy lookup of function names, so would semi-often be fixing "ugh I got the function name wrong" mistakes.
Similar to how you outlined multi-language vs specialist, I wonder if "full stack" vs "niche" work unspokenly underlies some of the camps of "I just trust the AI" vs "it's not saving me any time".
There's a joke that's not entirely a joke that the job of a Google SWE is converting from one protobuf to another. That's generally not very fun code, IMO (which may differ from your opinion and that's why they're opinions!). Otoh, figuring out and writing some interesting logic catches my brain in a way that dealing with formats and interoperability stuff doesn't usually.
We're all different, but we all probably have things we like more than others.
I mean, I agree if it's really just "machine translate this code to use the approved method of doing this thing". That seems like a perfect use case for AI. Though one would think Google would already have extensive code mod infrastructure for that kind of thing.
But those aren't the stories you hear about with people coding with AI, which is what prompted my response.
They do and I think a lot of that is LLM'd these days, though that's just what I hear third-hand.
I do agree that this:
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
seems more than a little overblown. But I do sympathize with not feeling motivated to write a lot of glue and boilerplate, and that "meh" often derails me on personal projects where it's just my internal motivation competing against my internal de-motivation. LLMs have been really good there, especially since many of those are cases where only I will run or deal with the code and it won't be exposed to the innertubes.
Maybe the author can't touch type, but that's a separate problem with its own solution. :)
“He’s a liar and a sneak, Mr. Frodo, and I’ll say it plain — he’d slit our throats in our sleep if he thought he could get away with it,” Sam spat, glaring at the hunched figure scrabbling over the stones ahead. “Every word out of that foul mouth is poison dressed up as helpfulness, and I’m sick of pretending otherwise.” Frodo stopped walking and turned sharply, his eyes flashing with an intensity that made Sam take half a step back. “Enough, Sam. I won’t hear it again. I have decided. Sméagol is our guide and he is under my protection — that is the end of it.” Sam’s face reddened. “Protection! You’re protecting the very thing that wants to destroy you! He doesn’t care about you, Mr. Frodo. You’re nothing to him but the hand that carries what he wants!” But Frodo’s expression had hardened into something almost unrecognizable, a cold certainty that brooked no argument. “You don’t understand what this Ring does to a soul, Sam. You can’t understand it. I feel it every moment of every day, and if I say there is still something worth saving in that creature, then you will trust my judgment or you will walk behind me in silence. Those are your choices.” Sam opened his mouth, then closed it, stung as if he’d been struck. He fell back a pace, blinking hard, and said nothing more — though the look he fixed on Gollum’s retreating back was one of pure, undisguised loathing.
Claude already knows who the characters Frodo, Sam, and Gollum are, what their respective character traits are, and how they interacted with each other. This isn't the same as writing something new.
Please forgive me for being blunt, I want to emphasize how much this strikes me.
Your post feels like the last generation lamenting the new generation. Why can't we just use radios and slide rules?
If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?
There's so much plumbing and refactoring bullshit in writing code. I've written years of five nines high SLA code that moves billions of dollars daily. I've had my excitement setting up dev tools and configuring vim a million ways. I want starships now.
I want to see the future unfold during my career, not just have it be incrementalism until I retire.
I want robots walking around in my house, doing my chores. I want a holodeck. I want to be able to make art and music and movies and games. I will not be content with twenty more years of cellphone upgrades.
God, just the thought of another ten years of the same is killing me. It's so fucking mundane.
I think my take on the matter comes from being a games developer. I work on a lot of code for which agentic programming is less than ideal - code which solves novel problems and sometimes requires a lot of precise performance tuning, and/or often has other architectural constraints.
I don't see agentic programming coming to take my lunch any time soon.
What I do see it threatening is repetitive quasi carbon copy development work of the kind you've mentioned - like building web applications.
Nothing wrong with using these tools to deal with that, but I do think that a lot of the folks from those domains lack experience with heavier work, and falsely extrapolate the impact it's having within their domain to be applicable across the board.
> Your post feels like the last generation lamenting the new generation.
> The future is exciting.
Not the GP, but I honestly wanted to be excited about LLMs. And they do have good uses. But you quickly start to see the cracks in them, and they just aren't nearly as exciting as I thought they'd be. And a lot of the coding workflows people are using just don't seem that productive or valuable to me. AI just isn't solving the hard problems in software development. Maybe it will some day.
> Your post feels like the last generation lamenting the new generation [...] There's so much plumbing and refactoring bullshit in writing code [...] I've had my excitement
I don't read the OP as saying that: to me they're saying you're still going to have plumbing and bullshit, it's just your plumbing and bullshit is now going to be in prompt engineering and/or specifications, rather than the code itself.
I want to live forever and set foot on distant planets in other galaxies.
Got a prescription for that too?
I've made films for fifteen years. I hate the process.
Every one of my friends and colleagues that went to film school found out quickly that their dreams would wither and die on the vine due to the pyramid nature of studio capital allocation and expenditure. Not a lot of high autonomy in that world. Much of it comes with nepotism.
There are so many things I wish to do with technology that I can't because of how much time and effort and energy and money are required.
I wish I could magic together a P2P protocol that replaced centralized social media. I wish I could build a completely open source GPU driver stack. I wish I could make Rust compile faster or create an open alternative to AWS or GCP. I wish for so many things, but I'm not Fabrice Bellard.
I don't want to constrain people to the shitty status quo. Because the status quo is shitty. I want the next generation to have better than the bullshit we put up with. If they have to suffer like we suffered, we failed.
I want the future to climb out of the pit we're in and touch the stars.
Computing technology always becomes cheaper and more powerful over time. But it's a slow process. The rate of improvement for LLMs is already decreasing. You will die of old age before the technology that you seem to be looking for arrives.
> If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?
To go off the deep end… I actually think this LLM assistant stuff is a precondition to space exploration. I can see the need for an offline compressed corpus of all human knowledge that can do tasks and augment the humans aboard the ship. You'll need it because the latency back to Earth is a killer even for a "simple" interplanetary trip to Mars—that is 4 to 24 minutes round trip! Hell, even the moon has enough latency to be annoying.
Granted right now the hardware requirements and rapid evolution make it infeasible to really “install it” on some beefcake system but I’m almost positive the general form of moores law will kick in and we’ll have SOTA models on our phones in no time. These things will be pervasive and we will rely on them heavily while out in space and on other planets for every conceivable random task.
They'll have to function reliably offline (no web search), which means they probably need to be absolutely massive models. We'll have to find ways to selectively compress knowledge. For example, we might allocate more of the model weights to STEM topics and perhaps less to, I dunno, the fall of the Roman Empire, Greek gods, or the career trajectory of Pauly Shore. But perhaps not, because who knows—maybe a deep familiarity with Bio-Dome is what saves the colony on Kepler-452b.
The author seems to mistake having to update Node.js for a security patch to be a curse rather than a blessing.
The alternative is that your bespoke solution has undiscovered security vulnerabilities, probably no security community, and no easy fix for either of those.
You get the privilege of patching Node.js.
Similarly, as a hiring manager, you can hire a React developer. You can't hire a "proprietary AI coded integrated project" developer.
This piece seems to say more about React than it says about a general shift in software engineering.
Don't like React? Easiest it's ever been not to use it.
Don't like libraries, abstractions and code reuse in general? Avoid them at your peril. You will quickly reach the frontier of your domain knowledge and resourcing, and start producing bespoke square wheels without a maintenance plan.
Yeah, I really don't get it. So instead of using someone else's framework, you're using an AI to write a (probably inferior and less thoroughly tested and considered) framework. And your robot employee is probably pulling a bunch of stuff (not quite verbatim, of course) from existing relevant open source frameworks anyway. Big whoop?
It's quite easy to make things without React; it's not our fault that business leaders don't let devs choose how to solve problems. But hey, who am I to complain? React projects allow me to pay my bills! I've never seen a good "React" project yet, and I've been working professionally with React since before class components were a thing.
Every React codebase has its own unique failures thanks to the npm ecosystem, and this will never change. In fact, the best way to anticipate what kind of patterns are in a given React project is to look at its package.json.
I fail to see the obvious wisdom in having AI re-implement chunks of existing frameworks without the real-world battle testing, without the supporting ecosystem, and without the common parlance and patterns -- all of which are huge wins if you ever expand development beyond a single person.
It's worth repeating too, that not everything needs to be a react project. I understand the author enjoys the "vibe", but that doesn't make it a ground truth. AI can be a great accelerator, but we should be very cognizant of what we abdicate to it.
In fact, I would argue that the post reads as though the developer is used to mostly working alone, and often choosing the wrong tool for the job. It certainly doesn't support the claim of the title.
AI has a lot of "leaders" currently working through a somewhat ignorant discovery of existing domain knowledge (ask me how being a designer has felt in the last 15 years of UX Leadership™ slowly realizing there's depth to the craft).
In recent months, we have MCPs, helping lots of people realize that huh, when services have usable APIs, you can connect them together!
In the current case: AI can do the tedious things for me -> Huh, discarding vast dependency trees (because I previously wanted the tedious stuff done for me too) lessens my risk surface!
They really are discovered truths, but no one's forcing them to come with an understanding of the tradeoffs happening.
> the supporting ecosystem, ... the common parlance and patterns
Which are often the top reason to use a framework at all.
I could re-implement a web framework in Python if I needed to, but then I would lose all the testing, documentation and middleware, and worst of all, the next person would have to show up and relearn everything I did and understand my choices.
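For scale: the routing core of a toy Python web framework really is tiny, which is exactly the trap. A minimal sketch (illustrative only), carrying none of the testing, docs or middleware just mentioned:

```python
from wsgiref.simple_server import make_server

class App:
    """A toy WSGI 'framework': route registration plus dispatch, nothing else."""

    def __init__(self):
        self.routes = {}

    def route(self, path):
        def register(func):
            self.routes[path] = func
            return func
        return register

    def __call__(self, environ, start_response):
        handler = self.routes.get(environ["PATH_INFO"])
        if handler is None:
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"not found"]
        start_response("200 OK", [("Content-Type", "text/html")])
        return [handler().encode()]

app = App()

@app.route("/")
def index():
    return "<h1>hello</h1>"

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```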
I would think that frameworks make more sense than ever with LLMs.
The benefits of frameworks were always having something well tested that you knew would do the job, and that after a bit of use you'd be familiar with, and the same still stands.
LLMs still aren't AGI, and they learn by example. The reason they are decent at writing React code is because they were trained on a lot of it, and they are going to be better at generating based on what they were trained on, than reinventing the wheel.
As the human-in-the-loop, having the LLM generate code for a framework you are familiar with (or at least other people are familiar with) also lets you step in and fix bugs if necessary.
If we get to a point, post-AGI, where we accept AGI writing fully custom code for everything (but why would it - if it has human-level intelligence, wouldn't it see the value in learning and using well-debugged and optimized frameworks?!), then we will have mostly lost control of the process.
It's fun to ask the models for their input. I was working on diagrams and was sure Claude would want some Python/JS framework to handle layout and nodes and connections. It said "honestly I find it easiest to just write the svg code directly".
That is fun, but it doesn’t mean that the model finds it easier or will actually work better that way, that just means that in its training data many people said something like “honestly I find it easiest to just write the svg code directly” in response to similar questions
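For reference, "just write the SVG directly" means something like this: plain string assembly, no layout engine (a deliberately minimal sketch):

```python
def node(x, y, label, w=120, h=40):
    """A labelled box: a <rect> plus centered <text>."""
    return (
        f'<rect x="{x}" y="{y}" width="{w}" height="{h}" '
        f'fill="white" stroke="black"/>'
        f'<text x="{x + w / 2}" y="{y + h / 2}" '
        f'text-anchor="middle" dominant-baseline="middle">{label}</text>'
    )

def edge(x1, y1, x2, y2):
    """A straight connector between two points."""
    return f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="black"/>'

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="200">'
    + node(20, 20, "client")
    + node(240, 120, "server")
    + edge(140, 40, 240, 140)  # right edge of "client" toward "server"
    + "</svg>"
)
print(svg)  # paste into an .svg file or an <img> tag to view
```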
> Since [a few months ago], things have dramatically changed...
It's not like we haven't heard that one before. Things have changed, but it's been a steady march. The sudden magic shift, at a different point for everyone, is in the individual mind.
Regarding the epiphany... since people have been heavily overusing frameworks -- making their projects more complex, more brittle, more disorganized, more difficult to maintain -- for non-technical reasons, people aren't going to stop just because LLMs make them less necessary; the overuse wasn't necessary in the first place.
Perhaps unnecessary framework usage will drop, though, as the new hype replaces the old hype. But projects won't be better designed, better organized, better thought-through.
My biggest concern with AI is that I'm not sure how a software engineer can build up this sort of high-level intuition:
> I still have to deeply think about every important aspect of what I want to build. The architecture, the trade offs, the product decisions, the edge cases that will bite you at 3am.
Without a significant development period of this:
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
A professional mathematician should use every computer aid at their disposal if it's appropriate. But a freshman math major who isn't spending most of their time with just a notebook or chalk board is probably getting in the way of their own progress.
Granted, this was already an issue, to a lesser extent, with the frameworks that the author scorns. It's orders of magnitude worse with generative AI.
I'm not sure. I don't know about deep expertise and mastery, but I can attest that my fluency skyrocketed as the result of AI in several languages, simply because the friction involved in writing them went down by orders of magnitude. So I am writing way more code now in domains that I previously avoided, and I noticed that I am now much more capable there even without the AI.
What I don't know is what state I'd be in right now, if I'd had AI from the start. There are definitely a ton of brain circuits I wouldn't have right now.
Counterpoint: I've actually noticed them holding me back. I have 20 years of intuition built up now for what is hard and what is easy, and most of it became wrong overnight, and is now limiting me for no real reason.
The hardest part to staying current isn't learning, but unlearning. You must first empty your cup, and all that.
I have been using Cursor w/ Opus 4.x to do extensive embedded development work over the past six months in particular. My own take on this topic is that for all of the chatter about LLMs in software engineering, I think a lot of folks are missing the opportunity to pull back and talk about LLMs in the context of engineering writ large. [I'm not capitalizing engineering because I'm using the HN lens of product development, not building bridges or nuclear reactors.]
LLMs have been a critical tool not just in my application but in my circuit design, enclosure design (CAD, CNC) and I am the conductor where these three worlds meet. The degree to which LLMs can help with EE is extraordinary.
A few weeks ago I brought up a new IPS display panel that I've had custom made for my next product. It's a variant of the ST7789. I gave Opus 4.5 the registers and it produced wrapper functions that I could pass to LVGL in a few minutes, requiring three prompts.
This is just one of countless examples where I've basically stopped using libraries for anything that isn't LVGL, TinyUSB, compression or cryptography. The purpose built wrappers Opus can make are much smaller, often a bit faster, and perhaps most significantly not encumbered with the mental model of another developer's assumptions about how people should use their library. Instead of a kitchen sink API, I/we/it created concise functions that map 1:1 to what I need them to do.
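(For flavor, a wrapper of the kind described might look like the following MicroPython-style sketch. The pins, SPI bus and init sequence are placeholder assumptions; the command opcodes are standard ST7789 ones, but a real bring-up needs the panel's datasheet.)

```python
# MicroPython-flavored sketch of a bare ST7789-class panel driver.
from machine import Pin, SPI
import time

spi = SPI(1, baudrate=40_000_000)  # bus number and speed are assumptions
dc = Pin(8, Pin.OUT)   # data/command select (placeholder pin)
cs = Pin(9, Pin.OUT)   # chip select (placeholder pin)

def cmd(opcode: int, data: bytes = b""):
    cs(0)
    dc(0); spi.write(bytes([opcode]))   # command phase
    if data:
        dc(1); spi.write(data)          # parameter phase
    cs(1)

def init_panel():
    cmd(0x11); time.sleep_ms(120)       # SLPOUT: wake from sleep
    cmd(0x3A, b"\x55")                  # COLMOD: 16-bit RGB565
    cmd(0x29)                           # DISPON: display on

def flush(x0, y0, x1, y1, pixels: bytes):
    """The LVGL-facing wrapper: set a window, then stream pixels."""
    cmd(0x2A, x0.to_bytes(2, "big") + x1.to_bytes(2, "big"))  # CASET
    cmd(0x2B, y0.to_bytes(2, "big") + y1.to_bytes(2, "big"))  # RASET
    cmd(0x2C, pixels)                                         # RAMWR
```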
Where I agree with the author of this post is that I feel like perhaps it's time for a lot of libraries to sunset. I don't think replacing frameworks is the correct abstraction at all but I do think that it no longer makes sense to spend time integrating libraries when what you really need are purpose-built functions that do exactly what you want instead of what some library author thought you should want.
Using a framework gives you some assurance that the underlying methods are well designed. If you don't know how to spot issues in auth design, then using an LLM instead of a library is a bad idea.
I agree though there's many non-critical libraries that could be replaced with helper methods. It also coincides with more awareness of supply chain risks.
If you use a well regarded library, you can trust that most things in it were done with intention. If an expectation is violated, that's a learning opportunity.
With the AI firehose, you can't really treat it the same way. Bad patterns don't exactly stand out.
Maybe it'll be fine but I still expect to see a lot of code bases saddled with garbage for years to come.
I disagree about ditching abstractions. Programmatic abstractions aren't just a way to reduce the amount of code you write, they're also a common language to understand large systems more easily, and a way to make sure systems that get built are predictable.
I share that notion, but I think the abstractions are the foundational tech stack we have had for decades, like web standards or even bash. You need constraints, but not the unnecessary complexity that comes with many modern tech stacks (React/Next) that were built around SV's hyper-scalability monopoly mentality. Reach for simple tools if the task is simple: KISS.
Not only that, but a way to factor systems so you can make changes to them without spooky action at a distance. Of course, you have to put in a lot of effort to make that happen, but that's why it doesn't seem to me that LLM's are solving the hard part of software development in the first place.
So the suggestion here is that instead of using battle tested libraries/frameworks, everyone should now build their own versions, each with an unique set of silent bugs?
Even with a perfect coding agent, we code to discover what correct even is.
Team decides on vague requirements, then you actually have to implement something. Well that 'implementing' means iterating until you discover the correct thing. Usually in lots of finicky decisions.
Sometimes you might not care about those decisions, so you one shot one big change. But in my experience, the day-to-day on a production app you can 100% write all the code with Claude, but you're still trying to translate high level requirements into "low"-level decisions.
But in the end it's nice not to have to care about the code-monkey work: going all over a codebase, adding a lot of trivial changes by hand, etc.
> In my mind, besides the self declared objectives, frameworks solve three problems .. “Simplification” .. Automation .. Labour cost.
I think you are missing Consistency, unless you don't count frameworks that you write yourself as frameworks? There are 100 different ways of solving the same problem, and using a framework -- off the shelf or homemade -- creates consistency in the way problems are solved.
This seems even more important with AI, since you lose context on each task, so you need it to live within guardrails and best practices or it will make spaghetti.
> We can finally get rid of all that middle work. That adapting layer of garbage we blindly accepted during these years. A huge amount of frameworks and libraries and tooling that has completely polluted software engineering, especially in web, mobile and desktop development. Layers upon layers of abstractions that abstract nothing meaningful, that solve problems we shouldn’t have had in the first place, that create ten new problems for every one they claim to fix.
I disagree. At least for a little while until models improve to truly superhuman reasoning*, frameworks and libraries providing abstractions are more valuable than ever. The risk/reward for custom work vs library has just changed in unforeseen ways that are orthogonal to time and effort spent.
Not only do LLMs make customization of forks and the resulting maintenance a lot easier, but the abstractions are now the most valuable place for humans to work because it creates a solid foundation for LLMs to build on. By building abstractions that we validate as engineers, we’re encoding human in the loop input without the end-developer having to constantly hand hold the agent.
What we need now is better abstractions for building verification/test suites and linting so that agents can start to automatically self improve their harness. Skills/MCP/tools in general have had the highest impact short of model improvements and there’s so much more work to be done there.
* whether this requires full AGI or not, I don’t know.
There was a time around 2016 where you weren't allowed to write a React application without also writing a "Getting Started with React" blog post. Having trained on all of that, the AI probably thinks React is web development.
I guess juniors are different these days. In my generation a lot of people's first contact with code was doing basic (html, css, bits of js) web development. That was how I got started at like 12 or 13.
A few months ago I did exactly this. But over time I threw away all the generated JS, CSS and HTML. It was an unmaintainable mess. I finally chose Svelte and stuck with it. Now I have a codebase which makes sense to me.
I did ask AI to generate a landing page. That gave me the initial headers, footers and styles that I used for my webapp, but I threw away everything else.
Frameworks are the reason AI can learn patterns and repeat them; without frameworks you will be burning credits just to redo things that have already been optimized and completed. Unless you are an Anthropic investor, that's not the way to improve your coding.
Intellectual surrender is exactly the risk I fear with coding agents. Will the next generation of software ‘developers’ still know how to code? Seems coding agents are in a way taking us further from understanding the machine, just like frameworks have in the past.
Software has always been about abstraction. This one, in a way, is the ultimate abstraction. However it turns out that LLMs are a pretty powerful learning tool. One just needs the discipline to use it.
> But the true revolution happened clearly last year
Oh, that seems like a good bit of time!
> and since December 2025
So like... 1 or 2 months ago? This is like saying "over half of people who tried our product loved it - all 51% of them!". This article is pushing hype, and is mistaking Anthropic's pre-IPO marketing drive for actual change.
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
I constantly see this and think I must be operating in a different world. This never took significant amounts of time. Are people using react to make text blogs or something?
When you choose the right framework it saves you enormous amounts of time. Sounds like the author has trouble separating hype from fact. Pick the right framework and your LLM will work better, too.
The pendulum swing described here is real but I think the underlying issue is subtler than "AI vs. no AI."
The actual problem most teams have isn't writing code — it's understanding what the code they already depend on is doing. You can vibe-code a whole app in a weekend, but when one of your 200 transitive dependencies ships a breaking change in a patch release, no amount of AI is going to help you debug why your auth flow suddenly broke.
The skill that's actually becoming more valuable isn't "writing code from scratch" — it's maintaining awareness of the ecosystem you're building on. Knowing when Node ships a security fix that affects your HTTP handling, or when a React minor changes the reconciliation behavior, or when Postgres deprecates a function you use in 50 queries.
That's the boring, unsexy part of engineering that AI doesn't solve and most developers skip until something catches fire.
> no amount of AI is going to help you debug why your auth flow suddenly broke.
What? Coding agents are very capable at helping fix bugs in specific domains. Your examples are like, the exact place where AI can add value.
You do an update, things randomly break: tell Claude to figure it out and it can go look up the breaking changes in the new versions, read your code and tell you what happened and fix it for you.
That took the strangest turn. It started with empowerment to do much more (and that I really agree with) — to then use it to... build everything from scratch? What? Why?
What a framework gives me is mostly other people having done precisely the architectural work that is a prerequisite to my actual work. It's fantastic, for the same reason that automatic coding is. I want to solve unsolved problems asap.
I am so confused by the disconnect that I feel like I must be missing something.
Strange how many people are comparing code to art. Software engineering has never been about the code written, it’s about solving problems with software. With AI we can solve more problems with software. I have been writing code for 25 years, I love using AI. It allows me to get to the point faster.
The author is right: eliminating all this framework cruft will be a boon for building great software. I was a skeptic, but it seems obvious now that it's largely going to be an improvement.
Pretty much completely disagree with the OP. Software Engineering never left, maybe the author moved away from it instead.
> Stop wrapping broken legs in silk. Start building things that are yours.
This however is deeply wrong for me. Anyone who writes and reviews code regularly knows very well that reading code doesn't lead to the same deep, intuitive understanding of the codebase as writing that code does.
So, no, with AI you are not building things which are yours. You might call them yours, but you lose deeper understanding of what you built.
You're right; clearly I've tried to be a bit provocative to get the message across, but I'm not religious in this sense. Minimal frameworks that really solve a problem cleanly and are adopted with intention are welcome.
This is about green field development which is relatively rare. Much of the time the starting point is a bunch of code using React or maybe just a lump of PHP. Business logic ends up plunked down all over the place and LLMs tend to make a huge mess with all this unless kept on a tight leash.
I'm glad this guy is doing well, but I'm dreading the amount of work being created for people who can reverse engineer the mountains of hallucinated bullshit that he and others are now actively producing.
And if the frameworks aren't useful then maybe work up the chain and ditch compilers next?
> The three problems frameworks solve (or claim to) [..] Simplification [..] Automation [..] Labour cost
and he misses _the most important problem frameworks solve_
which is correctness
when it comes to programming, most things are far more complicated in subtle, annoying ways than they seem to be
and worse: while you often can "cut away" some of these corner cases, doing so tends to lead to obscure, very hard-to-find bugs, including security issues, which have a tendency to pop up way later, when you haven't touched the code for a while and don't remember which corner you cut (and with AI you likely never knew which corner was cut)
just very recently some very widely used Python libraries had some pretty bad bugs wrt. "basic" HTTP/web topics, like multipart request smuggling, DoS from "decompression bombs" and similar
and while this might look like a counter-argument, it actually speaks for strict code reuse even for simple topics, because now these bugs have been fixed! That is a very common pattern for frameworks/libraries: they start out with bugs, sadly often the same repeated common bugs known from other frameworks, and then over time things get ironed out.
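(To make the decompression-bomb case concrete: the guard a library author has to remember to write is small, but only if you know the corner exists. A minimal Python sketch; the 10 MiB cap is an arbitrary assumption.)

```python
import zlib

MAX_OUT = 10 * 1024 * 1024  # 10 MiB ceiling; tune per use case

def safe_decompress(data: bytes) -> bytes:
    """Inflate untrusted data without letting it blow up memory."""
    d = zlib.decompressobj()
    out = d.decompress(data, MAX_OUT)  # max_length caps this call's output
    if d.unconsumed_tail:
        # Input remains that would expand past the cap: a small payload
        # trying to inflate into something enormous. Refuse it.
        raise ValueError("decompressed size exceeds limit")
    return out

# The naive zlib.decompress(untrusted_bytes) happily allocates gigabytes;
# the wrapper above fails fast instead.
```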
But with AI there is an issue: a lot of the data it's trained on is code _which gets many of these "typical" things wrong_.
And it's non-deterministic, and good at "hiding" bugs, especially the kind of bugs which are prone to pass human review anyway.
So you _really_ would want to maximize the use of frameworks and libraries when using AI, as that removes a large part of the AI reliability issues.
But one thing does change: there is much less reason to give frameworks/libraries "neat compact APIs" (a thing people spend A LOT of time on, and which is prone to be a source of issues, as people insist on making things "look simpler" than they are and in turn accidentally make them not just simpler but outright wrong, or prevent use-cases you might need).
Now, depending on your definition of framework, you could argue that AI removes boilerplate issues in ways which allow effectively replacing all frameworks with libraries.
But you still need to review code, especially AI-generated code. To some degree the old saying that code is far more read than written is even more true with AI (as most of it isn't "written" by a human anymore). Now, you could just not review AI code, but that can easily count as gross negligence, and in some jurisdictions it's not (fully) possible to opt out of damages from gross negligence, no matter what you put in the TOS or other contracts. I.e., I can't recommend such negligent actions.
So IMHO there is still use for some kinds of frameworks, even if what you want from them will likely start to differ and many of them can be partially or fully "librarified".
There is yet another issue: the end-users are fickle, fashion-minded people, and will literally refuse to use an application if it does not look like the latest React style. They do not want to be seen using "old" software, like wearing the wrong outfit or some such nonsense. This is real, and baffling.
> Layers upon layers of abstractions that abstract nothing meaningful, that solve problems we shouldn’t have had in the first place, that create ten new problems for every one they claim to fix.
LLM generated code is the ultimate abstraction. A mess of code with no trusted origin that nobody has ever understood. It's worse than even the worst maintained libraries and frameworks in every way.
In big corporations that's how it is. Developers are told to only implement what is in the specs, and if they have any objection, they need to raise it to the PM, who will then forward it to the system architect, etc.
So that creates the notion that design is something out of reach. I've met developers who cannot develop anything on their own if it doesn't come with a ticket that explains everything and hand-holds them. If something is not clear, they are stuck and need the help of senior engineers.
Nah. Nothing has changed. To offload the work to an agent and make it a productivity gain it is exactly the same as using a framework, it's a black box portion of your system, written by someone else, that you don't understand.
Unless you are quite literally spending almost the same amount of time you'd spend yourself to deeply understand each component, at which point, you could write it yourself anyway, nothing has changed when it comes to the dynamics of actually authoring systems.
There are exceptions, but generally speaking untempered enthusiasm for agents correlates pretty well with lack of understanding about what engineering software actually entails (it's about relational and conceptual comprehension, communication, developing shared knowledge, and modeling, not about writing code or using particular frameworks!)
EDIT: And to be clear, the danger of "agentizing" software engineering is precisely that it promotes a tendency to obscure information about the system, turn engineers into personal self-llm silos, and generally discard all the second-order concerns that make for good systems, resilience, modifiability, intelligibility, performance.
I feel the same way, but I’m not a traditional software engineer. Just an old-school Webmaster who’s been trying to keep up with things, but I’ve had to hire developers all along.
I'm an ideas guy, and in the past month or so my eyes have also fully opened to what's coming.
But there’s a big caveat. While the actual grunt work and development is going away, there’s no telling when the software engineering part is going to go away as well. Even the ideas guy part. What happens when a simple prompt from someone who doesn’t even know what they’re doing results in an app that you couldn’t have done as well with whatever software engineering skills you have?
> it most certainly took far less time than if I had done it manually

So you skipped the code review and just checked that it does what you needed it to do?
I don't know how anyone can make this assumption in good faith. The poster did not imply anything along those lines.
> I would have finished the project in the same amount of time
Probably less time, because you understood the details better.
GCC is, I imagine, several orders of magnitude more deterministic than an LLM.
It’s not _more_ deterministic. It’s deterministic, period. The LLMs we use today are simply not.
People who really care about performance still do look at the assembly. Very few people write assembly anymore, a larger number do look at assembly every so often. It’s still a minority of people though.
I guess it would be similar here: a small few people will hand write key parts of code, a larger group will inspect the code that’s generated, and a far larger group won’t do either. At least if AI goes the way that the “other side” says.
>I believe the argument from the other camp is that you don't need to understand the code anymore
Then what stops anyone who can type in their native language from, ultimately when LLMs are perfected, just ordering their own software instead of using anybody else's (speaking about native apps like video games, mobile phones, desktop, etc.)?
Do they actually believe we'll need a bachelor's degree to prompt-program in a world where nobody cares about technical details, because the LLMs will be taking care of them? Actually, scratch that. Why would the companies who're pouring gorillions of dollars into this even give access to such power in an affordable way?
The deeper I look into the rabbit hole they think we're walking towards, the more issues I see.
At least for me, the game-changer was realizing I could (with the help of AI) write a detailed plan up front for exactly what the code would be, and then have the AI implement it in incremental steps.
Gave me way more control/understanding over what the AI would do, and the ability to iterate on it before actually implementing.
For quite a bit of software you would need to understand the assembly. Not everything is web services.
skill issue.
Sorry for being blunt, but if you have tried once or twice and came to this conclusion, it is definitely a skill issue. I never got comfortable by writing 3 lines of Java, Python, Go, or any other language; it took me hundreds of hours spent doing nonsense, failing miserably, and finding out that I was building things which already exist in the std lib.
Have you really never found writing code painful?
CI is failing. It passed yesterday. Is there a flaky API being called somewhere? Did a recent commit introduce a breaking change? Maybe one of my third-party dependencies shipped a breaking change?
I was going to work on new code, but now I have to spend between 5 minutes and an hour+ - impossible to predict - solving this new frustration that just cropped up.
I love building things and solving new problems. I'd rather not have that time stolen from me by tedious issues like this... especially now I can outsource the CI debugging to an agent.
These days if something flakes out in CI I point Claude Code at it and 90% of the time I have the solution a couple of minutes later.
> I point Claude Code at it and 90% of the time I have the solution a couple of minutes later.
Same experience. I don't know why people keep saying code was the easy part; sure, only when you are writing boilerplate which is easy and expectations are clear.
I agree code is easier than some other parts, but not the easiest; the industry employed millions of us to write that "easy" thing.
When working on large codebases or building something in the flow, I just don't want to read all the OAuth2 scopes Google requires me to obtain, my experience was never: "now I will integrate Gmail, let me do gmail.FetchEmails(), cool it works, on to the next thing"
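To make that concrete, here's roughly the ceremony "just fetch emails" actually involves (a sketch from memory using google-api-python-client, after you've already survived the OAuth2 consent flow and saved token.json; details are illustrative):

    # Sketch only: assumes token.json came out of Google's OAuth2 consent dance.
    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build

    # Even the scope is a decision: readonly vs modify vs full access.
    SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

    creds = Credentials.from_authorized_user_file("token.json", SCOPES)
    service = build("gmail", "v1", credentials=creds)

    # There is no gmail.FetchEmails(): you list message IDs, then fetch each one.
    resp = service.users().messages().list(userId="me", maxResults=10).execute()
    for stub in resp.get("messages", []):
        msg = service.users().messages().get(userId="me", id=stub["id"]).execute()
        print(msg["snippet"])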
> It's strange to me when articles like this describe the 'pain of writing code'.
I find it strange to compare the comment sections for AI articles with those about vim/emacs etc.
In the vim/emacs comments, people always state that typing in code hardly takes any time, and thinking hard is where they spend their time, so it's not worth learning to type fast. Then in the AI comments, they say that with AI writing the code, they are freed up to spend more time thinking and less time coding. If writing the code was the easy part in the first place, and wasn't even worth learning to type faster, then how much value can AI be adding?
Now, these might be disjoint sets of people, but I suspect (with no evidence of course) there's a fairly large overlap between them.
What I never understand is that people seem to think the conception of the idea and the syntactical nitty gritty of the code are completely independent domains. When I think about “how software works” I am at some level thinking about how the code works too, not just high level architecture. So if I no longer concern myself with the code, I really lose a lot of understanding about how the software works too.
Writing the code is where I discover the complexity I missed while planning. I don't truly understand my creation until I've gone through a few iterations of this. Maybe I'm just bad at planning.
At first I thought you were referring to the debates over using vim or using emacs, but I think you mean to refer to the discussions about learning to use/switching to powerful editors like vim or emacs. If you learn and use a sharp, powerful editor and learn to type fast, the "burden" of editing and typing goes away.
People are different. Some are painters and some are sculptors. Andy Warhol was a master draftsman but he didn't get famous off of his drawings. He got famous off of screen printing other people's art that he often didn't own. He just pioneered the technique and because it was new, people got excited, and today he's widely considered to be a generational artistic genius.
I tend to believe that, in all things, the quality of the output and how it is received is what matters and not the process that leads to producing the output.
If you use an LLM assisted workflow to write something that a lot of people love, then you have created art and you are a great artist. It's probable that if Tolkien was born in our time instead of his, he'd be using modern tools while still creating great art, because his creative mind and his work ethic are the most important factors in the creative process.
I'm not of the opinion that any LLM will ever provide quality that comes close to a master work by itself, but I do think they will be valuable tools for a lot of creative people in the grueling and unrewarding "just make it exist first" stage of the creative process, while genius will still shine as it always has in the "you can make it good later" stage.
> I tend to believe that, in all things, the quality of the output and how it is received is what matters and not the process that leads to producing the output.
Whether the ends justify the means is a well-worn debate, and I think the only solid conclusion we've come to as a society is that it depends.
That's a moral debate, not suitable for this discussion.
The discussion at hand is about purity and efficiency. Some people are process oriented, perfectionists, purists that take great pride in how they made something. Even if the thing they made isn't useful at all to anyone except to stroke their own ego.
Others are more practical and see a tool as a tool, not every hammer you make needs to be beautiful and made from the best materials money can buy.
Depending on the context either approach can be correct. For some things being a detail oriented perfectionist is good. Things like a web framework or a programming language or an OS. But for most things, just being practical and finding a cheap and clever way to get to where you want to go will outperform most over engineering.
Current models won't write anything new; they are "just" great at matching, qualifying, and copying patterns. They bring a lot of value right now, but there is no creativity.
95% of the industry wasn't creating creative value, it was repetitive.
* auth + RBAC, known problem, just needs integration (see the sketch below)
* 3rd party integration, they have an API, known problem, just needs integration
* make webpage responsive, millions of CSS lines
* even video gaming, most engines are already written, just add your character and call a couple of APIs to move them in 3D space
That's why they bring a lot of value. Plus, new models and methods enable solutions that weren't available a decade ago.
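To make the first bullet concrete, the shape of the auth + RBAC problem is roughly this. A toy sketch with made-up names (User, require_role), not any particular framework's API:

    from dataclasses import dataclass, field
    from functools import wraps

    @dataclass
    class User:
        name: str
        roles: set = field(default_factory=set)

    def require_role(role):
        # Reject the call unless the acting user holds the given role.
        def decorator(fn):
            @wraps(fn)
            def wrapper(user, *args, **kwargs):
                if role not in user.roles:
                    raise PermissionError(f"{user.name} lacks role {role!r}")
                return fn(user, *args, **kwargs)
            return wrapper
        return decorator

    @require_role("admin")
    def delete_account(user, account_id):
        print(f"{user.name} deleted account {account_id}")

    delete_account(User("ada", {"admin"}), 42)  # ok
    delete_account(User("bob"), 43)             # raises PermissionError

The point being: nothing in there is creative, and every app needs some variant of it.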
I was talking to a coworker who really likes AI tooling, and it came up that they feel stronger at reading unfamiliar code than at writing code.
I wonder how much it comes down to that divide. I also wonder how true that is, or if they’re just more trusting that the function does what its name implies the way they think it should.
I suspect you, like me, feel more comfortable with code we’ve written than having to review totally foreign code. The rate limit is in the high level design, not in how fast I can throw code at a file.
It might be a difference in cognition, or maybe we just have a greater need to know precisely how something works instead of accepting a hand wavey “it appears to work, which is good enough”.
Tolkien's book is art; programs are supposed to do something.
Now, some programs may be considered art (e.g. code golf) or considered art by their creator. I consider my programs and code only the means to get the computer to do what I want, and there are also easy ways to ensure that they do what we want.
> Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.'
Is exactly what programs are. Not the minutiae of the language within.
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
Yeah, this was always the easy part.
I agree with your point. My concern is more about the tedious aspects. You could argue that tedium is part of what makes the craft valuable, and there's truth to that. But it comes down to trade-offs: what could I accomplish with that saved time, and would I get more value from those other pursuits?
If you're gonna take this tack, at least be honest with yourself. Does your boss get more value out of you? You aren't going to get a kickback from being more productive, but your boss sure will.
I honestly think the stuff AI is really good at is the stuff around the programming that keeps you from the actual programming.
Take a tool like Gradle. Bigger pain in the ass than using an actual cactus as a desk chair. It has a staggering rate of syntax and feature churn with every version upgrade, sprawling documentation that is clearly written by space aliens, and every problem is completely ungoogleable, as every single release does things differently and no advice stays valid for more than 25 minutes.
It's a comically torturous DevEx. You can literally spend days trying to get your code to compile again, and not a second of that time will be put toward anything productive. Sheer frustration. Just tears. Mad laughter. Rocking back and forth.
"Hey Claude, I've upgraded to this week's Gradle and now I'm getting this error I wasn't getting with last week's version, what could be going wrong?" makes all that go away in 10 minutes.
I'm glad to hear the Gradle experience hasn't changed in the decade since I started avoiding it.
I had this moment recently with implementing Facebook OAuth. I don't need to spend mental cycles figuring that out, doing the back and forth with their API, pulling my hair out at their docs, etc. I just want it to work so I can build my app. AI just did that part for me and I could move on.
Integrating auth code is probably a good example of code you want to understand, rather than just seeing that it appears to work.
I think it's still an open question if it's actually a net savings of time.
The absence of evidence is evidence in its own way. I don’t understand how there haven’t been more studies on this yet. The one from last year that showed AI made people think they were faster but were actually slower gets cited a lot, and I know that was a small study with older tools, but it’s amazing that that hasn’t been repeated. Or maybe it has and we don’t know because the results got buried.
One thing I’ve noticed is that effort may be saved but not as much time. The agent can certainly type faster than me but I have to sit there and watch it work and then check its work when done. There’s certainly some time savings but not what you think.
Another thing I've noticed is that using AI, I'm less likely to give existing code another look to see if there's already something in it that does what I need. It's so simple to get the AI to spin up a new class / method that gets close to what I want, so sometimes I end up "giving orders first, asking questions later" and only later realizing that I've duplicated functionality.
Isn't that what Tolkien did in his head? Write something, learn what he liked/didn't like then revise the words? Rinse/repeat. Same process here.
If Tolkien had not lived an entire life, fought in a war, been buddies with other authors, and also been a decent writer, the story doesn’t exist. And an LLM won’t come up with it.
An LLM isn’t coming up with the Eye of Sauron, or the entire backstory of the Ring, or Gollum, etc.
The LLM can’t know Tolkien had a whole universe built in his head that he worked for decades to get on to paper.
I’m so tired of this whole “an LLM just does what humans already do!” And then conflating that with “fuck all this LLM slop!”
Pain can mean tedium rather than intellectual challenge.
I really struggle to understand how people can find coding more tedious than prompting. To each their own I guess.
I can only speak for myself but for me, it's all about the syntax. I am terrible at recalling the exact name of all the functions in a library or parameters in an API, which really slows me down when writing code. I've also explored all kinds of programming languages in different paradigms, which makes it hard to recall the exact syntax of operators (is comparison '=' or '==' in this language? Comments are // or /*? How many parameters does this function take, and in what order...) or control structures. But I'm good at high level programming concepts, so it's easy to say what I want in technical language and let the LLM find the exact syntax and command names for me.
I guess if you specialise in maintaining a code base with a single language and a fixed set of libraries then it becomes easier to remember all the details, but for me it will always be less effort to just search the names for whatever tools I want to include in a program at any point.
I agree with a bunch of this (I'm almost exclusively doing python and bash; bash is the one I can never remember more than the basics of). I will give the caveat that I historically haven't made use of fancy IDEs with easy lookup of function names, so would semi-often be fixing "ugh I got the function name wrong" mistakes.
Similar to how you outlined multi-language vs specialist work, I wonder if "full stack" vs "niche" work quietly underlies some of the camps of "I just trust the AI" vs "it's not saving me any time".
Some code is fun and some sucks?
There's a joke that's not entirely a joke that the job of a Google SWE is converting from one protobuf to another. That's generally not very fun code, IMO (which may differ from your opinion and that's why they're opinions!). Otoh, figuring out and writing some interesting logic catches my brain in a way that dealing with formats and interoperability stuff doesn't usually.
We're all different, but we all probably have things we like more than others.
I mean, I agree if it's really just "machine translate this code to use the approved method of doing this thing". That seems like a perfect use case for AI. Though one would think Google would already have extensive code mod infrastructure for that kind of thing.
But those aren't the stories you hear about with people coding with AI, which is what prompted my response.
They do and I think a lot of that is LLM'd these days, though that's just what I hear third-hand.
I do agree that this:
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
seems more than a little overblown. But I do sympathize with not feeling motivated to write a lot of glue and boilerplate, and that "meh" often derails me on personal projects where it's just my internal motivation competing against my internal de-motivation. LLMs have been really good there, especially since many of those are cases where only I will run or deal with the code and it won't be exposed to the innertubes.
Maybe the author can't touch type, but that's a separate problem with its own solution. :)
Claude Opus 4.6:
“He’s a liar and a sneak, Mr. Frodo, and I’ll say it plain — he’d slit our throats in our sleep if he thought he could get away with it,” Sam spat, glaring at the hunched figure scrabbling over the stones ahead. “Every word out of that foul mouth is poison dressed up as helpfulness, and I’m sick of pretending otherwise.” Frodo stopped walking and turned sharply, his eyes flashing with an intensity that made Sam take half a step back. “Enough, Sam. I won’t hear it again. I have decided. Sméagol is our guide and he is under my protection — that is the end of it.” Sam’s face reddened. “Protection! You’re protecting the very thing that wants to destroy you! He doesn’t care about you, Mr. Frodo. You’re nothing to him but the hand that carries what he wants!” But Frodo’s expression had hardened into something almost unrecognizable, a cold certainty that brooked no argument. “You don’t understand what this Ring does to a soul, Sam. You can’t understand it. I feel it every moment of every day, and if I say there is still something worth saving in that creature, then you will trust my judgment or you will walk behind me in silence. Those are your choices.” Sam opened his mouth, then closed it, stung as if he’d been struck. He fell back a pace, blinking hard, and said nothing more — though the look he fixed on Gollum’s retreating back was one of pure, undisguised loathing.
Claude already knows who the characters Frodo, Sam, and Gollum are, what their respective character traits are, and how they interacted with each other. This isn't the same as writing something new.
Please forgive me for being blunt, I want to emphasize how much this strikes me.
Your post feels like the last generation lamenting the new generation. Why can't we just use radios and slide rules?
If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?
There's so much plumbing and refactoring bullshit in writing code. I've written years of five nines high SLA code that moves billions of dollars daily. I've had my excitement setting up dev tools and configuring vim a million ways. I want starships now.
I want to see the future unfold during my career, not just have it be incrementalism until I retire.
I want robots walking around in my house, doing my chores. I want a holodeck. I want to be able to make art and music and movies and games. I will not be content with twenty more years of cellphone upgrades.
God, just the thought of another ten years of the same is killing me. It's so fucking mundane.
The future is exciting.
Bring it.
I think my take on the matter comes from being a games developer. I work on a lot of code for which agentic programming is less than ideal - code which solves novel problems and sometimes requires a lot of precise performance tuning, and/or often has other architectural constraints.
I don't see agentic programming coming to take my lunch any time soon.
What I do see it threatening is repetitive quasi carbon copy development work of the kind you've mentioned - like building web applications.
Nothing wrong with using these tools to deal with that, but I do think that a lot of the folks from those domains lack experience with heavier work, and falsely extrapolate the impact it's having within their domain to be applicable across the board.
> Your post feels like the last generation lamenting the new generation.
> The future is exciting.
Not the GP, but I honestly wanted to be excited about LLMs. And they do have good uses. But you quickly start to see the cracks in them, and they just aren't nearly as exciting as I thought they'd be. And a lot of the coding workflows people are using just don't seem that productive or valuable to me. AI just isn't solving the hard problems in software development. Maybe it will some day.
> Your post feels like the last generation lamenting the new generation [...] There's so much plumbing and refactoring bullshit in writing code [...] I've had my excitement
I don't read the OP as saying that: to me they're saying you're still going to have plumbing and bullshit, it's just your plumbing and bullshit is now going to be in prompt engineering and/or specifications, rather than the code itself.
> I want to be able to make art and music and movies and games.
Then make them. What's stopping you?
I want to live forever and set foot on distant planets in other galaxies.
Got a prescription for that too?
I've made films for fifteen years. I hate the process.
Every one of my friends and colleagues that went to film school found out quickly that their dreams would wither and die on the vine due to the pyramid nature of studio capital allocation and expenditure. Not a lot of high autonomy in that world. Much of it comes with nepotism.
There are so many things I wish to do with technology that I can't because of how much time and effort and energy and money are required.
I wish I could magic together a P2P protocol that replaced centralized social media. I wish I could build a completely open source GPU driver stack. I wish I could make Rust compile faster or create an open alternative to AWS or GCP. I wish for so many things, but I'm not Fabrice Bellard.
I don't want to constrain people to the shitty status quo. Because the status quo is shitty. I want the next generation to have better than the bullshit we put up with. If they have to suffer like we suffered, we failed.
I want the future to climb out of the pit we're in and touch the stars.
Computing technology always becomes cheaper and more powerful over time. But it's a slow process. The rate of improvement for LLMs is already decreasing. You will die of old age before the technology that you seem to be looking for arrives.
> If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?
To go off the deep end… I actually think this LLM assistant stuff is a precondition to space exploration. I can see the need for an offline compressed corpus of all human knowledge that can do tasks and augment the humans aboard the ship. You’ll need it because the latency back to Earth is a killer even for a “simple” interplanetary trip to Mars—that is 4 to 24 minutes round trip! Hell, even the Moon has enough latency to be annoying.
Granted, right now the hardware requirements and rapid evolution make it infeasible to really “install it” on some beefcake system, but I’m almost positive the general form of Moore's law will kick in and we’ll have SOTA models on our phones in no time. These things will be pervasive, and we will rely on them heavily while out in space and on other planets for every conceivable random task.
They’ll have to function reliably offline (no web search), which means they probably need to be absolutely massive models. We’ll have to find ways to selectively compress knowledge. For example, we might allocate more of the model weights to STEM topics and perhaps less to, I dunno, the fall of the Roman Empire, Greek gods, or the career trajectory of Pauly Shore. But perhaps not, because who knows: maybe a deep familiarity with Bio-Dome is what saves the colony on Kepler-452b.
Burn the planet to the ground because your life is boring. Extremely mature stance you've got there
This is 1960's era anti-nuclear all over again.
People on Reddit posting AI art are getting death threats. It's absurd.
The author seems to mistake having to update Node.js for a security patch to be a curse rather than a blessing.
The alternative is that your bespoke solution has undiscovered security vulnerabilities, probably no security community, and no easy fix for either of those.
You get the privilege of patching Node.js.
Similarly, as a hiring manager, you can hire a React developer. You can't hire a "proprietary AI coded integrated project" developer.
This piece seems to say more about React than it says about a general shift in software engineering.
Don't like React? Easiest it's ever been not to use it.
Don't like libraries, abstractions and code reuse in general? Avoid them at your peril. You will quickly reach the frontier of your domain knowledge and resourcing, and start producing bespoke square wheels without a maintenance plan.
Yeah, I really don't get it. So instead of using someone else's framework, you're using an AI to write a (probably inferior and less thoroughly tested and considered) framework. And your robot employee is probably pulling a bunch of stuff (not quite verbatim, of course) from existing relevant open source frameworks anyway. Big whoop?
It's not really easy to not use React, since it was hyped to no end and now is entrenched. Try to get a frontend job without knowing React.
That's a different complaint.
It's quite easy to make things without React; it's not our fault that business leaders don't let devs choose how to solve problems, but hey, who am I to complain? React projects allow me to pay my bills! I've never seen a good "React" project yet, and I've been working professionally with React since before class components were a thing.
Every React codebase has its own unique failures due to the npm ecosystem, and this will never change. In fact, the best way to anticipate what kind of patterns are in a given React project is to look at its package.json.
I fail to see the obvious wisdom in having AI re-implement chunks of existing frameworks without the real-world battle testing, without the supporting ecosystem, and without the common parlance and patterns -- all of which are huge wins if you ever expand development beyond a single person.
It's worth repeating too, that not everything needs to be a react project. I understand the author enjoys the "vibe", but that doesn't make it a ground truth. AI can be a great accelerator, but we should be very cognizant of what we abdicate to it.
In fact I would argue that the post reads as though the developer is used to mostly working alone, and often choosing the wrong tool for the job. It certainly doesn't support the claim of the title
> re-implement chunks of existing frameworks without the real-world battle testing
The trend of copying code from StackOverflow has just evolved to the AI era now.
I also expect people will attempt complete rewrites of systems without fully understanding the implications or putting safeguards in place.
AI simply becomes another tool that is misused, like many others, by inexperienced developers.
I feel like nothing has changed on the human side of this equation.
AI has a lot of "leaders" currently working through a somewhat ignorant discovery of existing domain knowledge (ask me how being a designer has felt in the last 15 years of UX Leadership™ slowly realizing there's depth to the craft).
In recent months, we have MCPs, helping lots of people realize that huh, when services have usable APIs, you can connect them together!
In the current case: AI can do the tedious things for me -> Huh, discarding vast dependency trees (because I previously wanted the tedious stuff done for me too) lessens my risk surface!
They really are discovered truths, but no one's forcing them to come with an understanding of the tradeoffs happening.
> the supporting ecosystem, ... the common parlance and patterns
Which are often the top reason to use a framework at all.
I could re-implement a web framework in Python if I needed to, but then I would lose all the testing, documentation, and middleware, and worst of all, the next person would have to show up and relearn everything I did and understand my choices.
If the author is this Alain di Chiappari, he works for a telehealth and psychology site:
https://theorg.com/org/unobravo-telehealth-psychology-servic...
It is interesting how many telehealth and crypto people are promoting AI (David Sacks being the finest of all specimens).
The article itself is of course an AI assisted mashup of all propaganda talking points. People using Unobravo should take note.
I would think that frameworks make more sense than ever with LLMs.
The benefits of frameworks were always having something well tested that you knew would do the job, and that after a bit of use you'd be familiar with, and the same still stands.
LLMs still aren't AGI, and they learn by example. The reason they are decent at writing React code is because they were trained on a lot of it, and they are going to be better at generating based on what they were trained on, than reinventing the wheel.
As the human-in-the-loop, having the LLM generate code for a framework you are familiar with (or at least other people are familiar with) also lets you step in and fix bugs if necessary.
If we get to a point, post-AGI, where we accept AGI writing fully custom code for everything (but why would it - if it has human-level intelligence, wouldn't it see the value in learning and using well-debugged and optimized frameworks?!), then we will have mostly lost control of the process.
It’s fun to ask the models their input. I was working on diagrams and was sure Claude would want some python / js framework to handle layout and nodes and connections. It said “honestly I find it easiest to just write the svg code directly”.
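For what it's worth, it has a point for simple diagrams; hand-rolled SVG really is very little code. A toy sketch (coordinates hard-coded, no layout engine):

    # Emit a two-node diagram as raw SVG, no framework involved.
    def node(x, y, label):
        return (f'<rect x="{x}" y="{y}" width="120" height="40" rx="6" '
                f'fill="#eef" stroke="#336"/>'
                f'<text x="{x + 60}" y="{y + 25}" text-anchor="middle">{label}</text>')

    def edge(x1, y1, x2, y2):
        return f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="#336"/>'

    svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="400" height="200">'
           + node(20, 20, "parser") + node(240, 120, "emitter")
           + edge(140, 40, 240, 140) + '</svg>')

    with open("diagram.svg", "w") as f:
        f.write(svg)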
That is fun, but it doesn’t mean the model finds it easier or will actually work better that way; it just means that in its training data many people said something like “honestly I find it easiest to just write the svg code directly” in response to similar questions.
> Since [a few months ago], things have dramatically changed...
It's not like we haven't heard that one before. Things have changed, but it's been a steady march. The sudden magic shift, at a different point for everyone, is in the individual mind.
Regarding the epiphany... since people have been heavily overusing frameworks -- making their projects more complex, more brittle, more disorganized, more difficult to maintain -- for non-technical reasons, people aren't going to stop just because LLMs make them less necessary; the overuse wasn't necessary in the first place.
Perhaps unnecessary framework usage will drop, though, as the new hype replaces the old hype. But projects won't be better designed, better organized, or better thought-through.
My biggest concern with AI is that I'm not sure how a software engineer can build up this sort of high-level intuition:
> I still have to deeply think about every important aspect of what I want to build. The architecture, the trade offs, the product decisions, the edge cases that will bite you at 3am.
Without a significant development period of this:
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
A professional mathematician should use every computer aid at their disposal if it's appropriate. But a freshman math major who isn't spending most of their time with just a notebook or chalk board is probably getting in the way of their own progress.
Granted, this was already an issue, to a lesser extent, with the frameworks that the author scorns. It's orders of magnitude worse with generative AI.
I'm not sure. I don't know about deep expertise and mastery, but I can attest that my fluency skyrocketed as a result of AI in several languages, simply because the friction involved in writing them went down by orders of magnitude. So I am writing way more code now in domains that I previously avoided, and I've noticed that I am now much more capable there even without the AI.
What I don't know is what state I'd be in right now, if I'd had AI from the start. There are definitely a ton of brain circuits I wouldn't have right now.
Counterpoint: I've actually noticed them holding me back. I have 20 years of intuition built up now for what is hard and what is easy, and most of it became wrong overnight, and is now limiting me for no real reason.
The hardest part to staying current isn't learning, but unlearning. You must first empty your cup, and all that.
I have been using Cursor w/ Opus 4.x to do extensive embedded development work over the past six months in particular. My own take on this topic is that for all of the chatter about LLMs in software engineering, I think a lot of folks are missing the opportunity to pull back and talk about LLMs in the context of engineering writ large. [I'm not capitalizing engineering because I'm using the HN lens of product development, not building bridges or nuclear reactors.]
LLMs have been a critical tool not just in my application but in my circuit design, enclosure design (CAD, CNC) and I am the conductor where these three worlds meet. The degree to which LLMs can help with EE is extraordinary.
A few weeks ago I brought up a new IPS display panel that I've had custom made for my next product. It's a variant of the ST7789. I gave Opus 4.5 the registers and it produced wrapper functions that I could pass to LVGL in a few minutes, requiring three prompts.
This is just one of countless examples where I've basically stopped using libraries for anything that isn't LVGL, TinyUSB, compression, or cryptography. The purpose-built wrappers Opus can make are much smaller, often a bit faster, and perhaps most significantly, not encumbered with the mental model of another developer's assumptions about how people should use their library. Instead of a kitchen-sink API, I/we/it created concise functions that map 1:1 to what I need them to do.
Where I agree with the author of this post is that I feel like perhaps it's time for a lot of libraries to sunset. I don't think replacing frameworks is the correct abstraction at all but I do think that it no longer makes sense to spend time integrating libraries when what you really need are purpose-built functions that do exactly what you want instead of what some library author thought you should want.
Using a framework gives you some assurance that the underlying methods are well designed. If you don't know how to spot issues in auth design, then using an LLM instead of a library is a bad idea.
I agree, though, that there are many non-critical libraries that could be replaced with helper methods. It also coincides with more awareness of supply-chain risks.
I think this is a subtle but important point.
If you use a well regarded library, you can trust that most things in it were done with intention. If an expectation is violated, that's a learning opportunity.
With the AI firehose, you can't really treat it the same way. Bad patterns don't exactly stand out.
Maybe it'll be fine but I still expect to see a lot of code bases saddled with garbage for years to come.
I disagree about ditching abstractions. Programmatic abstractions aren't just a way to reduce the amount of code you write, they're also a common language to understand large systems more easily, and a way to make sure systems that get built are predictable.
I share that notion, but I think the abstractions are the foundational tech stack we have had for decades, like web standards or even bash. You need constraints, but not the unnecessary complexity that comes with many modern tech stacks (React/Next) that were built around SV's hyper-scalability monopoly mentality. Reach for simple tools if the task is simple: KISS.
Not only that, but a way to factor systems so you can make changes to them without spooky action at a distance. Of course, you have to put in a lot of effort to make that happen, but that's why it doesn't seem to me that LLM's are solving the hard part of software development in the first place.
So the suggestion here is that instead of using battle tested libraries/frameworks, everyone should now build their own versions, each with an unique set of silent bugs?
> Why do you ever need, for most of the use cases you can think of, a useless, expensive, flawed, often vulnerable framework
Like the vibe coded solution won't be flawed and vulnerable
Exactly, AI will finally put a stop to the "do not implement your own crypto" fad /s
https://security.stackexchange.com/questions/209652/why-is-i...
Even with a perfect coding agent, we code to discover what correct even is.
Team decides on vague requirements, then you actually have to implement something. Well that 'implementing' means iterating until you discover the correct thing. Usually in lots of finicky decisions.
Sometimes you might not care about those decisions, so you one-shot one big change. But in my experience, in the day-to-day on a production app you can 100% write all the code with Claude, yet you're still translating high-level requirements into "low"-level decisions.
But in the end it's nice not to care about the code-monkey work: going all over a codebase, adding a lot of trivial changes by hand, etc.
> In my mind, besides the self declared objectives, frameworks solve three problems .. “Simplification” .. Automation .. Labour cost.
I think you are missing Consistency, unless you don't count frameworks that you write yourself as frameworks? There are 100 different ways of solving the same problem, and using a framework, off the shelf or homemade, creates consistency in the way problems are solved.
This seems even more important with AI, since you lose context on each task, so you need it to live within guardrails and best practices or it will make spaghetti.
> We can finally get rid of all that middle work. That adapting layer of garbage we blindly accepted during these years. A huge amount of frameworks and libraries and tooling that has completely polluted software engineering, especially in web, mobile and desktop development. Layers upon layers of abstractions that abstract nothing meaningful, that solve problems we shouldn’t have had in the first place, that create ten new problems for every one they claim to fix.
I disagree. At least for a little while until models improve to truly superhuman reasoning*, frameworks and libraries providing abstractions are more valuable than ever. The risk/reward for custom work vs library has just changed in unforeseen ways that are orthogonal to time and effort spent.
Not only do LLMs make customization of forks and the resulting maintenance a lot easier, but the abstractions are now the most valuable place for humans to work because it creates a solid foundation for LLMs to build on. By building abstractions that we validate as engineers, we’re encoding human in the loop input without the end-developer having to constantly hand hold the agent.
What we need now is better abstractions for building verification/test suites and linting so that agents can start to automatically self improve their harness. Skills/MCP/tools in general have had the highest impact short of model improvements and there’s so much more work to be done there.
* whether this requires full AGI or not, I don’t know.
I have to tell Claude specifically to use plain HTML, CSS, and JS, else it goes off building React.
There was a time around 2016 where you weren't allowed to write a React application without also writing a "Getting Started with React" blog post. Having trained on all of that, the AI probably thinks React is web development.
Tell Claude to build a functional website using plain HTML and CSS and no frameworks and it'll do it in a second. Now try that with a junior dev.
I guess juniors are different these days. In my generation a lot of people's first contact with code was doing basic (html, css, bits of js) web development. That was how I got started at like 12 or 13.
Indeed, this has been one of the first things I've noticed
A few months ago I did exactly this. But over time I threw away all the generated JS, CSS, and HTML. It was an unmaintainable mess. I finally chose Svelte and stuck with it. Now I have a codebase which makes sense to me.
I did ask AI to generate a landing page. That gave me the initial headers, footers, and styles that I used for my webapp, but I threw away everything else.
Frameworks are the reason AI can learn patterns and repeat them; without frameworks you will be burning credits just to redo things that have already been optimized and completed. Unless you are an Anthropic investor, that's not the way to improve your coding.
Frameworks are stable by design; generated code isn't. Why did people still have to learn math after the calculator was invented?
I feel the opposite. Frameworks and standardization becomes even more important when using AI.
Intellectual surrender is exactly the risk I fear with coding agents. Will the next generation of software ‘developers’ still know how to code? Seems coding agents are in a way taking us further from understanding the machine, just like frameworks have in the past.
Software has always been about abstraction. This one, in a way, is the ultimate abstraction. However it turns out that LLMs are a pretty powerful learning tool. One just needs the discipline to use it.
> But the true revolution happened clearly last year
Oh, that seems like a good bit of time!
> and since December 2025
So, like... 1 or 2 months ago? This is like saying “over half of people who tried our product loved it - all 51% of them!”. This article is pushing hype, and is mistaking Anthropic's pre-IPO marketing drive for actual change.
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
I constantly see this and think I must be operating in a different world. This never took significant amounts of time. Are people using react to make text blogs or something?
When you choose the right framework it saves you enormous amounts of time. Sounds like the author has trouble separating hype from fact. Pick the right framework and your LLM will work better, too.
The pendulum swing described here is real but I think the underlying issue is subtler than "AI vs. no AI."
The actual problem most teams have isn't writing code — it's understanding what the code they already depend on is doing. You can vibe-code a whole app in a weekend, but when one of your 200 transitive dependencies ships a breaking change in a patch release, no amount of AI is going to help you debug why your auth flow suddenly broke.
The skill that's actually becoming more valuable isn't "writing code from scratch" — it's maintaining awareness of the ecosystem you're building on. Knowing when Node ships a security fix that affects your HTTP handling, or when a React minor changes the reconciliation behavior, or when Postgres deprecates a function you use in 50 queries.
That's the boring, unsexy part of engineering that AI doesn't solve and most developers skip until something catches fire.
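You can script a crude version of that awareness, though. A sketch that diffs what's installed against pinned versions (assumes plain name==version lines in requirements.txt; purely illustrative):

    # Warn when installed packages drift from the pinned versions.
    from importlib.metadata import version, PackageNotFoundError

    def check_pins(path="requirements.txt"):
        for line in open(path):
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, _, pinned = line.partition("==")
            try:
                installed = version(name)
            except PackageNotFoundError:
                print(f"{name}: pinned {pinned}, not installed")
                continue
            if installed != pinned:
                print(f"{name}: pinned {pinned}, installed {installed}")

    check_pins()

It won't tell you a patch release broke your auth flow, but it at least tells you the ground moved.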
> no amount of AI is going to help you debug why your auth flow suddenly broke.
What? Coding agents are very capable at helping fix bugs in specific domains. Your examples are like, the exact place where AI can add value.
You do an update, things randomly break: tell Claude to figure it out and it can go look up the breaking changes in the new versions, read your code and tell you what happened and fix it for you.
> That adapting layer of garbage we blindly accepted during these years.
Wouldn't everything that agents produce be better described as a "layer of garbage?"
It never left, welcome back to software engineering though!
Thank you, I'm glad to be back!
That took the strangest turn. It started with empowerment to do much more (and that I really agree with) — to then use it to... build everything from scratch? What? Why?
What a framework gives me is mostly other people having done precisely the architectural work that is a prerequisite to my actual work. It's fantastic, for the same reason that automatic coding is. I want to solve unsolved problems asap.
I am so confused by the disconnect that I feel like I must be missing something.
Strange how many people are comparing code to art. Software engineering has never been about the code written, it’s about solving problems with software. With AI we can solve more problems with software. I have been writing code for 25 years, I love using AI. It allows me to get to the point faster.
The author is right: eliminating all this framework cruft will be a boon for building great software. I was a skeptic, but it seems obvious now that it's largely going to be an improvement.
It's actually so over
Pretty much completely disagree with the OP. Software Engineering never left, maybe the author moved away from it instead.
> Stop wrapping broken legs in silk. Start building things that are yours.
This, however, is deeply wrong for me. Anyone who writes and reviews code regularly knows very well that reading code doesn't lead to the same deep, intuitive understanding of the codebase as writing that same code.
So, no, with AI you are not building things which are yours. You might call them yours, but you lose deeper understanding of what you built.
If a framework, best a minimal one built on web standards, e.g. Svelte or https://nuejs.org/.
You're right, clearly I've tried to be a bit provocative to pass the message, but I'm not religious in this sense. Minimal frameworks that really solve a problem cleanly and are adopted with intention are welcome.
This is about green field development which is relatively rare. Much of the time the starting point is a bunch of code using React or maybe just a lump of PHP. Business logic ends up plunked down all over the place and LLMs tend to make a huge mess with all this unless kept on a tight leash.
I'm glad this guy is doing well, but I'm dreading the amount of work being created for people who can reverse engineer the mountains of hallucinated bullshit that he and others are now actively producing.
And if the frameworks aren't useful then maybe work up the chain and ditch compilers next?
Mindblowing observations.
> The three problems frameworks solve (or claim to) [..] Simplification [..] Automation [..] Labour cost
and he misses _the most important problem frameworks solve_
which is correctness
when it comes to programming, most things are far more complicated in subtle, annoying ways than they seem to be
and worse, while you often can "cut away" these corner cases, this also tends to lead to obscure, very hard to find bugs, including security issues, which have a tendency to pop up way later when you haven't touched the code for a while and don't remember which corner you cut (and with AI you likely never knew which corner was cut)
like just very recently some very widely used Python libraries had some pretty bad bugs wrt. "basic" HTTP/web topics like HTTP/multipart request smuggling, DoS from "decompression bombs", and similar
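the decompression bomb case is a good example of a corner you don't know you're cutting until a library author cuts it for you. a minimal guard looks something like this (sketch using zlib's streaming API; the 10 MiB limit is arbitrary):

    import zlib

    MAX_OUT = 10 * 1024 * 1024  # refuse to inflate past 10 MiB

    def safe_decompress(data):
        d = zlib.decompressobj()
        out = d.decompress(data, MAX_OUT)  # cap the output size
        if d.unconsumed_tail:              # more output pending: likely a bomb
            raise ValueError("decompressed payload exceeds limit")
        return out

naive code just calls zlib.decompress(data) and eats the whole bomb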
and while this might look like a counter-argument, it actually speaks for strict code reuse even for simple topics. Because now these bugs have been fixed! And that is a very common pattern for frameworks/libraries: they start out with bugs, sadly often the same repeated common bugs known from other frameworks, and then over time things get ironed out.
But with AI there is an issue: a lot of the data it's trained on is code _which gets many of these "typical" issues wrong_.
And it's non-deterministic, and good at "hiding" bugs, especially the kind of bugs which are prone to pass human review anyway.
So you _really_ would want to maximize use of frameworks and libraries when using AI, as that removes a large part of the AI reliability issues.
But what does change is that there is much less reason to give frameworks/libraries "neat compact APIs" (which is a common thing people spend A LOT of time on, and which is prone to be a source of issues, as people insist on making things "look simpler" than they are and in turn accidentally make them not just simpler but outright wrong, or prevent use cases you might need).
Now, depending on your definition of framework, you could argue that AI removes boilerplate issues in ways which allow effectively replacing all frameworks with libraries.
But you still need to review code, especially AI-generated code. To some degree the old saying that code is far more read than written is even more true with AI (as most isn't "written" (by a human) anymore). Now you could just not review AI code, but that can easily count as gross negligence, and in some jurisdictions it's not (fully) possible to opt out of damages from gross negligence no matter what you put in TOS or other contracts. I.e., I can't recommend such negligent actions.
So IMHO there is still use for some kind of frameworks, even if what you want from them will likely start to differ and many of them can be partially or fully "librarified".
There is yet another issue: end-users are fickle, fashion-minded people, and will literally refuse to use an application if it does not look like the latest React style. They do not want to be seen using "old" software, like wearing the wrong outfit or some such nonsense. This is real, and baffling.
> Layers upon layers of abstractions that abstract nothing meaningful, that solve problems we shouldn’t have had in the first place, that create ten new problems for every one they claim to fix.
LLM generated code is the ultimate abstraction. A mess of code with no trusted origin that nobody has ever understood. It's worse than even the worst maintained libraries and frameworks in every way.
"Software engineers are scared of designing things themselves."
what?
Read the following paragraph. The author isn't wrong.
> I want to build X
> "Hey Claude, how would you make X?"
> Here's how I'd build X... [Plan mode on]
In big corporations that's how it is. Developers are told to only implement what is in the specs, and if they have any objection, they need to raise it with the PM, who will then forward it to the system architect, etc.
So that creates the notion that design is something out of reach. I now meet developers who cannot develop anything on their own if it doesn't have a ticket that explains everything and hand-holds them. If something is not clear, they are stuck and need the help of senior engineers.
With a line like that I wouldn't trust anything this guy has to say.
Thank you for the constructive feedback :)
Nah. Nothing has changed. Offloading the work to an agent and making it a productivity gain is exactly the same as using a framework: it's a black-box portion of your system, written by someone else, that you don't understand.
Unless you are quite literally spending almost the same amount of time you'd spend yourself to deeply understand each component, at which point, you could write it yourself anyway, nothing has changed when it comes to the dynamics of actually authoring systems.
There are exceptions, but generally speaking untempered enthusiasm for agents correlates pretty well with lack of understanding about what engineering software actually entails (it's about relational and conceptual comprehension, communication, developing shared knowledge, and modeling, not about writing code or using particular frameworks!)
EDIT: And to be clear, the danger of "agentizing" software engineering is precisely that it promotes a tendency to obscure information about the system, turn engineers into personal LLM silos, and generally discard all the second-order concerns that make for good systems: resilience, modifiability, intelligibility, performance.
I feel the same way, but I’m not a traditional software engineer. Just an old-school Webmaster who’s been trying to keep up with things, but I’ve had to hire developers all along.
I’m an ideas guy, and in the past month or so my eyes have also fully opened to what’s coming.
But there’s a big caveat. While the actual grunt work and development is going away, there’s no telling when the software engineering part is going to go away as well. Even the ideas guy part. What happens when a simple prompt from someone who doesn’t even know what they’re doing results in an app that you couldn’t have done as well with whatever software engineering skills you have?