> The engineers refusing to try aren’t protecting themselves; quite the opposite, they’re falling behind. The gap is widening between engineers who’ve integrated these tools and engineers who haven’t.
For me, however, there is one issue: how can I utilize AI without degenerating my own abilities? I use AI sparingly because, to be honest, every time I use AI, I feel like I'm getting a little dumber.
I fear that excessive use of AI will lead to the loss of important skills on the one hand and create dependencies on the other.
Who benefits if we end up with a generation of software developers who can no longer program without AI? Programming is not just writing code, but a process of organizing, understanding, and analyzing.
What I want above all is AI that helps me become better at my job and continue to build skills and knowledge, rather than making me dependent on it.
If we see ourselves less as programmers and more as software builders, then it doesn’t really matter if our programming skills atrophy in the process of adopting this tool, because it lets us build at a higher abstraction level, kind of like how a PM does it. This up-leveling in abstraction has happened over and over in software engineering as our tooling improves over time. I’m sure some excellent software engineers here couldn’t write assembly code to save their lives, but are wildly productive and respected for what they do - building excellent software.
That said, as long as there’s the potential for AI to hallucinate, we’ll always need to be vigilant - for that reason I would want to keep my programming skills sharp.
AI assisted software building by day, artisanal coder by night perhaps.
> how can I utilize AI without degenerating my own abilities?
Couldn't the same statement, to some extent, be applied to using a sorting lib instead of writing your own sorting algorithm? Or how about using a language like python instead of manually handling memory allocation and garbage collection in C?
> What I want above all is AI that helps me become better at my job and continue to build skills and knowledge
So far, in my experience, the quality of what AI outputs is directly related to the quality of the input. I've seen AI projects made by junior devs with incredibly messy and confusing architecture, despite them using the same language and LLM model that I use. The main difference? My AI work was based on the patterns and architecture that I designed thanks to my knowledge, which also happens to ensure that the AI produces less buggy software.
>For me, however, there is one issue: how can I utilize AI without degenerating my own abilities?
My cynical view is you can't, and that's the point. How many times before have we seen the pattern of "company operates at staggering losses while eliminating competition or becoming entrenched in enough people's lives, and then clamps down to make massive profits"?
Do you save time by using a calculator / spreadsheet, or do you try to do all calculations in your head because your ability to do quick calculations degrades the more you rely on tools to do it?
I'm not too worried about degrading abilities since my fundamentals are sound and if I get rusty due to lack of practice, I'm only a prompt away from asking my expert assistant to throw down some knowledge to bring me back up to speed.
Whilst my hands-on programming has decreased, the variety of software I create has increased. I used to avoid writing complex automation scripts in bash because I kept getting blocked trying to remember its archaic syntax, so I'd typically use bun/node for complex scripts, but with AI I've switched back to writing most of my scripts in bash (it's surprising what's possible in bash), and have automated a lot more of my manual workflows since it's so easy to do.
I also avoided Python because the lack of typing and API discovery slowed me down a lot, but with AI autocomplete, whenever I need to know how to do something I'll just write a method stub with comments and AI will complete it for me. I'm now spending lots of time writing Python to create AI Tools and Agents, ComfyUI Custom Nodes, Image and Audio Classifiers, PIL/ffmpeg transformations, etc. Things I'd never have considered before AI.
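To give a concrete, made-up example of the kind of stub I mean - the signature and docstring are all I write, and the body is the sort of thing the completion fills in:

    from PIL import Image

    def make_thumbnail(src_path: str, dst_path: str, max_size: int = 256) -> None:
        """Load an image, shrink it to fit within max_size x max_size, save as JPEG."""
        # Everything below is roughly what the AI completion fills in.
        img = Image.open(src_path)
        img.thumbnail((max_size, max_size))
        img.convert("RGB").save(dst_path, "JPEG")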
I also don't worry about its effects as I view it as inevitable, with the pendulum having swung towards code now being dispensable/cheap to create, what's more important is velocity and being able to execute your ideas quickly, for me that's using AI where I can.
You can’t and that’s the new normal. We’re probably the only generation which was given an opportunity to get properly good at coding. No such luxury will be available in a few years optimistically; pessimistically it’s been taken away with GPT 5.2 and Opus 4.5.
If that's the case (and I'm not convinced it is), shouldn't retaining that skill be the priority for anyone who has already acquired it? I've yet to see any evidence AI can turn someone who can't code into a substitute for someone who can. If the supply of that skill is going to dry up, surely it will only become more valuable. If using AI erodes it, the logical thing would be not to use AI.
> If that's the case [...], shouldn't retaining that skill be the priority for anyone who has already acquired it?
Indeed I believe that, but in my experience these skills get more and more useless in the job market. In other words: retaining such skills (e.g. low-level coding) is an intensively practised hobby for such people that is (currently) of "no use" in the job market.
That's the correct diagnosis IMHO, but getting good at software engineering is ~3 years of serious studying and ~5-10 years of serious work, and that's after you've learned to code, which is easier for some and more difficult for others.
Compare the ROI of that to being able to get roughly the software you need in a few hours of prompting; it's a new paradigm, progress is (still) exponential, and we don't know where exactly things will settle.
Experts will get scarce and very sought after, but once they start to retire in 10-20-30 years... either dark ages or AI overlords await us.
i think cs students should force themselves to learn the real thing and write the code themselves, at least for their assignments. i have seen that a lot of recent cs grads who have had gpt for most of their cs lives basically cannot write proper code, with or without ai.
This is part of the learning curve. When you vibe code you produce something that is as if someone else wrote it. It’s important to learn when that’s appropriate versus using it in a more limited way or not at all.
You can always ask it to nudge you in the right direction instead of giving the solution right away. I suspect this way of using it is not very popular though.
This is not a new problem I think. How do you use Google, translator, (even dictionaries!), etc without "degenerating" your own abilities?
If you're not careful and always rely on them as a crutch, they'll remain just that; without actually "incrementing" you.
I think this is a very good question. How should we actually be using our tools such that we're not degenerating, but growing instead?
My answer is: use AI exactly for the tasks that you, as a tech lead on a project, would be ok delegating to someone else. I.e. you still own the project and probably want to devote your attention to all of the aspects that you HAVE to be on top of, but there are probably a lot of tasks where you have a clear definition of the task and its boundaries, and you should be ok to delegate and then review.
This gets particularly tricky when the task requires a competency that you yourself lack. But here too the question is - would you be willing to delegate it to another human whom you don't fully trust (e.g. a contractor)? The answer for me is in many cases "yes, but I need to learn about this enough so that I can evaluate their work" - so that's what I do, I learn what I need to know at the level of the tech lead managing them, but not at the level of the expert implementing it.
As humans we have developed tools to ease our physical needs (we don’t need to run, walk or lift things), and now we have a tool that thinks and solves problems for us.
> how can I utilize AI without degenerating my own abilities?
Personally I think my skill lies in solving the problem by designing and implementing the solution, not in how I code day-to-day. After you write the 100th getter/setter you're not really adding value, you're just performing a chore because of language/programming patterns.
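To be clear about the kind of chore I mean, here's a made-up Python example where the getter/setter pair adds nothing over a plain attribute:

    class Account:
        def __init__(self, balance: float = 0.0) -> None:
            self._balance = balance

        @property
        def balance(self) -> float:               # getter: pure ceremony
            return self._balance

        @balance.setter
        def balance(self, value: float) -> None:  # setter: more of the same
            self._balance = value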
Using AI and being productive with it is an ability and I can use my time more efficiently than if I were not to use it. I'm a systems engineer and have done some coding in various languages, can read pretty much anything, but am nowhere near mastery in any of the languages I like.
Setting up a project, setting up all the tools and boilerplate, writing the main() function, etc are all tasks that if you're not 100% into the language take some searching and time to fiddle. With AI it's a 2-line prompt.
Introducing plumbing for yet another feature is another chore: search for the right libraries/packages, add dependencies, learn to use the deps, create a bunch of files, sketch the structs/classes, sketch the methods, but not everything is perfectly clear yet, so the first iteration is "add a bunch of stuff, get a ton of compiler warnings, and then refine the resulting mess". With AI it's a small paragraph of text describing what I want and how I'd like it done, asking for a plan, and then simply saying "yes" if it makes sense. Then wait 5-15m. Meanwhile I'm free to look at what it's doing and check whether it's doing something stupid or wrong, or think about the next logical step.
Normally the result for me has been 90% good, I may need to fix a couple things I don't like, but then syntax and warnings have already been worked out, so I can focus on actually reading, understanding and modifying the logic and catching actual logic issues. I don't need to spend 5+ days learning how to use an entire library, only to find out that the specific one I selected is missing feature X that I couldn't foresee using last week. That part takes now 10m and I don't have to do it myself, I just bring the finishing touches where AI cannot get to (yet?).
I've found that giving the tool (I personally love Copilot/Claude) all the context you have (e.g. .github/copilot-instructions.md) makes a ton of difference with the quality of the results.
you on another comment in here lol you’re blinded by the hate! let it flow through you! insult the users of the bad neural nets until the world heals! don’t back down!
In my experience, people who bombard threads with insults based on the technology people use (a specific set of neural networks in this case) are… well, people who don’t have much better to do in life. you openly advocated for insulting and bullying people in your other comments. don’t back down with this “it’s just an observation” bs, own it! be you!
…or change your behavior and be a better person, whatever works
btw I’ve been doing “the AI” stuff since 2018 in industry and before in academia. I find your worldview displayed in your comments to be incredibly small and easily dismissible
You want to market to engineers, stick to provable statements. And address some of their concerns. With something other than "AI is evolving constantly, all your problems will be solved in 6 months, just keep paying us."
Oh by the way, what is the OP trying to sell with these FOMO tactics? Yet another ChatGPT frontend?
I'll take that, but don't see how it's so different from the intent I've always had of "automating myself out of the job". When I want to do "engineering", I can always spin up Factorio or Turing Complete. But for the rest of the time, I care about the result rather than the process. For example, before starting to implement a tool, I'll always first search online for whether there is already a good tool that would address my need, and if so, I'll generally utilize that.
You download a tool written by a human, you can reasonably expect that it does what the author claims it does. And more, you can reasonably expect that if it fails it will fail in the same way in the same conditions.
I wrote some Turing Machine programs back in my Philosophy of Computer Science class during the 80's, but since then my Turing Machine programming skills have atrophied, and I let LLMs write them for me now.
Perhaps in that case the critics should direct their ire at the marketing departments, rather than trashing the tech?
Really though, the potential in this tech is unknown at this point. The measures we have suggest there's no slowdown in progress, and it isn't unreasonable for any enthusiast or policy maker to speculate about where it could go, or how we might need to adjust our societies around it.
At least AI can write grammatically correct sentences and use punctuation and proper capitalization.
As a 0.1x low effort Hacker News user who can't lift a pinky to press a shift or punctuation key, you should consider using AI to improve the quality of your repetitive off-topic hostile vibe postings and performative opinions.
Or put down the phone and step away from the toilet.
And you just unwittingly proved my point, so I'm downgrading you to an 0.01x low effort Hacker News user.
If there are no other effects of AI than driving people like you out of the industry, then it's proven itself quite useful.
Edit: ok I will concede that point to you that I was mistaken about 0.01x, for candidly admitting (and continuously providing incontrovertible proof) that you're only a 0.001x low effort Hacker News user. I shouldn't have overestimated you, given all the evidence.
Is it possible you're not the target audience if you are aware that LLMs are impressive and useful? Regardless of the inane hype and bubble around them.
It does require excluding things yourself, but I've had a lot of success with uBlacklist. The spam sites all share certain common characteristics that you can specifically search for to turn up a whole results page of spam domains to block.
"AI coding is so much better now that any skepticism from 6 months ago is invalid" has been the refrain for the last 3 years. After the first few cycles of checking it out and realizing that it's still not meeting your quality bar, it's pretty reasonable to dismiss the AI hype crowd.
I think we have a real inflection point now. I try it a bit every year and was always underwhelmed. Halfway through this year was the first time it really impressed me. I now use Claude Code.
But Claude Code costs money. You really want to introduce a critical dependency into your workflow that will simultaneously atrophy your skills and charge you subscription fees?
A year ago I could get o1-mini to write tests some of the time that I would then need to fix. Now I can get Opus 4.5 to do fairly complicated refactors with no mistakes.
These tools are seriously starting to become actually useful, and I’m sorry but people aren’t lying when they say things have changed a lot over the last year.
It might even be true this time, but there is no real mystery why many aren't inclined to invest more time figuring it out for themselves every few months. No need for the author of the original article to reach for "they are protecting their fragile egos" style of explanation.
The productivity improvements speak for themselves. Over time, those who can use ai well and those who cannot will be rewarded or penalized by the free market accordingly.
That's what it really all comes down to, isn't it?
It doesn't matter if you're using AI or not, just like it never mattered if you were using C or Java or Lisp, or using Emacs or Visual Studio, or using a debugger or printf's, or using Git or SVN or Rational ClearCase.
What really matters in the end is what you bring to market, and what your audience thinks of your product.
So use all the AI you want. Or don't use it. Or use it half the time. Or use it for the hard stuff, but not the easy stuff. Or use it for the easy stuff, but not the hard stuff. Whatever! You can succeed in the market with AI-generated product; you can fail in the market with AI-generated product. You can succeed in the market with human-generated product; you can fail in the market with human-generated product.
If there’s evidence of productivity improvements through AI use, please provide more information. From what I’ve seen, the actual data shows that AI use slows developers down.
The sheer number of projects I've completed that I truly would never have been able to even make a dent in is evidence enough for me. I don't think research will convince you. You need to either watch someone do it, or experiment with it yourself. Get your hands dirty on an audacious project with Claude code.
Meanwhile I'm getting a 5000-line PR with code that's all clearly AI generated.
It's full of bloat: unused HTTP endpoints, lots of small utility functions that could have been inlined (but now come with unit tests!), missing translations, only somewhat correct design...
The quality wasn't perfect before, now it has taken a noticeable dip. And new code is being added faster than ever. There is no way to keep up.
I feel that I can either just give in and stop caring about quality, or I'll be fixing everyone else's AI code all of my time.
I'm sure that all my particular colleagues are just "holding it wrong", but this IS a real experience that I'm having, and it's been getting worse for a couple of years now.
I am also using AI myself, just in a much more controlled way, and I'm sure there's a sweet spot somewhere between "hand-coding" and vibing.
I just feel that as you inch in on that sweet spot, the advertised gains slowly wash away, and you are left with a tangible, but not as mindblowing improvement.
> "The engineers refusing to try aren’t protecting themselves; quite the opposite, they’re falling behind. The gap is widening between engineers who’ve integrated these tools and engineers who haven’t. The first group is shipping faster, taking on bigger challenges. The second group is… not."
Honest question for the engineers here. Have you seen this happening at your company? Are strong engineers falling behind when refusing to integrate AI into their workflow?
As before, the big gap I still see is between engineers who set something up the right way and engineers who push code up without considering the bigger picture.
One nice change however is that you can guide the latter towards a total refactor during code review and it takes them a ~day instead of a ~week.
No. The opposite. The people who “move faster” are literally just producing tech debt that they get a quick high five for, then months later we limp along still dealing with it.
A guy will proudly deploy something he vibe coded, or “write the documentation” for some app that a contractor wrote, and then we get someone in the business telling us there’s a bug because it doesn’t do what the documentation says, and now I’m spending half a day in meetings to explain and now we have a project to overhaul the documentation (meaning we aren’t working on other things), all because someone spent 90 seconds to have AI generate “documentation” and gave themselves a pat on the back.
I look at what was produced and just lay my head down on the desk. It’s all crap. I just see a stream of things to fix: conventions not followed, 20 extra libraries included when 2 would have done, code not organized, a new function that should have gone in a different module because where it is now creates tight coupling between two modules that were intentionally built not to be coupled.
It’s a meme at this point to say, ”all code is tech debt”, but that’s all I’ve seen it produce: crap that I have to clean up, and it can produce it way faster than I can clean it up, so we literally have more tech debt and more non-working crap than we would have had if we just wrote it by hand.
We have a ton of internal apps that were working, then someone took a shortcut and 6 months later we’re still paying for the shortcut.
It’s not about moving faster today. It’s about keeping the ship pointed in the right direction. AI is a guy on a jet ski doing backflips, telling us we’re falling behind because our cargo ship hasn’t adopted jet skis.
AI is a guy on his high horse, telling everyone how much faster they could go if they also had a horse. Except the horse takes a dump in the middle of the office and the whole office spends half their day shoveling crap because this one guy thinks he’s going faster.
What worries me is how AI impacts neurodivergent programmers. I have ADHD and it simply doesn't work for me to constantly be switching context between the code I'm writing and the AI chat. I am terrified that I will be forced out of the industry if I can't keep up with people who are able to use AI.
Fellow diagnosed ADHD here. And I know every ADHD is different and people are different.
What helps me is:
- Prefer faster models like VSCode's Copilot Raptor Mini which, despite the name, is like 80% as capable as Sonnet 4.5. And is much faster. It is a fine-tuned GPT-5 mini.
- Start writing the next prompt while the LLM works, or keep pondering the current problem at hand. This helps our chaotic brains stay focused.
I find that any additional overhead caused by the separate AI chat is saved 20x over by basically never having to use a browser to look at documentation and S/O while coding.
That makes sense. I do use AI for questions like "what's the best way to flatten a list of lists in Python" or "what is the interface for this library function". I just don't use it the way I see some people do where they have it write the rough draft of their code or identify where a bug is.
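For illustration, the kind of answer I'm after there is just a couple of standard-library idioms (a quick sketch):

    nested = [[1, 2], [3], [4, 5, 6]]

    # One level of flattening with a nested comprehension...
    flat = [x for sub in nested for x in sub]    # [1, 2, 3, 4, 5, 6]

    # ...or with itertools, which avoids the explicit double loop.
    from itertools import chain
    flat = list(chain.from_iterable(nested))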
> If you’ve actually tried modern tools and they didn’t work for you, that’s a conversation worth having. But “I tried ChatGPT in 2022” isn’t that conversation.
How many people are actually saying this? Also how does one use modern coding tools in heavily regulated contexts, especially in Europe?
I can't disagree with the article and say that AI has gotten worse because it truly hasn't, but it still requires a lot of hand holding. This is especially true when you're 'not allowed' to send the full context of a specific task (like in health care). For now at least.
This story ends up being relevant in a metaphorical way.
My aunt was born in the 1940s, and was something of an old-fashioned feminist. She didn't know why she wasn't allowed to wear pants, or why she had to wait for the man to make the first move, etc. She tells a story about a man who ditched her at a dance once because she didn't know the "latest dance." Apparently in the 1950s, some idiot was always inventing a new dance that everyone _just had to follow_. The young man was so embarrassed that he left her at the dance.
I still think about this story, and think about how awful it would have been to live in the 40s. There always has been social pressure and change, but the "everyone's got to learn new stupid dances all the time" sort of pressure feels especially awful.
This really reminds me of the last 10-20 years in technology. "Hey, some dumb assholes have built some new technology, and you don't really have the choice to ignore it. You either adopt it too, or are left behind."
As I see it, this is an inherent part of the tech industry. Unless you expressly choose to focus your career on maintaining legacy code, your value as a dev depends on your ability and willingness to continuously learn new tech.
Just normal Luddite things, which attract those most threatened in their personal identity by the new technology.
You see it obviously with the artists and image/video generators too.
We went through this before with art too, with Dadaism and Impressionism and photography.
Ultimately, it's just more abstraction that we have to get used to -- art is stuff people create with their human expression.
It is funny to see everyone argue so vehemently without any interest in the same arguments that happened in the past.
Exit Through the Gift Shop is a good movie that explores that topic too, though with near-plagiarized mass production, not LLMs, but I guess that's pretty similar too!
I mean, luddites have consistently been correct. Technological advancements have consistently been used to benefit the rich at the expense of regular people.
The early Industrial Revolution that the original Luddites objected to resulted in horrible working conditions and a power shift from artisans to factory workers.
Dadaism was a reaction to WWI, where the aristocracy's greed and petty squabbling led to 17 million deaths.
I don't disagree with that, just with the idea that there's anything that can be done about it. Which technology did we successfully roll back? Nukes are the closest I think you can get, and those are very hard to make and still exist in abundance; we just somewhat controlled who can have them.
Quite a few come to mind: chemical and biological weapons, beanie babies, NFTs, garbage pail kids... Some take real effort to eradicate, some die out when people get bored and move on.
Today's version of "AI," i.e. large language models for emitting code, is on the level of fast fashion. It's novel and surprising that you can get a shirt for $5, then you realize that it's made in a sweatshop, and it falls apart after a few washings. There will always be a market for low-quality clothes, but they aren't "disrupting non-nudity."
So are beanie babies, NFTs and garbage pail kids -- things that have fallen out of fashion aren't the same thing as eradicating a technology. I think that's part of the difficulty: how could you roll back knowledge without some Khmer Rouge generational trauma?
I think about the original use of steam engines and the industrial revolution -- steam engines were so inefficient, their use didn't make sense outside of pulling their own fuel out of the ground -- many people said haha look how silly and inefficient this robot labor is. We can see how that all turned out.[2]
> Things that have fallen out of fashion aren't the same thing as eradicating a technology.
That's true. Ruby still exists, for example, though it's sitting down below COBOL on the Tiobe index. There's probably a community trading garbage pail kids on Facebook Marketplace as well. Ideas rarely die completely.
Burning fossil fuels to turn heat into kinetic energy is genuinely better than using draft animals or human slaves. Creating worse code (or worse clothing) for less money is a tradeoff that only works for some situations.
Since you're a real established artist, I want to make my point more clear: I am not an artist and while AI image tools let me make fun pictures and not be reliant on artists for projects, it doesn't imbue me with the creativity to create artistic works that _move_ people or comment on our society. AI doesn't give or take that from you, and I argue that is what truly separates art and artists from doodles and doodlers.
> Malcolm L. Thomis argued in his 1970 history The Luddites that machine-breaking was one of the very few tactics that workers could use to increase pressure on employers, undermine lower-paid competing workers, and create solidarity among workers. "These attacks on machines did not imply any necessary hostility to machinery as such; machinery was just a conveniently exposed target against which an attack could be made." [emph. added] Historian Eric Hobsbawm has called their machine wrecking "collective bargaining by riot", which had been a tactic used in Britain since the Restoration because manufactories were scattered throughout the country, and that made it impractical to hold large-scale strikes. An agricultural variant of Luddism occurred during the widespread Swing Riots of 1830 in southern and eastern England, centring on breaking threshing machines.
Luddites were closer to “class struggle by other means” than “identity politics.”
I'm not an AI fanatic, but I do use ChatGPT often. In my experience, ChatGPT now is only marginally better than it was in 2022. The only real improvements are due to "thinking" abilities, i.e. searching the web and spending more tokens (basically prompting itself). The underlying model still feels largely the same to me.
I feel like I'm living in a different world when every time a new model comes out, everyone is in awe, and it scores exceptionally well on some benchmark that no one had heard of before the model even launched. And then when I use it, it feels exactly the same as all the models before, and makes the same stupid mistakes as always.
I just don't find it interesting. The only thing less interesting is the constant evangelism about it.
I also find that the actual coding is important. The typing may not be the most interesting bit, but it's one of the steps that helps refine the architecture I had in my head.
100% agree. My only super power is weaponized “trying to understand”, spending a Saturday night in an obsessive fever dream of trying to wrap my head around some random idea.
That happens to produce good code as a side effect. And a chat bot is perfect for this.
But my obsession is not with output. Every time I use AI agents, even if it does exactly what I wanted, it’s unsatisfying. It’s not something I’m ever going to obsess over in my spare time.
It's good to be skeptical of new ideas as long as you don't box yourself in with dogmatism. If you're young you do this by looking at the world with fresh eyes. If you are experienced you do it by identifying assumptions and testing them.
Ha! I just saw one of these this morning on LinkedIn, an engineer complaining about AI / Vibecoding and thought exactly the same. I find these overreactions amusing.
I don’t know why this is so controversial; it’s just a tool. You should learn to use it, otherwise, as the author of this post said, you will get left behind - but don’t cut yourself on the new tool (lots of people are doing this).
I personally love it because it allows me to create personal tools on the side that I just wouldn’t have had time for in the past. The quality doesn’t matter so much for my personal projects and I am so much more effective with the additional tools I’m able to create.
> I don’t know why this is so controversial; it’s just a tool
Do you really "don't know why"? Are you sure?
I believe that ignoring the consequences that commercial LLMs are having on the general public today is just as radical as being totally opposed to them. I can at least understand the ethical concerns, but being completely unaware of the debate on artificial intelligence at this stage is really something that leaves me speechless, let me tell you.
AI is a tool. Like every other tool under the sun, it has strengths and weaknesses, and it's our job, as software engineers, to try it out and understand when/how to use it in our workflows, or whether it fits our use cases at all.
If you disagree with the above statement, try replacing "AI" with "Docker", "Kubernetes", "Microservices architecture", "NoSQL", or any other tool/language/paradigm that was widely adopted in the software development industry until people realized it's awesome for some scenarios but not a be-all and end-all solution.
I wonder how many of us are like me: Just waiting for AI to get Good Enough (TM). The skill required to use AI is probably decreasing, and the AI getting better, so why not just wait? Time will tell.
Exactly - if these tools are going to be so revolutionary and different within the next 6 months, and even more so beyond that, there's no advantage to being an early adopter since your progress becomes invalid; may as well wait until it is good enough.
I like learning, I like programming, primarily because it lets me create whatever App I want. I'm continually choosing the most productive languages, IDEs and tooling that lets me be the most productive. I view AI in the same regard, where it lets me achieve whatever I want to create, but much faster.
Sure if you want to learn programming languages for programming sake, then yeah don't Vibe Code (i.e. text prompting AI to code), use AI as a knowledgeable companion that's readily on hand to help you whenever you get stuck. But if your goal is to create Software that achieves your objectives then you're doing yourself a disservice if you're not using AI to its maximum potential.
Given my time on this earth is finite, I'm in the camp of using AI to be as productive as possible. But that's still not everything yet, I'm not using it for backend code as I need to verify every change. But more than happy to Vibe code UIs (after I spend time laying down a foundation to make it intuitive where new components/pages go and API integration).
Other than that I'll use AI where I can (UIs, automation & deployment scripts, etc). I've even switched over to using React/Next.js for new Apps because AI is more proficient with it. Even old Apps that I wouldn't normally touch because they used legacy tech that's deprecated, I'll just rewrite the entire UI in React/Next.js to get them to a place where I can use text prompts to add new features. It took about ~20mins for Claude Code to get the initial rewrite implemented (using the old code base as a guide), then a few hours over that to walk through every feature and prompt it to add features it missed or fix broken functionality [1]. I ended up spending more time migrating it from AWS/ECS/RDS to Hetzner w/ automated backups than on the actual rewrite.
I don't have a beard, but if I did I'm sure it would be white, beyond grey.
It's okay. It's okay to feel annoyed, you have a tough battle ahead of you, you poor things.
I may be labelled a grey beard but at least I get to program computers. By the time you have a grey beard maybe you are only allowed to talk to them. If you are lucky and the billionaires that own everything let you...
Sorry :) I couldn't resist. I think I'm the oldest person in the department and I think also that I am probably one of the ones that have been using AI in software development the most.
Don't be so quick to point at old people and make assumptions. Sometimes all those years actually translate into useful experience :)
Possibly. The focus of a lot of young people should be to try and effect political change that doesn't allow billionaires' wealth to grow without end. AI is going to accelerate this very rapidly now. Just look at what kind of world some of those with the most wealth are wanting to impose on the others now. It's frightening.
My reason for initially dismissing it was that, to me, it felt like it was taking away the fun part of the job. We have all these tasks, and writing the code is this creative act, designed to be read by other humans. Just like how I don’t want AI to write music for me.
But I see where things are going. I tried some of the newer tooling over the past few weeks. They’re too useful to ignore now. It feels like we’re entering into an industrial age for software.
it does seem like the skepticism is fading. I do think engineers that outright refuse to use AI (typically on some odd moral principle) are in for a bad time
Maybe I've always been a terrible engineer but I'm humble enough to admit the way I code has always been exactly like the LLM. If it's something brand new I'm googling it and pattern matching how to write it. If it's based on existing functionality I'm doing ctrl + f and pattern matching based on that and how to insert the minimal code changes to accomplish the task
Many have the attitude of finding one edge case where it doesn’t work well and dismissing AI as a useful tool.
I’m an early adopter and nowadays all I do is to co-write context documents so that my assistant can generate the code I need
AI gives you an approximated answer; it depends on you how to steer it to a good enough answer, and that takes time and a learning curve … and it evolves really fast.
Some people are just not good at constantly learning things
> Many have the attitude of finding one edge case where it doesn’t work well and dismissing AI as a useful tool
Many programmers work on problems (nearly) *all day* where AI does not work well.
> AI gives you an approximated answer; it depends on you how to steer it to a good enough answer
Many programmers work on problems where correctness is of essential importance, i.e. if a code block is "semi-right" it is of no use - and even having to deal with code blocks where you cannot trust that the respective programmer thought deeply about such questions is a huge time sink.
> Some people are just not good at constantly learning things
Rather: some people are just not good at constantly looking beyond their programming bubble where AI might have some use.
Jenny, please try to conduct yourself with some sense of decorum here -- These are real people you're bullying. This isn't a hatemonger platform like some of the others. Please try to do better
they called me an idiot in the other thread for pointing out AI is broader than just LLMs (after they called everyone that uses AI an idiot) lol they’re clearly very angry and bitter, and I believe this is not the first account they’ve made to bombard threads with insults. in another comment they advocate for insulting the “AI idiots”
it’s not bullying in that it’s more entertaining than insulting, but still
ah in another comment (I am enjoying reading these):
> Ruthlessly bully LLM idiots
quite openly advocating for “bullying” people that use the bad scary neural nets!
If you feel the need to hype up AI to this degree, you should provide some data proving that AI use actually increases productivity. This type of fact-free polemic isn’t interesting or useful.
This fits my experience: programmers who are very vocal in their hatred of using AI for programming work have, in my opinion, traits that make them great programmers (though I have to admit that such people often don't score very high on the Agreeableness personality trait :-) ).
As one of those on the skeptical side, one train of thought I have not seen people even mention is that the way we’re using LLMs to code now is largely to use a less precise language (mostly English) to specify what’s often a very precise problem and solution. Why would we think that spoken language is the best interface for doing this?
I work in crypto (L1 chain) as a DevOps engineer (LOTS of baremetal, LOTS of CI/CD etc) and it's been amazing to see what Claude can do in this space too.
e.g. had an issue with connecting to AWS S3, gave Claude some of the code to connect and it diagnosed a CREDENTIALS issue without seeing the credentials file nor seeing the error itself. It can even find issues like "oh, you have an extra space in front of the build parameter that the user passed into a Jenkins job". Something that a human might have found in 30+ minutes of grepping, checking etc it found in <30 seconds.
It also makes it trivial to do things like "hey, convert all of the print statements in this python script to log messages with ISO 8601 time format".
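The target pattern for that kind of conversion is roughly this (a quick sketch with a made-up filename; the logging calls are plain standard library):

    import logging

    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
        datefmt="%Y-%m-%dT%H:%M:%S",   # ISO 8601-style timestamps
    )
    log = logging.getLogger(__name__)

    filename = "report.csv"            # made-up value, just for illustration
    # before: print(f"uploaded {filename}")
    log.info("uploaded %s", filename)  # after the conversion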
Folks talk about "but it adds bugs" but I'm going to make the opposite argument:
The excuse of "we don't have time to make this better" is effectively gone. Quality code that is well instrumented, has good metrics and easy to parse logs is only a few prompts away. Now, one could argue that was the case BEFORE we had AI/LLMs and it STILL didn't happen so I'm going to assume folks that can do clean up (SRE/DevOps/code refactor specialists) are still going to be around.
> gave Claude some of the code to connect and it diagnosed a CREDENTIALS issue without seeing the credentials file nor seeing the error itself
10 years ago google would have had a forum post describing your exact problem with solutions within the first 5 results.
Today google delivers 3 pages of content farm spam with basic tutorials, 30% of them vaguely related to your problem, 70% just containing "aws" somewhere, then stops delivering results.
The LLM is just fixing search for you.
Edit: and by the way, it can fix search for you just because somewhere out there there are forum posts describing your exact problem.
What's a "Code Refactor Specialist"?
Are you implying that in the future we'll have programmers who will just write code using AI and a specialist role whose job it would be to clean up that code? That isn't going to work, you'll need a superhuman for that role. People who write the code using AI have to be the ones who review it and they have to be responsible for the quality of that code.
Yes, I remember a while ago it fixed a pipeline problem where I had managed to copy and paste an IP with one of the digits missing at the end. I'd spent about an hour before that looking at everything else (all the other steps succeeded, but the last one 'timed out' because of the bad paste). It took <30secs, as you said, to instantly diagnose the problem.
What you suggested here is trivial with existing tools—linters in the first case, search-and-replace functions in editors for the second.
I have yet to see any evidence of the third case. I'm close to banning AI for my junior devs. Their code quality is atrocious. I don't have time for all that cleanup. Write it well the first time around.
We are moving up an abstraction layer. From the perspective of the business, my job is not to write code, my job is to ship products. The language you use to ship products is your tool of choice. Sure, it could be Python or Typescript, but my tool of choice is natural language.
I'm not even sure there is much room left for one.
There is very little alignment in starting assumptions between most parties in this convo. One guy is coding mission critical stuff, the other is doing throw away projects. One guy depends on coding to put food on table, the other does not. One guy wants to understand every LoC, other is happy to vibe code. One is a junior looking for first job, other is in management in google after being promoted out of engineering. One guy has access to $200/m tech, the other does not. etc etc
We can't even get consensus on tab vs spaces...we're not going to get AI & coding down to consensus or who is "right".
Perhaps a bit nihilistic & jaded, but I'm very much leaning towards "place your bets & may the odds be ever in your favour".
We’ve been “losing skills” to better tools forever, and it’s usually been a net positive. Nobody hand-writes a sorting algorithm in production to “stay sharp”, most of us don’t do long division because calculators exist, and plenty of great engineers today couldn’t write assembly (or even manage memory in C) comfortably. That didn’t make the industry worse; it let us build bigger things by working at higher abstraction.
LLM-assisted coding feels like the next step in that same pattern. The difference is that this abstraction layer can confidently make stuff up: hallucinated APIs, wrong assumptions, edge cases it didn’t consider. So the work doesn’t disappear, it shifts. The valuable skill becomes guiding it: specifying the task clearly, constraining the solution, reviewing diffs, insisting on tests, and catching the “looks right but isn’t” failures. In practice it’s like having a very fast junior dev who never gets tired and also never says “I’m not sure”.
That’s why I don’t buy the extremes on either side. It’s not magic, and it’s not useless. Used carelessly, it absolutely accelerates tech debt and produces bloated code. Used well, it can take a lot of the grunt work off your plate (refactors, migrations, scaffolding tests, boilerplate, docs drafts) and leave you with more time for the parts that actually require engineering judgement.
On the “will it make me dumber” worry: only if you outsource judgement. If you treat it as a typing/lookup/refactor accelerator and keep ownership of architecture, correctness, and debugging, you’re not getting worse—you’re just moving your attention up the stack. And if you really care about maintaining raw coding chops, you can do what we already do in other areas: occasionally turn it off and do reps, the same way people still practice mental math even though Excel exists.
Privacy/ethics are real concerns, but that’s a separate discussion; there are mitigations and alternatives depending on your threat model.
At the end of the day, the job title might stay “software engineer”, but the day-to-day shifts toward “AI guide + reviewer + responsible adult.” And like every other tooling jump, you don’t have to love it, but you probably do have to learn it—because you’ll end up maintaining and reviewing AI-shaped code either way.
Basically, I think the author hit the nail on the head.
If you don't see the limitations of vibe coding, I shudder on the idea of maintaining your code even pre-AI.
Do I use it? Yes, a lot, actually. But I also spend a lot of time pruning its overly verbose and byzantine code; my Esc key is fading from the number of times I've interrupted it to steer it in a non-idiotic direction.
It is useful, but if you trust it too much, you're creating a mountain of technical debt.
The fact that I hear this mantra over and over again:
"She wrote a thing in a day that would have taken me a month"
This scares me. A lot.
I never found the coding part to be a bottleneck, but the issues arise after the damn thing is in prod. If I work on something big (that will take me a month), that's going to be anywhere from (I'm winging these numbers) 10K LOC to 25K LOC.
If that's the benchmark for me, the next guy using AI will spew out, at a bare minimum, double the amount of code, and in many cases 3x-4x.
The surface area for bugs is just vastly bigger, and fixing these bugs will eventually take more time than you "won" using AI in the first place.
It really depends on how you use it. I really like using AI for prototyping new ideas (it can run in the background while I work on the main project) and for getting the boring grunt work (such as creating CRUD endpoints on a RESTful API) out of the way, leaving me more time to focus on the code that really is challenging and needs a deeper understanding of the business or the system as a whole.
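To be concrete about what I mean by grunt work, here's a minimal sketch of the kind of CRUD endpoint I'm happy to delegate (assuming FastAPI and an in-memory store; all names made up):

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    class Item(BaseModel):
        id: int
        name: str

    app = FastAPI()
    items: dict[int, Item] = {}              # toy in-memory "database"

    @app.post("/items")
    def create_item(item: Item) -> Item:
        items[item.id] = item
        return item

    @app.get("/items/{item_id}")
    def read_item(item_id: int) -> Item:
        if item_id not in items:
            raise HTTPException(status_code=404, detail="item not found")
        return items[item_id]

Multiply that by every entity in the system and it's exactly the kind of repetitive work I'd rather hand off.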
The boring stuff like CRUD always needs design. Otherwise you end up with a 2006-era PHP-like "this is a REST API" spaghetti monster. The fact that AI can't do this (and probably never will) is just another showstopper.
I tried AI, but the code it produces (on a higher level) is of really poor quality. Refactoring this is a HUGE PITA.
I keep seeing this over and over by so called "engineers".
You can dismiss the current crop of transformers without dismissing the wider AI category. To me this is like saying that users "dismiss computers" because they dismiss Windows and prefer Linux instead, or that they reject modern practices for not getting on the microservice hype train or not using React.
Intellisense pre-GPT is a good example of AI that wasn't using transformers.
And of course, you can criticise some usages of transformers in IDEs and editors while appreciating and using others.
"My coworker uses Claude Code now. She finished a project last week that would’ve taken me a month". This is one of those generalisations. There is no nuance here. The range of usage from boilerplate to vibe code level is vast. Quickly churning out code is not a virtue. It is not impressive to ship something only to find critical bugs on the first day. Nor is it a virtue using it at the cost of losing understanding of the codebase.
This rigid thinking by devs needs to stop imo. For so called rational thinkers, the development world is rife with dogma and simplistic binary thinking.
If using transformers at any level is cost-effective for all, the data will speak for itself. Vague statements and broad generalisations are not going to sway anyone and will just make this kind of article sound like validation-seeking behaviour.
The author doesn't consider the possibility that engineers dismiss AI after having tried it repeatedly. Not once, not twice, but consistently.
I am one of those dismissers. I am constantly trash talking AI. Also, I have tried more tools and more stress scenarios than a lot of enthusiasts. The high bars are not in my head, they are on my repositories.
Talk is cheap. Show me your AI generated code. Talk tech, not drama.
It's been somewhat disheartening to see many techie spaces (HackerNews included) become so skeptical and anti-AI. It's as if the Luddites are at it again, rejecting progress because of a bad impression or because they fear the consequences.
AI is a tool and it should be treated as such.
Also, beware of snake oil salesmen.
Is AI going to integrate widely into the world? Yes.
Is it also going to destroy all the jobs in the world? Of course not, luddites don't understand the naïvety of this position.
And even if LLMs turn out to really be a net positive and a requirement for the job, they're antithetical to what most software developers appreciate and enjoy (precision, control, predictability, efficiency...).
There sure seem to be two kinds of software developers: those who enjoy the practice and those who are mostly in it for the pay. If LLMs win, it will be the second group who stays on the job, and that's fine; it won't mean that the first group was made of Luddites, but that the job has turned into crap that others will take over.
The two categories of software developers you mention already existed pre ChatGPT and will likely continue to exist. If anything, AI's going to make those who're in it just for the money much less relevant.
Do you really think that Software Engineering is going to be less about precision, control, predictability, and efficiency? These are fundamental skills regardless of AI.
As someone whose stance is to be extremely skeptical of AI, I threw Claude at a complex feature request in a codebase I wasn't very familiar with, and it managed to come up with a solution that was 99% acceptable. I was very impressed, so I started using it more.
But it's really a mixed bag, because for the subsequent 3-4 tasks in a codebase that I was familiar with, Claude managed to produce over-commented, over-engineered slop that didn't do what I asked for and took shortcuts in implementing the requirements.
I definitely wouldn't dismiss AI at this point because it occasionally astounds me and does things I would never in my life have imagined possible. But at other times, it's still like an ignorant new junior developer. Check back again in 6 months I guess.
> The gap is widening between engineers who’ve integrated these tools and engineers who haven’t.
Let's wait with the evaluation until the honeymoon phase is over.
At the moment there are plenty of companies that offer cheap AI tools. It will not stay that way.
At the moment most of their training data is man made and not AI made which makes AIs worse if used for training.
It will not stay that way.
Yeah it boggles my mind all the people on here constantly dismissing LLMs.
It's very clearly getting better and better rapidly. I don't think this train is stopping even if this bubble bursts.
The cold ass reality is: We're going to need a lot less software engineers moving forward. Just like agriculture now needs way less humans to do the same work than in the past.
I hate to be blunt but if you're in the bottom half of the developer skill bell curve, you're cooked.
If you hate reading other people's code, then you'll hate reading LLM-generated code, and all you'll ever be with AI, at best, is yet another vibe coder who produces piles of code they never intend to read - so you should have found another career even before LLMs were a thing.
Responsible use of ai means reading lots and lots of generated code, understanding it, reviewing and auditing it, not "vibe coding" for the purpose of avoiding ever reading any code.
> If you hate reading other people's code, then you'll hate reading LLM-generated code, and all you'll ever be with AI, at best, is yet another vibe coder who produces piles of code they never intend to read - so you should have found another career even before LLMs were a thing.
I do like to read other people's code if it is of an exceptionally high standard. But otherwise I am very vocal in criticizing it.
IMO those screencasts work because they are painstakingly planned toy projects from scratch.
Even without AI you cannot do a tight 10-minute video on legacy code unless you have done a lot of work ahead of time to map it out, and then what's the point?
That would be fantastic. I’ve seen so many claims like the author’s
> [Claude Code and Cursor] can now work across entire codebases, understand project context, refactor multiple files at once, and iterate until it’s really done.
But I haven’t seen anyone doing this on e.g. YouTube? Maybe that kind of content isn’t easy to monetize, but if it’s as easy to use AI as everyone says surely someone would try.
> if it’s as easy as everyone says surely someone would try.
Yeah, 18 months ago we were apparently going to have personal SaaSes and all sorts of new software - I don't see anything but an even more unstable web than ever before
I would never have had a working LoongArch emulator in 2 weeks at the kind of quality that I desire without it. Not because it writes perfect code, but because it sets everything up according to my will, does some things badly, and then I can take over and do the rest. The first week I was just amending a single commit that set everything up right and got a few programs working. A week after that it runs on multiple platforms with JIT-compilation. I'm not sure what to say, really. I obviously understand the subject matter deeply in this case. I probably wouldn't have had this result if I ventured into the unknown.
Although, I also made it create Rust and Go bindings. Two languages I don't really know that well. Or, at least not well enough for that kind of start-to-finish result.
Another commenter asked a really interesting question: how do you avoid degrading your abilities? I have to say that I still had to spend days figuring out really hard problems. Who knew that 64-bit MinGW has a different struct layout for gettimeofday than 64-bit Linux? It's not that it's not obvious in hindsight, but it took me a really long time to figure out that was the issue, when all I have to go on is something that looks like incorrect instruction emulation. I must have read the LoongArch manual up and down several times and gone through instructions one by one, disabling everything I could think of, before finally landing on the culprit just being a mis-emulated, kind-of legacy system call that tells you the time. ... and if the LLM had found this issue for me, I would have been very happy about it.
There are still unknowns that LLMs cannot help with, like running Golang programs inside the emulator. Golang has a complex run-time that uses signal-based preemption (sysmon) and threads and many other things, which I do emulate, but there is still something missing to pass all the way through to main() even for a simple Hello World. Who knows if it's the ucontext that signals can pass or something with threads or per-state signal state. Progression will require reading the Go system libraries (which are plain source code), the assembly for the given architecture (LA64), and perhaps instrumenting it so that I can see what's going wrong. Another route could be implementing an RSP server for remote GDB via a simple TCP socket.
As a conclusion, I will say that I can only remember twice I ditched everything the LLM did and just did it myself from scratch. It's bound to happen, as programming is an opinionated art. But I've used it a lot just to see what it can dream up, and it has occasionally impressed. Other times I'm in disbelief as it mishandles simple things like preventing an extra masking operation by moving something signed into the top bits so that extracting it is a single shift, while sharing space with something else in the lower bits.
Overall, I feel like I've spent more time thinking about more high-level things (and occasionally low-level optimizations).
I am neither pro- nor anti-AI. I just don't like the manipulative and blackmailish tactics its proponents use to get me to use it. I will use it whenever I find it useful, not because you tell me I'm getting "left behind" by not adopting it.
> if you haven’t tried modern AI coding tools recently, try one this week.
I don’t think I will. I am glad I have made the radical decision, for myself, to wilfully remain strict in my stance against generative AI, especially for coding. It doesn’t have to be rational, there is good in believing in something and taking it to its extreme. Some avoid proprietary software, others avoid eating sentient beings, I avoid generative AI on pure principle.
This way I don’t have to suffer from these articles that want to make you feel bad, and become almost pleading, “please use AI, it’s good now, I promise” which I find frankly pathetic. Why do people care so much about it to have to convince others in this sad routine? It honestly feels like some kind of inferiority complex, as if it is so unbearable that other people might dislike your favourite tool, that you desperately need them to reconsider.
I rely on AI coding tools. I don’t need to think about it to know they’re great. I have instincts which tell me convenience = dopamine = joy.
I tested ChatGPT in 2022, and asked it to write something. It (obviously) got some things wrong; I don’t remember what exactly, but it was definitely wrong. That was three years ago and I've forgotten that lesson. Why wouldn't I? I've been offloading all sorts of meaningful cognitive processes to AI tools since then.
I use Claude Code now. I finished a project last week that would’ve taken me a month. My senior coworker took one look at it and found 3 major flaws. QA gave it a try and discovered bugs, missing features, and one case of catastrophic data loss. I call that “nitpicking.” They say I don’t understand the engineering mindset or the sense of responsibility over what we build. (I told them it produces identical results and they said I'm just admitting I can't tell the difference between skill and scam).
“The code people write is always unfinished,” I always say. Unlike AI code, which is full of boilerplate, adjusted to satisfy the next whim even faster, and generated by the pound.
I never look at Stack Overflow anymore, it's dead. Instead I want the info to be remixed and scrubbed of all its salient details, and have an AI hallucinate the blanks. That way I can say that "I built this" without feeling like a fraud or a faker. The distinction is clear (well, at least in my head).
Will I ever be good enough to code by myself again? No. When a machine showed up that told me flattering lies while sounding like a silicon valley board room after a pile of cocaine, I jumped in without a parachute [rocket emoji].
I also personally started to look down on anyone who didn't do the same, for threatening my sense of competence.
From some of the engineers I've debated this with, I think some of them have just dug their heels in at this point and have decided they're never going to use LLM tools, period, and are just clinging to the original arguments without really examining the reality of the situation. In particular this "The LLM is going to hallucinate subtle bugs I can't catch" one. The idea that LLMs make mistakes that are somehow more subtle, insidious, and uncatchable than any random 25 pull requests you get from humans is simply ridiculous. The LLM makes mistakes that stick out to you like a sore thumb, because they're not your mistakes. The hardest mistakes to catch are your own, because your thinking patterns are what made them in the first place.
The biggest ongoing problem with LLMs for code is that they have no ability to express low confidence in solutions where they don't really have an answer; instead they just hallucinate things. Claude will write ten great bash lines for you, but then on the eleventh it will completely hallucinate an option on some Linux utility you hardly have time to care about, where the correct answer is "these tools don't actually do that and I don't have an easy answer for how you could do that". At this point I am very quick to notice, when Claude gets itself into an endless loop of thought, that I'm going about something the wrong way. Someone less experienced would have a very hard time recognizing the difference.
> The idea that LLMs make subtle mistakes that are somehow more subtle, insidious and uncatchable compared to any random 25 pull requests you get from humans is simply ridiculous.
This is plainly true, and you are just angry that you don't have a rebuttal
I didn't say the LLM does not make mistakes; I said the idea that a reviewer is going to miss them at a rate any different from the rate for mistakes a human would make is ridiculous.
Missing in these discussions is what kinds of code people are talking about. Clearly, if we're talking about a dense, highly mathematical algorithm, I would not have an LLM anywhere near that. We are talking about day-to-day boilerplate / plumbing stuff, the vast majority of boring grunt work that is not intellectually stimulating. If your job is all Carnegie Mellon-level PhD algorithm work, then good for you.
Edit: I get that it looks like you made this account four days ago to troll HN on AI stuff. I get it, I have a bit of a mission here to pointedly oppose the entrenched culture (namely the extreme right wing elements of it). But your trolling is careless and repetitive enough that it looks like... is this an LLM account instructed to troll HN users on LLM use? Funny.
Isn't this the exact reason why modern software is so bloated?
I used to avoid Python because the lack of typing and API discovery slowed me down a lot, but with AI autocomplete, whenever I need to know how to do something I'll just write a method stub with comments and AI will complete it for me. I'm now spending lots of time writing Python, to create AI Tools and Agents, ComfyUI Custom Nodes, Image and Audio Classifiers, PIL/ffmpeg transformations, etc. Things I'd never have considered before AI.
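As a concrete example of that stub-first workflow: the signature, the type hints, and the comment are roughly all I type, and the body is the kind of thing the completion fills in. This particular letterbox helper is made up purely for illustration.

```python
from PIL import Image

def letterbox(img: Image.Image, size: tuple[int, int]) -> Image.Image:
    """Resize img to fit inside size while keeping aspect ratio,
    padding the remainder with black bars.

    (Hypothetical example: the stub and this comment are what I write;
    everything below is the sort of body AI autocomplete produces.)
    """
    target_w, target_h = size
    scale = min(target_w / img.width, target_h / img.height)
    resized = img.resize((int(img.width * scale), int(img.height * scale)))
    canvas = Image.new("RGB", size, (0, 0, 0))
    canvas.paste(resized, ((target_w - resized.width) // 2,
                           (target_h - resized.height) // 2))
    return canvas
```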
I also don't worry about its effects, as I view it as inevitable. With the pendulum having swung towards code being cheap and dispensable to create, what's more important is velocity and being able to execute your ideas quickly; for me that means using AI where I can.
You can’t and that’s the new normal. We’re probably the only generation which was given an opportunity to get properly good at coding. No such luxury will be available in a few years optimistically; pessimistically it’s been taken away with GPT 5.2 and Opus 4.5.
If that's the case (and I'm not convinced it is), shouldn't retaining that skill be the priority for anyone who has already acquired it? I've yet to see any evidence AI can turn someone who can't code into a substitute for someone who can. If the supply of that skill is going to dry up, surely it will only become more valuable. If using AI erodes it, the logical thing would be not to use AI.
> If that's the case [...], shouldn't retaining that skill be the priority for anyone who has already acquired it?
Indeed I believe that, but in my experience these skills get more and more useless in the job market. In other words: retaining such (e.g. low-level coding) skills is an intensively practised hobby for such people, one that is (currently) of "no use" in the job market.
That's the correct diagnosis IMHO, but getting good at software engineering is ~3 years of serious studying and ~5-10 years of serious work, and that's after you've learned to code, which is easier for some and more difficult for others.
Compare ROI of that to being able to get kinda the software you need in a few hours of prompting; it's a new paradigm, progress is (still) exponential and we don't know where exactly things will settle.
Experts will get scarce and very sought after, but once they start to retire in 10-20-30 years... either dark ages or AI overlords await us.
I think CS students should force themselves to learn the real thing and write the code themselves, at least for their assignments. I have seen that a lot of recent CS grads who have had GPT for most of their CS life basically cannot write proper code, with or without AI.
In the same way we make kids learn addition and multiplication even though they have access to calculators
This is part of the learning curve. When you vibe code you produce something that is as if someone else wrote it. It’s important to learn when that’s appropriate versus using it in a more limited way or not at all.
I worry about that too.
But at this point, it's like refusing to use vehicles to travel long distances for fear of becoming physically unfit. We go to the gym.
I like that analogy a lot. FWIW I also find myself learning a lot more at a higher rate with LLM usage
You can always ask it to nudge you in the right direction instead of giving the solution right away. I suspect this way of using it is not very popular though.
This is not a new problem I think. How do you use Google, translator, (even dictionaries!), etc without "degenerating" your own abilities?
If you're not careful and always rely on them as a crutch, they'll remain just that; without actually "incrementing" you.
I think this is a very good question. How should we actually be using our tools such that we're not degenerating, but growing instead?
> How do you use Google, translator, (even dictionaries!), etc without "degenerating" your own abilities?
By writing down every foreign word/phrase that I don't know, and adding a card for it to my cramming card box.
My answer is: use AI exactly for the tasks that you, as a tech lead on a project, would be OK delegating to someone else. I.e. you still own the project and probably want to devote your attention to all of the aspects that you HAVE to be on top of, but there are probably a lot of tasks where you have a clear definition of the task and its boundaries, and you should be OK to delegate and then review.
This gets particularly tricky when the task requires a competency that you yourself lack. But here too the question is - would you be willing to delegate it to another human whom you don't fully trust (e.g. a contractor)? The answer for me is in many cases "yes, but I need to learn about this enough so that I can evaluate their work" - so that's what I do, I learn what I need to know at the level of the tech lead managing them, but not at the level of the expert implementing it.
Regular coding gyms and problem solving drills
As humans we have developed tools to ease our physical needs (we don't need to run, walk, or lift things), and now we have a tool that thinks and solves problems for us.
> how can I utilize AI without degenerating my own abilities?
Personally I think my skill lies in solving the problem by designing and implementing the solution, but not how I code day-to-day. After you write the 100th getter/setter you're not really adding value, you're just performing a chore because of language/programming patterns.
Using AI and being productive with it is an ability and I can use my time more efficiently than if I were not to use it. I'm a systems engineer and have done some coding in various languages, can read pretty much anything, but am nowhere near mastery in any of the languages I like.
Setting up a project, setting up all the tools and boilerplate, writing the main() function, etc are all tasks that if you're not 100% into the language take some searching and time to fiddle. With AI it's a 2-line prompt.
Introducing plumbing for yet another feature is another chore: search for the right libraries/packages, add dependencies, learn to use the deps, create a bunch of files, sketch the structs/classes, sketch the methods, but not everything is perfectly clear yet, so the first iteration is "add a bunch of stuff, get a ton of compiler warnings, and then refine the resulting mess". With AI it's a small paragraph of text describing what I want and how I'd like it done, asking for a plan, and then simply saying "yes" if it makes sense. Then wait 5-15 minutes. Meanwhile I'm free to look at what it's doing and catch it if it's doing something stupid or wrong, or think about the next logical step.
Normally the result for me has been 90% good, I may need to fix a couple things I don't like, but then syntax and warnings have already been worked out, so I can focus on actually reading, understanding and modifying the logic and catching actual logic issues. I don't need to spend 5+ days learning how to use an entire library, only to find out that the specific one I selected is missing feature X that I couldn't foresee using last week. That part takes now 10m and I don't have to do it myself, I just bring the finishing touches where AI cannot get to (yet?).
I've found that giving the tool (I personally love Copilot/Claude) all the context you have (e.g. .github/copilot-instructions.md) makes a ton of difference with the quality of the results.
[flagged]
This will not age well
[flagged]
Is it your intention to add anything else to this thread, besides insulting those enthusiastic about AI as 'idiots'?
it's not an insult, it's an observation
> push back by insulting AI idiots
you on another comment in here lol you’re blinded by the hate! let it flow through you! insult the users of the bad neural nets until the world heals! don’t back down!
This was a good faith observation.
In my experience, AI users are meaningfully stupider and more gullible than the rest of the population.
In my experience, people who bombard threads with insults based on the technology people use (a specific set of neural networks in this case) are… well, people who don't have much better to do in life. You openly advocated for insulting and bullying people in your other comments. Don't back down with this "it's just an observation" BS, own it! Be you!
…or change your behavior and be a better person, whatever works
btw I’ve been doing “the AI” stuff since 2018 in industry and before in academia. I find your worldview displayed in your comments to be incredibly small and easily dismissible
Some engineers don't dismiss LLMs.
They dismiss the religion like hype machine.
You want to market to engineers, stick to provable statements. And address some of their concerns. With something other than "AI is evolving constantly, all your problems will be solved in 6 months, just keep paying us."
Oh by the way, what is the OP trying to sell with these FOMO tactics? Yet another ChatGPT frontend?
I think we should label devs overreliant on AI as "Engineers who dismiss themselves"
I'll take that, but don't see how it's so different from the intent I've always had of "automating myself out of the job". When I want to do "engineering", I can always spin up Factorio or Turing Complete. But for the rest of the time, I care about the result rather than the process. For example, before starting to implement a tool, I'll always first search online for whether there is already a good tool that would address my need, and if so, I'll generally utilize that.
The nondeterminism is what makes LLMs different.
You download a tool written by a human, you can reasonably expect that it does what the author claims it does. And more, you can reasonably expect that if it fails it will fail in the same way in the same conditions.
Cracktorio! ;) I also love Dyson Sphere Program.
I wrote some Turing Machine programs back in my Philosophy of Computer Science class during the 80's, but since then my Turing Machine programming skills have atrophied, and I let LLMs write them for me now.
Perhaps in that case the critics should direct their ire at the marketing departments, rather than trashing the tech?
Really though, the potential in this tech is unknown at this point. The measures we have suggest there's no slowdown in progress, and it isn't unreasonable for any enthusiast or policy maker to speculate about where it could go, or how we might need to adjust our societies around it.
> it isn't unreasonable for any enthusiast or policy maker to speculate about where it could go
What is posted to HN daily is beyond speculation. I suppose a psychologist has a term for what it is, I don't.
Edit: well, guess what? I asked an "AI":
Psychological Drivers of AI Hype:
By the way, the text formatting is done by the "AI" as well. Asked it to make the table look like a table on HN specifically.
idiocy, delusion, propaganda, lies, and manipulation are a few terms i came up with off the top of my head
At least AI can write grammatically correct sentences and use punctuation and proper capitalization.
As a 0.1x low effort Hacker News user who can't lift a pinky to press a shift or punctuation key, you should consider using AI to improve the quality of your repetitive off-topic hostile vibe postings and performative opinions.
Or put down the phone and step away from the toilet.
while they're definitely overdoing it, i'd say that picking on spelling, grammar etc instead of the content is also low effort
i'm right and he's mad
AI is for idiots
You sound quite mad by both meanings of the term.
And you just unwittingly proved my point, so I'm downgrading you to an 0.01x low effort Hacker News user.
If there are no other effects of AI than driving people like you out of the industry, then it's proven itself quite useful.
Edit: ok I will concede that point to you that I was mistaken about 0.01x, for candidly admitting (and continuously providing incontrovertible proof) that you're only a 0.001x low effort Hacker News user. I shouldn't have overestimated you, given all the evidence.
0.001x
> You want to market to engineers, stick to provable statements.
And that's where the "AI" is lacking.
"AI can write a testcase". Can it write a _correct_ test case (i.e. one that i only have to review, like i review my colleague work) ?
"AI can write requirements". Now, that i'm still waiting to see.
> Can it write a _correct_ test case
And is the test case useful for something? On non textbook code?
spoiler: it is not
AI developers are 0.1x "engineers"
Is it possible you're not the target audience if you are aware that LLMs are impressive and useful? Regardless of the inane hype and bubble around them.
You have to be particularly gullible to fall for these tactics. Especially when the quality of LLM products has declined over the last 18 months.
Allow me to repeat myself: AI is for idiots.
Nah, LLMs have fixed search. For now. I use them daily for that.
Fully expect them to include youtube levels of advertising in 1-2 years though. Just to compensate for the results being somewhat not spammy.
[flagged]
The other option is traditional search engines, which are chock full of spam.
Or perhaps Kagi, but apparently they only give you the tools to exclude the content farms, they don't exclude them themselves.
It does require excluding things yourself, but I've had a lot of success with uBlacklist. Many of the spam sites share certain common characteristics that you can specifically search for, turning up a full page of search results that is nothing but spam domains to block.
Why? I'm happy to spend VC money while they're offering it. If and when they stop giving me an offering that I'm satisfied with, I'll stop using it.
"i'm happy to smoke crack while the dealer is paying for it"
"AI coding is so much better now that any skepticism from 6 months ago is invalid" has been the refrain for the last 3 years. After the first few cycles of checking it out and realizing that it's still not meeting your quality bar, it's pretty reasonable to dismiss the AI hype crowd.
I think we have a real inflection point now. I try it a bit every year and was always underwhelmed. Halfway through this year was the first time it really impressed me. I now use Claude Code.
But Claude Code costs money. You really want to introduce a critical dependency into your workflow that will simultaneously atrophy your skills and charge you subscription fees?
Because it has been true for the last 3 years. Just because a saying is repeated a lot doesn't mean it's wrong.
A year ago I could get o1-mini to write tests some of the time that I would then need to fix. Now I can get Opus 4.5 to do fairly complicated refactors with no mistakes.
These tools are seriously starting to become actually useful, and I’m sorry but people aren’t lying when they say things have changed a lot over the last year.
It might even be true this time, but there is no real mystery why many aren't inclined to invest more time figuring it out for themselves every few months. No need for the author of the original article to reach for "they are protecting their fragile egos" style of explanation.
The productivity improvements speak for themselves. Over time, those who can use ai well and those who cannot will be rewarded or penalized by the free market accordingly.
That's what it really all comes down to, isn't it?
It doesn't matter if you're using AI or not, just like it never mattered if you were using C or Java or Lisp, or using Emacs or Visual Studio, or using a debugger or printf's, or using Git or SVN or Rational ClearCase.
What really matters in the end is what you bring to market, and what your audience thinks of your product.
So use all the AI you want. Or don't use it. Or use it half the time. Or use it for the hard stuff, but not the easy stuff. Or use it for the easy stuff, but not the hard stuff. Whatever! You can succeed in the market with AI-generated product; you can fail in the market with AI-generated product. You can succeed in the market with human-generated product; you can fail in the market with human-generated product.
If there’s evidence of productivity improvements through AI use, please provide more information. From what I’ve seen, the actual data shows that AI use slows developers down.
The sheer number of projects I've completed that I truly would never have been able to even make a dent in is evidence enough for me. I don't think research will convince you. You need to either watch someone do it, or experiment with it yourself. Get your hands dirty on an audacious project with Claude code.
Research is the only thing that will convince me. That’s the way it should be.
What does “can use” mean though. You just ask it to do things in basic English. Everyone can do that with no training.
Do you have evidence?
0.1x
If only you put half as much effort into learning ai as you do trolling people who are getting gains from it...
[flagged]
Meanwhile I'm getting a 5,000-line PR with code that's all clearly AI generated.
It's full of bloat: unused HTTP endpoints, lots of small utility functions that could have been inlined (but now come with unit tests!), missing translations, only somewhat correct design...
The quality wasn't perfect before, now it has taken a noticeable dip. And new code is being added faster than ever. There is no way to keep up.
I feel that I can either just give in and stop caring about quality, or I'll be fixing everyone else's AI code all of my time.
I'm sure that all my particular colleagues are just "holding it wrong", but this IS a real experience that I'm having, and it's been getting worse for a couple of years now.
I am also using AI myself, just in a much more controlled way, and I'm sure there's a sweet spot somewhere between "hand-coding" and vibing.
I just feel that as you inch in on that sweet spot, the advertised gains slowly wash away, and you are left with a tangible, but not as mind-blowing, improvement.
> "The engineers refusing to try aren’t protecting themselves; quite the opposite, they’re falling behind. The gap is widening between engineers who’ve integrated these tools and engineers who haven’t. The first group is shipping faster, taking on bigger challenges. The second group is… not."
Honest question for the engineers here. Have you seen this happening at your company? Are strong engineers falling behind when refusing to integrate AI into their workflow?
As before, the big gap I still see is between engineers who set something up the right way and engineers who push code up without considering the bigger picture.
One nice change however is that you can guide the latter towards a total refactor during code review and it takes them a ~day instead of a ~week.
If anything I’ve learnt more because I’m having to go and find bugs in areas that I’m not super clued up on… yet ;)
No. The opposite. The people who “move faster” are literally just producing tech debt that they get a quick high five for, then months later we limp along still dealing with it.
A guy will proudly deploy something he vibe coded, or “write the documentation” for some app that a contractor wrote, and then we get someone in the business telling us there’s a bug because it doesn’t do what the documentation says, and now I’m spending half a day in meetings to explain and now we have a project to overhaul the documentation (meaning we aren’t working on other things), all because someone spent 90 seconds to have AI generate “documentation” and gave themselves a pat on the back.
I look at what was produced and just lay my head down on the desk. It’s all crap. I just see a stream of things to fix, convention not followed, 20 extra libraries included when 2 would have done. Code not organized, where this new function should have gone in a different module, because where it is now creates tight coupling between two modules that were intentionally built to not be coupled before.
It’s a meme at this point to say, ”all code is tech debt”, but that’s all I’ve seen it produce: crap that I have to clean up, and it can produce it way faster than I can clean it up, so we literally have more tech debt and more non-working crap than we would have had if we just wrote it by hand.
We have a ton of internal apps that were working, then someone took a shortcut and 6 months later we’re still paying for the shortcut.
It’s not about moving faster today. It’s about keeping the ship pointed in the right direction. AI is a guy on a jet ski doing backflips, telling us we’re falling behind because our cargo ship hasn’t adopted jet skis.
AI is a guy on his high horse, telling everyone how much faster they could go if they also had a horse. Except the horse takes a dump in the middle of the office and the whole office spends half their day shoveling crap because this one guy thinks he’s going faster.
What worries me is how AI impacts neurodivergent programmers. I have ADHD and it simply doesn't work for me to constantly be switching context between the code I'm writing and the AI chat. I am terrified that I will be forced out of the industry if I can't keep up with people who are able to use AI.
Fellow diagnosed ADHD here. And I know every ADHD is different and people are different.
What helps me is:
- Prefer faster models like VSCode's Copilot Raptor Mini which, despite the name, is about 80% as capable as Sonnet 4.5, and much faster. It is a fine-tuned GPT 5 mini.
- Start writing the next prompt while the LLM works, or keep pondering the current problem at hand. This helps our chaotic brains stay focused.
I find that any additional overhead caused by the separate AI chat is saved 20x over by basically never having to use a browser to look at documentation and S/O while coding.
That makes sense. I do use AI for questions like "what's the best way to flatten a list of lists in Python" or "what is the interface for this library function". I just don't use it the way I see some people do where they have it write the rough draft of their code or identify where a bug is.
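For that flatten question, the kind of answer I'm expecting back is a one-liner along these lines (a trivial example, but it saves the lookup):

```python
from itertools import chain

nested = [[1, 2], [3], [4, 5, 6]]

flat = [x for sub in nested for x in sub]    # list-comprehension version
flat2 = list(chain.from_iterable(nested))    # itertools version, same result

assert flat == flat2 == [1, 2, 3, 4, 5, 6]
```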
And studies find that that 20x is actually a 0.8x
0.1x
> If you’ve actually tried modern tools and they didn’t work for you, that’s a conversation worth having. But “I tried ChatGPT in 2022” isn’t that conversation.
How many people are actually saying this? Also how does one use modern coding tools in heavily regulated contexts, especially in Europe?
I can't disagree with the article and say that AI has gotten worse because it truly hasn't, but it still requires a lot of hand holding. This is especially true when you're 'not allowed' to send the full context of a specific task (like in health care). For now at least.
This story ends up being relevant in a metaphorical way.
My aunt was born in the 1940s and was something of an old-fashioned feminist. She didn't know why she wasn't allowed to wear pants, or why she had to wait for the man to make the first move, etc. She tells a story about a man who ditched her at a dance once because she didn't know the "latest dance." Apparently in the 1950s, some idiot was always inventing a new dance that everyone _just had to follow_. The young man was so embarrassed that he left her at the dance.
I still think about this story, and think about how awful it would have been to live in the 40s. There always has been social pressure and change, but the "everyone's got to learn new stupid dances all the time" sort of pressure feels especially awful.
This really reminds me of the last 10-20 years in technology. "Hey, some dumb assholes have built some new technology, and you don't really have the choice to ignore it. You either adopt it too, or are left behind."
As I see it, this is an inherent part of the tech industry. Unless you expressly choose to focus your career on maintaining legacy code, your value as a dev depends on your ability and willingness to continuously learn new tech.
Just normal Luddite things, which attracts those most threatened in their personal identity by the new technology.
You see it obviously with the artists and image/video generators too.
We went through this before with Dadaism and Impressionism and photography in art, too.
Ultimately, it's just more abstraction that we have to get used to -- art is stuff people create with their human expression.
It is funny to see everyone argue so vehemently without any interest in the same arguments that happened in the past.
Exit Through the Gift Shop is a good movie that explores that topic too, though with near-plagiarized mass production, not LLMs, but I guess that's pretty similar too!
https://daily.jstor.org/when-photography-was-not-art/
https://www.youtube.com/watch?v=IqVXThss1z4
https://en.wikipedia.org/wiki/Dada
I mean, luddites have consistently been correct. Technological advancements have consistently been used to benefit the rich at the expense of regular people.
The early Industrial Revolution that the original Luddites objected to resulted in horrible working conditions and a power shift from artisans to factory workers.
Dadaism was a reaction to WWI, where the aristocracy's greed and petty squabbling led to 17 million deaths.
I don't disagree with that, just that there's anything that can be done about it. Which technology did we successfully roll back? Nukes are the closest I think you can get and those are very hard to make and still exist in abundance, we just somewhat controlled who can have them
> Which technology did we successfully roll back?
Quite a few come to mind: chemical and biological weapons, beanie babies, NFTs, garbage pail kids... Some take real effort to eradicate, some die out when people get bored and move on.
Today's version of "AI," i.e. large language models for emitting code, is on the level of fast fashion. It's novel and surprising that you can get a shirt for $5, then you realize that it's made in a sweatshop, and it falls apart after a few washings. There will always be a market for low-quality clothes, but they aren't "disrupting non-nudity."
Chemical weapons still exist and are used[1]
So are beanie babies, NFTs, and garbage pail kids -- falling out of fashion isn't the same thing as eradicating a technology. I think that's part of the difficulty: how could you roll back knowledge without some Khmer Rouge generational trauma?
I think about the original use of steam engines and the industrial revolution -- steam engines were so inefficient that their use didn't make sense outside of pulling their own fuel out of the ground -- many people said haha look how silly and inefficient this robot labor is. We can see how that all turned out.[2]
1: https://www.armscontrol.org/factsheets/timeline-syrian-chemi...
2: https://en.wikipedia.org/wiki/Newcomen_atmospheric_engine
> Things that have fallen out of fashion isn't the same thing as eradicating a technology.
That's true. Ruby still exists, for example, though it's sitting down below COBOL on the Tiobe index. There's probably a community trading garbage pail kids on Facebook Marketplace as well. Ideas rarely die completely.
Burning fossil fuels to turn heat into kinetic energy is genuinely better than using draft animals or human slaves. Creating worse code (or worse clothing) for less money is a tradeoff that only works for some situations.
This guy's knowledge of art history is the Dada wikipedia page and the Banksy movie from 20 years ago.
Allow me to repeat myself: AI is for idiots.
Since you're a real established artist, I want to make my point more clear: I am not an artist and while AI image tools let me make fun pictures and not be reliant on artists for projects, it doesn't imbue me with the creativity to create artistic works that _move_ people or comment on our society. AI doesn't give or take that from you, and I argue that is what truly separates art and artists from doodles and doodlers.
gottem Jenny
> Just normal Luddite things, which attracts those most threatened in their personal identity by the new technology.
I feel like “Luddite” is a misunderstood term.
https://en.wikipedia.org/wiki/Luddite
> Malcolm I. Thomis argued in his 1970 history The Luddites that machine-breaking was one of the very few tactics that workers could use to increase pressure on employers, undermine lower-paid competing workers, and create solidarity among workers. "These attacks on machines did not imply any necessary hostility to machinery as such; machinery was just a conveniently exposed target against which an attack could be made." [emph. added] Historian Eric Hobsbawm has called their machine wrecking "collective bargaining by riot", which had been a tactic used in Britain since the Restoration because manufactories were scattered throughout the country, and that made it impractical to hold large-scale strikes. An agricultural variant of Luddism occurred during the widespread Swing Riots of 1830 in southern and eastern England, centring on breaking threshing machines.
Luddites were closer to “class struggle by other means” than “identity politics.”
I'm not an AI fanatic, but I do use ChatGPT often. In my experience, ChatGPT now is only marginally better than it was in 2022. The only real improvements are due to "thinking" abilities, i.e. searching the web and spending more tokens (basically prompting itself). The underlying model still feels largely the same to me.
I feel like I'm living in a different world when every time a new model comes out, everyone is in awe, and it scores exceptionally well on some benchmark that no one had heard of before the model even launched. And then when I use it, it feels exactly the same as all the models before, and makes the same stupid mistakes as always.
I just don't find it interesting. The only thing less interesting is the constant evangelism about it.
I also find that the actual coding is important. The typing may not be the most interesting bit, but it's one of the steps that helps refine the architecture I had in my head.
100% agree. My only super power is weaponized “trying to understand”, spending a Saturday night in an obsessive fever dream of trying to wrap my head around some random idea.
That happens to produce good code as a side effect. And a chat bot is perfect for this.
But my obsession is not with output. Every time I use AI agents, even if it does exactly what I wanted, it’s unsatisfying. It’s not something I’m ever going to obsess over in my spare time.
It's good to be skeptical of new ideas as long as you don't box yourself in with dogmatism. If you're young you do this by looking at the world with fresh eyes. If you are experienced you do it by identifying assumptions and testing them.
Ha! I just saw one of these this morning on LinkedIn, an engineer complaining about AI / Vibecoding and thought exactly the same. I find these overreactions amusing.
I don’t know why this is so controversial; it’s just a tool. You should learn to use it, otherwise, as the author of this post said, you will get left behind. But don’t cut yourself on the new tool (lots of people are doing this).
I personally love it because it allows me to create personal tools on the side that I just wouldn’t have had time for in the past. The quality doesn’t matter so much for my personal projects and I am so much more effective with the additional tools I’m able to create.
> I don’t know why this is so controversial it’s just a tool
Do you really "don't know why"? Are you sure?
I believe that ignoring the consequences that commercial LLMs are having on the general public today is just as radical as being totally opposed to them. I can at least understand the ethical concerns, but being completely unaware of the debate on artificial intelligence at this stage is really something that leaves me speechless, let me tell you.
AI is a tool. As with every other tool under the sun, it has strengths and weaknesses, and it's our job as software engineers to try it out and understand when/how to use it in our workflows, or whether it fits our use cases at all.
If you disagree with the above statement, try replacing "AI" with "Docker", "Kubernetes", "Microservices architecture", "NoSQL", or any other tool/language/paradigm that was widely adopted in the software development industry until people realized it's awesome for some scenarios but not a be-all and end-all solution.
I wonder how many of us are like me: Just waiting for AI to get Good Enough (TM). The skill required to use AI is probably decreasing, and the AI getting better, so why not just wait? Time will tell.
Exactly, if these tools are going to be so revolutionary and different within the next 6 months and even more so beyond that - there's no advantage to being an early adopter since your progress becomes invalid, may as well wait until it is good enough.
I like learning, I like programming, primarily because it lets me create whatever App I want. I'm continually choosing the most productive languages, IDEs and tooling that lets me be the most productive. I view AI in the same regard, where it lets me achieve whatever I want to create, but much faster.
Sure if you want to learn programming languages for programming sake, then yeah don't Vibe Code (i.e. text prompting AI to code), use AI as a knowledgeable companion that's readily on hand to help you whenever you get stuck. But if your goal is to create Software that achieves your objectives then you're doing yourself a disservice if you're not using AI to its maximum potential.
Given my time on this earth is finite, I'm in the camp of using AI to be as productive as possible. But that's still not everything yet, I'm not using it for backend code as I need to verify every change. But more than happy to Vibe code UIs (after I spend time laying down a foundation to make it intuitive where new components/pages go and API integration).
Other than that I'll use AI where I can (UIs, automation & deployment scripts, etc), I've even switched over to using React/Next.js for new Apps because AI is more proficient with it. Even old Apps that I wouldn't normally touch because it used legacy tech that's deprecated, I'll just rewrite the entire UI in React/Next.js to get it to a place where I can use text prompts to add new features. It took about ~20mins for Claude Code to get the initial rewrite implemented (using the old code base as a guide) then a few hours over that to walk through every feature and prompt it to add features it missed or fix broken functionality [1]. I ended up spending more time migrating it from AWS/ECS/RDS to Hetzner w/ automated backups - then the actual rewrite.
[1] https://react-templates.net/docs/vibe-coding/rewrite-legacy-...
They will get on board when it's good enough.
I can't imagine being so eager to socially virtue signal. Presumably some greybeard told him it was a waste of time and it upset him
That's just blatant ageism!
I don't have a beard, but if I did I'm sure it would be white, beyond grey.
It's okay. It's okay to feel annoyed, you have a tough battle ahead of you, you poor things.
I may be labelled a greybeard, but at least I get to program computers. By the time you have a grey beard, maybe you are only allowed to talk to them. If you are lucky and the billionaires that own everything let you...
Sorry :) I couldn't resist. I think I'm the oldest person in the department and I think also that I am probably one of the ones that have been using AI in software development the most.
Don't be so quick to point at old people and make assumptions. Sometimes all those years actually translate into useful experience :)
Possibly. The focus of a lot of young people should be to try and effect political change that stops billionaires' wealth from growing unchecked. AI is going to accelerate this very rapidly now. Just look at what kind of world some of those with the most wealth are wanting to impose on the others now. It's frightening.
My reasons for initially dismissing it is because to me it felt like it was taking the fun part of the job. We have all these tasks, and writing the code is this creative act, designed to be read by other humans. Just like how I don’t want AI to write music for me.
But I see where things are going. I tried some of the newer tooling over the past few weeks. They’re too useful to ignore now. It feels like we’re entering into an industrial age for software.
it’s linked in the blog, but the prior post is great on its own too: https://terriblesoftware.org/2025/12/11/ai-can-write-your-co...
it does seem like the skepticism is fading. I do think engineers that outright refuse to use AI (typically on some odd moral principle) are in for a bad time
Maybe I've always been a terrible engineer but I'm humble enough to admit the way I code has always been exactly like the LLM. If it's something brand new I'm googling it and pattern matching how to write it. If it's based on existing functionality I'm doing ctrl + f and pattern matching based on that and how to insert the minimal code changes to accomplish the task
Many have the attitude of finding one edge case where it doesn’t work well and dismissing AI as a useful tool.
I’m an early adopter and nowadays all I do is to co-write context documents so that my assistant can generate the code I need
AI gives you an approximated answer; it’s up to you to steer it to a good-enough answer, which takes time and a learning curve … and it evolves really fast.
Some people are just not good at constantly learning things
> Many have the attitude of finding one edge case where it doesn’t work well and dismissing AI as a useful tool
Many programmers work on problems (nearly) *all day* where AI does not work well.
> AI gives you an approximated answer; it’s up to you to steer it to a good-enough answer
Many programmers work on problems where correctness is of essential importance, i.e. if a code block is "semi-right" it is of no use - and even having to deal with code blocks where you cannot trust that the respective programmer thought deeply about such questions is a huge time sink.
> Some people are just not good at constantly learning things
Rather: some people are just not good at constantly looking beyond their programming bubble where AI might have some use.
[flagged]
Jenny, please try to conduct yourself with some sense of decorum here -- These are real people you're bullying. This isn't a hatemonger platform like some of the others. Please try to do better
That doesn't seem bullying to me
they called me an idiot in the other thread for pointing out AI is broader than just LLMs (after they called everyone that uses AI an idiot) lol they’re clearly very angry and bitter, and I believe this is not the first account they’ve made to bombard threads with insults. in another comment they advocate for insulting the “AI idiots”
it’s not bullying in that it’s more entertaining than insulting, but still
ah in another comment (I am enjoying reading these):
> Ruthlessly bully LLM idiots
quite openly advocating for “bullying” people that use the bad scary neural nets!
you're putting words in my mouth. i said the big STUPID neural nets.
now THIS is pod racing! :)
If you feel the need to hype up AI to this degree, you should provide some data proving that AI use actually increases productivity. This type of fact-free polemic isn’t interesting or useful.
I dont feel my coworkers who use AI finish task faster than the ones who dont.
i feel my coworkers who use AI are slower, stupider, and less trustworthy than my coworkers who don't.
This fits my experience: programmers who are very vocal in their hatred of using AI for programming work have, in my opinion, traits that make them great programmers (but I have to admit that such people often don't score very high on the Agreeableness personality trait :-) ).
by AI you mean LLMs right? I use a neural network to blur my background on Zoom calls, among other non-LLM usage
[flagged]
I just wanted to clarify only some neural networks are bad :) technical precision of language is important to avoid confusion
sorry you’re so angry though. best of luck
0.1x
As one of those on the skeptical side, one train of thought I have not seen people even mention is, the way we’re using LLMs to code now is largely to use a less precise language (mostly English) to specify what’s often a very precise problem and solution. Why would we think that spoken language is the best interface for doing this?
I’m wondering if we can do something better…
I work in crypto (L1 chain) as a DevOps engineer (LOTS of baremetal, LOTS of CI/CD etc) and it's been amazing to see what Claude can do in this space too.
e.g. had an issue with connecting to AWS S3, gave Claude some of the code to connect and it diagnosed a CREDENTIALS issue without seeing the credentials file nor seeing the error itself. It can even find issues like "oh, you have an extra space in front of the build parameter that the user passed into a Jenkins job". Something that a human might have found in 30+ minutes of grepping, checking etc it found in <30 seconds.
It also makes it trivial to do things like "hey, convert all of the print statements in this python script to log messages with ISO 8601 time format".
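That kind of transformation is mechanical, which is exactly why it works so well. Roughly, every print statement becomes something like this (the variable names here are made up for illustration):

```python
import logging

# Before: print(f"uploaded {name} to {bucket}")
# After: a module-level logger with an ISO 8601-style timestamp format.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S%z",
)
log = logging.getLogger(__name__)

name, bucket = "report.csv", "my-bucket"   # hypothetical values
log.info("uploaded %s to %s", name, bucket)
```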
Folks talk about "but it adds bugs" but I'm going to make the opposite argument:
The excuse of "we don't have time to make this better" is effectively gone. Quality code that is well instrumented, has good metrics and easy to parse logs is only a few prompts away. Now, one could argue that was the case BEFORE we had AI/LLMs and it STILL didn't happen so I'm going to assume folks that can do clean up (SRE/DevOps/code refactor specialists) are still going to be around.
> gave Claude some of the code to connect and it diagnosed a CREDENTIALS issue without seeing the credentials file nor seeing the error itself
10 years ago google would have had a forum post describing your exact problem with solutions within the first 5 results.
Today google delivers 3 pages of content farm spam with basic tutorials, 30% of them vaguely related to your problem, 70% just containing "aws" somewhere, then stops delivering results.
The LLM is just fixing search for you.
Edit: and by the way, it can fix search for you just because somewhere out there there are forum posts describing your exact problem.
What's a "Code Refactor Specialist"? Are you implying that in the future we'll have programmers who will just write code using AI and a specialist role whose job it would be to clean up that code? That isn't going to work, you'll need a superhuman for that role. People who write the code using AI have to be the ones who review it and they have to be responsible for the quality of that code.
Yes, I remember a while ago it fixed a pipeline problem where I had managed to copy and paste an IP with one of the digits missing at the end. I spent about an hour before that looking at everything else (all the other steps succeeded, but the last one 'timed out', because I had copied and pasted it wrong). As you said, it took <30 secs to instantly diagnose the problem.
What you suggested here is trivial with existing tools—linters in the first case, search-and-replace functions in editors for the second.
I have yet to see any evidence of the third case. I'm close to banning AI for my junior devs. Their code quality is atrocious. I don't have time for all that cleanup. Write it well the first time around.
We are moving up an abstraction layer. From the perspective of the business, my job is not to write code, my job is to ship products. The language you use to ship products is your tool of choice. Sure, it could be Python or Typescript, but my tool of choice is natural language.
>that’s a conversation worth having
I'm not even sure there is much room left for one.
There is very little alignment in starting assumptions between most parties in this convo. One guy is coding mission critical stuff, the other is doing throw away projects. One guy depends on coding to put food on table, the other does not. One guy wants to understand every LoC, other is happy to vibe code. One is a junior looking for first job, other is in management in google after being promoted out of engineering. One guy has access to $200/m tech, the other does not. etc etc
We can't even get consensus on tabs vs spaces... we're not going to get consensus on AI & coding, or on who is "right".
Perhaps a bit nihilistic & jaded, but I'm very much leaning towards "place your bets & may the odds be ever in your favour".
We’ve been “losing skills” to better tools forever, and it’s usually been a net positive. Nobody hand-writes a sorting algorithm in production to “stay sharp”, most of us don’t do long division because calculators exist, and plenty of great engineers today couldn’t write assembly (or even manage memory in C) comfortably. That didn’t make the industry worse; it let us build bigger things by working at higher abstraction.
LLM-assisted coding feels like the next step in that same pattern. The difference is that this abstraction layer can confidently make stuff up: hallucinated APIs, wrong assumptions, edge cases it didn’t consider. So the work doesn’t disappear, it shifts. The valuable skill becomes guiding it: specifying the task clearly, constraining the solution, reviewing diffs, insisting on tests, and catching the “looks right but isn’t” failures. In practice it’s like having a very fast junior dev who never gets tired and also never says “I’m not sure”.
That’s why I don’t buy the extremes on either side. It’s not magic, and it’s not useless. Used carelessly, it absolutely accelerates tech debt and produces bloated code. Used well, it can take a lot of the grunt work off your plate (refactors, migrations, scaffolding tests, boilerplate, docs drafts) and leave you with more time for the parts that actually require engineering judgement.
On the “will it make me dumber” worry: only if you outsource judgement. If you treat it as a typing/lookup/refactor accelerator and keep ownership of architecture, correctness, and debugging, you’re not getting worse—you’re just moving your attention up the stack. And if you really care about maintaining raw coding chops, you can do what we already do in other areas: occasionally turn it off and do reps, the same way people still practice mental math even though Excel exists.
Privacy/ethics are real concerns, but that’s a separate discussion; there are mitigations and alternatives depending on your threat model.
At the end of the day, the job title might stay “software engineer”, but the day-to-day shifts toward “AI guide + reviewer + responsible adult.” And like every other tooling jump, you don’t have to love it, but you probably do have to learn it—because you’ll end up maintaining and reviewing AI-shaped code either way.
Basically, I think the author hit the nail on the head.
If you don't see the limitations of vibe coding, I shudder on the idea of maintaining your code even pre-AI.
Do I use it? Yes, a lot, actually. But I also spend a lot of time pruning its overly verbose and byzantine code; my Esc key is fading from the number of times I've interrupted it to steer it in a non-idiotic direction.
It is useful, but if you trust it too much, you're creating a mountain of technical debt.
When the answer to whether a given port is in the ephemeral range is still clearly wrong, it's not clear they've moved on at all.
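Part of why that question trips models up is that "ephemeral range" has two reasonable answers: the IANA-suggested dynamic range and whatever the kernel is actually configured to use. A quick sketch of the distinction (the function names are mine):

```python
from pathlib import Path

IANA_EPHEMERAL = range(49152, 65536)  # IANA-suggested dynamic/private ports

def linux_ephemeral_range() -> range:
    """Read the ephemeral port range the local kernel actually uses.

    Linux defaults to 32768-60999, which is not the IANA range."""
    low, high = Path("/proc/sys/net/ipv4/ip_local_port_range").read_text().split()
    return range(int(low), int(high) + 1)

def is_ephemeral(port: int, ephemeral: range = IANA_EPHEMERAL) -> bool:
    return port in ephemeral

print(is_ephemeral(8080))   # False
print(is_ephemeral(51234))  # True
```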
The fact that I hear this mantra over and over again:
"She wrote a thing in a day that would have taken me a month"
This scares me. A lot.
I never found the coding part to be a bottleneck, but the issues arise after the damn thing is in prod. If I work on something big (that will take me a month), that's going to be anywhere from (I'm winging these numbers) 10K LOC to 25K LOC.
If that's the benchmark for me, the next guy using AI will spew out at a bare minimum double the amount of code, and in many cases 3x-4x.
The surface area for bugs is just vastly bigger, and fixing these bugs will eventually take more time than you "won" by using AI in the first place.
It really depends on how you use it. I really like using AI for prototyping new ideas (it can run in the background while I work on the main project) and for getting the boring grunt work (such as creating CRUD endpoints on a RESTful API) out of the way, leaving me more time to focus on the code that really is challenging and needs a deeper understanding of the business or the system as a whole.
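To be clear about the level of grunt work I mean, it's roughly endpoints like these, sketched here with FastAPI and an in-memory store purely for illustration (not my actual stack or codebase):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Task(BaseModel):
    id: int
    title: str
    done: bool = False

_tasks: dict[int, Task] = {}   # in-memory store, stands in for a real database

@app.post("/tasks")
def create_task(task: Task) -> Task:
    if task.id in _tasks:
        raise HTTPException(status_code=409, detail="task already exists")
    _tasks[task.id] = task
    return task

@app.get("/tasks/{task_id}")
def read_task(task_id: int) -> Task:
    if task_id not in _tasks:
        raise HTTPException(status_code=404, detail="task not found")
    return _tasks[task_id]

@app.delete("/tasks/{task_id}")
def delete_task(task_id: int) -> None:
    _tasks.pop(task_id, None)
```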
The boring stuff like CRUD always needs design. Otherwise you end up with a 2006-era PHP-like "this is a REST API" spaghetti monster. The fact that AI can't do this (and probably never will) is just another showstopper.
I tried AI, but the code it produces (at a higher level) is of really poor quality. Refactoring it is a HUGE PITA.
AI ≠ Transformers
I keep seeing this conflation over and over from so-called "engineers".
You can dismiss the current crop of transformers without dismissing the wider AI category. To me this is like saying that users "dismiss computers" because they dismiss Windows and prefer Linux instead, or that someone "rejects modern practices" for not getting on the microservice hype train or not using React.
Intellisense pre-GPT is a good example of AI that wasn't using transformers.
And of course, you can criticise some uses of transformers in IDEs and editors while appreciating and using others.
"My coworker uses Claude Code now. She finished a project last week that would’ve taken me a month". This is one of those generalisations. There is no nuance here. The range of usage from boilerplate to vibe code level is vast. Quickly churning out code is not a virtue. It is not impressive to ship something only to find critical bugs on the first day. Nor is it a virtue using it at the cost of losing understanding of the codebase.
This rigid thinking by devs needs to stop imo. For so called rational thinkers, the development world is rife with dogma and simplistic binary thinking.
If using transformers at any level is cost-effective for all, the data will speak for itself. Vague statements and broad generalisations are not going to sway anyone, and will just make these kinds of articles sound like validation-seeking behaviour.
The author doesn't consider the possibility that engineers dismiss AI after having tried it, repeatedly. Not once, not twice, but consistently.
I am one of those dismissers. I am constantly trash-talking AI. I have also tried more tools and more stress scenarios than a lot of enthusiasts. The high bars are not in my head; they are in my repositories.
Talk is cheap. Show me your AI generated code. Talk tech, not drama.
Asbestos and leaded petrol solved problems for the people who used them, too.
It's been somewhat disheartening to see many techie spaces (HackerNews included) become so skeptical and anti-AI. It's as if the luddites are at it again, resisting progress because of a bad impression or because they fear the consequences.
AI is a tool and it should be treated as such.
Also, beware of snake oil salesmen. Is AI going to integrate widely into the world? Yes. Is it also going to destroy all the jobs in the world? Of course not; luddites don't understand the naïvety of that position.
New things are not always progress.
And even if LLMs turn out to really be a net positive and a requirement for the job, they're antithetical to what most software developers appreciate and enjoy (precision, control, predictability, efficiency...).
There sure seem to be two kinds of software developers: those who enjoy the practice and those who're mostly in it for the pay. If LLMs win, it will be the second group who stay on the job, and that's fine; it won't mean that the first group was made of luddites, but that the job has turned into crap that others will take over.
The two categories of software developers you mention already existed pre-ChatGPT and will likely continue to exist. If anything, AI is going to make those who're in it just for the money much less relevant.
Do you really think that Software Engineering is going to be less about precision, control, predictability, and efficiency? These are fundamental skills regardless of AI.
As someone whose stance is to be extremely skeptical of AI, I threw Claude at a complex feature request in a codebase I wasn't very familiar with, and it managed to come up with a solution that was 99% acceptable. I was very impressed, so I started using it more.
But it's really a mixed bag, because for the subsequent 3-4 tasks in a codebase that I was familiar with, Claude managed to produce over-commented, over-engineered slop that didn't do what I asked for and took shortcuts in implementing the requirements.
I definitely wouldn't dismiss AI at this point because it occasionally astounds me and does things I would never in my life have imagined possible. But at other times, it's still like an ignorant new junior developer. Check back again in 6 months I guess.
> I copy from Stack Overflow constantly.
I'm so tired of this kind of reference to Stack Overflow. I used SO for about 15 years, and still visit plenty these days.
I rarely, if ever, copied from Stack Overflow. But I sure learned a great deal from SO.
That's great, but millions of others copy from it.
And those are the people who love AI.
> The gap is widening between engineers who’ve integrated these tools and engineers who haven’t.
Let's wait to evaluate until the honeymoon phase is over. At the moment there are plenty of companies offering cheap AI tools. It will not stay that way. At the moment most of their training data is man-made rather than AI-made, which matters because training on AI-made data makes models worse. That will not stay that way either.
Yeah, it boggles my mind how many people on here constantly dismiss LLMs.
It's very clearly getting better and better rapidly. I don't think this train is stopping even if this bubble bursts.
The cold-ass reality is: we're going to need a lot fewer software engineers moving forward, just like agriculture now needs far fewer humans to do the same work as it did in the past.
I hate to be blunt but if you're in the bottom half of the developer skill bell curve, you're cooked.
If you hate reading other people's code, then you'll hate reading LLM-generated code, and all you'll ever be with AI, at best, is yet another vibe coder who produces piles of code they never intend to read; you should have found another career even before LLMs were a thing.
Responsible use of AI means reading lots and lots of generated code, understanding it, reviewing and auditing it, not "vibe coding" for the purpose of avoiding ever reading any code.
> If you hate reading other people's code, then you'll hate reading LLM-generated code, and all you'll ever be with AI, at best, is yet another vibe coder who produces piles of code they never intend to read; you should have found another career even before LLMs were a thing.
I do like to read other people's code if it is of an exceptionally high standard. But otherwise I am very vocal in criticizing it.
What I want to see is a Destroy All Software style screencast where somebody actually demonstrates their AI workflow on legacy code.
IMO those screencasts work because they are painstakingly planned toy projects built from scratch.
Even without AI you cannot do a tight 10-minute video on legacy code unless you have done a lot of work ahead of time to map it out, and then what's the point?
That would be fantastic. I've seen so many claims like the author's:
> [Claude Code and Cursor] can now work across entire codebases, understand project context, refactor multiple files at once, and iterate until it’s really done.
But I haven't seen anyone doing this on, e.g., YouTube. Maybe that kind of content isn't easy to monetize, but if it's as easy to use AI as everyone says, surely someone would try.
> if it’s as easy as everyone says surely someone would try.
Yeah, 18 months ago we were apparently going to have personal SaaSes and all sorts of new software - I don't see anything but an even more unstable web than ever before
I don't mean to sound rude, but when I read comments like this, I wonder if I'm using the same model and tools?
I've done this many times over, and it's by far one of the least impressive things I've seen CC achieve with a good agent/skills/collab setup.
Link to your Youtube channel or Twitch stream?
I would never have had a working LoongArch emulator in 2 weeks at the kind of quality that I desire without it. Not because it writes perfect code, but because it sets everything up according to my will, does some things badly, and then I can take over and do the rest. The first week I was just amending a single commit that set everything up right and got a few programs working. A week after that it runs on multiple platforms with JIT-compilation. I'm not sure what to say, really. I obviously understand the subject matter deeply in this case. I probably wouldn't have had this result if I ventured into the unknown.
Although, I also made it create Rust and Go bindings. Two languages I don't really know that well. Or, at least not well enough for that kind of start-to-finish result.
Another commenter asked a really interesting question: how do you not degrade your abilities? I have to say that I still had to spend days figuring out really hard problems. Who knew that 64-bit MinGW has a different struct layout for gettimeofday than 64-bit Linux? It's not that it's not obvious in hindsight, but it took me a really long time to figure out that was the issue, when all I have to go on is something that looks like incorrect instruction emulation. I must have read the LoongArch manual up and down several times and gone through instructions one by one, disabling everything I could think of, before finally landing on the culprit just being a mis-emulated, kind-of-legacy system call that tells you the time. ... and if the LLM had found this issue for me, I would have been very happy about it.
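For anyone curious about the kind of trap that was, here's a minimal ctypes sketch of the layout mismatch, as an illustration rather than a reference; the point is simply that the two ABIs disagree on the width of the timeval fields:

```python
import ctypes

# 64-bit Linux (glibc): time_t and suseconds_t are both 64-bit,
# so struct timeval occupies 16 bytes.
class TimevalLinux64(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_int64),
                ("tv_usec", ctypes.c_int64)]

# 64-bit Windows/MinGW (winsock-style timeval): `long` stays 32-bit,
# so the same struct occupies only 8 bytes.
class TimevalMinGW64(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_int32),
                ("tv_usec", ctypes.c_int32)]

print(ctypes.sizeof(TimevalLinux64))   # 16
print(ctypes.sizeof(TimevalMinGW64))   # 8
# An emulated gettimeofday that writes one layout into a guest expecting
# the other will silently corrupt whatever sits after the struct.
```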
There are still unknowns that LLMs cannot help with, like running Golang programs inside the emulator. Golang has a complex run-time that uses signal-based preemption (sysmon), threads, and many other things, which I do emulate, but there is still something missing to get all the way through to main() even for a simple Hello World. Who knows if it's the ucontext that signals can pass, or something with threads, or per-thread signal state. Progression will require reading the Go system libraries (which are plain source code) and the assembly for the given architecture (LA64), and perhaps instrumenting it so that I can see what's going wrong. Another route could be implementing an RSP server for remote GDB via a simple TCP socket.
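The RSP route really is as lightweight as it sounds at the wire level. A minimal, hypothetical stub that only acknowledges packets and reports a stop reason might look like this (a real stub would also need to answer register and memory reads):

```python
# Minimal sketch of a GDB Remote Serial Protocol (RSP) stub over TCP.
# Packets look like $<payload>#<checksum>, where the checksum is the
# modulo-256 sum of the payload bytes in hex; '+' acknowledges a packet,
# and an empty reply ($#00) means "command not supported".
import socket

def checksum(payload: bytes) -> bytes:
    return b"%02x" % (sum(payload) % 256)

def send_packet(conn: socket.socket, payload: bytes) -> None:
    conn.sendall(b"$" + payload + b"#" + checksum(payload))

def serve(port: int = 1234) -> None:
    srv = socket.create_server(("127.0.0.1", port))
    conn, _ = srv.accept()
    buf = b""
    while True:
        data = conn.recv(4096)
        if not data:
            break
        buf += data
        # Acknowledge and answer each complete $...#xx packet.
        while True:
            start = buf.find(b"$")
            end = buf.find(b"#", start + 1)
            if start == -1 or end == -1 or len(buf) < end + 3:
                break
            payload = buf[start + 1:end]
            buf = buf[end + 3:]
            conn.sendall(b"+")               # ack
            if payload == b"?":
                send_packet(conn, b"S05")    # "stopped with SIGTRAP"
            else:
                send_packet(conn, b"")       # unsupported -> empty reply

if __name__ == "__main__":
    serve()
```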
In conclusion, I will say that I can only remember two times I ditched everything the LLM did and just did it myself from scratch. It's bound to happen, as programming is an opinionated art. But I've used it a lot just to see what it can dream up, and it has occasionally impressed. Other times I'm in disbelief as it mishandles simple things, like avoiding an extra masking operation by moving something signed into the top bits so that extracting it is a single shift, while sharing space with something else in the lower bits. Overall, I feel like I've spent more time thinking about higher-level things (and occasionally low-level optimizations).
I am neither pro- nor anti-AI. I just don't like the manipulative and blackmailish tactics its proponents use to get me to use it. I will use it whenever I find it useful, not because you tell me I'm getting "left behind" by not adopting it.
Well said!
> if you haven’t tried modern AI coding tools recently, try one this week.
I don’t think I will. I am glad I have made the radical decision, for myself, to wilfully remain strict in my stance against generative AI, especially for coding. It doesn’t have to be rational, there is good in believing in something and taking it to its extreme. Some avoid proprietary software, others avoid eating sentient beings, I avoid generative AI on pure principle.
This way I don’t have to suffer from these articles that want to make you feel bad, and become almost pleading, “please use AI, it’s good now, I promise” which I find frankly pathetic. Why do people care so much about it to have to convince others in this sad routine? It honestly feels like some kind of inferiority complex, as if it is so unbearable that other people might dislike your favourite tool, that you desperately need them to reconsider.
cough
The Strange Case of "Engineers" Who Use AI
I rely on AI coding tools. I don’t need to think about it to know they’re great. I have instincts which tell me convenience = dopamine = joy.
I tested ChatGPT in 2022, and asked it to write something. It (obviously) got some things wrong; I don’t remember what exactly, but it was definitely wrong. That was three years ago and I've forgotten that lesson. Why wouldn't I? I've been offloading all sorts of meaningful cognitive processes to AI tools since then.
I use Claude Code now. I finished a project last week that would’ve taken me a month. My senior coworker took one look at it and found 3 major flaws. QA gave it a try and discovered bugs, missing features, and one case of catastrophic data loss. I call that “nitpicking.” They say I don’t understand the engineering mindset or the sense of responsibility over what we build. (I told them it produces identical results and they said I'm just admitting I can't tell the difference between skill and scam).
“The code people write is always unfinished,” I always say. Unlike AI code, which is full of boilerplate, adjusted to satisfy the next whim even faster, and generated by the pound.
I never look at Stack Overflow anymore; it's dead. Instead I want the info to be remixed and scrubbed of all its salient details, and have an AI hallucinate the blanks. That way I can say that "I built this" without feeling like a fraud or a faker. The distinction is clear (well, at least in my head).
Will I ever be good enough to code by myself again? No. When a machine showed up that told me flattering lies while sounding like a Silicon Valley boardroom after a pile of cocaine, I jumped in without a parachute [rocket emoji].
I also personally started to look down on anyone who didn't do the same, for threatening my sense of competence.
From some of the engineers I've debated this with, I think some of them have just dug in their heels at this point and decided they're never going to use LLM tools, period, and are just clinging to the original arguments without really examining the reality of the situation. In particular this "the LLM is going to hallucinate subtle bugs I can't catch" one. The idea that LLMs make mistakes that are somehow more subtle, insidious, and uncatchable compared to any random 25 pull requests you get from humans is simply ridiculous. The LLM makes mistakes that stick out to you like a sore thumb, because they're not your mistakes. The hardest mistakes to catch are your own, because your thinking patterns are what made them in the first place.
The biggest ongoing problem with LLMs for code is that they have no ability to express low confidence in solutions where they don't really have an answer; instead they just hallucinate things. Claude will write ten great bash lines for you, but then on the eleventh it will completely hallucinate an option on some Linux utility you hardly have time to care about, where the correct answer is "these tools don't actually do that, and I don't have an easy answer for how you could". At this point I am very quick to notice, when Claude gets itself into an endless loop of thought, that I'm going about something the wrong way. Someone less experienced would have a very hard time recognizing the difference.
> The idea that LLMs make mistakes that are somehow more subtle, insidious, and uncatchable compared to any random 25 pull requests you get from humans is simply ridiculous.
This is plainly true, and you are just angry that you don't have a rebuttal.
I didn't say the LLM does not make mistakes; I said that the idea that a reviewer is going to miss them at some rate that is any different from the rate for human mistakes is ridiculous.
Missing from these discussions is what kind of code people are talking about. Clearly, if we're talking about a dense, highly mathematical algorithm, I would not let an LLM anywhere near that. We are talking about day-to-day boilerplate/plumbing stuff: the vast majority of boring grunt work that is not intellectually stimulating. If your job is all Carnegie Mellon-level PhD algorithm work, then good for you.
edit: It looks like you made this account four days ago to troll HN on AI stuff. I get it; I have a bit of a mission here myself, to pointedly oppose the entrenched culture (namely the extreme right-wing elements of it). But your trolling is careless and repetitive enough that it makes me wonder: is this an LLM account instructed to troll HN users about LLM use? Funny.
Love to set up a straw man just to knock it down right after. Feels great every time. Then I go to Hacker News and read the comments.