I suppose everyone on HN reaches a certain point with these kinds of thought pieces, and I just reached mine.
What are you building? Does the tool help or hurt?
People answered this wrong in the Ruby era, they answered it wrong in the PHP era, they answered it wrong in the Lotus Notes and Visual BASIC era.
After five or six cycles it does become a bit fatiguing. Use the tool sanely. Work at a pace where the reality of the mess you and your team are actually building does not exceed your understanding of what you are building, if budgets allow.
This seldom happens, even in solo hobby projects once you cost everything in.
It's not about agile or waterfall or "functional" or abstracting your dependencies via Podman or Docker or VMware or whatever that nix crap is. Or using an agent to catch the bugs in the agent that's talking to an LLM you have next to no control over, one that deleted your production database while you slept, then asking it to make illustrations for the postmortem blog post you have it write, which you think elevates your status in the community but probably doesn't.
I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
This x1000. The last 10 years in the software industry in particular seem full of meta-work. Building new frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. All to build... what exactly? Are these tools necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
Hard to shake the feeling that this looks like one big pyramid scheme. I strongly suspect that the vast majority of the "innovation" in recent years has gone straight to supporting the funding model and institution of the software profession, rather than actual software engineering.
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
It was, and is. But not universally.
If you formulate questions scientifically and use the answers to make decisions, that's engineering. I've seen it happen. It can happen with LLMs, under the proper guidance.
If you formulate questions based on vibes, ignore the answers, and do what the CEO says anyway, that's not engineering. Sadly, I've seen this happen far too often. And with this mindset comes the Claudiot mindset - information is ultimately useless so fake autogenerated content is just as valuable as real work.
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
If I engineer a bridge I know the load the bridge is designed to carry. Then I add a factor of safety. When I build a website can anyone on the product side actually predict traffic?
When building a bridge I can consult a book of materials and understand how much a material deforms under load, what its breaking point is, its expected lifespan, etc. Does this exist for servers, web frameworks, network load balancers, etc.?
I actually believe that software "could" be an engineering discipline, but we have a long way to go.
> can anyone on the product side actually predict traffic
Hypothetically, could you not? If you engineer a bridge you have no idea what kind of traffic it'll see. But you know the maximum allowable weight for a truck of X length is Y tons and factoring in your span you have a good idea of what the max load will be. And if the numbers don't line up, you add in load limits or whatever else to make them match. Your bridge might end up processing 1 truck per hour but that's ultimately irrelevant compared to max throughput/load.
Likewise, systems in regulated industries have strict controls for how many concurrent connections they're allowed to handle[1], enforced with edge network systems, and are expected to do load testing up to these numbers to ensure the service can handle the traffic. There are entire products built around this concept[2]. You could absolutely do this, you just choose not to.
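To make the load-limit analogy concrete, here's a toy sketch of the software equivalent of a posted weight limit: a hard cap on concurrent requests, with anything over the cap shed instead of queued. The class name and the 503 string are made up for illustration; this is not any particular product's API.

```python
import threading

class LoadLimiter:
    """Reject work beyond a fixed concurrency cap, like a bridge's posted load limit."""

    def __init__(self, max_concurrent):
        self._slots = threading.BoundedSemaphore(max_concurrent)

    def try_handle(self, handler):
        # Non-blocking acquire: if we're at capacity, shed the request
        # outright rather than letting queued work degrade everyone.
        if not self._slots.acquire(blocking=False):
            return "503: over capacity"
        try:
            return handler()
        finally:
            self._slots.release()

limiter = LoadLimiter(max_concurrent=2)
```

The point is the same as the bridge's: the cap is chosen up front from what the system was engineered to handle, not from hoping traffic stays low.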
If I need a bridge, and there's a perfectly beautiful bridge one town over that spans the same distance - that's useless to me. Because I need my own bridge. Bridges are partly a design problem but mainly a build problem.
In software, if I find a library that does exactly what I need, then my task is done. I just use that library. Software is purely a design problem.
With agentic coding, we're about to enter a new phase of plenty. If everyone is now a 10x developer then there's going to be more software written in the next few years than in the last few decades.
That massive flurry of creativity will move the industry even further from the calm, rational, constrained world of engineering disciplines.
Software packages are more complicated than you make them out to be. Off the top of my head:
- license restrictions, relicensing
- patches, especially to fix CVEs, that break assumptions you made in your consumption of the package
- supply chain attacks
- sunsetting
There’s no real “set it and forget it” with software reuse. For that matter, there’s no “set it and forget it” in civil engineering either, it also requires monitoring and maintenance.
I think it is in certain very limited circumstances. The Space Shuttle's software seems like it was actually engineered. More generally, there are systems where all the inputs and outputs are well understood along with the entire state space of the software. Redundancy can be achieved by running different software on different computers such that any one is capable of keeping essential functions running on its own. Often there are rigorous requirements around test coverage and formal verification.
This is tremendously expensive (writing two or more independent copies of the core functionality!) and rapidly becomes intractable if the interaction with the world is not pretty strictly limited. It's rarely worth it, so the vast majority of software isn't what I'd call engineered.
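For anyone curious what that redundancy looks like in miniature, here's a toy sketch of the N-version idea: run independently written implementations of the same spec and take the majority answer, so a bug in any one version gets outvoted. All names and the stand-in "implementations" are made up; real avionics voting is vastly more involved.

```python
from collections import Counter

def majority_vote(implementations, args):
    """Run independent implementations of the same spec; return the majority answer."""
    results = [impl(*args) for impl in implementations]
    answer, count = Counter(results).most_common(1)[0]
    if count <= len(implementations) // 2:
        raise RuntimeError(f"no majority among results: {results!r}")
    return answer

# Three "independently written" versions of the same toy spec (double the input).
v1 = lambda x: x * 2
v2 = lambda x: x + x
v3 = lambda x: x * 3   # a buggy version, outvoted by the other two
```

Note the cost the comment describes: you've paid for three implementations to get one answer, which is exactly why this is reserved for systems where failure is unacceptable.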
People don't realize how much software engineering has improved. I remember when most teams didn't use version control, and if we did have it, it was crappy. Go through the Joel Test [1] and think about what it was like at companies where the answers to most of those questions was "no."
At the same time, systems have become far more complex. Back when version control was crap, there weren't a thousand APIs to integrate and a million software package dependencies to manage.
Sure, everything seems to have gotten better, and that's why we now need AIs to understand our code bases - the ones we created with our great version control tooling.
Fundamentally we're still monkeys at keyboards; it's just that now there are infinitely many digital monkeys.
Perrow’s book Normal Accidents postulates that, given advances which could improve safety, people just decide to emphasize throughput, speed, profits, etc. He turned out to be wrong about aviation (it got much safer over time) and maritime shipping (there was a perception of a safety crisis in the late 1970s, with oil tankers exploding; now you just hear about the odd exceptional event).
Maybe back in the beginning, but I don't think it's an engineering discipline now. I don't think that's bad though. I always thought we tacked on the word "engineer" so that we could make more money. I'm ok with not being one. The engineers I've known are very strict in their approach, which is good, since I don't want my deck to fall down. Most of us are too risky with our approach. We love to try new things and patterns, rather than just use established ones over time. This is fine with me, but when we apply the term "engineer" to our work, I get a little uneasy, because I think it implies doing something that most of us really don't want to do. That is, absolutely prove our approach works and will work for years to come. Just my opinion though.
I’ve had jobs where my title was “software engineer”, but I never refer to myself as such outside of work. When I tell others what I do, I say I am a software developer. It may seem a pointless distinction, but to me there is a distinction.
Neither myself nor the vast majority of other “software engineers” in our field are living up to what it should mean to be an “engineer”.
The people that make bridges and buildings, those are the engineers. Software engineers, for the very very most part, are not.
I'm similar, except for me the reason is no degree. So at some jobs it's "engineer", at others just "developer"... although at my current job I'm a "technology specialist", which is funny. But I'm getting paid, so whatever.
Most recently I wrote CloudFormation templates to bring up infra for AWS-based agents. I don't use AI-assisted coding, except Googling, which I acknowledge now comes with an AI summary.
A friend of mine is in a toxic company where everyone has to use AI and they're looked down upon if they don't use it. Every minute of their day has to be logged doing something. They're also going to lay off a bunch of people soon since "AI has replaced them". This is in the context of an agency.
I was just reading "How the World Became Rich" and they make an interesting distinction between economic "development" and plain "growth". Amusingly, "development" to them means exactly what you're saying "engineer" should mean: it's sustainable, structural, not ephemeral. Development in the abstract hints at foundational work, building something up to last. It seems like this meaning degradation is common in software. It still blows my mind how the "full-stack" naming stuck, for example.
Edit: on a related note, are there any studies on the all-in long-term cost difference between companies that "develop" vs. "engineer"? I doubt there would be clean data, since the managers who ignored all the warnings about "tech debt" would probably have the say on both compiling and releasing such data.
Does the cost of "tech debt" decrease as the cost of "coding" decreases, or is there a phase transition in the quality of the code? I bet there would be an inflection point if you plotted companies' adoption times of AI coding. Late adopters that timed it for after the models and harnesses and practices were good enough (probably still some time in the near future) would have a lower all-in cost for the same codebase quality.
It’s a bit of a misclassification. In my mind we tend to be more like architects where there are a fair amount of innovative ideas that don’t work all that well in practice. Train stations with beautiful roofs that leak and slippery marble floors, airports with smoke ventilation systems in the floor, etc.
Of course, we use that term for something else in the software world, but architecture really has two tiers, the starchitects building super fancy stuff (equivalent to what we’d call software architects) and the much more normal ones working on sundry things like townhomes and strip malls.
That being said I don’t think people want the architecture pay grades in the software fields.
It's a systems engineering job. You provide context, define interfaces for people, write tests for critical failure modes affecting the customer, describe system behavior, and translate for other people.
At the same time, if you remove "engineer", informatics should fall under the faculty of Science, so scientists, who are even more rigorous than engineers ;)
> A number of these phenomena have been bundled under the name "Software Engineering". As economics is known as "The Miserable Science", software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory. Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot.".
- Edsger Dijkstra, 1988
I think, unfortunately, he may have had us all dead to rights on this one.
Software engineering is real engineering because we rigorously engineer software the way "real" engineers engineer real things.
Software engineering is not real engineering because we do not rigorously engineer software the way "real" engineers engineer real things. <--- YOU ARE HERE
Software engineering is real engineering because we rigorously engineer software the way "real" engineers engineer real things.
1. Applied physics - Software is immediately disqualified. Symbols have no physics.
2. Ethics - Lives and livelihoods depend on you getting it right. Software people want to be disqualified because that stuff is so boring, but this is becoming a more serious issue with every passing day.
People built a lot of great stuff with Ruby, PHP, Notes and VB. I don't know what the problem really is.
Personally I think that whole Karpathy thing is the slowest thing in the world. I mean you can spin the wheels on a dragster all you like and it is really loud and you can smell the fumes but at some point you realize you're not going anywhere.
My own frustration with the general slowness of computing (iOS 26, file pickers, build systems, build systems, build systems, ...) has been peaking lately and frankly the lack of responsiveness is driving me up the wall. If I wasn't busy at work and loaded with a few years worth of side projects I'd be tearing the whole GUI stack down to the bottom and rebuilding it all to respect hard real time requirements.
Software was an engineering discipline... at some places. And it still is, at some places.
Other places were "hack it until we don't know of any major bugs, then ship it before someone finds one". And now they're "hey, AI agents - we can use that as a hack-o-matic!" But they were having trouble with sustainability before, and they're going to still, except much faster.
All (not some) of the most successful devs I've known in the sense of building something that found market fit and making money off it were terrible engineers. They were fairly productive at building features. That's it. And they were productive - until they weren't. Their work ultimately led to outages, lost data, and sensitive data being leaked (to what extent, I don't even know).
The ones who got acquired - never really had to stand up to any due diligence scrutiny on the technical side. Other sides of the businesses did for sure, but not that side.
Many of you here work for "real" tech companies with the budget and proper skin in the game to actually have real engineers and sane practices. But many of you do not, and I am sure many have seen what I have seen and can attest to this. If someone like the person I mentioned above asks you to join them to help fix their problems, make sure the compensation is tremendous. Slop clean-up is a real profession, but beware.
There used to be a saying along the lines of “while you’re designing your application to scale to 1m requests/min, someone out there is making $1m ARR with php and duct tape”
It feels like this takes on a whole new meaning now we have agents - which I think is the same point you were making
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
It isn't. Show me the licensing requirements to be a "software engineer." There are none. A 12 year old can call himself a software engineer and there are probably some who have managed to get remote work on major projects.
That's assuming the axiom that "engineer" must require licensing requirements. That may be true in some jurisdictions, but it's not axiomatically or definitionally true.
Some kinds of building software may be "engineering", some kinds may not be, but anyone seeking to argue that "licensing requirements" should come into play will have to actually argue that rather than treat it as an unstated axiom.
Depends on the country. In some countries, it is a legal axiom (or at least identity).
For the other countries, though, arguing "some countries do it that way" is as persuasive as "some countries drive on the other side of the road." It's true, but so what? Why should we change to do it their way?
> Depends on the country. In some countries, it is a legal axiom (or at least identity).
As I said, "That may be true in some jurisdictions, but it's not axiomatically or definitionally true.". The law is emphatically not an axiom, nor is it definitionally right or wrong; it only defines what's legal or illegal.
When the article raised the question of whether "building software is an engineering discipline", it was very obviously not asking a question about whether the term 'engineering' is legally restricted in any particular jurisdiction.
To my mind, the term "engineering discipline" implies something roughly analogous to Electrical Engineering, Civil Engineering, Mechanical Engineering, Chemical Engineering.
There is no such rigorous definition for "software engineer" which normally is just a self-granted title meaning "I write code."
In Europe they are. Call yourself an engineer without a degree and you and your company can be sued and hit with a big fine, because here you must be legally accountable for disasters, and of course there are hard constraints.
Where specifically? I've been working as a "software engineer" for multiple decades, across three countries in Europe and 2-3 countries outside of Europe, and I've never been sued or received a "big fine" for this. I've even given presentations to government teams and similar, and not a single person has reacted to me (or others) calling ourselves "software engineers" this whole time.
>After five or six cycles it does become a bit fatiguing. Use the tool sanely.
That's increasingly not possible. This is the first time for me in 20 years where I've had a programming tool rammed down my throat.
There's a crisis of software developer autonomy, and it's actually hurting software productivity. We're making worse software, more slowly, because the C-levels have bought the fairy tale that you can replace 5 development resources with 1 development resource plus some tokens.
Change it to "Some people" if your pedanticism won't let you follow the flow.
Or better yet point out the better paths they chose instead. Were they wrestling with Java and "Joda Time"? Talking to AWS via a Python library named after a dolphin? Running .NET code on Linux servers under Mono that never actually worked? Jamming apps into a browser via JQuery? Abstracting it up a level and making 1,400 database calls via ActiveRecord to render a ten item to-do list and writing blog posts about the N+1 problem? Rewriting grep in Rust to keep the ruskies out of our precious LLCs?
Asking the wrong questions, using the wrong tools, then writing dumb blog posts about it is what we do. It's what makes us us.
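For the uninitiated, the N+1 jab above refers to issuing one query for a list plus one query per item, instead of one batched fetch. A toy sketch with a fake query-counting "database" (all names made up, no real ORM involved) shows why those 1,400 calls happen:

```python
class CountingDb:
    """Fake in-memory database that counts round-trips issued against it."""

    def __init__(self, items):
        self.items = items      # {item_id: title}
        self.queries = 0

    def item_ids(self):
        self.queries += 1       # one query for the list
        return list(self.items)

    def title_for(self, item_id):
        self.queries += 1       # one round-trip PER item: this is the N+1
        return self.items[item_id]

    def titles_for(self, ids):
        self.queries += 1       # one batched round-trip for all items
        return [self.items[i] for i in ids]

def render_naive(db):
    # 1 query for ids + N queries for titles = N+1 total
    return [db.title_for(i) for i in db.item_ids()]

def render_batched(db):
    # 2 queries total, regardless of N
    return db.titles_for(db.item_ids())
```

Same output either way; the only difference is whether the round-trip count scales with the list length.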
There's this interesting issue that we've never had occupational licensing for software developers despite the sheer incompetence that we see all the time.
On one hand there's an approach to computing where it is a branch of mathematics that is universal. There are some creatures that live under the ice on a moon circling a gas giant around another star, and if they have computers they are going to understand the halting problem (even if they formulate it differently), know bubble sort is O(N^2), and know about algorithms that sort in O(N log N).
On the other hand, we are divided by communities of practice that don't like one another. For instance, there is the "OO sux" brigade, which thinks I suck because I like Java. There are still shops where everything is done in a stored procedure (oddly like the fashionable architecture where you build an API server just because... you have to have an API) and other shops where people would think you were brain damaged to go anywhere near stored procs, triggers or any of that. It used to be that Linux enthusiasts thought anybody involved in Windows was stupid, and you'd meet Windows admins, click-click-click-click-clicking over and over again to get IIS somewhat working, who thought IIS was the only web server good enough for "the enterprise".
Now, apart from the instinctive hate for the tools, there really are those chronic conceptual problems, for which datetime is the poster child. I think every major language has been through multiple datetime libraries in and out of the standard lib in the last 20 years, because dates and times just aren't the simple things we wish they were, and the school of hard knocks keeps knocking until we accept a complicated reality.
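Python is as good a poster child as any: even with the modern stdlib, plain timedelta arithmetic on an aware datetime does wall-clock math, so "one day later" across a DST boundary is only 23 absolute hours. A minimal demonstration (assumes Python 3.9+ for `zoneinfo` and an available system tz database):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

tz = ZoneInfo("America/New_York")
before = datetime(2025, 3, 8, 12, 0, tzinfo=tz)  # noon, the day before US DST starts
after = before + timedelta(days=1)               # "one day later" by the wall clock

# datetime + timedelta is naive wall-clock arithmetic: both read 12:00 local,
# but the clocks sprang forward overnight, so only 23 real hours elapsed.
elapsed = after.astimezone(timezone.utc) - before.astimezone(timezone.utc)
```

Whether wall-clock or absolute-duration arithmetic is "correct" depends entirely on the use case, which is exactly why every language keeps rewriting its datetime library.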
> There's this interesting issue that we've never had occupational licensing for software developers despite the sheer incompetence that we see all the time.
I'm laughing over the current Delve/SOC2 situation right now. Everyone pulls for 'licenses' as the first card, but we all know that is equally fraught with trauma. https://xkcd.com/927/
Pedanticism (or pedantry) is the excessive, tiresome concern for minor details, literal accuracy, or formal rules, often at the expense of understanding the broader context.
I don't think this had anything to do with minor details at all. You're trying to convey a point while ignoring the half of the population who didn't go down that route.
Largely a problem of VCs and shareholders. After my 12th year of "we'll get around to bug fixes" and "this is an emergency" I realize I am absolutely not doing anything related to engineering. My job means less than the moron PM who graduated bottom of their class in <field>. The lack of trust in me despite having almost a life in software is actually so insulting it's hard to quantify.
Now I barely look at ticket requirements, feed it to an LLM, have it do the work, spend an hour reviewing it, then ship it 3 days later. Plenty of fuck off time, which is time well spent when I know nothing will change anyway. If I'm gonna lose my career to LLMs I may as well enjoy burning shareholder capital. I've optimized my life completely to maximize fuck off time.
At the end of the day they created the environment. It would be criminal to not take advantage of their stupidity.
What the article doesn't touch on is the vendor lock-in that is currently underway. Many corps are now moving to an AI-based development process that is reliant on the big AI providers.
Once the codebase has become fully agentic, i.e., only agents fundamentally understand it and can modify it, the prices will start rising. After all, these loss-making AI companies will eventually need to recoup their investments.
Sure, it will perhaps be possible to swap out the underlying AI used to develop the codebase, but will the alternatives be significantly cheaper? Of course, the invisible hand of the market will solve that problem. Something OPEC has successfully done for the oil market.
Another issue: once the codebase is agentic and the price of developers falls enough that it becomes significantly cheaper to hire humans again, will they be able to understand the agentic codebase? Or is this a one-way transition?
I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices and the global economy, fundamentally everything is getting better.
> Companies claiming 100% of their product's code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes
One thing about the old days of DOS and original MacOS: you couldn't get away with nearly as much of this. The whole computer would crash hard and need to be rebooted, all unsaved work lost. You also could not easily push out an update or patch --- stuff had to work out of the box.
Modern OSes with virtual memory and multitasking and user isolation are a lot more tolerant of shit code, so we are getting more of it.
Not that I want to go back to DOS but Wordperfect 5.1 was pretty damn rock solid as I recall.
> Modern OSes with virtual memory and multitasking and user isolation are a lot more tolerant of shit code, so we are getting more of it.
It's not the glut of compute resources; we've already accepted bloat in modern software. The new crutch is treating every device as "always online", paired with the mantra of "ship now! push fixes later." It's easier to set up a big complex CI pipeline that you push fixes into and that OTA-patches the user's system. This way you can justify pushing broken, unfinished products to beat your competitors doing the same.
I think you're just recalling the few software products that were actually good. There was plenty of crap software that would crash and lose your work in the old days.
Another factor at work is the use of rolling updates to fix things that should better have been caught with rigorous testing before release. Before the days of 'always on' internet it was far too costly to fix something shipped on physical media. Not that everything was always perfect, but on the whole it was pretty well stress-tested before shipping.
The sad truth is that now, because of the ease of pushing your fix to everything while requiring little more from the user than that their machine be more or less permanently connected to a network, even an OS is dealt with as casually as an application or game.
Useful context here is that the author wrote Pi, which is the coding agent framework used by OpenClaw and is one of the most popular open source coding agent frameworks generally.
That's a great shout, because I'm sure a lot of people would otherwise just discredit this take as just another anti-AI skeptic's. But he probably has more experience working with LLMs and agents than most of us on this site, so his opinion holds more weight than most.
> it sure feels like software has become a brittle mess, with 98% uptime becoming the norm instead of the exception, including for big services
As somebody who has been running systems like these for two decades: the software has not changed. What's changed is that before, nobody trusted anything, so a human had to manually do everything. That slowed down the process, which made flaws happen less frequently. But it was all still crap. Just very slow moving crap, with more manual testing and visual validation. Still plenty of failures, but it doesn't feel like it fails a lot if the failures are spaced far apart on the status page. The "uptime" is time-driven, not bugs-per-lines-of-code driven.
DevOps' purpose is to teach you that you can move quickly without breaking stuff, but it requires a particular way of working, that emphasizes building trust. You can't just ship random stuff 100x faster and assume it will work. This is what the "move fast and break stuff" people learned the hard way years ago.
And breaking stuff isn't inherently bad - if you learn from your mistakes and make the system better afterward. The problem is, that's extra work that people don't want to do. If you don't have an adult in the room forcing people to improve, you get the disasters of the past month. An example: Google SREs give teams error budgets; the SREs are acting as the adult in the room, forcing the team to stop shipping and fix their quality issues.
One way to deal with this in DevOps/Lean/TPS is the Andon cord. Famously a cord introduced at Toyota that allows any assembly worker to stop the production line until a problem is identified and a fix worked on (not just the immediate defect, but the root cause). This is insane to most business people because nobody wants to stop everything to fix one problem, they want to quickly patch it up and keep working, or ignore it and fix it later. But as Ford/GM found out, that just leads to a mountain of backlogged problems that makes everything worse. Toyota discovered that if you take the long, painful time to fix it immediately, that has the opposite effect, creating more and more efficiency, better quality, fewer defects, and faster shipping. The difference is cultural.
This is real DevOps. If you want your AI work to be both high quality and fast, I recommend following its suggestions. Keep in mind, none of this is a technical issue; it's a business process issue.
It's a systems engineering job. You need to provide context, acceptable failure modes, and tests at each level for validation. Identify false coupling, poor interfaces, and things that don't match the business context during the agent planning phase. Then communicate and translate to others so their decisions improve, instead of destroying the system by optimizing only for their local situation.
Massive consolidation seems to have caused issues too. Everyone is on GitHub. Everyone is on AWS. Everyone is behind Cloudflare. Whenever an issue happens here it affects everyone, and everyone sees it.
In the past with smaller services those services did break all the time, but the outage was limited to a much smaller area. Also systems were typically less integrated with each other so one service being down rarely took out everything.
This aligns with my observation from a product design point of view as well.
Product design has a slightly different problem than engineering: because the speed of development is so high, we cannot dogfood and play with new product decisions and features. By the time I’ve realized we made a stupid design choice that doesn’t really work in the real world, we’ve already built 4 features on top of it. Everyone makes bad product decisions, but it used to be easy and natural to back out of them.
It’s all about how we utilize these things; if we focus on sheer speed, it just doesn’t work. You need to own your architecture and product decisions. You need to use and test your products with humans (and automate those tests as regression testing). You need to be able to hold all of the product or architecture in your mind and help agents make the right decisions with all the best practices you’ve learned.
Agree. The issue was never, how can we get our engineers to squirt out more lines of code in a day? It has always been, how can we effectively iterate using customer feedback to deliver the highest quality product. That type of thing needs time to bake.
Nature will handle this in time. Just expect to see a "Bear Stearns moment" in the software world if this spirals completely out of control (and companies don't take a hint from recent outages).
It's not really malware, but it's a mess. It installed so much shit and it interfered with your git hooks and stuff. It was kind of messy, so I gave up on it and went back to using Claude Code's built-in TodoWrite tasks.
It managed to throw itself into a global file that Claude used, which caused beads to appear in random projects on my machine. Because of how it was wired in there, the agent attempted to re-install beads after I had already removed it, because the git hook errored.
But in many agent-skeptical pieces, I keep seeing this specific sentiment that “agent-written code is not production-ready,” and that just feels… wrong!
It’s just completely insane to me to look at the output of Claude code or Codex with frontier models and say “no, nothing that comes out of this can go straight to prod — I need to review every line.”
Yes, there are still issues, and yes, keeping mental context of your codebase’s architecture is critical, but I’m sorry, it just feels borderline archaic to pretend we’re gonna live in a world where these agents have to have a human poring over every single line they commit.
Maybe in the future humans won't need to pore over every line. However, I quickly learn which interns I can trust and which ones' code I need to pore over. I don't trust AI because it has been wrong too often. I'm not saying AI is useless; I do most of my coding with an agent, but I don't trust it until I verify every line.
I did this for a while… and until Opus 4.5, I couldn't fully trust the model. But at this point, while it does make the occasional mistake, I don't need to scrutinize every line. Unit and integration tests catch the bugs we can imagine, and the bugs we can't imagine take us by surprise, which is how it has always been.
Even with 4.6 I find there are a lot of mistakes it makes that I won't allow. Though it is also really good at finding complex thread issues that would take me forever...
Were you not reviewing every line when a human wrote it before it went to prod? I think the output of these tools is about as good as a human would write - which means it needs thorough review if I’m going to be on the hook to resolve its issues at 2AM.
How do you know which lines you need to review and which you don't?
Does it feel archaic because LLMs are clearly producing output of a quality that doesn't require any review, or because having to review all the code LLMs produce clips the productivity gains we can squeeze out of them?
We live in a world where every line of code written by a human should be reviewed by another human. We can't even do that! Nothing should go straight to prod ever, ever ever, ever.
> Nothing should go straight to prod ever, ever ever, ever
Air Traffic Control software - sure. 99% of the other software around that is not mission-critical (like Facebook) just punches it to production - "move fast and break shit" was cool long before "AI"
Prod in this context doesn't refer to one person's website for their personal project. It refers to an environment where downtime has consequences, generally one that multiple people work on and that many people rely on.
It's tough to not interpret this as "I don't care about my website". Do you not check the copy? Or what if AI one-shots something that will harm your reputation in the metadata?
That sounds better. I assume the stakes are low enough that you are happy reviewing after the fact, but setting up a workflow to check the diffs before pushing to production shouldn't be too difficult
There's a middle ground here. Code for your website? Sure, whatever, I assume you're not Dell and the cost of your website being unavailable to some subset of users for a minute doesn't have 5 zeroes on the end of it. If you're writing code being used by something that matters though you better be getting that stuff reviewed because LLMs can and will make absolutely ridiculous mistakes.
Is that a personal website? Prod means different things in different contexts. Even then, I'd be a bit worried about prompt injection unless you control your context closely (no web access etc).
If you keep the scope small enough it can be production ready ootb, and with some stuff (eg. a throwaway React component) who really cares. But I think it's insane to look at the output of Claude Code or Codex with frontier models and say "yep, that looks good to me".
Fwiw OP isn't an agent skeptic, he wrote one of the most popular agent frameworks.
It's a conversation I've had many times in my career and I'm sure I'll have many more. We've got code that seems plausible on a surface level, at a glance it solves the problem it's meant to solve - why can't we just send it to prod and address whatever problems we find with it later?
The answer is that it's very easy for bad code to cause more problems than it solves. This:
> Then one day you turn around and want to add a new feature. But the architecture, which is largely booboos at this point, doesn't allow your army of agents to make the change in a functioning way.
is not a hypothetical, but a common failure mode which routinely happens today to teams who don't think carefully enough about what they're merging. I know a team of a half-dozen people who's been working for years to dig themselves out of that hole; because of bad code they shipped in the past, changes that should have taken a couple hours without agentic support take days or weeks even with agentic support.
You say it's borderline archaic. I say trusting agents enough to not look at every single line is an abdication of ethics, safety, and engineering. You're just absolving yourself of any problems. I hope you aren't working in medical devices or else we're going to get another Therac-25. Please have some sort of ethics. You are going to kill people with your attitude.
Almost nobody works on medical devices... And some of you lucky folks might be working with mega minds every day, but the rest of us are but shadows and dust. I trust 5.4 or 4.6 more than most developers. By applying specific pressure using tests and prompts, I force it to build better code for my silly hobby game than I ever saw in real production software. Before those models I was still on the other side of the line, but the writing is on the wall.
This assumes that only (AI/agentic) stupidity comes into play, with no malice in sight. But if things go wrong because you didn't notice the stupidity, malice will pass through too. And there is a big profit opportunity, and a broad vulnerable market, for malice. It's not just correctness or uptime at stake, but bigger risks of vulnerabilities or other maliciously injected content.
> And I would like to suggest that slowing the fuck down is the way to go. Give yourself time to think about what you're actually building and why. Give yourself an opportunity to say, fuck no, we don't need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.
This is a great point.
I have been avoiding LLMs for a while now, but realized that I might want to try working on a small PDF-book-to-Markdown conversion project[0]. I like Claude Code because it's command-line based. I'm realizing you really need to architect with very precise language to avoid mistakes.
I didn't try to have one prompt do everything at once. I prompted Claude Code to do the conversion process section by section of the document. That seemed to reduce the mistakes the agent would make.
I only have so long on earth. (I have no idea how long) I need things to be faster for me. Sometimes that means I need to take extra time now so they don't come back to me later.
I am "playing" with both pi and Claude (in docker containers) with local llama.cpp and as an exercise, I asked both the same question and the results are in this gist:
What I have learned from the exercise above is that we pay more attention and spend more resources on "metadata" than on real data. These are the rabbit holes that lead us to more metadata and make us forget what we really want.
If there is anyone who absolutely should slow down, it's the folks who are actively integrating company data with an agent -- you are literally helping remove as many jobs as possible, from your colleagues and from yourselves, not in the long term, but in the short term.
Integration is the key to the agents. Individual usages don't help AI much because it is confined within the domain of that individual.
> If there is anyone who absolutely should slow down, it's the folks who are actively integrating company data with an agent -- you are literally helping remove as many jobs as possible, from your colleagues and from yourselves, not in the long term, but in the short term.
I'm one of those people and I'm not going to slow down. I want to move on from bullshit jobs.
The only people that fear what is coming are those that lack imagination and think we are going to run out of things to do, or run out of problems to create and solve.
So are you aiming for death and poverty? Once those bullshit jobs go, we're going to find a lot of people incapable of producing anything of value while still costing quite a bit in upkeep. These people will have to be gotten rid of somehow.
> and think we are going to run out of things to do, or run out of problems to create and solve.
There will be plenty of problems to solve. Like who will wipe the ass of the very people that hate you and want to subjugate you.
Name a single time doomers were right about anything. Doomers consistently overstate their expected outcome in every single domain and consistently fail to predict how society evolves and adapts.
Again:
The only people that fear what is coming are those that lack imagination and think we are going to run out of things to do, or run out of problems to create and solve.
Also, there have been plenty of awful things caused by technological progress. Tons of death and poverty was created by the transition to factories and mechanization 150 years ago.
Did we come out the other end with higher living standards? Yes, but that doesn't make the decades of brutal transition period any less awful for those affected.
That's generous. Climate scientists were right, climate doomers were definitely wrong.
Society is mostly unchanged due to climate change. That's not to say climate has no effect, but it is certainly still not some doomer scenario that's played out. New York and Florida are most certainly not underwater as predicted by the famous "Inconvenient Truth". People still live in deserts just as they always have. Human lifespan is still increasing. We have less hunger worldwide than ever before, etc.
Climate change doomers conveniently leave out the part where climate has ALWAYS affected society and is one of the main inputs to our existence, therefore we are extremely adaptable to it.
Before "climate change" ever entered the general consciousness, climate wiped out civilizations MORE FREQUENTLY than it does now. All signs point to doomers being wrong and yet they all hold onto it stubbornly.
Doomers were never impressive because they got anything right, they are impressive because they have the unique skill of moving the goalpost when they are wrong. Any time you think the goalpost can't be moved further out, they prove it's possible.
The effects of climate change are just starting to happen. Ecosystems are dying. Very few "climate doomers" thought the world would be like the Day after Tomorrow.
The earth is becoming more hostile to its inhabitants. There are famines caused by climate change. We will undoubtedly see mass migration from the areas hardest hit within the next 20 years.
Climate scientists, and climate reporting, often UNDERSTATED the worst of these effects.
I think it'd be worth stating what your definition of doomerism is. For me, seeing the increase in forest fires, seeing the sky reddened and the air quality diminish and floods and hurricanes intensify... being able to buy a Big Mac doesn't make any of that less grim.
The CO2 concentration continues to climb year after year, at an accelerating rate. The world hasn't ended yet because it's still 2026 but it doesn't mean it won't.
We're on a hothouse earth trajectory. All signs point to you not being aware of serious climate research and hanging on to a naive Steven Pinker "everything is always improving" outlook.
> Name a single time doomers were right about anything.
- NFTs
- Surveillance schizos
- Global Pedophile Cabal schizos
- Anyone who didn’t believe we were a year out from Star Trek living when LLMs first started picking up steam
- People who predicted the flood of people entering Software via bootcamps, etc. would never cause any problems because their god of software is consuming the world too quickly for supply and demand to ever be a real concern.
- Anyone amongst the sea of delusional democrats who did indeed believe Trump could win a second term.
All of those doomers were vindicated, and that’s just recently.
- NFTS doomers? I mean I appreciate the humor here.
- Surveillance schizos - Society still works
- Global Pedophile Cabal schizos - Again, funny use of 'doomers' but that's what the current society seems to be run by so I wouldn't say it's fitting for doomerism.
- People who predicted the flood of people entering Software via bootcamps, etc. would never cause any problems because their god of software is consuming the world too quickly for supply and demand to ever be a real concern.
-- I've been a software "engineer" for ~14 years now. I still have no concern.
None of these things are that disruptive to our society at large. You will still be able to walk down the street and grab a Big Mac pretty much any day of the week. A large portion of society is going to look at all of what you're worried about and say "it's not that serious" while consuming their 20 second videos.
I was thinking the other day about why a "global pedophile cabal" would be a thing. I still think that phrase overstates it a bit, but not that much.
Committing a crime with someone bonds you to them.
First, it's a kind of shared social behavior, and it's one that is exclusive to you and your friends who commit the same kinds of crimes. Any shared experience bonds people, crimes included. Having a shared secret also bonds people.
Second, it creates an implied pact of mutually assured destruction. Everyone knows the skeletons in everyone else's closet, so it creates a web of trust. Anyone defecting could possibly be punished by selectively revealing their crimes, and vice versa. Game theoretically it overcomes tit-for-tat and enables all-cooperate interactions, at least to some extent, and even among people who otherwise don't like each other or don't have a lot in common.
Third, it separates the serious from the unserious. If you want to be a member of the club, do the bad thing. It's a form of high cost membership gating.
This works for other kinds of crimes too. It's not that unusual for criminal gangs to demand that initiates commit a crime and provide evidence, or commit a crime in front of existing members. These can be things like robbery, murder, and so on. Anyone not willing to do this probably isn't serious and can't be trusted. Once someone does do it, you know they're really in.
It naturally creates cabals. The crime comes first, the cabal second, but then the cabal can realize this and start using the crime as a gateway to admission.
Every mutual interest creates a community, but a secret criminal mutual interest creates a special kind of tight knit community. In a world that's increasingly atomized and divided, that's power. I think it neatly explains how the Epstein network could be so powerful and effective.
Ah yes, me on a high horse. Not the person whose entire worldview depends on defying the Nash equilibrium. You're all wasting brain cycles discussing some unrealistic cooperative agreement to slow down and sing "kumbaya", and telling us that if we don't get to this state we will be homeless on the streets. If this is me on a horse, then you are on top of an ivory tower managing my beast of burden.
I think before even being able to entertain the thought of slowing the fuck down, we need to seriously consider divorcing ourselves from productivity. Or at least taking a break, so you can go for a walk in the park, meet some friends, and reflect on how you are approaching development.
> The point is: let the agent do the boring stuff, the stuff that won't teach you anything new, or try out different things you'd otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation.
That's partially true. I've also had instances where I could have very well done a simple change by myself, but by running it through an agent first I became aware of complexities I wasn't considering and I gained documentation updates for free.
Oh and the best part, if in three months I'm asked to compile a list of things I did, I can just look at my session history, cross with my development history on my repositories and paint a very good picture of what I've achieved. I can even rebuild the decision process with designing the solution.
Every problem is self-correcting in that some new normal will emerge. Either through acceptance or because something is changed.
It’s very hard to say right now what happens on the other side of this change.
All these growing pains are happening in many companies simultaneously, and at elevated speed. While that change is taking place it can be quite disorienting, and if you want to take a forward-looking view it can be quite unclear how you should behave.
Unfortunately, I think the lesson from recent history seems to be that outside of highly-regulated industries, customers and businesses will accept terrible quality as long as it's cheap.
Yes, every slack is optimized out of systems. If something has an ounce more quality than would suffice to obtain the same profit, it must be cut out. It's an inefficiency. A quality overhang. If people buy it even if it's crap, then the conclusion is that it has to be crap, else money is left on the table. It's a large scale coordination issue. This gives us a world where everything balances exactly near the border where it just barely works, for just barely enough time.
Nah, there is a quality floor that consumers are willing to accept. Once you get below that, where it's actually affecting their lives in a meaningful way, it will self-correct as companies will exploit the new market created for quality products.
It's not even the complexity, though you have to realize: many managers and business types think it's just fine to have code no one understands, because AI will handle it.
I don't agree, but the bigger issue to me is that many, if not most, companies don't even know what they want or think about what the purpose is. So whereas in the past devs coding something provided some throttle or sanity check, now we just throw shit over the wall even faster.
I'm seeing some LinkedIn lunatics brag about "my idea to production in an hour" and all I can think is: that is probably a terrible feature. No one I've worked with is that good or visionary where that speed even matters.
> While all of this is anecdotal, it sure feels like software has become a brittle mess
That may be the case wherever AI leaks in, but not every software developer uses or depends on AI. So not all software has become more brittle.
Personally I try to avoid any contact with software developers using AI. This may not be possible, but I don't want to waste my own time "interacting" with people who aren't really the ones writing code anymore.
I suppose everyone on HN reaches a certain point with these kind of thought pieces and I just reached mine.
What are you building? Does the tool help or hurt?
People answered this wrong in the Ruby era, they answered it wrong in the PHP era, they answered it wrong in the Lotus Notes and Visual BASIC era.
After five or six cycles it does become a bit fatiguing. Use the tool sanely. Work at a pace where your understanding of what you are building keeps up with the reality of the mess you and your team are actually building, if budgets allow.
This seldom happens, even in solo hobby projects once you cost everything in.
It's not about agile or waterfall or "functional" or abstracting your dependencies via Podman or Docker or VMware or whatever that nix crap is. Or using an agent to catch the bugs in the agent that's talking to an LLM you have next to no control over that's deleting your production database while you slept, then asking it to make illustrations for the postmortem blog post you ask it to write that you think elevates your status in the community but probably doesn't.
I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
> What are you building?
This x1000. The last 10 years in the software industry in particular seem full of meta-work. Building new frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. All to build... what exactly? Are these tools necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
Hard to shake the feeling that this looks like one big pyramid scheme. I strongly suspect that the vast majority of the "innovation" in recent years has gone straight to supporting the funding model and institution of the software profession, rather than to actual software engineering.
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
It was, and is. But not universally.
If you formulate questions scientifically and use the answers to make decisions, that's engineering. I've seen it happen. It can happen with LLMs, under the proper guidance.
If you formulate questions based on vibes, ignore the answers, and do what the CEO says anyway, that's not engineering. Sadly, I've seen this happen far too often. And with this mindset comes the Claudiot mindset - information is ultimately useless so fake autogenerated content is just as valuable as real work.
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
If I engineer a bridge I know the load the bridge is designed to carry. Then I add a factor of safety. When I build a website can anyone on the product side actually predict traffic?
When building a bridge I can consult a book of materials and understand how much a material deforms under load, what its breaking point is, its expected lifespan, etc. Does this exist for servers, web frameworks, network load balancers, etc.?
I actually believe that software “could” be an engineering discipline but we have a long way to go
> can anyone on the product side actually predict traffic
Hypothetically, could you not? If you engineer a bridge you have no idea what kind of traffic it'll see. But you know the maximum allowable weight for a truck of X length is Y tons and factoring in your span you have a good idea of what the max load will be. And if the numbers don't line up, you add in load limits or whatever else to make them match. Your bridge might end up processing 1 truck per hour but that's ultimately irrelevant compared to max throughput/load.
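The back-of-the-envelope reasoning above can be sketched directly. All numbers here are illustrative assumptions, not values from any design code or from the comment:

```python
# Rough upper bound on a bridge's live load; every constant below is a
# hypothetical assumption for illustration only.
TRUCK_WEIGHT_TONS = 40   # assumed max legal gross weight per truck
TRUCK_LENGTH_M = 20      # assumed truck length including following gap
SPAN_M = 300             # assumed bridge span
LANES = 4                # assumed lane count

# Worst case: trucks bumper-to-bumper in every lane.
trucks_per_lane = SPAN_M // TRUCK_LENGTH_M
max_live_load_tons = trucks_per_lane * LANES * TRUCK_WEIGHT_TONS
print(max_live_load_tons)  # prints 2400 (before any safety factor)
```

Actual throughput is irrelevant to this bound, which is the commenter's point: you design for the maximum credible load, then add margin.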
Likewise, systems in regulated industries have strict controls for how many concurrent connections they're allowed to handle[1], enforced with edge network systems, and are expected to do load testing up to these numbers to ensure the service can handle the traffic. There are entire products built around this concept[2]. You could absolutely do this, you just choose not to.
[1] See NIST 800-53 control SC-7 (3)
[2] https://learn.microsoft.com/en-us/azure/app-testing/load-tes...
Software and bridges are entirely different.
If I need a bridge, and there's a perfectly beautiful bridge one town over that spans the same distance - that's useless to me. Because I need my own bridge. Bridges are partly a design problem but mainly a build problem.
In software, if I find a library that does exactly what I need, then my task is done. I just use that library. Software is purely a design problem.
With agentic coding, we're about to enter a new phase of plenty. If everyone is now a 10x developer then there's going to be more software written in the next few years than in the last few decades.
That massive flurry of creativity will move the industry even further from the calm, rational, constrained world of engineering disciplines.
Software packages are more complicated than you make them out to be. Off the top of my head:
- license restrictions, relicensing
- patches, especially to fix CVEs, that break assumptions you made in your consumption of the package
- supply chain attacks
- sunsetting
There’s no real “set it and forget it” with software reuse. For that matter, there’s no “set it and forget it” in civil engineering either, it also requires monitoring and maintenance.
>I actually believe that software “could” be an engineering discipline but we have a long way to go
In certain mission-critical applications, it is treated as engineering. One example: https://en.wikipedia.org/wiki/DO-178B
I think it is in certain very limited circumstances. The Space Shuttle's software seems like it was actually engineered. More generally, there are systems where all the inputs and outputs are well understood along with the entire state space of the software. Redundancy can be achieved by running different software on different computers such that any one is capable of keeping essential functions running on its own. Often there are rigorous requirements around test coverage and formal verification.
This is tremendously expensive (writing two or more independent copies of the core functionality!) and rapidly becomes intractable if the interaction with the world is not pretty strictly limited. It's rarely worth it, so the vast majority of software isn't what I'd call engineered.
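A toy sketch of the redundancy idea above: majority voting over independently written versions. The function name and the three-version setup are illustrative, not how any real flight system is structured:

```python
from collections import Counter

def vote(outputs):
    """Return the majority answer from redundant software versions;
    fail loudly if no strict majority agrees."""
    value, count = Counter(outputs).most_common(1)[0]
    if count * 2 <= len(outputs):
        raise RuntimeError("no majority among redundant versions")
    return value

# Two of three independently written versions agree, so their answer wins.
print(vote([42, 42, 41]))  # prints 42
```

The expense the comment describes comes from the inputs to `vote`: each element must be produced by a separately designed and separately verified implementation.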
People don't realize how much software engineering has improved. I remember when most teams didn't use version control, and if we did have it, it was crappy. Go through the Joel Test [1] and think about what it was like at companies where the answers to most of those questions was "no."
[1] https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...
At the same time, systems have become far more complex. Back when version control was crap, there weren't a thousand APIs to integrate and a million software package dependencies to manage.
Sure everything seems to have gotten better and that's why we now need AIs to understand our code bases - that we created with our great version control tooling.
Fundamentally we're still monkeys at keyboards just that now there are infinitely many digital monkeys.
Perrow’s book Normal Accidents postulates that, given advances which could improve safety, people just decide to emphasize throughput, speed, profits, etc. He turned out to be wrong about aviation (it got much safer over time) and maritime shipping (there was a perception of a safety crisis in the late 1970s, with oil tankers exploding; now you just hear about the odd exceptional event).
Maybe back in the beginning, but I don't think it's an engineering discipline now. I don't think that's bad though. I always thought we tagged on the word "engineer" so that we could make more money. I'm ok with not being one. The engineers I've known are very strict in their approach, which is good, since I don't want my deck to fall down. Most of us are too risky with our approach. We love to try new things and patterns, not just use established ones. This is fine with me, but when we apply the term "engineer" to the work, I get a little uneasy, because I think it implies doing something that most of us really don't want to do. That is, absolutely prove our approach works and will keep working for years to come. Just my opinion though.
I’ve had jobs where my title was “software engineer”, but I never refer to myself as such outside of work. When I tell others what I do, I say I am a software developer. It may seem a pointless distinction, but to me there is a distinction.
Neither myself nor the vast majority of other “software engineers” in our field are living up to what it should mean to be an “engineer”.
The people that make bridges and buildings, those are the engineers. Software engineers, for the very very most part, are not.
I'm similar, except for me the reason is no degree. So at some jobs I'm an engineer, at others just a developer... although at my current job I'm a "technology specialist", which is funny. But I'm getting paid, so whatever.
Most recently I wrote CloudFormation templates to bring up infra for AWS-based agents. I don't use AI-assisted coding, except Googling, which I acknowledge now comes with an AI summary.
A friend of mine is in a toxic company where everyone has to use AI and they're looked down upon if they don't use it. Every minute of their day has to be logged doing something. They're also going to lay off a bunch of people soon since "AI has replaced them". This is in the context of an agency.
I was won over by this distinction from another senior some years ago. I think he said…
“Developers build things. Engineers build them and keep them running.”
I like the linguistic point from a standpoint of emphasizing a long term responsibility.
I was just reading "how the world became rich" and they made an interesting distinction economic "development" vs plain "growth". Amusingly, "development" to them means exactly what you're saying "engineer" should mean. It's sustainable, structural, not ephemeral. Development in the abstract hints at foundational work. Building something up to last. It seems like this meaning degradation is common in software. It still blows my mind how the "full-stack" naming stuck, for example.
https://www.howtheworldbecamerich.com/
Edit: on a related note, are there any studies on the all-in long-term cost difference between companies that "develop" vs. companies that "engineer"? I doubt there would be clean data, since the managers who ignored all the warnings about "tech debt" would probably have the say on both compiling and releasing such data.
Does the cost of tech debt decrease as the cost of coding decreases, or is there a phase transition in the quality of the code? I bet there would be an inflection point if you plotted companies' adoption time of AI coding. Late adopters who timed it after the models, harnesses, and practices were good enough (probably still some time in the near future) would have a lower all-in cost for the same codebase quality.
When your bridge falls down, you don't call an incident and ask your engineer to fix it, you sue them.
In software there's a lot more emphasis on post-hoc fixes rather than up front validation, in my experience.
I like this one from Russ Cox:
"Software engineering is what happens to programming when you add time and other programmers."
It’s a bit of a misclassification. In my mind we tend to be more like architects where there are a fair amount of innovative ideas that don’t work all that well in practice. Train stations with beautiful roofs that leak and slippery marble floors, airports with smoke ventilation systems in the floor, etc.
Of course, we use that term for something else in the software world, but architecture really has two tiers, the starchitects building super fancy stuff (equivalent to what we’d call software architects) and the much more normal ones working on sundry things like townhomes and strip malls.
That being said I don’t think people want the architecture pay grades in the software fields.
It's a Systems Engineering job. You provide context, define interfaces for people, write tests for critical failure modes affecting the customer, describe system behavior, and translate for other people.
At the same time, if you remove "engineer", informatics should fall under the faculty of Science, so: scientists, who are even more rigorous than engineers ;)
Maybe software tinkerer?
> scientists, which are even more rigorous than engineers ;)
You should see the code that scientists write...
Software craftsman seems to strike a good balance.
classic ... https://www.hillelwayne.com/post/are-we-really-engineers/
> A number of these phenomena have been bundled under the name "Software Engineering". As economics is known as "The Miserable Science", software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory. Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot.".
- Edsger Dijkstra, 1988
I think, unfortunately, he may have had us all dead to rights on this one.
Software engineering is real engineering because we rigorously engineer software the way "real" engineers engineer real things.
Software engineering is not real engineering because we do not rigorously engineer software the way "real" engineers engineer real things. <--- YOU ARE HERE
Software engineering is real engineering because we rigorously engineer software the way "real" engineers engineer real things.
Engineering is two things:
1. Applied physics - Software is immediately disqualified. Symbols have no physics.
2. Ethics - Lives and livelihoods depend on you getting it right. Software people want to be disqualified because that stuff is so boring, but this is becoming a more serious issue with every passing day.
People built a lot of great stuff with Ruby, PHP, Notes and VB. I don't know what the problem really is.
Personally I think that whole Karpathy thing is the slowest thing in the world. I mean you can spin the wheels on a dragster all you like and it is really loud and you can smell the fumes but at some point you realize you're not going anywhere.
My own frustration with the general slowness of computing (iOS 26, file pickers, build systems, build systems, build systems, ...) has been peaking lately and frankly the lack of responsiveness is driving me up the wall. If I wasn't busy at work and loaded with a few years worth of side projects I'd be tearing the whole GUI stack down to the bottom and rebuilding it all to respect hard real time requirements.
I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
It's a craft.
Software reminds me more of construction or home contracting work than engineering.
We do the actual building of things
Software was an engineering discipline... at some places. And it still is, at some places.
Other places were "hack it until we don't know of any major bugs, then ship it before someone finds one". And now they're "hey, AI agents - we can use that as a hack-o-matic!" But they were having trouble with sustainability before, and they're going to still, except much faster.
All (not some) of the most successful devs I've known in the sense of building something that found market fit and making money off it were terrible engineers. They were fairly productive at building features. That's it. And they were productive - until they weren't. Their work ultimately led to outages, lost data, and sensitive data being leaked (to what extent, I don't even know).
The ones who got acquired - never really had to stand up to any due diligence scrutiny on the technical side. Other sides of the businesses did for sure, but not that side.
Many of you here work for "real" tech companies with the budget and proper skin in the game to actually have real engineers and sane practices. But many of you do not, and I am sure many have seen what I have seen and can attest to this. If someone like the person I mentioned above asks you to join them to help fix their problems, make sure the compensation is tremendous. Slop clean-up is a real profession, but beware.
There used to be a saying along the lines of “while you’re designing your application to scale to 1m requests/min, someone out there is making $1m ARR with php and duct tape”
It feels like this takes on a whole new meaning now we have agents - which I think is the same point you were making
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
It isn't. Show me the licensing requirements to be a "software engineer." There are none. A 12 year old can call himself a software engineer and there are probably some who have managed to get remote work on major projects.
> It isn't. Show me the licensing requirements
That's assuming the axiom that "engineer" must require licensing requirements. That may be true in some jurisdictions, but it's not axiomatically or definitionally true.
Some kinds of building software may be "engineering", some kinds may not be, but anyone seeking to argue that "licensing requirements" should come into play will have to actually argue that rather than treat it as an unstated axiom.
Depends on the country. In some countries, it is a legal axiom (or at least identity).
For the other countries, though, arguing "some countries do it that way" is as persuasive as "some countries drive on the other side of the road." It's true, but so what? Why should we change to do it their way?
> Depends on the country. In some countries, it is a legal axiom (or at least identity).
As I said, "That may be true in some jurisdictions, but it's not axiomatically or definitionally true.". The law is emphatically not an axiom, nor is it definitionally right or wrong; it only defines what's legal or illegal.
When the article raised the question of whether "building software is an engineering discipline", it was very obviously not asking a question about whether the term 'engineering' is legally restricted in any particular jurisdiction.
To my mind, the term "engineering discipline" implies something roughly analogous to Electrical Engineering, Civil Engineering, Mechanical Engineering, Chemical Engineering.
There is no such rigorous definition for "software engineer" which normally is just a self-granted title meaning "I write code."
In Europe they are. Call yourself an Engineer without a degree and you and your company can be sued and hit with a big fine, because here you must be legally accountable for disasters, and of course there are hard constraints.
> In Europe they are
Where specifically? I've been working as a "Software engineer" for multiple decades, across three countries in Europe and 2-3 countries outside of Europe, and I've never been sued or received a "big fine" for this. I've even given presentations to government teams and the like, and not a single person has reacted to me (or others) calling ourselves "software engineers" this whole time.
Canada also (at least some provinces). I have quite a few Canadian software engineer colleagues with their iron rings to prove it.
>After five or six cycles it does become a bit fatiguing. Use the tool sanely.
That's increasingly not possible. This is the first time for me in 20 years where I've had a programming tool rammed down my throat.
There's a crisis of software developer autonomy, and it's actually hurting software productivity. We're making worse software, more slowly, because the C-levels have bought the fairy tale that you can replace five developers with one developer plus some tokens.
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
Just another reason we should cut software jobs and replace them with A(G)I.
If the human "engineers" were never doing anything precisely, why would the robot engineers need to?
> People answered this wrong in the Ruby era, they answered it wrong in the PHP era
Aren't you conveniently ignoring the fact that there were people who saw through that and didn't go down those routes?
Change it to "Some people" if your pedanticism won't let you follow the flow.
Or better yet point out the better paths they chose instead. Were they wrestling with Java and "Joda Time"? Talking to AWS via a Python library named after a dolphin? Running .NET code on Linux servers under Mono that never actually worked? Jamming apps into a browser via JQuery? Abstracting it up a level and making 1,400 database calls via ActiveRecord to render a ten item to-do list and writing blog posts about the N+1 problem? Rewriting grep in Rust to keep the ruskies out of our precious LLCs?
Asking the wrong questions, using the wrong tools, then writing dumb blog posts about it is what we do. It's what makes us us.
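For anyone who missed that era: the N+1 problem the blog posts were about is easy to reproduce. A minimal sketch using Python's built-in sqlite3 (table names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE lists (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE items (id INTEGER PRIMARY KEY, list_id INTEGER, body TEXT);
    INSERT INTO lists VALUES (1, 'todo');
""")
conn.executemany("INSERT INTO items (list_id, body) VALUES (1, ?)",
                 [(f"item {i}",) for i in range(10)])

# N+1: one query for the lists, then one extra query per list for its items.
# This is what naive ORM lazy-loading does behind your back.
lists = conn.execute("SELECT id, name FROM lists").fetchall()
for list_id, _name in lists:
    conn.execute("SELECT body FROM items WHERE list_id = ?", (list_id,)).fetchall()

# Eager loading: a single JOIN fetches everything in one round trip.
rows = conn.execute("""
    SELECT lists.name, items.body
    FROM lists JOIN items ON items.list_id = lists.id
""").fetchall()
print(len(rows))  # 10
```

The ORM versions (ActiveRecord's `includes`, etc.) are just sugar over the same JOIN.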
There's this interesting issue that we've never had occupational licensing for software developers despite the sheer incompetence that we see all the time.
On one hand there's an approach to computing where it is a branch of mathematics that is universal. There are some creatures that live under the ice on a moon circling a gas giant around another star and if they have computers they are going to understand the halting problem (even if they formulate it differently) and know bubble sort is O(N^2) and about algorithms that sort O(N log N).
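That universality is concrete: the quadratic behavior of bubble sort falls out of a few lines no matter who (or what) writes them. A minimal sketch counting comparisons in the worst case:

```python
def bubble_sort(xs):
    """O(N^2) sort: repeatedly compares adjacent pairs.
    Returns the sorted list and the number of comparisons made."""
    xs = list(xs)
    comparisons = 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons

data = list(range(200, 0, -1))  # worst case: reverse-sorted input
sorted_xs, n_cmp = bubble_sort(data)
print(n_cmp)  # 199 * 200 / 2 = 19900 comparisons for N = 200
```

An O(N log N) sort of the same input needs on the order of 200 * log2(200) ≈ 1500 comparisons, and the gap only widens with N.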
On the other hand we are divided by communities of practice that don't like one another. For instance there is the "OO sux" brigade, which thinks I suck because I like Java. There are still shops where everything is done in a stored procedure (oddly like the fashionable architecture where you build an API server just because... you have to have an API) and other shops where people would think you were brain-damaged to go anywhere near stored procs, triggers or any of that. It used to be that Linux enthusiasts thought anybody involved in Windows was stupid, and you'd meet Windows admins who were click-click-click-click-clicking over and over again to get IIS somewhat working and who thought IIS was the only web server good enough for "the enterprise".
Now, apart from the instinctual hate for the tools, there really are those chronic conceptual problems for which datetime is the poster child. I think every major language has been through multiple datetime libraries, in and out of the standard lib, in the last 20 years, because dates and times just aren't the simple things we wish they were, and the school of hard knocks keeps knocking us into accepting a complicated reality.
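A small taste of why, using Python's standard library (one of the several datetime APIs alluded to above) - the naive/aware split alone has bitten every team I've seen:

```python
from datetime import datetime, timezone, timedelta

# Naive datetimes carry no timezone; ordering them against aware ones raises.
naive = datetime(2026, 3, 29, 12, 0)
aware = datetime(2026, 3, 29, 12, 0, tzinfo=timezone.utc)
try:
    naive < aware
except TypeError:
    print("can't compare naive and aware datetimes")

# Two different wall-clock times can be the same instant once offsets
# are taken into account: 13:00 at UTC+1 equals 12:00 UTC.
cet = timezone(timedelta(hours=1))
assert datetime(2026, 1, 1, 13, 0, tzinfo=cet) == \
       datetime(2026, 1, 1, 12, 0, tzinfo=timezone.utc)
```

And that's before DST transitions, leap seconds, or calendars enter the picture.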
> There's this interesting issue that we've never had occupational licensing for software developers despite the sheer incompetence that we see all the time.
I'm laughing over the current Delve/SOC2 situation right now. Everyone pulls for 'licenses' as the first card, but we all know that is equally fraught with trauma. https://xkcd.com/927/
> pedanticism
I don't think this had anything to do with minor details at all. You're trying to convey a point while ignoring the half of the population who didn't go down that route.

Largely a problem of VCs and shareholders. After my 12th year of "we'll get around to bug fixes" and "this is an emergency", I realize I am absolutely not doing anything related to engineering. My job means less than the moron PM who graduated bottom of their class in <field>. The lack of trust in me, despite having spent nearly a lifetime in software, is actually so insulting it's hard to quantify.
Now I barely look at ticket requirements, feed it to an LLM, have it do the work, spend an hour reviewing it, then ship it 3 days later. Plenty of fuck off time, which is time well spent when I know nothing will change anyway. If I'm gonna lose my career to LLMs I may as well enjoy burning shareholder capital. I've optimized my life completely to maximize fuck off time.
At the end of the day they created the environment. It would be criminal to not take advantage of their stupidity.
What the article doesn't touch on is the vendor lock-in that is currently underway. Many corps are now moving to an AI-based development process that is reliant on the big AI providers.
Once the codebase has become fully agentic, i.e., only agents fundamentally understand it and can modify it, the prices will start rising. After all, these loss-making AI companies will eventually need to recoup their investments.
Sure, it will perhaps be possible to swap out the underlying AI that develops the codebase, but will the alternatives be significantly cheaper? Of course, the invisible hand of the market will solve that problem, just as OPEC has successfully "solved" it for the oil market.
Another issue here: once the codebase is agentic and the price of developers falls sufficiently that it would be significantly cheaper to hire humans again, will they be able to understand the agentic codebase? Or is this a one-way transition?
I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices and the global economy, fundamentally everything is getting better.
> Companies claiming 100% of their product's code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes
One thing about the old days of DOS and original MacOS: you couldn't get away with nearly as much of this. The whole computer would crash hard and need to be rebooted, all unsaved work lost. You also could not easily push out an update or patch --- stuff had to work out of the box.
Modern OSes with virtual memory and multitasking and user isolation are a lot more tolerant of shit code, so we are getting more of it.
Not that I want to go back to DOS but Wordperfect 5.1 was pretty damn rock solid as I recall.
> Modern OSes with virtual memory and multitasking and user isolation are a lot more tolerant of shit code, so we are getting more of it.
It's not the glut of compute resources; we've already accepted bloat in modern software. The new crutch is treating every device as "always online", paired with the mantra of "ship now! push fixes later." It's easier to set up a big, complex CI pipeline that you push fixes into and that OTA-patches the user's system. That way you can justify pushing broken, unfinished products to beat your competitors doing the same.
I think you're just recalling the few software products that were actually good. There was plenty of crap software that would crash and lose your work in the old days.
Another factor at work is the use of rolling updates to fix things that should better have been caught with rigorous testing before release. Before the days of 'always on' internet it was far too costly to fix something shipped on physical media. Not that everything was always perfect, but on the whole it was pretty well stress-tested before shipping.
The sad truth is that now, because of the ease of pushing your fix to everything while requiring little more from the user than that their machine be more or less permanently connected to a network, even an OS is dealt with as casually as an application or game.
Useful context here is that the author wrote Pi, which is the coding agent framework used by OpenClaw and is one of the most popular open source coding agent frameworks generally.
That's hilarious. I've been following Mario since his work on libGDX and RoboVM.
His blog post on pi is here: https://mariozechner.at/posts/2025-11-30-pi-coding-agent/
That's a great shout, because I'm sure a lot of people would otherwise just dismiss this take as just another anti-AI skeptic's. But he probably has more experience working with LLMs and agents than most of us on this site, so his opinion holds more weight than most.
... people like that have a way of writing articles that don't seem to say anything at all.
> it sure feels like software has become a brittle mess, with 98% uptime becoming the norm instead of the exception, including for big services
As somebody who has been running systems like these for two decades: the software has not changed. What's changed is that before, nobody trusted anything, so a human had to manually do everything. That slowed down the process, which made flaws happen less frequently. But it was all still crap. Just very slow-moving crap, with more manual testing and visual validation. Still plenty of failures, but it doesn't feel like it fails a lot if they're spaced far apart on the status page. The "uptime" is time-driven, not bugs-per-lines-of-code driven.
DevOps' purpose is to teach you that you can move quickly without breaking stuff, but it requires a particular way of working, that emphasizes building trust. You can't just ship random stuff 100x faster and assume it will work. This is what the "move fast and break stuff" people learned the hard way years ago.
And breaking stuff isn't inherently bad - if you learn from your mistakes and make the system better afterward. The problem is, that's extra work that people don't want to do. If you don't have an adult in the room forcing people to improve, you get the disasters of the past month. An example: Google SREs give teams error budgets; the SREs are acting as the adult in the room, forcing the team to stop shipping and fix their quality issues.
One way to deal with this in DevOps/Lean/TPS is the Andon cord. Famously a cord introduced at Toyota that allows any assembly worker to stop the production line until a problem is identified and a fix worked on (not just the immediate defect, but the root cause). This is insane to most business people because nobody wants to stop everything to fix one problem, they want to quickly patch it up and keep working, or ignore it and fix it later. But as Ford/GM found out, that just leads to a mountain of backlogged problems that makes everything worse. Toyota discovered that if you take the long, painful time to fix it immediately, that has the opposite effect, creating more and more efficiency, better quality, fewer defects, and faster shipping. The difference is cultural.
This is real DevOps. If you want your AI work to be both high quality and fast, I recommend following its suggestions. Keep in mind, none of this is a technical issue; it's a business process issue.
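The error-budget mechanism mentioned above is trivial arithmetic: the SLO fixes how much downtime a team may "spend" in a window before shipping stops. A sketch (the function name and 30-day window are my own choices; 30 days is a common convention):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window for a given availability SLO.
    E.g. a 99.9% SLO leaves (1 - 0.999) of the window as budget."""
    return (1.0 - slo) * window_days * 24 * 60

print(round(error_budget_minutes(0.999), 1))  # 43.2 minutes/month at 99.9%
print(round(error_budget_minutes(0.98), 1))   # 864.0 minutes/month (~14.4 h) at 98%
```

Which makes the article's "98% uptime becoming the norm" concrete: that's over half a day of outage every month.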
It's a systems engineering job. You need to provide context, acceptable failure modes, and test at each level for validation. Identify false coupling, poor interfaces, things that don't match business context during agent planning phase. Then communicate / translate to others so their decisions improve instead of destroying the system by optimizing only for their local situation.
It also seems like massive consolidation has caused issues too. Everyone is on GitHub. Everyone is on AWS. Everyone is behind Cloudflare. Whenever an issue happens there, it affects everyone, and everyone sees it.
In the past with smaller services those services did break all the time, but the outage was limited to a much smaller area. Also systems were typically less integrated with each other so one service being down rarely took out everything.
This aligns with my observation from product design point as well.
Product design has a slightly different problem than engineering: the speed of development is so high that we cannot dogfood and play with new product decisions and features. By the time I’ve realized we made a stupid design choice and it doesn’t really work in the real world, we've already built 4 features on top of it. Everyone makes bad product decisions, but it used to be easy and natural to back out of them.
It’s all about how we utilize these things; if we focus on sheer speed it just doesn’t work. You need to own architecture and product decisions. You need to use and test your products with humans (and automate those tests as regression testing). You need to be able to hold all of the product or architecture in your mind and help agents make the right decisions with all the best practices you’ve learned.
Agree. The issue was never, how can we get our engineers to squirt out more lines of code in a day? It has always been, how can we effectively iterate using customer feedback to deliver the highest quality product. That type of thing needs time to bake.
Nature will handle this in time. Just expect to see a "Bear Stearns moment" in the software world if this spirals completely out of control (and companies don't take a hint from recent outages).
I’m worried we end up with an AIG moment, and we all end up on the hook.
> You installed Beads, completely oblivious to the fact that it's basically uninstallable malware.
Did I miss something? I haven't used it in a minute, but why is the author claiming that it's "uninstallable malware"?
Malware might be a bit of stretch but could refer to this issue?
https://github.com/steveyegge/beads/issues/1857
Maybe they meant un-uninstallable?
It's not really malware, but it's a mess. It installed so much shit and it interfered with your git hooks and stuff. It was kind of messy. I kind of gave up on it. I just went back to using built-in claude code todowrite tasks.
It managed to throw itself into a global file for me that Claude used, which caused Beads to appear in random projects on my machine. Because of how it got there, the agent attempted to re-install Beads after I had already removed it, because the git hook errored.
Haven't tried it, but this rewrite might be better?
https://github.com/Dicklesworthstone/beads_rust
I think the core idea here is a good one.
But in many agent-skeptical pieces, I keep seeing this specific sentiment that “agent-written code is not production-ready,” and that just feels… wrong!
It’s just completely insane to me to look at the output of Claude code or Codex with frontier models and say “no, nothing that comes out of this can go straight to prod — I need to review every line.”
Yes, there are still issues, and yes, keeping mental context of your codebase’s architecture is critical, but I’m sorry, it just feels borderline archaic to pretend we’re gonna live in a world where these agents have to have a human poring over every single line they commit.
Maybe in the future humans won't need to pore over every line. However, I quickly learn which interns I can trust and whose code I need to pore over - I don't trust AI because it has been wrong too often. I'm not saying AI is useless - I do most of my coding with an agent, but I don't trust it until I verify every line.
I did this for a while… and until Opus 4.5, I couldn't fully trust the model. But at this point, while it does make the occasional mistake, I don't need to scrutinize every line. Unit and integration tests catch the bugs we can imagine, and the bugs we can't imagine take us by surprise, which is how it has always been.
Even with 4.6 I find there are a lot of mistakes it makes that I won't allow. Though it is also really good at finding complex thread issues that would take me forever...
Were you not reviewing every line when a human wrote it before it went to prod? I think the output of these tools is about as good as a human would write - which means it needs thorough review if I’m going to be on the hook to resolve its issues at 2AM.
Yeah in many places we had two humans with context on every line, and now we're advocating going to zero?
Maybe that's the distinction. If I write it, you can call me at 2AM. If an AI wrote it, call the AI at 2AM.
Oh, it can't take the phone call and fix the issue? Then I'm reviewing its output before it goes into prod.
How do you know which lines you need to review and which you don't?
Does it feel archaic because LLMs are clearly producing output of a quality that doesn't require any review, or because having to review all the code LLMs produce clips the productivity gains we can squeeze out of them?
We live in a world where every line of code written by a human should be reviewed by another human. We can't even do that! Nothing should go straight to prod ever, ever ever, ever.
> Nothing should go straight to prod ever, ever ever, ever
Air Traffic Control software - sure. 99% of the other software out there that is not mission-critical (like Facebook) just gets punched straight to production - "move fast and break shit" was cool way before "AI".
> Nothing should go straight to prod ever, ever ever, ever.
I'm one-shotting AI code for my website without even looking at it. Straight to prod (well, github->cf worker). It is glorious.
Prod in this context doesn't refer to one person's website for their personal project. It refers to an environment where downtime has consequences, generally one that multiple people work on and that many people rely on.
It is not a personal project.
This is a bit of a no true Scotsman take but I agree with it anyway.
It's tough to not interpret this as "I don't care about my website". Do you not check the copy? Or what if AI one-shots something that will harm your reputation in the metadata?
Then I'll read the diffs after the fact and have AI fix it. ¯\_(ツ)_/¯
That sounds better. I assume the stakes are low enough that you are happy reviewing after the fact, but setting up a workflow to check the diffs before pushing to production shouldn't be too difficult
Of course. I could do a PR review process, but what's the point. It is just a static website.
There's a middle ground here. Code for your website? Sure, whatever, I assume you're not Dell and the cost of your website being unavailable to some subset of users for a minute doesn't have 5 zeroes on the end of it. If you're writing code being used by something that matters though you better be getting that stuff reviewed because LLMs can and will make absolutely ridiculous mistakes.
> There's a middle ground here.
I'm responding to this statement: "Nothing should go straight to prod ever, ever ever, ever."
Is that a personal website? Prod means different things in different contexts. Even then, I'd be a bit worried about prompt injection unless you control your context closely (no web access etc).
Prompt injection?! Give me an example.
Were people reviewing your hobby projects previously? Were you on-call for your hobby website? If not - then it sounds like nothing changed?
This is my business website.
[Note: It may be very risky to submit anything to this users site]
I'm not sure doing silly things, then advertising it, is a great way to do business, but to each their own.
So many assumptions.
It is a static website hosted on CF workers.
It’s not archaic, it’s due diligence, until we can expect AI to reliably apply the same level of diligence — which we’re still pretty far off from.
You sound like you are working on unimportant stuff. Sure, go ahead, push.
If you keep the scope small enough it can be production ready ootb, and with some stuff (eg. a throwaway React component) who really cares. But I think it's insane to look at the output of Claude Code or Codex with frontier models and say "yep, that looks good to me".
Fwiw OP isn't an agent skeptic, he wrote one of the most popular agent frameworks.
It's a conversation I've had many times in my career and I'm sure I'll have many more. We've got code that seems plausible on a surface level, at a glance it solves the problem it's meant to solve - why can't we just send it to prod and address whatever problems we find with it later?
The answer is that it's very easy for bad code to cause more problems than it solves. This:
> Then one day you turn around and want to add a new feature. But the architecture, which is largely booboos at this point, doesn't allow your army of agents to make the change in a functioning way.
is not hypothetical, but a common failure mode that routinely happens today to teams who don't think carefully enough about what they're merging. I know a team of a half-dozen people that's been working for years to dig themselves out of that hole; because of bad code they shipped in the past, changes that should have taken a couple of hours without agentic support take days or weeks even with it.
You say it's borderline archaic. I say trusting agents enough to not look at every single line is an abdication of ethics, safety, and engineering. You're just absolving yourself of any problems. I hope you aren't working in medical devices or else we're going to get another Therac-25. Please have some sort of ethics. You are going to kill people with your attitude.
Almost nobody works on medical devices... And some of you lucky folks might be working with mega minds every day, but the rest of us are but shadows and dust. I trust 5.4 or 4.6 more than most developers. Through applying specific pressure using tests and prompts, I force it to build better code for my silly hobby game than I ever saw in real production software. Before those models I was still on the other side of the line, but the writing is on the wall.
This assumes that only (AI/agentic) stupidity comes into play, with no malice in sight. But if things go wrong because you didn't notice the stupidity, malice will pass through too. And there is a big profit opportunity, and a broad vulnerable market, for malice. It's not just correctness or uptime that's at stake, but bigger risks of vulnerabilities or other maliciously injected content.
> And I would like to suggest that slowing the fuck down is the way to go. Give yourself time to think about what you're actually building and why. Give yourself an opportunity to say, fuck no, we don't need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.
This is a great point.
I have been avoiding LLMs for a while now, but realized that I might want to try working on a small PDF-book-to-Markdown conversion project[0]. I like Claude Code because it lives on the command line. I'm realizing you really need to architect with very precise language to avoid mistakes.
I didn't try to have one prompt do everything at once. I prompted Claude Code to do the conversion section by section of the document. That seemed to reduce the mistakes the agent would make.
[0]: https://www.scottrlarson.com/publications/publication-my-fir...
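The section-by-section approach can be as simple as splitting the source at headings before each prompt. A sketch of what I mean (the function name is mine; this assumes Markdown-ish input with `##` section headings):

```python
def split_by_heading(markdown: str, level: int = 2) -> list[str]:
    """Split a Markdown document into sections at the given heading level,
    so each chunk can be handed to the model in a separate prompt."""
    marker = "#" * level + " "
    sections, current = [], []
    for line in markdown.splitlines():
        if line.startswith(marker) and current:
            sections.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        sections.append("\n".join(current))
    return sections

doc = "intro text\n## One\nbody a\n## Two\nbody b"
print(len(split_by_heading(doc)))  # 3 chunks: preamble, "One", "Two"
```

Smaller chunks keep the model's attention on one section at a time, which matches what I observed about mistake rates.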
I only have so long on earth. (I have no idea how long) I need things to be faster for me. Sometimes that means I need to take extra time now so they don't come back to me later.
I am "playing" with both pi and Claude (in docker containers) with local llama.cpp and as an exercise, I asked both the same question and the results are in this gist:
https://gist.github.com/ontouchstart/d43591213e0d3087369298f...
(Note: pi was written by the author of the post.)
Now it is time to read them carefully without AI.
What I have learned from the exercise above is that we paid more attention and spent more resources on "metadata" than on real data. Those are the rabbit holes that lead us to more metadata and make us forget what we really want.
We are all rabbits.
If there is anyone who absolutely should slow down, it's the folks who are actively integrating company data with an agent -- you are literally helping to remove as many jobs as possible, from your colleagues, and from yourselves, not in the long term, but in the short term.
Integration is the key to the agents. Individual usages don't help AI much because it is confined within the domain of that individual.
> you are literally helping to remove as many jobs as possible, from your colleagues, and from yourselves, not in the long term, but in the short term
Pull the bandaid off quickly, it hurts less.
> If there is anyone who absolutely should slow down, it's the folks who are actively integrating company data with an agent -- you are literally helping to remove as many jobs as possible, from your colleagues, and from yourselves, not in the long term, but in the short term.
I'm one of those people and I'm not going to slow down. I want to move on from bullshit jobs.
The only people that fear what is coming are those that lack imagination and think we are going to run out of things to do, or run out of problems to create and solve.
> I want to move on from bullshit jobs.
So are you aiming for death or poverty? Once those bullshit jobs go, we’re going to find a lot of people incapable of producing anything of value while still costing quite a bit to upkeep. These people will have to be gotten rid of somehow.
> and think we are going to run out of things to do, or run out of problems to create and solve.
There will be plenty of problems to solve. Like who will wipe the ass of the very people that hate you and want to subjugate you.
Name a single time doomers were right about anything. Doomers consistently overstate their expected outcome in every single domain and consistently fail to predict how society evolves and adapts.
Again:
The only people that fear what is coming are those that lack imagination and think we are going to run out of things to do, or run out of problems to create and solve.
Climate change would be a big one.
Also, there have been plenty of awful things caused by technological progress. Tons of death and poverty was created by the transition to factories and mechanization 150 years ago.
Did we come out the other end with higher living standards? Yes, but that doesn't make the decades of brutal transition period any less awful for those affected.
> Climate change would be a big one.
That's generous. Climate scientists were right, climate doomers were definitely wrong.
Society is mostly unchanged due to climate change. That's not to say climate has no effect, but it is certainly still not some doomer scenario that's played out. New York and Florida are most certainly not underwater as predicted by the famous "Inconvenient Truth". People still live in deserts just as they always have. Human lifespan is still increasing. We have less hunger worldwide than ever before, etc.
Climate change doomers conveniently leave out the part where climate has ALWAYS affected society and is one of the main inputs to our existence, therefore we are extremely adaptable to it.
Before "climate change" ever entered the general consciousness, climate wiped out civilizations MORE FREQUENTLY than it does now. All signs point to doomers being wrong and yet they all hold onto it stubbornly.
Doomers were never impressive because they got anything right, they are impressive because they have the unique skill of moving the goalpost when they are wrong. Any time you think the goalpost can't be moved further out, they prove it's possible.
The effects of climate change are just starting to happen. Ecosystems are dying. Very few "climate doomers" thought the world would be like the Day after Tomorrow.
The earth is becoming more hostile to its inhabitants. There are famines caused by climate change. We will undoubtedly, within the next 20 years, see mass migration from the areas hardest hit.
Climate scientists, and climate reporting, often UNDERSTATED the worst of these effects.
I think it'd be worth stating what your definition of doomerism is. For me, seeing the increase in forest fires, seeing the sky reddened and the air quality diminished, and floods and hurricanes on the rise... being able to buy a Big Mac doesn't make any of that less dire.
The CO2 concentration continues to climb year after year, at an accelerating rate. The world hasn't ended yet because it's still 2026 but it doesn't mean it won't.
We're on a hothouse earth trajectory. All signs point to you not being aware of serious climate research and hanging on to a naive Steven Pinker "everything is always improving" outlook.
> Name a single time doomers were right about anything.
- NFTs
- Surveillance schizos
- Global Pedophile Cabal schizos
- Anyone who didn’t believe we were a year out from Star Trek living when LLMs first started picking up steam
- People who predicted that the flood of people entering software via bootcamps, etc. would eventually cause problems, against those insisting their god of software was consuming the world too quickly for supply and demand to ever be a real concern.
- Anyone amongst the sea of delusional democrats who did indeed believe Trump could win a second term.
All of those doomers were vindicated, and that’s just recently.
- NFTS doomers? I mean I appreciate the humor here.
- Surveillance schizos - Society still works
- Global Pedophile Cabal schizos - Again, funny use of 'doomers' but that's what the current society seems to be run by so I wouldn't say it's fitting for doomerism.
- People who predicted the flood of people entering Software via bootcamps, etc. would never cause any problems because their god of software is consuming the world too quickly for supply and demand to ever be a real concern.
None of these things are that disruptive to our society at large. You will still be able to walk down the street and grab a Big Mac pretty much any day of the week. A large portion of society is going to look at all of what you're worried about and say "it's not that serious" while consuming their 20-second videos. What do you think is a valid doomer warning that came true? Or do you think literally everything that is pessimistic is doomerism?
I was thinking the other day about why a "global pedophile cabal" would be a thing. I still think that phrase overstates it a bit, but not that much.
Committing a crime with someone bonds you to them.
First, it's a kind of shared social behavior, and it's one that is exclusive to you and your friends who commit the same kinds of crimes. Any shared experience bonds people, crimes included. Having a shared secret also bonds people.
Second, it creates an implied pact of mutually assured destruction. Everyone knows the skeletons in everyone else's closet, so it creates a web of trust. Anyone defecting could possibly be punished by selectively revealing their crimes, and vice versa. Game theoretically it overcomes tit-for-tat and enables all-cooperate interactions, at least to some extent, and even among people who otherwise don't like each other or don't have a lot in common.
Third, it separates the serious from the unserious. If you want to be a member of the club, do the bad thing. It's a form of high cost membership gating.
This works for other kinds of crimes too. It's not that unusual for criminal gangs to demand that initiates commit a crime and provide evidence, or commit a crime in front of existing members. These can be things like robbery, murder, and so on. Anyone not willing to do this probably isn't serious and can't be trusted. Once someone does do it, you know they're really in.
It naturally creates cabals. The crime comes first, the cabal second, but then the cabal can realize this and start using the crime as a gateway to admission.
Every mutual interest creates a community, but a secret criminal mutual interest creates a special kind of tight knit community. In a world that's increasingly atomized and divided, that's power. I think it neatly explains how the Epstein network could be so powerful and effective.
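The game-theoretic claim above (mutual blackmail overcoming tit-for-tat and enabling all-cooperate) can be made concrete with a minimal, purely illustrative sketch. All payoff values here are hypothetical, chosen only to show how a mutual-exposure penalty flips the best response in a standard prisoner's dilemma:

```python
# Illustrative sketch (hypothetical payoffs): in a plain prisoner's
# dilemma, defecting dominates. If both players hold incriminating
# material on each other, defection invites mutual exposure, which
# drops the defector's payoff below what cooperation yields.

def payoff(my_move, their_move, mutual_blackmail):
    # Standard PD payoffs: temptation=5, reward=3, punishment=1, sucker=0
    table = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    p = table[(my_move, their_move)]
    # Defecting against a fellow member triggers exposure of your own
    # crimes: a large fixed cost (hypothetical value) for the defector.
    if mutual_blackmail and my_move == "D":
        p -= 10
    return p

def best_response(their_move, mutual_blackmail):
    # Pick the move that maximizes my payoff against their move.
    return max(("C", "D"), key=lambda m: payoff(m, their_move, mutual_blackmail))

# Without blackmail, defection is the best response to cooperation;
# with mutual blackmail, cooperation becomes the best response.
print(best_response("C", mutual_blackmail=False))
print(best_response("C", mutual_blackmail=True))
```

The point of the sketch is only that a sufficiently large shared-secret penalty makes all-cooperate an equilibrium among people who would otherwise defect on each other.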
If we can't slow down, maybe accelerating is the next best option for ordinary people.
That's a mighty high horse you are riding there
Ah yes, me on a high horse. Not the person whose entire worldview depends on defying the Nash equilibrium. You're all wasting brain cycles discussing some unrealistic cooperative agreement to slow down and sing "kumbaya," and telling us that if we don't get there we'll end up homeless on the streets. If this is me on a horse, then you are on top of an ivory tower managing my beast of burden.
Exactly. The amount of bs bloatwork anywhere I've ever worked is insane and growing. We need to move on.
I think before even being able to entertain the thought of slowing the fuck down, we need to seriously consider divorcing ourselves from productivity. Or at least asking for a break, so you can go for a walk in the park, meet some friends, and reflect on how you're approaching development.
I think this is very good take on AI adoption: https://mitchellh.com/writing/my-ai-adoption-journey. I've had tremendous success with roughly following the ideas there.
> The point is: let the agent do the boring stuff, the stuff that won't teach you anything new, or try out different things you'd otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation.
That's partially true. I've also had instances where I could have very well done a simple change by myself, but by running it through an agent first I became aware of complexities I wasn't considering and I gained documentation updates for free.
Oh and the best part: if in three months I'm asked to compile a list of things I did, I can just look at my session history, cross-reference it with the development history in my repositories, and paint a very good picture of what I've achieved. I can even reconstruct the decision process behind each solution's design.
It's always a win to run things through an agent.
Eh, I think it's a self-correcting problem.
Companies will face the maintenance and availability consequences of these tools but it may take a while for the feedback loop to close
Every problem is self-correcting in that some new normal will emerge. Either through acceptance or because something is changed.
It's very hard to say right now what happens on the other side of this change.
All these new growing pains are happening in many companies simultaneously, and they are happening at elevated speed. While that change is taking place it can be quite disorienting, and if you want to take a forward-looking view it can be quite unclear how you should behave.
Unfortunately, I think the lesson from recent history seems to be that outside of highly-regulated industries, customers and businesses will accept terrible quality as long as it's cheap.
Yes, every slack is optimized out of systems. If something has an ounce more quality than would suffice to obtain the same profit, it must be cut out. It's an inefficiency. A quality overhang. If people buy it even if it's crap, then the conclusion is that it has to be crap, else money is left on the table. It's a large scale coordination issue. This gives us a world where everything balances exactly near the border where it just barely works, for just barely enough time.
Nah, there is a quality floor that consumers are willing to accept. Once you get below that, where it's actually affecting their lives in a meaningful way, it will self-correct as companies will exploit the new market created for quality products.
True but there is a limit, there are still levels of quality
Levels of enshittification, more often than not.
It's not even the complexity. You have to realize: many managers and business types think it's just fine to have code no one understands, because AI will handle it.
I don't agree, but the bigger issue to me is that many, maybe most, companies don't even know what they want or think about what the purpose is. Whereas in the past, devs coding something provided some throttle or sanity check, now we just throw shit over the wall even faster.
I'm seeing some LinkedIn lunatics brag about "my idea to production in an hour" and all I can think is: that is probably a terrible feature. No one I've worked with is that good or visionary where that speed even matters.
I just wish someone would explain why I prefer Cline to Claude Code so much.
I for one look forward to rewriting the entirety of software after the chatbot era
> While all of this is anecdotal, it sure feels like software has become a brittle mess
That may be the case wherever AI leaks in, but not every software developer uses or depends on AI. So not all software has become more brittle.
Personally I try to avoid any contact with software developers using AI. This may not be possible, but I don't want to waste my own time "interacting" with people who aren't really the ones writing code anymore.
hope my boss can see this
It's 2026, the "fuck" modifier for post titles by "thought leaders" has been done already ad nauseam. Time to retire it and give us all a break.
If we're on the subject of tropes: https://theonion.com/report-stating-current-year-still-leadi...