Seems like reading the code is now the real work. AI writes PRs instantly, but reviewing them still takes time. Everything flipped. Expect more projects to follow - maintainers can just use AI themselves without needing external contributions.
Understanding (not necessarily reading) always was the real work. AI makes people less productive because it's speeding up the thing that wasn't hard (generating code), while generating additional burden on the thing that was hard (understanding the code).
There are many cases in which I already understand the code before it is written. In these cases AI writing the code is pure gain. I do not need to spend 30 minutes learning how to hold the Bazel rule. I do not need to spend 30 minutes writing client boilerplate. The list goes on. All broad claims about AI's effects on productivity have counterexamples. It is situational. I think most competent engineers quietly using AI understand this.
The problem is, even if all that is true, it says very little about the distribution of AI-generated pull requests to GitHub projects. So far, from what I’ve seen, those are overwhelmingly not done by competent engineers, but by randos who just submit a massive pile of crap and expect you to hurry up and merge it already. It might be rational to auto-close all PRs on GitHub even if tons of engineers are quietly using AI to deliver value.
I mean we did copy/paste before this? Also create-react-app is basically that. And probably better than a stochastic AI generating it.
It makes a great code-reading tool if you use it mindfully. For instance, you can check the integrity of your tests by having it fuzz the implementation, confirming that the tests fail, and then running git checkout to get a clean tree again.
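A minimal Python sketch of that workflow, with everything below invented for illustration: swap a deliberately broken implementation in for the real one, confirm the test suite now fails, then restore the original (the step git checkout performs on real files).

```python
# Hand-rolled mutation check: strong tests should fail when the
# implementation is deliberately broken. All names are hypothetical.

def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5

def mutated_add(a, b):
    return a - b  # the deliberate bug (the "fuzzed" implementation)

def tests_catch_mutation():
    """Return True if the test suite fails against the mutated version."""
    global add
    original = add
    add = mutated_add          # swap in the broken implementation
    try:
        test_add()
        return False           # tests still pass -> they are too weak
    except AssertionError:
        return True            # tests failed, as they should
    finally:
        add = original         # the "git checkout" step: restore clean state

print(tests_catch_mutation())  # True
```

Mutation-testing tools automate this idea, but even the manual version quickly exposes tests that assert nothing meaningful.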
AI makes people less productive because it’s speeding up the thing that was hard: training AI for better future AI.
The productivity gets siphoned to the AI companies owning the AI.
In the civic tech hacknight community I'm part of, it's hard to collaborate the same way now, at least when people are using AI, mostly because code now feels so disposable and fast. It's like the pace layers have changed.
It's been proposed that we start collaborating on specs and just keep regenerating the code like it's CI, to get back the feeling of collaboration without holding back the energy and speed of agent coding.
Clowns will just use LLMs to post slop comments in the spec discussions.
This is probably true, and while I expect productivity to go up, I also expect "FOSS maintainer burnout" to skyrocket in the coming years.
Everyone knows reading code is one-hundredth as fun as writing it, and while we have to accept some amount of reading as the "eating your vegetables" part of the job, FOSS project maintainers are often in a precarious enough position as it is re: job satisfaction. I think having to dramatically increase the proportion of reading to writing, while knowing full well that a bunch of what they are reading was created by some bozo with a CC subscription and little understanding of what they were doing, will lead to a bunch of them walking away.
Not to worry! Microslop probably has a product in the works to replace disgruntled open-source maintainers with agreeable, high-review-throughput agentic systems.
That's interesting; another project stopped letting users directly open issues: https://news.ycombinator.com/item?id=46460319
Check Ghostty "CONTRIBUTING.md#ai-assistance-notice"
https://github.com/ghostty-org/ghostty/blob/main/CONTRIBUTIN...

Mitchell Hashimoto (2025-12-30): "Slop drives me crazy and it feels like 95+% of bug reports, but man, AI code analysis is getting really good. There are users out there reporting bugs that don't know ANYTHING about our stack, but are great AI drivers and producing some high quality issue reports.
This person (linked below) was experiencing Ghostty crashes and took it upon themselves to use AI to write a python script that can decode our crash files, match them up with our dsym files, and analyze the codebase for attempting to find the root cause, and extracted that into an Agent Skill.
They then came into Discord, warned us they don't know Zig at all, don't know macOS dev at all, don't know terminals at all, and that they used AI, but that they thought critically about the issues and believed they were real and asked if we'd accept them. I took a look at one, was impressed, and said send them all.
This fixed 4 real crashing cases that I was able to manually verify and write a fix for from someone who -- on paper -- had no fucking clue what they were talking about. And yet, they drove an AI with expert skill.
I want to call out that in addition to driving AI with expert skill, they navigated the terrain with expert skill as well. They didn't just toss slop up on our repo. They came to Discord as a human, reached out as a human, and talked to other humans about what they've done. They were careful and thoughtful about the process.
People like this give me hope for what is possible. But it really, really depends on high quality people like this. Most today -- to continue the analogy -- are unfortunately driving like a teenager who has only driven toy go-karts. Examples: https://github.com/ghostty-org/ghostty/discussions?discussio... " ( https://x.com/mitchellh/status/2006114026191769924 )
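For a sense of what such a crash-decoding script involves: on macOS, raw crash addresses are typically symbolicated against the app's dSYM bundle with Apple's `atos` tool. A rough Python sketch of that one piece; all paths and names here are invented, and this is not the reporter's actual script.

```python
# Build and run an `atos` invocation that maps raw crash-report
# addresses back to symbol names using a dSYM's DWARF binary.
import subprocess

def atos_command(addresses, dwarf_binary, load_address, arch="arm64"):
    """Assemble the atos command line (macOS)."""
    return [
        "atos",
        "-o", dwarf_binary,   # e.g. App.dSYM/Contents/Resources/DWARF/App
        "-arch", arch,
        "-l", load_address,   # image load address from the crash report
        *addresses,           # raw frame addresses to symbolicate
    ]

def symbolicate(addresses, dwarf_binary, load_address):
    """Run atos and return one symbol line per address (macOS only)."""
    out = subprocess.run(atos_command(addresses, dwarf_binary, load_address),
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()
```

The reporter's script reportedly also matched crash files to the right dSYMs and fed the results to an agent for root-cause analysis; the symbolication step above is just the mechanical core.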
> With luck, GitHub will soon roll out management features that let us open things back up.
I wouldn't bet on it
SlopHub
Didn't take long before the quality went downhill.
Skynet was evil and impressive in The Terminator. Skynet 3.0 in real life sucks - the AI slop annoys the hell out of me. I now need a browser extension that filters away ALL AI.
> An open pull request represents a commitment from maintainers: that the contribution will be reviewed carefully and considered seriously for inclusion.
This has always been the problem with GitHub culture.
On the Linux and GCC mailing lists, a posted patch does not represent any kind of commitment whatsoever from the maintainers. That's how it should be.
The fact that GitHub puts the number of open pull requests at the very top of every single page related to a project, in an extremely prominent position, is the sort of manipulative "driving engagement" nonsense you'd expect from social media, not from serious engineering tools.
The fact that you have to pay GitHub money to permanently turn off pull requests or issues (I mean turn off, not automatically close with a bot) is another one of these. BTW, Codeberg lets any project disable these things.
A LinkedIn comment I made on an adjacent topic:
> If the job market is unfavourable to juniors, become senior.
That requires networking deep enough that other professionals are willing to critique your work.
So... open-source contributions, I guess?
This increases pressure on senior developers who are the current maintainers of open-source packages at the same time that AI is stealing the attention economy that previously rewarded open-source work.
Seems like we need something like blockchain gas on open-source PRs to reduce spam, incentivize open-source maintainers, and enable others to signal their support for suggestions while also putting money where their mouth is.
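Purely as a thought experiment, the mechanics need nothing blockchain-specific; an escrow ledger captures the idea. Everything below is invented: a submitter stakes a deposit per PR, refunded on merge and forfeited to the maintainer when the PR is closed as spam.

```python
# Toy "gas on PRs" ledger: deposits discourage spam, forfeited
# stakes compensate maintainers. Entirely hypothetical.
class PRStake:
    def __init__(self, deposit=5):
        self.deposit = deposit
        self.escrow = {}      # pr_id -> (submitter, staked amount)
        self.balances = {}    # account -> token balance

    def open_pr(self, pr_id, submitter):
        """Opening a PR locks the submitter's deposit in escrow."""
        self.balances[submitter] = self.balances.get(submitter, 0) - self.deposit
        self.escrow[pr_id] = (submitter, self.deposit)

    def merge(self, pr_id):
        """A merged PR refunds the stake to the submitter."""
        submitter, amount = self.escrow.pop(pr_id)
        self.balances[submitter] += amount

    def close_as_spam(self, pr_id, maintainer):
        """A spam-close forfeits the stake to the maintainer."""
        _, amount = self.escrow.pop(pr_id)
        self.balances[maintainer] = self.balances.get(maintainer, 0) + amount
```

A serious version would still need sybil resistance and an appeals path, which is where most of the hard problems live.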
They invited AI in by creating a comprehensive list of instructions for AI agents - in the README, in a context.md, and even as yarn scripts. What did they expect?
Wouldn't that be for their own usage? Its presence doesn't implicitly mean they want incomplete PRs submitted to their repository constantly.
The CONTEXT.md file was created 5 months ago, and the contribution policy changed today. I would interpret that as a good-faith attempt to work with AI agents, which with some experience, didn't work as well as they hoped.
> <BROWN AND WHITE DRAWING OF AN ASSHOLE> claude added the Task issue type 4 hours ago
is this satire?
At first I aggressively banned anyone that submitted slop to my projects.
Then I just took my hosting private. I can’t be arsed to put in the effort when they don’t.