> Authors would solve a problem in a way that ignored existing patterns
If you’re not writing your code, why do you expect people to read it and follow your lead on whatever convention you prefer?
I get people who hand-write their code being fussy about this, but the article starts off by devaluing coding entirely, then pivots to treating the way the codebase is written as something of value that needs to be followed.
It’s either low value or it isn’t; you can’t approach your code as worthless and then complain when others view it as worthless and not worth reading, too.
"Just show me the prompt."
If you don't have time, just write the damn issue as you normally would. I don't quite understand why one would waste so many resources and so much compute to expand some lazily conceived half-sentence into 10 paragraphs, as if it scores them some points.
If you don't have time to write an issue yourself or carefully proofread whatever the LLM makes up for you, whom are you trying to fool by making it look pretty? At least if it is visibly lazy, anyone knows to treat it with an appropriate grain of salt.
Even if you are one of those who likes to code by correcting LLMs all the time, surely you understand that if your LLM can make candy out of poo when you post an issue, it can do the exact same thing when it processes the issue and makes a PR. Next month it will likely do a better job of parsing your quick writing, and having it immediately "upscaled" would only hinder that future performance.
What would make sense for me is to use an AI to turn implicit context that is only there in the moment into explicit context that is stored in the ticket.
E.g. maybe you have your application open in a browser and are currently viewing a page with a very prominent red button. You hit that /issue command with "button should be yellow not red".
That half-sentence makes sense if you also have that open browser window as context, but would be completely cryptic without it.
An AI could use both the input and the browser window to generate a description like "The background color of the #submit_unsafe button widget in frontend/settings/advanced.tsx should be changed from red to yellow." or something.
Sort of like a semantic equivalent to realpath if you want.
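A minimal sketch of that "semantic realpath" idea, with the AI part left out: the tooling mechanically captures the in-the-moment context (current source file, the element on screen) and folds it into the stored ticket text, so the terse note stays intelligible later. All names here (`PageContext`, `expand_issue`, the `/issue` plumbing they stand in for) are hypothetical; in practice an LLM would resolve "that button" to the concrete element rather than having it handed over directly.

```python
from dataclasses import dataclass

@dataclass
class PageContext:
    """Hypothetical snapshot of what the reporter is looking at."""
    file: str            # source file rendering the current view
    element_id: str      # DOM id of the element in question
    observed_style: str  # what the reporter currently sees

def expand_issue(note: str, ctx: PageContext) -> str:
    """Turn a terse, context-dependent note into an explicit issue body
    by attaching the captured context to it."""
    return (
        f"{note.strip().capitalize()}.\n\n"
        "Context captured at report time:\n"
        f"- element: #{ctx.element_id} in {ctx.file}\n"
        f"- observed style: {ctx.observed_style}\n"
    )

ctx = PageContext(
    file="frontend/settings/advanced.tsx",
    element_id="submit_unsafe",
    observed_style="background: red",
)
print(expand_issue("button should be yellow not red", ctx))
```

The point of the sketch is only where the information comes from: the ticket ends up self-contained because the implicit context was recorded at the moment of reporting, not reconstructed (or hallucinated) later.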
I do see utility in that.
The context window before a prompt is often large and contains all sorts of information, though; it wouldn't be just the prompt in isolation.
I was going by this example:
> /issue you know that paint bucket in google docs i want that for tldraw so that I can copy styles from one shape and paste it to another, if those styles exist in the other shape. i want to like slurp up the styles
What kind of context may be there?
Also, the entire repository and issue tracker are context. Over time they only get more complete.
> If writing the code is the easy part, why would I want someone else to write it?
Exactly my takeaway from current AI developments as well. I am also confused by corporate or management types who seem to think they are immune to AI developments. If AI ever does get to the point where it can write flawless code, what exactly makes them think they will do any better at composing these tools than the developers who've been working with this technology for years? Their job security rests precisely ON THE FACT that we are limited by time and need managed teams of humans to create larger projects. If that limitation falls, I feel like their jobs would be the first on the chopping block, long before mine as a developer. Competition from tech-savvy individuals would be massive overnight. A very weird horse to bet on, unless you are part of a frontier AI company that actually controls the resources.
Ultimately, this would lead to a situation where only the customer-facing (if there are any) or "business-facing" (i.e. C-suite) roles remain. I'm not sure I like that.
Do you think any of them cares about the long term? Regardless of AI, your head is always on a chopping block. You always grab that promo in front of you, even if it means you’ll be axed in two years by your own decisions.
I mean I understand that you want your business to not fall behind right now, sure. But I don't understand people in management who are audibly _excited_ about the prospect of these developments even behind closed doors. I guess some of them imagine they are the next Steve Jobs only held back by their dev teams, but most of them are in for a rude awakening lol. And I guess a lot are just grifting. The amount of psychotic B2B SaaS rambling on Twitter is already unbearable as is.
> As a high-powered tech CEO, I'm
cough linkedin cringe cough
You should never sign a CLA unless you're getting paid to.
> AI changed all of that. My low-effort issues were becoming low-effort pull requests, with AI doing both sides of the work. My poor Claude had produced a nonsense issue causing the contributor's poor Claude to produce a nonsense solution. The thing is, my shitty AI issue was providing value.
Seems like the shitty AI issue did more harm than good?
> Once we had the context we needed and the alignment on what we would do, the final implementation would have been almost ceremonial. Who wants to push the button?
> ...
> But if you ask me, the bigger threat to GitHub's model comes from the rapid devaluation of someone else's code. When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.
> If that's the case, which I'm starting to think it is, then it's better to limit community contribution to the places it still matters: reporting, discussion, perspective, and care. Don't worry about the code, I can push the button myself.
> Once or twice, I would begin fixing and cleaning up these PRs, often asking my own Claude to make fixes that benefited from my wider knowledge: use this helper, use our existing UI components, etc. All the while thinking that it would have been easier to vibe code this myself.
I had an odd experience a few weeks ago, when I spent a few minutes trying to find a small program I had written. It suddenly struck me that I could have asked for a new one, in less time than it took to find it.
Guy uses his project's GitHub issues as a personal TODO list, realizes his one-line GitHub issues look unprofessional, uses AI to hallucinate them into fake but realistic-looking issues, and then complains when he gets AI-slop PRs.
An alternative idea: use a TODO list and stop using GitHub Issues as your personal dumping ground, whether you use AI to pad them or not. If an item requires discussion or more detail and would warrant a proper issue, then write a proper issue.
We need a Chrome extension like SponsorBlock that publicly tags slop contributors. Maintainers could then just reject PRs from those users.