I think LLMs are accelerating us toward a Dune-like universe, where humans come before AI.
An interesting approach to the worsening signal-to-noise ratio OSS projects are experiencing.
However, it's not hard to envision a future where the exact opposite will occur: a few key AI tools/models will become specialized and better at coding/testing on various platforms than humans, and they will ignore or de-prioritize our input.
Hope github can natively integrate something in the platform, a relevant discussion I saw on official forums: https://github.com/orgs/community/discussions/185387
We'll ship some initial changes here next week to provide maintainers the ability to configure PR access as discussed above.
After that ships we'll continue doing a lot of rapid exploration, given there are still a lot of ways to improve here. We also just shipped some issue-related features, like comment pinning and +1 comment steering [1], to help cut through some of the noise.
Interested, though, to see what else like this emerges in the community; I expect we'll see continued experimentation, and that's good for OSS.
[1] https://github.blog/changelog/2026-02-05-pinned-comments-on-...
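For context on what repository-level restrictions look like today, GitHub's existing interaction-limits REST endpoint can temporarily restrict commenting, issue creation, and PRs to prior contributors or collaborators. A minimal sketch against that existing API (OWNER/REPO and the token are placeholders, and this is separate from the upcoming PR-access configuration described above):

    # Temporarily limit interactions on a repo to prior contributors via
    # GitHub's interaction-limits REST endpoint. Placeholders: OWNER/REPO,
    # GITHUB_TOKEN. This is not the new PR-access setting mentioned above.
    import os
    import requests

    resp = requests.put(
        "https://api.github.com/repos/OWNER/REPO/interaction-limits",
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        },
        json={"limit": "contributors_only", "expiry": "one_week"},
    )
    resp.raise_for_status()
    print(resp.json())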
The Web of Trust failed for PGP 30 years ago. Why will it work here?
For a single organisation, a list of vouched users sounds great. GitHub permissions already support this.
My concern is with the "web" part. Once you have orgs trusting the vouch lists of other orgs, you end up with the classic problems of decentralised trust:
1. The level of trust is only as high as the lax-est person in your network
2. Nobody is particularly interested in vetting new users
3. Updating trust rarely happens
There _is_ a problem with AI slop overrunning public repositories. But WoT has already failed once; we don't need to try it again.
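To make point 1 above concrete, here is a minimal sketch of how trust propagates once orgs trust each other's vouch lists transitively; all org names and scores are hypothetical. Anyone reachable only through a lax voucher still ends up in your effective trust set, capped only by that weakest vouch:

    # Hypothetical vouch graph: each org scores (0.0-1.0) the users/orgs it vouches for.
    vouches = {
        "org_a": {"org_b": 0.9, "alice": 1.0},
        "org_b": {"org_c": 0.5, "bob": 0.6},   # org_b is the lax link
        "org_c": {"mallory": 0.9},
    }

    def effective_trust(root: str, target: str) -> float:
        """Best trust root can place in target: the minimum score along each
        vouch chain (a chain is only as strong as its weakest vouch),
        maximized over all chains."""
        best = 0.0
        stack = [(root, 1.0, {root})]
        while stack:
            node, score, seen = stack.pop()
            for nxt, s in vouches.get(node, {}).items():
                chain = min(score, s)          # weakest link dominates
                if nxt == target:
                    best = max(best, chain)
                elif nxt not in seen:
                    stack.append((nxt, chain, seen | {nxt}))
        return best

    # org_a never vouched for mallory, yet inherits trust in them through
    # org_b's lax vouching of org_c.
    print(effective_trust("org_a", "mallory"))  # 0.5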
Web of Trust failed? If you saw that a close friend had signed someone else's PGP key, you would be pretty sure it was really that person.
I think this project is motivated by the same concern I have: that open source (particularly on GitHub) is going to devolve into a slop fest as the barrier to entry lowers due to LLMs. For every principled developer who takes personal responsibility for what they ship, regardless of whether it was LLM-generated, there are 10 others who don't care and will pollute the public domain with broken, low-quality projects. In other words, I foresee open source devolving from a high-trust society to a low-trust one.
Makes sense, it feels like this just codifies a lot of implicit standards wrt OSS contribution, which is great to see. I do wonder if we'll ever see a tangible "reputation" metric used for contribs, or if it'd even be useful at all. Seems like the core tension now is just the ease of pumping out slop vs. the responsibility of code ownership and consideration for project maintainers.
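Purely as a thought experiment on what a tangible "reputation" metric could look like (the signals and smoothing below are made up, not anything GitHub computes): even something as crude as the share of a contributor's PRs that were merged and not later reverted would be measurable, though obviously gameable.

    # Hypothetical contributor-reputation score; illustrative only.
    from dataclasses import dataclass

    @dataclass
    class ContribStats:
        merged_prs: int
        closed_unmerged_prs: int   # rejected or abandoned
        reverted_prs: int          # merged, then reverted

    def reputation(s: ContribStats) -> float:
        """Score in (0, 1): share of PRs that were merged and stuck,
        smoothed so brand-new contributors start near neutral (0.5)."""
        good = s.merged_prs - s.reverted_prs
        total = s.merged_prs + s.closed_unmerged_prs
        return (good + 1) / (total + 2)

    print(reputation(ContribStats(merged_prs=20, closed_unmerged_prs=5, reverted_prs=1)))  # ~0.74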