The 90-day responsible disclosure window was built for a world where bug finders were rare and exploit development was slow. That world is gone. LLMs have compressed both timelines to near-zero. I have seen it firsthand, and so has everyone else paying attention. This post lays out why the old model is broken, with real stories, and makes one ask of the industry: treat every critical security issue as P0 and patch it immediately.
I don't think this makes any sense. I can see that long delays in public reporting might not be good for the near future, but a year from now all of the easily found stuff will have been found. At some point, everything will have hardened to a certain extent, new things will get scanned before they hit the streets, and the only bugs being found will rely a lot more on somebody's insight than the LLM used to test that insight.
I think people are getting overly impressed/intimidated by tons of bugs being found by LLMs in a bunch of code that hasn't been looked at by more than a couple of people in years, or even at all since it was written. Those are going to run out. There won't be any code left that hasn't recently been looked over by an LLM.
That makes sense to me, but in a world where code is generated by the shovel-load (see https://news.ycombinator.com/item?id=48073680) could the pace of introducing bugs not match or exceed the rate of finding them indefinitely?