Security will be a wedge to restrict the sophistication of open-weight and local LLMs, just as it's been used to demonize and restrict cypherpunk technologies.
As long as it is within the country, restriction works. But how do you deny that capability to a foreign entity, especially a hostile one?
Netsplit, I guess. Decide that the risk of an open network is too great, simply block all routing out of the country through the ISPs, and consider the political power that goes along with a global satellite constellation under the rule of a single, government-aligned corporation.
>But new A.I. models like Anthropic’s Mythos, which was announced last month, appear to be so good at finding such holes that Anthropic shared it only with a limited number of firms and government agencies in the United States and Britain.
Immediate distrust of the article. GPT 5.5 is out with nearly the same capability. The author might be parroting company marketing, unable to discern that a lot of this is much less complex than it seems. For all we know this group could have had a model examine some obscure line of code thousands of times until it found something.
Immediate distrust of the article… The author might be parroting company marketing, unable to discern that a lot of this is much less complex than it seems.
https://www.nytimes.com/by/dustin-volz
> I am based in The Times’s Washington bureau, and much of my focus is on the dealings of U.S. cybersecurity and intelligence agencies, including the National Security Agency, Central Intelligence Agency, Cybersecurity and Infrastructure Security Agency and the Federal Bureau of Investigation, as well as their counterparts abroad, chiefly in China, Russia, Iran and North Korea.
> My remit spans nation-state hacking conflict, digital espionage, online influence operations, election meddling, government surveillance, malicious use of A.I. tools and other related topics.
> Before joining The Times, I worked at The Wall Street Journal, where I spent eight years covering cyber conflict and intelligence. My recent work at The Journal included a series of articles revealing a major Chinese intrusion of America’s telecommunications networks that breached the F.B.I.’s wiretap systems and has been described as one of the worst U.S. counterintelligence failures in history. I have also worked at Reuters and National Journal, where I began my career in Washington chronicling congressional efforts to reform surveillance practices at the N.S.A. in the wake of the 2013 Edward Snowden disclosures.
> My work has been internationally recognized, including by the White House Correspondents’ Association, the Gerald Loeb Awards, the Society of Publishers in Asia and the Society for Advancing Business Editing and Writing.
What have you done lately?
How many zero-day vulns has the article author discovered using AI-assisted methods?
Reporting on such stuff requires networking skills, not technical knowledge.
> Reporting on such stuff requires networking skills, not technical knowledge.
Guess how I know you've never been a reporter.
https://www.logicallyfallacious.com/logicalfallacies/Appeal-...
Not at all.
OP posited that the author didn't know what he's talking about. I pointed out that the author has far more knowledge and experience in the field than rando internet griefers on HN who immediately reach for "shoot the messenger" when they read something that doesn't neatly fit into their pre-conceived worldview, instead of perhaps learning things from other people.
But at least your trope acknowledges that he's an authority on the subject.
Black hat hacking seems to be a well-fit use case for these LLMs. Attackers only need to be right once, so the sometimes-wrongness of the attacks might be trivial. This probably devalues stashes of zero-day exploits for those that have been withholding them.
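The "only need to be right once" asymmetry can be put in numbers with a quick sketch (the per-attempt success rate here is invented for illustration): even if an AI-assisted attempt rarely pans out, enough cheap attempts make at least one success near-certain, while the defender has to block every single one.

```python
# Probability that at least one of n independent attack attempts succeeds,
# given a per-attempt success probability p (numbers are illustrative only).
def p_at_least_one_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(p_at_least_one_success(0.01, 1))     # a single 1%-likely attempt
print(p_at_least_one_success(0.01, 1000))  # ~99.996% after 1000 cheap attempts
```

This is also why occasional model wrongness matters so little on offense: failed attempts are nearly free, and only the hit counts.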
Can we link to the actual google article, instead of these editorialized articles about the article?
https://cloud.google.com/blog/topics/threat-intelligence/ai-...
People used LLMs to find flaws in Google software.
But did they use Gemini?
I don't know, but given how often Gemini refuses benign requests IME, I would suspect it's a complete non-starter for finding security holes.
The Google Threat Intelligence Group wants to increase its relevance and casually point out that it was not Mythos which found the exploit!
Security "researchers" are overpaid buffoons who hype things for their own salaries and their companies. And the stenographers from the press dutifully copy everything.
This is a despicable game to fool politicians into giving money and favorable AI legislation.
Strangely enough these buffoons never offer their models to open source developers. It is always a select group of highly paid other buffoons that throws some very occasional results over the wall.
Wait until the bio version of this shows up.
If "bad guy AI" can find flaws, can "good guy AI" patch them faster when backed by trillion dollar companies?
Do your AI patches introduce fewer flaws than they repair?
The bottleneck is probably validating and deploying the fix, which requires coordination.
If I sell weapons to both sides of a conflict, can I become rich?
Ask anyone selling AI hardware recently!
...says yet another company hell-bent on integrating it into every facet of our lives. This reads like a celebration, if you ask me.
Can Google please use AI to find bugs then?
Software is in such a state now. Gmail is full of bugs around sharing attachments, to the point that I have to tell my dad to turn his phone off and on again in order to attach a document.
https://secgemini.google/
https://projectzero.google/2024/10/from-naptime-to-big-sleep...
https://deepmind.google/blog/introducing-codemender-an-ai-ag...
Those are all for security vulnerabilities; OP is talking about functionality bugs.
It's probably the AI overuse introducing many of those bugs in the first place...
The detail that matters here is buried in the framing: Google says this is the first confirmed case of AI-assisted zero-day discovery by a criminal actor, not a nation-state.

Nation-states have had internal AI-augmented vuln research pipelines for years. The shift is the democratization. Zero-days used to require a team with rare expertise and months of work, which kept the supply constrained. If a criminal group can now compress that cycle with a commodity AI model, the supply-side constraint on zero-days breaks.

Google notably declined to name the AI platform used, and confirmed it wasn't Gemini. That detail alone will generate more questions than the report answers.

The practical implication isn't that AI finds bugs faster. It's that the population of actors capable of doing original vuln research just expanded significantly. Patching cadence doesn't scale with that.
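The supply-side point can be made concrete with a toy model (every number here is invented): discoveries scale with the number of capable actors, while patch validation and deployment capacity is roughly fixed, so past a threshold the backlog of unpatched zero-days grows without bound.

```python
# Toy model (all numbers invented): zero-day finds per month scale with the
# number of capable actors; patching capacity per month is roughly fixed.
def unpatched_backlog(actors: int, finds_per_actor: float,
                      patch_capacity: int, months: int) -> int:
    backlog = 0
    for _ in range(months):
        backlog += int(actors * finds_per_actor)       # new discoveries
        backlog = max(0, backlog - patch_capacity)     # fixes shipped
    return backlog

# Scarce expertise: defenders keep up.
print(unpatched_backlog(actors=10, finds_per_actor=0.5, patch_capacity=10, months=12))
# Democratized discovery: the backlog compounds every month.
print(unpatched_backlog(actors=200, finds_per_actor=0.5, patch_capacity=10, months=12))
```

The model ignores severity, exploit overlap, and everything else that matters in practice; it only illustrates why a fixed patching cadence can't absorb a step change in the number of people doing original vuln research.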
But in exchange we get to also waste vast energy and carbon while depleting job prospects for just about any college grad.
It's not all bad though. We also managed to turn the Information Superhighway of the 1990s into the Slop Wasteland of the 2020s.