
A criminal hacking group recently attempted to launch a large-scale cyberattack that appeared to depend on artificial intelligence to detect a previously unknown bug, Google said in research published Monday, highlighting the potential threat AI poses to digital security.
For years, security experts have worried that malicious hackers could eventually rely on artificial intelligence models to identify undetected flaws in computer code and launch crippling attacks that are difficult to defend against. This fear has been largely theoretical until now.
“We believe the actor likely used an AI model to aid in the discovery and weaponization of this vulnerability,” the report said.
The tech giant did not say exactly when the botched attack took place, whom it targeted or which AI platform the hackers used, though it added that it did not believe the model involved was its own Gemini chatbot.
Google’s research comes as the tech industry and governments, including the Trump administration, are rethinking how and whether to police advanced versions of AI, in large part because of growing concerns about what they mean for cybersecurity.
Bugs like the one identified by Google and the hacking group are known as “zero-day vulnerabilities” – security holes that are unknown to software developers. They were once considered so rare and powerful that they could fetch millions of dollars on the black markets used to sell hacking tools.
But Anthropic’s new Mythos model, announced last month, appears so adept at finding such holes that the company has shared it only with a limited number of companies and government agencies in the United States and Britain. When Mythos was announced, Anthropic said the model had identified thousands of zero-day vulnerabilities “in every major operating system and every major web browser,” including many that are decades old.
AI models are also rapidly getting better at hacking. Late last year, Anthropic said state-backed Chinese hackers had used its technology in an attempt to infiltrate the computer systems of about 30 companies and government agencies around the world. It was the first reported case of a cyberattack in which artificial intelligence gathered sensitive information with limited help from human operators.
The zero-day flaw, discovered by Google’s Threat Intelligence Group over the past few months, was exploited by “prominent cybercriminal threat actors” in a script written in the Python programming language. The flaw would allow hackers to bypass two-factor authentication on “a popular open-source, web-based system management tool,” although attackers would also need valid credentials, such as usernames and passwords, to succeed, the company said.
Google declined to identify the management tool, but said it notified the software maker quickly enough to allow for a fix before the attack could cause damage. It also declined to identify the hackers.
Google and independent security researchers said the attempted attack was the first known example of hackers maliciously exploiting a zero-day flaw that was found largely with the help of artificial intelligence.
“It’s a taste of what’s to come,” John Hultquist, principal analyst at Google’s Threat Intelligence Group, said in an interview. “We believe this is the tip of the iceberg. This problem is probably much bigger; this is just the first tangible evidence we can see.”
Rob Joyce, former director of cybersecurity for the National Security Agency, said it can be difficult to tell whether computer code was written by a human or a machine, adding that “code created by AI will not announce itself.”
But Google’s clues linking the hack to AI — which included excessive explanatory text and other quirks that human coders would have had no reason to include — seemed convincing, said Mr. Joyce, who reviewed the findings before they were published. “It’s the closest thing to a crime scene fingerprint yet,” he said.
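To give a sense of what such a tell can look like, consider a short, hypothetical Python snippet. It is invented for illustration and is not taken from the actual exploit; the quirk it demonstrates is the one the researchers describe, comments that laboriously restate operations any programmer would recognize at a glance:

    # Import the os module so the script can inspect the file system.
    import os

    # Define a function that checks whether a configuration file exists.
    def config_exists(path):
        # Call os.path.isfile to verify the path points to a regular file.
        # Return True if the file exists, otherwise return False.
        return os.path.isfile(path)

    # Call the function with an example path and print the result.
    print(config_exists("/etc/example.conf"))

A human author would rarely annotate such self-evident lines, since the comments add nothing a reader of the code could not see directly.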
Mr. Hultquist said Google had other indicators that strengthened its conclusion that the hacking code was written by AI, but he declined to discuss them.
The zero-day bug announced by Google could bolster international calls for a controlled release of the latest AI models so experts can fix problems first. The Trump administration is considering ideas that could include a formal government review process for new models, The New York Times reported last week.
Some experts believe that artificial intelligence will ultimately strengthen cybersecurity by enabling the production of error-free software code. But in the short term, they say, governments and companies must work together to limit the damage the models can do to the current Internet, which was created by imperfect human hands.
“These models will allow us to create the most secure code we’ve ever created,” Mr. Hultquist said. “That’s an absolute win for cybersecurity. The challenge is that we’ve just started this process and we have to contend with a world of code that already exists.”