
When Anthropic told the world this month that it had created an artificial intelligence model so powerful it was too dangerous to be widely released, the company named 11 organizations as partners to help build a defense.
They were all from the United States.
Within two weeks, the model, dubbed Mythos, had set off a global scramble unlike anything seen before in the AI era. Mythos, which Anthropic claimed was exceptionally adept at finding and exploiting hidden flaws in the software that runs the world’s banks, power grids and governments, became a geopolitical bargaining chip, and one held by corporate America.
World leaders scrambled to figure out the extent of the security risks and how to fix them, with Anthropic sharing Mythos only with Britain outside the United States. The governor of the Bank of England has publicly warned that Anthropic may have found a way to “open up a whole world of cyber risk”. The European Central Bank has begun quietly questioning banks about their defenses. Canada’s finance minister compared the threat to closing the Strait of Hormuz.
For US rivals such as China and Russia, Mythos highlighted the security implications of falling behind in the AI race. One pro-Kremlin Russian outlet called the model “worse than a nuclear bomb.”
The responses illustrated a reality that AI researchers have long warned about, mostly in theoretical terms: Whoever leads in creating the most powerful AI models will gain huge geopolitical advantages. Major AI breakthroughs are starting to act less like product launches and more like weapons tests, and most nations want to understand how the technology works and what protections are needed.
As the underlying AI models become more capable, the questions they raise become more geopolitical, said Eduardo Levy Yeyati, a former chief economist at Argentina’s central bank and regional adviser for growth and artificial intelligence at the Inter-American Development Bank. “I would take this episode as a political wake-up call. Governments can no longer ignore this problem.”
Even the US government, embroiled in a conflict with Anthropic over the use of artificial intelligence in warfare, took notice of Mythos. On Friday, Dario Amodei, Anthropic’s chief executive, met with White House officials after some in the Trump administration noted the new model’s potential to wreak havoc on computer systems.
Anthropic, which is based in San Francisco, told The New York Times that it keeps access to Mythos limited because of safety and security concerns. It has focused on sharing the model with more than 40 organizations that provide technologies used to maintain critical global infrastructure, such as the internet or power grids. Anthropic has named 11 organizations, including Amazon, Apple and Microsoft, that have pledged to help develop security patches for vulnerabilities identified by the model.
The company said it has no immediate timeline for widening access, but that it will work with the US government and industry partners to determine next steps. It said it has been bombarded with calls from governments, companies and other organizations seeking access and information, but that those organizations may have varying levels of expertise to safely evaluate such a powerful AI model.
Anthropic added that it expects other groups to release AI models with similar cyber capabilities more broadly in as little as 18 months, giving organizations limited time to make the necessary security fixes.
On Tuesday, Anthropic said it was investigating a report that unauthorized users had gained access to a version of Mythos.
The scramble for Mythos comes at a time when international cooperation on artificial intelligence is minimal. Governments view one another with suspicion as corporations race to outdo rivals. There is no equivalent of the Nuclear Nonproliferation Treaty, no shared inspections and no agreed-upon rules for dealing with something like Mythos.
When Anthropic announced the model, many experts praised the company’s caution in limiting who could try the model, but expressed concern about the lack of international coordination to address the risk.
Britain was the only other nation to gain access. Its AI Security Institute, a government-backed organization, tested Mythos and published an independent assessment last week confirming that it could carry out complex cyberattacks that no previous AI model had completed.
“This represents a step forward in the cyber capabilities of artificial intelligence,” UK AI Secretary Kanishka Narayan said on social media last week, saying the country was taking steps to protect “critical national infrastructure”.
Others received less information. The European Commission, the executive arm of the 27-member European Union, has met with Anthropic at least three times since the release of Mythos, an EU official said. However, the company did not provide access to the model because the two sides did not agree on how to share it with the commission, the official said.
In a statement, the commission said it was “assessing the possible implications” of Mythos, which “displays unprecedented cyber capabilities.”
Claudia Plattner, president of Germany’s cybersecurity agency, known as the BSI, said she had not been given access to Mythos but had recently met with Anthropic staff in San Francisco, who gave her “meaningful insight” into how it works. The capabilities point to a “paradigm shift in the nature of cyber threats,” Ms. Plattner said in a statement.
Among America’s rivals, the response has been more muted. Despite Anthropic’s recent run-in with the Trump administration, Mr. Amodei has made clear that he believes AI should be used to defend the United States and other democracies and to defeat autocratic adversaries.
Neither Beijing nor Moscow has made a major public statement about Mythos. But according to analysts who study China’s tech community, the country’s scientists and wider AI community have been watching closely. Many of China’s banks, energy companies and government agencies run the same software in which Mythos found vulnerabilities, yet for now they have no seat at the table.
“I think it’s a second wake-up call for China after ChatGPT,” said Matt Sheehan, senior fellow at the Carnegie Endowment for International Peace. He added that the U.S. policy of preventing China from acquiring the most sophisticated semiconductors to build advanced artificial intelligence systems is helping to extend the U.S. lead.
Some AI researchers in China have privately expressed concern that the country could fall further behind and lose the advantages that come with being first to build a foundational model, said Jeffrey Ding, a political science professor at George Washington University.
Liu Pengyu, a spokesman for the Chinese embassy in Washington, said China does not know the specifics of Mythos, but supports a peaceful, secure and open cyberspace.
Mythos is the latest sign of a growing global divide in AI. Nations without powerful computing infrastructure and AI models risk being left dependent on companies like Anthropic, Google and OpenAI, with little control over how their products are designed and protected, Mr. Levy Yeyati said.
“The idea that access to frontier AI is something that a company can unilaterally limit using criteria that are opaque and inaccessible should be a real concern,” he said.
