
The Pentagon and Anthropic are locked in an unusual standoff after Defense Secretary Pete Hegseth warned that the AI company would be removed from his agency's supply chain if it did not meet certain requirements.
The Pentagon has reportedly already taken a first step toward blacklisting Anthropic, asking its defense contractors to assess their reliance on the AI company.
The Department of Defense has been in a months-long dispute with Anthropic over the use of Claude AI. Reuters reported that Anthropic has no intention of easing its restrictions on military applications of its models.
A meeting between Hegseth and Anthropic CEO Dario Amodei has already taken place, after which talks are continuing.
During the meeting, Hegseth said that if Anthropic did not comply, the Pentagon would take action against it, with options including designating it a supply chain risk or invoking a law that would force Anthropic to change its rules, Reuters reported.
“We have continued to have good faith conversations about our usage policy to ensure that Anthropic can continue to support the government’s national security mission consistent with what our models can reliably and responsibly do,” Anthropic said in a statement after the meeting.
Anthropic has until 5 p.m. Friday to respond, according to the report.
But why are Anthropic and the Pentagon fighting? Here's what you need to know.
Why the US military is fighting Anthropic over how to use Claude AI
At the heart of the issue is the question of who controls how Claude AI is used: the Pentagon or Anthropic.
According to a CBS report, the standoff began when the U.S. military used Anthropic’s Claude AI last month during an operation to capture former Venezuelan President Nicolás Maduro.
A spokesperson for Anthropic said in a statement that the AI startup “has not discussed the use of Claude for specific operations with the War Department.”
Citing people familiar with the matter, CBS reported that Anthropic has repeatedly asked the Pentagon to adhere to certain safeguards, including limiting the use of Claude for mass surveillance of US citizens.
The Pentagon has been pushing major AI companies, including Anthropic and OpenAI, to make their AI tools available on classified networks without many of the standard restrictions companies place on users, according to Reuters.
However, Anthropic does not want the US military to use Claude “for final targeting decisions in military operations without any human involvement,” according to CBS.
Earlier this month, Mrinank Sharma, a senior security researcher, said he was leaving Anthropic. "I am constantly reckoning with our situation," he wrote in a letter to colleagues posted on X.
“The world is in danger,” he wrote. “And not just from artificial intelligence or biological weapons, but from a whole series of interconnected crises that are unfolding at this very moment.”
The ultimatum from the Pentagon marks an escalation in a growing dispute between the Defense Department and the AI startup over the company's insistence on restricting how its Claude AI tool may be used. If carried out, the Pentagon's threat would jeopardize up to $200 million in work Anthropic has agreed to do for the military.
Key takeaways
- The Pentagon is pushing AI firms to give unrestricted access to their tools for military use.
- Anthropic prioritizes ethical guidelines in AI applications, especially when it comes to military operations.
- The outcome of this dispute could have significant implications for the use of AI in national security and military ethics.





