
The deepening dispute over how artificial intelligence should be used in modern warfare is now pushing the Pentagon toward an extraordinary step: treating America’s leading AI firm as a security liability.
Defense Secretary Pete Hegseth is “close” to severing business ties with Anthropic and designating the company a “supply chain risk,” a senior Pentagon official told Axios — a move that would effectively penalize not only Anthropic but any supplier that relies on its technology.
The same official described the potential fallout bluntly to Axios: “It’s going to be a huge pain in the ass to unravel and we’re going to make sure they pay the price for forcing us like this.”
A punishment usually reserved for adversaries, now aimed at an American AI company
The “supply chain risk” designation the Pentagon is threatening to apply to Anthropic is rarely used against domestic firms and is more often associated with entities linked to hostile foreign powers. Applying it here would represent a sharp escalation of the Trump administration’s push to make artificial intelligence systems used by the military available without restrictive conditions.
It would also send an unmistakable signal to the broader tech sector: in the Defense Department’s view, national security partnerships require adherence to military operational standards — not ethical red lines drawn by private companies.
Claude’s unique role within classified networks makes the standoff unusually difficult
The dispute is complicated by a simple operational fact. Anthropic’s Claude is currently the only frontier AI model available on the US military’s classified systems, giving the company a strategic foothold no competitor has yet matched.
Pentagon officials privately describe Claude as extremely effective in specialized government workflows. But the system’s presence on sensitive networks has also bred frustration in defense circles, where officials say Anthropic’s usage restrictions are out of step with the realities of military planning.
As Axios reported Friday, Claude was used during the January raid on Maduro — a detail that illustrates how quickly generative AI has moved from experimentation to real-world operations.
The underlying conflict: whether the Pentagon must adopt AI safeguards as a condition of access
At the heart of the confrontation lies a philosophical and legal dispute: whether an AI developer can dictate how its model is used once it becomes part of government systems.
Led by CEO Dario Amodei, Anthropic is adamant about restricting certain categories of use. The company is reportedly ready to relax its terms, but wants assurances that Claude will not be used for mass surveillance of Americans or to help develop weapons capable of firing without direct human involvement.
Pentagon officials say such restrictions are too rigid to be feasible. They argue that modern military operations involve a myriad of ambiguous scenarios, and that attempting to pre-define boundaries in contract language could limit lawful operations in unpredictable ways.
In negotiations not only with Anthropic but also with OpenAI, Google and xAI, Pentagon negotiators have insisted on the right to use AI tools for “all lawful purposes.”
The Pentagon frames its review as a matter of combat readiness
The administration’s public language has grown increasingly combative, portraying the dispute as a matter of military readiness rather than corporate governance.
Pentagon chief spokesman Sean Parnell told Axios, “The War Department’s relationship with Anthropic is under review. Our nation requires our partners to be willing to help our warfighters win any fight. Ultimately, it’s about our troops and the safety of the American people.”
The statement suggests that the Pentagon sees the dispute not as a technical disagreement, but as a question of whether the private AI firm is willing to fully align with defense goals.
Anthropic insists discussions remain constructive as it defends its red lines
Anthropic has tried to position itself as cooperative, while still maintaining that some limitations are necessary given the power of modern AI systems.
An Anthropic spokesperson told Axios: “We are in good faith having productive conversations with the DoW about how to continue this work and address these new and complex issues.”
The spokesperson also reiterated the company’s commitment to national security work, emphasizing that Claude was the first AI model deployed in classified networks — a claim that has become central to Anthropic’s position in Washington.
The shadow problem: surveillance law was not made for generative AI
The confrontation also reveals a regulatory vacuum. Current US surveillance laws were designed for an earlier era of data processing, not for artificial intelligence systems capable of extracting patterns, building profiles and drawing conclusions at scale.
The Pentagon already has the authority to collect large volumes of personal information, from social media activity to concealed carry permits. Critics have warned that AI could augment these capabilities in ways that make oversight more difficult while increasing the risk that automated analysis will target civilians.
Anthropic’s position reflects these anxieties. But Pentagon officials say legal permissibility should be the deciding standard — and that the Defense Department can’t accept contract terms that preempt lawful missions.
Significant ripple effect for suppliers if Anthropic is blacklisted
The most disruptive aspect of the threatened “supply chain risk” designation is not what it would do to Anthropic directly, but what it would require of others.
If implemented, it would require many companies that supply the Pentagon to certify that they are not using Claude internally. Given Anthropic’s commercial reach — the company said eight of the top 10 U.S. firms use Claude — such a requirement could force extensive internal audits, rapid tool changes and costly compliance efforts across corporate America.
A relatively small order, but a big political and strategic test
The Pentagon contract at risk is worth up to $200 million. Compared with Anthropic’s annual revenue of $14 billion, that amount is not existential.
Still, officials familiar with the matter say the dispute is not fundamentally about money. At issue is authority: whether the military will accept AI safeguards set by private labs, or whether labs will be forced to accept the DoD’s interpretation of lawful use.
Pentagon may struggle to replace Claude despite claims rivals are ‘close behind’
A sudden break could also prove operationally disruptive. A senior administration official told Axios that competing models are “close behind” in specialized government applications.
That gap could complicate any attempt to quickly replace Claude, especially in classified environments where technical and bureaucratic barriers to fielding new systems are high.
Signal to Silicon Valley: Pentagon intends to dictate terms
The Pentagon’s tough approach toward Anthropic appears designed to shape its dealings with other AI developers as well.
Pentagon officials are simultaneously negotiating with OpenAI, Google and xAI, all of which have agreed to remove safeguards for use in unclassified military systems. However, none has yet reached Claude’s level of integration in classified networks.
A senior administration official said the Pentagon expects other firms to adopt the “all lawful use” standard. Still, a source familiar with the discussions said much remains undecided.