
The Pentagon has reportedly asked Boeing and Lockheed Martin to assess their reliance on Anthropic’s AI model, Claude — an early move that could pave the way for the firm to be formally labeled a “supply chain risk,” Axios reports.
Such a designation is usually reserved in the US for companies associated with enemy states. Applying it to a leading American technology company — especially one whose software is embedded in secret military systems — would represent an extraordinary departure from precedent.
Pentagon Probes Defense Contractors' Exposure to Claude
On Wednesday, the Pentagon contacted Boeing and Lockheed Martin for an analysis of their exposure to Anthropic and its AI model, Claude, according to individuals familiar with the discussions.
A Lockheed Martin spokesman confirmed the company had been approached by the Department of Defense about reviewing its exposure to and reliance on Anthropic ahead of any "statement of potential supply chain risk." Boeing did not immediately respond to requests for comment.
The Pentagon plans to expand the investigation to other major defense contractors — the so-called "traditional primes" responsible for supplying fighter jets, missile systems and other essential military hardware — to see if and how they incorporate Claude into their workflows.
While such outreach does not in itself break contractual ties, it signals that the department is laying the groundwork for more stringent measures should negotiations with Anthropic break down.
Secret systems and strategic operations are at stake
Claude currently occupies a unique position within the US military’s AI architecture: it is the only AI model operating inside classified systems. Thanks to Anthropic’s partnership with Palantir, the system was deployed during the operation to capture Venezuela’s Nicolás Maduro and is internally considered capable of supporting future contingencies, including a possible military campaign involving Iran.
Officials are said to be impressed with Claude’s performance in a variety of military use cases. Still, frustration has grown over Anthropic’s refusal to loosen its safeguards to allow the model to be used for what the Pentagon describes as “all lawful purposes.”
Anthropic has maintained strict restrictions, notably banning the use of Claude for mass surveillance of Americans or for developing weapons that operate without human involvement. Defense officials say obtaining approval for individual use cases is operationally impractical.
A tense meeting and a Friday deadline
The standoff intensified during a meeting Tuesday between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei. During the session, Hegseth set a deadline: 5:01 p.m. on Friday.
If Anthropic refuses to change its policies, the administration has warned that it could invoke the Defense Production Act (DPA) to force the company to adapt its model to military requirements, or alternatively declare Anthropic a supply chain risk.
Invoking the DPA could allow the military to retain access to Claude while enforcing compliance, although such a move would almost certainly trigger a legal challenge.
The Pentagon said it was “prepared to implement whatever decision the Secretary makes on Anthropic on Friday.”
Referring to a potential supply chain risk designation earlier this week, a senior defense official told Axios: “It’s going to be a huge pain in the ass and we’re going to make sure they pay the price for forcing our hand like that.”
Supply chain risk: A rare and serious measure
The label “supply chain risk” is generally associated with companies that are perceived as threats to national security due to foreign influence or hostile state ties. Among the most prominent examples is the Chinese telecommunications giant Huawei.
Applying such a label to a domestic AI company would be unprecedented and could have far-reaching commercial implications. Contractors working with the federal government might be forced to remove Claude from sensitive systems, potentially disrupting projects that already depend on the model.
At present, the Pentagon's request for an exposure assessment is more a preliminary step than an immediate directive to cut ties. Some observers see the maneuver as a pressure tactic aimed at forcing Anthropic into concessions.
Anthropic's Position: Safety and National Security
Anthropic has publicly characterized the discussions as constructive, if firm.
A company spokesperson described the meeting between Amodei and Hegseth as a continuation of “good faith conversations about our usage policy to ensure that Anthropic can continue to support the government’s national security mission consistent with what our models can reliably and responsibly do.”
The spokesperson declined to comment on the prospect of a supply chain risk designation.
Anthropic executives have repeatedly expressed concern about the societal dangers of advanced artificial intelligence, including autonomous weapons and domestic surveillance. Those principles are now at the heart of a confrontation with the Pentagon at a time when military adoption of AI systems is accelerating worldwide.
Competitive Landscape: Google, OpenAI and xAI Enter the Picture
The dispute is taking place against a rapidly changing competitive backdrop.
Elon Musk’s xAI recently secured a deal to move its systems into classified military environments under an “all lawful use” standard — the very framework that Anthropic has resisted.
Google and OpenAI, whose AI models are already deployed in unclassified government systems, are in talks to expand their presence into classified domains. One individual familiar with the discussions characterized Claude as the most capable model in several military applications, but identified Google’s Gemini as a credible alternative.
The Pentagon has indicated that Google and OpenAI would similarly be expected to loosen safeguards if they are to secure classified contracts.





