
Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei until Friday (February 27) evening to grant the US military unrestricted access to its flagship AI model, Claude, or face serious consequences, according to a report from Axios.
The ultimatum, delivered during what the Axios report, citing officials, described as a tense meeting on Tuesday (February 24), underscores a widening rift over the limits of AI safeguards in national security operations. At stake is the Pentagon’s continued access to Claude, currently the only AI model embedded in its most sensitive covert systems.
Read also | Pentagon vs Anthropic: Hegseth requires full military access to Claude AI
The Friday deadline represents a critical inflection point in the relationship between Silicon Valley and the US national security state. The Pentagon appears determined to assert operational control over AI deployments, while Anthropic continues to defend safeguards designed to limit certain forms of military use.
Pentagon pushes Anthropic to loosen AI safeguards
According to Axios, Hegseth warned that the Pentagon could either cut ties with Anthropic and formally label the company a “supply chain risk” or invoke the Defense Production Act (DPA) to force the firm to adapt its model to military requirements.
“The only reason we’re still talking to these guys is because we need them and we need them now. The problem for these guys is that they’re so good,” a defense official told Axios before the meeting.
Read also | Pentagon threatens to end Anthropic work in dispute over AI terms
The warning represents one of the most direct confrontations between Washington and a private AI developer over the permissible scope of military use of artificial intelligence. While Anthropic has signaled a willingness to modify its use policy for defensive applications, it has refused to allow Claude to be deployed for mass surveillance of Americans or for unmanned weapon systems.
Classified systems and operational dependence on Claude AI
Claude’s integration into classified Pentagon systems has created a strategic dependency that complicates any threat to end the relationship. This model is said to be used in both highly sensitive operational contexts and a wide variety of bureaucratic military functions.
One source familiar with the discussions said that Claude currently appears to be leading competing models in several applications relevant to military planning, including offensive cyber capabilities.
Read also | Anthropic CEO Dario Amodei says AI could surpass humans in the physical world
The Pentagon is said to be accelerating talks with OpenAI and Google about moving their models, already used in unclassified environments, into classified ones. Gemini has emerged as a potential alternative, although such an arrangement would require Google to allow the Pentagon to use its system for “all lawful purposes”, terms that Anthropic has rejected.
Elon Musk’s xAI recently secured a contract to introduce Grok into classified settings, but it remains unclear whether the system could fully replace Claude’s current capabilities.
Defense Production Act: a rare and adversarial tool
The Defense Production Act gives the president the power to compel private companies to accept and prioritize contracts deemed necessary for national defense. It was notably used during the Covid-19 pandemic to expand the production of vaccines and ventilators.
Read also | Who is Dario Amodei? Did you know that the CEO of Anthropic was a former employee of OpenAI?
However, applying the law coercively against a technology company over AI safeguards would be an unusual and adversarial move. A senior defense official suggested the goal would be to force Anthropic to adapt its model to the Pentagon’s requirements without additional guardrails.
According to one defense consultant cited in the report, Anthropic could challenge such a measure in court, arguing that its product is specialized software built to order for sensitive government use, rather than a commercially available commodity subject to DPA compulsion.
Dispute over Venezuela operation deepens the friction
Tensions were further fueled by the controversy surrounding Claude’s alleged use during the Venezuelan operation conducted through Anthropic’s partnership with Palantir.
Hegseth was reportedly referring to the Pentagon’s claim that Anthropic raised concerns with Palantir about deploying the model during the Maduro raid. Amodei denied that Anthropic had raised any such concerns or even broached the subject with Palantir beyond standard operational conversations.
Read also | What is COBOL? How the Anthropic Claude blog wiped $30 billion from IBM
He reiterated that the company’s red lines have never prevented the Pentagon from doing its job or posed a problem for anyone operating in the field.
Sources differed in their characterization of Tuesday’s meeting. A senior defense official described it as “not warm and fuzzy at all”. Another source said it remained “cordial”, with no raised voices, and that Hegseth praised Claude directly to Amodei.
Hegseth made it clear that he would not allow any private company to dictate the terms under which the Pentagon makes operational decisions, or to object to individual use cases.
The supply chain risk designation looms
If the Pentagon were to terminate its contract and designate Anthropic as a supply chain risk, the ramifications would extend beyond the company itself. Other defense contractors would likely have to certify that Claude is not part of their workflows, a difficult task given how deeply the model is integrated across systems.
Read also | Anthropic and OpenAI are the new darlings of Indian IT
Anthropic maintained a conciliatory tone after the meeting.
“During the meeting, Dario expressed appreciation for the department’s work and thanked the Secretary for his service,” an Anthropic spokesperson said.
“We have continued to have good faith conversations about our usage policy to ensure that Anthropic can continue to support the government’s national security mission consistent with what our models can reliably and responsibly do.”





