
AI company Anthropic on Thursday rejected the Pentagon’s latest offer to settle a dispute over the terms Trump’s War Department demanded for continuing to work with the government. Anthropic said it would not give the US Department of Defense unlimited use of its AI model Claude despite threats from the Pentagon.
“These threats do not change our position: we cannot in good conscience comply with their request,” Anthropic CEO Dario Amodei said in a statement.
This confrontation effectively threatens the company’s long-term relationship with the government.
The dispute between the Pentagon and Anthropic stems from the AI startup’s refusal to remove certain safeguards, which would allow the US military to use its models for autonomous targeted weapons and for mass surveillance inside the United States.
“To our knowledge, these two exemptions have not yet been an obstacle to accelerating the adoption and use of our models within our armed forces,” Amodei argued in a statement.
Anthropic’s announcement comes just one day ahead of a deadline given by the Pentagon and Defense Secretary Pete Hegseth, who Amodei met with earlier this week.
The Department of Defense has given Anthropic an ultimatum: agree to unconditional military use of its technology, even if doing so violates the company’s ethical standards, or be forced to comply under extraordinary federal powers.
Why does Anthropic refuse to give in to the Pentagon’s demands?
Anthropic, which is backed by Amazon and Google, has a contract with the US Department of Defense worth up to $200 million. However, Amodei said Thursday that his company will draw an ethical line on its use for mass surveillance of US citizens and fully autonomous weapons, even if it means losing the contract.
The department said it will only contract with AI companies that allow “any lawful use” and remove safeguards, Amodei said in a statement. “The use of these systems for mass domestic surveillance is incompatible with democratic values.”
He said leading artificial intelligence systems are not yet reliable enough to be trusted with the ability to launch lethal weapons without any human intervention.
“We will not knowingly provide a product that puts American warfighters and civilians at risk,” Amodei said.
Anthropic vs Pentagon
After a meeting with Anthropic earlier this week, the Pentagon issued a stark ultimatum: agree to unrestricted military use of its technology by 17:01 (22:01 GMT) Friday or be forced to comply under the Defense Production Act.
Earlier Thursday, Pentagon spokesman Sean Parnell said on X that the department is not interested in using AI to conduct mass surveillance of Americans, nor does it want to use AI to develop autonomous weapons that operate without human involvement.
“What we’re asking is this: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell said.
The Pentagon also threatened to designate Anthropic as a supply chain risk, a designation usually reserved for firms from hostile countries, which could seriously damage the company’s ability to work with the US government as well as its reputation.
However, Anthropic refused to budge from its position. “It is the department’s prerogative to select contractors that best match their vision,” Amodei said in a statement.