
The Pentagon is at odds with artificial intelligence developer Anthropic over safeguards that would prevent the government from deploying its technology to autonomously aim weapons and conduct US domestic surveillance, three people familiar with the matter told Reuters.
The discussions represent an early test of whether Silicon Valley, now in Washington's favor after years of tension, can influence how U.S. military and intelligence officials deploy increasingly powerful AI on the battlefield.
After extensive negotiations over a contract worth up to $200 million, the U.S. Department of Defense and Anthropic are at an impasse, six people familiar with the matter said on condition of anonymity.
The company’s stance on how its AI tools can be used has heightened disagreements between it and the Trump administration, the details of which have not been previously disclosed.
A spokesman for the Defense Department, which the Trump administration renamed the War Department, did not immediately respond to requests for comment.
Anthropic said its artificial intelligence is “used extensively for national security missions by the US government, and we are having productive discussions with the Department of War about ways to continue this work.”
The rift, which could threaten Anthropic’s Pentagon business, comes at a sensitive time for the company.
The San Francisco-based startup is preparing for a possible public offering. It has also spent heavily courting US national security customers and sought an active role in shaping government AI policy.
Anthropic is one of the few major AI developers awarded contracts by the Pentagon last year. Others were Alphabet’s Google, Elon Musk’s xAI and OpenAI.
Targeting weapons
In discussions with government officials, Anthropic representatives expressed concern that the company’s tools could be used to spy on Americans or to help aim weapons without sufficient human oversight, sources told Reuters.
The Pentagon bristled at the company’s restrictions. Citing the department’s Jan. 9 memo on artificial intelligence strategy, Pentagon officials said they should be able to deploy commercial AI technology regardless of companies’ usage policies as long as the deployment complies with US law, the sources said.
Still, Pentagon officials would likely need Anthropic’s cooperation moving forward. Its models are trained to avoid actions that could lead to harm, and Anthropic employees would be the ones to rebuild its AI for the Pentagon, some sources said.
Anthropic’s caution has put it at odds with the Trump administration before, Semafor reported.
In an essay on his personal blog this week, Anthropic CEO Dario Amodei warned that AI should support national defense “in all ways except those that would make us more like our autocratic adversaries.”
Amodei, an Anthropic co-founder, criticized the fatal shootings of US citizens protesting immigration actions in Minneapolis, describing them as a “horror” in a post on X.
The deaths have heightened concerns among some in Silicon Valley about the government’s use of their tools for potential violence.
