Anthropic Rejects the Pentagon’s Demand That It Remove AI Safeguards

Anthropic is seeking to prevent its AI model Claude from being used for “mass domestic surveillance” and “fully autonomous weapons,” requests that the DOD has said are unworkable.

(Photo: Defense Secretary Pete Hegseth stands outside the Pentagon. Kevin Wolf/AP)

Artificial intelligence company Anthropic said Thursday that it would not agree to the Department of Defense’s request to allow its AI model to be used freely at the discretion of Pentagon leaders, a change that would require the firm to alter its current safeguards.

Anthropic is seeking to prevent its AI tools, including its model Claude, from being used for “mass domestic surveillance” and “fully autonomous weapons,” requests that the DOD has said are unworkable.

“We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner. However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Anthropic CEO Dario Amodei said in a Thursday statement defending the company’s decision.

Anthropic and the Pentagon have been holding negotiations for weeks over the issue. The Trump administration has threatened to invoke the Defense Production Act, which gives the White House authority to cite national defense concerns to compel a domestic company to produce goods or services at the government’s behest, or to declare the company a “supply chain risk.”

The designation, if approved, would forbid any DOD contractor from using the company’s software.

As of Wednesday, the Pentagon had already begun the process of designating Anthropic as a supply chain risk, according to Axios, which reported that representatives from the DOD asked aerospace giants Boeing and Lockheed Martin to document their reliance on Claude.

In his statement, Amodei said that even the most advanced AI models are not reliable enough to carry out the tasks intended by the Pentagon.

Amodei said that while semi-autonomous and even fully autonomous weapons have proven beneficial in war zones, “frontier AI systems are simply not reliable enough to power fully autonomous weapons.”

“They need to be deployed with proper guardrails, which don’t exist today,” Amodei said. “We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer.”

Amodei also argued that the government’s threats against his company have been “inherently contradictory.”

“One labels us a security risk; the other labels Claude as essential to national security,” Amodei said. “Regardless, these threats do not change our position: we cannot in good conscience accede to their request.”

In a post to X on Thursday, chief Pentagon spokesperson Sean Parnell said the department has “no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media.”

“Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell continued. “This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions.”

In his statement, Amodei touched on how AI could aid national security while warning that, if unrestrained, “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.”

“Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at a massive scale,” he wrote. “To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI.”