U.S. Defence Secretary Pete Hegseth has warned AI firm Anthropic to drop restrictions on its Claude model for defence applications or face removal from Pentagon supply chains.
The threat came at a Pentagon meeting on Tuesday that Hegseth had demanded with Anthropic chief executive Dario Amodei, sources told the BBC. Anthropic has until Friday evening to comply, a senior official said.
“We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do,” Anthropic stated.
During the talks, described as cordial, Amodei outlined red lines, including bans on using AI for autonomous strikes or domestic surveillance. The Pentagon insists the dispute concerns general access to the model, not those specific issues. If Anthropic does not comply, officials said, the Pentagon could invoke the Defense Production Act to compel usage and designate the company a supply-chain risk.
Amodei “expressed appreciation for the Department’s work and thanked the Secretary for his service,” a spokesperson noted.

Anthropic won a $200 million Pentagon contract last summer, alongside similar awards to Google, OpenAI, and xAI.
Anthropic, known for its emphasis on ethical AI, has acknowledged in its own reports that hackers "weaponised" Claude for cyberattacks. Claude was also used, via Palantir's platform, in a U.S. operation that captured former Venezuelan President Nicolás Maduro.
“They need to get to a resolution,” said Emelia Probasco of Georgetown University’s Center for Security and Emerging Technology. “In my opinion, we should be giving the people we ask to serve every possible advantage. We owe it to them to figure this out.”