The U.S. Department of Defense is considering severing ties with AI developer Anthropic after the firm resisted easing limits on military use of its models, Axios reported on Saturday, citing a government source who described frustration after extended negotiations.
Defense officials want access to tools from Anthropic, OpenAI, Google, and xAI for all lawful purposes, including weapons development, intelligence work, and battlefield tasks. The other firms appear more accommodating than Anthropic, which has held firm on limits around autonomous weapons and domestic surveillance.
An Anthropic spokesperson said the company had not discussed with the Pentagon the use of its AI model Claude for specific operations.

The spokesperson said conversations with the U.S. government so far had focused on a specific set of usage policy questions, including hard limits around fully autonomous weapons and mass domestic surveillance, none of which related to current operations.
The Pentagon did not immediately respond to Reuters' request for comment. The dispute intensified after Claude was used, through Anthropic's partnership with Palantir, in the operation that captured former Venezuelan leader Nicolás Maduro. A $200 million deal signed earlier has stalled over these ethical red lines.