AI pioneer Anthropic has launched an unprecedented legal battle against the Trump administration after the Pentagon designated it a "supply chain risk", the first American company to receive the label, and barred its technology from federal use.
The move, triggered by the company’s refusal to lift restrictions on military applications of its Claude models, has sparked a fierce constitutional showdown over free speech and government power.
The dispute erupted when Defence Secretary Pete Hegseth demanded Anthropic scrap longstanding contract clauses prohibiting “lethal autonomous warfare” and mass surveillance of Americans.
Anthropic, which has supplied AI tools for classified defence work since 2024, sought a compromise through revised terms but faced abrupt retaliation. President Trump publicly denounced its leaders as "left wing nut jobs" and ordered all agencies to abandon the technology. Hegseth quickly formalised the ban, prohibiting contractors from using Claude, which is now integral to operations at Google, Meta, Amazon and Microsoft.
“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” Anthropic stated in its California federal court filing, which also targets Trump’s office, Secretaries Rubio and Lutnick, and 16 agencies including the rebranded Department of War. “No federal statute authorizes the actions taken here.”

White House spokeswoman Liz Huston dismissed Anthropic as “a radical left, woke company” dictating military policy. “Under the Trump Administration, our military will obey the United States Constitution—not any woke AI company’s terms of service,” she told the BBC.
Anthropic claims “irreparable” commercial damage and warns of a chilling effect on tech firms advocating AI safeguards. “Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term,” it argued, adding that its reputation and First Amendment rights face assault.
Remarkably, nearly 40 Google and OpenAI staff filed a court brief in support of Anthropic despite the companies' commercial rivalries. "As a group, we are diverse in our politics and philosophies, but we are united in the conviction that today's frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails," they wrote.
Legal expert Carl Tobias of the University of Richmond School of Law predicts a Supreme Court clash. “Anthropic may very well win in federal court, but this government is not shy about appealing,” he said. “It will probably go to the Supreme Court.”