Artificial intelligence company Anthropic filed two landmark federal lawsuits against the US government on Monday, alleging unlawful retaliation after it refused to allow its Claude AI model to be used for lethal autonomous weapons and mass surveillance of American citizens.
The dispute escalated rapidly after the Pentagon demanded in February that Anthropic remove all safety restrictions on Claude for "all lawful uses." Anthropic agreed to most terms but drew two firm lines: Claude would not be deployed to control autonomous weapons without human oversight, nor would it be used to conduct large-scale surveillance of US citizens.
Defense Secretary Pete Hegseth subsequently labelled Anthropic a "supply chain risk" — a designation typically reserved for companies with ties to China — effectively barring the company from all federal government contracts. President Trump separately ordered all government personnel to stop using Anthropic's products.
"These actions are unprecedented and unlawful," Anthropic said in its court filing. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech."
The White House dismissed the lawsuit, saying its goal was ensuring the military operates "not under any woke AI company's terms of service." Anthropic filed its suits in the US District Court for the Northern District of California and the US Court of Appeals for the DC Circuit.