U.S. Judge Blocks Pentagon Blacklist of AI Firm Anthropic in First Amendment Battle

California court rules government retaliated against AI company for opposing military use of chatbot technology.

A U.S. federal judge has temporarily blocked the Pentagon's controversial blacklisting of artificial intelligence company Anthropic, marking a significant victory for the firm in its high-stakes legal battle with the American military over AI safety concerns.

U.S. District Judge Rita Lin ruled Thursday that the government likely violated Anthropic's First Amendment rights when Secretary of War Pete Hegseth designated the company a national security supply-chain risk. The decision comes after Anthropic opposed allowing the military to use its Claude chatbot for surveillance or autonomous weapons systems.

"The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press," Lin wrote in her 43-page ruling. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation."

The temporary injunction will take effect in seven days, giving the administration time to appeal the decision.

Unprecedented Government Action

Anthropic's designation marked the first time a U.S. company has been publicly labelled a supply-chain risk under an obscure government procurement statute typically reserved for foreign entities that could sabotage military systems.

The company filed its lawsuit in California federal court on March 9, alleging that Hegseth overstepped his authority and violated both First and Fifth Amendment protections. Anthropic argued it was denied due process and faced retaliation for its public stance on AI safety.

The blacklisting blocks Anthropic from certain military contracts, with company executives estimating potential losses in the billions of dollars, along with significant reputational damage.

AI Safety Concerns at Centre of Dispute

Anthropic has consistently maintained that current AI models are not reliable enough for safe deployment in autonomous weapons systems. The company also opposes domestic surveillance applications, citing civil rights violations.

"AI models are not reliable enough to be safely used in autonomous weapons and we oppose domestic surveillance as a violation of rights," the company has stated.

The Pentagon countered that private companies should not constrain military operations, while asserting that it has no interest in the applications Anthropic opposes and would use the technology only within legal boundaries.

In court filings, the Justice Department argued that Anthropic's refusal to lift restrictions could create uncertainty for Pentagon operations and risk disabling military systems during critical missions.

Company Responds to Victory

Anthropic spokesperson Danielle Cohen welcomed the court's decision while emphasizing the company's willingness to work with government officials.

"While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," Cohen said in a statement.

The case highlights growing tensions between AI companies and government agencies over the appropriate use of artificial intelligence technology in military and surveillance applications. As AI capabilities continue to advance rapidly, questions about oversight, safety, and civil liberties remain at the forefront of policy debates.

The legal battle occurs against the backdrop of increasing global competition in AI development, with governments worldwide seeking to harness the technology for defence purposes while companies grapple with ethical considerations.

This article is based on reporting from CBC Business and Thomson Reuters.
