Anthropic Loses Bid To Block Pentagon AI Risk Label


The US Court of Appeals for the District of Columbia Circuit has rejected AI company Anthropic’s bid to pause the US Defense Department’s designation of the firm as a national security supply chain risk. 

The three-judge panel denied the emergency motion for a stay on Wednesday, ruling that the government’s interest in controlling how it secures AI technology during active military conflict outweighed any financial or reputational harm Anthropic may suffer from the label.

The decision means that part of the Defense Department’s official designation of Anthropic’s products as a “supply-chain risk to national security” remains in place. 

The designation, which has never before been applied to an American company, also blocks contractors who work with the Pentagon from using Anthropic’s AI models. It could set a chilling precedent for other tech companies that do not comply with government demands. 

“In our view, the equitable balance here cuts in favor of the government,” wrote the three-judge panel.

“On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict.” 

US Court of Appeals order, case No. 26-01049. Source: CourtListener

Challenging the label in two courts 

The dispute stems from a July 2025 deal between the AI firm and the Pentagon that would have made Anthropic’s AI model Claude the first large language model approved for use on classified networks. 

However, negotiations collapsed in February, with the government seeking to renegotiate and insisting that Anthropic allow military use of Claude without restrictions. Anthropic maintained that its technology should not be used for lethal autonomous weapons and mass domestic surveillance of Americans.

US President Donald Trump ordered all federal agencies to stop using Anthropic products in late February, stating that the company had made a “disastrous mistake trying to strong-arm the Department of War.” 

Anthropic sued the Trump administration in March in what it termed an “unlawful campaign of retaliation.” 

In late March, the District Court for the Northern District of California ordered a preliminary injunction against the Pentagon over the designation and temporarily halted Trump’s directive, branding it “Orwellian.” 


However, because of the way federal procurement law is written, Anthropic had to challenge the designation on two separate legal tracks — in a California district court on constitutional grounds and directly at the D.C. Circuit under the specific statute that authorized the designation.

The ruling acknowledged that Anthropic will “likely suffer some degree of irreparable harm absent a stay,” and stated that “substantial expedition is warranted.”

Acting US Attorney General Todd Blanche said on X that it was a “resounding victory for military readiness.”

“Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company.”
