Judge temporarily blocks Pentagon's 'supply chain risk' designation for Anthropic

A federal judge has temporarily blocked the Pentagon from designating Anthropic a "supply-chain risk to national security," ruling that the Trump administration unlawfully tried to punish the AI company for publicly criticizing the government's position on using AI.

ABC News

"The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press," U.S. District Judge Rita Lin wrote in Thursday's order. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation."

The judge's order, which follows court proceedings earlier this week, is set to take effect in seven days, allowing the Trump administration to pursue an appeal.

Jonathan Raa/NurPhoto via Getty Images - PHOTO: The Anthropic AI logo is displayed on a mobile phone.

Lin said the efforts to restrict any government use of Anthropic's AI chatbot, Claude, "appear designed to punish Anthropic."

"These broad measures do not appear to be directed at the government's stated national security interests. If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic," Lin wrote.

Anthropic "has shown that these broad punitive measures were likely unlawful and that it is suffering irreparable harm from them," she continued.

Lin also rebuked the Trump administration for baselessly claiming that Anthropic might try to sabotage the military based on ideological issues.

"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," she wrote.

The order effectively restores the status quo prior to Defense Secretary Pete Hegseth's February directive to designate the company a "supply chain risk." Lin acknowledged that the Pentagon can still attempt to phase out Claude from national security applications using other lawful means.

Evelyn Hockstein/Reuters - PHOTO: President Donald Trump speaks next to Defense Secretary Pete Hegseth during a cabinet meeting at the White House in Washington, March 26, 2026.

"It is the Department of War's prerogative to decide what AI product it uses. Everyone, including Anthropic, agrees that the Department of War may permissibly stop using Claude and look for a new AI vendor who will allow 'all lawful uses' of its technology. That is not what this case is about," she wrote. "The question here is whether the government violated the law when it went further."

In a statement, an Anthropic spokesperson said: "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."

Last month, President Donald Trump ordered U.S. government agencies to stop using Anthropic's products, and Hegseth designated the AI company a "supply chain risk," amid a dispute with the company over the use of its technology.

The company said its artificial intelligence should not be used for fully autonomous weapons -- meaning AI, not humans, making final battlefield targeting decisions -- or for mass domestic surveillance.
