The US government had flagged the artificial intelligence giant as a supply-chain risk
The D.C. Circuit appeals court has rejected artificial intelligence giant Anthropic’s request for relief from the US Defense Department’s declaration that it is a supply-chain risk, reported the Wall Street Journal.
The Pentagon had classified Anthropic as a supply-chain risk under two statutes, one of which falls under the Washington appeals court's jurisdiction. Anthropic had told the court that the designation was likely to hurt its revenue and investors.
However, the court said that even though Anthropic's finances had been harmed by the Pentagon's designation, that harm was not sufficient grounds to override the US government on a national security matter. As a result, Anthropic remains classified as a security threat, and the company will be locked out of new contracts and Pentagon systems.
Anthropic’s AI model, Claude, is currently being used in the conflict with Iran. US President Donald Trump had set a six-month deadline in February for the Defense Department to move off the platform.
“The D.C. Circuit’s denial will prolong ambiguities regarding whether political considerations can drive federal procurement,” said Matt Schruers, chief executive of the Computer & Communications Industry Association, in a statement published by WSJ.
A California court had last month granted Anthropic a preliminary injunction halting Trump’s broad ban on the use of Claude by all federal agencies. The US government is appealing that decision, prompting some companies to say they would stop using Claude in government work to avoid potential problems.
Anthropic’s battle with the US government began after recent contract renewal negotiations fell through. The AI company had asked that its models not be applied to fully autonomous weapons or domestic surveillance; the Pentagon wanted an agreement allowing the military to use Anthropic’s AI in all legal applications, arguing that the restrictions Anthropic sought were unnecessary because they were already covered by military policies or legislation.
According to the WSJ, Anthropic’s designation as a supply-chain risk was almost unprecedented because the classification has typically been applied to US adversaries such as China. Anthropic rivals OpenAI and xAI had acquiesced to the Pentagon’s demands.
In a statement published by the WSJ, acting attorney general Todd Blanche described the appeals court ruling as “a resounding victory for military readiness.” An Anthropic spokeswoman, however, said in a statement published by the WSJ that the company was confident the courts would “ultimately agree that these supply-chain designations were unlawful.”