In a landmark case that could redefine the relationship between AI companies and the U.S. government, a federal judge has granted Anthropic a preliminary injunction in its lawsuit against the Department of War, temporarily blocking the government’s blacklisting of the AI company while the judicial process plays out. The judge’s ruling, which will go into effect in seven days, represents a significant early victory for Anthropic and raises profound questions about free speech, government procurement, and the limits of executive power over private technology companies.
The case began when Defense Secretary Pete Hegseth issued a memo in January calling for “any lawful use” language to be written into AI services procurement contracts, language that would have required companies like Anthropic, OpenAI, xAI, and Google to waive their ability to restrict how the government used their AI technology. Anthropic’s negotiations with the Pentagon stretched on for weeks, hinging on two non-negotiable red lines: the company refused to allow its AI to be used for domestic mass surveillance or for lethal autonomous weapons, AI systems with the power to kill targets with no human involvement in the decision-making process.
When those negotiations broke down, the Department of War designated Anthropic as a “supply chain risk,” a designation typically reserved for foreign companies potentially linked to adversaries and an almost unheard-of move against a domestic American company. The government’s position, delivered via a series of confrontational public statements from Secretary Hegseth on social media, effectively barred contractors from working with Anthropic.
The Judge’s Ruling: First Amendment Retaliation
Judge Rita F. Lin, a district judge in the Northern District of California, was unsympathetic to the government’s position. In her order, she wrote that the Department of War’s records show it designated Anthropic as a supply chain risk “because of its ‘hostile manner through the press.’” She continued: “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”
During Tuesday’s hearing, Judge Lin pressed Department of War representatives on the logical consequences of their position. She asked whether a military contractor providing IT services could be terminated for using Anthropic if that work was separate from its national security work. When the representative confirmed that such a contractor could indeed be terminated, the judge observed: “You’re standing here saying, ‘We said it but we didn’t really mean it.’” She also questioned why Secretary Hegseth published a public announcement on X stating that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic” rather than simply designating the company as a supply chain risk.
The Stakes: Billions of Dollars and Bipartisan Concerns
The case has major financial implications for Anthropic. The company’s court filings indicate it has “received outreach from numerous outside partners expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic.” Dozens of companies have contacted Anthropic seeking guidance on whether they must terminate their usage. Anthropic alleged that, depending on how broadly the government’s prohibition on contractor work is read, revenue ranging from hundreds of millions to multiple billions of dollars could be at risk.
The designation drew bipartisan concern. The worry was straightforward: if disagreeing with a presidential administration could lead to a company being effectively blacklisted from government contracts and frozen out of the private sector, what would stop that same tactic from being used against any business in any sector? The case drew comparisons to the Chinese government’s practice of disciplining companies through informal pressure rather than formal legal mechanisms.
The Core Debate the Judge Refused to Settle
Judge Lin was careful to note that she was not deciding who was right in the underlying policy debate. Anthropic argues that its AI product is not safe to use for autonomous lethal weapons and domestic mass surveillance, and that the government must agree not to use it for those purposes if it wants to use the technology. The Department of War argues that military commanders must retain the flexibility to decide what AI products to use. Judge Lin’s view: “It’s not my role to decide who’s right in that debate… The Department of War decides what AI product it wants to use and buy. And everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor.” The question she is deciding is narrower: whether the government violated the law when it went beyond simply choosing not to use Claude.
A final verdict could be weeks or months away. In the meantime, the injunction provides Anthropic temporary relief from the worst effects of the blacklisting while the case proceeds. Anthropic’s statement was measured: “We are grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
This case will be studied for years. It is one of the first times an American company has successfully challenged the government’s use of informal pressure tactics to control the behavior of a technology provider, and the outcome could set important precedents for how both the government and AI companies operate in an era when artificial intelligence is becoming critical national infrastructure.