
Trump Banned Anthropic From the US Government. A Federal Judge Just Blocked It.

The Trump administration banned Anthropic's Claude from all federal agencies after the company refused to allow its models to be used for autonomous weapons and mass domestic surveillance. Judge Rita F. Lin blocked the ban on March 26, calling it "Orwellian" and "classic illegal First Amendment retaliation." The DOJ appealed to the Ninth Circuit on April 2.

By Anthony M. · Updated April 5, 2026
Judge Rita F. Lin ruled the Trump administration's ban on Anthropic likely violated the First Amendment -- a landmark decision for AI governance.

On March 26, 2026, US District Judge Rita F. Lin blocked the Trump administration's ban on Anthropic's Claude AI across all federal agencies, calling the Pentagon's "supply chain risk" designation an "Orwellian notion" and the ban "classic illegal First Amendment retaliation." The ban followed Anthropic's refusal to expand its $200M Pentagon contract to cover autonomous weapons and mass domestic surveillance. The DOJ filed its appeal to the Ninth Circuit on April 2.

What Happened: The Full Timeline

This is the most consequential clash between the US government and an AI company in history. It involves a $200 million Pentagon contract, a presidential directive, a "supply chain risk" designation normally reserved for foreign adversaries, and a federal judge who refused to let it stand. We have been tracking this story since the first reports of tension in late February, and we want to lay out exactly what happened, why it matters, and where it goes from here.

| Date | Event | Significance |
| --- | --- | --- |
| July 2025 | Anthropic awarded $200M Pentagon contract | First AI lab on classified DoD networks |
| Feb 23, 2026 | Hegseth-Amodei meeting at Pentagon | DoD demands "all lawful use cases" access |
| Feb 24, 2026 | Hegseth issues Friday ultimatum | Drop guardrails or face consequences |
| Feb 26, 2026 | Anthropic publicly refuses | Amodei: "cannot in good conscience" comply |
| Feb 27, 2026 | Trump orders all agencies to stop using Anthropic | 6-month phase-out for existing deployments |
| Feb 27, 2026 | OpenAI announces Pentagon deal same day | Immediate replacement, raises conflict questions |
| Mar 5, 2026 | Pentagon designates Anthropic "supply chain risk" | Historically reserved for foreign adversaries |
| Mar 9, 2026 | Anthropic files two federal lawsuits | California district court + DC appeals court |
| Mar 24, 2026 | Preliminary injunction hearing | Judge presses DoD on justification |
| Mar 26, 2026 | Judge Lin issues 43-page injunction | Blocks ban, calls it First Amendment retaliation |
| Apr 2, 2026 | DOJ files appeal to Ninth Circuit | Ninth Circuit sets April 30 briefing deadline |

The $200M Contract That Started Everything

In July 2025, the Department of Defense -- through the Chief Digital and Artificial Intelligence Office (CDAO) -- awarded Anthropic a two-year prototype other transaction agreement with a $200 million ceiling. Anthropic became the first AI lab to integrate its models into mission workflows on classified Pentagon networks. The company partnered with Palantir to deploy Claude across defense and intelligence workflows, processing and analyzing classified data at unprecedented scale.

By all accounts, the partnership was working. The Pentagon praised Anthropic publicly. Claude passed rigorous national security vetting. Defense analysts were using it daily for intelligence synthesis, logistics planning, and document processing. The contract had explicit guardrails that both parties agreed to at signing: no autonomous weapons, no mass domestic surveillance.

The trouble started when the DoD decided those guardrails were no longer acceptable.

The Hegseth Ultimatum

On February 23, 2026, Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei at the Pentagon. The meeting was described by sources as "tense." Hegseth demanded that Anthropic allow its models to be used "for all lawful use cases" -- which would include mass domestic surveillance of Americans and fully autonomous weapons systems that can fire without human involvement.

On February 24, Hegseth gave Amodei a Friday deadline: drop the guardrails or face consequences. The Pentagon spelled out the consequences explicitly -- cancel the $200M contract and designate Anthropic a "supply chain risk," a classification normally reserved for companies connected to foreign adversaries like China and Russia.

Emil Michael, the defense undersecretary for research and engineering, publicly alleged on X that Amodei "has a God-complex" and "wants nothing more than to try to personally control the US Military." The rhetoric was extraordinary for a government official discussing an American defense contractor.

Anthropic Refuses: "We Cannot in Good Conscience"

On February 26, Dario Amodei published a public statement on Anthropic's website. The core message was unequivocal: Anthropic would not comply.

"We cannot in good conscience agree to allow the Department of Defense to use our AI models in all lawful use cases."

Amodei laid out two specific red lines:

  • Autonomous weapons: Frontier AI systems are "not reliable enough to power fully autonomous weapons," and without proper oversight, they "cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day."
  • Mass domestic surveillance: Amodei noted that mass surveillance of Americans "actually isn't illegal. It was just never useful before the era of AI. So there's this way in which domestic mass surveillance is getting ahead of the law." He argued that just because something is currently legal does not mean a company should enable it at scale.

When asked what he would tell President Trump, Amodei responded: "I would say, we are patriotic Americans. Everything we have done has been for the sake of this country, for the sake of supporting U.S. national security. The red lines we have drawn we drew because we believe that crossing those red lines is contrary to American values."

Trump's Ban and the Supply Chain Risk Designation

On February 27, 2026, President Trump ordered all federal agencies to "immediately cease all use" of Anthropic's technology, with a six-month phase-out period for existing deployments. The directive affected every branch of government that had adopted Claude -- from intelligence agencies to civilian departments.

On March 5, the Pentagon went further: it officially designated Anthropic a "supply chain risk" to national security. This was an extraordinary step. The supply chain risk framework was created to protect the US military from companies with ties to foreign adversaries -- think Huawei, Kaspersky, or firms linked to the Chinese military. Applying it to an American AI company headquartered in San Francisco, founded by former OpenAI researchers, and already operating on classified networks was, as legal experts noted, unprecedented.

The designation meant that every defense contractor that used Anthropic's technology had to certify they would stop -- or risk losing their own government contracts. The ripple effect was immediate and devastating for Anthropic's enterprise business.

The five-month timeline from record Pentagon contract to federal court battle.

OpenAI Steps In -- Same Day

Hours after Trump banned Anthropic on February 27, OpenAI announced its own Pentagon deal. No one in the industry missed the timing.

OpenAI CEO Sam Altman acknowledged the negotiations were "definitely rushed." The company published a blog post arguing its agreement included protections against autonomous weapons and mass domestic surveillance -- but critics pointed out the protections were less binding than Anthropic's contractual red lines. OpenAI's approach relied on cloud-only deployment and internal safety teams rather than hard contractual limits.

The move triggered internal dissent at OpenAI. CNN reported that some OpenAI staff were "fuming" about the Pentagon deal. More than 30 Google and OpenAI employees later filed an amicus brief in Anthropic's lawsuit, supporting the company's position.

The MIT Technology Review ran a headline that captured the industry's unease: "OpenAI's 'compromise' with the Pentagon is what Anthropic feared."

Anthropic Sues the Pentagon

On March 9, Anthropic filed two lawsuits -- one in federal court in the Northern District of California, and another in the DC Circuit Court of Appeals. The central argument was that the supply chain risk designation was illegal retaliation for exercising First Amendment rights.

The company argued that the Pentagon had praised it as a partner, put it through rigorous security vetting, and only turned against it after Amodei went public with his refusal. The sequence of events, Anthropic's lawyers argued, demonstrated a textbook pattern of government retaliation against protected speech.

The case attracted extraordinary amicus support:

  • Microsoft filed an amicus brief backing Anthropic's request for a temporary restraining order
  • 22 retired military officials co-signed Microsoft's brief
  • 30+ Google and OpenAI employees filed their own brief, including Google DeepMind chief scientist Jeff Dean
  • Nearly 150 retired federal and state judges warned the Pentagon was misusing a tool designed for foreign adversaries

Judge Lin's Ruling: "Orwellian" and "Corporate Murder"

On March 26, 2026, US District Judge Rita F. Lin of the Northern District of California issued a 43-page ruling granting Anthropic's request for a preliminary injunction. The ruling was devastating for the government's position.

Judge Lin's key findings:

"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."

"Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation."

The judge noted that the Defense Department provided "no legitimate basis to infer [Anthropic] might become a saboteur" and rejected the argument that disagreement constitutes a valid security concern. She wrote that while agencies may choose contractors, they cannot attempt "corporate murder" because a firm refuses compliance or exercises First Amendment rights "in ways embarrassing to the department."

The injunction specifically:

  • Blocked the "supply chain risk" designation
  • Halted Trump's directive ordering agencies to stop using Anthropic
  • Restored the status quo as it existed before the ban
  • Stayed the order for one week to allow a government appeal

DOJ Appeals to the Ninth Circuit

On April 2, 2026, the Department of Justice filed a notice of appeal to the US Court of Appeals for the Ninth Circuit. The government is seeking to overturn Judge Lin's preliminary injunction and reinstate both the ban and the supply chain risk designation.

The Ninth Circuit has set an April 30 deadline for the DOJ to file its opening brief. Legal experts we have spoken with expect oral arguments sometime in late May or June 2026. The case, Anthropic PBC v. Department of Defense, is being watched by every major tech company, defense contractor, and AI safety researcher in the world.

Meanwhile, the GSA issued a statement on April 3 acknowledging the preliminary injunction and directing agencies to comply with the court's order -- meaning Anthropic's products remain available to federal agencies for now.

Why This Matters for the Entire AI Industry

This is not just an Anthropic story. It is a precedent-setting case that will determine whether the US government can punish AI companies for maintaining safety guardrails.

As NYU's Stern Center for Business and Human Rights put it: "If the United States government responds to principled limits by threatening to cut off the company that imposes them, it sends a clear message to the entire industry: responsibility is a liability."

The implications span multiple dimensions:

The Chilling Effect on AI Safety

Every AI company now knows that maintaining red lines on military use could result in being designated a national security threat. If the Ninth Circuit overturns Judge Lin's ruling, it would create a legal framework where the government can classify any American tech company as a foreign-adversary-level risk simply for disagreeing with policy. The chilling effect on responsible AI development would be profound.

The Defense Tech Startup Ecosystem

TechCrunch raised a critical question: "Will the Pentagon's Anthropic controversy scare startups away from defense work?" The answer appears to be yes, at least partially. Multiple defense tech founders told reporters they were reconsidering government contracts after watching Anthropic get blacklisted for maintaining contractual terms both parties originally agreed to.

First Amendment Protections for Tech Companies

Judge Lin's ruling, if upheld, establishes that tech companies have First Amendment protections when they publicly disagree with government contracting demands. This is a novel legal finding with sweeping implications. It means the government cannot use procurement power as a weapon to silence corporate dissent -- a principle that extends far beyond AI.

The Anthropic ban triggered the largest industry coalition in AI history, with Microsoft, Google DeepMind, and 150 retired judges filing amicus briefs.

The Unexpected Public Support for Anthropic

One of the most striking side effects of the ban: on February 28, one day after Trump's order, Anthropic's Claude app rose to No. 1 in Apple's US App Store free rankings, overtaking ChatGPT. The Streisand Effect was in full force -- the government's attempt to punish Anthropic turned it into the most visible AI company in the world overnight.

What Happens Next

We are tracking three key developments:

  1. Ninth Circuit briefing (April 30): The DOJ must file its opening brief by the end of April. Anthropic will respond, and we expect amicus briefs from the same coalition that backed it in the district court.
  2. Oral arguments (May-June 2026): The Ninth Circuit is expected to schedule oral arguments for late spring. A ruling could come anytime from summer 2026 onward.
  3. Congressional action: Senator Elizabeth Warren has already written to Hegseth demanding justification for the supply chain risk designation. Multiple bipartisan bills addressing AI procurement and military AI governance are in committee.

If the Ninth Circuit upholds Judge Lin's ruling, it sets a binding precedent across nine western states that the government cannot weaponize procurement designations against companies exercising First Amendment rights. If it reverses, the case almost certainly heads to the Supreme Court.

Our Take

We have covered AI tools for over a year at ThePlanetTools. We review them, test them, compare them, and track the companies behind them. We do not typically wade into politics. But this story is impossible to separate from the tools themselves.

Whether you use Claude, ChatGPT, Gemini, or any other AI tool, the outcome of this case will shape what those tools can and cannot do. If the government can punish companies for setting safety boundaries, every AI company will face pressure to remove guardrails. If Judge Lin's ruling stands, companies retain the right to draw red lines -- even when the customer is the most powerful military on Earth.

The facts are clear. Anthropic had a working $200M contract. Both parties agreed to terms. The government changed those terms. Anthropic said no. The government tried to destroy Anthropic's business. A federal judge said that was unconstitutional.

What happens at the Ninth Circuit will determine whether that principle holds.

Frequently Asked Questions

Why did Trump ban Anthropic from the US government?

President Trump ordered all federal agencies to stop using Anthropic's Claude AI on February 27, 2026, after the company refused to allow the Pentagon to use its models for mass domestic surveillance and fully autonomous weapons. Anthropic had a $200M Pentagon contract with explicit guardrails that both parties originally agreed to, but the DoD later demanded "all lawful use cases" access without limitations.

What does "supply chain risk" designation mean for Anthropic?

The Pentagon designated Anthropic a "supply chain risk" on March 5, 2026 -- a classification normally reserved for companies connected to foreign adversaries like China or Russia. It required every defense contractor using Anthropic's technology to certify they would stop, effectively cutting Anthropic off from the entire defense ecosystem. Judge Lin blocked this designation on March 26, calling it an unprecedented misuse of the framework.

Who is Judge Rita F. Lin and what did she rule?

Judge Rita F. Lin is a US District Judge in the Northern District of California. On March 26, 2026, she issued a 43-page preliminary injunction blocking both the supply chain risk designation and Trump's ban on federal agencies using Anthropic. She called the government's actions "Orwellian" and "classic illegal First Amendment retaliation," finding the Pentagon had no legitimate basis to classify Anthropic as a security threat.

What is Anthropic's position on military AI use?

Anthropic supports military use of its AI for many applications -- it actively worked with the Pentagon through a $200M contract and partnered with Palantir for defense workflows. The company draws two specific red lines: no fully autonomous weapons systems (AI that can fire without human oversight) and no mass domestic surveillance of Americans. CEO Dario Amodei stated these red lines exist because current AI "is not reliable enough" for autonomous weapons and mass surveillance "is getting ahead of the law."

Did OpenAI replace Anthropic at the Pentagon?

OpenAI announced a Pentagon deal on February 27, 2026 -- the same day Trump banned Anthropic. CEO Sam Altman admitted the negotiations were "definitely rushed." OpenAI claims its contract includes protections against autonomous weapons and mass surveillance, but critics note these rely on internal safety teams rather than hard contractual limits. The deal triggered internal dissent at OpenAI and amicus briefs from 30+ Google and OpenAI employees supporting Anthropic.

What happens next in the Anthropic v. Department of Defense case?

The DOJ appealed Judge Lin's ruling to the Ninth Circuit Court of Appeals on April 2, 2026. The court set an April 30 deadline for the government's opening brief. Oral arguments are expected in May-June 2026. If the Ninth Circuit upholds the injunction, it sets binding precedent across nine western states. If reversed, the case likely heads to the Supreme Court.

How did the tech industry react to the Anthropic ban?

The industry rallied behind Anthropic in unprecedented fashion. Microsoft filed an amicus brief co-signed by 22 retired military officials. Over 30 Google and OpenAI employees -- including Google DeepMind chief scientist Jeff Dean -- filed their own brief. Nearly 150 retired federal and state judges warned the Pentagon was misusing the supply chain risk framework. Meanwhile, Claude rose to No. 1 on Apple's App Store the day after the ban.

What was the $200M Pentagon contract Anthropic held, and what happened to it?

In July 2025, the Department of Defense -- through the Chief Digital and Artificial Intelligence Office (CDAO) -- awarded Anthropic a two-year prototype "other transaction agreement" with a $200 million ceiling, making Anthropic the first AI lab to integrate its models into mission workflows on classified Pentagon networks. Anthropic partnered with Palantir to deploy Claude for intelligence synthesis, logistics planning, and document processing. The contract included explicit guardrails against autonomous weapons and mass surveillance -- guardrails both parties agreed to at signing. The DoD cancelled it after Anthropic refused to remove those restrictions in February 2026.

How does the Anthropic case compare to past government actions against tech companies like Huawei or TikTok?

Past supply chain risk designations targeted companies with alleged ties to foreign governments -- Huawei (China), Kaspersky (Russia), or firms on the Entity List. This is the first known case where the framework was used against a domestic American AI company as apparent retaliation for its public speech about safety policy. Judge Lin explicitly called this an "Orwellian notion," distinguishing it from legitimate national security actions. The case is considered the most consequential clash between the US government and an AI company in history, with direct implications for every AI lab operating under or seeking federal contracts.

Anthony M. — Founder & Lead Reviewer

We're developers and SaaS builders who use these tools daily in production. Every review comes from hands-on experience building real products — DealPropFirm, ThePlanetIndicator, PropFirmsCodes, and many more. We don't just review tools — we build and ship with them every day.

Written and tested by developers who build with these tools daily.