Trump Bans Anthropic From All Federal Systems as Pentagon Brands AI Firm a National Security Risk


President Donald Trump on Friday ordered every federal agency to cease using Anthropic’s technology and gave the Pentagon six months to phase out Claude, the only frontier AI model operating on classified military networks, after the company refused to allow unrestricted military use of its artificial intelligence.

The Ban and Its Mechanisms

Trump’s directive, posted on Truth Social about an hour before a Pentagon-imposed 5:01 PM ET deadline expired on 27 February, instructed all government agencies to “IMMEDIATELY CEASE all use of Anthropic’s technology.” He called Anthropic’s leadership “Leftwing nut jobs” who had made a “DISASTROUS MISTAKE trying to STRONG-ARM the Department of War,” according to the full text reproduced by NPR, CNN, CNBC, Axios, Fortune and NBC News. The president added a direct threat: Anthropic had better “get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.”

Shortly after the deadline passed, Defence Secretary Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security” on X — a classification CNN and Fortune noted is normally reserved for entities linked to foreign adversaries such as China and Russia. The designation requires every contractor and supplier doing business with the US military to certify it conducts no commercial activity with Anthropic. The General Services Administration separately confirmed it would remove Anthropic from USAi.gov, the federal government’s centralised AI testing platform.

What Anthropic Refused

The dispute centres on a contract worth up to $200 million that Anthropic signed with the Pentagon in July 2025 to provide Claude on classified defence networks via data analytics firm Palantir. Under the original agreement, Anthropic’s acceptable use policy prohibited Claude’s deployment for mass domestic surveillance of Americans or fully autonomous lethal weapons without human oversight. The Pentagon demanded those restrictions be replaced with language permitting use “for all lawful purposes,” arguing the military’s own legal framework — not a private company’s terms of service — should govern operational decisions.

CEO Dario Amodei rejected the demand in a public statement on Thursday. He argued that frontier AI systems are “not reliable enough to power fully autonomous weapons” and that powerful AI can now stitch together individually innocuous public data into a comprehensive portrait of any person’s life, creating surveillance capabilities that existing law does not adequately address, as reported by CNN, CNBC and Fortune. He called the Pentagon’s twin threats — invoking the Korean War-era Defence Production Act to compel compliance while simultaneously labelling his company a security risk — “inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” Anthropic’s refusal to bend before Friday’s deadline came despite weeks of escalating pressure from Defence Department leadership.

The Pentagon’s Case

Emil Michael, Under Secretary of Defence for Research and Engineering, responded by calling Amodei “a liar” with a “God-complex” on X, according to The Hill and CNN. Michael told CBS News the military had offered written acknowledgments of federal surveillance laws and invited Anthropic to join its AI ethics board. An Anthropic spokesperson countered that new contract language received overnight “made virtually no progress” and that terms “framed as compromise” were “paired with legalese that would allow those safeguards to be disregarded at will.” Pentagon spokesperson Sean Parnell maintained the military would “not let ANY company dictate the terms regarding how we make operational decisions.”

Silicon Valley Mobilises

The confrontation triggered the most significant cross-company employee mobilisation since Google’s 2018 revolt against Project Maven. More than 300 Google employees and over 60 OpenAI employees signed an open letter titled “We Will Not Be Divided,” calling on leadership to uphold Anthropic’s red lines, TechCrunch and CNBC reported. A separate Friday letter from labour organisations including the Alphabet Workers Union and Amazon Employees for Climate Justice — representing a coalition Bloomberg described as encompassing more than 700,000 workers — urged Amazon, Google and Microsoft management to “refuse to comply” with similar Pentagon demands.

OpenAI CEO Sam Altman told staff in an internal memo reported by the BBC and Axios that OpenAI shares Anthropic’s “red lines” and that any Pentagon contracts would exclude domestic surveillance and autonomous offensive weapons. Retired Air Force General Jack Shanahan, who oversaw the Pentagon’s original AI initiatives, posted that Claude’s red lines are “reasonable.”

What Comes Next

Losing the $200 million contract alone would not imperil Anthropic, recently valued at approximately $380 billion. The greater financial risk lies in the supply-chain designation. Adam Conner at the Center for American Progress told CNN it could cause "some large portion" of Anthropic's enterprise customer base to "evaporate" because those clients either hold government contracts or aspire to win them. The designation also forces Palantir, which relies on Claude for its most sensitive defence work, to source an alternative model. Elon Musk's xAI has become the second company cleared for classified networks after agreeing to the Pentagon's "all lawful purposes" standard for Grok, though Axios sources said it is "unlikely to be a like-for-like replacement."

Senator Mark Warner, vice chairman of the Senate Intelligence Committee, warned the directive “raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.” The six-month phaseout window now sets the clock on what may become the defining test of whether America’s leading AI companies will accept unrestricted government authority over their technology — or whether the industry’s collective resistance will force a renegotiation of those boundaries entirely.

Disclaimer: Finonity provides financial news and market analysis for informational purposes only. Nothing published on this site constitutes investment advice, a recommendation, or an offer to buy or sell any securities or financial instruments. Past performance is not indicative of future results. Always consult a qualified financial advisor before making investment decisions.
Mark Cullen
Senior Stocks Analyst — Mark Cullen is a Senior Stocks Analyst at Finonity covering global equity markets, corporate earnings, and IPO activity. A London-based professional with over 20 years of experience in communications and operations across financial, government, and institutional environments, Mark has worked with organisations including the City of London Corporation, LCH, and the UK's Department for Business, Energy and Industrial Strategy. His extensive background in strategic communications, market research, and stakeholder management — including coordinating financial services partnerships during COP26's Green Horizon Summit — informs his ability to distill complex market dynamics into clear, accessible analysis for investors.
