Anthropic Just Called the Pentagon’s Bluff. The Deadline Is Today.

Anthropic CEO Dario Amodei rejected the Pentagon’s “best and final offer” on Thursday evening, saying the company would rather lose its $200 million defence contract than allow Claude to be used for mass surveillance of Americans or fully autonomous weapons. Defence Secretary Pete Hegseth’s deadline expires at 5:01pm ET today — Friday, February 27 — after which the Pentagon has threatened to invoke the Defence Production Act, designate Anthropic a supply chain risk, or both. As of this morning, Anthropic has not budged.

What They’re Actually Fighting About

The dispute is narrower than the headlines suggest. Anthropic is not refusing to work with the military — Claude is already the only AI model operating on the Pentagon’s classified networks, deployed through a partnership with Palantir under a contract awarded last July. It was used in the January operation that captured Venezuelan president Nicolás Maduro. The company has advocated for chip export controls to China and has customised models for national security customers.

The fight is over two specific restrictions Anthropic insists on maintaining. First, that Claude not be used for mass domestic surveillance of American citizens. Second, that it not make final targeting decisions in military operations without human involvement — what Amodei calls “fully autonomous weapons.” A source familiar with the negotiations told CBS News that Anthropic’s position partly reflects a technical concern: Claude is not immune to hallucinations and is not reliable enough to avoid potentially lethal mistakes without human judgment in the loop.

The Pentagon wants “all lawful purposes” — no carve-outs, no company-imposed restrictions. A senior Pentagon official told CNN the issue “has nothing to do with mass surveillance and autonomous weapons being used” because the department “has always followed the law.” Sean Parnell, the Pentagon’s chief spokesperson, framed it more bluntly on X: “We will not let ANY company dictate the terms regarding how we make operational decisions.”

Seven Days in February

The confrontation has moved fast. On February 13, Axios and the Wall Street Journal reported that Claude had been used in the Maduro raid via the Anthropic-Palantir partnership. A senior Pentagon official claimed an Anthropic executive subsequently contacted a Palantir executive to ask whether Claude was involved, raising it “in such a way to imply that they might disapprove.” Anthropic flatly denied this, saying it had not discussed the use of Claude for specific operations with either the Pentagon or Palantir “outside of routine discussions on strictly technical matters.”

The Palantir exchange — disputed as it is — appears to have been the trigger. By February 15, Axios reported the Pentagon was close to cutting ties entirely. By February 17, Parnell confirmed the department’s relationship with Anthropic was “being reviewed.” On February 23, xAI signed an agreement to bring Grok into classified systems under the “all lawful purposes” standard — giving the Pentagon at least a theoretical alternative, though defence officials acknowledge replacing Claude would be technically difficult and time-consuming.

Then came Tuesday, February 24. Hegseth summoned Amodei to the Pentagon. The room was heavy with brass: Deputy Secretary Steve Feinberg, Under Secretary for Research and Engineering Emil Michael, Under Secretary for Acquisition and Sustainment Michael Duffey, general counsel Earl Matthews, and Parnell. One source described the atmosphere as “not warm and fuzzy.” Another said it remained cordial, with Hegseth praising Claude’s performance. Both accounts agree on the substance: Hegseth told Amodei to sign a document granting full access by Friday evening or face consequences.

Three Threats, One Contradiction

The Pentagon put three options on the table. First, terminate the contract. Second, designate Anthropic a “supply chain risk” — a classification typically reserved for foreign adversarial firms like Huawei — which would force every Pentagon contractor to certify that Claude is not used in their military workflows. Third, invoke the Defence Production Act to compel Anthropic to provide Claude without restrictions, a move the DPA authorises when products are deemed critical to national defence.

Amodei noted the paradox in his Thursday blog post. The threats “are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” He reiterated that Anthropic “believes deeply in the existential importance of using AI to defend the United States,” and that the Pentagon, “not private companies, makes military decisions” — but that in a narrow set of cases involving surveillance and autonomous weapons, “AI can undermine, rather than defend, democratic values.”

The Pentagon’s final offer arrived Wednesday night — less than 48 hours before the deadline. Anthropic said the contract language “made virtually no progress” on its two concerns. New language framed as compromise was “paired with legalese that would allow those safeguards to be disregarded at will,” a spokesperson told The Hill. A source described the additions as designed to sound like concessions but functioning as escape hatches. Amodei’s Thursday evening response was unequivocal: “We cannot in good conscience accede to their request.”

The Competitive Landscape Shifts

The timing is not accidental. On the same day Amodei walked into the Pentagon, Anthropic published version 3.0 of its Responsible Scaling Policy — a significant overhaul that dropped the company’s original hard commitment to pause model training if safety measures were not proven adequate. Chief Science Officer Jared Kaplan told TIME that “it wouldn’t actually help anyone for us to stop training AI models” when “competitors are blazing ahead.” The RSP update separates what Anthropic will do unilaterally from what it recommends the industry adopt collectively — a tacit acknowledgement that self-imposed restraints only work if competitors follow suit.

They have not. OpenAI, Google and xAI have all agreed to the “all lawful purposes” standard for unclassified military systems. Grok is now the first non-Claude model approved for classified use. Google’s Gemini is reportedly close to a classified deal. OpenAI is further behind, but negotiations have intensified. If the Pentagon follows through on the supply chain risk designation, Anthropic would not just lose one contract — it would be functionally barred from the entire defence ecosystem at a moment when it is trying to scale its enterprise business. The episode also shows how readily the government can convert procurement power into leverage over private technology companies.

Anthropic’s leverage is narrower but real. Claude remains, by the Pentagon’s own admission, the most capable model for sensitive military applications. A defence official told Axios: “The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good.” Replacing Claude on classified networks is not a software update — it is an integration project that could take months, during which the Pentagon’s most advanced AI capabilities would degrade.

What Happens at 5:01

If the deadline passes without agreement, the Pentagon has three moves. The supply chain risk designation is the most immediately damaging to Anthropic’s commercial position. DPA invocation is the most legally contentious — Anthropic could argue it is not providing a commercially available product but custom-built software for classified use, though such a challenge would take months to resolve. Contract termination is the simplest but hurts the Pentagon most, since no replacement is ready.

Amodei offered an off-ramp in his Thursday statement: “Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.” The offer to help the Pentagon leave is itself a negotiating signal — it communicates that Anthropic is prepared to absorb the revenue hit rather than cross its stated red lines.

The deeper question is precedential. If the Pentagon can compel a private company to strip safety restrictions from an AI system through threat of blacklisting, every AI lab’s governance framework becomes negotiable under pressure. If Anthropic holds the line and survives commercially, it establishes that there are limits to what the national security apparatus can demand from technology providers. Either outcome reshapes the relationship between Silicon Valley and the state for the AI era. The clock runs out this evening.

Sources: Axios, CNN, CBS News, NPR, The Hill

Disclaimer: Finonity provides financial news and market analysis for informational purposes only. Nothing published on this site constitutes investment advice, a recommendation, or an offer to buy or sell any securities or financial instruments. Past performance is not indicative of future results. Always consult a qualified financial advisor before making investment decisions.
Mark Cullen
Senior Stocks Analyst — Mark Cullen is a Senior Stocks Analyst at Finonity covering global equity markets, corporate earnings, and IPO activity. A London-based professional with over 20 years of experience in communications and operations across financial, government, and institutional environments, Mark has worked with organisations including the City of London Corporation, LCH, and the UK's Department for Business, Energy and Industrial Strategy. His extensive background in strategic communications, market research, and stakeholder management — including coordinating financial services partnerships during COP26's Green Horizon Summit — informs his ability to distill complex market dynamics into clear, accessible analysis for investors.
