Alphabet's Google has joined a growing list of technology firms signing deals with the US Department of Defense to make its artificial intelligence models available for classified work, The Information reported on Tuesday, citing a person familiar with the matter. The agreement allows the Pentagon to use Google's AI for "any lawful government purpose." A spokesperson for Google Public Sector, the unit that handles US government business, confirmed to The Information that the new agreement is an amendment to its existing contract.
Four words carry the weight of everything that has unfolded since February: "any lawful government purpose." That phrase is not boilerplate. It is the specific language that Anthropic's CEO Dario Amodei refused to accept, that triggered the most extraordinary government action against a private technology company in recent memory, and that every other major AI lab has now agreed to. Google's signature doesn't just complete a procurement list. It marks the moment when the American AI industry's negotiating position on military use became, for practical purposes, settled.
The Contract Language That Is Doing Enormous Work
The contract includes language noting "the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control." But it also adds that the "Agreement does not confer any right to control or veto lawful Government operational decision-making."
Read both clauses together and the architecture of the deal becomes clear. The first clause is the safety guardrail — it acknowledges that domestic mass surveillance and fully autonomous weapons are not the intended use. The second clause is the operational clause — it ensures Google has no enforceable mechanism to prevent the government from making whatever decisions it considers lawful. The safety language is aspirational. The operational language is binding.
Google's agreement requires it to help adjust the company's AI safety settings and filters at the government's request. That sentence is the one that matters most, and it has received the least attention in the coverage. Not "Google retains the right to adjust safety settings." Not "safety settings remain under Google's control." The agreement requires Google to help the government adjust them. The direction of authority is explicit.
Google said it supports government agencies across both classified and non-classified projects. A spokesperson said the company remains committed to the principle that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight. "We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security," a Google spokesperson told Reuters.
That statement is carefully worded. "Industry-standard practices and terms" — which now means the terms that OpenAI, xAI, and Google have each agreed to — is a different standard from what Anthropic was insisting upon before it was blacklisted. The industry standard, as of April 28, 2026, is: any lawful use, with safety language that cannot be contractually enforced against the government's operational decisions.
"The Pentagon is seeking to preserve all flexibility in defense and not be limited by warnings from the technology's creators against powering weapons with unreliable AI."
— Reuters, reporting on the classified AI deal landscape, April 28, 2026
How the Pentagon Assembled Its AI Arsenal — and What It Cost Anthropic
The backstory of Google's deal is inseparable from one of the most dramatic corporate-government confrontations in Silicon Valley's history.
Anthropic signed a two-year, $200 million contract with the Pentagon in July 2025, becoming the first AI laboratory to integrate its frontier models into mission workflows on classified networks. That position — first mover on classified AI deployment, with a safety-centric company providing the technology — was framed as an alignment between Anthropic's principles and the DoD's stated commitment to responsible AI. Renegotiations broke down in February 2026 over a single clause. The Pentagon insisted on language authorising Claude for "any lawful use." Anthropic CEO Dario Amodei declined.
What followed was without precedent. On February 27, 2026, President Trump ordered the US government to stop using Anthropic's products. Defense Secretary Pete Hegseth moved to designate Anthropic a supply-chain risk to national security, the first such designation ever applied to an American company, and one triggered not by any security failure but by Anthropic's refusal to accept a contract clause permitting "any lawful use" of its Claude models.
The formal designation required defense vendors and contractors to certify that they don't use Anthropic's models in their work with the Pentagon. Hegseth granted agencies six months to phase out Claude, during which the military would transition to OpenAI's models and those from xAI. The Pentagon then went further: Hegseth threatened to ensure "the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not." A Korean War-era industrial mobilisation statute, invoked as a threat against a company that builds chatbots. The escalation said something about how the government now regards AI as strategic infrastructure.
OpenAI CEO Sam Altman announced OpenAI's deal with the Department of Defense hours after Anthropic was blacklisted. He had previously voiced public support for Anthropic's position on the limitations it was seeking, and numerous OpenAI employees had signed an open letter backing Amodei's insistence that Anthropic's models not be used for mass surveillance or autonomous weapons. Altman's public pivot was notable: same stated principles, different contractual approach. Where Anthropic sought specific prohibitions written into the contract, OpenAI deferred to existing law as the constraint — a position that satisfied the Pentagon's "any lawful use" requirement while allowing Altman to maintain that the red lines remained.
The Anthropic episode crystallised the central governance question: when an AI lab's safety principles conflict with a state actor's operational requirements, which authority prevails? The Pentagon's answer — delivered through the supply-chain risk designation — was unambiguous. The government will not accept contractual veto power over how it uses technology it has purchased. OpenAI, xAI, and now Google have each concluded that operating within that constraint, while negotiating the language around it, is preferable to the alternative Anthropic experienced.
Google's Position: A History Longer Than This Contract
Google's agreement with the Pentagon is framed as an amendment to an existing contract — which means this is not the company's first classified AI engagement. It is an escalation of one. That context matters because Google's relationship with military AI has been publicly contentious for nearly a decade.
In 2018, Google employees signed open letters and resigned in protest, and the company ultimately declined to renew its Project Maven contract — the DoD programme that used AI to analyse drone footage. The internal employee pressure that ended Project Maven became a landmark moment in tech worker activism on AI ethics. Google subsequently published AI Principles that explicitly listed "technologies that cause or are likely to cause overall harm" and "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" as applications the company would not pursue.
Those principles have been progressively softened over the subsequent eight years. By early 2025, Google had removed the explicit weapons prohibition from its AI principles. By April 2026, it had signed a deal that requires it to help adjust its AI safety settings at the government's request, with no veto over operational decisions. The trajectory from the Project Maven revolt to April 28's agreement is the arc of an industry that arrived at the national security market with principles, encountered the Pentagon's market power, and negotiated a landing position.
Classified networks handle a wide range of sensitive work, including mission planning and weapons targeting. The Pentagon signed agreements worth up to $200 million each with major AI labs in 2025, including Anthropic, OpenAI, and Google. At $200 million per agreement, the financial stakes are meaningful but not existential for a company of Google's scale. The strategic stakes are larger: classified AI deployment establishes a company's models in government workflows that, once embedded, are difficult to displace, and creates a reference architecture for sensitive-use deployment with commercial implications across defence contractors, intelligence community vendors, and allied governments.
Key Takeaways
1. The "any lawful government purpose" clause is now the industry standard — and Anthropic's refusal to accept it is the only documented exception. Google's signature completes the consolidation of every major US AI lab except Anthropic into the Pentagon's classified AI ecosystem on the government's terms.
2. The safety adjustment clause is the deal's most consequential provision. Requiring Google to help adjust AI safety settings at the government's request is structurally different from retaining that authority internally. The direction of control has been explicitly defined in the contract language.
3. The Anthropic precedent has changed the negotiating landscape permanently. No AI lab can now credibly threaten to walk away from Pentagon terms without risking a supply-chain designation of its own. The government has demonstrated it will use that tool, and the credibility of the threat is established.
4. Google's pivot from Project Maven to this agreement represents an eight-year arc of principle erosion under market pressure. That arc is not unique to Google — it describes the broader trajectory of the US AI industry's relationship with the national security state. But Google's specific history makes the contrast particularly sharp.
5. The global implications extend well beyond the US. Allied governments in the UK, Australia, Japan, and across NATO are watching how US AI labs structure their classified deployment terms. The "any lawful use" framework, if adopted as the template for allied nation agreements, will define military AI governance well beyond American borders.
The Honest Counterargument
The Pentagon's position — that it will not accept contractual guardrails that limit its operational flexibility — has a coherent internal logic that the civilian technology industry is poorly positioned to evaluate. Military operations involve adversarial environments, incomplete information, and time constraints that do not accommodate the kind of deliberate human review that AI safety frameworks assume. The insistence that any lawful use be permitted reflects a genuine operational requirement, not merely bureaucratic overreach.
OpenAI's head of national security partnerships, Katrina Mulligan, argued that its contract limits deployment to a cloud API, gives OpenAI control over the models and safety stack deployed, and keeps human AI experts in the loop for any modifications. "Autonomous systems require inference at the edge," Mulligan said. "By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware." That technical constraint — cloud API only, no edge deployment — is the substantive guardrail that the contractual language alone cannot provide. If Google's agreement includes a similar architectural limitation, it represents a more meaningful safety mechanism than the "not intended for" language in the contract.
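To make the architectural point concrete, the sketch below shows, in purely illustrative Python, the pattern Mulligan describes: the model is reachable only through a provider-hosted cloud API, and flagged requests wait on explicit human approval. None of the names or checks here (InferenceRequest, requires_human_review, the keyword list) come from any real deployment; they are assumptions made for illustration, not a description of OpenAI's or Google's actual systems.

```python
# Hypothetical sketch of a cloud-only, human-in-the-loop inference gateway.
# All identifiers and policy rules below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class InferenceRequest:
    caller_id: str
    prompt: str
    deployment_target: str  # only "cloud_api" is permitted in this sketch


# Placeholder triggers for prompts that should require human sign-off.
SENSITIVE_KEYWORDS = {"target", "strike", "surveillance"}


def requires_human_review(request: InferenceRequest) -> bool:
    """Flag prompts that touch sensitive mission categories."""
    text = request.prompt.lower()
    return any(keyword in text for keyword in SENSITIVE_KEYWORDS)


def run_model(prompt: str) -> str:
    """Stand-in for the provider-hosted model call behind the cloud API."""
    return f"model response to: {prompt!r}"


def handle_request(request: InferenceRequest, reviewer_approved: bool = False) -> str:
    # Architectural constraint: the model is served only via the cloud API,
    # never shipped to edge devices or embedded operational hardware.
    if request.deployment_target != "cloud_api":
        raise PermissionError("Edge or on-device deployment is not permitted.")

    # Human-in-the-loop gate: flagged requests wait for explicit approval.
    if requires_human_review(request) and not reviewer_approved:
        return "PENDING_HUMAN_REVIEW"

    return run_model(request.prompt)


if __name__ == "__main__":
    req = InferenceRequest(
        caller_id="analyst-17",
        prompt="Summarise logistics options for the exercise",
        deployment_target="cloud_api",
    )
    print(handle_request(req))  # unflagged request goes straight to the model
```

The design point is the one the article raises: an enforcement mechanism that lives in the serving architecture survives regardless of what the contract says about intent, whereas "not intended for" language with no veto right does not.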
Anthropic has said it will challenge "any supply chain risk designation in court." That legal fight, if pursued, will test questions about government authority over private companies' product terms that American courts have rarely addressed at this level of specificity. The outcome could reshape the terms on which every AI lab negotiates with the federal government going forward — or, if the Pentagon's position is upheld, cement the current framework as the settled law.
The counterargument from Anthropic's position deserves the same honest assessment. Caitlin Kalinowski, who had been leading OpenAI's robotics and hardware operations since November 2024, resigned on March 7, 2026, stating that "surveillance of Americans without judicial oversight and lethal autonomy without human authorisation are lines that deserved more deliberation than they got." A senior executive departing OpenAI over the terms of its Pentagon deal is not a small data point. It is evidence that the safety concerns Anthropic cited were not universally dismissed by its competitors' own leadership.
What Google's Signature Means for the Global AI Governance Debate
The Pentagon has said it has no interest in using AI to conduct mass surveillance of Americans or to develop weapons that operate without human involvement, but wants "any lawful use" of AI to be allowed. The gap between those two positions — stated intent to avoid certain uses, insistence on the legal authority to pursue them anyway — is precisely the space that Anthropic was trying to close with contractual language, and precisely the space that Google, OpenAI, and xAI have each decided to leave open.
For observers outside the United States, the consolidation of American AI capabilities behind an "any lawful use" framework has specific implications. European regulators drafting AI Act implementation guidance cite autonomous weapons and mass surveillance as use cases that require the highest level of human oversight. The US government's insistence that it will not accept contractual restrictions on these use cases creates a values misalignment between American military AI deployment and European AI governance that will surface in trade negotiations, data-sharing agreements, and allied interoperability frameworks.
China's military AI programme operates under no equivalent governance debate — the PLA's deployment of AI is not constrained by anything resembling Silicon Valley employee open letters or CEO refusals. The framing in Washington is that American AI labs need to be deployable for "any lawful use" to maintain competitive advantage against Chinese military AI. The framing that the now-blacklisted Anthropic and the now-departed Kalinowski would offer is that the absence of hard contractual limits is precisely what makes that competitive advantage dangerous to exercise.
The relationship between military applications and Google, OpenAI, xAI, and the wider global AI industry has moved from philosophical debate to contractual reality in eight months. The Anthropic episode demonstrated what happens to a company that holds the philosophical line once the government has decided to make that move. Google's April 28 signature is the industry's collective acknowledgement that the debate is, for now, settled.
The clause denying any right to veto lawful government operational decision-making will now sit beneath the AI systems handling mission planning and weapons targeting at the world's most powerful military. Whether that is a responsible approach to national security, or a governance failure that future courts and historians will weigh carefully, is a question whose answer has been deferred — not resolved.






