The US Department of Defense has quietly signed classified AI contracts with OpenAI, Google, and Nvidia. What's notable isn't just who made the list — it's who didn't. Anthropic, despite being one of the most credible AI safety organizations in the world, was left off. The Pentagon is spending billions on AI, and the decisions being made in classified government buildings right now will shape the future of warfare, intelligence, and global stability for decades.
Why Defense AI Is Urgent Right Now
Modern warfare is being transformed at a pace that outstrips most public understanding. Autonomous drone swarms, AI-guided precision strike systems, real-time battlefield intelligence synthesis — these aren't science fiction concepts. Ukraine has deployed autonomous drones at scale. China's People's Liberation Army has published doctrine that explicitly names "intelligentized warfare" as its strategic goal. The US military sees an AI gap not just as a procurement problem but as an existential security risk.
The Pentagon's classified AI programs aren't about flashy demos. They're about operational edge: processing satellite imagery faster than human analysts, parsing intercepted communications across dozens of languages simultaneously, predicting adversary logistics and resupply patterns, and hardening US cyber infrastructure against AI-assisted attacks. The country that does these things better — and more reliably — wins conflicts before they escalate into full-scale wars.
What the Contracts Likely Cover
Because the contracts are classified, their exact scope is unknown. But based on public DoD procurement documents, Congressional testimony, and reporting from journalists on the defense beat, the likely areas include:
- Intelligence analysis: Processing satellite imagery at scale, flagging anomalies in signals intelligence, and synthesizing multi-source intelligence into actionable reports faster than human analysts can.
- Cyber defense and offense: AI models that detect previously unseen malware, patch zero-day vulnerabilities automatically, and, in classified programs, likely generate offensive cyber tools.
- Logistics and supply chain optimization: The US military is the largest logistics operation on Earth. AI reducing waste, improving readiness, and predicting maintenance failures has enormous peacetime value before a single bullet is fired.
- Autonomous weapons decision support: not full autonomy (current US policy requires human control over lethal-force decisions) but AI that provides targeting recommendations, threat classifications, and engagement windows to human operators; the sketch after this list illustrates the pattern.
- Language and translation: Near-real-time translation and cultural context analysis for intelligence gathered in Mandarin, Arabic, Farsi, and Russian.
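To make the decision-support distinction concrete, here is a minimal sketch of the human-in-the-loop pattern that current policy implies. Everything in it is hypothetical: the class names, thresholds, and fields are invented for illustration and are not drawn from any actual DoD system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ThreatClass(Enum):
    UNKNOWN = auto()
    CIVILIAN = auto()
    MILITARY = auto()

@dataclass
class Recommendation:
    track_id: str
    threat: ThreatClass
    confidence: float         # model confidence in [0, 1]
    engagement_window_s: int  # seconds before the option expires

def recommend(track_id: str, features: dict) -> Recommendation:
    """Stand-in for a trained classifier: score a track, classify it."""
    score = min(1.0, features.get("signature_match", 0.0))
    threat = ThreatClass.MILITARY if score > 0.8 else ThreatClass.UNKNOWN
    return Recommendation(track_id, threat, score, engagement_window_s=90)

def engage(rec: Recommendation, operator_confirmed: bool) -> str:
    """The model never authorizes force. Without explicit human
    confirmation, the only reachable outcome is 'hold'."""
    if not operator_confirmed:
        return f"hold: {rec.track_id} awaiting human decision"
    if rec.threat is not ThreatClass.MILITARY or rec.confidence < 0.9:
        return f"hold: {rec.track_id} below engagement criteria"
    return f"engage: {rec.track_id} authorized by operator"

rec = recommend("track-042", {"signature_match": 0.95})
print(engage(rec, operator_confirmed=False))  # hold: awaiting human decision
```

The design point is the separation of concerns: the model emits a recommendation object, and authorization lives in a separate code path that structurally cannot fire without the operator's flag.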
Nvidia's inclusion is primarily about hardware: its H100 and B200 GPU clusters are the backbone of virtually every large AI training run in existence. Supplying classified computing infrastructure is a natural extension of its existing role.
Why Anthropic Wasn't Included
This is the most interesting absence. Anthropic is arguably the most technically serious AI safety company in the world. Its Constitutional AI framework, its investment in interpretability research, and its public positioning as a "safety-first" lab make it, ironically, an awkward fit for classified defense procurement. Here's why:
Anthropic's corporate culture and published principles place explicit constraints on use cases the company deems harmful. A classified contract with the DoD — one that might involve offensive cyber tools, autonomous weapons systems, or intelligence gathering — may simply be incompatible with Anthropic's stated values. Classified work also requires cleared personnel with Top Secret/SCI access, and a smaller, more research-focused organization may lack the cleared workforce and SCIF infrastructure that large defense contracts require.
There's also the possibility that Anthropic's safety constraints make its models less useful for certain military applications: a model trained to refuse harmful requests is a model that might refuse operationally critical tasks. That's not a criticism of Anthropic; it may be exactly what the company intended. But it helps explain the omission.
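To see how that tension surfaces in practice, consider a generic safety filter sitting in front of a model. This is a hypothetical sketch, not Anthropic's actual guardrail design; the topic list and function names are invented for the example.

```python
# Hypothetical usage-policy gate. The point: a blunt safety filter
# cannot distinguish analyzing a harmful topic from enabling it.
BLOCKED_TOPICS = {"malware development", "weapons targeting", "exploit code"}

def classify_request(prompt: str) -> str:
    """Toy topic matcher standing in for a learned safety classifier."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return topic
    return "allowed"

def answer(prompt: str) -> str:
    topic = classify_request(prompt)
    if topic != "allowed":
        return f"refused (policy topic: {topic})"
    return "model response..."

# An intelligence analyst summarizing a threat report trips the same
# filter as someone asking for the harmful artifact itself:
print(answer("Summarize this report on malware development trends"))
# -> refused (policy topic: malware development)
```

A real deployment would use a far more nuanced classifier, but the structural problem survives: the tighter the refusal boundary, the more legitimate operational tasks fall inside it.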
The History: Project Maven and the Backlash That Didn't Stop Anything
This isn't the first time big tech has wrestled with defense contracts. In 2017, Google took on Project Maven, a DoD contract to use AI to analyze drone footage for target identification. When employees found out in early 2018, thousands signed a petition, prominent engineers resigned, and Google ultimately declined to renew the contract. The company then published AI principles forbidding weapons development.
What happened next: the Pentagon awarded Maven to Palantir. The work continued. Google's principled stand didn't stop autonomous drone targeting AI — it just meant Google didn't benefit from the contract revenue. The DoD now runs the Maven Smart System through a consortium of defense-focused AI vendors.
The lesson Silicon Valley drew, rightly or wrongly, is that moral stands in defense procurement have zero effect on the underlying military capability being developed — they only affect which company profits from it.
OpenAI's Arc: From Nonprofit to Pentagon Vendor
OpenAI was founded in 2015 as a nonprofit with an explicit mission: develop artificial general intelligence for the benefit of all humanity. Early employees and board members were idealists. The organization published safety research, released models openly, and maintained a culture that was at least nominally skeptical of commercial pressures.
That arc is now complete. The Microsoft partnership (2019, $1B; 2023, $10B) commercialized the technology. The ChatGPT launch commoditized it. And the Pentagon contracts militarized it. Each step was presented as necessary to fund the mission. Each step moved OpenAI further from its founding charter — a distance that is now the central subject of Elon Musk's lawsuit against Sam Altman.
OpenAI's leadership argues that safety research requires enormous resources, and enormous resources require commercial and government revenue. This is not an unreasonable argument. But it means the organization that was supposed to be a counterweight to unchecked AI development by powerful actors is now itself a powerful actor operating under classified government contracts.
The Employee Backlash That Keeps Not Happening
Employees at both Google and OpenAI have voiced concerns about defense work. Petitions have been written, internal memos leaked. And yet — unlike the 2018 Maven moment — mass resignations haven't materialized. The reason is simple: the AI labor market has tightened, compensation at these companies is extraordinary, and the cultural moment of 2018 has passed. The tech industry has normalized defense adjacency in a way that would have been unthinkable eight years ago.
China's Parallel Race
While American policymakers debate the ethics of AI in warfare, China's PLA is not waiting. The Chinese military has been integrating AI into command-and-control, drone swarms, and intelligence fusion under its doctrine of "intelligentized warfare." Chinese defense AI doesn't have an employee petition problem. It doesn't have a press freedom problem. It operates at the intersection of state power and corporate mandate in ways that American defense planners find genuinely alarming.
This geopolitical reality is the strongest argument Pentagon officials make for accelerating AI militarization: if the US doesn't, China will, and the asymmetry in capability could be decisive in a future conflict over Taiwan or in the South China Sea.
The Regulatory Vacuum
There is no international treaty governing autonomous weapons systems. The Geneva Conventions were written for a world where humans pulled triggers. The question of whether an AI system that selects and engages a target without human confirmation is lawful under international humanitarian law is genuinely unresolved — and being resolved de facto by procurement decisions made in classified DoD offices.
The Campaign to Stop Killer Robots has been lobbying for a treaty since 2012. They have nothing to show for it. The UN Group of Governmental Experts on LAWS (Lethal Autonomous Weapons Systems) has met annually and produced no binding agreement. Meanwhile, autonomous drone systems are being deployed in active conflict zones right now.
What This Means for the AI Industry
Defense contracts represent a qualitatively different revenue stream for AI companies. Unlike consumers, defense customers don't churn. Contracts run for years or decades. There's no social media backlash from the end user. Contract details are classified, so competitive intelligence is nearly impossible. And the national security framing insulates companies from the usual ESG pressure.
For investors, this is straightforwardly positive. For employees who joined these companies because they believed in the democratization of AI, it's a more complicated calculus.
The Militarization Is Accelerating
Individual company decisions — whether to participate or abstain — have minimal impact on the overall trajectory. The US military will use AI. China will use AI. Other state actors will follow. The question isn't whether AI gets militarized; it already has been. The question is whether any governance framework — domestic regulation, international treaty, or industry self-regulation — can shape how it gets militarized before a catastrophic incident forces the issue.
History suggests we'll need the catastrophic incident first.