Wednesday, 18 March 2026

Military AI: When the Future of Tech Meets the Power of the State

For an industry that enjoys speaking in the accents of moral philosophy, America’s frontier-AI business is learning the language of procurement. The latest quarrel, between Anthropic, the self-consciously cautious maker of Claude, and OpenAI, the more expansionist creator of ChatGPT, turns on Pentagon contracts, safety clauses, and the meaning of “lawful use”. Yet beneath the legalese lies a blunter struggle. The contest is not merely over who supplies the government with the best model for military AI. It is over whether the firms that preach restraint can remain in charge of their own restraints once the state comes calling.

The Pentagon has spent the past year signing agreements with leading AI labs, typically with ceilings of $200m. OpenAI, Anthropic, and others signed such deals as the Department of War sought cutting-edge systems for military and enterprise use. In February, OpenAI also reached an agreement to deploy its models on classified defence cloud networks. This is no bureaucratic sideshow. It marks the moment when generative AI stops being merely a commercial technology with military applications and starts becoming part of the strategic machinery itself.

Anthropic, curiously, helped create this moment before rebelling against it. It had actively pursued national security work and signed its own Pentagon deal. Then it balked. On February 26th, the firm said it would not remove safeguards that bar the use of its systems for fully autonomous weapons or mass domestic surveillance. The Pentagon’s answer was brutal: accept “all lawful” uses, it said, or risk cancellation and a designation as a “supply-chain risk”. By early March, that threat had become reality, and Anthropic said it would challenge the move.

OpenAI chose accommodation, but not surrender, at least on its own telling. The firm said its Pentagon arrangement preserved red lines against mass domestic surveillance, the operation of autonomous weapons systems, and high-stakes automated decisions, while maintaining its safety architecture for classified deployments. It even said it opposed branding Anthropic a supply-chain risk. That is both a principled and an interested stance: nobody wants to be the only remaining supplier to a customer as forceful as the American state.

At first glance, the dispute seems moral and the money incidental. That is wrong. The money matters greatly, but only indirectly. A $200m ceiling is not transformative for firms at the frontier of AI. What such contracts buy is something more valuable than revenue: legitimacy, embeddedness, and a route into the machinery of the state. A firm trusted with classified environments looks less like a fashionable application-maker and more like infrastructure. That signalling value reaches well beyond Washington. Once the government labels a supplier as risky, investors, agencies, and corporate buyers may begin treating it accordingly. A regulatory label coined for one purpose soon becomes a commercial stigma.

Yet the more interesting issue is not financial but constitutional. Anthropic’s case is, in essence, that a private company may refuse to enable certain applications because the technology remains too unreliable or too dangerous. The company’s objections centre on autonomous weapons and domestic surveillance. The Pentagon retorts that lawful authority, not vendor philosophy, must govern national-security use. In other words, Anthropic is asserting a kind of corporate conscience, and the state is insisting on sovereign discretion.

That quarrel is sharpened by what lies abroad. China publicly speaks the language of prudence: its foreign ministry has said military AI should be regulated, remain subject to human control, comply with international humanitarian law, and avoid strategic instability. Yet Beijing did not endorse the 2024 Seoul “blueprint for action” on responsible military AI use, and Reuters reported that both China and America also declined to sign a 2026 declaration on governing military AI at the REAIM summit in Spain. Reuters has separately reported that PLA-affiliated researchers built a military-focused model on top of Meta’s Llama. The contrast, then, is not between an America with rules and a China with none. It is between an American system in which firms, agencies, courts, and public argument can visibly constrain defence AI, and a Chinese one in which the state appears to face far less public friction in bending AI to strategic ends. In such a contest, Washington will be tempted to treat Anthropic-style principles not as safeguards, but as luxuries.

That is why the dispute is spreading beyond the Pentagon. On March 7th, it emerged that the Trump administration had drawn up stricter rules for civilian AI contracts, requiring companies to grant the government irrevocable rights to all lawful uses of their systems. Those draft rules reportedly mirror ideas under consideration in military procurement. If adopted, they would turn a quarrel with one determined supplier into a broader doctrine: the government buys not merely access to a model, but primacy over the terms on which it may be used. That may be democratically defensible. Governments, unlike AI labs, answer to voters. But it also means that the industry’s favourite language (“responsible AI”, “alignment”, “safety”) will matter only for so long as it does not collide with sovereign power.

OpenAI and Anthropic, therefore, represent not two opposite futures but two versions of the same one. Both want defence work. Both want to retain a reputation for safety. Both know that the old posture, half laboratory, half ethical guardian, becomes harder to sustain once the customer is the armed state. OpenAI’s solution is to argue that guardrails can be built into the contract. Anthropic’s is to insist that some boundaries must sit outside it. The Pentagon, characteristically, prefers a simpler arrangement: the company may sell the tool, but the state decides what counts as proper use.

The AI boom is often described as a race for better models. That is flattering nonsense. The more consequential race is for official favour: to be the lab that government trusts, subsidises and, in time, quietly disciplines. The prize is not merely a contract worth a few hundred million dollars, but admission to the inner circles of state power, along with the prestige, dependence, and moral compromise that come with it. Once a frontier-AI firm becomes part of the national-security apparatus, its talk of safety still matters, but only at the margin and only until it obstructs raison d’état. The labs still speak like philosophers, but they are starting to behave like defence contractors.