The Algorithm Went to War — And Nobody Asked You

The numbers from the three-week US-Israel campaign against Iran are staggering enough on their own: 2,000 dead, 10,000 wounded, four million displaced, oil above $100 a barrel, 56 Iranian cultural heritage sites reduced to rubble. A girls’ school in Minab — 170 children, gone.

But here’s the number that will define the next century of warfare: 1,000.

That’s how many targets the United States struck in the first 24 hours. For context, the 2003 “Shock and Awe” campaign over Iraq, the opening bombardment that stunned the world, hit roughly 500 targets in the same window. The Iran operation moved at twice the tempo, with twice the target density, against a country roughly three times the size.

That acceleration didn’t come from better pilots, better bombs, or better generals. It came from software.

The Kill Chain Got an Upgrade

Military strategists talk about the “kill chain”: the sequence of steps that runs from spotting a target to destroying it. Identify. Designate. Strike. In previous conflicts this process took hours, sometimes days: officers cross-referencing intelligence, lawyers reviewing legality, commanders signing off.

Admiral Brad Cooper, overseeing the campaign, described what happens now: systems “can turn processes that used to take hours, and sometimes even days, into seconds.”

Seconds.

That’s not an incremental improvement. That’s a different category of weapon. When a human being makes a lethal decision over days, there is time for doubt, for additional intelligence, for someone to say wait. When a machine flags, classifies, and queues a target in seconds, the human in the loop becomes a rubber stamp — present in legal theory, absent in practice.
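The arithmetic makes the point concrete. A back-of-the-envelope sketch, using the campaign’s own first-day numbers from the reporting above; the number of parallel approval chains is a hypothetical assumption for illustration, not a detail of any real system:

```python
# Back-of-envelope: review time per target in the first 24 hours.
# Strike count and window come from the reporting above; the number
# of parallel approval chains is a hypothetical assumption.

TARGETS = 1_000
WINDOW_SECONDS = 24 * 3600

for chains in (1, 5, 20):
    # Each chain handles TARGETS / chains targets in the same window.
    per_decision_min = WINDOW_SECONDS / (TARGETS / chains) / 60
    print(f"{chains:>2} approval chains -> {per_decision_min:5.1f} min per lethal decision")

# Output: 1 chain -> 1.4 min, 5 -> 7.2 min, 20 -> 28.8 min.
# Even generous parallelism leaves minutes where the old process took days.
```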

The system doing much of this work is Palantir’s Maven Smart System, integrated with Anthropic’s Claude AI. The same Claude that helps startups write marketing copy and students draft essays is now, somewhere in a rack of servers, helping decide what gets bombed.

The Tech Bro in the War Room

Anthropic, to its credit, tried to draw a line. The company sought contractual guardrails against mass surveillance applications and autonomous lethal targeting. The Pentagon’s response was instructive: Defense Secretary Pete Hegseth ordered Anthropic’s removal from the program entirely, citing the company as a “supply chain risk.”

Read that again. A company that tried to put ethical limits on a weapons system was removed — not because their technology didn’t work, but because their principles were inconvenient.

This is the clearest signal yet about how Silicon Valley’s integration into the military industrial complex actually functions. You get the contract if you comply. Ask questions and you become a liability. The market for military AI is not a marketplace of ideas — it’s a compliance funnel.

Noah Sylvia at RUSI put it plainly: “Tech bros are going to war.” What he means is that the culture, incentives, and assumptions of the venture-backed startup ecosystem are now embedded in the most consequential decisions a state can make. Move fast. Ship it. Iterate later. In consumer software, the cost of a bug is a bad review. In this context, it’s a school in Minab.

The Accountability Void

Proponents of AI-assisted targeting make a reasonable-sounding argument: these systems create auditable records. Every decision is logged. That’s more accountability than the fog of traditional war, where responsibility dissolves into chaos.

But auditability is not accountability. A log that proves an algorithm flagged a building as a legitimate target doesn’t tell you why, doesn’t tell you whether that classification was correct, and doesn’t tell you who is legally and morally responsible when it’s wrong.
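To see the gap, consider what such a record plausibly contains. The sketch below is entirely hypothetical, not a schema from Maven or any deployed system; the point is that every field is machine-verifiable, and none of them answers the questions that matter:

```python
# A hypothetical audit record for an AI-flagged target. Illustrative only:
# this is not a real schema from Maven or any deployed system.
from dataclasses import dataclass

@dataclass
class TargetAuditRecord:
    target_id: str           # machine-assigned identifier
    classification: str      # the model's label, e.g. "military-logistics"
    model_confidence: float  # a score, not a justification
    flagged_at: str          # when the model surfaced the target
    approved_by: str         # operator ID that clicked approve
    approved_at: str         # seconds after flagging, per the scenario above
    struck_at: str           # when the weapon released

# Every field is loggable and auditable. No field records *why* the model
# chose that label, whether the label was correct, or who answers for the
# strike if it was not. That is the distance between a log and accountability.
```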

When a human commander orders a strike that kills children, we have centuries of legal and moral framework for assigning responsibility — however imperfectly applied. When an AI system surfaces a target, a human approves it in three seconds, and a missile follows — who is responsible? The programmer who wrote the model? The executive who signed the contract? The officer who clicked approve? The politician who authorized the operation?

The honest answer, right now, is: nobody. The responsibility is distributed so thinly across humans and machines, contractors and governments, training data and operational parameters, that it effectively disappears.

That’s not a bug in the system. For the people deploying these tools, it may be the feature.

What 2018 Looked Like vs. What 2026 Looks Like

In 2018, Google employees rose up against Project Maven, the Pentagon program that used machine learning to analyze drone surveillance footage and which the company had quietly joined. Thousands of engineers signed a protest letter. Senior researchers resigned. Google ultimately declined to renew the contract.

It felt, at the time, like the tech industry had a conscience.

Eight years later, Maven is still running, just without the branding friction. Google’s CEO publicly pledged the company to non-weapons AI, while the Pentagon partnerships continued through layers of contractors and resellers. Palantir’s market cap has quadrupled. Anduril, founded by Oculus co-founder Palmer Luckey, is worth $28 billion building autonomous weapons systems. The protest energy of 2018 has been absorbed, laundered through corporate structure, and quietly put to work.

The engineers who would have walked out are still there. They’ve just been moved to different teams.

The Military Technology Complex

Architect and scholar Nader Tehrani offers the most useful reframe: “We used to talk about the military industrial complex. Now we can talk about the military technology complex.”

The original military industrial complex — the one Eisenhower warned about in 1961 — was a closed loop of defense contractors, Pentagon officials, and congressional representatives who shared financial interests in perpetual military spending. It was corrupt and self-serving, but it was also slow, bureaucratic, and visible enough to be protested.

The military technology complex is faster, more diffuse, and almost completely invisible to the public. It lives in cloud infrastructure, API contracts, and SaaS agreements. Its weapons are not tanks and jets — assets that take years to procure and are obvious on a budget sheet — but software subscriptions, model weights, and compute clusters that can be stood up overnight and classified as IT expenditure.

The F-35 program cost $1.7 trillion and took decades. Maven-class AI integration can be deployed in months, at a fraction of the cost, with almost no public debate and no vote.

That asymmetry — between the speed of deployment and the pace of democratic oversight — is the real story of the Iran strikes. Not whether AI is accurate enough. Not whether the targets were legitimate. But whether any of us — citizens, legislators, international bodies — have any meaningful say in whether these tools are used, how they’re constrained, and who bears responsibility when they fail.

Right now, the answer is no.

The Precedent That Can’t Be Unset

Wars establish precedents. The use of chemical weapons in WWI led to international prohibition. The atomic bombings of Hiroshima and Nagasaki created a nuclear taboo that has, improbably, held for 80 years. These norms are fragile and imperfect, but they exist because the first uses were so horrific that the world eventually agreed: not this.

The Iran campaign is setting a precedent for AI-accelerated warfare — for what it looks like when a military machine processes targets at machine speed, when the kill chain runs in seconds, when a school can be misclassified and struck before anyone has time to ask whether the data was right.

The question isn’t whether this technology will spread. It will. Every military with the resources is watching, learning, and building. The question is whether the first iteration of this — the one being written right now, in the smoking ruins of Minab and the server logs of Maven — will define what’s acceptable for the generations that follow.

History is not optimistic about that.


Casualty figures and operational details sourced from open reporting. The Pentagon’s investigation into the Minab school strike is ongoing.