Kill Chain: How Silicon Valley's AI Powers the Imperial War on Iran
Bappa Sinha
ON February 28, 2026, the United States and Israel launched Operation Epic Fury, striking 1,000 targets in Iran within the first 24 hours. By mid-March, the number had crossed 6,000. Behind this staggering pace of destruction lay not just the familiar arsenal of Tomahawk missiles, B-2 stealth bombers and carrier-based fighters, but a new weapon in the imperial toolkit: artificial intelligence. The US military's own AI strategy document puts it with brutal clarity: "speed wins" and "the risks of not moving fast enough outweigh the risks of imperfect alignment." What this means in practice is that the Pentagon has decided that killing faster matters more than killing accurately.
It is worth stating at the outset that artificial intelligence, as a technology, has enormous potential for human progress in healthcare, climate science, materials discovery, and planning for human needs. The issue is not AI itself. The issue is what happens when this technology is placed in the hands of an imperial war machine that has, across decades, perfected the machinery of death while systematically dismantling every legal and ethical constraint on its use.
THE AI KILL CHAIN
The core of the US military's AI deployment in Iran is a system called the Maven Smart System, built by the war-technology corporation Palantir and incorporating the large language model Claude, developed by Anthropic. Maven consolidates what were previously eight or nine separate intelligence and targeting systems into a single digital platform. It ingests data from satellite imagery, drone video feeds, signals intelligence - intercepted phone calls, text messages, internet surveillance - radar, and human intelligence reports. Machine learning algorithms then process this vast ocean of data to identify and prioritise potential targets, recommend appropriate weaponry, and even evaluate the legal grounds for a strike.
First, there is the question of scale. The traditional process of military targeting, what the Pentagon calls the "kill chain", historically required teams of thousands of intelligence analysts poring over imagery, cross-referencing reports, and building target packages over days or weeks. During the Second World War, the aerial targeting cycle from intelligence collection to assembled strike package took weeks or even months. A Georgetown University investigation found that in the US Army's 18th Airborne Corps, AI had already reduced a team of 2,000 intelligence analysts to just 20. Craig Jones, a senior lecturer at Newcastle University and expert on kill chains, put it starkly: AI is making targeting recommendations "much quicker in some ways than the speed of thought." The assassination-style strikes that killed Iran's Supreme Leader Ayatollah Ali Khamenei were reportedly executed within 60 seconds of identification.
Then there is the matter of what this speed means for civilian lives. David Leslie, professor of ethics at Queen Mary University of London, has warned that reliance on AI produces "cognitive off-loading" - human decision-makers feel detached from the consequences of a strike because the analytical labour has been performed by a machine. When the Maven system generates a target recommendation, the human officer reviewing it has, in Leslie's words, "a much narrower time band to evaluate the recommendation." The system produces options; the human rubber-stamps them. The fiction of "human-in-the-loop" decision-making, the claim that humans always make the final call, collapses under the sheer velocity of the process.
The Iran war has already produced what may be the most devastating consequence of AI-assisted targeting: the bombing of the Shajareh Tayyebeh elementary school in Minab on the very first day of the war. A US Tomahawk cruise missile struck the school, killing at least 170 people, most of them schoolgirls. The school was located adjacent to an Islamic Revolutionary Guard Corps (IRGC) compound. According to a preliminary investigation reported by the New York Times, US Central Command officers "created the target coordinates for the strike using outdated data provided by the Defense Intelligence Agency." Satellite imagery analysed by news organisations shows that the school was fenced off from the military compound between 2013 and 2016, a fact that was either missed or never updated in the targeting database. Over 120 members of Congress have demanded to know whether Maven and its AI systems were used to identify the school as a target.
The competing explanations for the school bombing are themselves revealing. Some sources suggest AI failed to identify the school as a civilian object, classifying it instead as part of the military compound. Others argue this was a human intelligence failure - analysts working with decade-old data. The Semafor news outlet reported that publicly available Iranian business listings showed the school's location, and a simple internet search could have prevented the massacre. Whether the blame lies with the algorithm or the analyst, the systemic conclusion is the same: the drive for speed in targeting, accelerated enormously by AI, compresses the space for careful verification to the vanishing point. When you are striking 1,000 targets in 24 hours, the time available to check whether a building is a school or a barracks approaches zero. And of course, the possibility of deliberate criminality - that the US-Israeli war machine simply does not care about civilian casualties - cannot be excluded. US Secretary of War Pete Hegseth has openly stated that the military's aim is "maximum lethality, not tepid legality."
WAR AS LOGISTICS AND AI AS LOGISTICS ENGINE
AI's role extends well beyond target selection. Modern warfare is, at its core, a logistics problem of immense complexity. Coordinating Tomahawk launches from naval vessels, stealth bomber sorties, drone operations, aerial refuelling, munitions management, and damage assessment - all simultaneously across multiple theatres - is exactly the data-intensive coordination problem AI is supposed to solve. Maven reportedly recommends specific weaponry for each target, accounting for stockpiles and previous munitions performance. AI runs "what if" operational simulations, allowing planners to evaluate courses of action in minutes rather than days. The Pentagon spent $11.3 billion on the war in the first six days alone.
Then there is the political economy of AI warfare. Palantir, the company behind Maven, has seen its market capitalisation approach $360 billion on the strength of military contracts. The Pentagon awarded Palantir an initial Maven contract worth $480 million in 2024, expanded to $1.3 billion by 2025, and has now made Maven an official programme of record. The US Army separately awarded Palantir a contract worth up to $10 billion. AI warfare is now a major profit centre for Silicon Valley, creating a powerful constituency with a direct financial interest in perpetual conflict.
THE GAZA PRECEDENT AND THE SYSTEMIC LOGIC
AI use in the Iran war did not emerge from nowhere. Israel's genocidal war on Gaza provided the testing ground. The Israeli military deployed AI systems called Lavender and Gospel to identify targets, programmed to accept up to 100 civilian casualties for a single strike on a suspected Hamas combatant. Over 75,000 Palestinians have been killed since October 2023. What was tested in Gaza has now been scaled up against Iran.
The parallels with earlier imperial warfare are instructive. In the Vietnam War, the Igloo White automated targeting system was regularly deceived by decoys. In 1988, the US Navy's Aegis cruiser shot down Iran Air Flight 655, killing 290 civilians, because personnel, working under pressure with the automated Aegis combat system, misidentified a civilian airliner. In 1999, intelligence failures led US stealth bombers to strike the Chinese embassy in Belgrade, killing three Chinese journalists. At every stage, the automation of warfare has produced catastrophic errors; at every stage, the imperial powers have pressed forward regardless.
What is new is the scale. The infrastructure now being built - Maven deployed across all six military branches, the GenAI.mil platform for classified networks, over 20,000 active users across 35 military tools - is the permanent architecture of algorithmic warfare. Meanwhile, the Trump administration has systematically dismantled every civilian protection mechanism. Military lawyers who advised on international law compliance have been sidelined and fired. The Iranian Red Crescent reports nearly 20,000 civilian buildings and 77 healthcare facilities damaged.
The emerging reality of AI warfare is not a verdict on the technology. It is a verdict on imperialism. When AI is deployed under the logic of imperial capitalism, it becomes what it has become in Iran: a machine for killing faster, at greater scale, with less accountability, and with enormous profits for the corporations that build it. The 170 dead children of Shajareh Tayyebeh school are not a bug in the system. Under the logic of "maximum lethality, not tepid legality," they are the defining feature.


