Artificial Intelligence once felt like humanity’s most benign creation—writing poetry, painting pictures, making films, composing music, helping scientists collaborate across labs. Then it crept into corporate boardrooms, assisting executives with organizational restructuring and employee layoffs. Panic set in. Would machines eventually rob people of their livelihoods?

That fear, looking back, feels almost quaint now.

AI has since crossed into far darker terrain: the battlefield. It identifies targets, guides missiles, and helps kill. The old anxiety about losing jobs has been eclipsed by something far grimmer. Machines are now taking lives.

On February 28, 2026, U.S. and Israeli jets, in a joint mission, cut through Iranian airspace to kill Iran’s Supreme Leader Ayatollah Ali Khamenei alongside senior ministry officials and military commanders. Quietly steering the operation was an unexpected actor—Claude, the AI model built by Anthropic, running on classified U.S. military networks.

The operation showcased AI’s contribution to modern warfare. AI’s “decision compression” collapsed the window between identifying a target and striking it almost to the speed of thought. In the Middle East conflict, nearly 900 strikes were launched in the first 12 hours alone.

Over 1,200 munitions reportedly struck 24 Iranian provinces, killing 201 and wounding 747. The deadliest strike hit a girls’ school in Minab resulting in 165 deaths and 95 wounded. In Tehran, the Revolutionary Court, the Ministries of Intelligence and Defence, and the Atomic Energy Organization were all struck. Iran retaliated against eight neighboring nations.

Claude, however, was not alone in this operation. Other AI systems, including Gemini, ChatGPT, GenAI.mil, and Grok, also participated. The U.S. War Department awarded $200 million in contracts to AI companies. These systems scanned satellite imagery, signals intelligence, and tactical feeds, compressing hours of analysis into minutes and generating simulations that helped army generals schedule strikes. Hermes, another AI system, helped U.S. Marine planners by fusing military doctrine with open-source data into defensive strategies.

These systems are built on large language models. Claude is trained on vast historical combat data, which it processes to accelerate decisions that once required entire analyst teams working for days or months. Earlier this year, through a Palantir Technologies and Amazon Web Services partnership, Anthropic’s AI also helped the Trump administration capture Venezuelan President Nicolás Maduro and his wife.

Claude is the latest in a long line of computer-aided warfare, going back to Alan Turing’s Enigma codebreaking during World War II. During the Cold War, the Pentagon funded ARPANET—the architecture behind today’s Internet—and developed GPS for military navigation. In the 1991 Gulf War, the Dynamic Analysis and Replanning Tool managed battlefield logistics so effectively that it was said to have repaid the Pentagon’s three decades of AI investment. NATO drones located Serbian positions in Kosovo in 1999, heralding unmanned warfare. Predator and Reaper drones dominated Afghanistan post-September 11, and by 2017, the U.S. Project Maven applied machine learning to drone footage.

Today, AI hunts with cold precision. Israel’s Habsora system—tested in Gaza and Lebanon, and now over Iran—analyzes surveillance and drone data to pinpoint targets. It generated over 100 bombing targets per day in Gaza, against roughly 50 per year in the pre-AI era, a figure confirmed by former IDF Chief of Staff Aviv Kochavi. In Ukraine, FPV drones, mostly human-piloted but increasingly AI-guided, account for 60-70 percent of battlefield casualties on both sides.

India also employed AI in Operation Sindoor, during its four-day conflict with Pakistan last year. The Indian army deployed 23 indigenous AI applications. Trinetra’s sensor-fusion system, linked with Project Sanjay’s surveillance network, gave Indian army generals a live operational picture. The Electronic Intelligence Collation and Analysis System detected enemy radars and missiles with 94 percent accuracy, processed data on-site, defeated jammers, and enabled precision strikes.

Satellite imagery confirmed widespread destruction amounting to roughly 20 percent of Pakistan Air Force infrastructure.

AI has had a wider destabilizing effect. Deepfakes saturated both the India-Pakistan and U.S.-Israel-Iran conflicts with disinformation. Elections have become battlegrounds. Russia deployed AI content to interfere in U.S. and European elections. China used AI-powered influence operations in Taiwan’s 2024 elections. Iran propagated AI-driven disinformation in the 2024 US presidential race. India, Pakistan and Bangladesh saw synthetic content manipulate voter sentiment.

Domestically, governments have turned AI inward. The UAE’s predictive policing system uses AI-powered facial recognition and behavioral analysis to flag crime locations and potential offenders; China deploys AI to crush anti-government dissent at home.

Future wars will increasingly be decided by technological edge, precision weapons, cyber strength, and intelligence dominance. Whoever observes, decides, and strikes first will hold the decisive advantage. AI is becoming this era’s expert general.

Yet a general without accountability is dangerous. Rapid AI decisions risk “flash wars,” in which misread data can escalate crises to nuclear thresholds. Greater precision could lower the psychological barrier to launching strikes. Unguarded dependence on AI invites hacking, data poisoning, and jamming. Bias in training data and algorithmic errors make accountability nearly impossible.

Binding guardrails are therefore essential. Every critical decision must carry human authorization. Training data must be audited for bias. Systems must be hardened against cyberattacks. International agreements—including the UN’s affirmation that humanitarian law applies to AI in warfare—must advance in step with the technology’s evolution.

From Turing’s wartime codebreaking to Claude’s silent role over the skies of Iran, AI has moved from the margins of conflict to its very core. It delivers speed and precision that no human team can match. Yet without governance, genuine restraint, and enforceable accountability, it risks making war not only faster and deadlier but chillingly detached from the human costs it generates.

Kaushik Bhowmik is a senior information security analyst with HSBC who has written op-eds on science and geopolitics in Indian and international publications.