Costly reliance


THE reverberations of the Iran war are being felt around the world. In Pakistan, the government has hiked fuel prices, which will mean higher prices for all commodities. School schedules are being shortened, and people are finding it harder to ignore the bill that the US-Israel war on Iran has handed them. Other countries across the region face similar difficulties. All eyes are on the April 6 deadline the US has given Iran to reopen the Strait of Hormuz.
By now, nearly every aspect of the war — strategic interests, failures and the costly regional fallout — has been analysed at length. Less discussed has been the role of the artificial intelligence the US used to select targets and generate predictions. Seen through this lens, the dynamics of the war were decided not on Feb 28, when it began, but on Jan 9, when the US secretary of war issued a memo outlining seven projects that would make the US military “an AI first war fighting force across all components from front to back”.
This strategy required compressing timelines of years into months. It directed the military to prioritise speed in deploying AI models, even at the risk of imperfect alignment. A few weeks later, there was a public showdown between the war secretary and the AI company Anthropic, from which he demanded the removal of internal restrictions in its models on fully autonomous weapons and mass surveillance of the population. Anthropic refused, and lawsuits ensued. Pete Hegseth then had the company declared “a supply chain risk to national security” and banned from all federal contracts.
Despite this, AI systems were already embedded in the warfare systems being used. According to news reports, AI models run before the war generated extremely optimistic predictions that echoed the aggressive projections favoured by those advocating confrontation. These included expectations that the regime would fall quickly, that Iranians would come out into the streets to protest, that Iran would not shut down the Strait of Hormuz, and that the war would last only a few days.
AI predictions regarding the war have been disproven.
It is clear that nearly all of these predictions have been disproven by actual events. The regime may not be as strong as before, but it is still intact, having activated a diffuse command structure that has taken over. Iranians did not spill out into the streets to topple the regime; instead, there have been visible signs of a rally-around-the-flag effect.
The models also assumed that Iran would not act against its own interests by shutting the Strait of Hormuz to sea traffic — a premise based on rational-interest theory and past behaviour. Finally, the fact that the war has continued for more than a month shows that the prediction of a short conflict was also incorrect.
The reason for this predictive failure lies in a deficiency that Anthropic had warned about: ‘AI sycophancy’, the tendency of large language models trained with reinforcement learning from human feedback to echo what they perceive as the user’s preferences. Because such models are rewarded for matching user expectations, they produce confident predictions even when the evidence points the other way.
Implementing an AI-first policy while disregarding the guardrails and caveats flagged by industry experts may have provided the war-hungry hawks in the Trump administration with the predictions they were looking for. The appeal of AI systems is that they can echo preferences in precise language that appears backed by evidence. In essence, they present persuasive output as factual, objective information.
There were, of course, other sources of analysis available that could have provided better assessments. Yet the infatuation with artificial intelligence — the belief that it is inherently superior to human judgement — is not limited to the US. We live in a world increasingly willing to believe that the knowledge offered by systems like ChatGPT and Claude is necessarily better than what we can assess ourselves.
It is true that humans are slower and less exact than large language models trained on vast amounts of data. But it is also true that humans possess the ability to assess information with contextual and real-world understanding that AI systems cannot fully replicate. AI is not the only reason the US chose to initiate and pursue this war. However, understanding the role it may have played is essential in evaluating a technology that is often assumed to be objective, infallible and beyond reproach.
The writer is an attorney teaching constitutional law and political philosophy.
Published in Dawn, April 4th, 2026