[1]
By John Helmer, Moscow
@bears_with [2]
On the eve of the Islamabad negotiations, by intention, the US oligarch-run media platform known as the Washington Post editorialized in favour of a five-point combination of invasion and genocide for Iran in order to achieve a “final and decisive conclusion” if Iran refuses to come to “an acceptable agreement”.
That meant conceding all of the US and Israeli terms including demilitarization of Iran’s drone and missile production and operation capacities; denuclearization of both civilian and military enrichment; decapitation of the surviving Iranian command and control; reopening of the Strait of Hormuz; and an end to Iranian support for the Arab resistance. Read the claims here [3].
Also on the eve but by coincidence, General Reasoning (GR), a London-based research and development consultancy working on Artificial Intelligence (AI) systems, produced an unprecedented demonstration of the failure of the well-known global search engines and data operations to analyze and predict “long-horizon, non-stationary environments with open-ended goals.”
The first half of the phrase is GR’s term for reality; the term “open-ended goals” means the future.
GR’s test evidence is the failure of the AI search engines – Anthropic, ChatGPT, Google, Grok, etc. – to bet money profitably on game outcomes over the English Premier League football season of 2023-24. Read the report here [4].
“Every frontier model we evaluated lost money over the season and many experienced ruin. The best-performing model, Claude Opus 4.6, finished with an average return of −11% over three seeds. Only two models, Claude Opus 4.6 and GPT-5.4, avoided ruin across all three seeds [5].”
Technical conclusion [5]: “Models can write sophisticated code, diagnose their own failures, and articulate correct strategies, yet persistently fail to execute those strategies reliably, monitor their own performance, or adapt when their approach is not working.”
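The “ruin” the report describes is the standard gambling notion: a bankroll driven to near zero by repeated negative-expectation bets. A minimal simulation sketches how this happens; the stake size, win probability, and payout below are illustrative assumptions, not figures from the GR report:

```python
import random

def simulate_season(bankroll=100.0, n_bets=380, stake_frac=0.05,
                    win_prob=0.48, payout=1.0, seed=0):
    """Fixed-fraction betting with a slight negative edge.

    Assumed parameters (not from the GR report): even-money payout,
    48% win probability, 5% of bankroll staked per bet, 380 bets
    (one per Premier League fixture in a season).
    Returns the final bankroll, or 0.0 on ruin.
    """
    rng = random.Random(seed)
    for _ in range(n_bets):
        stake = bankroll * stake_frac
        if rng.random() < win_prob:
            bankroll += stake * payout
        else:
            bankroll -= stake
        if bankroll < 1.0:  # effectively ruined
            return 0.0
    return bankroll

# Average return across three seeds, mirroring the report's
# multi-seed evaluation of each model.
finals = [simulate_season(seed=s) for s in range(3)]
returns = [(f - 100.0) / 100.0 for f in finals]
print(returns)
```

Even a 2-point shortfall against fair odds compounds into a steep expected loss over a season, which is why the report treats sustained profitability, not code quality, as the real test.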
GR didn’t extend its skepticism towards Artificial Intelligence in sports to warfighting. On the battlefield of the Middle East, for example, this currently means the failure of the US and Israel to achieve their declared war aims; their refusal to accept the intelligence feedback on the penetration and defeat of their missile defence systems; their rejection of any change of policy.
However, the bottom line of the GR report is that man-made projections of US and Israeli warfighting superiority may go as badly wrong as Google, ChatGPT, and Anthropic went when they applied their machine superiority to sports wagering and were ruined. When GR concludes [6] that “[this] is an early example of a complex world that tests long-horizon sequential decision-making under uncertainty. Adaptive reasoning under uncertainty becomes essential,” the translation for US and Israeli warmaking right now is — think again, or you face ruin.
In this podcast, Geopolitics and Empire, hosted from Mexico City by Hrvoje Moric, thinking again is the point. The discussion starts with the mistakes of AI systems and with podcasting as the new form of investigative journalism. It then focuses on how imperialism is faring in the Middle East, the Americas, and on the Ukraine battlefield.
The podcast went to air on Saturday at 8 pm US Eastern Time. Click to view or listen: https://www.youtube.com/watch?v=BTZEKTJPZfY [7]