The dangers of losing control of AI
Humanity seems to be ignoring a specter on the horizon: global nuclear war caused by artificial intelligence (AI). UN Secretary-General António Guterres has warned about it, but so far the nuclear-armed nations have not come together to negotiate over this catastrophic threat.
The rapid development of artificial intelligence poses the risk that AI could intervene in the process of launching nuclear weapons. Illustration photo
There has long been an informal consensus among the five major nuclear powers - the US, Russia, China, the UK and France - on the principle of "humans in the loop": each country maintains a system to ensure that humans are involved in the decision to launch nuclear weapons.
None of the five powers say they have deployed AI in their nuclear launch command systems. This is true but misleading, according to Dr. Sundeep Waslekar, chairman of the Strategic Foresight Group, an international research organization in Mumbai, India.
AI is already being used for threat detection and target selection. AI-powered systems analyze large volumes of data from sensors, satellites, and radars in real time, flagging possible incoming missile attacks and suggesting response options.
The operator then cross-checks the threat against different sources and decides whether to intercept the incoming missiles or launch a retaliatory strike.
“Currently, the response time available to operators is 10 to 15 minutes. By 2030, this will be reduced to 5 to 7 minutes,” said Sundeep Waslekar. “While humans will make the final decisions, they will be influenced by AI’s predictive and prescriptive analytics. AI could be the driving force behind launch decisions as early as the 2030s.”
The problem is that AI can be wrong. Threat-detection algorithms can indicate a missile strike when none is underway, whether because of computer error, network intrusion, or environmental factors that obscure the signals. Unless human operators can confirm from other sources within two to three minutes that an alarm is false, they could trigger a retaliatory strike.
Very small error, huge disaster
AI used in civilian functions such as crime prediction, facial recognition, and cancer prognosis is known to have an error rate of around 10%. In nuclear early-warning systems, the error rate may be around 5%, according to Sundeep Waslekar.
As the accuracy of image-recognition algorithms improves over the next decade, that error rate could fall to 1-2%. But even a 1% error rate could start a global nuclear war: during a crisis, a single false alarm that operators fail to catch within minutes could be enough to trigger a retaliatory launch.
Decisions to launch a nuclear strike or retaliate could be triggered by AI errors. Photo: Modern War Institute
The risk could increase over the next two to three years as new malware emerges that can bypass threat-detection systems. Such malware could adapt to avoid detection, identify targets automatically, and attack them on its own.
There were several near-misses during the Cold War. In 1983, a Soviet satellite mistakenly detected five missiles launched by the United States. Stanislav Petrov, the duty officer at the Soviet Union's Serpukhov-15 command center, concluded that it was a false alarm and did not report an attack to his superiors, who might otherwise have ordered a counterstrike.
In 1995, the Olenegorsk radar station detected what appeared to be a missile attack off the coast of Norway. Russia's strategic forces were put on high alert and then-President Boris Yeltsin was handed the nuclear briefcase. He suspected a mistake and did not press the button. The object turned out to be a scientific research rocket. If AI had been used to determine the response in either situation, the results could have been catastrophic.
Hypersonic missiles today use conventional automation rather than AI. They can travel at speeds of Mach 5 to Mach 25, evading radar detection and maneuvering in flight. The superpowers plan to enhance hypersonic missiles with AI that can instantly locate and destroy moving targets, shifting the decision to kill from humans to machines.
There is also a race to develop artificial general intelligence, which could lead to AI models that operate beyond human control. If that happens, AI systems could learn to improve and replicate themselves and take over decision-making. If such AI were integrated into decision-support systems for nuclear weapons, machines would be capable of initiating devastating wars.
Time to act
Faced with these risks, many experts believe humanity needs a comprehensive agreement among the major powers to minimize the risk of nuclear war, one that goes beyond restating the slogan “humans in the loop”.
This agreement should include transparency, accountability, and cooperation measures; international standards for testing and evaluation; crisis communication channels; national oversight boards; and rules to prohibit aggressive AI models that are capable of bypassing human operators.
Secretary-General António Guterres attends a peace memorial ceremony in Hiroshima, which was hit by an atomic bomb in 1945. Photo: UN
Geopolitical shifts are creating an opportunity for such a pact. Leading AI experts from China and the US, for example, have engaged in a number of track-two dialogues on AI risks, which helped pave the way for a joint statement by then-US President Joe Biden and Chinese President Xi Jinping last November affirming the need for human control over any decision to use nuclear weapons.
Billionaire Elon Musk, a vocal advocate of protecting humanity from the existential risks posed by AI, may urge US President Donald Trump to turn the Biden-Xi joint statement into a treaty, according to Dr. Sundeep Waslekar.
The AI-nuclear challenge also requires Russia’s participation, Dr. Sundeep Waslekar notes. Until January this year, Russia had refused to discuss any nuclear risk-reduction measures, including those involving AI, unless the issue of Ukraine was also on the table.
With President Donald Trump engaging in dialogue with Russian President Vladimir Putin to improve bilateral relations and end the war in Ukraine, Russia may now be open to discussions.
In February this year, following US Vice President JD Vance's speech at the Paris AI Action Summit, the Center for a New American Security (CNAS) released a report titled "Averting AI Armageddon: US-China-Russia Rivalry at the Nexus of Nuclear Weapons and Artificial Intelligence".
The report identifies the most significant risks of the AI-nuclear nexus and urges the US administration to establish a comprehensive set of risk mitigation and crisis management mechanisms with China and Russia.
Earlier, in September last year, about 60 countries including the US adopted a “blueprint for action” for the responsible use of AI in the military at the Responsible AI in the Military Domain (REAIM) Summit held in Seoul, South Korea. It was the second conference of its kind, following the first held in The Hague in 2023. Such moves show that the risk of a nuclear war initiated by AI is not science fiction.
The world is clearly facing an increasingly urgent existential problem, one that requires real action from the nuclear powers to ensure that “every decision about the use of nuclear weapons is made by people, not machines or algorithms”, as UN Secretary-General António Guterres has urged.
Nguyen Khanh