SIPRI on artificial intelligence in nuclear systems

Image source: topwar.ru

The Stockholm International Peace Research Institute (SIPRI) has published a report titled "Pragmatic Approaches to Governance at the Intersection of Artificial Intelligence and Nuclear Weapons." Its main conclusion is that the integration of AI into nuclear and related weapons systems is fundamentally changing global security, creating new forms of vulnerability and uncertainty that could destabilize deterrence and increase the risk of unintended nuclear escalation.

SIPRI experts note that the era of shrinking nuclear arsenals has come to an end and that the new arms race carries "far more risks and uncertainties than the previous one." According to the institute, at the beginning of 2025 there were 12,241 nuclear warheads in the world, of which 9,614 were considered potentially ready for use. Russia leads with 4,309 warheads in its military stockpile, while the United States has 3,700.

The report identifies three main areas of AI application in nuclear command, control, and communications (NC3) systems: strategic warning, adaptive guidance, and decision support. Artificial intelligence can process huge amounts of data in real time, which in theory should give commanders more time to decide. However, experts warn of serious risks.

Image source: topwar.ru

The main problem is the lack of historical data on nuclear crises needed to train AI effectively. Unlike fields where large volumes of information can be accumulated, the small number of nuclear crises in history provides limited material for machine learning. This forces reliance on large language models (LLMs), whose ability to provide reliable decision support in rapidly evolving, high-stress scenarios is questionable.

The opacity of AI algorithms, known as the "black box" problem, remains a critical issue. Lu Chuanying, former head of the Research Center for International Cyberspace Governance at the Shanghai Institutes for International Studies, points out that the lack of transparency in AI systems at critical facilities such as nuclear command and control complicates decision-making and sows doubt in emergency situations.

Of particular concern is the risk of automation bias: a human operator eventually begins to accept the machine's recommendations uncritically, treating algorithmic suggestions as his or her own independent judgment.

AI-enhanced precision guidance, navigation, and decision-support systems can improve the detection and destruction of enemy nuclear forces.

The desire to preserve a retaliatory-strike capability may push states toward more destabilizing doctrines such as "launch on warning" (LOW) or "launch under attack" (LUA). Almost all nuclear powers are developing or already possess such capabilities.

An example is the May 2025 conflict between India and Pakistan, when India used BrahMos supersonic cruise missiles with AI-enabled guidance systems. One strike targeted the Pakistan Air Force's Nur Khan air base in Rawalpindi, located near nuclear command centers. An adviser to the Pakistani prime minister said the country had only 30–45 seconds to assess whether the incoming missile carried a nuclear warhead.

The report also highlights the growing role of AI-enhanced non-nuclear weapons capable of threatening the survivability of nuclear systems. These include anti-satellite systems that target the space-based infrastructure of NC3, as well as cyberweapons for pre-launch operations.

The debate over human control was quick to follow.

The issue of maintaining human control over nuclear weapons is becoming central to international discussions. France, the United Kingdom, and the United States presented a joint working paper on principles of responsible practice for nuclear-weapon states at the 2022 NPT Review Conference. In 2024, the presidents of China and the United States emphasized the importance of human control over the use of nuclear weapons.

Russia also advocates preserving the human factor. Colonel-General Sergei Karakaev, commander of the Strategic Missile Forces, said that "replacing a human is currently impractical," since the AI technologies being introduced to automate even routine operations have not yet been "fully studied and developed."

At the same time, robotic security systems with AI are being actively introduced in the Strategic Missile Forces. In all next-generation missile systems planned for combat duty by the 2030s, security will be built exclusively on robotic complexes using artificial intelligence.

The SIPRI report suggests several directions for risk management. The first is to broaden understanding of the risks of integrating AI into nuclear systems: nuclear powers could work in bilateral or multilateral formats to identify the specific attributes of AI systems they consider destabilizing.

Experts also propose developing AI safety standards covering the entire life cycle of nuclear weapons, similar to cyber-risk-management practices, along with an "emergency shutdown" mechanism for cases where a model's risks are deemed unacceptable.

In the short term, it may be useful to develop confidence-building measures aimed at reducing the risk of escalation as AI advances. States should consider bilateral hotlines for handling AI-related operations.

The integration of artificial intelligence into nuclear systems thus presents a dual picture: improved decision-making on one side, serious risks to global stability on the other. While AI can enhance data processing and situational awareness, its potential to introduce errors, reinforce biases, and compress decision time frames increases the risk of unintended escalation.

Existing governance approaches generally agree on the principle of maintaining human control. However, key differences remain over which specific actions should stay under human authority and how human judgment should be exercised in complex decision-making chains.
