Image source: topwar.ru
Despite the declarative agreement reached by the leaders of the United States and China in 2024 that human judgment must not be replaced by artificial intelligence (AI) in decisions on the use of nuclear weapons, Washington is demonstrating an alarming trend toward actively integrating these technologies into adjacent nuclear command, control, and communications systems (NC2/NC3). This strategy has drawn justified criticism from the expert community.
An Internet search conducted on February 26, 2025 with the Perplexity chatbot, covering works published since 2022 that contain the terms "artificial intelligence" and "nuclear weapons", shows that expert opinion on the use of AI in nuclear weapons systems is predominantly negative, once articles written by the US military or presenting its views are excluded, along with publications devoted to arms control applications. This skepticism stems from concerns about the damage to strategic stability and the increased risk of escalation, especially accidental or unintended escalation. This article fully supports that critical position, arguing that even the indirect introduction of AI creates unpredictable and potentially catastrophic risks.
The main danger lies in creating false confidence in the reliability of the information that shapes decision makers' situational awareness. Integrating AI into intelligence, surveillance, data analysis, and decision support systems is fraught with cascading effects: an error or deliberate manipulation in one of the interconnected systems, for example in a computer vision model for target detection, can be amplified many times over and distort the overall threat picture during an acute crisis, under severe time pressure.
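A toy calculation makes the compounding effect concrete. The sketch below assumes a hypothetical five-stage sensor-to-decision pipeline with invented, illustrative per-stage error rates; it is not a model of any real NC2/NC3 architecture, only a demonstration of how small independent error probabilities accumulate across a chain.

```python
# Toy model of cascading error across a chained AI pipeline.
# Stage names and per-stage error rates are hypothetical and
# chosen only for illustration, not drawn from any real system.

stages = {
    "computer_vision_target_detection": 0.02,
    "sensor_fusion": 0.01,
    "track_correlation": 0.015,
    "threat_classification": 0.02,
    "decision_support_summary": 0.01,
}

# If each stage independently corrupts its input with probability
# p_i, the chance the final picture survives intact is prod(1 - p_i).
p_intact = 1.0
for name, p_err in stages.items():
    p_intact *= 1.0 - p_err

print(f"P(distorted threat picture) = {1.0 - p_intact:.3f}")
# Prints ~0.073: five stages at 1-2% error each already give a
# roughly 7% chance that the final picture is wrong.
```

If anything, this understates the risk the article describes: the independence assumption is optimistic, since downstream models trained to trust upstream outputs tend to propagate an upstream error rather than catch it, and a deliberate adversary attacks the weakest stage rather than failing at random.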
Of particular concern is the phenomenon of "automation bias": the psychological tendency of humans to place excessive trust in conclusions generated by algorithms, especially under stress. This creates the risk that the top leadership of the United States could make a fateful decision based on uncritically accepted data that has passed through opaque and incompletely understood AI systems.
A potentially destabilizing scenario is the development and deployment of AI systems capable of detecting and tracking, with high accuracy, strategic ballistic missile submarines (SSBNs), the backbone of an assured retaliatory strike capability. Undermining confidence in the stealth of SSBNs could create the temptation to launch a preemptive strike in a crisis, completely destroying the logic of nuclear deterrence.
Entrusting decisions on the possible use of nuclear weapons exclusively to artificial intelligence would be just as risky as giving up control over nuclear weapons entirely. In both cases the stakes are so high that even the most experienced expert in both AI and nuclear safety cannot predict the final result today.
That is precisely why the US drive to integrate AI into the nuclear sphere appears to be an extremely irresponsible step. The opacity, vulnerability to cyber attacks, and fundamental unpredictability of complex AI systems make them unsuitable for tasks where the cost of a mistake is a global catastrophe. The existing international consensus on the need to preserve human control should be reinforced by strict domestic regulatory barriers that exclude the influence of AI not only on launch authorization but on all processes leading up to that decision. Further militarization of AI in the nuclear field is leading the world down a path of instability and unintended conflict, which confirms the broad skepticism of independent experts.