Combat Robot

With the advent of artificial intelligence, it became clear that sooner or later it would be used for military purposes, and this prospect raised a number of serious ethical problems.
For example, how would an AI exercise the right to kill people, if such a right were granted to it?
Last week, The Hague hosted REAIM 23 (Responsible AI in the Military Domain), the first international summit on the responsible use of artificial intelligence in the military domain, convened on the initiative of the Netherlands and South Korea with the participation of more than 60 countries. Following the summit, the participants (with the exception of Israel) signed a declaration stating that the countries they represent are committed to using AI in accordance with international law and without undermining the principles of "international security, stability and accountability." Participants also discussed the reliability of military AI, the unintended consequences of its use, the risks of escalation, and the degree of human involvement in decision-making.

According to critics, the declaration, which is not legally binding, leaves many problems unresolved, including the use of AI in ongoing military conflicts and in UAVs controlled by artificial intelligence. Such fears are far from unfounded: Lockheed Martin, one of the largest US military contractors, reported that its new training fighter had spent about 20 hours in the air entirely under AI control. And Eric Schmidt, the former CEO of Google, has voiced concern that AI itself could provoke military conflicts, including ones involving nuclear weapons.
Alexander Ageev