Machines making independent decisions could pose global threats
Kubinka (Moscow region). August 22. INTERFAX - The use of artificial intelligence (AI) in military affairs requires the creation of special security rules to offset the possible consequences of this technology slipping out of human control, Alexander Fisun, head of a department at the Military Medical Academy, said at a conference on Sunday.
"AI for the armed forces is currently developing rapidly, which requires the creation of special security rules, since an autonomous AI will soon be able to make independent decisions on the use of weapons, both at the level of an individual piece of military equipment and at the level of an entire country's security system. An AI capable of self-awareness could potentially escape human control, which would pose a significant threat to humanity as a whole," Fisun said.
According to him, "it is absolutely necessary that the military use of artificial intelligence also remain under constant special control and that any possibility of AI autonomy be ruled out." "And although it seems obvious that the future of humanity is inextricably linked with AI, interaction with it must be configured right now, so that scenarios from science-fiction dystopias do not soon become a sad reality for humanity," Fisun stressed.
He also noted that the most successful AI applications rely heavily on big data, which in itself creates serious privacy problems.
"Big data can contain confidential information and therefore must be adequately protected. AI is developing much faster than the ethical principles and regulatory and security protocols meant to govern it, which fail to keep pace with this constant progress. Corporations promote the idea of self-regulation of AI safety and ethics, but such a proposal can hardly be taken seriously, given the antagonism between the profit opportunities that the corporate world prioritizes and the limits on extracting that profit which ethical norms and security rules inevitably impose," Fisun said.
Moreover, he noted, it may soon turn out that AI is left to regulate itself, with questions of its governance assigned to the AI itself, which would in effect make it autonomous. A significant problem may be that people will come to need AI far more than AI needs people. "In this regard, humanity may find itself a second-class species in its own world, under the control of AI," Fisun said.