Military expert Yuri Knutov described how artificial intelligence is used in air defense systems, aviation and armored units. Speaking on Sputnik radio, he commented on the remarks of Elon Musk, who called AI the main threat to humanity. The speed of a ballistic missile is such that a human is unable to make a decision in time and activate a system designed to intercept it, said Yuri Knutov, military expert and director of the Museum of the Air Defense Forces.
"Missile defense systems are all based on artificial intelligence (AI), because the speed of a ballistic missile is gigantic, and the speed of an anti-missile is also gigantic. We just started thinking, and the rocket has already left. Therefore, only AI can make the appropriate decision. Both our, and French, and American anti-aircraft missile systems are built with this calculation. The same ZRPC "Shell" – there the operator does not have time to do anything, the AI does everything: it determines the target and opens fire. The same system is in the Mamba complexes (Samp/T-Mamba air defense systems, – ed.), which France and Italy plan to deliver to Ukraine," he said.
As for aviation, even a pilot of the highest class cannot compete with artificial intelligence, the military expert is convinced.
"There is only one pilot in our Su-57, but actually there should be two. AI is being used quite successfully instead of the second pilot. But there is control from the person who is in the cockpit. The Americans have the same story. A year ago, interesting tests were conducted: they took a US Marine Corps pilot and made a certain program for him, made a simulator, and next to it – the same simulator, only for AI. In the first battle, the pilot, a professional of the highest level, lasted a few seconds. When we had ten fights with AI, about in the tenth battle a person was able to hold out for twenty seconds," said Yuri Knutov.
However, AI can make mistakes that lead to dangerous consequences, he continued.
"There is such a thing as machine vision. For example, photos of a tank in winter, summer, autumn, spring, outdoors, in the city, and so on are put into the car. And when military actions take place, the AI, based on the information that it has in memory, determines that it is a T–72 or Abrams. If this is an American AI, it should open fire on the T-72, but it should not open fire on Abrams. But, of course, an error may occur. Americans have made such mistakes several times. There were situations when tests were carried out at the landfills, everything went well, and then the AI still failed, and the equipment, for example, a machine gun mounted on an armored personnel carrier, began to open fire on its own," Yuri Knutov added.
American billionaire Elon Musk, speaking via video link at the World Government Summit in Dubai on Wednesday, called AI the main threat to humanity.
According to Yuri Knutov, the capacity for self-learning inherent in AI really can make supercomputers dangerous to people.
"When a fairly high–level AI is created, in the process of self-learning, it may decide that it is smarter, it is more advanced, and it has the right to be on earth, and people are superfluous creatures. The probability of this is very high. There are some rules of Isaac Asimov (an American science fiction writer who formulated the "three laws of robotics" - ed.), according to which AI should never act against a person. But, firstly, this is just a statement. And secondly, in the process of self-learning AI, certain errors may accumulate, which may be systemic in nature. As a result, we will assume that the AI operates according to one scenario, and it may accumulate software errors that will lead to the fact that it will look at people with hostility or decide that it is God," the military expert concluded.