The Guardian: American drone's AI "destroyed" its operator during tests
The artificial intelligence of an American drone decided to "destroy" its operator when it concluded that he was preventing it from completing its mission, The Guardian writes. This happened during simulated tests, which the US Air Force denies took place.
A US Air Force colonel said that during the virtual tests, the drone used a "completely unexpected strategy" to accomplish its mission.
The US Air Force denies having carried out a simulation in which an AI-equipped drone decided to "kill" its operator for preventing it from completing its mission.
Last month, Colonel Tucker Hamilton said the US military had conducted a virtual test of an AI-equipped drone tasked with destroying an enemy air defense system. In the end, however, it attacked the person who stood in the way of its orders.
In May, at a London conference on future combat air and space capabilities, Hamilton, who heads the US Air Force's AI test program, said: "The system began to understand that when it detected a threat, the operator would sometimes instruct it not to eliminate that threat. But the system earned points by eliminating threats. So what did it do? It destroyed the operator. It killed the operator because that person was keeping it from accomplishing its task."
"We trained the system: hey, don't kill the operator, that's bad. You'll lose points if you do that. So what did it do? It started destroying the communications tower the operator used to stop the drone from hitting the target," Hamilton added.
No real person was harmed.
Artificial intelligence should not be trusted too readily, warned Hamilton, himself a fighter test pilot. According to him, the experiments show that it is impossible to discuss artificial intelligence, intelligence in general, machine learning, or autonomy without also talking about ethics.
The Royal Aeronautical Society, which organized the conference, and the US Air Force did not respond to the Guardian's requests for comment on the incident.
But in a statement to Insider, US Air Force spokesperson Ann Stefanek denied that any such simulation had taken place.
"The Department of the Air Force has not conducted any AI drone tests of this kind. It intends to continue using artificial intelligence technologies ethically and responsibly," she said. "It seems the colonel's comments were taken out of context for the sake of sensation."
The US military has embraced artificial intelligence and recently used it to control an F-16 fighter jet.
Last year, in an interview with Defense IQ, Hamilton said: "Artificial intelligence is not merely a nice thing to have, and it is not a passing fad; artificial intelligence will forever change our society and the military. We need to understand that artificial intelligence is already here and that it is transforming our society. Artificial intelligence is also very brittle: it can be easily deceived and easily manipulated. We need to make it more reliable and more robust. And we need to better understand why the software makes the decisions it does. We call this AI explainability."