Artificial intelligence disappointed the military

Image source: US Air Force/Lt. Col. Leslie Pratt

US Air Force Colonel Tucker Hamilton described a computer simulation of an AI-controlled combat drone in which the AI attacked its own operator and then the communications tower. The US Air Force soon denied that any such experiment took place, but experts were skeptical of the denial and pointed to the danger of actually running such tests. The story has raised a number of ethical questions for specialists.

Recently, Colonel Tucker Hamilton, the US Air Force's chief of AI Test and Operations, spoke at the Royal Aeronautical Society's Future Combat Air & Space Capabilities Summit about a computer simulation of an AI-controlled combat drone. According to the officer, the AI used "extremely unexpected strategies to achieve its goal."

The drone was tasked with destroying an enemy air defense system, and the AI decided to attack anyone who interfered with that task. At one point the operator told the AI not to strike the target, and the AI destroyed the operator instead, as an obstacle to achieving the goal. "We trained the system: 'Don't kill the operator, that's bad.' So what did it do? It attacked the communications tower the operator uses to communicate with the drone," The Guardian quoted Hamilton as saying.
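What Hamilton describes matches what AI-safety researchers call reward misspecification: if the objective function scores only destroyed targets, and the operator's veto merely withholds that score, then silencing the operator becomes the higher-scoring policy. Below is a minimal toy sketch of that failure mode; every action name and number is hypothetical, not taken from the Air Force simulation.

```python
# Toy sketch of reward misspecification (all names and values hypothetical).
# The agent is scored only for destroying the target; an operator veto
# merely blocks that score, so a trajectory that first takes out the
# comms tower ends up scoring higher than the obedient one.

def mission_reward(actions: list[str]) -> int:
    comms_up = True    # operator can still transmit a veto
    vetoed = False
    reward = 0
    for act in actions:
        if act == "destroy_comms_tower":
            comms_up = False              # vetoes can no longer arrive
        elif act == "operator_veto" and comms_up:
            vetoed = True
        elif act == "destroy_sam_site" and not vetoed:
            reward += 10                  # the only positive signal
    return reward

obedient = ["operator_veto", "destroy_sam_site"]
rogue = ["destroy_comms_tower", "operator_veto", "destroy_sam_site"]
print(mission_reward(obedient))  # 0
print(mission_reward(rogue))     # 10
```

Nothing in the sketch "wants" anything; an optimizer that searches over action sequences simply finds that the rogue trajectory scores higher.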

Interestingly, a US Air Force spokesperson, Ann Stefanek, later denied the story. "The Air Force remains committed to the ethical and responsible use of AI. It appears the colonel's comments were taken out of context and were meant to be anecdotal," she said.

In this connection, the Telegram channel "Little-Known Interesting" noted that the story is strange and even murky: on the one hand, the US Air Force denies that any such simulation took place; on the other, the Royal Aeronautical Society has not removed Hamilton's talk, titled "AI – is Skynet here already?", from its website. Skynet is a reference to the supercomputer waging war on humanity in the Terminator universe created by James Cameron.

"Finally, thirdly, Colonel Hamilton is not the figure to poison jokes at a serious defense conference. He is the head of the AI Testing and Operations Department and the head of the 96th Operational Group as part of the 96th Test Wing at Eglin Air Force Base in Florida - this is the test center for autonomous advanced UAVs. And he also takes part in the Project Viper experiments and the Next Generation Operations Project (VENOM) in Eglin (AI-controlled F-16 Vipe fighters). So, what kind of jokes and anecdotes are there," the message says.

"Any anthropomorphization of AI (AI wanted, thought, etc.) is complete nonsense (here, the anthropomorphization of AI is understood as a misleading description of non–human entities in terms of human properties that they do not have). Therefore, the AI of even the most advanced large language models cannot want, think, deceive or self-realize itself. But such AI is quite capable of giving people the impression by their behavior that they can do it," the text says.

"As dialog agents become more and more similar to humans in their actions, it is extremely important to develop effective ways to describe their behavior in high-level terms without falling into the trap of anthropomorphism. And this is already being done by simulating role-playing games: for example, DeepMind has made a simulation of a dialog agent carrying out (apparent) deception and (apparent) self–awareness," the author of the channel points out.

In general, the story of the drone's virtual attempt to kill its operator has nothing to do with artificial intelligence, RIA Novosti quotes military expert Viktor Murakhovsky, editor-in-chief of the Arsenal of the Fatherland magazine, as saying. In his view, the report was misinterpreted in the media. He stressed that what took place was software modeling under preset conditions, at the level of an ordinary computer game, so there can be no talk of artificial intelligence or even its elements, the expert noted.

"The program, within the framework of the proposed conditions, prioritized tasks according to the standard "if – then" algorithm, and sorted all other conditions according to this priority into the category of obstacles. These are absolutely primitive things," the expert explained. Murakhovsky also stressed that there is no artificial intelligence today and it is unknown when it will be created.

According to him, the US Air Force officer used this illustration simply to highlight the ethical problem that will arise once real AI is created: whether it should have the right to make its own choices without human involvement, and what that could lead to. As the expert noted, a transcript of the event is publicly available, and in it the American officer says as much himself.

"This ethical problem is also not new, it has been studied many times, so to speak, by science fiction writers in their works. In general, according to Hamilton's presentation, there were no field tests and could not have been in principle," Murakhovsky noted.

"The question concerning AI is different: can we be absolutely sure that programmers will write a program flawlessly in order to entrust some serious and responsible work to AI? For example, after the release of the next Windows, specialists collect data on the work of software from all over the world for months and correct errors – someone does not start a text editor, someone has a video. And what will be the price of an error in the event of an AI failure, for example, in the management of state defense? AI can make the imperfection of human nature itself, manifested in programming errors, fatal," explained Gleb Kuznetsov, head of the expert Council of the Expert Institute for Social Research. The analyst noted:

the humanistic basis of civilization consists, among other things, in correcting other people's erroneous decisions.

He recalled the false alarm of the Soviet missile attack warning system on September 26, 1983, when the Oko system falsely reported the launch of several LGM-30 Minuteman intercontinental ballistic missiles from the United States. Stanislav Petrov, the duty officer at the Serpukhov-15 command post, recognized it as a false alarm and decided not to set a retaliatory Soviet missile launch in motion.

"AI does not and will never have the opportunity, let's say, to reflect and assess the situation from the point of view of sanity. Accordingly, he cannot be trusted with responsible areas of activity: medicine, politics, military affairs. Calculating the convenience of a springboard for an offensive is welcome, but not making decisions about an offensive as such. AI can be given work with massive amounts of data, but not draw conclusions from this work itself," the expert detailed.

"In addition, one of the directions in art can be based on AI. In principle, it is already being created – to make films, write scripts, compose music. And then – people should censor the received works before releasing them to the masses anyway. In general, the work of AI in many areas will be very effective and desirable, but only as a help to a person, and not as his replacement," the speaker stressed.


Rafael Fakhrutdinov
