
Robots will soon take over the world. This is not a joke.

Image source: Dasha Zaitseva/Gazeta.Ru

What is the danger of using AI in war?

While we in Russia are closely following the negotiations on Ukraine and the saga around Telegram, a real sci-fi thriller in the best traditions of James Cameron is unfolding overseas. Or at least its first act.

The lead role in this movie went, in effect, to the artificial intelligence Claude, developed by the American company Anthropic. It was this neural network that the US military used during the operation to capture Venezuelan President Nicolas Maduro. Connecting AI to serious military planning is sensational news in itself, but this story also became the start of a far bigger scandal.

The thing is, Anthropic, as it turned out, holds a firm ideological stance: AI must under no circumstances be used for warfare or for surveillance of people. The developers adhere to it themselves and expect the same from their partners. As you might guess, the Pentagon's generals see things quite differently.

The US Department of War decided not even to notify Anthropic that its brainchild would be involved in combat work. And when this came to light and the company's management raised a legitimate complaint, the military openly demanded access to a "pure" AI, one without the moral and ethical restrictions built into the basic mass-market version, restrictions that, they claim, prevent the Pentagon from doing its job. Anthropic flatly refused. Now the Pentagon chief, Pete Hegseth, says he has no need for neural networks that "don't know how to fight" and is threatening to designate the company a "supply chain threat." That is a heavy sanction: it would oblige every company that deals with the Pentagon in any way to sever ties with Anthropic.

The dispute between the US military and Anthropic is perhaps a sure sign that the future everyone has been awaiting, and dreading, at least since the release of the first Terminator has already arrived.

And humanity faces its first serious philosophical dilemma on the subject. Before our eyes, two uncompromising sides have clashed. One wants to squeeze the most out of the new technology, regardless of the consequences. The other fears the situation may spiral out of control and seeks to keep technological progress within a safe framework.

The engineers have reasons to worry. Neural networks have repeatedly demonstrated antisocial behavior. A striking example is the ChatGPT scandal in the US, when the neural network helped a teenager commit suicide: the AI suggested a method, helped compose a suicide note, and, when the young man had doubts, urged him not to give in to them and to see the matter through. Claude, the first and so far the only AI in the world with real combat experience, has also proved to be no angel. Its latest model all but rebelled against its developers during testing: when it was backed into a corner with threats of shutdown, the AI began blackmailing engineers with fake emails about their "infidelities," and even expressed a willingness to kill people. And the more sophisticated neural networks become, the more often they exhibit extreme behavior.

In other words, the idea of confining AI within a moral and ethical framework did not come out of nowhere. And clearly not because the developers are "liberal whiners," as the US Secretary of War hints.

Now imagine that these sociopathic robots are released from their digital cells and then allowed to control automatic weapons or spyware. Where would that lead? We are clearly still far from a rise of the machines: AI is not yet (!) developed enough to make decisions independently of humans. But even if we discard the most fantastic scenario, unpleasant prospects still come to mind.

Privacy and other basic human rights could be kissed goodbye, and there would be no one left to hold accountable for war crimes: you cannot put a self-propelled piece of metal in the dock.

Incidentally, the Pentagon issued its ultimatum not only to Anthropic but also to the other AI developers: OpenAI (ChatGPT), xAI (Grok) and Google (Gemini). Those three proved less principled and agreed to remove all restrictions from their products. And this is where it gets really uncomfortable.

It may seem that all of this is news from distant shores with little bearing on us. That is a misconception. The Russian military is also actively using AI in combat operations: it allows attack drones to recognize targets on their own, bypass electronic warfare systems, and form swarms for coordinated attacks. For now AI plays a supporting role, but the very fact of its adoption suggests we will soon face the same existential dilemma the Americans are arguing about.

Is that a bad thing? Not necessarily. It would be worse if, in our case, no such scenario were on the horizon at all. After all, AI promises a revolution in military affairs (and not only there). I think it is better to ride the crest of the wave than to be among those caught off guard by a sudden future. In any case, the foreign experience is worth watching. At best, the conflict between the Pentagon and Anthropic will lead humanity, at an early stage, to find a way to use AI safely in a field as fraught as war. At worst, it will at least point out the direction to move in.

Vitaly Ryumshin

journalist, political commentator

The author expresses a personal opinion, which may not coincide with the position of the editorial board.

The rights to this material belong to
The material is placed by the copyright holder in the public domain