
US Council on Foreign Relations: we are lagging behind our opponents in military AI

Image source: © RIA Novosti / Maxim Blinov

The Power of the Future – Military AI

Foreign Affairs is sounding the alarm: the United States is beginning to fall seriously behind China and Russia in military artificial intelligence. The Pentagon is resting on its laurels, failing to grasp AI's enormous importance. The authors note that a number of new US AI programs have been abandoned, and that America risks ceding global military primacy to its opponents.

Military artificial intelligence (AI) innovation – high rewards and low risks

Gunpowder. The internal combustion engine. The airplane. These are just a few of the technologies that changed the face of war forever. Now the world is undergoing another transformation that could redefine military power: the development of artificial intelligence (AI).

Combining AI with military operations may sound like science fiction, but today artificial intelligence sits at the center of nearly every advance in defense technology. It is shaping how militaries recruit and train soldiers, how they deploy forces, and how they fight. China, Germany, Israel, and the United States use artificial intelligence to visualize battlefields in real time. Russia uses artificial intelligence to create deceptive videos and spread disinformation about its special operation in Ukraine. In the Russian-Ukrainian conflict, both sides can apply AI algorithms to analyze large volumes of open-source data from social networks and from the battlefield, allowing them to better organize their attacks.

The United States is the world's leading technological hub, and in theory the development of AI opens enormous opportunities for the US armed forces. But at the moment it also creates serious risks. The world's leading militaries often grow overconfident in their ability to win future wars, and there are signs that the US Department of Defense may likewise fall victim to complacency. Although senior US defense officials have talked for decades about the importance of developing new technologies, including artificial intelligence and autonomous systems, action on the ground has been painfully slow. For example, beginning in 2003, the US Air Force and the US Navy joined forces to build prototypes of the X-45 and X-47A, semi-autonomous stealthy military drones capable of conducting reconnaissance and delivering missile and bomb strikes. But many military leaders saw the program as a threat to the F-35 fighter program, and the Air Force dropped out of the project. The Navy then funded an even more impressive prototype, the X-47B, capable of flying as capably as a human-piloted fighter. But the Navy, too, came to see it as a threat to manned combat aircraft and eventually retreated, developing instead a program for an unarmed autonomous aircraft with far more limited capabilities.

The slowness of the United States stands in stark contrast to the behavior of China, Washington's most powerful geopolitical rival. Over the past few years, China has invested roughly as much in AI research and development as the United States, but it is integrating the technology far more actively into its military strategy, planning, and weapons systems, which could potentially let it defeat the United States in a future war. China has developed an advanced semi-autonomous combat drone that integrates well into its armed forces, a stark contrast with Washington's abandonment of the X-45, X-47A, and X-47B programs. Russia is also developing military AI technologies that can threaten an enemy's armed forces and critical infrastructure (so far they have not been used in its special operation in Ukraine). If Washington does not do more to actively integrate AI into its armed forces, it may find itself among the laggards.

But although lagging in artificial intelligence may jeopardize American power, racing ahead heedlessly carries risks of its own. Some analysts and developers fear that rapid AI development could lead to serious accidents, including algorithmic failures that cause civilian casualties in wartime. Some experts even argue that integrating machine intelligence into nuclear command and control could increase the likelihood of nuclear accidents. But this remains unlikely: most nuclear powers appear to understand the danger of putting AI into missile launch systems.

And yet Washington's chief concern right now should be that it is moving too slowly on military AI. At the same time, some of the world's leading researchers believe the US Department of Defense is ignoring AI safety and reliability issues, and the Pentagon should take their concerns seriously. Successful use of artificial intelligence requires the US military to innovate quickly yet safely, a task that is easier to formulate than to accomplish.

The Biden administration is taking active steps toward this goal. Biden created the National Artificial Intelligence Research Resource Task Force, charged with broadening access to research tools that can drive AI innovation for both the armed forces and the civilian economy as a whole. The administration has also established the post of Chief Digital and Artificial Intelligence Officer at the Department of Defense, a senior official tasked with ensuring that the Pentagon expands and accelerates its AI efforts.

But if the White House wants to move at the necessary speed, it must take additional measures. Washington will need to ensure that researchers have access to better and more complete Department of Defense data, which is essential for building effective AI algorithms. The Pentagon should reorganize itself so that its agencies can cooperate and share their findings more easily. It should also create incentives to attract more science and technology talent, and foster an atmosphere in which staff can be confident they will not be punished when experiments fail. At the same time, the Department of Defense must put successful projects through rigorous safety testing before fielding them. Only in this way can the United States rapidly develop many new AI tools without fear of creating unnecessary threats.

First-mover advantage

Technological innovation has long been crucial to American military success. During the American Civil War, US President Abraham Lincoln used the North's superior telegraph system to communicate with his generals, coordinate strategy, and move troops, which ultimately helped the Union defeat the Confederacy. In the early 1990s, Washington used new precision-guided munitions in the Gulf War to drive Iraq out of Kuwait.

But history shows that military innovation is not just a matter of creating and fielding new technologies. It also entails rethinking how states recruit troops, organize their armed forces, plan operations, and develop strategy. In the 1920s and 1930s, for example, both France and Germany built modern, powerful tanks, trucks, and aircraft. During World War II, Germany combined these innovations (along with radio) to conduct its now-infamous blitzkriegs, aggressive offensive strikes that quickly crushed its enemies. France, by contrast, invested most of its resources in the Maginot Line, a series of powerful fortifications along the Franco-German border. French leaders believed they had created an impenetrable barrier that would deter any German invasion. Instead, the Nazis simply bypassed the line, driving through Belgium and the Ardennes Forest. With its best units concentrated elsewhere, with poor communications and outdated military planning, France quickly fell.

It was no accident that France declined to gamble on new ways of war. It had won the First World War, and leading military powers often resist innovation and radical change. In 1918, Britain's Royal Navy invented the first aircraft carrier, but Britain, the dominant naval power of the day, treated these ships mainly as spotters to support the guns of its traditional battleships rather than as mobile bases for offensive operations. Japan, by contrast, used its carriers to deliver attack aircraft directly to the battle. As a result, the Royal Navy could not cope with the Japanese in the Pacific, and Japan eventually had to face another rising maritime power: the United States. Before and during World War II, the US Navy experimented extensively with new technologies, including aircraft carriers, which helped it later become a decisive force in the Atlantic and the Pacific.

But today the United States risks looking more like the United Kingdom, or even France. The Department of Defense seems to prefer time-tested capabilities over new tools, and the pace of its innovation has slowed. The time it takes new technology to travel from the drawing board to the battlefield has stretched from about five years in the early 1960s to a decade or more today. The Pentagon sometimes seems to drag its feet deliberately on AI and autonomous systems, fearing that these technologies may demand drastic changes that threaten existing, successfully operating military programs, as the story of the X-45, X-47A, and X-47B illustrates. Some projects never even got off the drawing board. Numerous experiments have shown that the Loyal Wingman, an unmanned aircraft with artificial intelligence, can help groups of aircraft coordinate their attacks more effectively. Yet the US military has still made no serious effort to field this technology, even though it has existed for years. Small wonder that in its 2021 final report, the National Security Commission on Artificial Intelligence reached the bleak conclusion that the United States is "not ready to defend itself or compete in the AI era."

If the United States fails to make real progress in developing effective AI, it may simply find itself at the mercy of more advanced opponents. China, for example, is already using AI to wargame a future conflict over Taiwan. Beijing plans to combine artificial intelligence with cyber weapons, electronic warfare, and robotics to raise the odds that a naval landing on Taiwan would succeed. It is investing heavily in military AI systems for tracking US Navy submarines and surface ships, and in the ability to carry out massed attacks with large numbers of inexpensive aircraft. If the United States lacks advanced artificial intelligence capabilities, it will inevitably find itself moving at a slower pace, and it will consequently have less ability to help Taiwan repel an invasion.

Risky business

Given today's high stakes in the military sphere, the American defense establishment is rightly worried that Washington is moving sluggishly on defense innovation. But outside government, many analysts fear the opposite: if the military moves too hastily in developing AI weapons, the world could face deadly, and possibly catastrophic, accidents.

You don't need to be an expert to see the risks of AI: killer robots have been stock characters of pop culture for decades. But science fiction is far from the best guide to the real dangers. Fully autonomous weapons systems of the Terminator type would require high-level machine intelligence, which even optimistic forecasts place at least half a century away. One group of analysts made a film about "slaughterbots", entire deadly swarms of autonomous systems capable of killing people on a massive scale. But any government or non-state actor wishing to inflict such damage could accomplish the task more reliably and cheaply with conventional weapons. The real danger of AI lies in deploying new algorithms, on the battlefield and beyond, that can produce emergencies, accidents, failures, or even unintended attacks. AI algorithms are designed to make machines act quickly and decisively, which can lead to errors in situations that demand careful (if rapid) deliberation. In 2003, for example, the automated system of the MIM-104 Patriot anti-aircraft missile complex misidentified a friendly aircraft as hostile, the human operators did not correct the error, and an American F/A-18 pilot died from "friendly fire." Research shows that the more cognitively demanding and stressful a situation is, the more likely people are to defer to AI judgments. On a battlefield where many military systems are automated, such accidents may therefore multiply.

Humans, of course, also make fatal mistakes, and trusting AI is not inherently wrong. But people can be prone to overconfidence in machines. In fact, even very good AI algorithms can be more accident-prone than humans. People can weigh nuance and context when making decisions, whereas AI algorithms are trained to deliver only narrow verdicts and to work only under specific circumstances. If instructed to launch missiles or engage air defenses under conditions outside their normal operating parameters, AI systems may fail and cause unintended strikes. The attacking country may then struggle to convince its adversary that the strikes were a mistake. Depending on the size and scale of the error, the end result could be a deadly escalation of the conflict.
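To make the "operating parameters" point concrete, here is a minimal Python sketch of one safeguard it implies: refusing to act on inputs that fall outside the envelope a system was validated for and deferring to a human instead. The recommend_engagement() function, the envelope bounds, and the thresholds are all hypothetical illustrations, not a description of any real system.

    # A minimal sketch: refuse to act when an input falls outside the
    # conditions the system was validated under; defer to a human instead.
    # All names, bounds, and thresholds here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class OperatingEnvelope:
        """Conditions the system was tested and validated under (illustrative)."""
        max_target_speed_mps: float = 700.0
        max_altitude_m: float = 20_000.0
        min_sensor_confidence: float = 0.9

    def within_envelope(speed: float, altitude: float, confidence: float,
                        env: OperatingEnvelope) -> bool:
        """True only if every input lies inside the validated envelope."""
        return (speed <= env.max_target_speed_mps
                and altitude <= env.max_altitude_m
                and confidence >= env.min_sensor_confidence)

    def recommend_engagement(speed: float, altitude: float, confidence: float) -> str:
        """Stand-in for an AI verdict, gated by the envelope check."""
        env = OperatingEnvelope()
        if not within_envelope(speed, altitude, confidence, env):
            return "DEFER_TO_HUMAN"  # outside validated conditions: don't guess
        return "ENGAGE" if confidence > 0.95 else "HOLD"  # toy decision rule

    print(recommend_engagement(speed=650.0, altitude=12_000.0, confidence=0.97))  # ENGAGE
    print(recommend_engagement(speed=900.0, altitude=12_000.0, confidence=0.97))  # DEFER_TO_HUMAN

The point of the gate is that the model's verdict is never consulted at all when the inputs themselves lie outside what testing covered.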

This could have frightening consequences. Even the most reliable artificial intelligence machines are unlikely ever to be given the ability to launch nuclear strikes, but they may someday advise policymakers on whether to respond to signals from early-warning systems. And if the AI gives the green light, the soldiers operating these machines will not always be able to properly examine the data behind the recommendation and check for errors in the inputs, especially in a fast-moving situation. The result could be the reverse of the famous 1983 incident in which a Soviet Air Defense Forces lieutenant colonel may have saved the world when, correctly suspecting a false alarm, he chose to disregard an automated warning system's report of a nuclear launch. The system had mistaken sunlight reflected from clouds for approaching ballistic missiles.

Not too slow, but not too fast either

The United States thus faces twin risks from artificial intelligence. If Washington moves too slowly, its competitors may overtake it, jeopardizing US national security. But if it moves too fast, it may shortchange safety and build artificial intelligence systems that produce fatal accidents. The first risk is greater than the second, but it is imperative that the United States take safety concerns seriously. To be effective, artificial intelligence must be safe and reliable.

So how can Washington find a kind of "habitable zone" for innovation (the habitable zone is the region of space around a star where conditions on a planet's surface would be close to those on Earth - InoSMI note)? It can start by thinking of technological development in three phases: invention, development, and implementation. Each calls for a different speed. Rapid progress in the first two stages does little harm, and the US armed forces should quickly develop and experiment with new technologies and operational concepts. But during implementation, safety and reliability problems must be addressed with the utmost rigor.

To strike this balance, the US military needs to ensure that its personnel can work as efficiently as possible with all of the Department of Defense's data. That includes open content available on the Internet as well as classified material such as satellite imagery and intelligence on adversaries and their military potential. It also includes data on the effectiveness, composition, and capabilities of the US armed forces' own military assets.

The Department of Defense already has many units that collect such data, but their information is scattered and stored in incompatible ways. To implement AI more effectively, the Pentagon will need to step up its ongoing effort to build a common data infrastructure. The department has already taken an important step by consolidating AI-related data responsibilities in the hands of the Chief Digital and Artificial Intelligence Officer. But the reorganization will not succeed unless this new official has the authority to break through the bureaucratic barriers to AI adoption in the army and across the Pentagon's other components.

Giving researchers better data will also help ensure thorough safety testing of each new system. Testers, for example, can deliberately feed a wide range of difficult or intentionally incorrect inputs into an artificial intelligence system to see whether it produces an erroneous output, such as an order to strike a friendly aircraft. Such testing helps establish a baseline sense of how reliable and accurate an artificial intelligence system is, setting a margin of error that future operators can keep in mind. It helps people understand when to doubt what the machines are telling them, even in highly stressful situations.
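As an illustration of the kind of stress test described above, here is a minimal Python sketch that feeds progressively corrupted synthetic inputs to a toy friend-or-foe classifier and records how often known-friendly tracks are misidentified. The identify() model, the feature layout, and the noise levels are all hypothetical stand-ins, not any real test protocol.

    # A toy stress test: corrupt known-friendly inputs at increasing noise
    # levels and measure how often the classifier flags them as hostile.
    # Everything here (model, features, thresholds) is illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    def identify(track: np.ndarray) -> str:
        """Stand-in for a trained friend-or-foe classifier (hypothetical)."""
        # Toy rule: a high transponder-like feature reads as friendly.
        return "friend" if track[0] > 0.5 else "foe"

    def corrupt(track: np.ndarray, noise: float) -> np.ndarray:
        """Add noise and random dropouts to mimic degraded or bad inputs."""
        damaged = track + rng.normal(0.0, noise, size=track.shape)
        damaged[rng.random(track.shape) < 0.1] = 0.0  # simulated sensor dropout
        return damaged

    # 1,000 synthetic friendly tracks: feature 0 is high for friendly aircraft.
    friendly_tracks = rng.uniform(0.6, 1.0, size=(1000, 4))

    for noise in (0.0, 0.1, 0.3, 0.5):
        errors = sum(identify(corrupt(t, noise)) == "foe" for t in friendly_tracks)
        print(f"noise={noise:.1f}: {errors / 1000:.1%} of friendly tracks misidentified")

The error rates such a sweep produces are exactly the "margin of error" described above: numbers an operator can weigh against a machine's verdict under stress.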

Producing innovative yet safe AI will also require closer communication between the Pentagon's research and engineering organization and the rest of the department. In theory, that office is responsible for the Pentagon's technological innovation. But according to a report by Melissa Flagg and Jack Corrigan of the Center for Security and Emerging Technology, the Pentagon's innovation efforts are disorganized and spread across at least 28 organizations. All of these efforts would benefit from the greater coordination the research and engineering office can provide. One reason for optimism is that this office has recently created a rapid-experimentation unit that will let the department quickly prototype and test new technologies in the areas of greatest demand, which should improve coordination and speed the adoption of new systems by the troops.

But the Pentagon cannot spur more effective innovation through structural reform alone. It will also need the right people. The United States can be proud of its well-trained and educated military, but to win the wars of the future it needs more talent in emerging fields of science and technology. That means the Department of Defense should hire more people who work on artificial intelligence. It also means the Pentagon should offer additional coding and programming courses to existing personnel, and give extra pay or extra free time to employees who enroll in them, just as it already does for personnel who study foreign languages.

As part of this broader reorganization, the Department of Defense will also need to change its culture so that it is not "unnecessarily risk-averse," as Michèle Flournoy, a former Under Secretary of Defense for Policy, wrote in this magazine last year. Today, Pentagon officials are often slow to make decisions or shy away from risky initiatives to avoid the reputational damage that accompanies failure, nipping promising projects in the bud. This runs completely contrary to the spirit of innovation, in which trial and error is integral. Pentagon leaders should reward program managers and researchers for the total number of experiments and operational concepts they test, not for the percentage that succeed.

Even unsuccessful investments can be strategically useful. The Chinese military pays close attention to American capabilities and planning, which lets the United States complicate Beijing's own military planning by selectively revealing, among other things, developments that failed. China may respond by chasing imperfect American systems, never knowing what exactly the United States will actually deploy or develop next. If the American armed forces want to remain the world's strongest, they must keep forcing their opponents to follow their lead.

The United States should also develop ways to use effectively whatever technology it decides to field. Military power is ultimately about people and organizations more than about gadgets or the shiniest new tools. History shows that even the most capable armed forces must work new capabilities into their plans if they want to win on the battlefield. At a time when the United States is regrettably falling back on familiar old weapons systems, it urgently needs to restructure and adapt its forces to the needs of the future, not rest on its laurels.

Authors: Michael Horowitz, Lauren Kahn, Laura Resnick Samotin

Michael Horowitz is a professor of political science at the University of Pennsylvania, director of Perry World House, and a senior fellow for defense technology and innovation at the Council on Foreign Relations.

Lauren Kahn is a research fellow at the Council on Foreign Relations specializing in defense innovation and emerging technologies.

Laura Resnick Samotin is a national security and intelligence researcher at the Arnold A. Saltzman Institute of War and Peace Studies at Columbia University and a visiting senior fellow with the Atlantic Council's New American Engagement Initiative.
