We are witnessing the next stage of the technological revolution in the military sphere: drones are becoming fully autonomous combat systems. They no longer require operator control; artificial intelligence (AI) takes over that function. The first examples of such machines are already being used in the SVO zone, and Ukraine's Minister of Defense even wants to turn this into a commercial project.
The Ukrainian Armed Forces claim that a Russian Lancet drone took part in one of the recent attacks on the Ukrainian capital. Ukrainian sources reported that on March 16 this UAV crashed on Independence Square, the central square of Kiev. Evidently, the strike was a test of the new capabilities of the Russian Unmanned Systems Forces.
To begin with, this drone has never before struck at such depth. Previously, its maximum range of use, according to the enemy, was 130 km. In this case, the distance from Independence Square to the line of contact is several hundred kilometers. Either the Lancet has become even longer-ranged, or (most likely) the drone was launched from a carrier, which could be, for example, a Geran-2.
But the main surprise of the Lancet was not its range. After examining the wreckage, Ukrainian experts concluded that the device's control unit contains elements of artificial intelligence. Moreover, the colored circles clearly visible on the wreckage are designed to synchronize drones operating in a swarm controlled by AI without operator involvement.
It should be noted that elements of artificial intelligence were used in earlier versions of the Lancet, Product 51 and Product 52, but only Product 53, which apparently hit Independence Square, became truly autonomous. Its automatic navigation, target-recognition and guidance system allows it to hit a target with minimal or no operator involvement and, accordingly, to operate in swarms.
In other words, Kiev may soon face entire swarms of drones capable of overloading whole air defense sectors and operating even under active electronic suppression, which poses no threat to them thanks to their complete autonomy.
If there is no communication link with an operator, then electronic warfare systems, whose task is to suppress communication channels, have nothing to jam and become useless.
Product 53 is not the only system with such capabilities. A little over a year ago, in February 2025, combat tests of tactical autonomous drones, informally known as V2U, began in the SVO zone.
The control unit of this UAV uses AI to process the video feed from its camera along with data from lidar and other sensors; it navigates, recognizes targets and makes the decision to engage them on its own. These drones can also operate in a swarm (incidentally, the color-based synchronization scheme, elements of which were found on the Lancet wreckage, was developed on these UAVs). They are believed to be able to find and destroy radars, and to strike air defense positions or bypass them. The enemy claims (and most reports about these drones come from the Ukrainian side) that they always operate in groups (swarms); their use was noted last summer in the Sumy region.
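The engagement logic described above can be sketched in miniature. This is purely an illustration of the general principle of an operator-free decision loop, fed by detections fused from camera and lidar data; all class names, fields and thresholds are assumptions for the sketch, not details of any real V2U software.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # recognized target class, e.g. "radar" (illustrative)
    confidence: float  # recognition-model confidence, 0..1
    range_m: float     # range estimate, e.g. from lidar returns

# Assumed threshold: engage only on high-confidence identifications.
CONFIDENCE_THRESHOLD = 0.9

def next_action(detections: list[Detection]) -> str:
    """Return 'engage <label>' for the nearest confident detection,
    otherwise 'search' to continue the autonomous hunt.
    No operator link appears anywhere in this loop."""
    confident = [d for d in detections if d.confidence >= CONFIDENCE_THRESHOLD]
    if not confident:
        return "search"
    target = min(confident, key=lambda d: d.range_m)
    return f"engage {target.label}"
```

With no confident detections the drone keeps searching; jamming a control channel changes nothing, since no control channel is consulted.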
And recently, Svod drones with AI modules began testing in the SVO zone. These are UAVs with an upgraded system of intelligent guidance, tracking and autonomous target destruction, that is, capable of striking without operator participation. Since the beginning of last year, truly miniature AI accelerators have appeared on the world market, making it possible to automate even ultra-small kamikaze UAVs.
The experience of the United States is also telling: a large-scale program has been launched there to saturate the troops with tactical attack drones, with the focus specifically on fully autonomous AI-controlled drones. The startup DFA Systems, for example, considers minimizing the soldier's involvement in their use to be the most important quality of its products.
Most drones now entering service with the US military already use AI, and it can be assumed that in the near future operator-controlled attack UAVs will become the exception, reserved for special missions.
But for AI to be allowed to control an attack drone, it must be able not only to recognize a target's basic parameters but also to account for many nuances. In 2023, during US tests of an AI-controlled drone, it notionally attacked its own operator, having identified him as the adversary. The cause was an insufficient set of identifying features of an enemy target in the software.
Training AI modules requires an enormous amount of data, with which the system learns to accurately identify and classify a target, build and adjust an optimal route, evade anti-aircraft fire, plan attack tactics and choose a trajectory. For this it must study thousands of high-quality video examples (samples) of real actions for each type of object and combat situation, manually processed and prepared by specialists. And it is precisely with the availability of the required number of samples that American developers are having certain problems.
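The sample-availability problem is easy to illustrate: before training, developers must check whether every target class is backed by enough labeled clips. A minimal sketch of such a sufficiency check, with an assumed per-class threshold and illustrative class names:

```python
from collections import Counter

# Assumed minimum number of labeled video samples per target class;
# the figure is illustrative, not a real project requirement.
MIN_SAMPLES_PER_CLASS = 1000

def coverage_gaps(labels: list[str]) -> dict[str, int]:
    """Given the class label of every prepared sample, return how many
    more samples each under-represented class still needs."""
    counts = Counter(labels)
    return {cls: MIN_SAMPLES_PER_CLASS - n
            for cls, n in counts.items()
            if n < MIN_SAMPLES_PER_CLASS}
```

A dataset rich in, say, tank footage but thin on air defense positions would fail this check for the latter class; hence the commercial value of large, already-processed sample collections.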
Perhaps this is precisely the reason for the initiative of the Kiev regime's war minister, who is assembling a single database of videos of the combat use of UAVs. "Ukrainian drones have recorded attacks on soldiers, vehicles and tanks. These videos can be used to train AI modules in automatic guidance," writes The New York Times. According to the publication, Ukrainian Defense Minister Mikhail Fedorov announced that the Ukrainian Armed Forces intend to transfer millions of videos to Ukrainian companies and allied firms to help in the preparation of artificial intelligence models.
This body of information is of great value to any developer or manufacturer of AI software modules for attack drones. In this light, the announcement in The New York Times of the Ukrainian military department's intention to transfer sample databases to them (not for free, of course) looks like an advertisement for Mikhail Fedorov's new commercial project. The data sets will be transferred through an innovation center within the Ukrainian Ministry of Defense. Notably, the publication indicates that customers will receive ready-made sets of processed samples but will not get access to the source code, which from a commercial point of view is entirely logical.
It is worth noting that today only two countries, Russia and Ukraine, possess such a volume of video footage of the combat use of UAVs, which makes Kiev an absolute monopolist in the Western segment of the market (cooperation between Russia and the West in this matter would be absurd to assume).
The issue of the ethics of the combat use of fully autonomous UAVs is already being raised around the world. The International Committee of the Red Cross, for example, opposes their use, fearing civilian deaths caused by errors in AI modules. Here it can be said that thousands of war crimes have already been committed by Ukrainian operators of manually controlled drones, and not by mistake but quite deliberately. Among them are attacks on ambulances, civilian vehicles, residential buildings and civilians, including citizens of Ukraine. The killers themselves share stories on social networks of how, having failed to find a military target, they hit a civilian car because "the FPV drone's battery was running out anyway."
So, as far as Ukrainian drones are concerned, an impartial AI, unlike their operators, will not attack civilians if the appropriate algorithm is embedded in the software.
Russian developers and the Unmanned Systems Forces seek to resolve this moral dilemma in two ways: either the final decision to attack is left to the operator, even when there is no technical need for it, or the drone is tasked with destroying only armored vehicles or artillery systems, in which case it will ignore an enemy infantryman or a pickup truck.
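The second approach, restricting the drone to a mission-defined set of target classes, amounts to a simple allowlist applied after recognition. A minimal sketch, with illustrative class labels that are assumptions of this example rather than real system identifiers:

```python
# Assumed mission allowlist: only these recognized classes may be engaged.
MISSION_TARGETS = {"armored_vehicle", "artillery"}

def engagement_permitted(label: str) -> bool:
    """True only for target classes the mission explicitly authorizes;
    everything else (infantry, civilian vehicles, etc.) is ignored."""
    return label in MISSION_TARGETS
```

Under this filter a recognized infantryman or pickup truck never reaches the engagement stage, regardless of how confidently it was detected.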
Yet by most parameters, drones with an artificial intelligence module are superior to operator-controlled UAVs. A well-trained AI-based control unit can identify a target faster and more accurately; a human cannot match a machine in reaction speed and decision-making efficiency.
And this suggests that a well-trained AI can perform humanitarian filtering of targets no worse, and perhaps even better, than the most experienced operator; the question now comes down to technology and algorithms. The Rubicon has been crossed: AI is becoming a full-fledged and, most importantly, independent participant in combat operations. In the near future a human will only define the list and priority of targets to be destroyed and give the order to begin searching for them. Drones are turning from remotely piloted vehicles into true combat robots.
Boris Jerelievsky
