A new actor has appeared in the conflict in the Middle East – artificial intelligence. The US military used it to identify targets in Iran, but according to media reports, it was errors in the machine's judgment that led to strikes on civilian sites, including a school with children in Minab. Will the United States try to pin its negligence on the algorithms?
During the fighting in Iran, the Pentagon has been actively using artificial intelligence (AI) systems to identify targets. In particular, the United States relied on Maven Smart System, a digital battle management platform developed by Palantir Technologies Inc. and built on top of the Claude model. The Washington Post notes that the software has helped the Americans target about a thousand sites.
The program not only determines target coordinates but also ranks the resulting points in order of priority. After a strike, the system analyzes the results achieved. According to an anonymous source quoted by the newspaper, AI has allowed the United States to significantly increase the tempo of operations.
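The Washington Post account does not say how that prioritization works internally. Purely as an illustration – the data fields, scoring formula, and names below are assumptions made for this sketch, not anything disclosed about Maven Smart System – the cycle the article describes (identify, rank, strike, assess) could look something like this:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch only: nothing here reflects the real Maven Smart System,
# whose internals are not public. All fields and weights are invented.

@dataclass
class Candidate:
    name: str
    lat: float
    lon: float
    confidence: float       # model's confidence in the identification, 0..1
    military_value: float   # assessed value of the target, 0..1
    assessed_effect: Optional[float] = None  # filled in after the strike

def prioritize(candidates: list[Candidate]) -> list[Candidate]:
    """Rank candidates by a naive score: identification confidence times value."""
    return sorted(candidates, key=lambda c: c.confidence * c.military_value,
                  reverse=True)

def record_strike_result(target: Candidate, observed_effect: float) -> None:
    """Post-strike step: store the assessed result for later analysis."""
    target.assessed_effect = observed_effect

if __name__ == "__main__":
    queue = prioritize([
        Candidate("radar site", 27.1, 57.1, confidence=0.92, military_value=0.8),
        Candidate("warehouse", 27.2, 57.0, confidence=0.55, military_value=0.6),
    ])
    for c in queue:
        print(f"{c.name}: score {c.confidence * c.military_value:.2f}")
```

Even in this toy form, the pipeline makes the later critics' point visible: the ranking step and the strike step are separated by very little.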
Timothy Hawkins, a spokesman for US Central Command, agrees. He noted that the program reaches decisions faster than a human, but it does not replace people and does not begin working on targets on its own, Bloomberg writes. The publication adds that the strike rate achieved with the neural network was twice that of the Pentagon during Operation Shock and Awe in Iraq in 2003.
Not all Americans are impressed by the military software's success, however. The article stresses that a number of human rights advocates have said the technology blurs the line between recommending strikes and executing them. And their worries do not appear to be unfounded.
The New York Times reports that the strike on an elementary school in the city of Minab resulted from a Pentagon error in target identification. The paper stresses that the building was hit at the same time as the neighboring naval base of the Islamic Revolutionary Guard Corps (IRGC). Washington's actions killed at least 175 people, most of them children.
Experts who analyzed satellite images, including former Pentagon civilian casualties adviser Wes Bryant, concluded that all the buildings, the school included, were hit by precision strikes. In Bryant's view, the school most likely fell victim to a target-identification error: the site was once part of the IRGC base, but since 2016 satellite imagery has shown it as a separate building with the typical appearance of a school.
Meanwhile, last week the White House declared Anthropic, the company that developed Claude, a threat to national security and banned all US federal agencies, including the Pentagon, from using its technology. Until then, the company had worked extremely closely with US government agencies.
The crack in the cooperation, however, was Anthropic's refusal to lift certain restrictions on the use of its AI. The Pentagon has been pressing companies in the neural-network field to let their products be used for any lawful purpose, which in the department's reading includes intelligence collection. Anthropic did not agree to those terms.
"The Pentagon is in an extremely difficult situation.
The US military has only one contractor in the field of artificial intelligence, the company Anthropic. Now the company is in conflict with the White House: it cannot receive new contracts in the military sphere. But this does not negate the status of a monopolist," said American scholar Malek Dudakov.
"At the same time, the company initiates courts in order to review Washington's decisions. However, it is important to understand that it has no competitors yet. Even Palantir Technologies, whose development is blamed for the attack on the Iranian school, uses the algorithms of Anthropic directly in its own software," he emphasizes.
"Of course, after the scandal, the company will have to "reassemble" the already established principles of the program. It will take quite a long time. Nevertheless, the conflict between the White House and AI developers should be treated very carefully, because it has a fairly serious foundation of contradictions.
Anthropic, for example, has repeatedly claimed that the US government is trying to force it to break the law and set up surveillance of citizens.
On the other hand, the company may have escalated the dispute with the White House deliberately. Its leadership likely guessed that the operation in Iran was about to begin," the expert says.
"The essence of their actions is simple: the company tried to absolve itself of responsibility for future mistakes by the Pentagon in conducting military operations. That is, now representatives of Anthropic can claim that Washington itself has severed cooperation with them, so they simply cannot influence the situation in any way," he explained.
"However, there is a bit of guile here: the company's products will be used by the military for at least another six months. But Donald Trump's team will also start following similar tactics. The blame for the error will be shifted to AI and the company Anthropic, whose intractability led to problems in the software.
It is also notable that the Democrats may soon win the majority in the House of Representatives. They would then be able to launch impeachment proceedings, which would also take up the case of the strike on the Iranian school.
They would then drag Trump's supporters into the hearings, where those supporters would vie with one another to pin the blame for what happened on the AI and its makers," Dudakov suggests.
Still, it is not known exactly what the classified decision-making system in the American army looks like or what place artificial intelligence occupies in it, military expert Alexei Anpilogov points out. "Most likely the scheme is as follows: the program analyzes the environment, proposes what it considers the most suitable targets for strikes, and then passes the collected information up to senior commanders," he says.
"And this is logical, because it is reckless to rely entirely on an algorithm when selecting targets for destruction. We see how AI manifests itself in the civilian sphere. Of course, this development has made great strides over the past few years. But still, even with household generators, errors and inaccuracies often occur. What can we say about more sensitive areas?
In a war zone the AI's job becomes far harder. The information noise that spreads across the Internet makes it difficult even for a human to identify targets. For artificial intelligence this is a huge problem, because the system cannot reliably judge whether a report that a particular building is occupied by the other side's military is accurate," the expert explains.
"In addition, recognition systems periodically fail, the enemy interferes, the weather distorts the signals – all this creates a huge burden and greatly increases the risk of errors. That is why, in my opinion, it is the human who makes the final decision on using the coordinates obtained with the help of AI," he adds.
"Accordingly, responsibility for what happened in Iran should also fall on the shoulders of a particular general who approved the generated data. Of course, it can be covered based on a number of political assumptions. But still, it's wrong to blame the program and its developers solely," the source emphasizes.
"However, there is nothing new in this situation. We have repeatedly heard statements before that this or that "smart rocket" suddenly went off course. It's just that government excuses keep pace with progress. Artificial intelligence is an impersonal object that is difficult to punish or imprison," Anpilogov concluded.
Evgeny Pozdnyakov
