
Afghanistan is the first conflict conducted by artificial intelligence

by admin

Last August, a few dozen military drones and several small tank-like robots simulated an air and ground attack about fifty kilometers south of Seattle. The goal was to hit terrorists hiding in some buildings. The exercise, organized by DARPA, the agency that handles the Pentagon’s frontier technology projects, was meant to test the ability of artificial intelligence to manage complex situations in war zones “at the speed of light”: once the target was identified, the drones and robots devised a plan to strike it on their own, using artificial intelligence algorithms. The episode, reported by Wired, came back to me as I read the news of the American drone strikes against alleged terrorists in Afghanistan: was artificial intelligence used? To what extent? Who really pulled the trigger? According to Wired, the Pentagon has long been rethinking the requirement to keep “humans in the loop” in military operations with autonomous weapons. In short: can we do without humans?

The debate has been going on for some time. Officially, the rule says that artificial intelligence must “allow operators to be able to exercise an appropriate level of human judgment over the use of force”. But does that mean a human has to give approval every single time a drone pulls the trigger? In the case of the Seattle exercise, DARPA concluded that requiring humans to make every single decision would in some cases lead to mission failure, “because no one person is capable of making so many decisions simultaneously”. The thesis gaining ground is that artificial intelligence in warfare does not always need us. Indeed, it needs us less and less: or at least that is what they want us to believe.


Also according to Wired, General John Murray (US Army Futures Command) said at a military conference last April that the possibility of sending swarms of autonomous robots into an attack will force us all to reconsider whether one person can, or should, make every single decision on the use of force: “Is it within the power of a human being to make hundreds of decisions at the same time? And is it really necessary to keep human beings in the decision-making chain?”

The first answer on the ground came a few weeks later, from the violent conflict between Israel and Hamas in May. A few days after the ceasefire, the first confirmations arrived from the Israeli government that it had been “the first artificial intelligence war”, the first war conducted mainly through artificial intelligence algorithms. Artificial intelligence was used both in the defensive phase, to determine the trajectories of the missiles launched against Israel, intercepting only those headed toward inhabited areas or sensitive targets and ignoring the others, and in the offensive phase, in Gaza. According to The Jerusalem Post, soldiers of Unit 8200, an elite unit of the intelligence division, used algorithms in operations code-named “Alchemist”, “Gospel” and “Depth of Wisdom”, feeding data that arrived in real time from many different sources into supercomputers, which generated recommendations on where the targets to hit were. On the front line, according to reports, there were often autonomous “swarms of combat drones”. According to the Israeli military, all this also serves to minimize casualties among civilians; yet in the days of the May conflict civilian casualties were numerous, and many of them were children. If the aim was not to hit civilians, it was not achieved.
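To give a concrete sense of the defensive logic described above, here is a minimal, purely illustrative sketch of trajectory-based interception triage: estimate where an incoming projectile will land and engage only if the predicted impact point falls inside a protected area. Every name, number, and the simplified impact model below are hypothetical assumptions made for illustration; nothing here reflects the actual, non-public Israeli system.

```python
import math
from dataclasses import dataclass

@dataclass
class ProtectedZone:
    name: str
    x_km: float       # east coordinate of zone center, km
    y_km: float       # north coordinate of zone center, km
    radius_km: float  # zone radius, km

def predicted_impact(x0, y0, vx, vy, time_to_impact_s):
    """Dead-reckon an impact point under a deliberately crude model:
    constant horizontal velocity (km/s), drag and wind ignored."""
    return (x0 + vx * time_to_impact_s, y0 + vy * time_to_impact_s)

def should_intercept(impact_xy, zones):
    """Engage only if the predicted impact point falls inside any
    protected zone; otherwise let the projectile land in open ground."""
    ix, iy = impact_xy
    for z in zones:
        if math.hypot(ix - z.x_km, iy - z.y_km) <= z.radius_km:
            return True, z.name
    return False, None

# Hypothetical example: one incoming track, two protected zones.
zones = [ProtectedZone("city center", 10.0, 4.0, 3.0),
         ProtectedZone("power plant", -2.0, 8.0, 1.5)]
impact = predicted_impact(x0=0.0, y0=0.0, vx=0.25, vy=0.1, time_to_impact_s=42)
engage, zone = should_intercept(impact, zones)
print(f"impact ~ {impact}, intercept={engage}, zone={zone}")
```

In a real system the hard part is, of course, estimating the trajectory from noisy radar tracks in real time; the final geometric check shown here is the trivial step.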


In reality, the May conflict in Israel has at least one precedent: according to a United Nations report, on March 27, 2020 the Libyan Prime Minister al-Sarraj ordered “Operation Peace Storm”, an attack by autonomous drones against Haftar’s forces. Drones, the report notes, “have been used in combat for several years, but what makes that attack different is that the drones there operated without human input”: once sent into the attack, they made their decisions autonomously. “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

In short, killer robots are already among us, despite the protests of many humanitarian organizations that see in this shift a dehumanization of war, its transformation into a video game with real consequences. Is the same thing happening in Afghanistan these days? Was the drone with rotating blades used by the Americans to target alleged terrorists autonomous once launched? Or were there still “humans in the loop”? Meanwhile, fears are growing that terrorists themselves will use fully autonomous artificial intelligence systems to strike us more effectively. A 2021 report by the United Nations Office of Counter-Terrorism, titled “Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes”, warns of a threat it is realistic to think is already upon us. Terrorists, the report says, are always among the early adopters of new technologies. According to Max Tegmark, an MIT professor quoted by Wired who heads the Future of Life Institute, weapons driven by autonomous artificial intelligence systems should be banned like biological weapons. But in reality it is a position that seems to command less and less consensus.


