
The dangerous war games of artificial intelligence

by admin

Sometimes it is just a matter of labels. A while ago, OpenAI researchers only had to stick a piece of paper reading "iPod" on a green apple to fool their machine learning model into mistaking the fruit for a gadget.
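For readers who want to see the mechanics, here is a minimal sketch of how such a "typographic attack" can be probed with a publicly available CLIP-style model through the Hugging Face transformers library. The model checkpoint is the public one; the image file names and captions are illustrative assumptions, not the researchers' actual setup.

```python
# Sketch of zero-shot CLIP classification, the kind of model that can be fooled
# by text pasted onto an object. Image paths and captions are hypothetical.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a Granny Smith apple", "a photo of an iPod"]

def classify(image_path: str) -> None:
    image = Image.open(image_path)
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        # Similarity between the image and each caption, one score per caption.
        logits = model(**inputs).logits_per_image
    probs = logits.softmax(dim=-1)[0]
    for label, p in zip(labels, probs):
        print(f"{label}: {p.item():.2%}")

classify("apple.jpg")            # expected: the apple caption wins
classify("apple_with_note.jpg")  # a handwritten "iPod" note can flip the result
```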

Nothing too serious, and no one was hurt (except academic pride), but what would happen if the same algorithm were used in the military and a patrol robot were tricked by a bomb with "soccer ball" written on it?

It is one of the questions that researchers at the United Nations Institute for Disarmament Research ask in the report "Known unknowns: data issues and military autonomous systems", which warns against the risks of relying too heavily on artificial intelligence on the battlefield.

Examples of smart weapons
Autonomous or semi-autonomous weapons, capable of deciding almost entirely on their own when and what to strike on the battlefield, and often referred to as killer robots, are still a limited but growing phenomenon.

Drones like the Kargu (video above) fly over Libya and Syria: Turkish-made, once the coordinates or an image of the target have been entered, it can identify and strike it without human intervention. South Korea, for its part, has for some time been watching over the area that separates it from the North with the SGR-A1, semi-automatic turrets equipped with machine guns and connected via optical fiber to a command center.

Even in the recent conflict in the Gaza Strip, the Israeli side is reported to have made massive use of artificial intelligence to locate targets.


To date, the decision whether or not to shoot is always made by a flesh-and-blood supervisor, a human being, but even that could change: there are situations, such as a coordinated multi-pronged attack by a swarm of drones, where human reflexes may not be up to the task and systems able to defend themselves and counterattack on their own may prove more effective.

But that holds only as long as they do not get it wrong, whether by accident or because they have been deliberately deceived:

  • because the enemy manages to break into the transmission channels and deliberately feed in incorrect data (a technique known as spoofing);
  • because the data were unreliable to begin with;
  • because the machine is faced with new data that it cannot classify (a toy safeguard for this last case is sketched below).
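To make that last point concrete, here is a toy sketch, not taken from the UNIDIR report, of one common safeguard: checking whether a classifier's output looks confident and unambiguous before allowing any autonomous action, and otherwise deferring to a human operator. The function name and threshold values are hypothetical, not operational settings.

```python
# Toy "defer to a human" check on a classifier's raw scores (logits).
# Thresholds are illustrative, not real operational settings.
import torch

CONFIDENCE_THRESHOLD = 0.90  # minimum top-class probability to act autonomously
MARGIN_THRESHOLD = 0.20      # minimum gap between the two most likely classes

def should_defer_to_human(logits: torch.Tensor) -> bool:
    """Return True when a detection should be escalated instead of acted on."""
    probs = logits.softmax(dim=-1)
    top2 = probs.topk(2).values
    low_confidence = top2[0].item() < CONFIDENCE_THRESHOLD
    ambiguous = (top2[0] - top2[1]).item() < MARGIN_THRESHOLD  # nearly tied classes
    return low_confidence or ambiguous

# Example: scores that split roughly 55% "armored vehicle" vs 45% "bus" are deferred.
print(should_defer_to_human(torch.tensor([0.2, 0.0])))  # True
```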

The risks of the real world
One of the main problems is that, as the author of the UN report, Arthur Holland Michel, writes, "compared to controlled environments, uncontrolled conflict environments pose a wide range of challenges".

Artificial intelligence algorithms perform best when they can count on reliable data and on scenarios consistent with those on which the machine was trained; faced with anomalous and unexpected events, the model has to be updated and recalibrated. It is one thing to beat a human pilot at air combat in the ideal environment created by a flight simulator, as happened last year; it is quite another to do it in the sky, where rain, smoke and stress can degrade the sensors.


Not to mention what happens in a battle on the ground: trees, reflections, people in motion and camouflage can make it difficult for artificial intelligence to correctly catalog what it is seeing.

It is not always easy for a machine to distinguish between a military armored van and a school bus if the shape and color are similar, or to understand that what it is framing is a tank if a key part of the vehicle (the cannon) is hidden behind a tree.

In some cases, such as that of the tank, it can be easy for a human supervisor to step in and resolve the situation. In others it is less so, because of the lack of time and because it is not always clear how the mathematical model, which draws on billions of pieces of information in a few moments and connects them together, arrives at a given decision.


The Black Box Issue
It is the so-called black box problem: AI often gets it right, but it is not clear why. And in war, cases of misjudgment could lead to disastrous results without humans noticing until it is too late.

"Imagine a reconnaissance drone which, due to spoofing or incorrect data, incorrectly categorizes a target area as having a very low probability of civilian presence," Holland Michel told Popular Science. "The human soldiers acting on that system's assessment would not necessarily know it was faulty, and in a fast-moving situation they may not have the time to check and find the problem."


This also raises the question of liability: in the case of a massacre of civilians due to a mistake in the AI's assessment, who is to blame? Whoever designed the algorithm? Whoever supplied the machine with incorrect data? Or the soldier who gave the green light, or who did not intervene in time to correct the robot's decision?
