
In Geneva, banning of robots capable of killing is discussed – Pierre Haski


December 14, 2021 9:57 am

A “killer robot” is a lethal autonomous weapon capable of deciding to open fire without any human intervention. No machines of this type are in operation today, but technological progress is such that the question of how to manage such devices will soon arise.

Since 2014, there have been discussions about banning killer robots. On December 13, an international conference bringing together 125 states under the aegis of the United Nations opened in Geneva, but despite an international campaign supported by numerous countries and NGOs, as well as by the UN Secretary General himself, the chances of success are slim.

On the eve of the event, US delegate Josh Dorosin said he was in favor of introducing a non-binding code of conduct, which he believes would be a tool to encourage “responsible behavior” by states. But anti-robot campaigners reject this position as hypocritical, and invoke international humanitarian law to demand a total ban on weapons that could decide on their own whether a human being lives or dies.

Autonomous weapons
The development of artificial intelligence and related technologies makes possible a transformation of weaponry comparable to the emergence of aviation or of nuclear proliferation. This evolution will lead to the appearance of programmed weapons that are autonomous in deciding whether to open fire on a target.

Currently an armed drone, even one operated from hundreds of kilometers away, is still controlled by a pilot who decides whether to open fire. A French drone pilot described a mission in Mali on Collimator, the podcast of the Strategic Research Institute of the French military academy: usually several operators cross-check information and wait for a decision from a higher level before opening fire. This does not prevent irregularities and collateral victims, but human beings remain in control of the process.



If weapons became autonomous, ethical concerns, doubts and respect for the rules of war (already hard enough to uphold during military action) would also be automated, and human beings would be relieved of all responsibility. That is precisely what proponents of a ban object to.

The chances of obtaining a ban are currently minimal, not least given the climate of international tension. For an arms treaty to work, you need a little bit of trust. “Trust, but verify,” said the disarmament negotiators during the cold war.


Such an approach is much harder in the digital age, especially when trust is absent. The Americans say they do not want to be the first to introduce autonomous weapons, but warn that if their adversaries do so, they cannot afford to let them gain an advantage. The Chinese and the Russians will certainly take the same position. And what about Turkey, Iran or North Korea, potential owners of this technology?

The problem is moral but also legal. Who can be held responsible for a war crime if the action was carried out by an autonomous weapon? The philosophical question of the relationship between human beings and machines also emerges. In short, the subject is very broad and deserves the mobilization of public opinion.

(Translation by Andrea Sparacino)
