Interpol wants to use an AI algorithm against the ‘Ndrangheta: can it work?

The Interpol project called I-Can, which aims to counter the ‘Ndrangheta, “will be able to make use of the first predictive artificial intelligence algorithm to intercept the criminal organization’s expansionist strategies and anticipate the threat”: this was explained at the beginning of November, during the Conference on International Police Cooperation, by prefect Vittorio Rizzi, deputy chief of the Italian police.

Once again, the prospect of using artificial intelligence to fight crime (including organized crime) is making its way into law enforcement agencies. But which system is it, specifically? Few details were provided during the conference, but it is known that Interpol is developing a predictive policing algorithm called Insight, which has been specifically designed, as stated in a presentation, to combat organized crime at the international level. Further requests to Interpol for clarification have not yet been answered.

It is a project still under development, which “will allow Interpol to collect, store, process, analyze and interpret vast amounts of data from multiple sources and in different formats quickly and efficiently”. The sources mentioned include police reports, images, videos and other kinds of textual information (including material found on the web).

The first phase of development concluded in 2021 and used only internal Interpol data, with the limited goal of enabling “enhanced strategic analysis”. In the two-year period 2022-2023, additional data sources will be added, with the aim of enabling an “advanced analytics” capability. The third and final phase will take place in 2024-2026, when the platform, again according to Interpol, “will reach its full potential, incorporating all relevant internal and external data sources and the most advanced technologies to offer the highest possible quality of analysis”.

At that point, the Insight algorithm will be equipped with predictive capabilities allowing Interpol member states to discover criminals’ modus operandi, identify crime trends and patterns, and even “report people of interest”. And so, once again, predictive policing algorithms are making their way back into Italy, where they have already been used (or at least tested) on various occasions, despite having been at the center of fierce controversy all over the world.

The use of systems of this type could, however, clash with the European regulation on artificial intelligence taking shape in Brussels. Although there is still no definitive regulation, it was precisely one of the European Parliament’s two co-rapporteurs (the Italian Brando Benifei) who explained that the goal is to ban predictive policing systems: “A joint amendment that we have presented concerns the prohibition of the use of predictive policing algorithms, because they are too much at odds with the value of the presumption of innocence. Indeed, with these tools, a presumption of guilt seems to be created,” Benifei explained to Italian Tech.

Predictive algorithms of this kind not only already fall within the cases in which (again according to the nascent regulation) artificial intelligence systems are considered high risk and can only be used in limited and exceptional circumstances, but could also end up being banned outright. Especially if (and this is the case with Insight) they also make it possible to “report suspicious people” through algorithmic methods, thus falling squarely within the risk of a “presumption of guilt” mentioned by Benifei.

But why does predictive policing raise so many concerns? First of all, the various systems (from the most famous, PredPol, to Palantir’s Gotham, passing through RTM and KeyStats) have proven, in field tests, less capable than promised at extracting useful information from big data. Despite their differences, these algorithms work in a similar way: using data on crimes already committed (police reports, testimonies, arrest records, identified license plates, times, weather conditions and more), predictive policing systems try to identify correlations and recurring patterns, and thus to predict where and when crimes are most likely to occur in the future.
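To make the mechanism concrete, here is a minimal, purely illustrative sketch of the basic idea behind hotspot prediction. It is not Insight’s or any vendor’s actual method, and the areas and incident data are invented: the “prediction” is simply an extrapolation of historical frequency, flagging the area and time-slot combinations with the most recorded incidents as high risk.

```python
from collections import Counter

# Hypothetical historical incident log: (grid_cell, hour_of_day) pairs drawn
# from past police reports. A real system would ingest many more data sources.
past_incidents = [
    ("cell_A", 22), ("cell_A", 23), ("cell_A", 22),
    ("cell_B", 14), ("cell_C", 2), ("cell_A", 21),
    ("cell_B", 15), ("cell_A", 23), ("cell_C", 3),
]

def predict_hotspots(incidents, top_n=3):
    """Rank (area, hour) combinations by how often crimes were recorded there.

    Places and times with many past incidents are flagged as high risk for
    the future: the forecast is only as good (and as biased) as the records.
    """
    counts = Counter(incidents)
    return counts.most_common(top_n)

if __name__ == "__main__":
    for (cell, hour), n in predict_hotspots(past_incidents):
        print(f"High predicted risk: {cell} around {hour}:00 ({n} past incidents)")
```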

Put like this, it sounds like any police officer’s dream. However, as reported in numerous studies (e.g. Discriminating Data, published by MIT Press), these systems are not only being abandoned by more and more police departments because they are not very effective (a New Mexico police chief explained that the system “didn’t tell us anything we didn’t already know”), but they also risk reproducing the same prejudices already present in society.

For example, people who live in the toughest neighborhoods are more likely to be arrested, partly because those are traditionally the most heavily patrolled areas. The big data fed to the algorithms will therefore indicate that these are the areas with the highest crime density, creating a vicious circle that leads to more and more people being stopped, searched and checked simply for living in a difficult neighborhood (discriminating against, and subjecting to further hardship, an honest person who, for example, lives in an area with a strong ‘Ndrangheta presence).
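This vicious circle can be seen even in a tiny, hypothetical simulation (the areas, crime rates and patrol shares below are invented): two areas with identical real crime rates, one of which starts out more heavily patrolled, end up with permanently unequal recorded crime, because the data the algorithm sees reflects where the police looked rather than where crime actually happened.

```python
# Minimal feedback-loop simulation with made-up numbers: areas "A" and "B"
# have the SAME true crime rate, but "A" starts out more heavily patrolled,
# so more of its crimes end up in the records the algorithm learns from.
true_crime_rate = {"A": 10, "B": 10}   # actual crimes per period, identical
patrol_share = {"A": 0.7, "B": 0.3}    # initial allocation of patrols
recorded = {"A": 0.0, "B": 0.0}        # cumulative crimes entering the data

for period in range(5):
    for area in recorded:
        # More patrols -> more of the (equal) crime gets detected and recorded.
        recorded[area] += true_crime_rate[area] * patrol_share[area]
    # The "algorithm" then reallocates patrols proportionally to recorded crime,
    # so the initial disparity is locked in even though real crime is equal.
    total = sum(recorded.values())
    patrol_share = {area: recorded[area] / total for area in recorded}
    print(f"period {period}: recorded={recorded} patrol_share={patrol_share}")
```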

In this way, a real confirmation bias is also created, as reported by The Next Web: “If the AI indicates that there is a high risk of crime in a particular area and patrol officers find nothing, the credit goes to the AI, which helped prevent crime by showing the police where to go. And if the officers do find crimes? Again, it is thanks to the algorithms, which predicted that crimes would be committed there”.

Doubts about the effectiveness and risks of these tools have been spreading for several years now, to the point that they risk being banned by the European Union and have already been abandoned by the police departments that first decided to use them (such as those of Chicago and Los Angeles, and also in the UK). What are the chances that the Insight system developed by Interpol turns out to be drastically different?
