
The artificial intelligence of the virtual prosecutor

by admin

An article published on December 26 by the South China Morning Post was titled "Chinese scientists develop AI 'prosecutor' that can press its own charges". According to the report, a project started in 2015 has reached the operational phase: a piece of software can now support investigating magistrates in deciding whether to prosecute eight types of crime, including dangerous driving, fraud and gambling. The scope of application is therefore very narrow, because the crimes that can be analyzed are few and the last word belongs to the magistrate. Nonetheless, the usual alarms about the "robot judge" duly appeared, along with the umpteenth demonstration of how dangerous this "artificial intelligence" supposedly is.

It is almost pointless to note that the uses of this technology are limited and will not extend beyond a certain boundary, given the inherent limitations of the way computers work. That observation is invariably met with the objection "it is like this today, but what about tomorrow?". This is logically flawed reasoning (a paralogism, the experts would say) but one of great rhetorical power: it is very convincing for those who are not experts in the subject. To grasp the flaw, consider that we can flap our arms all our lives, and have our children, grandchildren and great-grandchildren do the same, but we will never fly without the help of some technological tool.

At the root of this paralogism lies the confusion between science and science fiction, unscrupulously exploited by the industry itself and passively absorbed by politicians and journalists, which leads to the humanization of technological tools by attributing to them the characteristics of living beings. It is a script similar to the one we saw with "cyberspace", a word that, according to its inventor, the writer William Gibson, means absolutely nothing, yet has become a constant presence in legislative policy and in Western legal doctrine. Thus, thanks also to the entertainment industry, the perception has taken root that sooner or later we will live in scenarios like those of Bicentennial Man or I, Robot (the films, not Isaac Asimov's stories). A reflection of this attitude is the number of articles in which commentators marvel at the "emotional manifestations" of the new iterations of ELIZA, such as Sophia.


Having said this, let us return to the question of the "robot judge" (as it was improperly called).

The software works by applying natural language processing. It analyzes the investigators' reports and the various documents that make up the case file, and assesses whether the elements obtained from this analysis correspond to conduct punishable under Chinese criminal law. If that is all it does, there is little to worry about.
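The actual design of the Chinese system has not been published, but the decision-support pattern described above can be illustrated with a deliberately simplistic sketch. The charge categories, keyword profiles and threshold below are all hypothetical assumptions for illustration only: the point is that the program merely scores a case-file text against offence profiles and either suggests a charge or defers, leaving the final decision to the magistrate.

```python
# Illustrative sketch only: the real system's design is not public.
# A toy "charge suggester" that scores case-file text against keyword
# profiles for a few hypothetical offence categories. All names and
# thresholds here are invented for the example.

CHARGE_KEYWORDS = {
    "dangerous driving": {"vehicle", "speed", "alcohol", "crash"},
    "fraud": {"deceive", "payment", "fake", "transfer"},
    "gambling": {"bets", "casino", "stakes", "wager"},
}

def suggest_charge(case_text: str, threshold: float = 0.25):
    """Return (charge, score), or (None, 0.0) if no charge clears the threshold."""
    tokens = set(case_text.lower().split())
    best_charge, best_score = None, 0.0
    for charge, keywords in CHARGE_KEYWORDS.items():
        # Fraction of this offence's keyword profile found in the text.
        score = len(tokens & keywords) / len(keywords)
        if score > best_score:
            best_charge, best_score = charge, score
    if best_score < threshold:
        return None, 0.0  # no confident match: defer entirely to the magistrate
    return best_charge, best_score

report = "suspect drove vehicle at high speed after alcohol and caused a crash"
print(suggest_charge(report))  # ('dangerous driving', 1.0)
```

A real system would use statistical language models rather than keyword sets, but the structural point is the same: the output is a suggestion with a confidence score, not a verdict, and anything below the threshold falls back to human judgment.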

Assuming the software works, perhaps helped by a standardization of the way police reports and judicial orders are drafted, there are offences that can objectively be handled automatically. In Italy, too, speeding fines, the quantification of maintenance payments in divorce cases, the verification of the correctness of financial statements in fraud cases, and the granting of credit and loans are substantially automated. The programs may not be very advanced, but the fact remains that an important component of decisions about a person's (social) life is entrusted to automated tools. As for fears about the "automatic judge", it is enough to consider that in Italy, in addition to the infringements detected by Tutor average-speed systems and speed cameras, non-compliance with the vaccination obligation is also ascertained via software, with the burden of proof reversed: you receive the notice of violation and must then prove that you are not responsible, or that you had a good reason for breaking the law. In legal jargon this is called "reversal of the burden of proof", and it is admissible for administrative and fiscal violations, but not for offences punishable under criminal law.


As a matter of principle, then, there is no difference between the instrument developed by the Chinese scientists and those used on this side of the Iron Curtain. It is unthinkable that an enormous number of (potential) violations could be handled entirely by human operators, so much so that case law has gone to great lengths to uphold the legitimacy of notices of violation issued after the fact, or drawn up without the handwritten signature of the officer who detected them. Of course, in those cases the violations concerned the highway code, but as systematically happens in the law, once a principle has been established it can be extended, through judicial decisions, to any other area.

The issue, therefore, is not "whether" to automate the analysis of evidence, but to understand what real, concrete possibilities the accused has to exercise the right of defense when the public prosecutor's assessments derive from the analysis of investigation results produced by software.

This issue is not new either: all proceedings with technical content (tax and financial fraud, medical liability, environmental disasters, cybercrime) rest on the results of technical appraisals and assessments. These are expensive activities, managed by consultants who do not always live up to their reputation. Objectively, suspects who can bear the cost of their own consultants stand a better chance than those who must, by necessity, rely on the work of the experts retained by the prosecution. The use of software to analyze the results of investigations does not escape this rule. A suspect has the right to know how that data was obtained, and therefore the right to request that the functioning of the "algorithms" be verified. But Italian case law has already established, in the matter of digital evidence, that this kind of expert examination cannot be requested in the abstract: the suspect must specify exactly "where" and "how" the technical error occurred. With software as complex as that discussed in this article, such proof would be practically impossible to provide, and so a person would have to stand trial without any real possibility of defending himself. What should frighten us, therefore, is not the artificial intelligence of a virtual prosecutor, but the real intelligence of a real magistrate, or, more precisely, the lack of it.


This article is a reworked summary of the report Artificial intelligence and the right to defense, presented at the General States of Internet Law held in Rome, at the LUISS Guido Carli university, from 16 to 18 December 2021.

