
Covid: the risks of Draghi’s “reasoned risk”

by admin

In ten days we will know how the Draghi government’s bet on Covid turned out. Bet is not the right word. “It’s not my style,” Draghi would say. In fact he spoke of a “calculated risk”. Together with Minister Speranza, one of the rare serious people in politics and for that very reason attacked by those who are not, Draghi consulted epidemiologists and virologists. He collected the data that the Regions provide (you make fire with the wood you have): the trend of infections over the 15 months behind us and the number of people vaccinated thanks to General Figliuolo. He took into account the seasonal evolution toward summer and the socio-economic concerns, and finally handed everything over to computer scientists expert in simulation and projection algorithms. At the end of this process he announced the political decision, for which he publicly assumed responsibility.

Matter of sources
The question is: was the risk well calculated? Technically yes, although Salvini and Meloni make their calculations from other data: polls on how many citizens believe the stories they tell and are therefore willing to vote for them. But another question needs to be asked: could those data be trusted? In part, yes, but not for all regions: not those that, as we already know, have falsified them; not those that do not collect them with a scientific method; not those whose data are honest but inhomogeneous. So there is a bit of a bet after all, and it is on the reliability of the sources.

One question remains: how much can algorithms be trusted?
A book by Aurélie Jean (photo above), with which the publisher Neri Pozza inaugurates its presence in quality science writing, helps us understand: “In the Country of Algorithms” (172 pages, 17 euros). It is written in autobiographical form, and precisely for this reason it can be enjoyed as literature while one learns. To complete the picture, I would also recommend “Algorithms for a New World” by Alfio Quarteroni of the Politecnico di Milano, an important name in applied mathematics (Dedalo Edizioni, 80 pages, 11.50 euros).


Mark 1 and the bomb
Aurélie Jean specializes in mathematical modeling and lives between New York and Paris; the magazine “Forbes” has placed her among the forty most influential French women. Her story begins with a visit, or rather a pilgrimage, to MIT to see Mark 1, the computer used to perform the calculations for the first atomic bomb. Not antiques: archeology. We omit other curious and exciting episodes of initiation, which are nonetheless very useful for learning the rudiments of computing (machine language, compilers, and so on), and come to modeling.

How rubber works
In 2005 Aurélie Jean built her first model to simulate the behavior of an elastomer: in everyday terms, the rubber used in tires. Rubber is a mixture of elastomers (polymers with weak or strong bonds) and carbon nanoparticles. Its behavior at the macroscopic level depends on how the two components are mixed at the nanometric level (that is, at the scale of one millionth of a millimeter): the speed and duration of mixing, the temperature. Here we discover the importance of the starting data, which are never as reliable and precise as we would like. Yet it is from them, treated with appropriate algorithms (put simply: suitable procedures), that one arrives at the virtual model of, for example, a real tire.
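The point about imprecise starting data can be made concrete with a minimal Monte Carlo sketch of my own (not the author’s model): feed a toy model noisy inputs and watch the uncertainty propagate into the output. The “stiffness” formula below is an invented illustration, not a real elastomer equation.

```python
import random

def toy_stiffness(mix_time, temperature):
    # Hypothetical response surface (illustrative only): longer mixing
    # raises the simulated stiffness, higher temperature lowers it.
    return 100.0 + 5.0 * mix_time - 0.2 * temperature

def monte_carlo(n, mix_time, temperature, noise):
    """Sample noisy versions of the starting data and collect the
    model outputs; return their mean and standard deviation."""
    random.seed(0)  # reproducible sketch
    outputs = []
    for _ in range(n):
        t = mix_time + random.gauss(0.0, noise)      # uncertain mixing time
        temp = temperature + random.gauss(0.0, noise)  # uncertain temperature
        outputs.append(toy_stiffness(t, temp))
    mean = sum(outputs) / n
    spread = (sum((x - mean) ** 2 for x in outputs) / n) ** 0.5
    return mean, spread

mean, spread = monte_carlo(5000, mix_time=10.0, temperature=25.0, noise=1.0)
print(f"simulated stiffness: {mean:.1f} +/- {spread:.1f}")
```

Even with modest noise on the inputs, the output carries a visible spread: the virtual model is only as trustworthy as the data fed into it.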

Algorithmic bias
In the first part of her journey as a researcher Aurélie Jean is fascinated by the power of the algorithms she studies; in the second part she investigates their limits: bias, that is, systematic distortions, often hidden. There are various types of algorithmic bias. Among the most hidden, Jean places our unconscious beliefs, prejudices, categories, the classifications we carry within us. “Because,” she writes, “no matter how sophisticated an algorithm is, it will always do only what it is programmed for, even in the case of machine learning” (algorithms that learn from their mistakes). “The algorithm,” Jean reiterates, “has no conscience, has no autonomy, is not endowed with magical powers.” A proper demythologizing. However, I would add, when an algorithm is very complex, as when Artificial Intelligence is involved, it becomes a black box even for the best computer scientists, and it is almost impossible to understand “how” it reaches its result. The danger, then, is that we grant the algorithm blind trust, as we would an oracle (the word used by Vespignani) or a guru (as Jean says).


Amplified errors
Aurélie Jean absolves the algorithms of their alleged faults but warns us against excesses of trust, and in particular against self-learning algorithms: not because they are bad in themselves, on the contrary, but because they are by their nature amplifiers of bias: a small systematic error fed into a learning algorithm becomes huge. There follow various examples of bias that lead to racist, undemocratic, gender-discriminatory and, why not, immoral outcomes. The latest issue of “Le Scienze”, now on newsstands, has an article explaining how the algorithms hidden behind social media favor the polarization of positions and consequently the violence and rudeness of the web. Caution, therefore. Two ideas summarize Aurélie Jean’s lesson: 1) a life completely conditioned by algorithms would prevent us from making mistakes and therefore from learning; 2) in the realm of algorithms, the choice will belong to those who understand them.
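The amplification mechanism can be sketched in a few lines of my own devising (a caricature, not Jean’s formalism): when a model retrains on data skewed by its own previous output, each round multiplies the error, and a tiny initial bias compounds. The gain factor here is an arbitrary assumption.

```python
def feedback_loop(initial_bias, gain, rounds):
    """Caricature of a self-learning feedback loop: each round the
    model's skewed output becomes the next round's training input,
    multiplying the systematic error by a gain factor > 1."""
    bias = initial_bias
    history = [bias]
    for _ in range(rounds):
        bias = bias * gain  # compounding, as in any feedback loop
        history.append(bias)
    return history

trajectory = feedback_loop(initial_bias=0.01, gain=1.5, rounds=10)
print(f"bias at start: {trajectory[0]:.4f}, after 10 rounds: {trajectory[-1]:.4f}")
```

A 1% distortion, compounded at 1.5x per round, exceeds 50% after ten rounds: this is why a bias that would be negligible in a static program becomes dangerous in a system that learns from its own results.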

Beware of Facebook
From Alfio Quarteroni’s book we draw a complementary idea about Artificial Intelligence applied to Big Data. “The knowledge that can potentially be generated by a dataset is all the greater, the greater the possibility that the dataset can be connected to other sets. The linking of heterogeneous sets confers high epistemic value on digital objects such as GPS positions or DNA sequencing data, to name two examples. In fact, the aggregation of data from a great variety of sources often constitutes the premise for generating extremely effective data analysis tools.” Think about it when you expose yourself on Facebook, Instagram or Twitter, and remember that their fabulous advertising revenues are the fruit of your data.


Marrying “a” robot
One last note, timely at a moment when writing algorithms are supplanting journalists. As Aurélie Jean again reminds us, “a minority of jurists advocate the idea of establishing a specific legal personality for robots, and therefore for algorithms. Some countries have already done so. This is the case of Saudi Arabia, which granted rights to the robot Sofia, giving it citizenship. What a shameful paradox that a robot in that country enjoys more rights than a woman or a foreign worker! Another example comes from China, where an engineer named Zhen Jiajia was able to marry the robot woman he designed himself. Does the image of a silent, docile woman, always well groomed and never in conflict with her husband, coincide with that of the perfect wife for the Chinese?”

I have never liked Alberto Sordi, but one of his films, “Io e Caterina” (1980), anticipated the engineer Zhen Jiajia by forty years.
