
When algorithms fail, they are often just misinformed

by admin

You may have had a post removed from Facebook, Twitter or Instagram without understanding exactly why. For most of us, nothing serious. But consider what happens when income depends on a post: on YouTube, for example, a creator can lose advertising revenue because an algorithm has flagged a video as content that violates the platform’s policies or an internal rule.
Behind these mechanisms lies artificial intelligence, in particular machine learning systems, which are capable of “learning” to label the information they receive (for example a like, or a tag on a photograph) and to apply the same label to similar content in the future. Think of when Google asks us, as part of a verification procedure, to select which of nine photographs contain a traffic light: by doing so, we are helping Google’s algorithm to better recognize traffic lights in the images it will later have to process. There are also applications whose purpose is to label news as more or less reliable, and others that must be trained to detect the presence of child sexual abuse images.
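To give a sense of how such labeling works in practice, here is a minimal sketch in Python of a supervised text classifier. The posts, labels and model choice are all invented for the example; real platform moderation systems are far larger and are not public.

```python
# Toy supervised "content labeling": the model learns from human-provided
# labels, then applies those labels to new, unseen content on its own.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts already labeled by human moderators.
posts = [
    "win money now click this link",
    "free prize claim your reward today",
    "photos from our hiking trip last weekend",
    "recipe for homemade tomato sauce",
]
labels = ["policy_violation", "policy_violation", "ok", "ok"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The trained model now labels content it has never seen.
print(model.predict(["claim your free money prize"]))    # likely: policy_violation
print(model.predict(["my grandmother's pasta recipe"]))  # likely: ok
```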
These machine learning systems are pervasive in our everyday lives. We often do not realize how many dimensions of our lives are managed this way, nor are we aware of the risks that mislabeling can pose for society: a model trained on badly classified data will generate wrong outputs, and therefore wrong predictions.
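The effect of bad labels can be demonstrated directly. The sketch below, on synthetic data, flips a growing fraction of the training labels and measures how accuracy on clean test data degrades; the dataset and model are assumptions made purely for illustration.

```python
# Flip a fraction of training labels and watch test accuracy drop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for noise in (0.0, 0.2, 0.4):
    y_noisy = y_train.copy()
    flip = np.random.default_rng(0).random(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]  # mislabel a fraction of the examples
    acc = LogisticRegression(max_iter=1000).fit(X_train, y_noisy).score(X_test, y_test)
    print(f"label noise {noise:.0%}: test accuracy {acc:.2f}")
```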
“In a survey we recently conducted involving 4,000 citizens aged 18 to 75 from 8 European Union countries, almost half said they have almost no knowledge of artificial intelligence,” says Teresa Scantamburlo, researcher in Digital Ethics at Ca’ Foscari University of Venice, who studies the impact of AI on social well-being and will be a guest at the Trieste Next Festival (22-24 September 2022) in a round table organized by the University of Trieste and SISSA titled When machines think too much. “Yet, at the same time, more than 65% of respondents said they were confident that these technologies will help develop an ever more just society. A contrast that surprised us very much.”
A first concrete risk posed by algorithms that “label badly” is discrimination. At various American universities, for example, admission applications are screened not by people but by algorithms that analyze a series of parameters: grades, participation in extracurricular activities, and so on. Where there are biases in the algorithm or in the data used to train it, for example when it does not take into account socio-economic inequality between applicants’ areas of origin, people may be rejected who should have been admitted.
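A toy simulation can make this concrete. In the sketch below, every number is invented: applicants from under-resourced areas receive lower test scores for reasons unrelated to ability, and a screening model trained only on scores quietly reproduces that inequality.

```python
# Illustrative only: a model trained on historical decisions inherits
# the bias baked into them, because it never sees the hidden context.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
under_resourced = rng.random(n) < 0.3        # hidden context the model ignores
ability = rng.normal(0, 1, n)                # what we would like to measure
# Scores are depressed for under-resourced applicants, independent of ability.
score = ability - 0.8 * under_resourced + rng.normal(0, 0.5, n)
admitted = score > 0.5                       # historical, score-only decisions

# The screening model sees only the raw score.
model = LogisticRegression().fit(score.reshape(-1, 1), admitted)
pred = model.predict(score.reshape(-1, 1))

for group, name in ((~under_resourced, "well-resourced"), (under_resourced, "under-resourced")):
    print(f"{name}: predicted admission rate {pred[group].mean():.0%}")
```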
A second problem, this time concerning disinformation, is how easy it has become, using techniques based on neural networks, to create fake or entirely fabricated videos and images. It is not difficult to alter the lip movements of a person who is speaking so that they appear to say something else, and at the same time to generate audio in that person’s own voice saying whatever we want them to say.
The problem is that, most of the time, the algorithms on which even important decisions about our lives are based are not fully transparent: it is difficult to trace why a given output was generated.
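Researchers try to mitigate this opacity with post-hoc explanation techniques. One simple example is permutation importance, sketched below on synthetic data: it estimates how much each input feature matters by shuffling it and measuring the resulting drop in accuracy. This is one illustrative technique among many, not the method used by any particular platform.

```python
# Permutation importance: shuffle one feature at a time and measure
# how much the model's score drops. A large drop means the model
# leaned heavily on that feature.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```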
How do we deal with all this? In 2018 the European Commission published ethical guidelines for the design of artificial intelligence, followed in 2021 by a bill, currently under discussion, on AI systems classified as high risk. Regulation alone, however, is not enough. We need to understand how to hold the reins of machine learning and steer it toward non-discriminatory outcomes. “In our research, which I would dare to call philosophical,” explains Scantamburlo, “we unpack the concepts and internal mechanisms of these algorithms in order to understand their implications, and how human contributions could interact with machine learning systems. We prefer to speak of a social machine: an algorithm that expresses itself by interacting with human beings, who may in turn be able to influence its mechanism.”
