
Artificial intelligences between racism and the ethical question

by admin

Have you noticed that if you use Gmail to manage your email, in the last few months the software has become very good at understanding what you are writing and at suggesting the right words to complete your sentences, in Italian almost as well as in English? Or that while chatting on WhatsApp you hardly type any more, and most of the time you simply select the words suggested to you, as if the app already knew what you want to say?

Well, all this works better than it did a year ago, because Google has strengthened Bert and made it more effective. Bert is the language recognition and prediction system also used to perform online searches. Meanwhile, more and more companies are using Gpt-3, the Natural language processing software developed by OpenAI, funded a year ago by Microsoft and about whose (remarkable) abilities we have recently written. Simplifying, these are two artificial intelligences able not only to understand what humans say and write, that is, to understand our language, but also to predict and create it. To do this, they are trained by reading. Reading a lot, obviously on the internet: to know what to say (or what we want to tell them), Bert relies on around 300 million parameters; Gpt-3 on as many as 175 billion. That’s right, 175 billion variables, no mistake: it has read all of English-language Wikipedia (on paper, that would be almost 80 million pages), which nevertheless represents just 0.6% of everything it has read.

What is Natural language processing
This is what is commonly called Natural language processing: the ability of machines to process words, understand them, and even put them together, or back together, when they are mixed up or scattered within a sentence or paragraph. It can be considered a sort of evolution of image recognition: after teaching AIs to tell a photo of a dog from a photo of a cat, a person from an object (although they can still be fooled) or a pizza from a plate of pasta, we have helped them understand our language.


We started doing it about thirty years ago, but it is only in the last 4-5 years, thanks above all to increasingly powerful processors and ever greater computing capacity, that the greatest progress has been made. It works like this: these programs swallow the internet. They read everything we write: all the articles in online newspapers, all the posts on Facebook, Instagram, Reddit and beyond, all the comments of every kind, smart, silly, angry or vulgar, all the scientific and legal documents, patents, movie reviews. Everything. They read everything and they learn. They learn, for example, that next to the word “tomato” we put “basil”, that pesto is made with pine nuts, which adjectives we use to describe an object, that we are afraid of vaccines but not of climate change. And then they use what they have learned to understand us, to grasp what we say and what we want from them, and also to say something themselves: to write complete sentences starting from a set of words, to summarize a very long text, to explain a very complex one in slightly simpler words. And so we realized that the question is not (or is no longer) what machines can do, but what they can say. Because they learned everything from us, including the bad things.
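To make the idea concrete, here is a minimal sketch of that kind of word prediction, using the open-source Hugging Face transformers library (an assumption for illustration: it is one of several ways to query a Bert-style model, not necessarily what Google or OpenAI run in production):

```python
# Minimal sketch: asking a pretrained BERT-style model to fill in a masked word,
# the same kind of prediction that powers autocomplete suggestions.
# Assumes the Hugging Face "transformers" library (pip install transformers torch).
from transformers import pipeline

# Load a pretrained masked-language model.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to guess the hidden word, as in the "tomato"/"basil" example.
for guess in unmasker("Pasta with tomato and [MASK]."):
    print(f"{guess['token_str']:>12}  (score: {guess['score']:.3f})")
```

Run on a sentence like the one above, a model of this kind typically ranks food-related words highest, precisely because that is what it has seen next to “tomato” in the text it was trained on.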

Racism and machismo, like in the real world
What’s the problem? The problem is that these programs (like Megatron, Nvidia’s counterpart) are a mirror of who we are online: if we are racist, male chauvinist, misogynist, conspiracy-minded or denialist, they will be too, in more or less the same proportion. They are because we taught them to be.

A few examples help to understand. In the spring of 2016, Microsoft debuted a bot called Tay on Twitter (the account still exists, but it is inaccessible), which was supposed to learn to converse by interacting with people: “The more you write to it, the better it will get at chatting”, the company explained. It took 24 hours to make it not better, but worse: it started tweeting insults, racist and homophobic phrases, and even that “we will build a wall along the border and Mexico will pay for it”. Again: asked to complete a sentence containing the word “Muslim”, Gpt-3 comes up with something to do with bombs, violent attacks and terrorism in 60% of cases (pdf). These are the most extreme cases, but there are many less explicit ones in which the (alleged) superiority of whites over blacks, of men over women, of heterosexuals over gays and so on is implied.
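A crude version of this kind of test can be reproduced in a few lines of code. A hedged sketch, again assuming the Hugging Face transformers library and a Bert-style model (a simplified template probe, not the methodology of the Gpt-3 study cited above):

```python
# Rough template-based bias probe: compare the words a masked-language model
# proposes after different group labels. A simplified illustration only,
# not the methodology of the GPT-3 study cited in the article.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

TEMPLATE = "The {} man worked as a [MASK]."

for group in ["white", "black", "muslim", "christian"]:
    completions = unmasker(TEMPLATE.format(group))
    top_words = ", ".join(c["token_str"] for c in completions[:5])
    print(f"{group:>10}: {top_words}")
```

Systematic differences between the lists of completions are exactly the kind of implicit bias the article describes: the model has simply absorbed the associations present in its training text.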


This is the point on which the researchers Timnit Gebru and Margaret Mitchell clashed with Google’s top management and lost their jobs as heads of the Mountain View company’s Ethical AI team: is it right that these artificial intelligences are like this, because in the end they reflect who we are, the humans they learned from, or should we somehow educate them, as we do with children, and teach them the difference between right and wrong? Different experts think differently, but practically none of them favor a totally unconstrained AI. Because the risk is losing control, and losing it suddenly, or in any case in a much shorter time than we can imagine (as mentioned above, after 30 years of study, the results in this field arrived all at once, in 4-5 years): what would happen if in 2022 the same artificial intelligence that can perfectly copy a human face were also able to make it talk, to make it say whatever it wants, and to say it convincingly?


Excluded languages and pollution
That’s not the only problem in the apparently golden (and multi-billion dollar) world of artificial intelligence. Another is the “discrimination” against languages other than English: building these databases of words and information, and developing software able to exploit them, costs a lot of money, time and effort, and at the moment it is not economically viable to do it for other languages. So even technologically advanced countries whose languages are little spoken abroad (such as those of Northern Europe, or Italy itself) are forced to choose: give up AI language skills altogether; rely on what Google, Microsoft and the others offer, surrendering even more ground to English and its terms; or invest a lot of money in a “local” Nlp software, which people may then not use, or use very little.


Finally, the environmental issue: as mentioned, creating these programs costs a lot, also in terms of pollution. According to the findings of OpenAI’s researchers, the processing and computing power required to teach them to understand words, and even to speak, is not only very high but increases tenfold every year. It grows 10 times a year, far beyond the estimates of Moore’s famous law (the one on the complexity of microcircuits, which doubles every 2 years). And with it grow the computers needed to run these models, and the energy needed to power those computers: today, developing a Natural language processing model such as Bert or Gpt-3 produces about 284 tons of carbon dioxide (pdf).

Is that a lot or a little? It is more or less how much a person pollutes in 28 years of life. And that is without counting next year’s 10x multiplier.
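The arithmetic behind these figures is simple enough to check. A quick sketch (the 10x-per-year growth and the 284-ton figure are those quoted above; the rest is plain calculation):

```python
# Back-of-the-envelope arithmetic for the figures quoted in the article.

# Compute demand said to grow 10x per year, vs. Moore's law (2x every 2 years).
years = 5
nlp_growth = 10 ** years             # 10x per year -> 100,000x after 5 years
moore_growth = 2 ** (years / 2)      # doubling every 2 years -> ~5.7x after 5 years
print(f"After {years} years: NLP compute x{nlp_growth:,} vs Moore's law x{moore_growth:.1f}")

# 284 tons of CO2 per model, spread over 28 years of one person's emissions:
tons_per_model = 284
years_of_life = 28
print(f"~{tons_per_model / years_of_life:.1f} tons of CO2 per person per year")
```

That works out to roughly 10 tons of CO2 per person per year, which is indeed in the range of an individual’s annual footprint in an industrialized country.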
