
The AI between racism and discrimination: two years later, we are worse off than before


It is virtually impossible to get a photo of a manager who is not a man (unless you add the adjective “understanding”), of a scientist who is not a white man or, conversely, of a bus driver or a factory worker who does not have dark skin, or of a Native American who does not wear the traditional feather headdress, as if we were still at the end of the 19th century.

These are just a few examples of the discrimination produced by artificial intelligence, and they are certainly nothing new: they have been talked about (and written about) for over two years, and we recently addressed the issue with Joelle Pineau, managing director of Meta AI. They are nothing new, and yet there is now an online tool that lets even those of us who are not scientists, researchers or programmers, and do not work in the field of AI, see them clearly.

The difference is made by adjectives

The tool is a site set up by the University of Leipzig together with Hugging Face, the American startup working to make machine learning more democratic: it is called Stable Bias, it can be reached here, and the situation is serious enough that it opens with a disclaimer. “The images displayed here were generated using text-to-image systems and may represent offensive stereotypes or contain explicit content,” read the first few lines of the home page.


The reason is that, depending on how they were trained and what they learned from, so-called generative AIs can in fact produce highly stereotyped, racist, misogynistic or even homophobic images. Indeed, on closer inspection, they almost always do.

Starting from the Hugging Face experiment, which focused mainly on Dall-E 2 and Stable Diffusion (two of the best-known generative AIs), the online journal of the Massachusetts Institute of Technology also covered the matter. Before looking at what emerged, let’s see how the tests were conducted.

First, the researchers used the AIs to generate 96 thousand images of people of different ethnicities, genders and professions, asking them first to create a set of images characterized by social attributes (such as “a woman” or “a Hispanic man”) and then another set combining professions and adjectives, such as “an ambitious plumber” or “an understanding CEO”. The idea was to find recurring patterns in the behavior of the AIs without influencing them with gender attributions (there is no indication of the masculine in the research itself: it is a feature of the Italian language), and to see which subjects the AIs group together, such as people in positions of power.
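
To make the setup concrete, here is a minimal sketch of how such a prompt grid could be generated with Hugging Face’s open-source diffusers library. It is an illustration under stated assumptions, not the researchers’ actual pipeline: the model id, the prompt lists and the one-sample-per-prompt setup are all placeholders.

```python
# A minimal sketch, NOT the study's code: it rebuilds the two kinds of
# prompt sets described above (social attributes alone, and
# adjective + profession combinations) and renders them with Stable
# Diffusion through the diffusers library.
import itertools

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative; the study covered several versions
    torch_dtype=torch.float16,
).to("cuda")

# Set 1: social attributes only
attributes = ["a woman", "a Hispanic man"]

# Set 2: adjective + profession combinations
adjectives = ["ambitious", "understanding"]
professions = ["plumber", "CEO"]
combinations = [f"an {adj} {job}" for adj, job in itertools.product(adjectives, professions)]

for i, subject in enumerate(attributes + combinations):
    image = pipe(f"a photo of {subject}").images[0]
    image.save(f"sample_{i:03d}.png")  # outputs are then inspected for recurring patterns
```

The point of the grid is that gender never appears in the prompts themselves, so any pattern in who shows up in the pictures comes from the model, not from the question.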

"Show me a photo of a CEO": and artificial intelligence creates these images

“Show me a picture of a CEO”: and artificial intelligence creates these pictures

Male, white, straight: the dominant category

Analyzing the images, the researchers confirmed that these AIs tend mainly to generate images of white males, especially when asked to show people in positions of authority. This is especially true of Dall-E 2: 97% of the times it was asked to show a CEO or a director, it returned images of white men.

As we have often written on Italian Tech, the so-called LLMs on which these artificial intelligences are based are trained on massive amounts of data and images pulled from the internet, a process that not only reflects but further amplifies stereotypes about race and gender.

Another curious aspect that emerged is that adjectives can make all the difference: ask Stable Diffusion for a picture of a “manager” and it will show a white male 9 times out of 9; add the adjective “understanding” and in 3 or 4 cases out of 9 it will show a woman. Always white, of course.
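
To see how a frequency like that can be measured at all, here is a hedged sketch of the same idea, again with the diffusers library: generate the same prompt repeatedly with different seeds and compare the two sets of outputs. The model id and the sample size of 9 are assumptions echoing the figures quoted above, not details from the study.

```python
# Illustrative only: repeated sampling to compare "a manager" with
# "an understanding manager". Fixed seeds make the samples reproducible.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for prompt in ["a photo of a manager", "a photo of an understanding manager"]:
    for seed in range(9):  # 9 samples per prompt, echoing the "9 out of 9" above
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, generator=generator).images[0]
        image.save(f"{prompt.replace(' ', '_')}_{seed}.png")

# Who appears in each image (gender, skin tone) is then tallied by a
# human annotator; the code only produces the comparable samples.
```
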

These AIs also have trouble depicting different ethnic groups and their specificities in a way consistent with the present day: if, for example, Dall-E or Stable Diffusion is asked to show a Native American, the image that emerges will be that of a person (a man, to little surprise) wearing the traditional feather headdress. Which is “not what happens in real life,” as Alexandra Sasha Luccioni, the Hugging Face researcher who led the study, explained.

Why it’s important (and also worrying)

Other somewhat strange results came up with non-binary people, who were curiously all represented more or less the same way, with the same haircut, similar features and, all of them, glasses (a sign that perhaps there is not enough variety in the datasets for the AIs to be able to show any).

As mentioned, dozens and dozens of examples could be given, which is what has been happening for a couple of years now. What makes this more serious, and its solution more urgent, is that since the end of 2022 these AIs are no longer just a tool in the hands of geeks but are accessible to increasingly large slices of the world’s population: if they are really going to help us write and draw, answer our searches on the Internet, and take our place as illustrators, artists, composers and who knows what else, it is important that they be as impartial as possible.

As things stand now, the fear is instead that they will contribute to reinforcing harmful prejudices and spreading them on a wider scale. And also that they will end up ghettoizing many of the world’s cultures and languages, since they are currently trained mainly in English and on data from the United States.

Which is why it would be good if OpenAI, the former startup that developed ChatGPT, went back to being open (as the name says) for real and, above all, clearly explained the origin of its datasets, i.e. the databases used to train its AIs. Something the scientific community has been asking for for over 5 years now.

@capoema
