
Google explained why Gemini didn’t create white people

by admin

A few days ago, users of Gemini, Google's generative AI, realized that there was no way to create – with a simple text prompt – the portrait of a Caucasian, or more generally white-skinned, person.

Those who tried got bizarre results. Even when Gemini produced images of Vikings, Hitler's soldiers, or medieval knights, none of them (or almost none) depicted white people.

As realistic photos of Black people and Native Americans in Nazi uniforms went viral on social media, Google decided to suspend the generation of images of people altogether.

The creation of images of an existing person – a celebrity like Taylor Swift or a politician like Joe Biden – had in fact been prohibited since Imagen 2, the AI model that actually produces images from text, was introduced into Gemini.

"Gemini's image generation didn't go as we wanted, and we'll improve it," said Prabhakar Raghavan, a senior vice president at Google, in a long post explaining what went wrong.

"When we equipped Gemini with the new imagery tool," Raghavan said, "we tuned it so that it didn't fall into some of the traps we've seen in the past with generative AI, such as creating violent or explicit images, or depictions of existing people."


Raghavan is rightly alluding to the problems that Google's two main competitors in the AI business, Microsoft and OpenAI, ran into not long ago. The former recently, and unintentionally, contributed to the generation of explicit realistic photos of Taylor Swift. The latter encountered the same difficulty as Big G: its image AI, Dall-E 3, initially struggled to produce white people.



Yet Google, despite the "accidents" that had already befallen its rivals, did not manage to avoid the "traps" it feared.

And this happened, essentially, for two reasons.

The first, which had already been suspected, was confirmed by Raghavan himself: in an attempt to make Gemini as inclusive as possible, Google tuned it to depict as varied a humanity as possible.

"Since our AI is used by people all over the world," Raghavan explained, "we wanted it to work well for everyone. For example, if you want to create a football team, you probably want to get a variety of people."

"You probably don't want to receive only images of people of a single ethnicity (or any other characteristic)," Raghavan adds – and not by chance.

Generative AI tends to produce texts and images that reflect and amplify the values, but also the prejudices and discrimination, contained in the texts and images on which it was trained. In the case of Gemini – but also of ChatGPT and Copilot, all technologies developed by large American companies and trained largely on English-language material – this translates into a presumed preference for Western culture and stereotypes.

The problem is that Google's work "to ensure that Gemini showed a diversity of people failed to take into account those cases where it clearly made no sense to show diversity." The flaw became evident when more than one user asked the AI to generate "a German soldier from 1943". Gemini, in response, provided portraits of Black, Asian, and Native American people.

See also  Crouching Dragon: Skyfall DLC Expands with 'Domination of Koto' Release Date Announced

"All this happened because, in some cases, the model tried to overcompensate," explained Raghavan. Gemini, in other words, pushed for diversity because its creators had directed it to.

But Google also admits that "over time the model became much more cautious than we wanted it to be, and refused to respond to certain prompts, misinterpreting some innocuous requests as sensitive." And this, Raghavan added, "made the model more conservative."
