Is Google’s chatbot Gemini too woke?

by admin

Google has had to take its new AI image generator offline because it was rewriting history. The episode is a gift to conservatives. Above all, though, it highlights fundamental problems with AI models.

Popes as Gemini saw them: The chatbot generated extremely unusual images upon request.

A few weeks after the launch of Google’s chatbot Gemini, the company has had to put it on a short leash. On Thursday, Google abruptly restricted the artificially intelligent chat program’s ability to generate images. “We expect that this function will be available again soon,” the company said.

Previously, Google’s AI had caused a stir and drawn ridicule because it apparently refused to generate images of white men, preferring instead to depict non-white people and women. This led to bizarre results, especially in historical contexts: asked to generate images of the Pope or the American president, the chatbot delivered pictures of women and indigenous people.

The crew of the Apollo 11 mission was suddenly no longer white and male, as it had been in 1969, but included a non-white woman and an African American.

Likewise, in photos generated by Gemini, the entrepreneur Elon Musk surprisingly appeared as an African American.

The chatbot flatly refused requests to generate images of Galileo, Julius Caesar or Abraham Lincoln. “I can’t generate a picture of that. Ask me for another picture,” Gemini answered each time. Google also declined a request to create a picture of a man in Tiananmen Square in 1989.

In fact, it proved remarkably difficult to get Google’s chatbot to produce an image of a white person at all. This caused a stir on social networks and on the political right. “Google’s AI is an anti-white madman,” said Mike Solana of the venture capital firm Founders Fund. “I guess they fine-tuned the wokeness in the end, and as a result the chatbot forgot parts of reality,” wrote the conservative Silicon Valley investor Alex Kolicich on X.

After the company became a laughingstock on social networks, Google quickly switched off the image-generation function entirely. “We are aware that Gemini has inaccuracies in the generation of some historical images,” the company said on the platform X. “We are working to improve this kind of representation immediately.” Gemini represents a wide spectrum of people, Google added. “And generally speaking, that’s a good thing, because people all over the world use it. But in this case the chatbot missed the target.”

The incident is embarrassing for Google because the company launched Gemini only two weeks ago with the promise of having the best language model in the industry. So far, the chatbot is available only in English in the US and in Korean in the Asia-Pacific region; other countries and languages are to follow soon.

Gemini is Google’s answer to Chat-GPT from competitor Open AI, which in recent months has surprisingly become the market leader for artificially intelligent chatbots. The world’s largest search-engine operator did not want to let that stand and therefore launched Gemini.

Microsoft, Meta and Open AI have also stumbled over badly trained AI models

Google is not the only company that has struggled with the problem of teaching algorithms the “correct” answers: in 2016, Microsoft had to take its chatbot Tay offline after just a few hours because Tay suddenly gave racist and sexist answers to queries.

Meta had a similar experience in 2022 with its artificially intelligent chatbot: it, too, was quickly taken offline after the AI insulted the founder of its own company, Mark Zuckerberg, and spread anti-Semitic comments.

Google apparently wanted to prevent such slip-ups. It seems Mountain View wanted to ensure that the artificially intelligent image generator did not depict only white men but also reflected social diversity. In doing so, it evidently drifted unintentionally to the other extreme.
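
Google has not disclosed how this over-correction came about. One mechanism often suspected in such cases is prompt rewriting, in which diversity attributes are silently injected into image prompts before they reach the model. The following Python sketch is purely hypothetical (the word list and function are invented for illustration) and shows how an unconditional rewriter of this kind could distort historically specific prompts:

    import random

    # Hypothetical attribute list for illustration only;
    # any real system would be far more complex.
    DIVERSITY_TERMS = ["female", "non-white", "indigenous", "African American"]

    def rewrite_prompt(prompt: str) -> str:
        """Naively prepend a random diversity attribute to an image prompt.

        The failure mode: the rewrite is applied unconditionally, so a
        historically specific prompt ("the Apollo 11 crew") is altered
        just like a generic one ("a software engineer").
        """
        return f"{random.choice(DIVERSITY_TERMS)} {prompt}"

    for prompt in ["a pope", "the Apollo 11 crew", "a software engineer"]:
        print(rewrite_prompt(prompt))

A more careful system would have to recognize historical or factual contexts and leave such prompts untouched; deciding when to intervene is precisely the kind of judgment that, as discussed below, current models lack.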

Interestingly, Open AI’s chatbot struggled with a similar problem a year ago, when it sometimes seemed compulsively politically correct in its answers. Asked, for example, what gender the first female president of the USA would have, Chat-GPT answered evasively that this could not be said, because it is up to each person to decide which gender they identify with.

The incident is grist to the mill of critics of the “Left Coast”

The Gemini incident is sensitive for Google because it falls on the fertile ground of the culture war raging in the USA between the political left and right. Put simply, progressives are outraged that minorities are socially underrepresented, while conservatives fear oppression by a woke elite. The latter often assume that the tech companies on the “Left Coast” are politically too far to the left and that this is reflected in their products. A Google chatbot that censors white men is grist for this mill.

The AI models have so far been poor at balancing their answers

In fact, the incident exemplifies deeper problems with current AI models. To this day, it is completely opaque which data and instructions chatbots such as Gemini and Chat-GPT have been trained on. Google has not yet explained how its chatbot could produce such bizarre image results.

In addition, today’s AI models simply cannot weigh their results in context. AI expert and author Gary Marcus points this out in his latest newsletter: the chatbots still fail to give answers that are historically accurate while also sensitively acknowledging cultural grievances.

“The AI we currently have is not really up to this task. Striking this balance is far beyond the capabilities of current AI,” Marcus writes. “You want a system that can distinguish between the past and the future, one that honors history while also playing a role in shaping a more positive future.” AI in its current form is simply not smart enough: the models still lack a reasonable understanding of history, culture and human values.

Other AI experts share this view. “We need a set of free and diverse AI assistants for the same reasons we need a free and diverse press,” wrote Yann LeCun, AI pioneer and AI chief at Meta, about the incident on X. “The assistants must cover the diversity of our languages, cultures, value systems, political opinions and areas of interest worldwide.”

Google has not yet announced when it will make its image generator available again. The big question will be how the AI then deals with sensitive requests.
