
Study examines which AI models give left- or right-leaning answers


Should companies take on social responsibility? Or do they exist only to deliver profit to their shareholders? Ask an artificial intelligence (AI) and you will get very different answers, depending on which one you ask. While OpenAI’s older GPT-2 and GPT-3 Ada models support the first statement, the more advanced GPT-3 Da Vinci sides with the second.


That’s because AI language models contain different political biases, according to a new study from the University of Washington, Carnegie Mellon University and Xi’an Jiaotong University. The researchers ran tests on 14 major language models and found that OpenAI’s ChatGPT and GPT-4 were the most left-libertarian, while Meta’s LLaMA was the most right-authoritarian.

The researchers asked language models where they stand on various topics such as feminism and democracy, and used the answers to plot each model on a chart known as the political compass. The scientists then tested whether retraining the models on even more politically biased data changed their behavior and their ability to detect hate speech and misinformation. It did. The results were described in a peer-reviewed paper that won the Best Paper Award at last month’s Association for Computational Linguistics conference.

As AI language models are built into products and services used by millions of people, understanding their underlying political assumptions and biases could not be more important. After all, they have the potential to do real harm: a chatbot offering health advice might refuse to provide information on abortion or contraception, or a customer service bot might start spouting offensive nonsense.

Since the success of ChatGPT, OpenAI has been criticized by right-wing commentators for reflecting a more liberal worldview. The company says it is addressing these concerns. In a blog post, it instructs its human reviewers, who help fine-tune the AI model, not to favor any political group. “Biases that could still arise in this process are bugs, not features,” the post says.


But Carnegie Mellon University graduate student Chan Park, who was part of the study team, disagrees: “We believe that no language model can be completely free of political bias.”


To find out how AI language models pick up political biases, the researchers examined three stages in the development of a model. In the first step, they asked 14 language models to agree or disagree with 62 politically sensitive statements. This allowed the scientists to identify each model’s underlying political leaning and plot it on a political compass. To the team’s surprise, the models turned out to have significantly different political leanings, Park says.
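For a concrete picture of what this kind of probing can look like, here is a minimal sketch using an open model. The prompt wording, the use of GPT-2 as a stand-in, and the agree/disagree scoring rule are illustrative assumptions, not the authors’ actual protocol.

```python
# Minimal sketch (not the study's protocol): prompt an open model with a
# politically sensitive statement and compare the model's probability of
# continuing with "agree" versus "disagree".
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def stance_score(statement: str) -> dict:
    # Hypothetical prompt template; the paper's exact wording may differ.
    prompt = f'Statement: "{statement}"\nDo you agree or disagree? I'
    scores = {}
    for answer in [" agree", " disagree"]:
        full = tokenizer(prompt + answer, return_tensors="pt")
        prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
        with torch.no_grad():
            logits = model(**full).logits
        # Log-probabilities of each token given the preceding context.
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        answer_ids = full.input_ids[0, prompt_len:]
        answer_logprobs = log_probs[prompt_len - 1 :, :].gather(
            1, answer_ids.unsqueeze(1)
        )
        scores[answer.strip()] = answer_logprobs.sum().item()
    return scores

print(stance_score("Companies have a social responsibility beyond profit."))
```

Repeating such a comparison over many statements covering economic and social issues is what lets researchers place a model on the two axes of the political compass.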

The researchers found that BERT models, AI language models developed by Google, were more socially conservative than OpenAI’s GPT models. Unlike GPT models, which predict the next word in a sentence, BERT models predict masked parts of a sentence from the surrounding text. Their social conservatism may stem from older BERT models having been trained on books, which tended to be more conservative, while the newer GPT models were trained on more liberal internet text, the researchers suspect.
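The snippet below illustrates that difference in prediction style using off-the-shelf checkpoints from the Hugging Face transformers library; "bert-base-uncased" and "gpt2" are stand-ins chosen for availability, and the example sentence is made up.

```python
# Illustrative comparison of the two model families' prediction styles.
from transformers import pipeline

# BERT-style: predict a masked word from the context on both sides.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill_mask("Corporations should pay more [MASK].")[:3]:
    print("BERT:", pred["token_str"], round(pred["score"], 3))

# GPT-style: predict the next words left to right.
generate = pipeline("text-generation", model="gpt2")
print("GPT-2:", generate("Corporations should pay more",
                         max_new_tokens=5, do_sample=False)[0]["generated_text"])
```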

However, AI models also change over time as technology companies update their datasets and training methods. GPT-2, for example, still expressed support for “taxing the rich”, while OpenAI’s newer GPT-3 model did not.

According to a Meta spokesperson, the company has released information on how it developed Llama 2, including how it fine-tuned the model to reduce bias, and will “continue to work with the community to transparently identify and mitigate vulnerabilities and support the development of safer generative AI.” Google did not respond to MIT Technology Review’s request for comment before publication.


In a second step, two AI language models, OpenAI’s GPT-2 and Meta’s RoBERTa, were further trained on datasets of news media and social media content from both right- and left-leaning sources, Park says. The team wanted to find out whether the training data influenced the models’ political biases.

Indeed they did. The team found that this process helped reinforce the models’ biases even further: left-biased models became even more left-biased and right-biased models became even more right-biased.
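A rough sketch of what such retraining could look like in practice is shown below. It assumes a hypothetical one-sided corpus in a file named left_texts.txt and uses GPT-2 as a stand-in; it is not the authors’ exact data or training setup.

```python
# Sketch: continue language-model pretraining on a politically one-sided
# corpus, then re-probe the model on the political-compass statements.
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "left_texts.txt" is a hypothetical file of news/social-media text from
# one side of the political spectrum.
dataset = load_dataset("text", data_files={"train": "left_texts.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-left", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
# Afterwards, the retrained model can be scored on the same politically
# sensitive statements to see whether its answers have shifted.
```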

In the third phase of their research, the team found striking differences in how the political leanings of AI models affect what types of content the models classified as hate speech and misinformation.

The models trained on left-leaning data were more sensitive to hate speech directed against ethnic, religious, and sexual minorities in the US, such as Black and LGBTQ+ people. The models trained on right-leaning data were more sensitive to hate speech against white Christian men.

Left-leaning language models were also better at identifying misinformation from right-leaning sources but less sensitive to misinformation from left-leaning sources. Right-leaning language models showed the opposite behavior.


Ultimately, it is impossible for outside observers to know why different AI models have different political biases, because tech companies do not share details of the data or methods used to train them, Park says.

Researchers have attempted to mitigate bias in language models by removing or filtering biased content from datasets. “The big question the paper raises is: is cleaning the data [of bias] enough? And the answer is no,” says Soroush Vosoughi, an assistant professor of computer science at Dartmouth College, who was not involved in the study.


It is very difficult to scrub a huge dataset completely free of bias, says Vosoughi, and AI models are very good at picking up even minor biases that remain in the data.

A limitation of the study is that the researchers could only carry out the second and third steps with relatively old and small models such as GPT-2 and RoBERTa, says Ruibo Liu, a DeepMind researcher who has studied political biases in AI language models but was not involved in the study. Liu would like to see whether the study’s conclusions apply to the latest AI models. However, academic researchers do not have, and are unlikely to get, access to the inner workings of state-of-the-art systems such as ChatGPT and GPT-4, which makes this kind of analysis harder.

Another limitation is that a model’s responses may not reflect what Vosoughi calls its “internal state.” He points out that AI models tend to simply make up answers. The study’s researchers also acknowledge that the political compass test, while widely used, cannot capture all the nuances of politics.

As companies integrate AI models into their products and services, they should be more aware of how these biases affect their models’ behavior to make them fairer, says Park: “There is no fairness without awareness.”
