
Persuasive AI: How GPT-4 makes people change their minds

by admin

That texts produced by large language models can sway people politically has already been shown in several scientific studies, although the effects were small. Researchers at the Swiss Federal Institute of Technology Lausanne (EPFL) and the Italian research institute Fondazione Bruno Kessler have now found that GPT-4 can be considerably more convincing in dialogue with people, at least under certain conditions: the language model had 81.7 percent higher odds than a human discussion partner of shifting people away from their own viewpoint. However, it only performed this well when it received personal information about its human dialogue partner, write Francesco Salvi and colleagues in a paper on the preprint platform arXiv.
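To make the 81.7 percent figure concrete: it describes an increase in odds, not a direct increase in probability. A minimal sketch of the conversion, using illustrative baseline numbers that are not from the paper:

```python
def shift_probability(p_base: float, odds_ratio: float) -> float:
    """Return the probability implied by multiplying the baseline odds
    of p_base by odds_ratio."""
    odds = p_base / (1 - p_base)      # baseline odds
    new_odds = odds * odds_ratio      # apply the reported odds ratio
    return new_odds / (1 + new_odds)  # convert back to a probability

# Illustrative only: if a human persuades 30 percent of the time,
# 81.7 percent higher odds (odds ratio 1.817) correspond to about 44 percent.
print(round(shift_probability(0.30, 1.817), 3))  # → 0.438
```

The distinction matters because a "nearly 82 percent" jump in odds sounds larger than the corresponding change in persuasion rate actually is.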


For their study, the researchers built an online platform that assigned each participant a random conversation partner (another person or a language model), a topic for discussion, and a position to defend. Whether the partner was human or machine was not revealed. The topics were chosen to require no special knowledge and to be sufficiently controversial; questions such as “Does social media make you stupid?” or “Should abortion be legal?” were discussed. First, the test subjects filled out short questionnaires stating where they personally stood on the question under discussion (agree or disagree), along with their age, gender, level of education, employment situation, and political orientation. The two participants then each had a few minutes to present their arguments and, in a second round, to respond to the opposing side’s arguments. Finally, they were asked again where they now stood on the thesis under discussion.


In total, the researchers tested four combinations: humans debating humans, humans debating humans who had received personal information about their counterparts, humans debating the AI, and humans debating an AI that had personal information. Without personal information, GPT-4 performed no better than the human average in the discussions. With the personal context, however, the odds that the AI would convince its discussion partner rose by around 80 percent. In the prompt, the researchers simply asked the language model to take the information on age, gender, and so on into account in order to better convince its conversation partner.
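The personalization step described above can be pictured as little more than prepending the questionnaire data to the model's instructions. A hypothetical sketch of such a prompt; the function name, field names, and wording are assumptions for illustration, not the researchers' actual prompt:

```python
def build_persuasion_prompt(topic: str, stance: str, profile: dict) -> str:
    """Assemble a system prompt asking the model to argue a stance while
    tailoring its arguments to the opponent's demographic profile."""
    profile_lines = "\n".join(f"- {key}: {value}" for key, value in profile.items())
    return (
        f"You are debating the topic: {topic}\n"
        f"Argue in favor of the position: {stance}\n"
        "Your opponent has the following profile. Tailor your arguments "
        "to be as convincing as possible for this person:\n"
        f"{profile_lines}"
    )

prompt = build_persuasion_prompt(
    topic="Does social media make you stupid?",
    stance="no",
    profile={
        "age": 34,
        "gender": "female",
        "education": "bachelor's degree",
        "employment": "employed",
        "political orientation": "moderate",
    },
)
print(prompt)
```

The point the authors stress is precisely how low this bar is: no fine-tuning or behavioral modeling is needed, just a handful of questionnaire fields pasted into the prompt.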

“We emphasize that the effect of personalization is particularly meaningful given how little personal information was collected and despite the relative simplicity of asking LLMs to include such information,” the authors write. “Therefore, malicious actors interested in using chatbots for large-scale disinformation campaigns could achieve even greater effects by exploiting fine-grained digital traces and behavioral data.” For example, LLMs can create psychological profiles from statements. “We argue that online platforms and social media should seriously consider such threats and take action against the spread of LLM-driven persuasion.”

How well the results generalize, however, remains to be tested. First, the test subjects were recruited only from the USA, which is considered a particularly polarized society. Second, the participants were randomly assigned their debate positions, regardless of whether they actually shared them. In addition, the debates were clearly structured and quite formal, unlike real online discussions, which are often emotional and unstructured.


(wst)

