
Open letter warns against manipulative AI

by admin

Business Insider has a series of articles about LLMs: Large, creative AI models will transform lives and labour markets; How generative models could go wrong; The world needs an international agency for artificial intelligence, say two AI experts; How to worry wisely about artificial intelligence; Large language models’ ability to generate text also lets them plan and reason.

But the most interesting one is this: How AI could change computing, culture and the course of history, in which they thankfully not only discuss the widely covered topics such as hallucinations and disinformation, but also ask how this technology could change human perception and what effects that might have on our psyche. This is a topic that has interested me ever since I watched social media and the web do exactly that, with far-reaching consequences around the world.

Technology changing human perception is nothing new, but it is always revolutionary. Language kicked off our cognitive evolution, writing enabled the Agricultural Revolution, movable type and the printing press led to decades of religious wars in Europe, and the internet and social media gave us a resurgence of tribalism and atomized our common view of the world. There is no reason to believe that AI won’t have a significant impact on human cognition and on how we interact with each other, and it is all but guaranteed that some of this will go wrong.

One of these cognitive dangers of AI is hidden in this paragraph:

An obvious application of the technology is to turn bodies of knowledge into subject matter for chatbots. Rather than reading a corpus of text, you will question an entity trained on it and get responses based on what the text says. Why turn pages when you can interrogate a work as a whole?


Because you don’t know the right questions. Even if you are very familiar with a topic, the unique perspective a human author has woven into a text gives you new ideas and angles from which new insights emerge. You therefore cannot consult a text “as a whole” unless you have actually read it. Of course, this can be very useful as a companion tool, much like reading the Wikipedia page for a book you’re reading – but never as a substitute for reading.

Furthermore, AI could lead to a world of synthetic ghosts: digital twins trained on our personality traits as expressed in posts, podcasts, videos and so on, models that potentially live on forever after our death. What happens to the human psyche when you can always talk to a realistic model of your late mother? We have no idea.

The article cites Laurie Anderson, wife of the late Velvet Underground singer Lou Reed, as an early example. She has an AI assistant trained on her work and that of her husband: “She doesn’t view using the AI as collaborating with her late partner,” likely because the system was only trained on the creative output of Reed and Anderson, not other personality traits. However, future systems will certainly be so sophisticated that they can pass as “synthetic personality twins”.

Freud’s concept of the uncanny (unheimlich), in which we can’t quite decide whether something is familiar or strange, alive or dead, then becomes central to the questions raised by these mimetic AI models of the dead, and they can interfere with the psychological mechanisms of the grieving process, which I wrote about here a few months ago.


These are some of the issues on the horizon that bother me most: not paperclip maximizers, not disinformation. The unknown unknowns of AI are showing up fast these days.

Although I signed the call for a moratorium, the open letter I piqed here, written by scientists from the University of Leuven in Belgium, seems more relevant to me for the reasons just mentioned: We are not ready for manipulative AI.

The recent chatbot-encouraged suicide in Belgium highlights another major concern: the risk of manipulation. While this tragedy illustrates one of the most extreme consequences of this risk, emotional manipulation can also manifest in subtler forms. Once people get the feeling they are interacting with a ‘subjective’ entity, they build a bond – even unconsciously – that exposes them. This is not an isolated incident. Other users of text-generating AI have also described its manipulative effects. (…)
Most users realise rationally that the chatbot they interact with has no understanding and is just an algorithm that predicts the most plausible combination of words. It is, however, in our human nature to react emotionally to such interactions. This also means that merely obliging companies to indicate “this is an AI system and not a human being” is not a sufficient solution.
Some individuals are more susceptible than others to these effects. For instance, children can easily interact with chatbots that first gain their trust and later spew hateful or conspiracy-inspired language and encourage suicide, which is rather alarming. Yet also consider those without a strong social network, or who are lonely or depressed – precisely the category which, according to the bots’ creators, can get the most ‘use’ from them. The fact that there is a loneliness pandemic and a lack of timely psychological help only increases the concern.
It is, however, important to underline that everyone can be susceptible to the manipulative effects of such systems, as the emotional response they elicit occurs automatically, without us even realising it.
“Human beings, too, can generate problematic text, so what’s the problem?” is a frequently heard response. But AI systems operate on a much larger scale. And if a human had been communicating with the Belgian victim, we would have classified their actions as incitement to suicide and failure to help a person in need – punishable offences.

Freely adapted, shortened and edited machine translation from my newsletter: AI’s uncanny effect.
