
Why chatbots can’t stop being liars


Google has recently discussed with various newspapers (including the New York Times and the Washington Post) the possibility of experimenting with one of its chatbots, called Genesis, to write short articles. The better-known ChatGPT, meanwhile, has on occasion been used to obtain psychiatric advice, to diagnose illnesses, or to draft legal documents: all areas in which reliability and accuracy are essential.

There is only one problem: ChatGPT and other LLMs (Large Language Models, artificial intelligence systems capable of generating texts of all kinds) suffer from so-called hallucinations. This is the term experts use for the tendency of these AIs to present as facts statements that are actually inaccurate or completely wrong. Simply put, chatbots have a tendency to make things up.


As long as we use ChatGPT and similar models as a carefully supervised assistant (to summarize long texts, write simple business emails, or create different versions of a slogan we invented), none of this poses a particular problem. However, if, as many believe, these tools will in the future take on an important role even in industries where accuracy is crucial and errors have serious consequences, then solving the problem of hallucinations is essential.

Is it possible to pull this off? Sam Altman, the founder of OpenAI (the company that makes ChatGPT), is predictably optimistic: “I think we’ll be able to improve the hallucination problem a lot. It may take a year and a half or maybe two, but we will be able to overcome these limits,” he explained during a visit to an Indian university.


Not everyone, however, shares his point of view. LLMs merely rework, through an enormous statistical cut-and-stitch, the vast amount of text in their training data, predicting which word has the greatest probability of being consistent with those that preceded it (for example, by estimating that it is statistically more correct to complete the phrase “I take the dog for a” with the word “walk” rather than with the word “song”).
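To give a rough idea of what this word-by-word prediction means, here is a minimal, purely illustrative sketch in Python: a toy bigram model over a handful of invented sentences. A real LLM works with neural networks over tokens rather than raw word counts, but the principle of choosing the statistically most likely continuation is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the model's training data (illustrative only).
corpus = [
    "i take the dog for a walk",
    "we went for a walk in the park",
    "she wrote a song for the band",
]

# Count how often each word follows a given word (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(prev_word):
    """Return candidate next words ranked by estimated probability."""
    counts = follows[prev_word]
    total = sum(counts.values())
    return [(word, count / total) for word, count in counts.most_common()]

# After "a", both "walk" and "song" are seen in the corpus; the model simply
# picks whichever is more frequent, with no notion of whether it is true.
print(predict_next("a"))  # e.g. [('walk', 0.67), ('song', 0.33)]
```

Nothing in this sketch checks whether a completion is accurate, only whether it is statistically likely, which is exactly the limitation critics point to.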

In all of this, however, there is not the slightest understanding of what they are actually stating, only the ability to produce plausible text. This deficiency is at the origin of the problem of hallucinations: “It is not a solvable problem,” Emily Bender, professor of computational linguistics, explained to the Associated Press. “In reality, these systems are always making things up. When the content they generate happens to be interpretable by us as correct, that only happens by chance. Even if they are improved to be correct in most cases, they will still be wrong.” Not only that: as these tools progress, it will become increasingly difficult for us to tell when they are hallucinating.

“Getting a chatbot to be right 90% of the time is pretty easy,” Yonadav Shavit, a computer scientist at Harvard, explained to Foreign Policy. “But getting it right 99% of the time is a huge unsolved research problem.” And even a 1% error rate can have dramatic consequences when these tools are used in medicine, law, or other high-stakes fields.

So what should we do? Perhaps, as Gary Marcus, professor at New York University, wrote, we should use LLMs and all other deep-learning-based algorithms only “in cases where the stakes are low and the perfection of results optional”. And let human beings take care of everything else.
