Why does ChatGPT give wrong answers? The problem of “hallucinations”

by admin

The accuracy of ChatGPT has deteriorated significantly in recent months, especially for applications based on the GPT-4 language model. However, there is another major problem that the AI has carried with it since its inception: that of “hallucinations”. But what are these ChatGPT “hallucinations”, and why do they occur?

The answer comes from TS2.Space, which tried to provide a “technical” explanation of the problems affecting ChatGPT and generative AIs in general. All applications based on “natural language” (i.e., those used without writing lines of code) share an architecture known as the “Transformer”, composed of a series of stacked layers and mechanisms that allow the model to interpret the questions users ask and to learn both from those questions and from the data it is fed.
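To make the stacked-layer idea concrete, here is a minimal sketch of a Transformer encoder. The choice of PyTorch is our assumption (the article names no framework), and the hyperparameters are purely illustrative, far smaller than GPT-4’s:

```python
import torch
import torch.nn as nn

# One self-attention + feed-forward "level"; real models stack dozens of these.
layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)

tokens = torch.rand(1, 16, 512)   # stand-in for 16 embedded input tokens
contextualized = encoder(tokens)  # each output vector now mixes in context
print(contextualized.shape)       # torch.Size([1, 16, 512])
```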

During training, ChatGPT is fed enormous datasets, much of them written in natural language, through which the AI learns to create relationships and links between words and sentences. The problem with this model, though, is that the AI does not comprehend in any way what it is learning or why a certain word (or phrase) is related to the others. In other words, ChatGPT does not understand the concepts it learns or the world around it the way the human brain does: it only generates text patterns based on the information it has been given.
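The distinction matters: the model’s “knowledge” is statistical, not conceptual. A deliberately tiny sketch of the idea (the word counts below are invented for illustration) shows how text can be generated from co-occurrence frequencies alone, with no notion of meaning:

```python
import random

# Invented co-occurrence counts: how often word B followed word A in "training".
next_word_counts = {
    "the": {"cat": 4, "capital": 3, "moon": 1},
    "capital": {"of": 9, "city": 1},
}

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    words, counts = zip(*next_word_counts[word].items())
    return random.choices(words, weights=counts, k=1)[0]

print(next_word("the"))  # e.g. "cat" -- picked by frequency, not by sense
```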

This is what causes the “hallucination” problem. If ChatGPT is asked a question based on words that are not sufficiently represented in its dataset (such as the proper names of little-known people, places, or historical events, or topics more recent than the AI’s last training run), the model will struggle to find relationships and connections starting from those words, leading to inaccurate answers and outright blunders.

Making matters worse are the AI’s training optimization systems. ChatGPT’s text generation is based on the probabilistic distribution of links between one word and the next, which is explored with a system known as beam search. The latter is very fast, but certainly not perfect: sometimes it produces “delusional” results, with repeated text or meaningless sentences, since priority is given to the words that most often appear next to the starting ones, not to the construction of a coherent or appropriate discourse.
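As a rough illustration of how beam search can favor locally frequent words over globally coherent sentences, here is a hedged sketch. The toy probability table and beam width are our assumptions; in the real system these probabilities come from the neural network:

```python
import math

# Toy next-word probabilities; in the real model these come from the network.
probs = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "the": 0.3, "end": 0.2},
    "a":   {"cat": 0.7, "end": 0.3},
    "cat": {"sat": 0.6, "end": 0.4},
    "sat": {"end": 1.0},
    "end": {},
}

def beam_search(beam_width: int = 2, max_len: int = 6) -> list[str]:
    beams = [(["<s>"], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for word, p in probs.get(seq[-1], {}).items():
                candidates.append((seq + [word], score + math.log(p)))
        if not candidates:  # every beam reached a dead end
            break
        # Greedily keep only the beam_width highest-scoring sequences.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]

print(beam_search())  # e.g. ['<s>', 'the', 'cat', 'sat', 'end']
```

Note how the search only ever keeps the top-scoring continuations at each step: a word that frequently follows the previous one always wins, even when a less likely word would have led to a more coherent sentence overall.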

Finally, what makes solving ChatGPT’s hallucination problem extremely complex is the lack of feedback systems, which limits the effectiveness of the AI’s training itself. The absence of these mechanisms, however, could be remedied in the coming months, thanks to the work of researchers and engineers.
