
Why there is currently no good method against AI hallucinations

by admin

Anyone who asks ChatGPT something hopes for factually correct answers: code that works, correctly solved math problems, accurately summarized books, correct data in biographies and correct quantities in recipes. In practice, however, this is often not the case. Research groups around the world are working on methods to prevent this, but so far they have only managed to limit hallucinations, not to prevent them entirely, reports MIT Technology Review in its current issue.


There are numerous assumptions about how and why large language models hallucinate. “The mechanisms of AI hallucinations are not yet fully understood,” says Iryna Gurevych, head of the Ubiquitous Knowledge Processing Lab at TU Darmstadt. “This is related to the fact that it is difficult to understand the internal processes of a large language model.”

The definition of the problem is already unexpectedly complex: an overview study by the Hong Kong University of Science and Technology (HKUST) lists different types of AI hallucinations, each of which depends on the task and the context – i.e. the current request to a language model. Does the answer contradict information contained in the question (intrinsic hallucination), or does it add further information to the query that is not easily verifiable (extrinsic hallucination)? Should the model draw on acquired world knowledge (factuality), or should it stay consistent with the given context (fidelity)? The result is hallucinations that range from factual errors to unverifiable information, nonsensical statements and implausible scenarios. And only some of these forms of hallucination can actually be combated with technology.
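To keep these distinctions apart, the taxonomy described above can be sketched as a small data structure. The following Python snippet is purely illustrative; the class and field names are shorthand for the survey's categories, not code from the study.

```python
# Purely illustrative: the hallucination taxonomy described in the text above.
# Names are shorthand for the survey's categories, not taken from the study.
from dataclasses import dataclass
from enum import Enum


class Grounding(Enum):
    INTRINSIC = "contradicts information given in the request"
    EXTRINSIC = "adds information that cannot be verified from the request"


class Target(Enum):
    FACTUALITY = "should match acquired world knowledge"
    FIDELITY = "should stay consistent with the given context"


@dataclass
class HallucinationCase:
    grounding: Grounding
    target: Target
    example: str


cases = [
    HallucinationCase(Grounding.INTRINSIC, Target.FIDELITY,
                      "A summary states the opposite of what the source text says."),
    HallucinationCase(Grounding.EXTRINSIC, Target.FACTUALITY,
                      "A biography answer invents an award the person never received."),
]
```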



The easiest to recognize are factual hallucinations. To limit them, a technique called Retrieval Augmented Generation (RAG) is usually used. “Here, the language model is expanded to include a second, external knowledge component,” says Patrick Schramowski, researcher at the German Research Center for Artificial Intelligence (DFKI). Before the model generates an answer, it extracts the relevant information from the query and compares it both with its parametric, trained knowledge and with external sources such as the open internet or specialist libraries. This makes the answers more reliable, especially with regard to current events or in dialogues. “Ideally, however, the system still needs a fact check to verify this external knowledge. That’s where things get complicated,” says Schramowski.
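As a rough illustration of the pattern Schramowski describes, the following Python sketch wires a toy retriever in front of a language model call. The `embed` function and the `llm_generate` callback are placeholders; real RAG systems use trained embedding models, vector databases and curated knowledge sources.

```python
# Minimal sketch of Retrieval Augmented Generation (RAG), assuming a
# placeholder embedding and an external `llm_generate` callback that
# stands in for the actual language model.
import numpy as np


def embed(text: str) -> np.ndarray:
    """Toy embedding: normalized bag-of-characters vector."""
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)


def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: float(q @ embed(d)), reverse=True)
    return ranked[:k]


def rag_answer(query: str, documents: list[str], llm_generate) -> str:
    """Prepend retrieved passages so the model can ground its answer in them."""
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
    return llm_generate(prompt)
```

Even in this reduced form, the two weak points discussed next are visible: the answer is only as good as the retrieved documents, and the grounding step ultimately rests on a similarity score rather than on understanding.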

On the one hand, a comparison with external sources only works if those sources are reliable. Many RAG systems rely on Wikipedia, for example, but its accuracy is assessed very differently in the research literature – from 80 to over 99 percent – depending on which articles the authors checked. On the other hand, language models do not really “understand” the meaning of a statement. Whether a statement agrees with an external source is therefore usually checked with a mathematical function that can only calculate the formal similarity of the two statements. Whether a statement is ultimately classified as true or false then depends on numerous details.
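To see why a purely formal similarity check is fragile, consider the small sketch below. It uses a simple bag-of-words cosine similarity as a stand-in for the embedding-based comparisons real systems use; the failure mode carries over: a single negation flips the meaning while the score stays high.

```python
# Sketch: comparing a generated claim against a source statement with
# cosine similarity over bag-of-words vectors. Real pipelines use trained
# sentence encoders, but they also measure formal similarity, not truth.
import math
from collections import Counter


def bow_cosine(a: str, b: str) -> float:
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0


source = "The Eiffel Tower is located in Paris."
claim = "The Eiffel Tower is not located in Paris."
print(round(bow_cosine(source, claim), 2))  # ~0.94, although the claim contradicts the source
```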


The lack of transparency of many models is another disadvantage. “ChatGPT is a black box. You can do research with it by repeatedly sending different queries and analyzing the output. That is fine for critical observation. But when it comes to improving the models, open models offer us researchers more possibilities,” says Schramowski. With proprietary, closed models like ChatGPT, only the manufacturers themselves can take action against hallucinations.

(wst)
