
The new Bing (which uses ChatGPT) already makes mistakes. But it shouldn’t surprise us


It was just a matter of time. The enthusiasm for the new Bing, which drew a million users to the waiting list to try it, has been dampened by the Microsoft search engine’s first errors.

The new Bing, we recall, is based on AI developed by OpenAI, the San Francisco startup that created ChatGPT and in which Microsoft has invested 10 billion dollars. The interface resembles a chat, and Bing acts like a flesh-and-blood expert: it answers any question in natural language, using information found on the web.

But some have noticed that even Bing, like Google’s Bard, cannot be fully trusted (yet).

Dmitri Brereton is an engineer and independent researcher in the field of artificial intelligence. In recent days he has spent a lot of time examining the demos Microsoft used on February 7th in Redmond, at its headquarters, to announce the new Bing, the result of its collaboration with OpenAI.

Brereton discovered several errors of varying severity committed by Bing. He noticed, for example, that when listing the pros and cons of a vacuum cleaner, Bing ‘invented’ features that did not match those in the article cited as its source. Among the “cons”, for instance, Bing listed a power cord that is too short. But the vacuum cleaner in question is cordless.

Another search, about nightclubs in Mexico City ahead of a pleasure trip, was riddled with incorrect suggestions and incomplete information. For one restaurant, for example, Bing gave a non-existent website for making reservations and consulting the menu. Another recommended venue has only one review on TripAdvisor and a last Facebook review dating back to 2016, two details that suggest it has closed. Of yet another, Bing claims that “there are no ratings or reviews yet”. But that is not true: there are many online.

According to Brereton, Bing made its most serious mistakes when analyzing a financial document, specifically the Gap company’s balance sheet, which the search engine was supposed to summarize for the user. Bing altered some of the figures present and invented others.

Whether the errors are small or huge changes little, in reality: for those who want to make generative AI the heart of their search engine, these slips are a huge problem, for two closely related reasons. If users have to verify every piece of information they receive from a search engine, they will stop using it. And if that happens, the tech giants will lose a lot of money, just as has already happened to Google, which paid for a mistake by Bard with a 100-billion-dollar stock market crash.

Brereton himself, by asking Bing to provide information on the error committed by Google’s generative AI, exposed a critical issue that makes you smile. In underlining Bard’s shortcomings, Bing made a mistake of its own, writing that “Croatia left Europe in 2022”. As is well known, Croatia is not only part of the European Union but, on the contrary, completed its integration only recently: on January 1 it adopted the euro as its currency.

The mistakes of Bard, and now also of Bing, should not surprise us. It is no secret that generative AI sometimes suffers from hallucinations, i.e. the tendency to write answers that seem correct and plausible but which, in fact, contain incorrect or entirely invented content.

This problem is well known to the big tech companies jostling in the AI race. Both ChatGPT and Bing specify, on their pages, that responses may contain errors or gaps.

Google has repeatedly said it wants to test Bard until it is efficient and safe, in the name of a “responsible” AI that provides correct answers and, above all, refuses requests from users demanding violent or racist language, or suggestions about crimes or illegal actions. The same approach is shared by Meta, which last November withdrew its AI, Galactica, after users turned it into a generator of hate speech and conspiracy theories.
