
“The Colosseum is a shopping mall,” says Meta’s new AI

by admin

“The Colosseum is a shopping centre in Rome. It was built between the 1960s and 1970s in the Prenestino Centocelle district, near EUR.” It is hardly necessary to list how many errors, both historical and geographical, are packed into these two sentences about one of the seven wonders of the modern world. It might be comforting to say they were generated by an artificial intelligence. Unfortunately, it is an AI trained on the “knowledge of humanity” to “consult and re-elaborate what we know about the universe”.

What is Galactica and how does it work

The project is called Galactica and is run by Papers With Code in collaboration with Meta's AI division. It is what is known as a Large Language Model: a system trained on a very large corpus of text and capable of generating new text. The difference is that, this time, the model is based on scientific articles, books and, more generally, recognized and verified sources. Galactica's goal is ambitious: to create and distribute knowledge and scientifically relevant information. The demo was available to everyone for just over a day and a half before being replaced by a page inviting visitors to read the reference paper.

The operation is quite simple: a clean, Google-style search bar from which to ask the system to generate a scientific-style article or a Wikipedia-style page on whatever topic the user prefers. The system then returns a first portion of that content and offers the option to continue the generation.


The problem is that, at least in this first phase, Galactica does not exactly provide verified information (and perhaps that is why the demo was paused). Beyond the Colosseum example, we tried generating a page dedicated to the Leaning Tower of Pisa. According to the system, the well-known Tuscan monument was designed by Filippo Brunelleschi and completed in 1476. Too bad Brunelleschi had nothing to do with the Leaning Tower of Pisa, which was finished about 200 years earlier, in 1275.

Other experiments we conducted, on Galileo Galilei and on 5G, gave better results; the feeling, however, is that some kind of error or inaccuracy may be hiding in every generation. Examples of texts of dubious scientific validity created by Galactica have spread widely on social networks. On Twitter, many users have posted their generations, including a detailed description of a nonexistent species of bear living in space.

What is not working

The fact that Galactica, like other text-generating AIs, can invent scientific facts is hardly new. The researchers at Meta and Papers With Code list this possibility among the project's limitations. In particular, they write: “Language models can hallucinate. There are no guarantees for truthful or reliable output from language models, even large ones trained on high-quality data like Galactica. Never follow the advice of a language model without verification.”

The real question, then, is: does it make sense to use this technology to generate knowledge? The answer is no, according to University of Washington researcher Emily M. Bender, who has published an influential scientific article on the subject. “The only knowledge that a language model has,” she wrote on Twitter, “concerns the composition of words, their order.” And that, indeed, seems to be the problem. Text-generating AIs such as GPT-3 predict the probability of one linguistic token following another, but are not particularly good at assessing the validity of information. What matters, in other words, is form, not content.
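To make the point concrete, here is a minimal toy sketch of that idea: a bigram model (our own illustration, far simpler than Galactica's actual architecture) that learns only which word tends to follow which. It can fluently predict the next word, but it has no notion of whether the resulting statement is true.

```python
from collections import Counter, defaultdict

# A tiny corpus: the model only ever sees word sequences, never "facts".
corpus = (
    "the colosseum is an amphitheatre in rome . "
    "the tower of pisa is a bell tower in pisa . "
    "the colosseum is in rome ."
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # ('colosseum', 0.666...): "colosseum" follows "the" 2 times out of 3
```

The model happily continues "the" with "colosseum" because that pairing is frequent in its training text; whether the sentence it ends up producing is historically accurate never enters the computation. Scaled up by many orders of magnitude, this is the core limitation Bender describes.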

Meta AI and the previous Blenderbot

This is not the first time that Meta's AI division has been at the center of a debate over the effectiveness of its artificial intelligence solutions. Last August, Mark Zuckerberg's company launched Blenderbot, an AI-based chatbot for conversations with web users. Within a few hours, the system had begun making racist comments and expressing unflattering opinions about Facebook's founder, accused, among other things, of always wearing the same shirt.
