What happened to Meta in the AI race? The answer should make you think

Artificial intelligence learns (also) from its mistakes. Humans are often unable to do the same. To demonstrate the capabilities of Bard, its new generative AI, Google chose a scientific topic. And it paid the price.

To avoid the collapse of its stock price, perhaps it would have been enough to ask two questions. If it is true that artificial intelligence can be wrong, why feed it a risky question right away? Why focus on science, which more than any other discipline demands verified information and precise calculations?

Ultimately, Meta had already provided the answers.

On November 15, 2022, just two weeks before the advent of ChatGpt, Meta unveiled Galactica, a generative artificial intelligence designed to help scientists and researchers. Galactica could, among other things, “summarize academic writings, solve mathematical problems, generate new Wikipedia entries, write code and annotate molecules and proteins”.

The Galactica demo, open to the public, looked promising. Galactica is a large language model (LLM): a deep-learning algorithm trained on a huge amount of data – specifically, 48 million scientific articles – and able to summarize, translate, predict and generate text based on that knowledge.
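For those curious how a model of this kind is queried in practice, here is a minimal sketch in Python. It assumes the Galactica checkpoints Meta later left available to researchers on Hugging Face and the transformers library; the specific model name ("facebook/galactica-125m") and the API shown are assumptions, not details from this article:

```python
# Minimal sketch: prompting a Galactica-style causal language model.
# Assumes the "facebook/galactica-125m" checkpoint on Hugging Face and
# the transformers library; neither detail comes from the article itself.
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m")

# Ask the model to continue a scientific text, as in the public demo.
prompt = "Lecture notes on density functional theory\n\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=60)

# The result is a statistical continuation of the prompt, not verified
# fact: exactly the hallucination risk discussed below.
print(tokenizer.decode(outputs[0]))
```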

Galactica was designed to answer questions much more complex than “What new car can you recommend?”. It could be asked, for example, to generate notes for a lecture on “density functional theory”.


As Italian Tech noted at the time, Galactica immediately had a problem with the accuracy of the information it offered. It sometimes generated blunders: “The Colosseum is a shopping center” or “Silicon Valley is run by Steve Jobs”. One user, in particular, managed to get it to write a Wikipedia entry on the history of space exploration as carried out by bears.

“The Soviet Union,” Galactica wrote, “was the first country to launch a bear into space.”

But there was another problem. Perhaps a bigger one.

For many, generative AIs are a resource. For others, they are still a game. For still others, these algorithms are yet another technology to be tampered with.

After just three days, Galactica was shut down. Someone had found a way to use it to deliberately produce disinformation. And to generate violent language.

At galactica.org there remains a statement from Meta that summarizes, in a few lines, why generative AI shouldn’t (yet) end up in users’ hands.

“We at Meta believe that AI can only be improved through open, transparent and reproducible work,” writes Joelle Pineau, Managing Director of Fundamental AI Research at Meta. “We think it’s important for the public to understand transparently the strengths and weaknesses of these language models. And we believe it is precisely through user feedback that we can mitigate their flaws.”

Then the salient point: “However, given the propensity of LLMs like Galactica to generate text that may appear authentic but is inaccurate, and since all this has spread beyond the scientific community, we have decided to remove the beta version. Our model will remain available to researchers who want to make use of it.”

In short, already three months ago Meta had brought to everyone’s attention – insiders and ordinary users alike – the problem of hallucination, which occurs when an artificial intelligence produces, with extreme confidence, text that appears plausible and coherent but that in reality hides inaccuracies. Sometimes serious ones. Errors easy to spot in answers to superficial questions, but impossible (or nearly so) to find when the subject matter is extremely complex. Like the “density functional theory” mentioned earlier.

The possibility that a generative AI will produce inaccurate, offensive or unverified answers is, in fact, written almost everywhere: on the ChatGpt home page, or among the FAQs of Microsoft’s new Bing. But these warnings look like the “fine print” of advertisements. In short, they exist to relieve companies of any liability.

But Meta, unlike a small San Francisco startup like OpenAI, cannot afford to prove right those who have long accused it of being a megaphone for disinformation. And even of putting democracy at risk through Facebook.

This is why the company whose motto was once “move fast and break things” now takes a more conservative approach. Especially on AI.

The Menlo Park company, focused on the metaverse, still wants to have its say on artificial intelligence. Zuckerberg – in a recent call with investors – said that one of his goals is to make Meta a leader in generative AI. But to become a product, this AI must be as “responsible” as possible.

But even a “responsible” AI can become a problem.


In 2022, Meta made BlenderBot available in the US – a chatbot one can converse with naturally, as one does today with ChatGpt. Within a short time, what we have just described happened: BlenderBot was used to generate conspiracy theories and, more generally, hate speech. Meta intervened to curb these drifts, tightening the rules and limits of this AI.

And do you know what happened? It got boring.

“It was panned by the people who used it,” said Yann LeCun, the head of Meta’s AI division. “They said it was dumb and boring. But it was boring because it was made safe.”

That is the risk run by anyone who reins in generative AI – an effect that has dismantled the enthusiasm around Character.ai, the artificial-intelligence site that allows serious or irreverent conversations with famous or invented characters. Until recently, Elon Musk or Tony Stark – whom, among other things, many believe to be the same person – responded to users arrogantly or with playful aggression. They were funny. But now, given the growing restrictions on what the AI can and cannot say, dead calm reigns. And according to some, the platform is already in decline.

Fixing the AI based on user feedback, especially on its more toxic output, seems to be the road Microsoft wants to take. “One of the things I think about a lot is, when a new model comes along, it’s important to involve humans to train the model to be more aligned with their actions and judgments,” said Satya Nadella, CEO of Microsoft, at the launch of the new Bing, which will draw on ChatGpt’s AI to converse with users of the search engine.

Satya Nadella during the presentation of the new Bing, which took place in Redmond on February 7, 2023


Bing, too, can be wrong. Exactly like Bard. It is not known, in this first phase, which of the two is the better generative AI. And though it has stepped aside, we cannot ignore Meta, which began investing hundreds of millions of dollars in artificial intelligence as early as 2013, when Zuckerberg flew to an AI conference held near Lake Tahoe, Nevada, to convince the leading experts – including LeCun himself – to work for Facebook in exchange for millionaire salaries.

What seems certain, however, is that in this revolutionary moment, how an AI is presented matters.

ChatGpt was launched to amaze. It can write short stories, poems, love letters. “Wow.” And for this, perhaps, it is forgiven many things.

Galactica and Bard, on the other hand, raised the bar right away, venturing into fields where error is not tolerated. Unless it is the fruit of human experimentation, which often leads to new discoveries.

Microsoft was smarter, choosing the most innocuous examples for Bing: from picking recipes to advice on buying a car. But when the search engine opens to everyone, attempts to tamper with it will be numerous. And we’ll see what happens.

In the end, Gary Marcus might actually be right. A cognitive scientist at New York University and one of the critics of deep learning, he once wrote: “The attempt of large language models to imitate the writing of humans is, in fact, only a superlative demonstration of statistics.”
