
How Meta’s artificial intelligence works to detect fake news

by admin

Eliminating offensive and potentially dangerous content has always been among the most important challenges for social networks. And it is particularly so for Meta, the company that manages Facebook and Instagram: between July and September 2021, Mark Zuckerberg’s company removed over 9 million posts that incited hatred, according to the latest Community Standards Enforcement Report.

According to Menlo Park, this is still not enough: nearly 4 million of the 9 million deleted posts were reported by users rather than first identified by artificial intelligence. To improve those percentages, Meta has developed a new AI system based on a technology called FSL (the acronym stands for Few-Shot Learning). This is an innovative approach to AI, able to find controversial posts more effectively and, above all, to respond more quickly to any changes in Facebook and Instagram regulations.

How Supervised Learning works
Currently, Meta uses artificial intelligence systems based on so-called SL, short for Supervised Learning. These are tools that work on the basis of a long training process. To train this kind of AI to recognize posts that contain information on Covid vaccines, for example, it is necessary to feed the system hundreds of thousands of cases, previously tagged and categorized. At that point, after a learning process that can be very long, the system will be able to recognize (though not always perfectly) coronavirus vaccine posts by identifying common words and traits.
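As a toy illustration of this supervised approach (not Meta's actual system, which uses large neural models), the sketch below trains on a handful of invented, pre-labeled posts by counting word frequencies per label, then classifies a new post by word overlap. All texts, labels, and thresholds here are made up for demonstration:

```python
from collections import Counter

def train(examples):
    """Build per-label word-frequency counts from (text, label) pairs."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Score each label by the relative frequency of the post's words under it."""
    words = text.lower().split()
    def score(label):
        c = counts[label]
        total = sum(c.values())
        return sum(c[w] / total for w in words)
    return max(counts, key=score)

# Hypothetical labeled examples; a real SL system needs hundreds of thousands
training = [
    ("vaccine microchip conspiracy hoax", "misinfo"),
    ("vaccines alter your dna hoax", "misinfo"),
    ("clinic offers covid vaccine appointments", "ok"),
    ("vaccine trial results published today", "ok"),
]
model = train(training)
print(classify(model, "they say the vaccine is a hoax"))  # → misinfo
```

The sketch makes the article's point concrete: the classifier only matches surface words it has already seen, so any new phrasing of the same claim would require collecting and labeling fresh examples.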

The SL model has two main problems. The first has to do with how it works: these systems were born to look for similarities, but they do not really understand the object of their analysis; they look for common elements and output the corresponding label. The second problem is the time needed for learning: when, as in Meta's case, we are dealing with human speech, the object of moderation is constantly changing, both in the words used and in the meanings attributed to them. To keep up, artificial intelligence needs millions of examples and a lot of time, during which a given term or meaning can be used to spread disinformation.

How Few-Shot Learning works
FSL systems, which learn from only a handful of examples, have the advantage of adapting in a short time, based on a relatively limited number of cases. As a concept, it is a model somewhat closer to the way humans learn.

Basically, as Meta explained in a post on the company blog, the system was first trained to recognize the way people speak, based on billions of natural-language examples pulled from around the web. It was then fed a range of specific information on Facebook's and Instagram's policies, tagged and categorized over the years. Finally, it can be updated with a text explaining a new moderation rule and, where available, with a handful of examples.
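A minimal sketch of that last step, again purely illustrative: instead of retraining on millions of labeled posts, the system receives a new rule as plain text plus a couple of example violations (the few "shots"), and flags posts that resemble either. Real FSL systems compare learned neural representations; this toy version uses simple bag-of-words cosine similarity, and the policy text, examples, and threshold are all invented:

```python
from collections import Counter
from math import sqrt

def vec(text):
    """Bag-of-words vector as a word-count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A new rule described in plain text, plus two example violations --
# the few "shots" standing in for a huge labeled corpus
policy = "claims that the covid vaccine modifies or alters human dna"
shots = ["the vaccine will rewrite your dna", "vaccines change human dna forever"]

def flags(post, threshold=0.3):
    """Flag a post if it is similar enough to the rule text or any example."""
    refs = [vec(policy)] + [vec(s) for s in shots]
    return max(cosine(vec(post), r) for r in refs) >= threshold

print(flags("warning this vaccine alters your dna"))  # → True
print(flags("book a vaccine appointment today"))      # → False
```

The design point the article describes survives even in this toy form: adding a new rule means writing it down and supplying a few examples, not rebuilding a training set from scratch.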

In other words, the Meta system has already learned, in recent months, to recognize the language that human beings use. On top of that, it was trained to understand that some ways of using the language are not tolerated on the platform. The difference is substantial: there is no mere identification of common traits, but rather (or at least there should be) an understanding of the language and of each individual policy.

This can be particularly useful, and the company reports it is already proving effective, for identifying and moderating new kinds of fake news. In the blog announcement, the example given is that of posts claiming the anti-Covid vaccine modifies DNA, which the FSL system learned to recognize and flag as fake after a short training.

The new challenges
This innovation comes at a particularly delicate moment for Meta regarding content moderation: recently, leaders of the Rohingya minority sued Facebook for $150 billion for having contributed to the genocide of their people in Myanmar, due to its failure to intervene against racial hatred on the platform. The new challenge for Meta and FSL will hinge on the (still unknown) ability of the artificial intelligence to work effectively in languages other than English.
