
The sixth sense of Facebook's new AI for online hate and disinformation


The opening to the public of a "Transparency Center" and new numbers on the effectiveness of moderators and algorithms: Facebook is chronicling its work in blocking online hate and other prohibited content. Mark Zuckerberg's social network has announced its results for the first quarter of 2021 in its usual Community Standards Enforcement Report, but this time it goes a step further: "We explain how, when and under what rules a given piece of content, whether text, photo or video, gets removed," says Guy Rosen, the vice president responsible for Facebook's 'integrity'.


It is not the step many have asked for, namely publishing detailed data country by country so that third parties can analyze it too; the non-governmental organization Avaaz, for example, recently made such a request to the European Commission. But the Transparency Center, which explains how harmful content is removed and how the spread of problematic content is reduced, does go some way in that direction. The site is divided into three sections, ranging from the overall approach, to the methodologies and technologies used, to in-depth analyses of specific issues such as elections and disinformation, and on paper it should make the quarterly standards reports easier to interpret.

It is unlikely to be enough to appease the critics asking for access to the data, a request often motivated by the weight social networks carry in political and social debate. Facebook's only concession has been to engage an independent firm, Ernst & Young, to verify that the data is measured and reported correctly. The collaboration, however, has not yet produced any official results.


"We continue to review our rules and work with hundreds of organizations to update them," explains Monika Bickert, vice president responsible for content policy. "Between the start of the pandemic and April 2021, we removed more than 18 million pieces of content from Facebook and Instagram globally for violating our Covid-19 misinformation and harm policies. We are also working to build consensus around vaccines and to combat vaccine misinformation."

Looking at the numbers, and keeping in mind that these are global statistics, the intervention of the moderators, and even more of Facebook's artificial intelligence (AI), has reached remarkable levels of effectiveness in certain areas. In the first quarter of 2021, for example, the prevalence of adult nudity on both Facebook and Instagram fell to 0.03-0.04% of content views, with 28 million pieces of content blocked before they were even reported. Violent and graphic content stood between 0.01 and 0.02% on Instagram and between 0.03 and 0.04% on Facebook, with 16 million posts removed by Facebook's countermeasures. Hate speech came down to about six views in every ten thousand, with 26.9 million pieces of content removed by the social network's systems. Spam aside, and together with fake accounts, these are the categories on which Facebook intervenes most. And it is the first time the company has disclosed not only what it removed but also what escaped it.
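Prevalence, in Facebook's reports, measures how often violating content is actually seen: the share of all content views that involve violating material. As a quick, purely illustrative sanity check of the figures above (this is ordinary arithmetic, not Facebook's code):

```python
# Illustrative arithmetic for Facebook's "prevalence" metric: the share of
# content views that involve violating material. Not official Facebook code.
def views_per_10k(prevalence_pct: float) -> float:
    """Convert a prevalence percentage into violating views per 10,000 views."""
    return prevalence_pct / 100 * 10_000

print(views_per_10k(0.04))  # 4.0 -> adult nudity, upper bound of 0.03-0.04%
print(views_per_10k(0.06))  # 6.0 -> hate speech, "about six views every ten thousand"
```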


The XLM-R algorithm, used since 2019 on the hate speech front and strongly backed by Mike Schroepfer, Facebook's Chief Technology Officer (CTO), can analyze texts and transfer what it learns in one language to another. This only partially compensates for the gap in effectiveness between English and other languages. The result: when data on hate speech was first shared, in the fourth quarter of 2017, the proactive detection rate was 23.6%, meaning that 23.6% of the hate content removed was detected before a user reported it, while the majority was removed only after a report. Today the proactive detection rate is around 97%.
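The idea behind this cross-lingual transfer can be sketched with the open-source XLM-R checkpoints published on Hugging Face: because the model shares one multilingual representation, a classification head fine-tuned mostly on English examples can also score text in other languages. The following is a minimal, illustrative sketch; the two-label setup is an assumption, and the head here is untrained, so it shows the shape of the approach, not Facebook's production system.

```python
# Illustrative sketch of cross-lingual transfer with XLM-R (not Facebook's
# production pipeline). Uses the open-source Hugging Face "transformers"
# library; the two-label setup is a hypothetical example.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "xlm-roberta-base"  # multilingual encoder pre-trained on ~100 languages

tokenizer = AutoTokenizer.from_pretrained(MODEL)
# In practice this classification head would be fine-tuned on labeled (mostly
# English) data; here it is freshly initialized, so outputs are placeholders.
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def score(texts):
    """Return a 'violating' probability for each text, whatever its language."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return torch.softmax(logits, dim=-1)[:, 1]

# The same classifier head serves English and, say, Italian input:
print(score(["an English sentence to check", "una frase italiana da controllare"]))
```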

"But now we are working on a new AI that can analyze texts, images and videos at the same time," reveals Schroepfer himself. "And it can check whether a piece of content violates one or more rules. This is a much broader approach that goes beyond text alone. A harmless photo with an equally harmless caption, taken individually, can have a completely different meaning when the two are put together. Until now that was a difficult analysis for AI; these are subtleties that were considered out of reach for artificial intelligence. We are overcoming this obstacle: in short, we are moving from analyzing the detail to analyzing the context in which the individual elements appear, also comparing them with what has been published in the past."
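What Schroepfer describes is, in general terms, multimodal fusion: embedding each modality separately and letting a single classifier judge the combination, so that a photo and a caption that are harmless on their own can still be flagged together. The sketch below shows only that general pattern; the dimensions, module names and dummy embeddings are assumptions, not Facebook's architecture.

```python
# Illustrative sketch of early-fusion multimodal classification (not
# Facebook's actual model). Text and image are embedded separately, the
# vectors are concatenated, and one shared head judges the combination.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, hidden=256, num_labels=2):
        super().__init__()
        # Project both modalities into a common space before fusing.
        self.text_proj = nn.Linear(text_dim, hidden)
        self.image_proj = nn.Linear(image_dim, hidden)
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden, num_labels),  # judges text+image jointly
        )

    def forward(self, text_emb, image_emb):
        fused = torch.cat([self.text_proj(text_emb),
                           self.image_proj(image_emb)], dim=-1)
        return self.head(fused)

# Dummy embeddings stand in for the outputs of real text and image encoders.
model = FusionClassifier()
text_emb = torch.randn(1, 768)   # e.g., from a transformer text encoder
image_emb = torch.randn(1, 512)  # e.g., from a CNN/ViT image encoder
print(model(text_emb, image_emb).shape)  # torch.Size([1, 2])
```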


There was, however, no mention of the algorithm that organizes what people see in their feeds: the system that keeps pushing similar content at users, creating the so-called echo chambers in which any confrontation between different ideas is effectively banished, increasing the polarization of society. One of the most recent studies from Ca' Foscari University of Venice showed that the effect is real, even if it stopped short of measuring its actual extent. One wonders, then, what would happen if Facebook's new AI, so advanced in understanding complex phenomena, were used precisely to understand how the most extreme opinions form on its own social network.

