“Lies and hate, Facebook violates agreement with users”: Reporters Without Borders takes Zuckerberg to court

by admin

Facebook is going to court for spreading fake news and hate online. Reporters Without Borders (RSF), the Paris-based organization that monitors freedom of information and the safety of journalists, has filed a complaint against Mark Zuckerberg's social network with the French judiciary. The complaint, addressed to the French and Irish branches (the latter being Facebook's European headquarters), rests on the accusation of "unfair commercial practices": the continued proliferation of false, offensive, and harmful content on the platform amounts to a violation of the social network's own terms of service. While Facebook promises its users a "safe space" and commits to "significantly limit the dissemination of false information," there is strong evidence that these commitments have been breached. And not only in the crucial year of the American elections, nor only in the dramatic first year of the pandemic: it still happens today, despite the reports, the campaigns, and the committees, even after internal mea culpas and the clean-up attempts undertaken on the platform.

Facebook, and its subsidiary Instagram, are of course not the only ones under scrutiny. On Thursday Mark Zuckerberg will again appear before the US Congress, together with Twitter CEO Jack Dorsey and Alphabet CEO Sundar Pichai, to testify on the presence of extremist and disinformation content on their platforms.

But the case filed in France is an interesting precedent, and RSF hopes it will set a benchmark for future prosecutions, at least in Europe, since the terms of service are the same everywhere.

In particular, Reporters Without Borders alleges, Facebook "constitutes the main hotbed of vaccine conspiracy theories for French-speaking communities."

The dossier also contains a study by the German Marshall Fund, an American think tank, which counted 1.2 billion interactions in the last quarter of 2020 with pages that spread hoaxes. As an example of Facebook's inability to fight online hatred, RSF cited, among others, the dozens of insulting or threatening comments that appeared on the page of the satirical magazine Charlie Hebdo. Even though the publication's editorial office had been the target of a terrorist massacre, RSF stresses, the platform did nothing to remove comments inciting violence against Charlie Hebdo's authors. Another example cited by RSF is the spread of "Hold Up – Return to Chaos," a French documentary full of conspiracy theories.

These are violations of the agreement with users that, according to the NGO, could cost Facebook hefty fines of "up to 10% of average annual turnover," and set a precedent for similar complaints. In recent months, Facebook has been the subject of several complaints in France. In early March, fourteen feminist activists filed a complaint over Instagram's decision to ban some of their content while leaving hateful comments against them online.

Violations of the "agreement with users" are being documented practically in real time. Avaaz, an American nonprofit digital activism network for the defense of democracy, did so in recent days with research identifying 267 Facebook pages and groups that, during the American election year, reached 32 million users with disinformation and incitement to violence. Over two thirds of these groups and pages are directly attributable to extremist organizations that in recent months have become household names in the attempts to destabilize American democracy, up to the attack on the Capitol on January 6. Among them is Boogaloo, a movement that began online and then spilled into armed showdowns on American streets with the promise of a second civil war and the overthrow of institutions. There are also pages and groups tied to the QAnon conspiracy, built on the belief that Donald Trump is waging a secret battle against the evil forces of the "deep state" and a network of Satanist pedophiles who have infiltrated Hollywood, the media, business, and above all the big tech companies of Silicon Valley. Since 2020 Facebook has banned all of these organizations from the platform. Yet, Avaaz says, 118 of those 267 pages and groups are still active, with an audience of 27 million; over half of them are affiliated with Boogaloo, QAnon, and other militias. And in recent weeks at least three pieces of content directly inciting violence have circulated, which the group urgently reported to Facebook.

Beyond the complaint, the report helps explain how difficult it is for the platforms themselves to stem a phenomenon to which they opened their doors (and which they encouraged, given that controversial content has been shown to generate more traffic and interactions, and therefore more advertising interest). Indeed, the phenomenon is growing. Avaaz calculated that the 100 fake or disinformation posts with the most interactions in 2020 received 162 million views. And even when some of these posts were later labeled as fake, the millions of users who had seen them were never warned that what had passed before their eyes was a lie: the fact-checking and "labeling" system provides no retroactive alert.

A similar problem is also dramatically affecting Instagram, owned by Facebook, especially in the field of health disinformation. A year ago, the photo-sharing platform committed to removing "false claims or conspiracy theories reported by international health organizations and local authorities as potentially harmful to the people who see them." Recent research, such as that of the Center for Countering Digital Hate, has shown, however, that this content continues to flourish and is increasingly difficult to identify and block, because it now travels through the accounts of "micro-influencers": self-styled doctors, healers, hucksters, and experts in wellness, alternative medicine, and oriental practices who hold sway over large communities.

Facebook seems aware that the problem is growing and has now reached an emergency phase. It also knows that this is happening precisely because of the company's deliberate design and market choices. Chief among them: the focus on groups, in particular "civic groups," in which content travels under the moderation radar of the public bulletin boards. Internal research, cited by the Wall Street Journal, indicates that in the US, groups encouraged user polarization and were used to organize and incite the violence that followed the elections. About 70% of the 100 most active civic groups on Facebook had problems with hate speech, misinformation, bullying, and online aggression.

After trying to limit the promotion of these groups in the US, Facebook has now decided to intervene in the rest of the world as well: when you browse Facebook you will no longer see "recommendations" for political or health content groups. This, of course, is not the same as deleting them, since users can still reach them via invitations, friends' recommendations, or keyword searches. Another measure, for groups that have already violated Facebook's standards, is a warning banner shown when someone tries to access the group. To prevent banned groups from re-forming under other names, when a certain number of users from banned groups gather in a new group, its content is put under observation, and the whole group is deleted if content violating Facebook's standards is repeatedly posted. In addition, individual users already flagged for repeated violations in groups are suspended from publishing, and even from inviting other users to groups. All these measures aim to use the weapons of the algorithm to limit the visibility of "toxic actors" without completely dismantling the stage. Whether this is enough to stem the tide remains a very risky bet.
