- Flora Carmichael
- BBC reporter
A new study reveals that a network of more than 350 fake social media accounts is pushing pro-China narratives and attempting to discredit individuals seen as opponents of the Chinese government.
A report by the Centre for Information Resilience (CIR) says the purpose of the operation is to delegitimise the West and enhance China’s influence and image overseas.
The study, shared with the BBC, also found that the fake accounts circulated satirical cartoons. Among the targets was the exiled Chinese tycoon Guo Wengui, a frequent critic of the Chinese government.
Other figures mocked in the cartoons include Yan Limeng, a scientist who claimed to have revealed the origins of the coronavirus, and Steve Bannon, Donald Trump’s former political strategist.
These figures have been accused of spreading disinformation, including false claims about the COVID-19 pandemic.
The accounts are active on Twitter, Facebook, Instagram and YouTube. Some use profile photos generated by artificial intelligence, while others are hijacked accounts that had previously posted in other languages.
There is no conclusive evidence that the network is linked to the Chinese government, but according to CIR, a non-profit organization dedicated to countering disinformation, it resembles pro-China networks previously taken down by Twitter and Facebook.
The accounts amplify pro-China narratives similar to those promoted by the Chinese authorities and state media.
Most of the content they share is focused on the United States, with particular attention to controversial issues such as gun laws and racial politics.
One narrative the network promotes is that the United States has a terrible human rights record. Posts on the fake accounts cited the killing of George Floyd and discrimination against Asian people as examples.
Some accounts repeatedly denied that human rights violations are taking place in Xinjiang, calling the allegations “lies fabricated by the United States and the West.”
“The purpose of this network seems to be to weaken the legitimacy of the West by amplifying pro-China narratives,” said Benjamin Strick, author of the CIR report.
How was the network discovered?
CIR started from previously identified accounts, traced the hashtags they favoured on social media, and then unearthed more accounts showing signs of participating in the influence operation.
Telltale signs in their activity include heavy promotional posting and repeated use of the same hashtags. Newly created accounts with random-looking usernames and few followers also attract attention.
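Signals like these lend themselves to simple automated triage. The sketch below is a hypothetical illustration of that kind of heuristic scoring, not CIR’s actual method; the account fields, regular expression, and thresholds are assumptions made up for the example.

```python
import re
from datetime import date

def suspicion_score(account: dict) -> int:
    """Count simple signals of coordinated inauthentic behaviour (illustrative)."""
    score = 0
    # Newly created accounts attract attention.
    if (date.today() - account["created"]).days < 90:
        score += 1
    # Usernames that look auto-generated: a name ending in a long digit run.
    if re.search(r"\d{6,}$", account["username"]):
        score += 1
    # Very few followers.
    if account["followers"] < 10:
        score += 1
    # Heavy reuse of the same hashtag across posts.
    tags = [t for post in account["posts"] for t in post["hashtags"]]
    if tags and max(tags.count(t) for t in set(tags)) / len(account["posts"]) > 0.8:
        score += 1
    return score

# A made-up account showing three of the four signals
# (random-looking username, few followers, repeated hashtag).
demo = {
    "username": "jane84739201",
    "created": date(2021, 5, 1),
    "followers": 3,
    "posts": [{"hashtags": ["#Xinjiang"]}, {"hashtags": ["#Xinjiang"]}],
}
print(suspicion_score(demo))
```

A real investigation would of course combine many more signals and manual review; a single heuristic score like this can only flag candidates for closer inspection.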
Some accounts post original content, while others simply share, like, and comment on it to help it reach a wider audience.
This type of activity is often called “astroturfing”, because it aims to create the illusion of a grassroots movement.
People who don’t exist
Many of the fake profiles use photos generated by artificial intelligence, a relatively new phenomenon in which computers create realistic images of people who don’t exist. Unlike stolen photos of real people, images produced by StyleGAN, a machine-learning model known as a generative adversarial network, cannot be traced with a reverse image search.
As users and platforms grow more vigilant about suspicious accounts, fake profile pictures have become increasingly common in disinformation campaigns.
CIR used several techniques to spot the AI-generated avatars. Synthetic images place the eyes in the same position in every picture, so aligning a batch of profile photos can expose a set of fakes.
Profile photos collected at random normally show far more variety in cropping and eye placement.
Other telltale signs include fuzzy edges around the hair, teeth at odd angles, and blurred objects around the face.
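The eye-alignment check can be illustrated with a toy experiment: average a batch of images and see whether a fixed feature survives. This is a minimal sketch using synthetic grayscale arrays as stand-ins for downloaded avatars; the image size and the fixed “eye” coordinates are assumptions invented for the demo, not details from the report.

```python
import numpy as np

def average_image(images):
    """Pixel-wise mean of equally sized grayscale images."""
    return np.mean(np.stack(images), axis=0)

rng = np.random.default_rng(0)

def fake_avatar():
    # Stand-in for a StyleGAN face: "eyes" always at the same two pixels.
    img = rng.uniform(0.4, 0.6, size=(64, 64))
    img[24, 20] = img[24, 44] = 0.0  # fixed eye positions
    return img

def real_avatar():
    # Stand-in for a real photo: eye positions vary with crop and pose.
    img = rng.uniform(0.4, 0.6, size=(64, 64))
    r, c = rng.integers(16, 48, size=2)
    img[r, c] = img[r, (c + 20) % 64] = 0.0
    return img

fakes = average_image([fake_avatar() for _ in range(50)])
reals = average_image([real_avatar() for _ in range(50)])

# In the averaged fakes the fixed eye pixels stay dark; in the averaged
# real photos every pixel blurs toward the background level.
print(fakes[24, 20] < 0.1, reals.min() > 0.1)
```

The same idea scales to real avatars: align the downloaded pictures on a grid (or average them) and a batch of StyleGAN outputs reveals itself by eyes that sit at identical coordinates in every image.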
Many Facebook accounts believed to belong to the network appear to have Turkish names. These accounts may once have belonged to real people but were later stolen or sold and given new profile pictures.
Hijacked accounts also spread pro-China comments on YouTube. Some used to post in English or German and, after years of silence, suddenly began publishing Chinese-language content from China’s state media.
CIR shared its research results with relevant social media platforms.
Facebook has deleted the accounts identified in the study.
A Facebook spokesperson said: “In September 2019, we removed a spam network that mainly posted lifestyle and political news headlines and party content in Chinese. The network had almost no engagement on our platform. We will continue to work with researchers and industry peers to detect and stop its attempts to return, such as the accounts mentioned in this report.”
YouTube also shut down accounts in the network that violated its community guidelines.
Twitter stated that it has now deleted almost all of the accounts identified by CIR, as well as some other accounts with similar behavior. The company said the investigation is still ongoing.
“When we can confidently link an information operation we identify to state actors, whether domestic or foreign, we disclose it in our public archive.”
Illustration: Simon Martin