NewsGuard, the American agency that monitors the press and publishes periodic reports on its reliability (here is an example), has debuted the AI-Generated Information Monitoring Center.
Its purpose is to keep an eye on unreliable news sites generated by artificial intelligence, which today number at least 150, and to produce reports, analyses and fact-checking on disinformation and false narratives concerning AI.
The number of these sites has more than tripled in the last month, and NewsGuard is specifically investigating how Russian and Chinese state media have used AI-generated texts to spread false claims, presenting them as true: “China Daily,” reads a note from NewsGuard, “cited ChatGPT as a source supporting the false claim that the United States runs a bio-laboratory in Kazakhstan that is allegedly conducting research on the transmission of viruses from camels to humans in order to develop a biological weapon to be used against China.” In April, another AI-generated site falsely reported the death of current US President Joe Biden.
The monitoring center also provides a constantly updated count of the sites that NewsGuard analysts call UAIN (the acronym stands for Unreliable AI-Generated News): unreliable news sites created, in part or in whole, by artificial intelligence and operating with little or no human oversight. As mentioned, 150 UAIN sites have been identified so far, and analysts will continue to update the Monitoring Center as new sites and AI-generated fake narratives emerge.
In May, when NewsGuard began monitoring UAIN sites, their number was 49: “One of the reasons we created NewsGuard 5 years ago was the ease and low cost of producing content farms,” said Steven Brill, co-CEO of NewsGuard. “Today’s AI-powered news sites are similar to the Macedonian content farms that spread disinformation a few years ago, except that production costs are even lower and these new sources can become even more prolific thanks to advances in artificial intelligence.” Gordon Crovitz, NewsGuard’s other co-CEO, recalled that “most of these sites were created around a business model that generates revenue from programmatic advertising, which seems to work well. Brands have no intention of placing ads on sites that aren’t safe for their reputation, yet the lack of transparency in the programmatic advertising system means that their ads, served by ad-tech companies like Google, still appear on these sites.”
From this point of view, estimates suggest that generative AI will further increase the $2.6 billion in advertising revenue that flows each year to sites publishing disinformation.
These sites, the NewsGuard report reads, usually have generic names, such as Top News Press and World Today News, and often publish a constant stream of articles on various topics, including politics, technology, entertainment and travel. What they omit is that they operate with little or no human oversight and publish articles written largely or entirely by bots, rather than offering journalistic content created and curated in the traditional way.
The articles published by these sites generally feature bland, repetitive language, characteristic of texts produced by generative AI models. According to NewsGuard’s analysis, the articles sometimes contain false information on various topics, such as references to events that never happened or bogus medical treatments.