
Stanford’s alarm: AI is also trained on thousands of child pornography images

by admin

Artificial intelligence is trained on data provided by humans. And that data can sometimes constitute a huge problem. It is not just a question of the intellectual property of the images, texts or music from which the machines are “inspired”. There is an even greater danger, disturbing and hidden from view, concerning child pornography.

The Stanford Internet Observatory found more than 3,200 images of suspected child abuse among those collected by LAION, a huge database on which the foundation models of popular generative artificial intelligences such as Stable Diffusion have been trained. These models allow anyone to create realistic photos starting from a textual description.

Researchers from Stanford University worked together with the Canadian Centre for Child Protection and other organizations fighting child abuse. Of the images found in the database, at least a thousand were confirmed to be child pornographic material.

Following the publication of the Stanford investigation, the managers of the LAION database decided to temporarily suspend access to the data: “Our policy is zero tolerance for illegal content. So, to be safe, we have removed the LAION datasets to verify that they are safe before making them available online again.”

LAION (Large-scale Artificial Intelligence Open Network) holds 5.8 billion images in its database. Artificial intelligence draws inspiration from this enormous archive: in practice, the AI uses the details and characteristics it learns from existing images to produce similar, yet technically new, ones.
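To give an idea of how little stands between this archive and a finished image, here is a minimal sketch of how a model like Stable Diffusion is typically queried, assuming the open-source `diffusers` library and the publicly released v1.5 weights (the specific model name is used here only for illustration):

```python
# Minimal text-to-image sketch with an open Stable Diffusion model.
# Assumes the open-source `diffusers` library and public v1.5 weights;
# shown only to illustrate how little input the model needs.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a model trained on LAION data
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a GPU

# A single natural-language prompt is the entire "program".
image = pipe("a photorealistic mountain lake at sunrise").images[0]
image.save("output.png")
```

One sentence of text is the whole interface; everything else the model fills in from what it learned on the archive.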

This does not mean that generative AI allows anyone to generate child pornography. Microsoft’s Bing Image Creator, Midjourney and OpenAI’s Dall-E 3 – some of the most famous tools open to the public – have very effective filters that prevent users from obtaining any kind of nude image or one depicting sexual intercourse.
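The commercial filters are proprietary classifiers that inspect both the prompt and the generated image. As a rough illustration of the idea only, here is a deliberately naive keyword check (every name and term below is hypothetical, not any vendor’s actual mechanism):

```python
# Deliberately naive illustration of prompt filtering.
# Real services use trained classifiers on the prompt AND the generated
# image, not a keyword list; this is a hypothetical sketch only.
BLOCKED_TERMS = {"nude", "naked", "explicit"}  # illustrative only

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing any blocked term."""
    words = set(prompt.lower().split())
    return words.isdisjoint(BLOCKED_TERMS)

print(is_prompt_allowed("a mountain lake at sunrise"))  # True
print(is_prompt_allowed("a nude figure"))               # False
```

The brittleness of surface-level checks like this one is exactly what makes rephrasing a request effective, as the next paragraph describes.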


Users, however, have demonstrated in many cases that they know how to circumvent the terms of service of the tech giants that develop and market artificial intelligence. Sometimes it is a simple matter of semantics: asking for the same thing in different words can be enough to obtain the desired result.


The presence of illegal images in the databases the AI draws on is therefore worrying, even in small quantities compared to the billions of photos available. Not just because it can theoretically allow less tightly controlled AI systems to generate explicit images, but also because real victims continue to be harmed: the abuse they suffered propagates again through the algorithms, taking on different forms each time.

Added to this is the ease with which violent, defamatory or, indeed, explicit images can be obtained today.

Generative AI, which imitates human creativity, needs only a simple prompt – a text command – to create amazing texts or stunning images. Those who use it therefore need no IT skills; a basic command of language is enough. In the past, by contrast, creating credible explicit content required at least some familiarity with professional photo-editing programs.
