Artificial intelligence, content transparency

by admin
Lately, artificial intelligence has acquired an entirely new ability: generating content on request. Language models like ChatGPT can produce convincing text, and systems like Midjourney turn textual descriptions into realistic images. AI systems that have found widespread use in society have always exerted significant influence, but the arrival of AI capable of generating content opens the door to a completely new form of influence, because content persists over time and affects everyone who encounters it.

This matters because content generated by people is the fundamental material on which culture is built. In a recent article, Yuval Noah Harari observes that language “is the stuff of which almost all human culture is made”: laws, political systems, and mythologies are created and transmitted through language. Other components of culture reside in images and music. All of these forms of content can now be convincingly produced by machines. For Harari, generative AI has “hacked the operating system of our civilization.” For this reason, it deserves special scrutiny.

Calls for AI transparency have so far focused on processes: the way AI systems are designed and developed, and the context in which they operate. But if these systems can put lasting content into the world, transparency must extend to that content as well. In particular, people should be able to know whether a given piece of content they encounter was created by a human or by a machine.

Without this minimum level of transparency, the consequences for culture could be disastrous. Harari suggests that a culture of artificial content could trap humans “behind a curtain of illusions” and put democracy at risk. Daniel Dennett argues that the uncontrolled proliferation of artificial content “will undermine the trust on which society depends” and “risks destroying our civilization.” Both authors call for urgent precautionary measures, which require that AI-generated content be identifiable as such.

It is useful to distinguish between transparency governing direct interactions with AI (for example, conversations with chatbots) and transparency about content generated by AI. The first issue is already the subject of current or imminent legislation in several jurisdictions; here we address the second. A user can take content produced in a direct interaction with AI and pass it on without revealing its origin, for example by posting it on social media or submitting it to a teacher or a colleague. What mechanisms could guarantee the transparency of AI-generated content in these “indirect” encounters?
