
Fake images created with AI, first moves against the Wild West: what the C2PA standard is and how it works

by admin


We can no longer trust what we see on social media or in chats: did Joe Biden really attend a meeting with the army's general staff about the war in Gaza? Did tractors protesting against EU policies really flood the center of Paris with bales of hay? These are two real examples of AI-generated images that have circulated in recent days. It is a serious problem with serious political implications, but it also affects ordinary people: AI is used to create non-consensual nude images, to the detriment of celebrities (most recently Taylor Swift) and of students (as happened in various Italian middle and high schools in recent months).
That is why big tech companies are adopting a media-labeling standard en masse: a solution that describes who created an image or video, when and how it was created, and how credible its source is.
The standard now taking hold is C2PA; Google has just joined, following Microsoft, Adobe, OpenAI and Meta.

What is C2PA

C2PA stands for Coalition for Content Provenance and Authenticity, a name that explains itself. It is a group created by Adobe and Microsoft to find ways of certifying the provenance of online media: where it comes from and how it was created. The standard is a "label" containing key information that helps us distinguish the true from the false, such as whether an artificial intelligence platform was used to create a piece of media.
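To make the idea concrete, the kind of information such a label carries can be sketched as a simple record. The field names below are illustrative, loosely modeled on the public C2PA manifest format (a claim generator plus a list of signed assertions); they are not the exact specification.

```python
import json

# Illustrative sketch of the provenance "label" described in the article.
# Field names are loosely modeled on a C2PA manifest; tool and file names
# are invented for the example, and the signature is a placeholder.
label = {
    "claim_generator": "ExampleAIImageTool/1.0",  # software that produced the media
    "title": "street-protest.jpg",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC digital source type used to flag AI-generated media
                        "digitalSourceType": "trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
    "signature": "<cryptographic signature of the issuer>",  # placeholder
}

print(json.dumps(label, indent=2))
```

In the real standard this record is cryptographically signed and bound to the file, so that tampering with either the image or the label can be detected.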

Objectives of C2PA and this type of label

The aim is to combat deepfakes (targeting women or politicians, for example) and misinformation (as in the tractor case). The label allows users and platforms to recognize AI-created images and to question their veracity. By reading the label, a social platform can automatically warn users that a certain image was made with AI and indicate its origin. Imagine a near future in which X, Facebook and the rest display a forest of such stickers to help us tell the true from the false. The US government and the European AI Act also push for labels of this kind (also called watermarks or stamps), precisely to counter the risk of misinformation.


How does it work in practice?

Technically, the standard consists of metadata: basic information added alongside an image's other properties that platforms can read automatically. Images already carry metadata such as origin and date. Google has recently committed to labeling images created with its artificial intelligence tools, such as Gemini, which are increasingly integrated into smartphones (like the recently launched Samsung Galaxy S24), with this standard. The idea is that Google will also allow these standard credentials to be adopted and recognized more widely across its services (such as YouTube, the Chrome browser and Android smartphones). Google has stated, moreover, that the standard complements its other initiatives: Google DeepMind's SynthID, the "About this image" search tool and YouTube's labels for altered or synthetic content.

A few days earlier, Meta announced new tools to identify C2PA metadata in images uploaded to Facebook, Instagram and Threads, so as to automatically label AI-generated images on its platforms. OpenAI announced this week that it will add C2PA metadata to images created with ChatGPT and DALL-E 3. Leica has added the ability to embed C2PA authentication metadata on its products.
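The platform-side check the article describes can be sketched as follows. This is a hedged illustration, not an implementation of the C2PA specification: the manifest structure is loosely based on the public format (assertions carrying a "c2pa.actions" label and an IPTC digital source type), and the manifest contents here are invented for the example.

```python
# Hedged sketch: how a platform might inspect C2PA-style metadata to decide
# whether to apply an "AI-generated" sticker. Structure is illustrative,
# loosely based on the C2PA manifest format, not the exact spec.

# IPTC digital source type code used to mark AI-generated media
AI_SOURCE_TYPE = "trainedAlgorithmicMedia"

def is_ai_generated(manifest: dict) -> bool:
    """Return True if any 'created' action declares an AI digital source type."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if AI_SOURCE_TYPE in action.get("digitalSourceType", ""):
                return True
    return False

# Example manifest, as an image-generation tool might (illustratively) embed it:
manifest = {
    "claim_generator": "ExampleImageGenerator/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": "http://cv.iptc.org/newscodes/"
                                             "digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

print("Apply AI label:", is_ai_generated(manifest))
```

A real platform would first verify the manifest's cryptographic signature before trusting any of these fields; the sketch above only covers the reading step.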

The limits of C2PA and other labels

For now C2PA is available only for images, but the coalition is working on video and audio files. Fake audio is already circulating with counterfeit voices of famous people pushing bitcoin purchases or other scams (the voice of TV presenter Mara Venier was recently used in this way).
