
Labeling requirements for AI content according to the current and future legal situation

by admin

Many people agree that artificial intelligence is the future. There is far less agreement, however, on how to handle AI-generated content legally. Particularly when such content is used in marketing or even traded as a commodity, the question arises whether there are certain information obligations towards users and purchasers. Do they have to be made aware that a piece of content originates from an AI? The following article sheds light on the current legal situation and also provides an outlook on future legal obligations under the planned and already largely finalized EU AI Regulation.

I. AI labeling requirement: current legal situation

Under current law, content created purely by AI cannot be protected by copyright, as it lacks the personal intellectual creation required for copyright to arise. The situation is different for content in which AI is used merely as an aid.

1.) Be careful when naming the author: misleading through supposed copyright

Since no one holds the copyright to purely AI-generated works, it is tempting to simply put your own name on them.

But be careful, because anyone who places their name under an AI-generated work conveys that it is protected by copyright and that the signatory is its creator.

If this is not actually the case, this constitutes misleading conduct that is in principle anti-competitive and can therefore give rise to a warning (Abmahnung) under Section 5 UWG.

Whether it can be seen at first glance that the content was generated using AI is irrelevant.

The only decisive factor for the potential to mislead, and thus for anti-competitiveness, is whether attributing the work to a named person creates the impression of creative ownership.

For example, publishing an AI-generated text naming a human author would be anti-competitive.

The publication of AI-generated work by graphic designers under their own name would be equally misleading.

2.) Identification of the AI origin?

The current legal situation does not provide for any regulation according to which an AI work must be labeled as such.

On the other hand, if there is no labeling, the overall impression may be given that the content is a copyrighted work.

For example, if someone publishes short stories on their blog that they otherwise write themselves, an AI-generated text published there may well be taken for a copyrighted work even without any information about its author.


Anyone who usually publishes works protected by copyright is therefore well advised to label directly integrated AI content as such, in order not to give the misleading impression that the AI content is also the result of a personal intellectual creation.

However, labeling is not necessary if the overall impression is that the AI-generated content is merely an accompanying accessory to one or more personal intellectual creations and therefore cannot reasonably be attributed to the actual creator of the work.

This is particularly true if the AI content belongs to a different category of work than the one for which the author is known.

An author who publishes self-written texts will therefore generally not have to label AI images that thematically accompany or illustrate these texts as “AI content”.

II. Outlook on the future legal situation: EU AI Regulation

In the future, the European regulation laying down harmonized rules on artificial intelligence (AI Regulation) is intended to provide greater legal certainty when dealing with AI.

The new law will be the world's first comprehensive set of rules for AI.

The EU Parliament adopted the regulation on Wednesday, March 13, 2024, with 523 votes in favor, 46 against and 49 abstentions, which means that it is essentially considered adopted.

The adoption by the Council of the European Union, which is still required, is considered a mere formality.

Once it has been formally adopted and published, the regulation will enter into force and will generally apply directly 24 months later, probably from 2026.

1.) Aim of the AI Regulation

The new regulation aims to ensure safety and respect for fundamental rights, as well as to promote innovation in the use of artificial intelligence.

Fundamental rights, democracy and the rule of law as well as ecological sustainability should be protected above all from high-risk AI systems. At the same time, the regulation is intended to stimulate innovation and thus bring the EU into a global leadership role.


The regulation sets out certain obligations for AI systems, depending on the respective possible risks and impacts.

In addition to harmonized rules for the placing on the market, putting into service and use of AI systems in the Union, rules on prohibited practices, specific requirements for high-risk AI systems, measures to promote innovation and rules for market monitoring and market surveillance, the new regulation also establishes certain transparency requirements.

2.) New transparency requirements

The transparency requirements apply to general-purpose AI systems and the models on which they are based.

One of the requirements is to comply with EU copyright law and to publish detailed summaries of the content used for training.

Additional requirements apply to the more powerful models that could pose systemic risks. For example, model assessments must be carried out, systemic risks must be assessed and mitigated, and incidents must be reported.

As part of the new transparency requirements, the regulation also requires providers to ensure “that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are dealing with an AI”.

This obligation only ceases to apply if the interaction with the AI is “obvious given the circumstances and context of use”.

The regulation also sets out harmonized transparency requirements for AI systems used to generate or manipulate image, sound or video content.

According to these, AI-generated or manipulated images, audio and video content (“deepfakes”) must be clearly labeled as such.

3.) In detail: AI labeling requirement according to Art. 52 Paragraph 3

Article 52 paragraph 3 of the regulation is decisive for the planned labeling requirement.

Article 52 establishes transparency obligations for certain AI systems, with paragraph 3 of the article regulating in particular the labeling obligation for AI-generated works.

According to this, “Users of an AI system that generates or manipulates image, sound or video content that noticeably resembles real people, objects, places or other entities or events and that would falsely appear to a person as real or truthful (“deepfake”) […] [must] disclose that the content has been artificially created or manipulated.”

Article 52 paragraph 3 further stipulates that the labeling requirement does not apply “if the use is permitted by law for the detection, prevention, investigation and prosecution of criminal offenses, or is necessary for the exercise of the rights to freedom of expression and freedom of the arts and sciences guaranteed by the Charter of Fundamental Rights of the European Union, and appropriate safeguards for the rights and freedoms of third parties are in place.”


When the EU AI Regulation comes into force, it will introduce a general labeling requirement for AI-generated image, audio and video content. This requirement must be observed by anyone who uses the AI content covered by the regulation as part of an independent professional or commercial activity, if otherwise the impression could arise that the content was created by a human and depicts real events.

III. Conclusion

According to the current legal situation, there is no legal obligation to label AI content. However, labeling may be necessary in individual cases to avoid misleading, namely where AI content is mixed with human works in such a way that the AI works could also mistakenly be assumed to be human creations.

However, the new EU AI Regulation, which is expected to apply from 2026, will require the labeling of certain AI content. So-called “deepfakes”, i.e. AI content in the form of image, sound or video material that noticeably resembles real people, circumstances or events and appears real or truthful to people, will have to be explicitly marked.

However, the question of whether and how AI works that are not deepfakes should be labeled remains open given the current status of the regulation.

In this respect, the potential of unlabeled AI content to mislead through mental association with a human creation is likely to remain relevant in the future.

Tip: Do you have any questions about the article? Feel free to discuss it with us in the Entrepreneur group of the IT law firm on Facebook.

Image source:
Rawpixel.com
