The Taylor Swift Case: Three Ways to Combat Deepfake Pornography

Last week, sexually explicit deepfake images of pop star Taylor Swift went viral on the internet. Millions of people viewed the fabricated Swift porn, created by unknown perpetrators, on social media platforms such as X. The platform has since taken drastic measures to stop the spread, for example by blocking all search queries for the singer. The phenomenon itself is not new: deepfakes have been around for years. But the rise of generative AI over the last two years has made it much easier to create such content, even of an explicit nature.

This enables an entirely new type of sexual harassment, and a massive violation of personal rights. Women are particularly affected, says Henry Ajder, an AI expert who researches synthetic media. There is some hope, however: new tools and laws should make it harder to weaponize deepfakes of real people, and they could also help hold perpetrators accountable. The new possibilities concern three central areas.

Social platforms automatically review uploaded posts, sometimes with the help of human content moderators, and remove content that violates their guidelines. But this process is patchy at best and lets problematic content slip through, as the fake Swift porn on X shows. On top of that, it is becoming increasingly difficult to distinguish authentic from AI-generated content.
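
What this tiered review looks like in code can be sketched roughly as follows; the class, thresholds, and hash set are illustrative assumptions, not any platform's actual system:

```python
# Hypothetical sketch of a tiered moderation pipeline: automated checks
# first, uncertain cases routed to human moderators. All names and
# thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    image_hash: str
    abuse_score: float  # confidence from an upstream classifier, 0.0 to 1.0

# Hashes of content that moderators have already removed (illustrative).
KNOWN_ABUSE_HASHES = {"9f2c7d"}

def moderate(post: Post) -> str:
    """Route a post to removal, human review, or publication."""
    if post.image_hash in KNOWN_ABUSE_HASHES:
        return "remove"         # exact re-upload of known abusive content
    if post.abuse_score >= 0.95:
        return "remove"         # the classifier is confident on its own
    if post.abuse_score >= 0.60:
        return "human_review"   # uncertain case: queue for a moderator
    return "allow"              # low score: published without further review

print(moderate(Post("p1", "9f2c7d", 0.10)))  # remove (hash match)
print(moderate(Post("p2", "abc123", 0.72)))  # human_review
```

The gap described above sits in that last branch: anything the classifier scores as harmless gets published, whether it is or not.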

A technical solution could be so-called watermarking: invisible digital watermarks that let computers determine whether an image was generated by AI. Google, for example, has developed a system called SynthID that uses neural networks to alter individual pixels, adding a mark imperceptible to the human eye. The watermark is designed to remain readable even if the image is edited or a screenshot is taken. In theory, such tools could help social media companies improve their moderation and identify fake content, including problematic deepfakes, more quickly.
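
The basic principle can be illustrated with a toy example. SynthID's actual method is neural and proprietary; the least-significant-bit sketch below only shows how a machine-readable mark can hide in pixel values without visibly changing the image, and unlike SynthID it would not survive editing or screenshots:

```python
# Toy invisible watermark: hide a fixed bit pattern in the least significant
# bits (LSBs) of the first few pixel values. Purely illustrative; real
# systems like SynthID use neural networks and are designed to survive edits.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # 8-bit tag

def embed(pixels: np.ndarray) -> np.ndarray:
    """Write the watermark into the LSBs of the first 8 values."""
    marked = pixels.copy()
    flat = marked.reshape(-1)
    flat[:len(WATERMARK)] = (flat[:len(WATERMARK)] & 0xFE) | WATERMARK
    return marked

def detect(pixels: np.ndarray) -> bool:
    """Check whether the watermark bits are present."""
    flat = pixels.reshape(-1)
    return bool(np.array_equal(flat[:len(WATERMARK)] & 1, WATERMARK))

image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(detect(embed(image)))  # True: the mark is machine-readable
print(detect(image))         # almost certainly False for an unmarked image
```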

Advantages: Watermarking can be a useful tool for detecting AI-generated content more quickly and identifying harmful posts that should be removed. Standardizing the use of watermarks across all images would also make it harder for malicious actors to create problematic deepfakes, says Sasha Luccioni, a researcher at the AI firm Hugging Face who studies bias in AI systems.

Disadvantages: Watermarking is still experimental and not yet widespread, and a determined attacker can manipulate the marks. Internet companies and software providers also do not apply the technology across the board: users of Google's AI image generator Imagen, for example, can choose whether or not their AI-generated images are watermarked. Such factors limit the technique's usefulness in combating deepfake porn.

At the moment, every image we put online is freely available to anyone who wants to create deepfakes from it. And because the latest image-generating AI systems are so sophisticated, it is becoming increasingly difficult to prove that AI-generated content is actually fake. But a number of new defense tools let users protect their images from this form of digital exploitation: AI systems fed such protected images can only render the people in them in a distorted or unrecognizable form.

One such tool, called PhotoGuard, was developed by researchers at MIT. It works like a kind of shield, changing the pixels in photos in ways that are invisible to the human eye. If someone uses an AI app like the Stable Diffusion image generator to manipulate a PhotoGuard-treated image, the result will look unrealistic. Fawkes, a similar tool developed by researchers at the University of Chicago, cloaks images with hidden digital signals that make it harder for facial recognition software to recognize people.
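
The underlying idea can be sketched roughly as follows, assuming a stand-in encoder (PhotoGuard itself targets the image encoder of a diffusion model such as Stable Diffusion): nudge the pixels within a small, invisible budget so that the model's internal representation of the photo becomes useless:

```python
# Sketch of the adversarial-perturbation idea behind tools like PhotoGuard.
# `encoder` is a stand-in for illustration only; the real tool attacks the
# image encoder of a diffusion model.
import torch

def protect(image: torch.Tensor, encoder, steps: int = 50,
            eps: float = 0.03, lr: float = 0.01) -> torch.Tensor:
    """Return a visually similar image whose embedding is degraded."""
    delta = torch.zeros_like(image, requires_grad=True)
    target = torch.zeros_like(encoder(image))  # push features toward nothing
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(encoder(image + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # step toward the useless target
            delta.clamp_(-eps, eps)          # keep the change imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

# Usage with a frozen stand-in encoder (illustrative only):
encoder = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.Flatten())
for p in encoder.parameters():
    p.requires_grad_(False)
photo = torch.rand(1, 3, 64, 64)
shielded = protect(photo, encoder)
print((shielded - photo).abs().max())  # pixel change stays within eps
```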

Another new tool called Nightshade could help people defend their likeness against exploitation by AI systems. The program, also developed by researchers at the University of Chicago, adds an invisible layer of digital "poison" to images. It was designed to protect artists from having their copyrighted images exploited by AI companies without consent, but in theory it could be applied to any image the owner does not want processed by AI systems. If technology companies scrape training material from the internet without consent, these poisoned images disrupt the resulting AI model: pictures of cats could be learned as dogs, and pictures of Taylor Swift could, if desired, come out as an animal too.
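
Conceptually, this kind of poisoning can be sketched much like the shield above, except that the features are steered toward a different concept instead of being degraded; `feature_net` below is an assumed stand-in, not Nightshade's actual extractor:

```python
# Sketch of Nightshade-style data poisoning: within an invisible pixel
# budget, steer a cat photo's features toward those of a dog photo. Models
# trained on many such pairs learn the wrong association for "cat".
import torch

def poison(cat: torch.Tensor, dog: torch.Tensor, feature_net,
           steps: int = 50, eps: float = 0.03, lr: float = 0.01) -> torch.Tensor:
    """Return a cat image that still looks like a cat but 'reads' like a dog."""
    delta = torch.zeros_like(cat, requires_grad=True)
    with torch.no_grad():
        dog_features = feature_net(dog)  # features of the target concept
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(feature_net(cat + delta), dog_features)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # move features toward "dog"
            delta.clamp_(-eps, eps)          # change stays invisible to humans
            delta.grad.zero_()
    return (cat + delta).clamp(0, 1).detach()
```

Scraped into a training set in sufficient numbers, such images would, in theory, disrupt the model exactly as described above.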

Advantages: These new tools make it harder for attackers to use images to create malicious content. They are especially promising for protecting private individuals from AI image abuse, above all if dating apps and social media companies apply them by default, says AI researcher Ajder. Luccioni agrees: "We should all use Nightshade for every image we post on the internet."

Disadvantages: This protection software only works against the current generation of AI models; there is no guarantee that future versions will not simply override these mechanisms. The technique also does not work on images that are already online, and it is harder to apply to images of celebrities, since famous people rarely control which photos of them are uploaded. "It's going to be a gigantic game of cat and mouse," says Rumman Chowdhury, who runs the ethical AI consultancy Parity Consulting.

Technical measures alone are not enough, in any case. The only thing that will lead to lasting change is stricter regulation, says Luccioni. Viral deepfakes like the ones featuring Taylor Swift have given new impetus to legal efforts to crack down on such content in the US. The White House called the incident "alarming" and urged Congress to take legislative action. To date, the technology has only been regulated piecemeal, on a state-by-state basis: California and Virginia, for example, have banned the production of pornographic deepfakes made without consent, while New York and Virginia also prohibit the distribution of this type of content.

But now something could finally happen at the federal level. A new bipartisan bill that would make sharing fake nude images a felony was recently reintroduced to the U.S. Congress. A deepfake porn scandal at a New Jersey high school has also prompted lawmakers to respond with a bill called the Preventing Deepfakes of Intimate Images Act. The attention that the Swift case has brought to the issue could lead to more bipartisan support.

Lawmakers around the world are also pushing for stricter rules on this technology. Britain's Online Safety Act, passed last year, bans the distribution of deepfake porn, but not its creation; perpetrators face a prison sentence of up to six months.

In the European Union, the problem is being addressed from different angles with a series of new laws. The far-reaching AI Act requires creators of deepfakes to clearly disclose that the material was created by AI, and the Digital Services Act requires tech companies to remove harmful content much more quickly.

China’s deepfake law, which came into force in 2023, goes the furthest. In China, deepfake creators are required to take measures to prevent the use of their services for illegal or harmful purposes, obtain users’ consent before turning their images into deepfakes, authenticate people’s identities, and label AI-generated content.

Advantages: Regulation provides redress for victims, holds perpetrators of nonconsensual deepfake pornography accountable, and has a strong deterrent effect. It also sends a clear message that creating deepfakes without consent is unacceptable. Laws and public awareness campaigns that make it clear that people who create this type of deepfake porn are sexual predators could have a real impact, Ajder says. "It would change the somewhat blasé attitude with which some people view this type of content as not harmful or not a real form of sexual abuse," he says.

Disadvantages: Laws like these will be hard to enforce, says Ajder. With current techniques, victims will struggle to identify the perpetrator and build a case against them. The person creating the deepfakes may also be in another jurisdiction, which makes prosecution more difficult.
