Tech companies are failing in the fight against deepfakes

In a world where it is unclear what is real and what is fake, tech companies see democracy at risk. Although they promised joint action to protect elections, they are slow to implement it. Yet it could be done differently.

This AI-generated image of Donald Trump is circulating on social media.

Mark Kaye / Facebook

A picture of Donald Trump at an African-American Christmas party recently circulated on Facebook. He beams, appearing young, energetic and close to the people. The country needs a president like that, right?

Very few social media users are likely to give the picture more than this brief thought; after all, they usually scroll on quickly.

Only those who look at the picture for longer notice that it is artificially generated. Trump’s hands look deformed, he appears to have a kind of webbing between two fingers, and the nail is missing from his index finger. In addition, an illegible character can be seen on a man’s baseball cap in the background, a typical artifact of AI-generated images.

Trump’s hand is deformed: an indication that the image is AI-generated.

It’s currently a super election year and politicians are discovering the advantages of AI image generators. A few clicks, a few text entries, and you have a picture of a situation that never existed. Trump praying in church. Trump running from the police. Trump among African Americans.

All of these images serve narratives that either help Trump or are intended to harm him. The image of the Christmas party suggests to the Black voters who helped get Joe Biden into office in 2020 that Trump is a sociable guy. Perhaps, then, increasingly electable for African Americans too?

Tech companies made an empty promise

The example shows that social networks have not yet found a mechanism to label computer-generated images as such. In mid-February, twenty tech companies promised a coordinated approach to detecting election-related fakes, among them the operators of arch-rival platforms Instagram, TikTok, X and YouTube.

When announcing the agreement, they chose drastic words: “Deepfakes are a new and major threat to democracy,” said a Microsoft employee. “As soon as you doubt everything you see, democracy has a problem,” said one from Adobe. And the OpenAI representative said: “We are in a historic election year. It’s about time [for a coordinated approach].”


Little has happened since the announcement: artificially generated images are still being distributed on social networks without an AI label or warning. Many of them, like Trump’s photo, can quickly be revealed as fake upon closer inspection – but not all of them.

One example comes from the National Council elections in Slovakia in September. An audio file, supposedly the recording of a telephone conversation, spread on social networks. In it, Michal Simecka, the leader of the liberal party “Progressive Slovakia”, appears to explain to a confidant how he manipulated the election.

The convincingly faked audio file was published two days before the election, at a time when Slovak media traditionally no longer report on the race.

Between Trump’s cheap fake and Simecka’s deepfake lies a wide range of more or less well-made forgeries. No matter how quickly they are exposed, they all have an effect: we know from psychological research that images affect feelings more strongly than text and are more memorable.

A flood of fake information is spreading online, right in one of the most important election years in history. In 2024, more people around the world are called to the polls than ever before, including in the European Union and the USA.

Requirement: AI images must bear a watermark

Ramak Molavi, an independent AI expert, analyzed the most common methods that could be used to combat the flood of fake images on the Internet. She says that AI images must be made recognizable as such – ideally when they are created. This is the only way voters can make informed decisions about candidates.

Watermarks are key to labeling AI images. They are familiar as small marks within an image, perhaps a semi-transparent logo or a symbol somewhere near the edge. Such visible markings are easy to manipulate: it is enough to crop the image so that the watermark disappears. Anyone who wants to pass off an AI image as real can do so without much technical knowledge.


But there are also watermarks that survive such manipulation attempts unscathed, for example cryptographic watermarks. They are invisible to the human eye, but not to machines: where people see colors and shapes, machines see only individual pixels, small squares that are each assigned a specific color code.

If a watermark is inserted into an AI image, selected pixels receive a new pixel color value (hex value). For example, in the image of Donald Trump, a pixel with his blonde hair color would go from the original value #d8b96a to the changed value #d6b772.

To people, the picture looks exactly the same, but to machines it has become a different one. Which pixels are changed, and by how much, is determined by a pattern that appears random to the human eye but can be deciphered with a cryptographic key.
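To make the principle concrete, here is a minimal Python sketch of such a keyed pixel perturbation, using numpy and Pillow. It is an illustration only, not SynthID or any real watermarking scheme; the function name, the fraction of pixels touched and the nudge strength are assumptions chosen for this example.

# A simplified sketch of a keyed, invisible pixel watermark, as described
# above. Illustration only, not SynthID or any real scheme: a secret key
# seeds a pseudorandom pattern that decides which pixels get nudged and in
# which direction; without the key, the changes look like noise.
import numpy as np
from PIL import Image

def embed_watermark(img, key, frac=0.01, strength=2):
    """Nudge a key-selected fraction of pixels by +/- strength per channel."""
    px = np.asarray(img.convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng(key)            # pattern derived from the secret key
    mask = rng.random(px.shape[:2]) < frac      # which pixels are touched
    signs = rng.choice((-1, 1), size=px.shape)  # direction of each nudge
    px = px + np.where(mask[..., None], signs * strength, 0)
    return Image.fromarray(np.clip(px, 0, 255).astype(np.uint8))

# Demo: a tiny image filled with the blonde hair color #d8b96a from the text.
img = Image.new("RGB", (4, 4), "#d8b96a")
marked = embed_watermark(img, key=42, frac=0.5)
before, after = np.asarray(img), np.asarray(marked)
for y, x in np.argwhere((before != after).any(axis=2)):
    print(f"pixel ({x},{y}): #{bytes(before[y, x]).hex()} -> #{bytes(after[y, x]).hex()}")

Because only a detector holding the key knows which pixels to inspect, the changed values are statistically indistinguishable from noise for everyone else, which is what makes the mark invisible to people yet readable by machines.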

Example of a watermark pattern. The black dots represent pixels that are being changed.

The pattern of the cryptographic watermark remains even if the image is filtered, cropped or rotated, according to a DeepMind blog post. The Google subsidiary has developed a cryptographic watermark called SynthID, which is currently being tested in a pilot phase by a small number of Google customers.

Platforms would have to check every image when uploading

Security researcher Molavi says that if AI image and video generators like Dall-E, Midjourney or Sora watermarked their products, platforms like Facebook, Instagram or TikTok could, in a second step, ensure that fakes do not spread uncontrollably. This would work by automatically checking every uploaded image for an AI watermark and, if one is found, clearly labeling it for users.
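What such an upload check could look like is sketched below, under the assumption that generators embed a detectable watermark. The detector function is a hypothetical stand-in; no cross-platform watermark API of this kind exists yet.

# A sketch of the upload check described above. `detect_ai_watermark` is a
# hypothetical stand-in for a real decoder (a SynthID-style detector, say),
# not an existing API; returning False keeps the sketch runnable.
from dataclasses import dataclass
from PIL import Image

@dataclass
class UploadResult:
    accepted: bool
    label: str | None  # text shown to users next to the image, if any

def detect_ai_watermark(img: Image.Image) -> bool:
    # Stub: a real platform would run the image through a cryptographic
    # watermark decoder supplied by the generator's vendor.
    return False

def handle_upload(path: str) -> UploadResult:
    img = Image.open(path)
    if detect_ai_watermark(img):
        # Watermark found: publish the image, but clearly mark it for users.
        return UploadResult(accepted=True, label="AI-generated content")
    return UploadResult(accepted=True, label=None)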

“The vast majority of AI-generated content could probably be found and labeled this way,” says Florian Tramèr, who researches the security of AI systems at ETH Zurich. However, says Tramèr, if someone with enough technical knowledge creates a deepfake, it is likely to remain undetected. Cryptographic watermarks, too, can be manipulated, but doing so requires so much technical knowledge that most users are unlikely to succeed.

“It’s a game of cat and mouse,” says Tramèr. As soon as a new, more secure verification method is found, there are already people learning to get around it.

Molavi sees it similarly. There is no method that can completely prevent deepfakes like Simecka’s. But methods like cryptographic watermarking “would be better than nothing,” she believes. However, because there is currently no overarching standard that all tech companies could follow, AI-generated content is spreading on the Internet en masse without labels or warnings.


Is trust in the state eroding?

For voters, this means: the AI image revolution is making it increasingly unclear what is real and what is fake. Karsten Donnay, political scientist and professor at the University of Zurich, says: “We have to expect that people’s trust in information will continue to erode. And we know from research that, as a result, their trust in the media and ultimately in the state can also sink.”

What follows from this has not yet been sufficiently researched for European democracies, says Donnay. It is therefore still unclear whether voters in these post-truth times will vote more for populist parties or increasingly choose strong leaders.

“One thing is clear,” says political scientist Donnay: “If trust in information erodes, people are more likely to seek out information that already fits their political worldview.” This shrinks the common factual basis on which political discourse can take place.

The flood of fake photos on social networks is not good news, even if it is still unclear how well AI labels would work. Either way, the message of an image sticks, whether or not a label marks it as artificial.

Just like the image of Donald Trump among African Americans. It stays online, without an AI label. It was published by conservative presenter Mark Kaye, who produces his own radio show in a studio in Florida. He never claimed the picture was real, Kaye told the BBC.

“If someone votes after seeing a photo on a Facebook page, the problem is with the person, not the post,” says Kaye. And: “I don’t go out and take pictures of things that really happen. I’m a storyteller.”
