The problem with AI fake detectors

by admin

There is a surprising twist in the debate about AI-faked images: the AI detectors that are supposed to expose fake images are themselves becoming a vehicle for disinformation. The (incidentally very interesting) new US tech news site 404 draws attention to a case that made the rounds after the Hamas massacres in Israel: the disturbing image said to show a baby burned by Hamas was dismissed as a fake. Many people believed that, too – after all, the detector site “AI or Not” had classified the image as an AI product.

The problem is: the detectors don’t really work well.

The internet is filled with these kinds of tools, and we have no idea how they work exactly, or if they’re accurate. It’s also not clear what we’re supposed to take away from an automated system that spits out a figure that suggests an image is basically about as likely to be AI as it is not. When a black-box system that is at best 90 percent accurate suggests an image is 52 percent likely to be artificial, or 52 percent likely to be real, anyone can use that tool to make any argument they want.
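As a rough back-of-the-envelope illustration (my own sketch, not from the 404 article; all numbers are assumed), Bayes’ rule shows why a near-50-percent verdict from such a tool carries almost no evidential weight, while even a genuinely 90-percent-accurate detector only shifts the odds so far:

```python
# Hedged illustration: how much a detector verdict should actually move our belief.
# All figures (prior, accuracy rates) are assumptions chosen for the example.

def posterior_fake(prior_fake, true_positive_rate, false_positive_rate):
    """Bayes' rule: P(image is fake | detector says 'fake')."""
    p_flagged = (true_positive_rate * prior_fake
                 + false_positive_rate * (1 - prior_fake))
    return (true_positive_rate * prior_fake) / p_flagged

# A detector that is right about 90% of the time and flags an image:
print(posterior_fake(prior_fake=0.5, true_positive_rate=0.9, false_positive_rate=0.1))
# ~0.90 -- a strong flag from a 90%-accurate tool is at least somewhat informative.

# A 52%-style verdict is barely better than a coin flip:
print(posterior_fake(prior_fake=0.5, true_positive_rate=0.52, false_positive_rate=0.48))
# ~0.52 -- essentially no evidence either way, yet it can be quoted as "proof".
```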

Experts found no evidence that the image was manipulated. The rumor, originally spread by a single large X account, circulated through the propaganda-soaked atmosphere of social media and from there out into the world.

The idea is important for future discussions about automated disinformation: it is not only the image itself that can be manipulated. Its discrediting by a detector can also be used manipulatively, to brand the other side as liars. So the existence of AI may not be the problem at all, but rather the belief that any imaginable manipulation is possible with the technology.
