One man, five children and the risk that AI will transform war into a film set

by admin

In times of war, one of the most important battles, especially in the era of generative artificial intelligence, is the battle against misinformation. It is crucial to fact-check photos that at first glance seem to come from ongoing conflicts but are, in fact, the product of an algorithm, or have been manipulated with computer graphics (or a combination of the two).

Agence France-Presse (AFP) has become a point of reference for anyone with doubts about potentially false images circulating online. AFP’s “debunkers” – a group of journalists who specialize in disproving theories, beliefs or claims not supported by concrete evidence – have spent the last month examining viral images and videos that supposedly document the conflict between Israel and Hamas that began last October 7th.

The work of fact-checkers today is very complex and is not limited to unmasking old photos passed off as new by those intending to spread misinformation.
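One building block of that first task – matching a recycled photo against its archived original – is perceptual hashing, which underpins reverse image search. Below is a minimal sketch in pure Python; the function names are illustrative, the “images” are toy 2-D lists of grayscale values, and real fact-checking workflows rely on services such as TinEye or Google Images rather than hand-rolled code.

```python
# Toy "average hash" (aHash): shrink the image to an 8x8 grid, then set one
# bit per cell depending on whether it is brighter than the grid's mean.
# Near-duplicate images (recompressed, slightly brightened) produce nearly
# identical bit strings, so a small Hamming distance flags a likely match.

def average_hash(pixels, hash_size=8):
    """Downscale a grayscale image (2-D list) via nearest neighbour,
    then emit one bit per cell: 1 if the cell is above the mean."""
    h, w = len(pixels), len(pixels[0])
    small = [
        [pixels[r * h // hash_size][c * w // hash_size]
         for c in range(hash_size)]
        for r in range(hash_size)
    ]
    flat = [v for row in small for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def hamming(a, b):
    """Number of differing bits; small distance => likely the same image."""
    return sum(x != y for x, y in zip(a, b))

# Demo: a synthetic 64x64 gradient, a slightly brightened copy of it,
# and its photographic negative.
base = [[(r * 16 + c * 3) % 256 for c in range(64)] for r in range(64)]
brighter = [[min(255, v + 10) for v in row] for row in base]
different = [[255 - v for v in row] for row in base]

d_same = hamming(average_hash(base), average_hash(brighter))
d_diff = hamming(average_hash(base), average_hash(different))
print(d_same, d_diff)  # prints: 0 64
```

The brightened copy shifts every pixel and the mean by the same amount, so its hash is unchanged, while the inverted image flips every bit: this tolerance to global edits is what lets a fact-checker find a 2015 photo behind a “breaking news” repost.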

The alteration of reality, in fact, is now within everyone’s reach thanks to AI-powered tools – many of them free – that can create new images or videos from a simple text description. A fake photo that once required specific expertise (knowing how to use a photo-editing program, for example) and rather long processing times can now be generated in just a few minutes.

And although tools like OpenAI’s DALL·E 3 and Microsoft’s Bing Image Creator adopt intelligent – and in many cases extremely effective – filters to prevent users from creating realistic violent, offensive or racist shots, some users occasionally find ways to circumvent these protections and obtain a potentially harmful image.

In the war between Israel and Hamas, which divides not only two peoples but also public opinion, a photo can have a profound impact on the perception of what is happening in the Gaza Strip and on its borders. But the work of photojournalists, who risk their lives in the field, is increasingly overshadowed by those who use artificial intelligence to spread fake news.

One of the photos examined by AFP, circulated on social media, shows what AI experts call “telltales”: signs or clues that suggest it was generated by an artificial intelligence.

The shot, in which a man carries five children away from the rubble of a devastated building, apparently located in the Gaza Strip, was published on Facebook last October 27 in a post shared at least 82,000 times. A few days earlier it had circulated widely on Instagram with the hashtags “Gaza-under-attack” and “Free Palestine”. On the social network X, the image was even shared by the Chinese embassy in France.

“AFP photographers and other people in the Gaza Strip took many photos of parents trying to help injured and frightened children,” reads the page dedicated to debunking the shot of the man with the five children, “but experts say that the image circulating online was digitally manipulated and was probably generated by artificial intelligence.”

Looking carefully at the image, in fact, some signs typical of AI-generated images can be spotted, such as imperfections in the limbs. The feet of the child the man is holding by the hand, in particular, appear disproportionate, with toes that are “not clearly defined”. There also seems to be something out of place in the way one child’s arms are wrapped around the man’s neck. Additionally, the area around the adult’s right shoulder looks confused, with a child’s dress appearing to merge into the man’s shirt.

The image nevertheless appears realistic, and it could be the result of multiple techniques: one or more elements may have been generated by an AI, with the work then completed by a human using professional photo-editing programs.

In fact, even for the most advanced artificial intelligence, it is complicated to generate such a believable scene.

We tried to create a similar photo by describing the scene captured in the alleged shot analyzed by AFP to DALL·E 3 from OpenAI, the San Francisco company that created ChatGPT. The result was not only inaccurate – six children generated instead of five – but also extremely glossy.

Image generated by DALL·E 3

Even in our image, the children have more toes than normal and there are poorly defined details (the way the children hold hands, for example). Furthermore, one has the sensation that the shot is, in its drama, too “perfect”. Almost like a film, in short.

But the huge problem that AI poses is that the clues useful for unmasking the images it generates often amount to a spot-the-difference puzzle. You have to study an image long and carefully. And if that is a job for fact-checkers, it certainly isn’t for an ordinary reader, and especially not for the average social media user who sees fake images scroll by among many others in their timeline, and who perhaps neither wants nor has the time – nor the adequate preparation – to dwell on an image’s truthfulness.

In this specific case, the Palestinian people are doubly damaged by the work of AI, because the suffering of civilians – which is real – can be discredited by one or more shots of dubious origin, and by all those who, magnifying glass in hand, feel entitled to say: “It’s a botched scam by the Palestinians.”

But there is more at stake than the story of Israel’s devastating attack: the future of information is at risk.

The misinformation intentionally produced by those who set out to pollute reality, for fun or for propaganda purposes, is now joined by misinformation unexpectedly fed by very popular and theoretically reliable photographic archives.

Adobe – the company that created Photoshop, one of the most famous photo-editing programs, in the late eighties – has put up for sale on Adobe Stock, its image archive used by private individuals as well as organizations and newspapers, a series of AI-generated images matching the search query “conflict between Israel and Palestine”.

The images in question are marked with the note “Generated by AI”.

But this warning, applied to an extremely controversial war that is costing the lives of numerous civilians, does not seem a sufficient disclaimer of responsibility. First of all, because someone could buy the images – deliberately or by mistake – and then pass them off as real. And then because, in this way, skepticism is also fueled toward images that are genuinely real, taken by real photojournalists. In a world where any image is potentially fake, how can you believe in the things that are real?

War, yes, is real. It is certainly not a film set designed to host the perfect shot.
