
Exposing fake videos by borrowing from medicine

by admin

Distinguishing the real from the fake is one of the great challenges of the present and the near future. Today, apps costing just a few euros can already produce videos in which celebrities, friends or ex-partners appear in embarrassing situations; imagine what these fake videos, known as deepfakes, will look like in a few years. We will be flooded with fake interviews, audio and video built from sentences the protagonists never actually spoke, assembled from words they said in very different contexts, even distant in time, and made to sound "true" by artificial-intelligence systems. It is a huge problem, and it affects everyone.

Fortunately, fighting it could also turn out to be excellent business, and some digital giants have decided to take it on. Just a few days ago Intel presented FakeCatcher, its "hunter of fakes", which it bills as "the first real-time deepfake detector, with an accuracy rate of 96%". How is that possible? "It works," Intel explained, "by analyzing the 'blood flow' of the subjects' faces in the video's pixels." The idea comes from a method used in certain medical tests, in which the amount of light absorbed or reflected by blood vessels is measured "as the volume of blood in the microvascular bed of the tissue varies". Put very simply (may the experts forgive us):
when the heart pumps blood into the vessels, they change colour, imperceptibly to the human eye but measurably by a sufficiently powerful machine. In practice, by analyzing the micro-changes that blood flow produces at 32 points on the protagonists' faces, FakeCatcher unmasks the alterations introduced by the counterfeiters (the general principle is sketched in code below).

Already today, according to Forrester Research, the costs associated with deepfake scams exceed $250 million. That is little compared with other kinds of fraud, but we are only at the beginning. As Eric Horvitz, head of Microsoft's research division, puts it, "finding fakes will be increasingly important, given that there are already interactive fake videos in circulation, which give the illusion of talking to a real person. And that's just the beginning." FakeCatcher isn't Intel's only project for debunking fakes: an entire branch of the company, called Trusted Media, is working on other systems for detecting manipulated content, "one of which will be based on the analysis of the gazes of the protagonists".
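Intel has not published FakeCatcher's internals, but the underlying medical idea, remote photoplethysmography (rPPG), can be sketched in a few lines. The snippet below is only a toy illustration under loose assumptions, not Intel's method: it averages the green channel over a single skin region of a face video, filters the result to plausible heart-rate frequencies, and measures how strongly a periodic pulse stands out. The function names, region choice and spectral measure are all ours.

```python
# Toy sketch of remote photoplethysmography (rPPG), the general idea the
# article attributes to FakeCatcher. Intel's actual pipeline is proprietary;
# everything here (region choice, filter band, spectral measure) is an
# illustrative assumption, not Intel's method.
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_signal(frames, roi, fps):
    """Mean green-channel intensity of one skin region over time.

    frames: array of shape (T, H, W, 3) in RGB order.
    roi: (y0, y1, x0, x1) bounds of a skin patch; FakeCatcher reportedly
    samples 32 facial points, we use a single one for illustration.
    """
    y0, y1, x0, x1 = roi
    return frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))

def bandpass(signal, fps, low_hz=0.7, high_hz=4.0):
    """Keep only plausible heart-rate frequencies (roughly 42-240 bpm)."""
    b, a = butter(3, [low_hz, high_hz], btype="band", fs=fps)
    return filtfilt(b, a, signal)

def pulse_strength(frames, roi, fps):
    """Fraction of spectral energy in the single strongest frequency bin.

    A live face tends to show one clear pulse peak; a synthesized face
    tends not to. A real detector would calibrate a decision threshold
    on labelled data rather than eyeball this number.
    """
    sig = bandpass(pulse_signal(frames, roi, fps), fps)
    spectrum = np.abs(np.fft.rfft(sig)) ** 2
    return spectrum.max() / (spectrum.sum() + 1e-12)

# Demo on synthetic frames: a faint 1.2 Hz "pulse" is added to one clip.
fps, T = 30, 300
t = np.arange(T) / fps
rng = np.random.default_rng(0)
no_pulse = rng.normal(128.0, 2.0, size=(T, 64, 64, 3))
with_pulse = no_pulse + 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None, None]

roi = (16, 48, 16, 48)
print(f"with pulse:    {pulse_strength(with_pulse, roi, fps):.2f}")  # near 1.0
print(f"without pulse: {pulse_strength(no_pulse, roi, fps):.2f}")    # much lower
```

A production system would of course detect and track the face, sample many regions, compare their signals for spatial consistency and feed the resulting maps to a trained classifier; the article's description suggests FakeCatcher does something along those lines across its 32 facial points.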
All right, then. But if the fake-hunters are sharpening their weapons, what tells us the counterfeiters aren't doing the same? Intel researcher Ilke Demir has no doubts: "At the moment, no one can fool our system." Yet Forrester Research analyst Rowan Curran told VentureBeat that the war between counterfeiters and fake-hunters is more complicated: "Intel's deepfake detector could be a significant step forward, if it actually works as well as they claim. But the important thing is that its accuracy does not also depend on the ethnicity of the protagonist of the video." The reference is to a 2020 study by researchers at Google and the University of California, Berkeley, which showed that even the best artificial-intelligence systems trained to distinguish real from manipulated content are vulnerable to adversarial attacks, and are sometimes thrown off by the ethnicity of the subjects. Machines, being designed by human beings, can make mistakes and carry racial biases too.
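What does it mean for a detector to be vulnerable to an adversarial attack? The toy sketch below is ours, not taken from the study the article cites, and uses a deliberately simple linear "detector" to show the core trick: a small perturbation aimed exactly along the model's gradient flips the verdict. Attacks on real deepfake detectors apply the same principle to deep networks.

```python
# Toy illustration (ours, not from the cited study) of an adversarial
# attack on a linear "real vs. fake" detector: perturb the input along
# the model's gradient, just enough to cross the decision boundary.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # detector weights (score > 0 means "fake")
x = rng.normal(size=100)   # an input the detector scores

score = float(w @ x)
# Fast-gradient-sign-style step: for a linear model the gradient of the
# score is simply w, so shifting each feature by eps against sign(w)
# (in the direction that reduces |score|) pushes the score past zero.
eps = 1.1 * abs(score) / np.abs(w).sum()   # just enough to flip the sign
x_adv = x - eps * np.sign(w) * np.sign(score)

print(f"original score:     {score:+.3f}")
print(f"adversarial score:  {float(w @ x_adv):+.3f}")  # opposite sign
print(f"per-feature change: {eps:.4f}")  # modest next to unit-scale inputs
```

The higher-dimensional the input, the smaller the per-feature change needed to cross a decision boundary, which is why adversarial perturbations on images and video can be invisible to the human eye.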

