In a move designed to counter misinformation during the 2020 presidential election campaign, Facebook bans deepfakes: extremely convincing fake videos created using AI techniques.
Deepfakes use machine learning to analyze the way someone’s face moves as they voice different sounds. AI can then create fake video that mimics the movements needed for any word. This can then be combined with either chopped-up sound clips of the real person, or an impressionist, to create entire fake speeches that are hard to tell from real video footage…
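The audio-to-mouth-movement step can be illustrated with a deliberately simplified toy: learn a linear map from synthetic "audio features" to synthetic "mouth landmarks," then use it to predict mouth shapes for audio the subject never spoke. This is only a sketch of the core idea; real systems use deep neural networks, and every array size and value below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 200 frames of audio features (13 MFCC-like values each)
# and matching mouth-landmark coordinates (10 x/y points = 20 values).
audio_features = rng.normal(size=(200, 13))
true_map = rng.normal(size=(13, 20))
mouth_landmarks = audio_features @ true_map + 0.01 * rng.normal(size=(200, 20))

# "Training": recover the audio -> mouth-shape mapping by least squares.
learned_map, *_ = np.linalg.lstsq(audio_features, mouth_landmarks, rcond=None)

# "Synthesis": predict mouth shapes for audio the person never actually spoke.
new_audio = rng.normal(size=(5, 13))
predicted_mouths = new_audio @ learned_map
print(predicted_mouths.shape)  # (5, 20): one mouth shape per new audio frame
```

The predicted landmarks would then drive rendered video frames, which is the step that makes the output look like authentic footage.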
The University of Washington used this technology as long ago as the summer of 2017 to create a fake Obama speech.
Facebook says it is now banning deepfakes.
People share millions of photos and videos on Facebook every day, creating some of the most compelling and creative visuals on our platform. Some of that content is manipulated, often for benign reasons, like making a video sharper or audio more clear. But there are people who engage in media manipulation in order to mislead.
Manipulations can be made through simple technology like Photoshop or through sophisticated tools that use artificial intelligence or ‘deep learning’ techniques to create videos that distort reality – usually called ‘deepfakes.’ While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases […]
We are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes. Going forward, we will remove misleading manipulated media if it meets the following criteria:
- It has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
- It is the product of artificial intelligence or machine learning that merges, replaces, or superimposes content onto a video, making it appear to be authentic.
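The policy's two-pronged test is a logical AND: a video is removed only if it is both misleading and AI-generated. A minimal sketch of that logic (my own illustration, not Facebook's implementation):

```python
def should_remove(misleadingly_edited: bool, ai_generated: bool) -> bool:
    """Sketch of Facebook's stated removal test: BOTH criteria must hold.

    misleadingly_edited: edited/synthesized beyond clarity or quality
        adjustments, in ways likely to mislead about what was said.
    ai_generated: produced by AI/ML that merges, replaces, or
        superimposes content onto a video.
    """
    return misleadingly_edited and ai_generated

# A misleading clip made with conventional editing tools fails the
# second prong, so it is not removed under this policy.
print(should_remove(misleadingly_edited=True, ai_generated=False))  # False
```

The AND is what creates the loophole discussed next: conventionally edited videos fail the second prong no matter how misleading they are.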
However, that AI criterion means that misleading videos will not be banned if they are created using conventional editing tools. As the Washington Post notes, this means clips like the viral one of House Speaker Nancy Pelosi would not be banned.
In May, someone created a video of Pelosi that made her appear to be intoxicated. This was achieved by slowing the video footage to 75% of normal speed, then adjusting the pitch of her voice to compensate. The result was convincing footage of Pelosi appearing to slur her words and needing a lot of time to formulate her thoughts. As the Post points out, that video would still be permitted under the new policy.
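The arithmetic behind that edit is simple: slowing footage to 75% of normal speed stretches its duration by 1/0.75 and drops the audio pitch by the same factor, so the pitch must be shifted back up to hide the manipulation. A quick sketch (only the 75% figure comes from the article; the rest is standard signal arithmetic):

```python
import math

def slowdown_params(speed_factor: float):
    """Compute the side effects of slowing footage to `speed_factor`
    of normal speed: duration stretches by 1/speed_factor, pitch drops
    by the same factor, so a compensating upward pitch shift is needed."""
    duration_multiplier = 1 / speed_factor
    pitch_correction = 1 / speed_factor
    semitones_up = 12 * math.log2(pitch_correction)
    return duration_multiplier, pitch_correction, semitones_up

dur, pitch, semis = slowdown_params(0.75)
print(f"duration x{dur:.2f}, pitch shift x{pitch:.2f} (+{semis:.1f} semitones)")
# duration x1.33, pitch shift x1.33 (+5.0 semitones)
```

A roughly five-semitone correction is trivial in consumer audio software, which underlines how little technical skill this kind of manipulation requires.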
Facebook does say this is not the only step it takes.
Videos that don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages. If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.
The social network argues that this approach is better than simply removing them.
If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labeling them as false, we’re providing people with important information and context.
It is, though, hard to see why Facebook bans deepfakes but not fake videos created without the use of AI. Why not either ban all fake video, or allow it all but label it as fake? The method used to create the fake seems an odd criterion for determining how it is treated.