Facebook on Monday announced it would remove "deepfake" videos that can fool viewers into believing someone said something they did not say. Such videos have been a growing concern heading into the 2020 elections.
But some low-tech videos that distort what a speaker is trying to convey, or how they say it, could still make it onto the platform.
Facebook says it will remove "misleading manipulated media" if it meets the following criteria:
- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
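The two criteria above can be sketched as a simple boolean check. This is only a minimal illustration of the policy as the article describes it, not Facebook's actual logic; the field names are hypothetical, and the sketch assumes both criteria must be met, with the satire and parody exemption mentioned below:

```python
from dataclasses import dataclass

@dataclass
class Video:
    # Hypothetical flags standing in for the policy's two criteria
    misleadingly_edited: bool   # edited/synthesized in ways not apparent to an average person
    ai_generated: bool          # product of AI/ML merging, replacing, or superimposing content
    is_satire_or_parody: bool   # satire and parody remain allowed

def should_remove(video: Video) -> bool:
    """Remove only if both criteria are met and the video is not satire or parody."""
    if video.is_satire_or_parody:
        return False
    return video.misleadingly_edited and video.ai_generated

# A slowed-down clip made with basic editing tools (no AI) would not qualify:
print(should_remove(Video(misleadingly_edited=True, ai_generated=False,
                          is_satire_or_parody=False)))  # False
```

Under this reading, a video like the slowed-down Pelosi clip discussed below fails the second criterion, which is why it would stay up.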
The Guardian and The Washington Post point out one type of video that doesn't fall under this policy: videos in which the speaker's own words or mannerisms are still used but the video or audio is slowed down or sped up. The outlets noted a video of House Speaker Nancy Pelosi, widely shared last year, in which she appeared to be slurring her words. The changes made to that video could be produced with basic video and audio editing tools, without any AI.
Videos that have been edited to omit words or change their order will also still be allowed. So while such a video would show the speaker's actual words, the person posting it could strip out context to make it sound like the speaker is saying something else.
Satire and parody will still be allowed on Facebook.
Facebook adds that videos that don't meet its standards for removal are still eligible for review by its third-party fact-checkers.
Facebook admits that deepfakes are among the most challenging forms of manipulated media to detect. The company says it has also partnered with the news agency Reuters to train newsrooms to identify deepfake videos and other manipulated media.