Facebook bans deepfakes ahead of US presidential election
Facebook has announced a new policy banning AI-manipulated videos, known as 'deepfakes', that could mislead viewers into thinking they are watching someone say something they never actually said.
The policy has been implemented ahead of the 2020 US presidential election as the social media platform tries to stem fake news and voter manipulation, and in a bid to prove to regulators that it has improved since the last presidential contest four years ago.
In 2016, Facebook allowed Russian hackers to target US voters with fake news advertisements and permitted apps operating through the platform to sell users' data to the analytics firm Cambridge Analytica, which harvested the data to create targeted political ads for the Donald Trump campaign.
The deepfake policy explicitly covers only misinformation produced using AI, meaning "shallow fakes" - videos made using conventional editing tools - are still allowed on the platform, even though they are frequently just as misleading.
Misleading videos will be removed from Facebook and Instagram if they meet two criteria.
The first is whether the video "has been edited or synthesised […] in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say."
The second is whether it "is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic."