Deepfakes are coming. Is Big Tech ready?

Here's why it's so hard to spot deepfakes

Mark Zuckerberg insisted at Facebook's annual developer conference earlier this year that his company "will never be unprepared ... again" for meddling and disinformation efforts like those run by Russian trolls on its platform during the run-up to the 2016 election.

Yet the social media behemoth and its competitors may still be ill-equipped for their next great challenge: fake videos that look so real you'd believe former President Obama really did call President Trump a "dipsh*t."

Platforms like Facebook, Twitter and YouTube have recently moved to deal with the threats posed by misinformation and meddling they didn't see coming, but now they face an emerging form of disinformation they know is on the horizon: deepfakes -- doctored videos that will eventually fool even the sharpest eyes. As the technology to create deepfakes advances, experts say, it won't be long before it's used to foment discord or even sway an election.

"The opportunity for malicious liars is going to grow by leaps and bounds," said Bobby Chesney, professor and associate dean of the University of Texas School of Law who has been closely researching deepfakes.

Twitter, YouTube, and Reddit also are natural targets for deepfakes, and you can expect to see fringe platforms and porn websites flooded with them. Yet asked by CNNMoney just what they're doing to prepare for this looming problem, none of the major social media platforms would discuss it in detail. The companies wouldn't name the researchers they're working with, how much money they'll pour into detection, or even say how many people they've assigned to figure it out.

None of them offered much more than vague explanations along the lines of Facebook's promise to "protect the community from real-world harm."

That's not to say they aren't working on it at all. Facebook, for example, said it is collaborating with academics to see how their research might be applied to the platform. One researcher told CNNMoney that Google has reached out to him. But in the meantime, developers continue to refine the technology and make the videos it produces more convincing.


The word "deepfakes" refers to using deep learning, a type of machine learning, to add anyone's face and voice to video. It has been mostly found on the internet's dark corners, where some people have used it to insert ex-girlfriends and celebrities into pornography. But BuzzFeed provided a glimpse of a possible future in April when it created a video that supposedly showed Obama mocking Trump, but in reality, Obama's face was superimposed onto footage of Hollywood filmmaker Jordan Peele using deepfake technology.

Deepfakes could pose a greater threat than the fake news and Photoshopped memes that littered the 2016 presidential election because they can be hard to spot and because people are -- for now -- inclined to believe that video is real. But the danger isn't just individual videos spreading misinformation: it's the possibility that such videos will convince people they simply can't trust anything they read, hear or see unless it supports the opinions they already hold.

Experts say fake videos that will be all but impossible to identify as such are as little as 12 months away.

Jonathon Morgan, the CEO of New Knowledge, which helps companies fight misinformation campaigns and has done some analysis for CNN, sees troll farms using AI to create and deliver fake videos tailored to social media users' specific biases. That's exactly what the Russian-backed trolls at the Internet Research Agency did during the last presidential election, but without the added punch of faked video.


Aviv Ovadya, chief technologist at the Center for Social Media Responsibility, said social media companies are "still at the early stages of addressing 2016-era misinformation," and "it's very likely there won't be any real infrastructure in place" to combat deepfakes any time soon.

Many platforms already enforce rules around nudity that could apply to any faked porn they find. But none of them has guidelines governing deepfakes in general, said Sam Woolley of the Digital Intelligence Lab at the Institute for the Future. The problem goes beyond silly GIFs or satirical videos to more troubling content like, say, a faked video of a politician or businessman in a compromising situation, or hoax footage supposedly showing soldiers committing war crimes. "These have potentially larger implications for society and democracy," Woolley said.

Companies like Facebook and Twitter often argue that they are platforms, not publishers, and note that Section 230 of the 1996 Communications Decency Act absolves them of responsibility for content posted by users.

The recent uproar over the hate speech, fake news and disinformation polluting tech platforms has led companies to take more action -- even if the CEOs leading the effort have been inconsistent, even downright baffling, in explaining themselves. Still, the companies haven't addressed doctored videos specifically.

"It's not a question of if they take action on deepfakes, or if they begin to moderate this content," Woolley said. "It's a question of when. At the moment, the movement is pretty slim."

Infrastructure to combat faked videos may come from the Pentagon: The Defense Advanced Research Projects Agency is midway through a four-year effort to develop tools to identify deepfake videos and other doctored images. Experts in the field said algorithms that analyze biometric signals are one promising tool.

Satya Venneti of Carnegie Mellon University has seen some success identifying fakes by analyzing the pulses of people in deepfake demonstration videos. People typically exhibit similar blood flow in their forehead, cheeks, and neck. But she found "widely varying heart rate signals" in spoofed videos, an artifact that appears when one video has been layered with images from another.


In some cases, she saw heart rates of 57 to 60 beats per minute in the cheeks and 108 in the forehead. "We don't expect to see such wide differences," she said.
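Venneti's exact pipeline isn't public, but the idea can be sketched in a few lines: estimate a heart rate from the tiny frame-to-frame color changes in two face regions, then flag the video if the regions disagree. Everything here -- the fixed regions, the frequency band, the 15 bpm threshold -- is an illustrative assumption, not her method.

```python
# A minimal sketch of the pulse-consistency idea: real faces should show
# roughly the same heart rate in every region; spliced-in faces often don't.
import numpy as np

def estimate_bpm(green_trace, fps):
    """Estimate heart rate (bpm) from a per-frame mean green-channel trace."""
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()                          # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)    # plausible pulse: 42-240 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

def pulse_consistent(forehead_trace, cheek_trace, fps, max_gap_bpm=15.0):
    """Flag the video if the two regions disagree on the pulse."""
    gap = abs(estimate_bpm(forehead_trace, fps) - estimate_bpm(cheek_trace, fps))
    return gap <= max_gap_bpm

# Usage: feed in mean green values sampled from fixed forehead/cheek boxes.
# A 57-60 bpm cheek against a 108 bpm forehead, as above, fails the check.
```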

Siwei Lyu, director of the Computer Vision and Machine Learning Lab at the University at Albany, SUNY, outlined another trick in a paper he co-wrote in June: look for regular blinks. "If a video is 30 seconds and you never see the person blink, that is suspicious," he told CNNMoney.
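Lyu's paper used a trained neural network to spot missing blinks; a simpler classical stand-in is the eye-aspect-ratio heuristic, sketched below under the assumption that per-frame eye landmarks are already available from a library such as dlib or MediaPipe. The thresholds are illustrative.

```python
# A blink-counting sketch using the eye aspect ratio (EAR): the ratio drops
# sharply when the eye closes, so long stretches without a drop are a red flag.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, in the standard order."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_thresh=0.2):
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_thresh and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_thresh:
            closed = False
    return blinks

def suspicious(ear_per_frame, fps, min_blinks_per_min=2):
    """People blink every few seconds; a long blink-free video stands out."""
    minutes = len(ear_per_frame) / (fps * 60.0)
    return count_blinks(ear_per_frame) < min_blinks_per_min * minutes
```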

After his paper was released, Lyu said, deepfake developers used his research to improve their fakes and get around his detection system. Now his team is exploring other ways of identifying fakes, but he declined to elaborate because he doesn't want to give away anything that might help people create more convincing fakes. "We're on the front lines," he said, adding that Google has expressed interest in collaborating with him.

GIF-hosting platform Gfycat uses an algorithm to examine faces frame-by-frame to ensure nothing's been doctored. Still, tech news site Motherboard found that some deepfakes eluded detection by Gfycat's algorithms.
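Gfycat hasn't described its algorithm in detail. One plausible frame-by-frame check, sketched here with the open-source face_recognition library, is to compare each frame's face embedding against the video's typical embedding and flag frames that drift, as a spliced-in face might. The distance threshold is an assumption.

```python
# A frame-by-frame consistency sketch: frames whose face embedding strays far
# from the video's median embedding may contain a swapped-in face.
import numpy as np
import face_recognition

def frame_outliers(frames, max_distance=0.45):
    """frames: list of RGB numpy arrays. Returns indices of suspect frames."""
    encodings = []
    for frame in frames:
        faces = face_recognition.face_encodings(frame)
        encodings.append(faces[0] if faces else None)
    valid = [e for e in encodings if e is not None]
    if not valid:
        return []
    median = np.median(np.stack(valid), axis=0)   # a robust "typical face"
    return [i for i, e in enumerate(encodings)
            if e is not None and np.linalg.norm(e - median) > max_distance]
```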

Gfycat told CNNMoney that removing content flagged by its algorithm can take as long as a few days.


Woolley and other experts said Facebook, Twitter and other platforms can get ahead of the problem by forging a broad partnership to tackle it together. For a model, the industry can look to how it handles child pornography: a universal hashing system, implemented across platforms, identifies and blocks known material.
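PhotoDNA, the hashing system used for child-abuse imagery, is proprietary, but the shape of the idea can be shown with an open perceptual hash: platforms contribute hashes of frames from known fakes to a shared blocklist, then check uploads against it. The hash choice and distance threshold below are assumptions.

```python
# A shared-blocklist sketch using the open imagehash library. Perceptual
# hashes survive re-encoding and resizing, so near matches count as hits.
import imagehash
from PIL import Image

shared_blocklist = set()   # in practice, a database shared across platforms

def register_known_fake(frame_path):
    shared_blocklist.add(imagehash.phash(Image.open(frame_path)))

def is_known_fake(frame_path, max_distance=6):
    h = imagehash.phash(Image.open(frame_path))
    # Subtracting two hashes gives their Hamming distance.
    return any(h - known <= max_distance for known in shared_blocklist)
```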

Experts have also pointed to blockchain and other secure public verification systems as possible tools for marking the origins of videos and photos.
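At its core, such a system could work like this: a camera or publisher signs a hash of the video at capture, and any platform can later verify the file hasn't been altered. The sketch below uses an Ed25519 signature from Python's cryptography package; how keys or hashes would actually be anchored on a blockchain is assumed away here.

```python
# A provenance sketch: sign a video's hash at the source, verify it anywhere.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_video(path, private_key):
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return digest, private_key.sign(digest)

def verify_video(path, signature, public_key):
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        public_key.verify(signature, digest)   # raises if the file changed
        return True
    except InvalidSignature:
        return False

# Usage: the publisher keeps private_key; the matching public key (or its
# fingerprint, recorded on a public ledger) lets anyone check the file.
private_key = Ed25519PrivateKey.generate()
```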

But the issue of deepfakes will need to be approached broadly and across industries, not platform by platform.

"Platforms are definitely part of the solution -- but it's not just the platforms. Platforms only control distribution in some domain," Ovadya said.
