'Perfectly real' deepfakes will arrive in 6 months to a year, technology pioneer Hao Li says
Manipulated images and videos that appear "perfectly real" will be accessible to everyday people in "half-a-year to a year," deepfake pioneer Hao Li said on CNBC on Friday.
"It's still very easy, you can tell from the naked eye most of the deepfakes," Li, an associate professor of computer science at the University of Southern California, said on "Power Lunch."
"But there also are examples that are really, really convincing," Li said, adding those require "sufficient effort" to create.
"Deepfake" refers to the process using computers and machine-learning software to manipulate videos or digital representations to make them seem real, even though they are not.
The technology's spread has fueled concerns that such creations could sow confusion and spread disinformation, especially in the context of global politics. Online disinformation pushed through targeted social-media campaigns and apps such as WhatsApp has already roiled elections around the world.
Li's CNBC appearance follows remarks he made earlier this week at a Massachusetts Institute of Technology conference, where he said he expected perfect deepfakes to arrive in "two to three years."
Asked by CNBC to clarify the shift, Li said in an email that recent developments, in particular the emergence of the wildly popular Chinese app Zao and a growing research focus on the field, had led him to "recalibrate" his timeline.
"Also, in some ways we already know how to do it," Li wrote, adding that it is "only a matter of training with more data and implementing it."
Zao is a face-swapping app that lets users insert themselves into popular TV shows and movies using a single photograph. It is among China's most popular apps, though it has drawn significant privacy concerns.
"Soon, it's going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions," Li said on "Power Lunch."
That is why academic research is important, Li said, pointing to his deepfake-detection work with Hany Farid, a professor at the University of California, Berkeley.
"If you want to be able to detect deepfakes, you have to also see what the limits are," Li said. "If you need to build A.I. frameworks that are capable of detecting things that are extremely real, those have to be trained using these types of technologies, so in some ways it's impossible to detect those if you don't know how they work."
To Li, the issue with deepfakes isn't the existence of the technology that can create them.
He said deepfake technology offers real benefits, for example in the fashion and entertainment industries, and could also improve video conferencing.
"The real question is how can we detect videos where the intention is something that is used to deceive people or something that has a harmful consequence," he said.