Media Literacy: Deepfakes
This guide provides an overview of media literacy topics; this page focuses on recognizing and evaluating deepfake videos.
What are deepfakes?
"The term deepfake is typically used to refer to a video that has been edited using an algorithm to replace the person in the original video with someone else (especially a public figure) in a way that makes the video look authentic."
From Merriam-Webster, Words We're Watching: 'Deepfake', July 31, 2019.
Evaluation Tips
- Source.
- Consider the source. Who is claiming this video as their own? Where did it come from? What motives might they have for creating or sharing it?
- Corroboration.
- Are other credible sources sharing this information? Check multiple sources that cover multiple viewpoints.
- Blinking.
- Deepfake videos are often built from thousands of pictures of a person, and those pictures usually show the person with their eyes open. Does the person in the video rarely blink, or does the blinking look unnatural? (A simple programmatic check is sketched after this list.)
- Blur.
- Do you see a blur around the edges of the face or lips? If something passes over the face, like a hand, is there a blur, flicker, or other glitch? Does the person appear to have a second set of eyebrows or other doubled facial features? These can be signs that a different face has been laid over the source video. Slow down or pause the video to check for these cues.
- Fact-checking.
- Check whether credible fact-checking organizations have investigated the video. Sites like Snopes, PolitiFact, and FactCheck.org can help you debunk a fake video.
- Confirmation bias.
- Consider your own biases. Just because a video makes claims that line up with your own views does not mean it is true. Check yourself and consult other sources, especially when something seems too outrageous, too good, or too bad to be true.
- Wait.
- In today's breaking news environment, news is disseminated quickly. It takes time for disinformation to be caught. Before you react or share, give the experts time to evaluate it. Then check back.
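For readers who want to experiment, the blink cue above can be checked programmatically. The sketch below is a minimal illustration, not a production detector: it assumes the optional mediapipe and opencv-python packages are installed, uses a commonly cited set of FaceMesh eye landmarks to compute an eye aspect ratio (EAR) per frame, and reports the blink rate of a clip. The landmark indices, EAR threshold, and file name are assumptions for illustration only.

```python
# Minimal sketch: estimate blink rate from a video with MediaPipe FaceMesh.
# Assumptions (not from this guide): mediapipe and opencv-python are installed,
# the landmark indices below are the commonly used eye-aspect-ratio (EAR) set,
# and EAR < 0.21 roughly indicates a closed eye. Tune before any real use.
import cv2
import mediapipe as mp
import numpy as np

RIGHT_EYE = [33, 160, 158, 133, 153, 144]   # outer, top x2, inner, bottom x2
LEFT_EYE = [362, 385, 387, 263, 373, 380]
EAR_CLOSED = 0.21                            # assumed "eye closed" threshold

def eye_aspect_ratio(pts):
    # pts: six (x, y) points ordered outer, top1, top2, inner, bottom2, bottom1
    p = np.array(pts)
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

def blink_rate(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, closed, frames = 0, False, 0
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames += 1
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue
            lm = result.multi_face_landmarks[0].landmark
            ear = np.mean([
                eye_aspect_ratio([(lm[i].x, lm[i].y) for i in RIGHT_EYE]),
                eye_aspect_ratio([(lm[i].x, lm[i].y) for i in LEFT_EYE]),
            ])
            if ear < EAR_CLOSED and not closed:
                blinks, closed = blinks + 1, True   # count each eye closure once
            elif ear >= EAR_CLOSED:
                closed = False
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# People typically blink roughly 15-20 times per minute; a much lower rate in a
# talking-head video is one weak signal worth a closer look, not proof of a fake.
print(blink_rate("suspect_clip.mp4"))  # hypothetical file name
```

A low blink rate is only one cue among many, so treat the output as a prompt for the other checks above (source, corroboration, fact-checking), not as a verdict.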
Detection Tools
- Sensity: Analyze suspicious files and URLs to detect types of AI-generated visual threats. Note that sample submissions must contain human faces, because every analysis looks for signs of manipulation and synthesis in the face area.
Learn More
Academic Articles:
- S. Agarwal, H. Farid, Y. Gu, M. He, K. Nagano, and H. Li. "Protecting World Leaders Against Deep Fakes." Workshop on Media Forensics at CVPR, Long Beach, CA, 2019.
- Yang, Xin, et al. "Exposing Deep Fakes Using Inconsistent Head Poses." ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 17 Apr. 2019, doi:10.1109/icassp.2019.8683164.
- Fletcher, John. "Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post-Fact Performance." Theatre Journal, vol. 70, no. 4, Dec. 2018, p. 455. EBSCOhost, doi:10.1353/tj.2018.0097.
- "Imagining Deceptive Deepfakes: An Ethnographic Exploration of Fake Videos." Master's thesis, ESST – Society, Science and Technology in Europe, University of Oslo, 2018.
- Suwajanakorn, Supasorn, et al. "Synthesizing Obama: Learning Lip Sync from Audio." ACM Transactions on Graphics, vol. 36, no. 4, 2017, pp. 1–13, doi:10.1145/3072959.3073640.