When Russia began its invasion of Ukraine, Ukrainian President Volodymyr Zelenskyy warned the world that digital disinformation would follow. Only weeks later, in mid-March 2022, a deepfake of Zelenskyy appeared in which he told his soldiers to lay down their arms. The video was quickly debunked and removed.
To protect the Ukrainian leader and his voice, researchers at UC Berkeley built a facial and gestural model of Zelenskyy, providing a digital-detection tool to help distinguish the real leader from a fake.
Recognize these folks? You shouldn't. They are all AI-generated.
A bit more lighthearted, and hitting the internet around the same time, three videos of Tom Cruise went viral: playing sports, enjoying a bubble-gum-filled lollipop, and appearing to take a fall.
These, too, were deepfakes. Investigating the origin of the peculiar TikTok videos, Fortune.com discovered that visual effects specialist Chris Ume was behind them, superimposing the star's likeness and mannerisms onto impersonator Miles Fisher.
“The ability to alter and ‘fake’ audio, images, and video is becoming easier with tools widely available on the internet,” said Scott Grigsby, director of data science at PAR Government. “Humorous clips of celebrities are one thing, but the Zelenskyy fake was the first time a leader was targeted during a time of war.”
For more than a decade, PAR Government has been leading the way in this niche technology, building expertise before the general public had even heard of deepfakes. Its data assurance work began with Fingerprinting and Identification of a Digital Camera (FINDCamera), the first step in PAR's march toward digital dominance.
FINDCamera allows law enforcement to take a digital image or video and match it to the exact camera used – not the kind of camera, but the actual shooting camera. Think of it like ballistics testing except instead of tracing a round through a barrel, we’re tracing an image through a lens.
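The general idea behind camera-fingerprinting tools like FINDCamera is that every sensor imprints a fixed, per-pixel noise pattern on its photos: average out the noise residuals of many images from one camera to estimate its pattern, then correlate a new photo's residual against it. The sketch below is a toy illustration of that idea only, not PAR's actual algorithm; the box-blur denoiser and the simulated "cameras" are invented for the demo.

```python
import numpy as np

def noise_residual(img, k=3):
    """Crude denoiser: subtract a k x k box blur to isolate high-frequency sensor noise."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return img - blurred / (k * k)

def fingerprint(images):
    """Estimate a camera's noise pattern by averaging residuals over many photos."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    """Normalized cross-correlation between two residual patterns."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Simulate two cameras as fixed multiplicative noise patterns.
rng = np.random.default_rng(42)
shape = (64, 64)
pattern_a = rng.normal(0, 1, shape)
pattern_b = rng.normal(0, 1, shape)

def shoot(pattern):
    brightness = rng.uniform(50, 200)  # flat synthetic "scene"
    return brightness * (1 + 0.02 * pattern) + rng.normal(0, 2, shape)

fp_a = fingerprint([shoot(pattern_a) for _ in range(20)])
fp_b = fingerprint([shoot(pattern_b) for _ in range(20)])

query = noise_residual(shoot(pattern_a))  # new photo from camera A
corr_a = correlation(query, fp_a)
corr_b = correlation(query, fp_b)
print(f"match vs camera A: {corr_a:.3f}, vs camera B: {corr_b:.3f}")
```

Run on the synthetic data, the query photo correlates strongly with its own camera's fingerprint and only weakly with the other's, which is the matching signal a real forensic tool would threshold on.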
PAR Government went on to produce high-quality, mission-specific training and validation data for evaluating the machine learning algorithms and models that other researchers create. Today, PAR Government builds models that help customers detect deepfakes and other false information spread through tweets, blog posts, and fake news articles.
“Deepfakes and other forms of disinformation and misinformation are becoming increasingly common on social media sites like Facebook, YouTube, TikTok, and others,” Dr. Grigsby continued. “While there is a large effort to develop forensic tools to help uncover things like deepfakes, it is up to each individual to be knowledgeable, vigilant, and skeptical of internet information that does not come from reliable sources.”
Four Ways to Protect Yourself from Deepfakes
Be mindful: The first step in protecting yourself from a deepfake is simply knowing they exist, and that they can be highly realistic.
Be skeptical: Always ask questions and be critical of what you see, read, and hear. Possible signs of a deepfake include:
A message that seems out of character for the organization or individual
A highly emotive message designed to provoke a strong reaction
Sensational content that begs to be reshared
Be proactive and be prepared: Check the facts by comparing the information to data available from trustworthy and verifiable sources. If you suspect a deepfake:
Do a reverse image search online
Do not share, forward, or retweet anything you think is false or misleading
If the content is concerning, inform the appropriate law enforcement officials
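The reverse-image-search step above works because search services index perceptual hashes: compact signatures that stay nearly identical when an image is resized, brightened, or recompressed. The sketch below shows the simplest such scheme, an "average hash" (aHash); it is a toy illustration, not any particular service's algorithm, and the sample "images" are synthetic gradients.

```python
import numpy as np

def average_hash(img, size=8):
    """Reduce the image to a size x size grid of block means, then threshold
    each cell against the grid's overall mean to get a 64-bit signature."""
    h, w = img.shape
    img = img[:h - h % size, :w - w % size]  # crop to a clean multiple
    grid = img.reshape(size, img.shape[0] // size,
                       size, img.shape[1] // size).mean(axis=(1, 3))
    return (grid > grid.mean()).astype(np.uint8).ravel()

def hamming(h1, h2):
    """Number of differing bits; near-duplicates score low."""
    return int(np.count_nonzero(h1 != h2))

# Synthetic test images: a horizontal gradient, a re-encoded copy of it
# (brightened plus noise), and an unrelated vertical gradient.
rng = np.random.default_rng(0)
original = np.tile(np.linspace(0, 255, 64), (64, 1))
recompressed = original * 1.1 + rng.normal(0, 5, original.shape)
unrelated = original.T

d_same = hamming(average_hash(original), average_hash(recompressed))
d_diff = hamming(average_hash(original), average_hash(unrelated))
print(f"near-duplicate distance: {d_same}, unrelated distance: {d_diff}")
```

Because each bit is thresholded against the image's own mean, uniform brightness or contrast changes leave the hash untouched, so the doctored copy stays within a few bits of the original while the unrelated image lands far away.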
Finally, be thoughtful: Always think before you share. (But share this with everyone who wants to get smarter on this topic.)