Did you see that? Beware of Viral Videos
When Russia began its invasion of Ukraine, Ukrainian President Volodymyr Zelenskyy warned the world about the spread of digital disinformation. Only weeks later, in mid-March, a deepfake of Zelenskyy appeared in which he told his soldiers to lay down their arms. The video was debunked and removed.
To protect the leader of Ukraine and his voice, researchers at UC Berkeley worked on a facial and gestural model, providing digital detection tools to help distinguish the real from the fake.
“The ability to alter and ‘fake’ audio, images, and video is becoming easier with tools widely available on the internet,” said Scott Grigsby, director of data science at PAR Government. “Humorous clips of celebrities are one thing, but the Zelenskyy fake was the first time a leader was targeted during a time of war.”
For more than a decade, PAR Government has been leading the way in this niche technology, building expertise long before the general public had even heard of deepfakes. Starting with data-assurance work on Fingerprinting and Identification of a Digital Camera (FINDCamera), PAR began the march toward digital dominance.
FINDCamera allows law enforcement to take a digital image or video and match it to the exact camera used – not the kind of camera, but the actual shooting camera. Think of it like ballistics testing except instead of tracing a round through a barrel, we’re tracing an image through a lens.
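The "ballistics for cameras" idea rests on a well-known forensic technique: every image sensor imprints a faint, stable noise pattern on its photos, and averaging the noise residuals of several known photos yields a fingerprint that can be correlated against a questioned image. The sketch below illustrates that general approach only; it is not FINDCamera's actual method, and the simple box-blur denoiser and function names are assumptions made for the example.

```python
import numpy as np

def noise_residual(img):
    """Approximate the sensor-noise residual of a grayscale image by
    subtracting a crude box-blur denoised version (a stand-in for the
    stronger denoisers real forensic tools use)."""
    padded = np.pad(img, 1, mode="edge")
    # 3x3 box blur: average each pixel with its eight neighbors
    blurred = sum(
        padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return img - blurred

def camera_fingerprint(images):
    """Estimate a camera's fingerprint by averaging the residuals of
    images known to come from that camera; scene content averages out,
    the sensor's fixed noise pattern remains."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    """Normalized cross-correlation between two residual patterns;
    a high score suggests the same physical sensor."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

In practice, a questioned image's residual is correlated against fingerprints from candidate cameras, and the match with the highest score (above a calibrated threshold) identifies the shooting camera.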
PAR Government then produced high-quality, mission-specific training and validation data for evaluating the machine learning algorithms and models that other researchers create. Currently, PAR Government builds models that help customers detect deepfakes and other false content, such as fabricated tweets, blog postings, and fake news.
PAR Government advances the art and the science of media forensics by curating one of the world's largest high-provenance data corpora. We deploy it to accelerate not only our understanding of the threat, but also our ability to counter it.
“Deepfakes and other disinformation and misinformation are becoming increasingly common on social media sites like Facebook, YouTube, and TikTok,” Dr. Grigsby continued. “While there is a large effort to develop forensic tools to help discover things like deepfakes, it is up to each individual to be knowledgeable, vigilant, and skeptical of information on the internet not coming from reliable sources.”
Here are Four Ways to Protect Yourself from Deepfakes
Be mindful: The first step in protecting yourself from falling for a deepfake is simply knowing they exist. Deepfakes can be highly realistic.
Be skeptical: Always ask questions and be critical of what you see, read, and hear, and watch for possible signs of a deepfake.
Be proactive and prepared: Check the facts by comparing the information to data available from trustworthy and verifiable sources, especially if you suspect a deepfake.
Finally, be thoughtful: Always think before you share. (But share this with everyone who wants to get smarter on this topic.)