Deepfakes on the Rise: How to Protect Yourself
- By Erich Kron
- Oct 25, 2024
Senator Benjamin Cardin, chairman of the U.S. Senate Foreign Relations Committee, is the most recent public figure to have experienced a targeted social engineering attack. The attack began when the Senator’s office received an email purporting to be from former Ukrainian foreign minister Dmytro Kuleba, whom Cardin already knew.
A virtual meeting was set up, and Kuleba appeared on video; his voice and appearance seemed consistent with the Senator’s previous meetings. The conversation turned suspicious when Kuleba asked Cardin politically charged questions about U.S. attitudes toward firing long-range missiles into Russian territory. The Senator and his staff ended the call as soon as they realized they were speaking with either an imposter or some sort of synthetic deepfake.
This isn’t the first instance of social engineering in which synthetic media was weaponized. Earlier this year, a deepfake CFO conned a well-known design firm out of $25 million. A few months ago, advertising giant WPP reported an incident in which audio and video of its CEO were cloned from YouTube footage in an attempt to solicit money and sensitive information. Last year, a senior executive at a leading cryptocurrency firm disclosed how scammers created a deepfake hologram to dupe victims on a Zoom call.
Why Is Synthetic Media Becoming a Weapon of Choice For Scammers?
Tools and techniques for creating synthetic media (or altering authentic media) have been around for decades. Previously, these tools were accessible only to people with specialized skills and software, and producing a sophisticated fake took days or weeks. With free online applications and advances in computational power and AI, synthetic media can now be whipped up with little technical expertise.
Another reason synthetic media is gaining popularity in scams and phishing schemes is that humans are far more likely to believe and trust something or someone they see or hear than something they read. Audiovisual content bears a much closer resemblance to reality and is perceived as more credible than text or email.
The remote work phenomenon is also partially to blame. As more organizations and employees grow accustomed to meeting virtually, in-person identity verification has largely disappeared. This empowers cybercriminals and state-sponsored attackers to carry out advanced persistent threats (APTs), hard-to-detect social engineering attacks and online fraud.
How Can Synthetic Media Affect Organizations?
Threat actors can operationalize synthetic media in a variety of ways. The most common and damaging threats to organizations include:
- Financial Scams: Threat actors have long used phishing emails and messages to impersonate executives (a.k.a. business email compromise), causing billions of dollars in losses every year. With synthetic media now widely available, bad actors can make C-level impersonations even more realistic and believable, enabling them to design targeted and damaging social engineering attacks.
- Access and Infiltration: Malicious actors can employ deepfakes to deceive employees and gain access to company data, systems, and information. They can use deepfakes to manipulate employees into revealing their credentials, or even secure jobs under fake identities to gain insider access to data and systems.
- Reputational Damage: Threat actors can fabricate synthetic media portraying senior leaders in objectionable or questionable circumstances, with the goal of spreading disinformation, assassinating someone’s character, or damaging the reputation and brand of an organization. Deepfakes can be disseminated across social media platforms faster than they can be blocked or disputed, which can have massive implications for stock prices. Threat actors can also leverage deepfakes to blackmail and extort organizations and executives.
How Can Organizations Protect Themselves From Synthetic Media Risks?
While the media and governments are doing what they can to regulate platforms and flag deepfakes, organizations share responsibility for protecting themselves, their stakeholders and society as a whole. Here are some best practices to help get started:
- Educate Employees on How to Identify Synthetic Media: Teach employees to conduct a visual examination when they join online meetings. Look for signs of manipulation such as lip-syncing issues; unnatural head, torso or eye movements; a lack of neck-muscle flexing; jitters; and other physical oddities such as feet not touching the ground or unusual speech patterns. Human review can also be complemented with coarse automated checks (see the illustrative sketch after this list).
- Protect Identities of High-Priority Individuals: To protect senior executives from being impersonated with synthetic media, organizations can consider adopting authentication techniques such as digital watermarking or using open-source tools developed by the Content Authenticity Initiative (a toy watermarking sketch follows this list).
- Practice Continuous Cybersecurity Training: Regular phishing simulation exercises, ‘spot the deepfake’ contests, security fire drills and rehearsals can help motivate and engage users while also strengthening security alertness, skepticism, and intuition.
- Report Synthetic Media: If security teams or employees encounter deepfakes, they should report them to U.S. government entities such as the NSA Cybersecurity Collaboration Center and the FBI at [email protected].
- Implement Robust Verification / Authentication Processes: Verify sudden or unexpected communications, especially those involving senior executives, sensitive information or financial transactions (a minimal challenge-response sketch follows this list). Use tools like phishing-resistant multi-factor authentication and zero trust to reduce the possibility of identity fraud.
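As a complement to the human visual checks above, security teams can experiment with crude automated screening of recorded calls. The sketch below is purely illustrative and not a production detector: it uses OpenCV’s bundled Haar cascades to estimate how often open eyes are visible in a clip, since early deepfakes were notorious for unnatural blink rates. The video path and the interpretation of the resulting ratio are assumptions for the example.

```python
# Illustrative sketch only: estimate how often open eyes are visible in a
# recorded call. A ratio near 1.0 (eyes detected in virtually every frame,
# i.e. apparently no blinking) can be one weak signal worth a closer human
# look. It is never proof on its own.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eyes_open_ratio(video_path: str, max_frames: int = 300) -> float:
    """Fraction of sampled face-bearing frames where two open eyes appear."""
    cap = cv2.VideoCapture(video_path)
    frames_with_face, frames_with_eyes = 0, 0
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            frames_with_face += 1
            roi = gray[y:y + h, x:x + w]
            if len(eye_cascade.detectMultiScale(roi)) >= 2:
                frames_with_eyes += 1
            break  # only consider the most prominent face per frame
    cap.release()
    return frames_with_eyes / frames_with_face if frames_with_face else 0.0

print(eyes_open_ratio("recorded_call.mp4"))  # hypothetical file name
```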
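On watermarking: the Content Authenticity Initiative’s open-source tooling cryptographically signs provenance metadata (C2PA manifests) and is the route to prefer in practice. As a toy illustration of the underlying idea of embedding a recoverable mark in executive media, here is a minimal least-significant-bit watermark sketch; the file names and tag format are hypothetical.

```python
# Toy illustration of digital watermarking: hide a short ASCII tag in the
# least-significant bits of an image's red channel. Real deployments should
# use C2PA/Content Authenticity Initiative tooling, which signs provenance
# data; this only demonstrates the basic embed/extract idea.
import numpy as np
from PIL import Image

def embed_tag(in_path: str, out_path: str, tag: str) -> None:
    pixels = np.array(Image.open(in_path).convert("RGB"))
    bits = [int(b) for byte in tag.encode("ascii") for b in f"{byte:08b}"]
    red = pixels[..., 0].flatten()
    red[:len(bits)] = (red[:len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
    pixels[..., 0] = red.reshape(pixels[..., 0].shape)
    Image.fromarray(pixels).save(out_path, "PNG")  # lossless format required

def extract_tag(path: str, length: int) -> str:
    red = np.array(Image.open(path).convert("RGB"))[..., 0].flatten()
    bits = red[:length * 8] & 1
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8)).decode("ascii")

embed_tag("headshot.png", "headshot_marked.png", "ORG:EXEC-042")
print(extract_tag("headshot_marked.png", len("ORG:EXEC-042")))
```

Note the lossless PNG output: lossy formats like JPEG would destroy the hidden bits, which is one reason production systems favor signed metadata over naive pixel tricks.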
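On verification: phishing-resistant MFA generally means FIDO2/WebAuthn hardware keys rather than anything hand-rolled. The sketch below only illustrates the out-of-band challenge-response pattern behind callback verification, assuming a secret was provisioned in advance over a trusted channel; the secret value and function names are invented for the example.

```python
# Minimal sketch of out-of-band verification: before acting on an unexpected
# request, the verifier sends a fresh challenge over a second channel and
# checks the response against a pre-shared secret. Assumes the secret was
# provisioned in advance over a trusted channel.
import hashlib
import hmac
import secrets

SHARED_SECRET = b"provisioned-out-of-band"  # hypothetical pre-shared key

def new_challenge() -> str:
    """Fresh random nonce, e.g. read aloud to the caller on the video call."""
    return secrets.token_hex(16)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """Computed by the claimed executive on their own trusted device."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    expected = hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)  # constant-time compare

challenge = new_challenge()
print(verify(challenge, respond(challenge)))  # True only with the real secret
```

The fresh nonce and constant-time comparison matter: a replayed or guessed response fails, so a convincing face and voice alone no longer clear a large wire transfer.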
Synthetic media technology is evolving so rapidly that the boundaries between what is real and what is not are dissolving. It’s important that governments, NGOs, businesses and individuals become aware of these insidious threats, practice critical thinking, and prepare to take appropriate action and cybersecurity measures.