Deepfakes on the Rise: How to Protect Yourself

Senator Benjamin Cardin, chairman of the U.S. Senate Foreign Relations Committee, is the most recent public figure to be targeted by a social engineering attack. The attack began when the Senator’s office received an email purporting to be from former Ukrainian foreign minister Dmytro Kuleba, whom Cardin already knew.

A virtual meeting was set up, and Kuleba appeared on video; his voice and appearance were consistent with the Senator’s previous meetings with him. The conversation turned suspicious when Kuleba pressed Cardin with politically charged questions about U.S. attitudes toward long-range missile strikes into Russian territory. The Senator and his staff ended the call as soon as they realized they were speaking either to an imposter or to some sort of synthetic deepfake.

This isn’t the first instance of social engineering in which synthetic media was weaponized. Earlier this year, a deepfake CFO conned a well-known design firm out of $25 million. A few months ago, advertising giant WPP reported an incident in which audio and video of its CEO were cloned from YouTube in an attempt to solicit money and sensitive information. And last year, a senior executive at a leading cryptocurrency firm disclosed how scammers created a deepfake hologram to dupe victims on a Zoom call.

Why Is Synthetic Media Becoming a Weapon of Choice For Scammers?
Tools and techniques for creating synthetic media (or altering authentic media) have been around for decades. Previously, these tools were accessible only to people with specialized skills and software, and a sophisticated fake could take days or weeks to produce. Today, with free online applications and advances in computational power and AI technologies, synthetic media can be whipped up with little technical expertise.

Another reason synthetic media is gaining popularity in scams and phishing schemes is that humans are far more likely to believe and trust something or someone they can see or hear than something they read. Audiovisual content bears a much closer resemblance to reality and is perceived as more credible than text or email.

The remote-work phenomenon is also partly to blame. As more organizations and employees grow accustomed to meeting virtually, physical verification has all but disappeared. This empowers cybercriminals and the state-sponsored attackers behind advanced persistent threats (APTs) to carry out hard-to-detect social engineering attacks and online fraud.

How Can Synthetic Media Affect Organizations?
Threat actors can operationalize synthetic media in a variety of ways. The most common and damaging threats to organizations include:

  • Financial Scams: Threat actors have been using phishing emails and messages to impersonate executives (a.k.a. Business Email Compromise), causing billions of dollars in losses every year. With the mass availability of synthetic media, bad actors can make C-level impersonations even more realistic and believable, enabling them to design targeted and damaging social engineering attacks.
  • Access and Infiltration: Malicious actors can employ deepfakes to deceive employees and gain access to company data, systems, and information. They can use deepfakes to manipulate employees into revealing their credentials, and they can even land jobs under fake identities to gain insider access to data and systems.
  • Reputational Damage: Threat actors can fabricate synthetic media portraying senior leaders in objectionable or questionable circumstances, with the goal of spreading disinformation, assassinating someone’s character, or damaging the reputation and brand of an organization. Deepfakes can be disseminated across social media platforms faster than they can be blocked or disputed, which can have massive implications for stock prices. Threat actors can also leverage deepfakes to blackmail and extort organizations and executives.

How Can Organizations Protect Themselves From Synthetic Media Risks?
While the media and governments are doing what they can to regulate platforms and report deepfakes, organizations share responsibility for protecting themselves, their stakeholders, and society as a whole. Here are some best practices to help get started:

  • Educate Employees on How To Identify Synthetic Media: Teach employees to conduct a visual examination when they join online meetings. Look for signs of manipulation such as lip-syncing issues; unnatural head, torso, or eye movements; a lack of neck-muscle flexing or jitters; other physical oddities such as feet not touching the ground; or unusual speech patterns.
  • Protect Identities of High-Priority Individuals: To protect senior executives from being impersonated with synthetic media, organizations can consider adopting authentication techniques such as digital watermarking or open-source tools developed by the Content Authenticity Initiative.
  • Practice Continuous Cybersecurity Training: Regular phishing simulation exercises, ‘spot the deepfake’ contests, security fire drills and rehearsals can help motivate and engage users while also strengthening security alertness, skepticism, and intuition.
  • Report Synthetic Media: If security teams or employees encounter deepfakes, they should report them to U.S. government entities, including the NSA Cybersecurity Collaboration Center and the FBI at [email protected].
  • Implement Robust Verification / Authentication Processes: Verify sudden or unexpected communications, especially those involving senior executives, sensitive information, or financial transactions. Use tools like phishing-resistant multi-factor authentication and a zero-trust approach to reduce the possibility of identity fraud.
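To illustrate why phishing-resistant authentication mentioned above holds up where passwords and one-time codes fail, here is a minimal sketch of the core idea behind standards like FIDO2/WebAuthn: the client signs a server challenge bound to the origin it actually sees, so a response captured by a look-alike phishing site never verifies. All names and the simplified HMAC-based flow are illustrative assumptions, not a real WebAuthn implementation.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch of origin-bound challenge-response authentication.
# Real phishing-resistant MFA (FIDO2/WebAuthn) uses per-site asymmetric
# key pairs; a shared HMAC key is used here only to keep the example
# self-contained with the standard library.

def sign_challenge(shared_key: bytes, challenge: bytes, origin: str) -> str:
    """Client signs the server's challenge together with the origin it sees."""
    message = challenge + origin.encode()
    return hmac.new(shared_key, message, hashlib.sha256).hexdigest()

def verify(shared_key: bytes, challenge: bytes,
           expected_origin: str, response: str) -> bool:
    """Server recomputes the response for its own origin. A response that
    was produced for a spoofed domain will not match."""
    expected = sign_challenge(shared_key, challenge, expected_origin)
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    key = secrets.token_bytes(32)        # enrolled at registration time
    challenge = secrets.token_bytes(16)  # fresh random challenge per login

    # Legitimate login: client and server see the same origin.
    genuine = sign_challenge(key, challenge, "https://corp.example.com")
    print(verify(key, challenge, "https://corp.example.com", genuine))

    # Phishing attempt: attacker relays a response computed for a
    # look-alike domain; origin binding makes it fail verification.
    relayed = sign_challenge(key, challenge, "https://corp-example.com")
    print(verify(key, challenge, "https://corp.example.com", relayed))
```

The design point is that the origin is part of the signed message, so even a pixel-perfect phishing page (or a deepfaked caller coaxing a user through a login) cannot produce a response the real server will accept.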

Synthetic media technology is evolving so rapidly that the boundaries between what is real and what is not are dissolving. It’s important that governments, NGOs, businesses and individuals become aware of these insidious threats, practice critical thinking and be prepared to take appropriate actions and cybersecurity measures.
