How AI Platforms Like ChatGPT will Heighten Imposter Attacks—and How to Fight Them

In recent months, everyone has been busy either debating the merits of ChatGPT and similar natural language generators or busy using them. The amount of material these AI-driven platforms will produce is mind-boggling, since there seems to be no shortage of businesses and content writers eager to take advantage of the shortcuts they provide. ChatGPT, in fact, reached 100 million users in its first two months of public availability.

However, legitimate businesses aren’t the only ones who stand to benefit from AI-powered text generation. Many pundits warn that the ability to create article-length narratives in mere seconds will make it frighteningly simple for criminals to craft more persuasive phishing and imposter attacks, and at far greater volume. This onslaught of new threats will dramatically raise exposure, to the point where even the savviest network users could be tricked into handing over log-in credentials, healthcare information, or financial data.

It’s not surprising that AI would swell the ranks of cyber criminals. Technology has often opened fields of expertise to amateurs, letting people with minimal skills master tasks that formerly required far more training and effort. Automated software allows anyone with a CAD program to draft impressive three-dimensional designs, and WordPress and Wix let users with the most basic abilities build professional websites. ChatGPT can be viewed in the same light as a tool for hackers: it not only allows anyone with an Internet connection to compose believable, seemingly well-informed text, but it also lets attackers with only rudimentary skills swiftly generate scripts and convincing lure copy for imposter cyber-attacks.

These imposter events come in various forms. In the corporate community, Business Email Compromise (BEC) occurs when nefarious actors breach the account of a high-level executive, often a CEO. The hacker then sends emails from the CEO’s account directing other senior executives to make large wire transfers or reveal sensitive log-in information. These “socially engineered” BEC attacks have increased by 65% since 2019, according to reports from software research site Gitnux, and are expected to spike further as language generators grow more sophisticated.

Brand imposter attacks occur when hackers create a credible mock-up of a site the victim frequents, such as a financial institution, cloud provider, transport company, or healthcare organization. The criminals send well-composed, convincing emails asking the victim to click a link to the site because of some matter that needs attention. The user is then brought to the clever look-alike site and prompted to enter usernames, passwords, banking details, addresses, or identifying healthcare information.

Here are some ways that ill-intentioned hackers can now produce code more quickly, launch attacks more precisely, and compose phishing content more eloquently than ever:

ChatGPT allows overseas hackers to write grammatically correct, accurately composed language. In the earlier days of phishing, hackers in unregulated foreign countries were often foiled by spelling mistakes, awkward phrasing, and unprofessional grammar that tipped off readers. Natural language generators produce well-composed email copy that is effectively indistinguishable from a native speaker’s writing, because the text is not composed by an outsider at all; it is generated by an AI model drawing on existing native-language sources.

ChatGPT makes it easier for cyber criminals to write effective malware. Not only do AI-based language generators instantly create prose, they can also quickly write code, aiding programmers in developing applications. Researchers have already reported evidence on the dark web of malicious actors abusing ChatGPT to speed the creation of new malware or fine-tune existing malicious programs. As usual, cyber criminals are intelligent and resourceful—they have already found ways to circumvent ChatGPT’s inherent safeguards.

How to Protect Against Heightened Attacks

All this makes it more critical than ever for businesses to use AI-driven email protection. The only way organizations can guard against the power and speed of advanced AI is to leverage the same technologies in their cyber security solutions. The challenge is that even many top-tier software packages don’t utilize best-in-class AI, because they were designed before these sophisticated tools had even been developed.

Many existing security solutions rely on traditional secure email gateway (SEG) methods as their legacy technique, which largely amounts to blacklisting known malicious IP addresses. Yet contextual attacks like the BEC scenarios above simply can’t be detected by these SEG-based solutions. Cyber security solutions must employ powerful AI to interpret the text of ill-intended emails, identifying keywords like “wire transfer” and “credit card,” or even recognizing attachments containing sensitive images such as healthcare ID cards. Without these intelligent AI-based tools, which include optical character recognition, companies are vulnerable to a ramp-up in breaches now that criminals have access to tools like ChatGPT.
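To make the contrast concrete, here is a deliberately simplified Python sketch of the kind of contextual check described above. The phrase list, weights, and threshold are hypothetical illustrations only; commercial tools rely on trained language models and optical character recognition rather than a fixed keyword table.

    import re

    # Hypothetical phrases and weights; real products learn these from data.
    RISK_PATTERNS = {
        "wire transfer": 3,
        "gift card": 2,
        "credit card": 2,
        "password": 2,
        "urgent": 1,
    }

    def risk_score(email_text: str) -> int:
        """Crude risk score: sum the weights of risky phrases found in the text."""
        text = email_text.lower()
        return sum(weight for phrase, weight in RISK_PATTERNS.items()
                   if re.search(r"\b" + re.escape(phrase) + r"\b", text))

    sample = ("Hi, this is the CEO. I need you to process a wire transfer today. "
              "It's urgent, so send the account password as soon as you can.")

    score = risk_score(sample)
    print("risk score:", score)
    if score >= 4:  # hypothetical review threshold
        print("flag for review: possible BEC or phishing attempt")

Even this toy example flags the fabricated CEO message above, which contains no malicious link or attachment and would sail past a blacklist-only gateway.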

Organizations should consider solutions from next-generation cybersecurity providers, especially those that specialize in email security, including anti-malware, anti-virus, and data loss prevention. Outbound email protection such as best-in-class encryption is also advisable, since hackers can’t exploit emails they can’t decode. Businesses should also demand email security protection that is easy to use, in order to foster greater adoption across the organization. Technology that doesn’t get used is pointless.
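The encryption point can be illustrated with a minimal sketch. The example below uses the third-party Python cryptography package’s Fernet recipe purely to show the principle that intercepted ciphertext is useless without the key; production email encryption typically relies on S/MIME, TLS, or managed-key portal delivery rather than a hand-generated symmetric key.

    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in practice, keys are managed centrally
    cipher = Fernet(key)

    message = b"Q3 payroll summary attached. Wire instructions to follow."
    token = cipher.encrypt(message)  # what an eavesdropper would actually see
    print(token)

    # Only a holder of the key can recover the plaintext.
    print(cipher.decrypt(token).decode())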

In the end, the only genuine strategy for combating the increased level of AI-based attacks from these platforms is to turn the same AI tools against them. Don’t let your organization be swept up in the wave of ChatGPT-assisted schemes.
