The audacious use of deepfake technology to perpetrate a $25.6 million heist in Hong Kong, as reported this past week, provides a Black Mirror-esque example of how attackers are leveraging AI to up-level modern social engineering attacks.
If the reports of the ongoing investigation are to be believed, the attack simulated an entire video conferencing environment and used a deepfake impersonation of a prominent Hong Kong CFO and other meeting participants. The targeted finance department victim was initially suspicious of a phishing email purporting to be from the CFO. However, the victim joined a web conference during which the attackers convincingly replicated the CFO’s voice and appearance, as well as impersonated other participants, by leveraging advanced Artificial Intelligence (AI). The deepfaked video call participants socially engineered the victim into transferring $25.6 million into five different Hong Kong bank accounts.
Over the last several years, my team and I have predicted the weaponization of this type of deepfake threat. Unfortunately, the costs and reality are now materializing. This particular attack serves as a stark reminder of the emerging vulnerabilities we face in the 2020s and beyond. It also reinforces why the zero trust mantra of “never trust, always verify” has gained popularity and rapidly increasing adoption in recent years.
In this threat landscape, a focus on zero trust and identity security has become paramount to parry new attacks that employ sophisticated techniques to manipulate and exploit personal information. Read on for insights on emerging deepfake threats and their implications for enterprise security, and to learn 6 actionable steps you should consider to help safeguard your organization and your identity against such threats.
Why you should expect deepfake threats to rise
A deepfake attack is a threat vector that leverages technology to create new identities, steal the identities of real people, or impersonate real people, often for the purpose of gaining access to assets, including privileged information or money. While this practice was formerly seen in the creation of falsified images, documents, and videos, it is now more frequently being used to stage real-time video calls, and even to impersonate voices in elaborate vishing schemes.
As far back as 2022, 66% of cybersecurity professionals reportedly had seen deepfakes leveraged in a cyberattack. Most of those attacks took the form of video (58%), rather than audio (42%), targeting employees largely through email and mobile messaging. While deepfakes are created using various methods, AI is taking this to a whole new level.
AI development itself has taken leaps over the last year, and recent studies have shown that deepfakes are getting harder and harder for humans to recognize as fakes. For instance, encoders are AI algorithms that can superimpose one face (such as that of a colleague or manager) onto a completely different person’s body, and deepfake applications can use autoencoders to transfer facial features and movement from one subject to another. This enables attackers to easily create hyper-realistic audio and video content in real time. And with deepfake attacks gaining more success—both in terms of economics and notoriety—you can bet that more threat actors will be piling in.
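To make that mechanism concrete, here is a minimal, illustrative sketch of the shared-encoder/per-identity-decoder idea, assuming PyTorch and arbitrary layer sizes. Production deepfake pipelines use far larger convolutional or GAN-based models, but the face-swap principle is the same: one encoder learns a compact representation of a face, and swapping in a different identity's decoder reconstructs that representation as someone else.

```python
# Minimal sketch (illustrative only): the autoencoder structure behind many
# face-swap deepfakes. A shared encoder learns a compact representation of the
# input face; a per-identity decoder reconstructs that representation as a
# specific person. Layer sizes and image dimensions here are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

# One shared encoder, one decoder per identity (e.g., the attacker's actor and
# the impersonated executive).
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

frame_of_person_a = torch.rand(1, 3, 64, 64)     # stand-in for a video frame
swapped = decoder_b(encoder(frame_of_person_a))  # reconstructed as person B
print(swapped.shape)                             # torch.Size([1, 3, 64, 64])
```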
4 implications of a deepfake attack
1. Financial losses
In the case of the recent CFO deepfake attack in Hong Kong, the motive seems clearly financial, with the attackers profiting from a $25.6 million windfall. AI-powered morphing and deepfake threats are also contributing to an overall rise in corporate espionage.
Deepfake phishing and spear phishing attacks are some of the earliest threats known to make use of AI. One rising identity theft technique, known as “business email compromise” (BEC), involves a threat actor impersonating an employee, vendor, or other trusted party in an email communication to trick the recipient into sending valuable assets, like money or privileged information. Deepfakes make these attacks more tailored to their target and harder to detect. In a public memo, the FBI categorized BEC as one of the most financially damaging online crimes and highlighted the fact that it exploits organizational reliance on email, and now also video calls, to conduct business—both personal and professional. BEC was responsible for $50.8 billion in losses between October 2013 and December 2022, according to the FBI’s Internet Crime Complaint Center (IC3).
However, there are many, many other ways fraudsters are experimenting with deepfake technology to cause financial losses for businesses and individuals.
2. Exposure or theft of privileged assets
Deepfakes present a convincing way to impersonate an employee’s identity. This means deepfakes could also be used to pass authentication checks that rely on visual or auditory confirmation. And if the hijacked identity provides any privileged access, it could enable the threat actor to move laterally and gain control of, or visibility into, sensitive data and systems.
Consider, for example, the recent threat tactics waged to undermine multifactor authentication technologies and subvert identity federation. While organizations have become much better at protecting accounts using phishing-resistant MFA, threat actors have started targeting help desk technicians via social engineering attacks. In some high-profile breaches, threat actors successfully persuaded help desk technicians to reset authentication on accounts.
Advances in deepfake technology will allow threat actors to better impersonate employees and clients, either through phone calls or video calls. This could put cybercriminals on the inside track to hijacking accounts with privileges that allow them to traverse an environment to further their illicit activities.
3. Personal identity theft and reputational harm
Deepfakes can also be used to commit identity theft and harassment. Malicious media creations can be alarmingly realistic and can cause significant harm to a person’s or organization’s reputation. Deepfakes of prominent individuals, customers, or employees could be used to inflict brand damage or create scandal.
4. The spread of misinformation
Deepfakes fall under the broader umbrella of synthetic media and are often used to create believable, realistic videos, pictures, audio, and text of events which never happened. Falsified news reports, such as by a deepfake of a respected authority or newscaster, can prey on people’s natural inclination to believe what they see and are highly effective at spreading mis/disinformation. This can pose a clear, present, and evolving threat to the public across national security, law enforcement, financial, and societal domains.
6 strategies to mitigate deepfake attacks on enterprises
Below are 6 strategies organizations should leverage to protect against modern identity-based threats, such as deepfakes.
1. Zero Trust
By implementing zero trust security controls and embracing a zero trust mindset, organizations can better prevent and mitigate identity-based attacks. Core concepts organizations should embrace include continuous, context-based authentication and monitoring, and the enforcement of least privilege. These should be paired with strong, tested policies.
While the deepfake attack in Hong Kong referenced in this blog was elaborate, multiple important controls were weak or lacking. For instance, an organization could standardize on a web conferencing platform for conducting company business meetings, one for which strong identity-based controls are enforced.
2. Deepfake pen testing
What more apt way to prepare for a deepfake attack than to use deepfakes in penetration (pen) testing and training exercises? Deepfake video and audio pen testing is a method ethical hackers can employ to assess vulnerabilities in workflows and educate organizations about the risks associated with manipulated media. The technique typically uses artificial intelligence to create realistic, yet entirely fabricated, videos or audio recordings that mimic real individuals saying or doing fictitious things as part of a social engineering campaign. Through this recreation of a real deepfake attack, ethical hackers can gauge the effectiveness of security measures and internal processes (e.g., financial controls), identify potential weaknesses, and advise on how to fortify defenses against these types of social engineering attacks.
By demonstrating the potential for misinformation and deception through deepfakes, organizations can better understand the importance of implementing additional security and policy controls to safeguard against human exploitation. This proactive approach helps in raising awareness and enhancing cybersecurity strategies to combat the evolving threats posed by advanced digital manipulation techniques. Just like many other types of pen testing, organizations must take into account legal and ethical considerations of such exercises, along with risks.
3. Training and education
Implementing ongoing cybersecurity training for all employees is critical. In 2023, 74% of all breaches included the human element, with people being involved either via error, privilege misuse, use of stolen credentials, or social engineering. Educating employees so they are empowered to be vigilant in recognizing and thwarting such manipulative tactics is a critical line of defense, especially when deepfake technology may be involved.
In addition to keeping teams educated on the latest threats, there should also be training modules customized to educate specific groups of employees on the threats they are most likely to encounter. Simulating social engineering attacks that mimic real-life phishing can also help to create a culture of cybersecurity awareness within an organization.
Finally, internal processes and oversight should also be in place to protect against large data or financial loss, and employees should be well educated on these processes.
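As a concrete illustration of such a process control, the hypothetical sketch below models a payment-release check that requires dual approval and out-of-band callback verification for high-value transfers. The thresholds, roles, and field names are assumptions for the example, not a prescribed workflow.

```python
# Illustrative sketch only: a payment-release check enforcing the kind of
# out-of-band verification and dual approval that a deepfake video call
# cannot satisfy on its own. Thresholds, roles, and fields are assumptions.
from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 100_000  # assumed policy threshold, in USD

@dataclass
class PaymentRequest:
    amount: int
    requested_by: str
    beneficiary_account: str
    approvals: set = field(default_factory=set)
    callback_verified: bool = False  # confirmed via a known-good phone number,
                                     # not the channel the request arrived on

def may_release(req: PaymentRequest) -> bool:
    """An email or video call alone never authorizes a high-value transfer."""
    if req.amount < HIGH_VALUE_THRESHOLD:
        return len(req.approvals) >= 1
    two_distinct_approvers = len(req.approvals - {req.requested_by}) >= 2
    return two_distinct_approvers and req.callback_verified

req = PaymentRequest(amount=25_600_000, requested_by="analyst",
                     beneficiary_account="HK-unverified-account")
req.approvals.update({"controller", "treasurer"})
print(may_release(req))  # False until callback_verified is independently set
```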
4. Multi-factor authentication (MFA)
When implemented effectively, multi-factor authentication (MFA) adds an important layer of protection against unauthorized access and provides confidence in the authenticity of an identity. Every privileged account should have MFA enabled.
Sign-in policies and conditional access policies should be in place to ensure that users must re-authenticate from the right device, location, or network to conduct privileged activities.
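The sketch below illustrates, in simplified form, what such a conditional access check might look like before a privileged action is allowed to proceed. The policy values, session fields, and function names are hypothetical and not tied to any vendor's API.

```python
# Hypothetical conditional access check: a privileged action requires a managed
# device, an approved network, and recent strong authentication, or the user is
# forced to re-authenticate. All names and thresholds are assumptions.
import time

POLICY = {
    "allowed_networks": {"corp-vpn", "hq-office"},
    "require_managed_device": True,
    "max_auth_age_seconds": 15 * 60,  # force fresh MFA for privileged activity
}

def requires_reauth(session: dict) -> bool:
    auth_age = time.time() - session["last_mfa_at"]
    return (
        session["network"] not in POLICY["allowed_networks"]
        or (POLICY["require_managed_device"] and not session["device_managed"])
        or auth_age > POLICY["max_auth_age_seconds"]
    )

session = {"network": "home-wifi", "device_managed": True,
           "last_mfa_at": time.time() - 300}
print(requires_reauth(session))  # True: the network is outside the policy
```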
However, as high-impact MFA fatigue breaches have demonstrated over the past year, MFA alone is not enough. Moreover, not all MFA is equal. Phishing-resistant MFA, such as FIDO2, can provide stronger protection, which is especially important when protecting highly sensitive accounts and access.
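To illustrate why origin binding makes FIDO2-style authentication resistant to phishing, the following conceptual sketch uses a standard-library HMAC purely as a stand-in for the public-key signatures real WebAuthn relies on; the class and function names are assumptions for the example. The key point is that the credential only works for the origin it was registered with, so a look-alike phishing domain cannot produce a valid assertion.

```python
# Conceptual sketch of origin binding. Real FIDO2/WebAuthn uses public-key
# signatures and browser-enforced origin checks; HMAC is used here only to
# keep the example self-contained with the standard library.
import hmac, hashlib, os

class Authenticator:
    def __init__(self):
        self._keys = {}  # one secret per registered origin

    def register(self, origin: str) -> bytes:
        key = os.urandom(32)
        self._keys[origin] = key
        return key  # in real WebAuthn the server would store a public key

    def assert_login(self, origin: str, challenge: bytes) -> bytes:
        # The authenticator signs over the origin it is actually talking to.
        key = self._keys[origin]  # KeyError if this origin was never registered
        return hmac.new(key, origin.encode() + challenge, hashlib.sha256).digest()

def server_verify(expected_origin: str, key: bytes, challenge: bytes, sig: bytes) -> bool:
    expected = hmac.new(key, expected_origin.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

auth = Authenticator()
server_key = auth.register("https://corp.example.com")
challenge = os.urandom(16)
sig = auth.assert_login("https://corp.example.com", challenge)
print(server_verify("https://corp.example.com", server_key, challenge, sig))  # True
# A look-alike phishing origin has no registered credential to sign with, and
# any assertion it captured would not verify against the legitimate origin.
```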
5. Privileged access management (PAM)
Privileged access management (PAM) is foundational to zero trust and to identity security. No identities and accounts are more imperative to secure than those with privileged access to systems, data, applications, and other sensitive resources.
With the proliferation of different types of cloud accounts and machine identities, the line between privileged and non-privileged is increasingly blurred. Thus it should be no surprise to our readers that almost every attack today requires privilege for the initial exploit and/or to laterally traverse a network.
It is imperative for organizations to employ privileged access security tools that help them understand where privileged roles exist, onboard those privileged accounts for management, and enforce least privilege—meaning access is restricted to the amount and duration absolutely needed based on an approved context.
As an environment's risk landscape changes, PAM should ensure all privileges across cloud and on-premises environments are continuously right-sized. In addition, all privileged sessions should be managed, monitored, and audited. These best practices should be maintained whether the privileged access is for a human or machine identity, an employee or a vendor, and whether it occurs on-premises or remotely.
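As a simplified illustration of just-in-time, time-bound privilege, the hypothetical sketch below grants access only for an approved reason and automatically expires it, so an impersonated or stolen identity does not inherit standing privileges. Names and durations are assumptions; a real PAM deployment would attach approval workflows, credential vaulting, and session recording at these points.

```python
# Hypothetical sketch of just-in-time, time-bound privilege. All identifiers,
# resources, and durations are illustrative assumptions.
import time
from dataclasses import dataclass

@dataclass
class PrivilegeGrant:
    identity: str
    resource: str
    reason: str
    expires_at: float

def grant_jit_access(identity: str, resource: str, reason: str,
                     ttl_seconds: int = 3600) -> PrivilegeGrant:
    # In practice, approval workflows and session recording would attach here.
    return PrivilegeGrant(identity, resource, reason, time.time() + ttl_seconds)

def is_authorized(grant: PrivilegeGrant, identity: str, resource: str) -> bool:
    return (grant.identity == identity
            and grant.resource == resource
            and time.time() < grant.expires_at)

g = grant_jit_access("jlee", "prod-db", "approved change request", ttl_seconds=900)
print(is_authorized(g, "jlee", "prod-db"))  # True, until the 15-minute window lapses
```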
6. Identity threat detection and response (ITDR)
Today, there are still certain characteristics that can help individuals identify the less convincing deepfakes—such as bad lip syncing, odd lighting or discoloration, awkward or unnatural head/body positioning and facial expressions, uncharacteristic speech patterns or phrasing, unnatural eye movement, or a lack of blinking. However, AI-driven improvements in deepfake technology are quickly rendering human visual and auditory judgment unreliable for detecting fakes.
Organizations will need to adopt modern technology and strategies, such as identity threat detection and response (ITDR), that can intelligently detect identity-based threats or risks. ITDR capabilities can help organizations proactively mitigate threats by adjusting security posture based on real-time risks, and also quickly respond to and shut down in-progress attacks to minimize potential damage. These capabilities are especially important when it comes to mitigating the risks associated with decentralized or external transactions.
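The following sketch illustrates, in simplified form, how ITDR-style logic might score identity signals and escalate its response as risk rises. The signal names, weights, and response actions are assumptions for illustration, not a description of any specific product.

```python
# Illustrative ITDR-style sketch: identity signals are scored in near real time,
# and the response escalates from step-up verification to session revocation as
# risk rises. Signal names, weights, and thresholds are assumptions.
RISK_WEIGHTS = {
    "impossible_travel": 0.5,      # sign-ins from geographically implausible locations
    "new_mfa_method_added": 0.3,   # e.g., after a help desk reset
    "dormant_privilege_used": 0.4,
    "unusual_transfer_request": 0.6,
}

def risk_score(signals: set) -> float:
    return min(1.0, sum(RISK_WEIGHTS.get(s, 0.0) for s in signals))

def respond(score: float) -> str:
    if score >= 0.8:
        return "revoke_sessions_and_disable_account"
    if score >= 0.4:
        return "require_step_up_verification"
    return "monitor"

observed = {"new_mfa_method_added", "unusual_transfer_request"}
print(respond(risk_score(observed)))  # revoke_sessions_and_disable_account
```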
Next steps for deepfake threat protection
The Hong Kong deepfake CFO breach serves as a wake-up call for organizations to reevaluate their cybersecurity measures and internal processes—because more of these modern types of attacks are on the way.
BeyondTrust combines capabilities for PAM, ITDR, and cloud infrastructure entitlements management (CIEM) within our Identity Security platform. With BeyondTrust, organizations stand poised to operationalize core tenets of zero trust, and to prevent many attack vectors outright, while also enabling organizations to rapidly detect and respond to in-progress threats.
BeyondTrust’s Identity Security Insights, a ground-breaking solution in the market, provides a holistic, centralized view across your enterprise identity estate—including Okta, Ping, Microsoft Entra ID (formerly Azure AD), Active Directory, BeyondTrust solutions, and more—to unmask and help respond to threats that other solutions miss.
Click here for a complimentary identity security assessment, or contact us here to learn more about BeyondTrust security solutions.
Morey J. Haber, Chief Security Advisor
Morey J. Haber is the Chief Security Advisor at BeyondTrust. As the Chief Security Advisor, Morey is the lead identity and technical evangelist at BeyondTrust. He has more than 25 years of IT industry experience and has authored four books: Privileged Attack Vectors, Asset Attack Vectors, Identity Attack Vectors, and Cloud Attack Vectors. Morey has previously served as BeyondTrust’s Chief Security Officer, Chief Technology Officer, and Vice President of Product Management during his nearly 12-year tenure. In 2020, Morey was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board, assisting the corporate community with identity security best practices. He originally joined BeyondTrust in 2012 as part of the acquisition of eEye Digital Security, where he had served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as a Reliability and Maintainability Engineer for a government contractor building flight and training simulators. Morey earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook.