March 2024
In the last few weeks, a groundbreaking case of social engineering made the headlines. A multinational company in Hong Kong was scammed out of $25 million after an employee attended a video conference call with multiple deepfake recreations of the company’s executives and other employees.
It appears that the scammers recreated the individuals using publicly available footage. An employee in the company's finance department joined a video conference call after receiving a phishing email from someone posing as the company's CFO and requesting a transaction. On that call, every other participant appeared to be an executive the employee knew, yet all of them were deepfakes; the employee was the only real member of the company present.
After the conference call, the scammers reportedly kept in touch with the employee for a week through additional video calls, WhatsApp messages, and emails. During that time, the employee was given fraudulent instructions to make as many as 15 financial transfers.
The scam unraveled only after the employee spoke with the company's headquarters. The employee reported that both the live images and the voices of the others on the call seemed real and recognizable. This is the first known case in Hong Kong of a successful scam using multiple deepfakes in a single video call.
While there have been prior instances of deepfakes being used in social engineering, the breadth and elaboration of the deception in this case are staggering. So is the $25 million loss.
Earlier this year, I hosted a webinar on deepfake technology, in which we demonstrated how easy it is to create a convincing deepfake.
As an example, below is a picture of me, Hart Brown, the webinar's facilitator and the CEO of FPOV.
Using a deepfake generation tool called DeepFaceLab, our team was able to transform my face into those of various celebrities, including Keanu Reeves, Robert Downey Jr., Tom Holland, Nicolas Cage, Sylvester Stallone, and Tom Cruise. The transformation was done live during the session; the deepfakes were not recorded.
Deepfakes are digitally altered media, whether video, audio, or photos, created to spread false information.
Sadly, many deepfakes are used to harass and target women, both celebrities and non-celebrities, by creating abusive videos and pictures of them. However, they are also being used more and more in fraud, politics, and cybercrime.
Some reports have chronicled dramatic rises in malicious phishing emails, driven by how easy generative AI tools make them to produce.
Tools modeled on the popular generative AI platform ChatGPT have been created with many of the ethical guardrails removed. These tools, such as WormGPT and FraudGPT, are built specifically for social engineering attacks. In scams such as spearphishing and business email compromise campaigns, they strip out many of the telltale signs of traditional fraudulent emails, such as misspellings and poor grammar. They can also make an email sound more like the person it claims to be from, making detection even harder.
Impersonation attacks are also increasing. Scammers can use voice cloning technology to send voice messages pretending to be a friend or loved one in a precarious situation. There have been several real-world examples. In Saskatchewan, Canada, in 2023, an elderly couple received a call from someone impersonating their grandson and claiming he needed money. Also in 2023, an Arizona mother received a scam call using an AI clone of her daughter's voice claiming the daughter had been kidnapped.
In 2022, an executive at Binance, a cryptocurrency exchange, claimed that attackers had created a deepfake of him and used it on videoconference calls to try to trick would-be investors. The executive only found out after people emailed him thanking him for meeting with them, which suggests that in at least one case someone was duped by the deception.
What are some of the ways that deepfake technology could be, or likely will be, used in social engineering scams?
One way to limit the dangers of deepfake technology is education. It is paramount that you teach your team members how to identify novel social engineering and fraud scams that use deepfake technology.
We are currently partnering with a large insurance agency to train associations and other types of organizations about the dangers of deepfake fraud.
Below are some tips to help you and your team spot deepfake media, grouped into three areas: context, credibility, and technical indicators.
Deepfake media is only going to become more prevalent, and its use in social engineering is going to grow. Education is the best way to help your organization confront this alarming reality. A good next step is to seek out additional resources on protecting yourself and your team members from advanced AI-generated social engineering.