HOW DEEPFAKES AND AI ARE TURBOCHARGING SOCIAL ENGINEERING

BY Hart Brown

CEO
Future Point of View

March 2024

 

In the last few weeks, a groundbreaking case of social engineering made headlines. A multinational company in Hong Kong was scammed out of $25 million after an employee attended a video conference call populated by multiple deepfake recreations of the company's executives and other employees.

It appears that the scammers were able to recreate the individuals using publicly available footage. An employee in the company's finance department joined a video conference call after receiving a phishing email from someone appearing to be the company's CFO, asking for a transaction to be made. On that call, every other participant appeared to be an executive the employee recognized, yet all of them were deepfakes. The employee was the only actual member of the company present.

After the conference call, the scammers reportedly kept in touch with the employee for a week through additional video calls, WhatsApp messages, and emails. During that time, the employee was given fraudulent instructions to conduct as many as 15 financial transfers.

The scam unraveled only after the employee spoke with the company's headquarters. The employee reported that both the live images and the voices of the others on the call had seemed real and recognizable. This is the first known case in Hong Kong of a successful scam using multiple deepfakes in a single video call.

While there have been earlier instances of deepfakes being used in social engineering, the breadth and sophistication of the deception in this case is staggering. So is the combined $25 million loss.

Earlier this year, I hosted a webinar on deepfake technology in which we demonstrated how easy it is to create a convincing deepfake.

As an example, below is a picture of me, Hart Brown, the webinar's facilitator and CEO of FPOV.

Using a deepfake generation tool called DeepFaceLab, our team transformed my face into various celebrities, including Keanu Reeves, Robert Downey Jr., Tom Holland, Nicolas Cage, Sylvester Stallone, and Tom Cruise. The transformations were done live during the session; the deepfakes were not recorded.

Deepfake Keanu Reeves

Deepfakes are media that have been digitally altered or synthetically generated to spread false information. They can be video, audio, or photos.

Deepfake Robert Downey, Jr.

Sadly, many deepfakes are used to harass and target women, both celebrities and non-celebrities, through abusive videos and pictures. However, they are also being used increasingly in fraud, politics, and cybercrime.

Some reports have chronicled dramatic rises in malicious phishing emails, driven by how easy such messages are to produce with generative AI tools.

Tools modeled on popular generative AI platforms such as ChatGPT, but with many of the ethical guardrails removed, have emerged. These tools, such as WormGPT and FraudGPT, are created specifically for use in social engineering attacks. In scams such as spearphishing and business email compromise campaigns, they remove many of the telltale signs of traditional fraudulent emails, such as misspellings and poor grammar. They can also be used to make an email sound more like the person supposedly sending it, making detection more challenging.

Impersonation attacks are also increasing. Attackers can use voice cloning technology to send voice messages pretending to be a friend or loved one in a precarious situation. There have been several real-world examples. In Saskatchewan, Canada, in 2023, an elderly couple received a call from a voice impersonating their grandson and claiming he needed money. Also in 2023, an Arizona mother received a scam call using an AI clone of her daughter's voice, claiming the daughter had been kidnapped.

In 2022, an executive at the cryptocurrency exchange Binance said attackers had created a deepfake of him and used it in videoconference calls to try to trick would-be investors. The executive only found out after people emailed him to thank him for meeting with them, which indicates that at least some of them were duped by the deception.

What are some of the ways deepfake technology could be, or likely will be, used in social engineering scams?

  • They could be used as part of business email compromise attacks to bypass current prevention procedures, such as callback protocols. The attack could be as simple as a follow-up to a well-crafted email, a more elaborate voice message, or a request for a video call on a system the company does not normally use, all to trick an employee into moving money, as in the recent Hong Kong scam.
  • They could be used to depict an executive in a compromising video, or saying something that could tank a company's stock or scuttle an important merger.
  • They could be used to hurt a brand's reputation with customers and business partners.
  • They could be used to trick banking systems, or the technology designed to verify customers' identities to prevent fraud or money laundering.
  • Ultimately, they could cause business interruption, as managing the disruption pulls resources away from normal business activity and leads to financial loss and unexpected costs.

One way to help limit the dangers of deepfake technology is education. It is paramount that you educate your team members on how to identify novel social engineering and fraud scams that use deepfake technology.

We are currently partnering with a large insurance agency to train associations and other types of organizations about the dangers of deepfake fraud.

Below are some tips to help you and your team spot deepfake media.

CONTEXT

  • Is there cognitive dissonance? Does the media show the person or persons involved saying something they would never say? Does it provoke discomfort in you?
  • Does it have a professional look, or is it low quality and glitchy? Bad grammar and typos have long been a telltale sign of phishing emails; with deepfake media, poor quality plays the same role.
  • What is the setting? Consider the context of the media and the emotion it stirs in you. Is the setting a busy environment, or is the angle difficult to see clearly?
  • How are you viewing it? Deepfake media can be harder to spot on a mobile device because the screen is smaller.
  • Use reverse image searches: Run a suspect photo through a reverse image search to see whether it, or the original it was built from, appears elsewhere (a local approximation is sketched just below).
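A true reverse image search relies on a search engine, but when you have a candidate original on hand, the same idea can be approximated locally. Perceptual hashes score images as similar even after resizing or recompression. Here is a minimal sketch using the third-party imagehash library (an assumption; the file names are hypothetical):

```python
from PIL import Image
import imagehash  # third-party: pip install ImageHash

# Perceptual hashes stay similar under resizing and recompression,
# so a small Hamming distance suggests the images share an origin.
suspect = imagehash.phash(Image.open("suspect.jpg"))      # hypothetical files
reference = imagehash.phash(Image.open("reference.jpg"))

distance = suspect - reference  # Hamming distance between the two hashes
print(f"Hamming distance: {distance}")
if distance <= 8:  # common heuristic threshold
    print("Likely derived from the same source image.")
```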

CREDIBILITY

  • Corroboration: Has the media been corroborated by reputable sources?
  • Reputation: Is the organization or individual hosting or sharing the media reputable and trustworthy? Is the author or source clear, or does it seem to be obscured?
  • Bias: Is there a clear bias inherent in the media? Those sharing AI-generated fake photos of politicians certainly have a bias in mind when they create and share the imagery.

TECHNICAL

  • Metadata analysis: What can you learn from the file's embedded metadata, such as EXIF fields recording capture dates, camera details, and editing software? (See the first sketch after this list.)
  • Edges: Deepfake images often have jagged or inconsistent edges, particularly around faces, which can help detection.
  • Luminance: Deepfakes often have lighting inconsistencies that help with detection.
  • Clone detection: Various techniques are being used to differentiate between a real voice and a cloned voice.
  • Error Level Analysis (ELA): A forensic technique that recompresses an image and compares the resulting error levels across regions; edited areas often stand out, and machine learning models are frequently layered on top of this signal. (See the second sketch after this list.)
  • Blood flow: Tools, such as Intel’s FakeCatcher, use ‘blood flow’ in the pixels of a video to “assess what makes us human.”
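To make the metadata check concrete, here is a minimal sketch in Python using the Pillow imaging library (an assumption; any EXIF reader would do). It dumps the EXIF fields of an image file. Missing metadata or a Software tag naming an editing tool is not proof of a fake, but it is a reason to look closer. The file name is hypothetical.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict:
    """Return the EXIF metadata of an image as a {tag_name: value} dict."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    meta = dump_exif("suspect.jpg")  # hypothetical file name
    if not meta:
        print("No EXIF metadata found (possibly stripped or AI-generated).")
    for name, value in meta.items():
        print(f"{name}: {value}")
    # Fields worth a closer look: 'Software' (editing tools),
    # 'DateTime' (does it match the claimed capture date?), 'Make'/'Model'.
```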
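And here is a minimal Error Level Analysis sketch, again assuming Pillow. It recompresses the image at a known JPEG quality and amplifies the pixel-level differences; regions that were pasted in or edited tend to recompress differently and show up as patches noticeably brighter or darker than their surroundings.

```python
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90, scale: float = 15.0) -> Image.Image:
    """Recompress an image and return an amplified difference image.

    Edited regions often show a different error level (brightness)
    than the rest of the picture.
    """
    original = Image.open(path).convert("RGB")
    resaved_path = "_ela_resaved.jpg"  # temporary recompressed copy
    original.save(resaved_path, "JPEG", quality=quality)
    resaved = Image.open(resaved_path)
    diff = ImageChops.difference(original, resaved)  # per-pixel error level
    return ImageEnhance.Brightness(diff).enhance(scale)

if __name__ == "__main__":
    ela = error_level_analysis("suspect.jpg")  # hypothetical file name
    ela.save("suspect_ela.png")  # inspect for uneven bright patches
```

In practice, a roughly uniform error level across the frame is normal; a face or object that stands out sharply from its surroundings is what warrants scrutiny, and dedicated detectors layer machine learning classifiers on top of signals like these rather than relying on visual inspection alone.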

Deepfake media is only going to become more prevalent, and its use in social engineering is going to grow. Education is the best way to help your organization confront this alarming reality. A good next step is to seek out additional tools and training to protect you and your team members from advanced AI-generated social engineering.
