Generative AI financial scammers are getting very good at duping work email (2024)

More than one in four companies now ban their employees from using generative AI. But that does little to protect against criminals who use it to trick employees into sharing sensitive information or paying fraudulent invoices.

Armed with ChatGPT or its dark web equivalent, FraudGPT, criminals can easily create forged profit and loss statements, fake IDs, and false identities, or even produce convincing deepfakes of a company executive's voice and image.

The statistics are sobering. In a recent survey by the Association of Financial Professionals, 65% of respondents said that their organizations had been victims of attempted or actual payments fraud in 2022. Of those who lost money, 71% were compromised through email. Larger organizations, with annual revenue of $1 billion or more, were the most susceptible to email scams, according to the survey.

Among the most common email scams are phishing emails. These fraudulent messages appear to come from a trusted source, such as Chase or eBay, and ask recipients to click a link leading to a fake but convincing-looking site. The site prompts the potential victim to log in and provide personal information. Once criminals have that information, they can access bank accounts or even commit identity theft.

Spear phishing is similar but more targeted. Instead of sending out generic emails, the criminals address an individual or a specific organization, and may have researched a job title, the names of colleagues, and even the name of a supervisor or manager.

Old scams are getting bigger and better

These scams are nothing new, of course, but generative AI makes it harder to tell what’s real and what’s not. Until recently, wonky fonts, odd writing, and grammar mistakes made them easy to spot. Now, criminals anywhere in the world can use ChatGPT or FraudGPT to create convincing phishing and spear phishing emails. They can even impersonate a CEO or other manager in a company, hijacking the executive’s voice for a fake phone call or their image in a video call.

That’s what happened recently in Hong Kong when a finance employee thought he had received a message from the company’s UK-based chief financial officer asking for a $25.6 million transfer. Though he was initially suspicious that it could be a phishing email, his fears were allayed after a video call with the CFO and other colleagues he recognized. As it turned out, everyone on the call was deepfaked. It was only after he checked with the head office that he discovered the deceit. By then, the money had been transferred.

“The work that goes into these to make them credible is actually pretty impressive,” said Christopher Budd, director at cybersecurity firm Sophos.

Recent high-profile deepfakes involving public figures show how quickly the technology has evolved. Last summer, a fake investment scheme showed a deepfaked Elon Musk promoting a nonexistent platform. There were also deepfaked videos of Gayle King, the CBS News anchor; former Fox News host Tucker Carlson; and talk show host Bill Maher, purportedly talking about Musk’s new investment platform. These videos circulated on social platforms like TikTok, Facebook and YouTube.

“It’s easier and easier for people to create synthetic identities, using either stolen information or made-up information using generative AI,” said Andrew Davies, global head of regulatory affairs at ComplyAdvantage, a regulatory technology firm.

“There is so much information available online that criminals can use to create very realistic phishing emails. Large language models are trained on the internet, and know about the company, its CEO and its CFO,” said Cyril Noel-Tagoe, principal security researcher at Netacea, a cybersecurity firm with a focus on automated threats.

Larger companies at risk in world of APIs, payment apps

While generative AI makes the threats more credible, the scale of the problem is getting bigger thanks to automation and the mushrooming number of websites and apps handling financial transactions.

“One of the real catalysts for the evolution of fraud and financial crime in general is the transformation of financial services,” said Davies. Just a decade ago, there were few ways of moving money around electronically, and most involved traditional banks. The explosion of payment solutions — PayPal, Zelle, Venmo, Wise and others — broadened the playing field, giving criminals more places to attack. Traditional banks also increasingly use APIs, or application programming interfaces, to connect apps and platforms, creating another potential point of attack.

Criminals use generative AI to create credible messages quickly, then use automation to scale up. “It’s a numbers game. If I’m going to do 1,000 spear phishing emails or CEO fraud attacks, and I find one in 10 of them work, that could be millions of dollars,” said Davies.

According to Netacea, 22% of companies surveyed said they had been attacked by a fake account creation bot; for the financial services industry, the figure rose to 27%. Of the companies that detected an automated bot attack, 99% saw an increase in the number of attacks in 2022. Larger companies were the most likely to see a significant increase: 66% of companies with $5 billion or more in revenue reported a “significant” or “moderate” rise. And while all industries reported some fake account registrations, financial services was the most targeted, with 30% of attacked financial services businesses saying that 6% to 10% of new accounts are fake.

The financial industry is fighting gen AI-fueled fraud with its own gen AI models. Mastercard recently said it built a new AI model to help detect scam transactions by identifying “mule accounts” used by criminals to move stolen funds.

Criminals increasingly use impersonation tactics to convince victims that the transfer is legitimate and going to a real person or company. “Banks have found these scams incredibly challenging to detect,” Ajay Bhalla, president of cyber and intelligence at Mastercard, said in a statement in July. “Their customers pass all the required checks and send the money themselves; criminals haven’t needed to break any security measures,” he said. Mastercard estimates its algorithm can help banks save by reducing the costs they’d typically put towards rooting out fake transactions.

More detailed identity analysis is needed

Some particularly motivated attackers may have insider information. Criminals have gotten “very, very sophisticated,” Noel-Tagoe said, but he added, “they won’t know the internal workings of your company exactly.”

It might be impossible to know right away whether a money transfer request from the CEO or CFO is legitimate, but employees can find ways to verify. Companies should have specific procedures for transferring money, said Noel-Tagoe. So if the usual channel for money transfer requests is an invoicing platform rather than email or Slack, contact the requester another way and verify before sending anything.

Another way companies are looking to sort real identities from deepfaked ones is through a more detailed authentication process. Right now, digital identity companies often ask for an ID and perhaps a real-time selfie. Soon, companies could ask people to blink, speak their name, or perform some other action to distinguish real-time video from something pre-recorded.
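Such liveness checks boil down to a challenge the user cannot anticipate. Here is a minimal, hypothetical sketch of the idea in Python; the word list, time limit, and function names are illustrative assumptions, not any vendor's actual implementation:

```python
import secrets
import time

WORDS = ["ocean", "maple", "seven", "violet", "anchor", "prism"]
CHALLENGE_TTL = 10.0  # seconds; a stale response suggests pre-recorded media

def issue_challenge() -> dict:
    # A random phrase the user must speak on camera; a pre-recorded
    # deepfake video cannot anticipate it.
    phrase = " ".join(secrets.choice(WORDS) for _ in range(3))
    return {"phrase": phrase, "issued_at": time.monotonic()}

def verify_response(challenge: dict, transcript: str, received_at: float) -> bool:
    # Accept only a matching phrase delivered within the time window.
    fresh = received_at - challenge["issued_at"] <= CHALLENGE_TTL
    matches = transcript.strip().lower() == challenge["phrase"]
    return fresh and matches

challenge = issue_challenge()
print(verify_response(challenge, challenge["phrase"], challenge["issued_at"] + 2.0))  # → True
```

A real system would combine the freshness check with speech and face analysis, but the unpredictability of the challenge is what defeats a replayed recording.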

It will take some time for companies to adjust, but for now, cybersecurity experts say generative AI is leading to a surge in very convincing financial scams. “I’ve been in technology for 25 years at this point, and this ramp up from AI is like putting jet fuel on the fire,” said Sophos’ Budd. “It’s something I’ve never seen before.”

FAQs

How did a scammer know my email?

They used “email harvesting” bots.

Harvesting is a fast way to collect email addresses in bulk. Using a bot, cybercriminals crawl the internet for text containing "@" symbols; a harvester can gather thousands of names and addresses in seconds.
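As an illustration, the core extraction step of such a bot can be sketched in a few lines of Python. The regex below is a simplified stand-in for what real harvesters use, which is also why publishing an address as "name [at] example.com" defeats the simplest bots:

```python
import re

# Simplified harvester pattern: any "word@word.tld"-shaped string in page text.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest(text: str) -> list[str]:
    # Return every email-like string found in the scraped text.
    return EMAIL_RE.findall(text)

page = "Contact sales@example.com or jane.doe@corp.example.org for a quote."
print(harvest(page))  # → ['sales@example.com', 'jane.doe@corp.example.org']
```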

What is an email threat asking for money?

Sextortion emails, also known as sexploitation emails, are among the most popular forms of this threat, and in 2024 that is still the case. The email, often in poorly worded English, demands that the victim pay a large sum of money to stop compromising content from being shared and distributed.

What is a reverse scammer?

Fraud By Relatives Or Financial Planners

This type of reverse mortgage scam involves a crooked financial planner or advisor talking you into getting a reverse mortgage when you don't need one. They may tell you to let them handle your proceeds to invest them for you, but then use the money for their own financial gain.

Is the AI money platform legit?

The CFTC customer advisory "AI Won't Turn Trading Bots into Money Machines" explains how these scams use the hype around AI to defraud investors: false claims entice people to hand over money or other assets to fraudsters, who misappropriate the funds.

How did a scammer get my work email?

By using harvesting programs

Spammers and cybercriminals engage in phishing email scams by using harvesting software to steal and gather email addresses from the internet.

Can I stop my email from being spoofed?

The reality is that it's impossible to stop email spoofing entirely, because the Simple Mail Transfer Protocol, the foundation for sending email, doesn't require any authentication. That is the technology's core vulnerability. However, countermeasures such as SPF, DKIM, and DMARC let receiving servers check whether a message really came from the domain it claims.
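As a sketch of how one of these countermeasures works: a domain publishes a DMARC policy as a DNS TXT record of semicolon-separated tag=value pairs, and receiving servers read the `p` tag to decide what to do with mail that fails authentication. The record below is a hypothetical example; in practice it would be fetched via a DNS lookup of `_dmarc.<domain>`:

```python
def parse_dmarc(record: str) -> dict[str, str]:
    # DMARC records are semicolon-separated tag=value pairs,
    # e.g. "v=DMARC1; p=reject; rua=mailto:...".
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
policy = parse_dmarc(record)
print(policy["p"])  # → reject (receivers should refuse mail that fails checks)
```

A policy of `p=reject` tells receivers to refuse spoofed mail outright; `p=quarantine` routes it to spam, and `p=none` only reports it.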

Should I worry if a scammer has my email address?

Criminals who have your email address could potentially use it to impersonate you in scams or phishing attacks against your friends, family, or coworkers, especially if the address they have is your work address.

What do I do if I received an email from a hacker demanding money?

Should I pay the ransom? No: if you pay, you might be targeted with future scams, as the criminal will know they have a 'willing' customer. If a password you still use is included in the email, change it immediately.

Can a scammer be traced?

Some companies offer help from PIs to track down the people who scammed you. But in most cases, even uncovering a scammer's true identity won't bring them to justice. Scammers almost always operate out of foreign countries, making prosecution nearly impossible.

How do scambaiters work?

Scambaiting, or scam baiting, is a tactic used by vigilantes to target people who run scams. The process of scambaiting involves baiting a fraudster into engaging with the scambaiter, who then wastes the scammer's time, exposes their personal information, or even targets them with cyberattacks.

Can I use AI to make me money?

You can use AI to make money. The technology can help you create a variety of content to monetize, or develop online courses. As your experience grows, you could even build a course teaching others how to do the same.

Can you trust an AI?

Humans are largely predictable to other humans because we share the same human experience, but this doesn't extend to artificial intelligence, even though humans created it. If trustworthiness has inherently predictable and normative elements, AI fundamentally lacks the qualities that would make it worthy of trust.

Does PayPal use AI?

Streamlined checkout experiences: PayPal employs AI and machine learning to create a one-click checkout process. The AI system pre-selects each customer's preferred payment method, shipping address, and other relevant details to speed up each payment.

How did hackers get my email address?

Data breaches: Hackers may have obtained your email credentials through a data breach. If you use the same password for multiple accounts, one compromised account means a hacker can access all of them. Sometimes, hackers buy passwords from the dark web, where cybercriminals sell them after successful data breaches.

How did a scammer get my information?

Phishing: hackers can send you fraudulent text messages or emails that trick you into providing your phone number or other personal data. Autodialers: in some cases, scammers don't even need your number, because autodialers generate and call random phone numbers.

Should I be worried if a scammer has my address?

Scammers are disturbingly persistent. If they know your name, address, and phone number, they can use this as a launching point to find out more about you online and on public databases. For example, they could research your social media profiles or see if you're included in popular data broker lists.

How did a scammer get my phone number and email?

Data breaches are among the most common ways that scammers get access to your phone number. But there are plenty of other ways they can steal your digits as well. “People search” sites like WhoEasy collect and sell your personal data to telemarketers and hackers.
