AI voice scams: Report finds 77% of victims lose money, how common they are, and how to protect yourself

AI voice scams are becoming more prevalent, and they can be extremely convincing because it sounds like you’re talking to a loved one. Now we’ve got an in-depth report that digs into how AI voice cloning works, how common the scams are, the likelihood of falling for one, and the average cost, plus how to prevent and protect against AI voice scams.

In April, we saw some real-world examples of next-gen AI scams that are quite frightening. One of them used call spoofing so that a loved one showed up as the caller on the victim’s phone. Another used an AI voice clone to try to extort ransom money from a mother for the release of her daughter – who hadn’t actually been kidnapped.

As I noted in the piece above, it’s likely just a matter of time before attackers combine call spoofing with AI voice clones.

Now McAfee has released an in-depth report on AI voice scams to help build awareness of the threat and share a few easy ways to prevent and protect against it.

How does AI voice-cloning work?

McAfee highlights that AI voice scams are a remix of the “imposter scams” that have been around for a long time, but they can be much more convincing. Often the scammer uses a clone of a loved one’s voice to ask for money for an emergency or, in some cases, to pretend to hold a loved one for ransom.

Because AI voice clone tools are so cheap and widely available, it’s fast and easy for malicious parties to create voice clones. They get the sample audio they need from people sharing their voices on social media – and the more you share your voice online, the easier it is for threat actors to find and clone it.

How common are AI voice scams?

While we’ve just started to see some real-world stories in the news about AI voice scams, McAfee’s study found they’re becoming quite common.

The global average showed that 25% of people surveyed had either experienced an AI voice scam themselves or knew someone who had.

That figure was higher in the US at 32%, and India saw the most trouble with AI voice scams, with 47% of respondents saying they or someone they know has been affected.

How accurate is AI voice cloning?

McAfee’s research found that voice-cloning tools can replicate a person’s voice with up to 95% accuracy.

In the publicly reported cases of AI voice-cloning scams, victims have recounted how the voice sounded “just like” the person being cloned. In one particularly egregious case, where a cybercriminal demanded a ransom for a fake kidnapping, the mother said it was “completely her voice” and that “it was her inflection.” It’s now harder than ever to tell real from fake, so people will need to assume they can’t always believe what they see and hear.

How often and how much do victims lose?

  • Sadly, McAfee’s research shows that 77% of AI voice scam victims lose money
  • More than one-third lost over $1,000
  • 7% were duped out of between $5,000 and $15,000
  • In the US, that number is highest, with 11% losing between $5,000 and $15,000

As a whole, imposter scams are believed to have stolen $2.6 billion in 2022.
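
To put those numbers in perspective, here’s a quick back-of-envelope sketch in Python. The percentages are from McAfee’s report; the group size of 1,000 and the reading of “more than one-third” as applying to the victims who lost money are my own assumptions for illustration.

```python
# Back-of-envelope sketch of McAfee's figures applied to a hypothetical group
# of 1,000 AI voice scam victims. The percentages come from the report; the
# group size and the bracket interpretation are assumptions for illustration.

victims = 1_000
lose_money = round(victims * 0.77)           # 77% of victims lose money
lose_over_1000 = lose_money // 3             # more than one-third of those lose $1,000+
lose_5000_to_15000 = round(victims * 0.07)   # 7% lose between $5,000 and $15,000

print(f"Of {victims:,} victims: {lose_money:,} lose money,")
print(f"at least {lose_over_1000:,} of them lose over $1,000,")
print(f"and {lose_5000_to_15000:,} lose between $5,000 and $15,000.")
```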

How to prevent and protect against AI voice scams

As I previously wrote, and McAfee also shares, three major ways to prevent and protect against AI voice scams are:

  • Limit how much of your voice and video you share online, and/or set your social media accounts to private instead of public
  • Ask a challenge question or even two if you get a suspicious call – something only your loved one would be able to answer (e.g., the name of a childhood stuffed animal); this verification flow is sketched in code after the list
    • Remember, don’t ask a question whose answer could be found on social media or elsewhere online
  • Let unknown numbers go to voicemail and call or text the person directly from your phone if you’re concerned about them
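
For what it’s worth, the challenge-question-plus-callback advice above boils down to a simple rule. Here’s a minimal sketch in Python with hypothetical names – treating both checks as required, which is the safest reading of the advice – just to make the logic explicit:

```python
# Minimal sketch of the verification flow above (hypothetical names; this is
# a checklist expressed as code, not a real product or API).

def should_act_on_urgent_request(passed_challenge_question: bool,
                                 confirmed_via_direct_callback: bool) -> bool:
    """Act on an urgent money request only if the caller answers a private
    challenge question AND you reach the person on a number you already have."""
    return passed_challenge_question and confirmed_via_direct_callback

# A convincing voice that can't answer the challenge question still fails:
assert should_act_on_urgent_request(False, True) is False
# And a correct answer isn't enough without an independent callback:
assert should_act_on_urgent_request(True, False) is False
```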

Here are McAfee’s full recommendations:

  • Think before you click and share—who is in your social media network? Do you really know and trust your connections? Be thoughtful about what you are sharing on Facebook, YouTube, Instagram, and TikTok. Consider limiting your posts to just friends and family through the privacy settings. The wider your connections, the more risk you may be opening yourself up to when sharing content about yourself.
  • Identity monitoring services can help to alert you if your personally identifiable information is available on the Dark Web. Identity theft is often where AI voice and other targeted scams start. Take control of your personal data to avoid a cybercriminal being able to pose as you. Identity monitoring services provide a layer of protection that can safeguard your identity.
  • And here are four ways to avoid falling for an AI voice scam directly:
    • 1. Set a ‘codeword’ with kids, family members, or trusted close friends that only they could know. Make a plan to always ask for it if they call, text, or email to ask for help, particularly if they’re older or more vulnerable.
    • 2. Always question the source—if it’s a call, text, or email from an unknown sender, or even if it’s from a number you recognize, stop, pause, and think. Asking directed questions can throw off a scammer. For instance, “Can you confirm my son’s name?” or, “When is your father’s birthday?” Not only can this take the scammer by surprise, but they may also need to generate a new response, which can add unnatural pauses into the conversation and create suspicion.
    • 3. Don’t let your emotions take over. Cybercriminals are counting on your emotional connection to the person they’re impersonating to spur you into action. Take a step back before responding. Does that really sound like them? Is this something they’d ask of you? Hang up and call the person directly or try to verify the information before responding.
    • 4. Consider whether to answer unexpected calls from unknown phone numbers. It is generally good advice not to answer calls from strangers. If they leave a voicemail, this gives you time to reflect and contact loved ones independently to confirm their safety.

For more details, check out the full report. You can also read more about AI voice scams on the FTC’s website.

Top image via McAfee
