Graphic with the text "Spot An AI Scam" indicating a warning about artificial intelligence scams.


    How To Spot AI-Generated Scams

    The rapid expansion of artificial intelligence capabilities in recent years has transformed many aspects of our daily lives. It does not feel long ago that the world was mesmerized by the release of Siri; now, AI can answer your questions while driving your car. 

    This advancement holds the potential to expedite mundane tasks, but it can also be used to commit cybercrimes. In 2024, Americans lost over $108 million to AI scams(1). The capability to generate realistic photos and text-to-speech dialogue makes phishing and robocall scams nearly indistinguishable from legitimate contact. 

    As these AI scams become increasingly convincing, there is a growing need to develop strategies to recognize and respond to them. That’s why it’s important to work with cybersecurity professionals, like DFC, who are constantly evolving to stay ahead of the curve. 

    Seeking assistance from these organizations can make all the difference in your effort to avoid these AI-generated attacks. Continue reading to learn more about the warning signs of AI fraud and what you can do to prevent becoming a victim.  

    The Rise of AI-Powered Deception 

    AI accessibility has increased drastically due to the uptick in available generative AI programs. Generative AI reached a record-high $56 billion in worldwide investment in 2024(2), meaning this technology is only going to become more commonplace. 

    With the ability to create deepfake videos and generate a full script of audio with just a photo and a short clip of someone’s voice, online scams have never been more convincing. Cybercrimes like phishing, impersonation, and online blackmail scams can now pass many previous screening measures.  

    Deepfakes: When Seeing Isn’t Believing 

    Deepfakes are digitally manufactured videos that superimpose the likeness of a person who does not appear in the original footage. AI programs overlay data from photos onto an existing video to make it appear as though an individual has said or done something they haven’t. 

    This technology has been used for a wide variety of shady activity. From false political endorsements to artificial adult content, many public figures have had their identities misused in deepfake content. However, deepfakes don’t only affect the reputation of celebrities. Here are some ways anyone could be victimized by deepfakes: 

    • Impersonation: Scammers can easily use deepfake technology to assume the identity of a trusted entity and obtain credentials or money. In 2023, a man from Mongolia sent $622,000 to scammers who used a deepfake to convince him he was wiring money to a friend(3). 
    • Romance Scams: Gone are the days of asking somebody to hold up an obscure object or say a specific phrase to prove their identity online. Deepfakes can persuade individuals that they have an online relationship with a real or fabricated person. This was the case for a 77-year-old woman from Scotland who was scammed out of $22,000 in a deepfake oil rig romance scam(4). 
    • Endorsement and Investment Scams: Public figures with influence in the public zeitgeist are having their identities exploited to promote fraudulent products or investments. Last December, Edmonton police issued a warning after residents lost a combined $1.9 million in an investment scheme promoted by deepfake versions of Justin Trudeau and Elon Musk(5). 
    • Sextortion: Scammers can commit sextortion without having to obtain explicit content from their target. Rather, they can manufacture this content using innocuous images and AI technology. This has driven such an uptick in reported sextortion cases that the FBI issued a public advisory in 2023(6). 

    While this content can be hard to differentiate from legitimate videos, there are ways you can detect a deepfake. Unnatural facial movements, discoloration, blurred backgrounds, poor lip-syncing, and any sections of the video that appear to glitch can tip you off that a video has been artificially altered.  

    AI Phishing: Crafting the Perfect Bait 

    The variety of generative AI programs available can be combined to create a devastating phishing attack. Scammers can use large language models such as ChatGPT to generate highly personalized and convincing phishing emails. 

    These programs can gather information from across the internet to craft a well-written, grammatically correct, and highly personalized email that can even emulate the style and tone of the supposed sender if a writing sample can be provided. Combine this with a deepfake or robocall, and these schemes become very difficult to detect. 

    A study published by Harvard showed that 60% of participants were deceived by AI-generated phishing messages, a rate similar to that of human-crafted schemes. Meanwhile, AI tools can cut the cost of running these scams by more than 95%(7). 

    Despite using AI, these schemes can be detected using similar criteria to other phishing scams. Poor formatting, generic addresses, a sense of urgency, and suspicious links or attachments are all tell-tale signs of a phishing email. 
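    For the technically inclined, the red flags above can even be checked programmatically. The sketch below is a minimal, illustrative heuristic in Python; the keyword lists, regex patterns, and flag names are our own assumptions for demonstration, not a production spam filter.

```python
import re

# Illustrative keyword lists -- assumptions for this sketch, not exhaustive.
URGENCY_PHRASES = ["act now", "immediately", "account suspended", "within 24 hours"]
GENERIC_GREETINGS = ["dear customer", "dear user", "dear account holder"]

def phishing_red_flags(sender: str, body: str) -> list[str]:
    """Return a list of phishing red flags found in an email."""
    flags = []
    text = body.lower()
    # Sense of urgency is a classic pressure tactic.
    if any(p in text for p in URGENCY_PHRASES):
        flags.append("sense of urgency")
    # Legitimate senders usually address you by name.
    if any(g in text for g in GENERIC_GREETINGS):
        flags.append("generic greeting")
    # Links pointing at a raw IP address instead of a domain.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        flags.append("link to a raw IP address")
    # A free-mail address claiming to be official support.
    if re.search(r"@(gmail|outlook|yahoo)\.com$", sender.lower()):
        flags.append("free-mail sender address")
    return flags

print(phishing_red_flags(
    "support@gmail.com",
    "Dear customer, your account suspended. Verify at http://192.0.2.1/login immediately."))
```

    A real detector would weigh many more signals (link reputation, SPF/DKIM results, sender history), but even a checklist like this shows why "urgency plus a suspicious link" should make you pause.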

    AI Robocalls: The Voice of Deception 

    AI technology is also capable of cloning the voice and cadence of an individual with just a small speech sample. Additionally, an assessment from Consumer Reports found that four of the top six voice cloning tools simply required the user to check a box stating that the owner of the voice had consented to its use(8). 

    This technology can be used to conduct robocalls for vishing scams or to push an agenda. This can range from a scam call supposedly coming from your boss all the way to a fake PSA from the president telling citizens not to vote(9). 

    Scammers combine this with generative AI’s ability to create scripts that mimic natural speech patterns. It can even be prompted to speak like a well-known figure. This recipe for realistic robocalls has been exploited for cyber extortion. 

    While the technology becomes increasingly convincing, there are still characteristics that make robocalls recognizable. Most robocalls stay true to their name, sounding robotic and moving at an unnatural pace. Some voice cloning scams have also struggled to read large numbers aloud, as in the Elon Musk diabetes cure scam(10). 

    Recognizing the Red Flags: Techniques for AI Scam Detection 

    To avoid these technologically driven scams, you need to know how to detect them. We’ve already discussed some recognizable characteristics; the following techniques can help you pull back the mechanical mask of an AI scam. 

    • Use AI and deepfake detection software, which can determine whether a piece of media has been digitally manipulated. 
    • Take a screenshot of a suspected deepfake video and perform a reverse image search to see if any results reveal the deception. 
    • Take a zero-trust approach to any online interaction with someone whose identity you aren’t completely sure of. 
    • Continue educating yourself on the common tactics used in these schemes. The landscape of cybercrime is ever evolving, so continual research and education is necessary. 

    Handling all of this on your own can be a daunting task. Consider seeking the aid of cybersecurity professionals who are well-versed in combating AI scams. 

    Digital Forensics Corp.’s Role in Combating AI Scams 

    Here at DFC, we are ready to assist you with any AI scam or cybercrime you may be facing. Our team of experts has years of experience handling cases just like yours. Some of the ways we can help you include: 

    • We use proprietary techniques and advanced technology, such as metadata analysis and frame-by-frame examination, to detect deepfakes and AI-generated material. 
    • We’ve worked with thousands of clients to help them properly collect and preserve evidence of cybercrimes and identify and prevent fraud threats. 
    • Our in-house legal counsel and connection with law enforcement agencies around the world can help you navigate the challenges of legally pursuing the perpetrator. 

    Prevention and Protection: Staying Safe from AI Scams 

    Taking action to safeguard yourself from AI scams is the best way to prevent fraud and other damages. Some measures you can take to avoid becoming a victim include: 

    • Use strong, unique passwords for each online account and perform regular password updates. A password manager can help generate strong passwords and keep track of them for you. 
    • Enable two-factor authentication on any platform that offers it to prevent bad actors from accessing your account even if they obtain your passwords. 
    • Keep software up to date on all of your devices. Outdated software often has security vulnerabilities that can be exploited. 
    • Avoid oversharing online. Do not give out any personal information in exchanges on the internet. 
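    The first tip above is worth a concrete illustration. A password manager's generator essentially does the following; this is a minimal sketch using Python's standard `secrets` module, with the length and character set chosen as reasonable assumptions.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password, as a password manager would."""
    # Letters, digits, and punctuation; 16 characters is an assumed default.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from a cryptographically secure source,
    # unlike the general-purpose random module.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different strong password on every run
```

    The key point is uniqueness: because each password is random and never reused, a breach of one account cannot be leveraged against your others.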

    Like any cybercrime, AI scams should be reported to the platform on which they occur and to the police. Doing so can help bring the perpetrator to justice and prevent future fraud attempts toward you and others. 

    Working with cybersecurity professionals like the ones at DFC can help you collect the necessary evidence to file these reports and give you the best chances at successful litigation. 

    DFC Fights Against Cyber Extortion 

    The rise of AI has made online fraud and blackmail more difficult to prevent by making it easier for cybercriminals to gain their victims’ trust. This makes it incredibly important for individuals to proactively stay informed and vigilant. 

    However, you don’t need to take on that challenge alone. Working with DFC can help you detect possible AI scams, collect evidence if they’ve already occurred, and put necessary safeguards in place to avoid them in the future. 

    If you have been the victim of blackmail, extortion, fraud, or any other cybercrime stemming from an AI scam, you need to act fast. Call us today for a free consultation with one of our experts. 

    Sources: 

    1. Americans Lost $108M To AI Scams (Government Data) 
    2. Generative AI funding reached new heights in 2024 | TechCrunch 
    3. Man Scammed by Deepfake Video and Audio Imitating His Friend 
    4. ‘AI deepfake romance scam duped me out of £17k’ 
    5. Edmontonians lost $1.9M in investment scams featuring AI versions of Trudeau, Musk: EPS 
    6. Internet Crime Complaint Center (IC3) | Malicious Actors Manipulating Photos and Videos to Create Explicit Content and Sextortion Schemes 
    7. AI Will Increase the Quantity — and Quality — of Phishing Scams 
    8. AI Voice Cloning Report 
    9. Criminal charges and FCC fines issued for deepfake Biden robocalls : NPR 
    10. Facebook scammers want you to think Elon Musk can cure diabetes