Contents
- What is a digital scam
- Why are scams getting harder to spot
- If scams are so obvious, why do smart people still fall for them
- What AI changes
- Where scams begin
- The scam types people should know first
- What brand impersonation means
- What cross platform scams look like
- Why crypto scams matter in this report
- What the evidence says about exposure and harm
- Why normal defenses do not solve the problem
- What people should do in real life
- What kind of training really helps
- What platforms and institutions should do
- If someone remembers only one thing, what should it be
- What is the bottom line for 2026
- References
This report is written for readers who are new to the topic. It explains what today’s digital scams look like, why they are getting harder to spot, how AI is changing the threat, and what people can do to stay safer. The main lesson is simple: modern scams do not usually look like obvious spam anymore. They often look polished, patient, and believable. They also move across platforms, starting in one place and finishing in another.
What is a digital scam
A digital scam is a trick that uses phones, websites, email, text messages, social media, ads, comments, or messaging apps to steal money, passwords, personal details, or trust. Some scams try to get payment right away. Others take days or weeks to build a relationship before asking for money or access.
Why are scams getting harder to spot
Because the old warning signs are weaker than they used to be. Bad spelling, messy design, and clumsy wording used to be useful clues. Now many scams look polished. They use clean branding, natural language, fake customer support, and realistic websites. Some even use long conversations to build trust before the real trap appears.
If scams are so obvious, why do smart people still fall for them
Because modern scams are not built around stupidity. They are built around pressure, trust, timing, and emotion. A person may be rushed, worried, lonely, hopeful, embarrassed, or distracted. A scammer only needs the right message at the right moment. The research shows that scams work by creating believable situations, not by finding foolish people.
This matters because shame is part of the problem. People who feel embarrassed are less likely to report what happened, which lets the scam continue hurting others.
What AI changes
Is AI creating totally new scams
Mostly no. The strongest evidence suggests that AI is making old scams stronger, faster, and cheaper rather than replacing them with something completely new. The basic tricks are still familiar: impersonation, fake support, fake investment opportunities, phishing, urgent warnings, job scams, and romance-style trust building. What AI adds is scale, speed, and realism.
So what does AI actually do for scammers
AI helps scammers in four main ways:
| What AI helps with | What that means in plain language |
|---|---|
| Better writing | Scam messages can sound fluent, calm, and professional |
| Better targeting | Messages can be shaped to fit a person, workplace, or interest |
| More automation | Scammers can produce and test many messages very quickly |
| Better conversations | Scams can feel like real back-and-forth chats instead of one bad email |
The key point is that AI removes many of the clues people used to rely on. A message can sound normal and still be dangerous.
Where scams begin
Where do people usually run into scams now
Scams now begin in many places, not just email. Research points to social media, online ads, comment sections, text messages, search results, fake support replies, and direct messages as major entry points. One large exposure study found about 149,000 devices encountering scam domains each day, with social media ads playing a major role in driving people into scam sites.
Are comment sections and ads really dangerous
Yes. Many people think of ads and comments as background noise, but they are active scam surfaces. Research on media platforms found large-scale comment scams that impersonated creators, staged fake reply threads, and pushed users into private chats on Telegram or WhatsApp. Research on scam exposure also found that ad clicks, especially on social media, were a major path into scam domains.
The scam types people should know first
What are the most important scam types to understand
For a beginner, these are the most important ones:
| Scam type | How it usually works | Why it is dangerous |
|---|---|---|
| Phishing | A message pushes you to click a link or open a fake login page | It steals passwords or account access |
| Quishing | A QR code leads to the same kind of trap | People often treat QR codes as harmless images |
| Fake support | A scammer pretends to be customer service or tech help | Victims may hand over payment or recovery phrases |
| Brand impersonation | A fake account copies a real company name, logo, or support style | It feels routine and trusted |
| Investment scams | A site or person promises easy returns and shows fake profits | Losses can become very large |
| Pig butchering | A scammer builds trust over time, then pushes fake investments | It mixes emotion and money |
| Comment scams | A scam starts in public replies under a video or post | It moves people into private chats quickly |
| Smishing | A text message pushes you toward a link, call, or reply | Mobile users often react fast under pressure |
These scams often overlap. One scam can begin as impersonation, move into phishing, and end in payment theft.
What brand impersonation means
Why are fake brand accounts such a big problem
Because many people are trained to trust familiar names. If a fake account looks like a bank, a payment company, a streaming service, or a customer support page, the contact can feel normal instead of dangerous. One study found more than 349,000 squatted social media accounts targeting 2,625 major brands. Many copied names, logos, and support language closely enough to look believable at a glance.
What kinds of harm follow from brand impersonation
Brand impersonation is not just annoying. It can lead to password theft, fake account recovery, fake recruitment, fake giveaways, fake discounts, and direct payment scams. It is often the first step that makes the rest of the scam feel legitimate.
What cross platform scams look like
Why do scammers try to move people off the first platform
Because the first platform is often only the hook. The real scam usually becomes easier once the target moves into a private space with less moderation and less public visibility. A scam may start with a tweet reply, a YouTube comment, a text message, or a social media ad, then shift to email, a web form, Telegram, WhatsApp, or a fake investment site.
Why is that move so important
Because once the conversation moves, the scammer gains more control. They can answer questions, build trust, create urgency, and keep the victim away from public warning signs. In simple terms, public contact gets attention, but private contact gets payment.
Why crypto scams matter in this report
Are crypto scams really a big part of the problem
Yes. The recent literature treats cryptocurrency as one of the biggest scam surfaces in the current ecosystem. It appears in giveaway scams, fake investment platforms, fake recovery services, and pig butchering. One study detected 43,572 unique cryptocurrency investment scam sites in only eight months of 2024. Another showed how fake support scammers waited for people asking wallet questions online, then pulled them into private channels and stole either direct payments or secret recovery phrases.
Why are crypto scams so effective
Because they mix three things that are powerful together: technical confusion, urgency, and poor recovery options. Once money or wallet credentials are sent, recovery is often difficult or impossible. Scam sites also use fake profit dashboards, fake certificates, and false withdrawal signals to make the fraud feel real.
What the evidence says about exposure and harm
How common is scam exposure
Very common. Survey evidence across 12 countries found widespread scam exposure, with about 15 percent of internet users reporting money lost in the prior year. Social media was the single most common first contact channel in that study. Large-scale device telemetry also showed daily exposure at very high volume, which supports the idea that scams are not rare edge cases.
Are these mostly small losses
Not always. Some losses are small, but many scam systems are built to keep extracting more over time. Research on pig butchering linked the problem to very large losses. Research on crypto investment scams and comment scams also found evidence of substantial financial harm and weak blocklist coverage.
Why normal defenses do not solve the problem
Do spam filters, browser warnings, and blocklists protect people well enough
Not by themselves. Several studies found that many live scam sites are missed by blocklists and that detection often happens too late. One large scam exposure study found that most scam domains were seen by users before they appeared in commercial scam feeds. Another found poor coverage for crypto investment scam sites. Comment scam research found almost none of the linked malicious investment sites on existing blocklists.
Why is detection still so weak
Because most detection systems are narrow while real scams are mixed and fast moving. Research reviewing AI-based scam detection found that models often work on one scam type only, depend on outdated data, and do not generalize well as scam tactics change.
What people should do in real life
What is the safest mindset to have
The safest mindset is not panic. It is pause. The goal is not to trust nothing. The goal is to slow down when money, urgency, secrecy, or account access enters the conversation.
What are the most important rules to follow
| Rule | Why it matters |
|---|---|
| Treat QR codes like links | They can lead to the same traps as phishing emails |
| Do not trust a message just because it looks polished | Good design and good grammar are no longer proof of safety |
| Be extra careful when a chat moves from public to private | That move is a common step in many scam flows |
| Do not trust support accounts just because they use a real logo or brand name | Fake support is now common across social platforms |
| Be skeptical of urgent money requests, secret offers, and guaranteed returns | These are core scam tactics across many scam families |
| Never share one-time codes, recovery phrases, or passwords in chat | Legitimate services do not ask for them in chat |
| Report suspicious messages early | Reporting is imperfect, but early reports can still help stop spread |
These rules sound simple, but they match the best-supported patterns in the recent evidence. [2], [3], [6], [7], [8], [12]
What kind of training really helps
Does scam awareness training work
Sometimes, but not all training is equally good. The most useful studies do not support one-time generic tips as the best answer. They support practice-based training that teaches people how manipulation works.
What kind of training seems to work best
Three approaches stand out:
| Training style | What it does well | Main caution |
|---|---|---|
| Interactive practice | Helps people judge real versus fake examples | The effect can fade over time |
| Inoculation-style games | Teaches common manipulation tactics in a memorable way | People may become briefly overcautious right after training |
| Simulated scam conversations | Lets people practice responding in a safer learning setting | Still needs wider real world testing |
The simple lesson is that people learn better by practicing scam situations than by reading a short list of tips once and forgetting them later.
What platforms and institutions should do
Is this only an individual responsibility problem
No. The research strongly suggests that this cannot be solved by telling users to be more careful. Platforms, companies, payment systems, and public agencies all shape the scam environment.
What should they improve first
The evidence points to a short list of high value improvements:
- Faster removal of fake brand and fake support accounts
- Better review of ads and landing pages on social platforms
- Better sharing of cross platform scam signals
- Earlier blocking of scam infrastructure before campaigns fully launch
- Safer reporting systems with clear outcomes and feedback
- Detection systems that adapt to changing scam patterns instead of relying on old data
If someone remembers only one thing, what should it be
The most important point is this: modern scams often look normal at first. They may begin with a helpful reply, a clean ad, a brand logo, a QR code, a creator comment, a job message, or a friendly chat. The danger is usually not in the first contact alone. It appears as the interaction deepens, moves across channels, and starts asking for money, codes, passwords, recovery phrases, or secrecy.
What is the bottom line for 2026
The scam world of 2026 is shaped by the meeting of old manipulation and new technology. AI makes scams easier to write, target, and scale. Social platforms, ads, comments, and private messaging give scammers many ways to reach people. The best response is layered: calmer decisions by users, better design by platforms, better reporting, better detection, and training that teaches how scams work instead of only what they look like.
References
- [1] M. Schmitt and I. Flechais, “Digital deception: generative artificial intelligence in social engineering and phishing,” Artificial Intelligence Review, vol. 57, Oct. 2023, doi: 10.1007/s10462-024-10973-2.
- [2] B. Acharya et al., “Conning the Crypto Conman: End-to-End Analysis of Cryptocurrency-based Technical Support Scams,” 2024 IEEE Symposium on Security and Privacy (SP), pp. 17-35, Jan. 2024, doi: 10.1109/SP54263.2024.00156.
- [3] X. Li, A. Rahmati, and N. Nikiforakis, “Like, Comment, Get Scammed: Characterizing Comment Scams on Media Platforms,” Proceedings 2024 Network and Distributed System Security Symposium, 2024, doi: 10.14722/ndss.2024.24060.
- [4] P. Kotzias, M. Pachilakis, J. Aldana-Iuit, J. Caballero, I. Sánchez-Rola, and L. Bilge, “Ctrl+Alt+Deceive: Quantifying User Exposure to Online Scams,” Proceedings 2025 Network and Distributed System Security Symposium, 2025, doi: 10.14722/ndss.2025.241816.
- [5] M. Houtti, A. Roy, V. N. R. Gangula, and A. Walker, “A Survey of Scam Exposure, Victimization, Types, Vectors, and Reporting in 12 Countries,” ArXiv, vol. abs/2407.12896, Jul. 2024, doi: 10.48550/arXiv.2407.12896.
- [6] R. Oak and Z. Shafiq, “Hello, is this Anna? : Unpacking the Lifecycle of Pig-Butchering Scams,” Symposium on Usable Privacy and Security, pp. 1-18, Mar. 2025.
- [7] M. Weinz, N. Zannone, L. Allodi, and G. Apruzzese, “The Impact of Emerging Phishing Threats: Assessing Quishing and LLM-generated Phishing Emails against Organizations,” Proceedings of the 20th ACM Asia Conference on Computer and Communications Security, May 2025, doi: 10.1145/3708821.3736195.
- [8] B. Acharya et al., “The Imitation Game: Exploring Brand Impersonation Attacks on Social Media Platforms,” USENIX Security Symposium, 2024.
- [9] M. Muzammil, A. Pitumpe, X. Li, A. Rahmati, and N. Nikiforakis, “The Poorest Man in Babylon: A Longitudinal Study of Cryptocurrency Investment Scams,” Proceedings of the ACM on Web Conference 2025, Apr. 2025, doi: 10.1145/3696410.3714588.
- [10] C. Robb and S. Wendel, “Who Can You Trust? Assessing Vulnerability to Digital Imposter Scams,” Journal of Consumer Policy, vol. 46, pp. 27-51, Dec. 2022, doi: 10.1007/s10603-022-09531-6.
- [11] A. Roy, V. N. R. Gangula, and S. Mukherjee, “ShieldUp!: Inoculating Users Against Online Scams Using A Game Based Intervention,” ArXiv, vol. abs/2503.12341, Mar. 2025, doi: 10.48550/arXiv.2503.12341.
- [12] Z. Sun et al., “From Victims to Defenders: An Exploration of the Phishing Attack Reporting Ecosystem,” Proceedings of the 27th International Symposium on Research in Attacks, Intrusions and Defenses, Sep. 2024, doi: 10.1145/3678890.3678926.
- [13] A. Papasavva et al., “Applications of AI-Based Models for Online Fraud Detection and Analysis,” Crime Science, vol. 14, Sep. 2024, doi: 10.1186/s40163-025-00248-8.
- [14] L. Ai et al., “Defending Against Social Engineering Attacks in the Age of LLMs,” Conference on Empirical Methods in Natural Language Processing, pp. 12880-12902, Jun. 2024, doi: 10.18653/v1/2024.emnlp-main.716.
- [15] A. Nahapetyan et al., “On SMS Phishing Tactics and Infrastructure,” 2024 IEEE Symposium on Security and Privacy (SP), pp. 1-16, May 2024, doi: 10.1109/SP54263.2024.00169.
- [16] B. Acharya and T. Holz, “An Explorative Study of Pig Butchering Scams,” ArXiv, vol. abs/2412.15423, Dec. 2024, doi: 10.48550/arXiv.2412.15423.
- [17] O. Hoffman, K.-Y. Peng, S. Kamal, Z. You, and S. Venkatagiri, ScamPilot: Simulating Conversations with LLMs to Protect Against Online Scams. 2026. doi: 10.1145/3772318.3791313.
