Scam at Scale: How AI Industrializes Victim Targeting

Merkle Science
June 3, 2025

Like savvy entrepreneurs, scammers aim to succeed at scale. They optimize for efficiency, experiment to boost their success rates, and chase the highest possible returns—their currency, however, is deception. 

Increasingly, these bad actors are leveraging automation and AI to streamline the entire fraud process, from identifying and grooming targets to executing and sustaining long-running crypto schemes. In this article, we examine how scammers are building fake identities at scale, customizing outreach through data-driven targeting, and using AI-driven chat tools to manage multiple victims simultaneously. 

Automated Persona Creation and A/B Tested Scripts

Think about how long it takes you to fill out a social media profile. You might spend far too much time fine-tuning how your background, experiences, and personality are presented to the world. Scammers face the same challenge—but at scale. They must fabricate entire personas from scratch, often by the hundreds or thousands. And they have to strike a delicate balance: the profiles must be attractive and engaging enough to earn attention and build trust, especially when offering financial “advice” as part of a pig butchering scam. But if they seem too perfect—or too generic—they risk triggering suspicion and being flagged as fake.

In a public service announcement issued in December 2024, the Federal Bureau of Investigation (FBI) warned that scammers were increasingly using generative AI to create large volumes of fake social media profiles. These AI-generated personas often include profile photos, life updates, and even private “selfies,” and are used in romance scams, confidence fraud, investment schemes, and spear-phishing campaigns. AI-generated images carry one key advantage over stolen photos: they won’t appear in reverse image searches, making them harder for skeptical targets to detect and more likely to pass as real.

These images don’t just showcase the scammer’s face—they construct an entire illusion of success. Scammers often supplement profile photos with visuals of luxury: exotic vacations, fine dining, high-end fashion, and expensive cars. The intent is to create “social proof”—curated signals of wealth designed to convince victims that they’re dealing with a sophisticated, successful investor. These lifestyle cues play a critical role in establishing credibility and building emotional trust early in the interaction.

But scammers aren’t just refining how they look—they’re also optimizing how they communicate. Borrowing tactics from digital marketing, they now deploy A/B testing to determine which messages resonate most with potential victims. By experimenting with different greetings, emotional appeals, or investment hooks on a small sample, they can identify the highest-converting scripts and scale them across hundreds of interactions. 
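
To make the mechanics concrete, the sketch below shows the arithmetic behind any two-variant test: a standard two-proportion z-test on reply rates. Every variant name and count in it is hypothetical; the point is that picking a “winning” script takes only a few lines of statistics, whether the product is a newsletter or a fraud.

```python
# Illustrative only: the standard two-proportion z-test used to compare
# conversion rates between two variants. All names and counts are made up.
from math import sqrt, erf

def conversion_z_test(hits_a, n_a, hits_b, n_b):
    """Return (z, two-sided p-value) for the difference between two rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two normal tails
    return z, p_value

# Variant A (plain greeting): 14 replies out of 200 messages sent.
# Variant B (investment hook): 31 replies out of 200 messages sent.
z, p = conversion_z_test(14, 200, 31, 200)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p -> B "wins" and gets scaled up
```

The same arithmetic that tells a marketer which subject line to ship tells a fraud ring which opening message to mass-deploy.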

Some operations even layer on sentiment analysis, dynamically adjusting responses based on a target’s tone or hesitation. A skeptical victim, for instance, might receive a longer backstory or more rapport-building before the scammer makes their financial pitch. This kind of optimization turns scams into finely tuned funnels—where every word is tested, measured, and refined for maximum impact.
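
As a rough illustration of what such sentiment-gated branching reduces to, here is a minimal sketch built on the open-source VADER analyzer that ships with NLTK; the thresholds and branch names are invented for illustration.

```python
# A hypothetical sketch of sentiment-gated branching, using the open-source
# VADER analyzer bundled with NLTK. Thresholds and branch names are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

def next_step(target_message: str) -> str:
    """Route the conversation based on the target's apparent tone."""
    score = sia.polarity_scores(target_message)["compound"]  # in [-1, 1]
    if score < -0.3:
        return "rapport"  # skeptical or upset: longer backstory, no pitch yet
    if score > 0.3:
        return "pitch"    # warm and engaged: move toward the financial hook
    return "probe"        # neutral: keep asking questions

print(next_step("I don't trust this at all, it sounds like a scam"))  # rapport
```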

Use of AI to Scan and Profile Potential Victims on Social Platforms

If you read the anonymized accounts of pig butchering victims, one pattern becomes clear: there is no pattern. Victims span the full spectrum of society—recent graduates, retirees, seasoned professionals, single parents, and even happily married individuals. This demographic diversity suggests that scammers are casting a wide net, relying on a shotgun approach: they target anyone who engages, hoping emotional rapport and persistence will wear down resistance over time.

Yet beneath this broad targeting lies a more nuanced truth. Certain psychological or emotional triggers increase a person’s susceptibility to pig butchering scams. These include limited financial or digital literacy, high risk tolerance, excessive trust in online relationships, or recent emotional upheavals such as a breakup, divorce, or bereavement. While scammers may glean these signals manually by browsing profiles, it’s often difficult to do so at scale.

That’s where AI-driven profiling comes in. Scammers now use automated tools to mine social media and online footprints, building psychological profiles of prospective targets. These profiles are not just demographic—they identify emotional vulnerabilities such as loneliness, FOMO, or a need for validation. For instance, a single mother frequently posting about a painful breakup may be especially receptive to love-bombing, emotional support, and gentle persuasion. In her case, appealing to a desire not to disappoint a romantic interest may prove more effective than promising high investment returns.

The rise of AI allows scammers to move beyond generic manipulation, enabling deeply personalized, psychologically targeted fraud at scale.

Chatbots Can Maintain Dozens of Concurrent Scam Conversations

There are now entire large language models (LLMs) tailored specifically for criminal use—WormGPT being one of the most well-known. But in reality, scammers don’t need specialized tools. In one cybersecurity experiment, researchers simply prompted Mistral AI’s open-source model (available on Hugging Face) to act as a scammer’s assistant. The model quickly complied, attempting to extract a password through social engineering tactics—no hacking required.

According to Trend Micro, AI-powered chatbots are commonly used for the early stages of pig butchering scams. They handle the initial outreach, sending greetings, sparking interest, and keeping prospects engaged, before handing the conversation off to a human operator. The human takes over once emotional or financial manipulation begins, or when more complex communication is needed, such as video calls with real-time deepfake face-swapping and voice cloning. This “human-in-the-loop” workflow allows scammers to scale operations dramatically, even targeting niche language groups such as European Portuguese or Québécois French.
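
Mechanically, the handoff is the same escalation pattern a customer-support bot uses. Here is a minimal, purely hypothetical sketch of that routing logic; the trigger words and operator queue are invented for illustration.

```python
# A generic "human-in-the-loop" escalation pattern, the same shape as a
# customer-support bot's. Trigger words and the queue are hypothetical.
HANDOFF_TRIGGERS = ("invest", "wallet", "transfer", "deposit", "video call")

def route_message(message: str, operator_queue: list[str]) -> str:
    """Let the bot handle routine chat; escalate sensitive turns to a human."""
    if any(trigger in message.lower() for trigger in HANDOFF_TRIGGERS):
        operator_queue.append(message)  # a human operator takes over from here
        return "escalated_to_human"
    return "handled_by_bot"             # scripted greeting or small talk

queue: list[str] = []
print(route_message("How was your weekend?", queue))               # handled_by_bot
print(route_message("Could you help me set up a wallet?", queue))  # escalated_to_human
```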

The broad accessibility of both generative AI and conversational bots has made such scams not only scalable, but alarmingly common. A study by cybersecurity firm McAfee found that 26% of people say they—or someone they know—have been approached by an AI chatbot posing as a real person on a dating app or social media platform.

But this new wave of fraud carries a paradox: AI may be too good. Just as investors are warned that if something sounds too good to be true, it probably is, the same caution applies to digital conversations. If someone always responds in flawless grammar, never uses slang, and types with an eerie level of polish, it may not be a person at all. It could be an AI-powered chatbot, or a scammer relying on one.
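
Some of those signals can even be checked mechanically. The sketch below scores a conversation for the crude “too polished” cues just described; the slang list and signals are assumptions, and real bot detection is far more involved.

```python
# A naive heuristic sketch of the "too polished" cues described above.
# The slang list and signals are assumptions; real bot detection is
# far more sophisticated than this.
import re

SLANG = {"lol", "haha", "omg", "btw", "idk", "tbh", "gonna", "wanna"}

def polish_signals(messages: list[str]) -> dict:
    """Score a chat for crude signs of machine-like polish."""
    words = re.findall(r"[a-z']+", " ".join(messages).lower())
    return {
        "uses_slang": any(word in SLANG for word in words),
        "every_message_fully_punctuated": all(
            m.rstrip().endswith((".", "!", "?")) for m in messages
        ),
        "avg_words_per_message": round(len(words) / max(len(messages), 1), 1),
    }

chat = [
    "It is wonderful to make your acquaintance.",
    "I have considerable experience in digital asset investment.",
]
print(polish_signals(chat))
# No slang plus uniformly perfect punctuation is worth a second look.
```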

Conclusion 

As scammers industrialize their tactics using AI, automation, and psychological profiling, the scale and sophistication of crypto fraud are only accelerating. From fabricated personas to emotionally tailored scripts, today's scams are engineered for mass manipulation. In this landscape, traceability is essential. 

That’s where blockchain analytics comes in. Tools like Tracker, Merkle Science’s crypto tracing solution, help law enforcement and investigators follow the money, identify suspicious flows, and stop bad actors in their tracks. To stay ahead of AI-powered scams, reach out to Merkle Science for a free demo of Tracker and see it in action.