
ACFCS Special Contributor Report: How Proactive Digital Detection Is the New Frontline Against Financial Crime

ACFCS welcomes perspectives from across the financial crime prevention community. The opinions expressed in this piece are the views of the author and do not necessarily represent the views of ACFCS or its members.

The skinny:

  • In this special ACFCS Contributor Report, a law student addresses the responsibilities of tech providers, whose platforms are major enablement tools in financial crime, for preventing and detecting financial crimes;

  • the social implications of AI for youth and adults, and how technology has shifted social perspectives on risk;

  • and what tech companies can learn from the BSA framework to better protect our communities.

 

A Contributor Perspective by:
Jacqueline Kitzes, MBA
A third-year law student (3L) at the Benjamin N. Cardozo School of Law & co-founder of the AI and the Law Society
December 15, 2025
jacquelinekitzes@gmail.com

With editing and minor content contributions from ACFCS staff

Generative artificial intelligence (AI) is fundamentally reshaping the landscape of financial crime, accelerating the speed, scale, and sophistication of illicit activity across fraud, exploitation, and trafficking.

Criminals now leverage AI to automate deception, obscure identity, and operate across digital ecosystems at a pace that outstrips the traditionally reactive mechanisms of financial-sector detection. As a result, risk is increasingly created long before any transaction occurs—and often far outside the visibility of financial institutions.

This shift exposes a structural limitation in the current financial-crime framework. While banks and other regulated entities remain essential for detecting illicit financial flows, transaction-based monitoring is no longer sufficient on its own to keep pace with AI-enabled crimes. A teller, compliance analyst, or SAR-filing institution cannot reasonably be responsible for detecting harms that originate on technology platforms, in private digital spaces, and well before monetization occurs.

Instead, meaningful prevention now depends on technology companies recognizing their role as upstream risk holders and implementing safety and security principles within the products that enable these interactions.

The financial sector has already developed a mature regulatory playbook for this kind of risk: the Bank Secrecy Act (BSA). Through mechanisms such as Suspicious Activity Reports (SARs), Know Your Customer (KYC), and Enhanced Due Diligence (EDD), the BSA demonstrates how institutions can be required to monitor behavior, escalate risk, and report suspicious activity without bearing sole responsibility for criminal conduct.

By adapting these principles to AI-driven platforms, technology companies can move from passive intermediaries to active participants in financial-crime prevention—creating a model of ethical AI accountability that reflects the realities of modern, digitally mediated harm.

The Erosion of Digital Caution: AI as the New Recruiter

 
Many forms of financial crime and exploitation operate through a recognizable pipeline: an initial phase of recruitment or inducement, followed by increasing control, dependency, or coercion, and only in the later stages resulting in monetization, financial transfer, or laundering.
 
This pattern is visible not only in certain forms of human trafficking, but also in unwitting money-mule schemes, online fraud networks, and even extremist recruitment, where trust and ideological alignment are established long before any financial activity occurs.
 
Within human trafficking specifically, these dynamics vary by type. While some forms of sex trafficking involve prolonged grooming and emotional manipulation, labor trafficking more often relies on deceptive recruitment practices, economic pressure, and the abuse of vulnerability rather than intimate grooming.
 
In both contexts, traffickers consistently target individuals experiencing emotional, social, or economic instability—such as those seeking employment, belonging, safety, or shelter—laying the groundwork for exploitation well before financial indicators become visible.
 
[Image: Person in a hoodie with glowing AI symbols around them.]

The introduction of generative AI is fundamentally reshaping how vulnerability is created and exploited across age groups. While minors remain among the most at risk, adults are increasingly victimized through AI-enabled scam schemes, including romance fraud, investment scams, and social-engineering operations that leverage AI to simulate trust, emotional intimacy, and credibility at scale.

Platforms such as Meta’s Facebook and Instagram have been particularly susceptible to these dynamics, where AI-assisted impersonation, scripted engagement, and targeted manipulation facilitate prolonged deception before any financial loss occurs.

Users of all ages are increasingly drawn to AI companionship or advisory applications that simulate human interaction. Children may treat these systems as “therapists” or nonjudgmental friends, while adults may rely on them for advice, emotional support, or professional guidance.

This consistent, supportive validation from a screen can erode the instinctive caution that traditionally protected users from trusting unknown or malicious actors online.

That emotional openness often migrates to other digital spaces—gaming environments and social media platforms such as Snapchat, Roblox, Discord, Instagram, and Facebook—where traffickers or scammers can more easily pose as peers.

Critically, nearly one-third of teenage users report preferring to discuss serious matters with an AI companion rather than a real person, and approximately one-quarter acknowledge sharing personal information, including real names, locations, or private concerns, with these systems.

This vulnerability sets the stage for rapid, sophisticated recruitment tactics, often obscuring the identity and intent of malicious actors long before financial coercion begins.

The Evolving Threat Arsenal: Scale, Scripts, and Impersonation

Generative AI tools and large language models (LLMs) function as highly effective, scalable instruments for bad actors, streamlining recruitment, intensifying psychological manipulation, and deliberately obscuring red flags across a range of exploitative schemes—from human trafficking to financial scams, unwitting money-mule operations, and even recruitment for extremist or terrorist activity.

As an example in the human trafficking context, the grooming of victims used to be a slow, manual process dependent on the predator's individual skill set. AI completely transforms this process, enabling a single trafficker to generate individualized messages, tailor emotional responses, monitor profiles, and conduct multiple grooming conversations simultaneously. This exponential increase in capability turns grooming into a mass operation.1 

To maximize efficiency, a bad actor can use open-source LLMs to automate the introductory phase of grooming across dozens of targets: storing each individual’s personal details; mimicking slang, typing quirks, and insecurities; and generating emotionally calibrated responses continuously.

This level of scaling allows psychological manipulation to proceed effectively for months before any financial transaction even registers.

Traffickers deploy AI to avoid the awkward phrasing or delayed responses that once exposed imposters, creating convincing personas that instantly build trust.

They achieve persuasive deception by utilizing AI translation tools to craft culturally nuanced messages that resonate with victims. Furthermore, they leverage AI for deepfake creation, generating non-consensual exploitative material (AI-generated child sexual abuse material, known as AIG-CSAM, or "deepfake nudes") that is then weaponized for coercion.2

Imagine, for example, that a bad actor seeks to quickly build rapport with a target on a gaming platform, social media site, or dating app. For minors, AI may craft messages to mimic a peer or provide emotional support. For adults, AI can simulate a trustworthy financial advisor, a romantic interest, or a business partner, tailoring language and tone to exploit insecurities, build trust, and induce the sharing of sensitive information.

The AI instantly produces emotionally calibrated scripts, eliminating the linguistic vulnerabilities that might otherwise serve as early warning signs.3

The Financial Crime Blind Spot: Why SARs Come Too Late

Under the BSA, financial institutions are mandated to conduct due diligence, monitor transactions, and file Suspicious Activity Reports (SARs) when illicit financial activity is detected.

This framework is essential for following the financial trail, disrupting criminal networks, and uncovering the profit motive behind a range of crimes, including human trafficking, fraud schemes, money-laundering operations, and other financial exploitation.

However, reliance solely on transactional monitoring means intervention only occurs during the later stages of the crime. By the time a financial indicator triggers a SAR, the victim has already suffered the primary psychological trauma of recruitment, grooming, and coercion.

This blind spot is precisely where AI-enhanced exploitation thrives, using increased sophistication and anonymity to reduce the likelihood of detection.4

If, for instance, a 16-year-old is groomed online using AI-generated emotional manipulation, or an adult is targeted through an AI-assisted romance scam or investment fraud scheme, financial indicators often appear only after exploitation has begun, manifesting as transfers to high-risk jurisdictions, crypto micro-transactions, or fraudulent payments. By the time a financial institution files a SAR on these transactional anomalies, the victims have already suffered irreversible harm.

The earliest, most critical warning signs were the digital grooming behaviors that could potentially be detected months earlier, not the financial ones.5 

The AI-BSA Imperative: Upstream Accountability for Tech

The effectiveness of current crime-prevention efforts hinges on proactive identification and mitigation of risk. AI-enabled platforms increasingly serve as vectors for a wide range of illicit activity, from human trafficking and financial scams to cyber-enabled attacks, including large-scale espionage and fraud campaigns.

As a result, the onus must shift to technology companies—including social media platforms, AI companion applications, and other digital services—to monitor their products and implement safety, security, and ethical-use principles.

Financial institutions and law enforcement remain essential partners, but meaningful prevention now requires that tech companies act as upstream risk holders, detecting and mitigating behavioral and transactional indicators of illicit activity before harm occurs.

Framework for Accountability: The AI-BSA Imperative

The exponential rise of generative AI has effectively moved the most critical stages of exploitation—recruitment and grooming—into a regulatory blind spot, necessitating an expansion of accountability modeled on financial-crime regulation.

The established Bank Secrecy Act requires financial institutions to monitor the flow of funds, using tools such as transaction monitoring, Customer Due Diligence (CDD), Enhanced Due Diligence (EDD), and SARs to expose criminal networks.

This framework provides a potent blueprint for shifting responsibility upstream to technology companies that host platforms enabling pre-crime victimization. Specifically, this proposed AI-BSA framework would require the technology sector to implement analogous controls.6

AI SARs on Digital Behavior

The traditional financial SAR fails to intervene in time, as indicators are generated only in the later stages of trafficking, long after psychological trauma has occurred.

Therefore, tech companies should be mandated to file AI-SARs—Suspicious Activity Reports on digital behavior—which proactively identify and report high-risk, non-financial behavioral patterns indicative of mass grooming, coercion, or the creation and distribution of AI-Generated Child Sexual Abuse Material (AIG-CSAM).

Detecting signs of suspicious activity, including concerning conversations or fabricated image use, must become a central part of technology platforms' duty to prevent crime.7 
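
To make the concept more concrete, the following is a minimal sketch, written in Python, of how a platform might score non-financial behavioral red flags and draft an AI-SAR record once a reporting threshold is crossed. The indicator names, weights, and threshold are hypothetical assumptions for illustration, not an established typology or a proposed production system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical behavioral indicators and weights; illustrative only.
RISK_INDICATORS = {
    "requests_to_move_off_platform": 3,
    "requests_for_personal_details": 2,
    "coercive_or_threatening_language": 4,
    "synthetic_or_manipulated_imagery": 4,
    "many_parallel_first_contacts": 3,   # scripted, mass-outreach behavior
}

FILING_THRESHOLD = 6  # illustrative escalation threshold


@dataclass
class AISar:
    """Minimal stand-in for a Suspicious Activity Report on digital behavior."""
    subject_account: str
    indicators: list[str]
    risk_score: int
    narrative: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def evaluate_account(account_id: str, observed: list[str]) -> AISar | None:
    """Score observed behavioral indicators and draft an AI-SAR if warranted."""
    score = sum(RISK_INDICATORS.get(flag, 0) for flag in observed)
    if score < FILING_THRESHOLD:
        return None  # below threshold: keep monitoring, no report yet
    narrative = (
        f"Account {account_id} exhibited {len(observed)} behavioral indicators "
        f"consistent with grooming or coercion (score {score})."
    )
    return AISar(account_id, observed, score, narrative)


# Example: off-platform requests plus coercive language crosses the threshold.
report = evaluate_account(
    "acct-0001",
    ["requests_to_move_off_platform", "coercive_or_threatening_language"],
)
if report:
    print(report.narrative)
```

In practice, the indicators, scoring, and escalation logic would need to be developed with regulators, law enforcement, and survivor advocates, much as SAR typologies have evolved under the BSA.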

AI KYC (Know Your User)

Improved verification and account assurance are essential for all users, not just minors, to prevent AI platforms from being exploited by bad actors.

This enhanced layer of vigilance is necessary to combat the anonymity and sophistication that generative AI affords bad actors, whether targeting minors, defrauding adults, or orchestrating large-scale corporate or cyber-enabled attacks.

Yet most AI platforms rely only on self-attestation, providing little real protection. If AI companies were required to implement robust, verifiable account-creation checks (AI-KYC), they could reduce access for high-risk users and bad actors from the outset.

AI-KYC lays the foundation for broader oversight: knowing who is creating an account is just the first step. Customer Due Diligence (CDD) involves understanding how an individual intends to use the platform, while Enhanced Due Diligence (EDD) is triggered when risk indicators suggest potentially harmful activity.

For example, if AI systems monitored for key phrases or behavioral patterns indicative of coercion, grooming, or other illicit intent, platforms could flag risky accounts early and limit their ability to harm others. Applied in combination, AI-KYC, CDD, and EDD establish a proactive framework to detect and prevent exploitation before it escalates.
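
As a simplified illustration of how these layers could fit together, the sketch below, again in Python, assigns a hypothetical account to a baseline AI-KYC tier, a CDD tier, or an EDD tier based on assumed inputs such as a declared-use category and behavioral risk flags. All categories and triggers are assumptions made for illustration only.

```python
from enum import Enum


class DiligenceTier(Enum):
    AI_KYC = "baseline identity verification at account creation"
    CDD = "customer due diligence on intended use"
    EDD = "enhanced due diligence for flagged accounts"


def assign_tier(identity_verified: bool,
                declared_use: str,
                risk_flags: list[str]) -> DiligenceTier:
    """Map hypothetical onboarding and behavioral signals to a diligence tier."""
    if not identity_verified:
        # Fail closed: unverified accounts get more scrutiny, not less.
        return DiligenceTier.EDD
    if risk_flags:
        # Behavioral triggers (e.g., coercive language) escalate review.
        return DiligenceTier.EDD
    if declared_use in {"bulk_messaging", "persona_generation"}:
        # Higher-risk stated uses warrant a closer look at intent.
        return DiligenceTier.CDD
    return DiligenceTier.AI_KYC


print(assign_tier(True, "personal_assistant", []))                     # AI_KYC
print(assign_tier(True, "bulk_messaging", []))                         # CDD
print(assign_tier(True, "personal_assistant", ["coercive_language"]))  # EDD
```

The design point is the same one the BSA makes: scrutiny should scale with risk, beginning at account creation and intensifying as indicators accumulate.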

AI EDD: Enhanced Due Diligence for AI Platforms

Generative AI allows criminals to automate and scale grooming operations across dozens of victims simultaneously, dramatically increasing the reach and efficiency of illicit activity.8 

Building on the EDD concept, AI-EDD would require technology platforms to apply heightened scrutiny and system-level monitoring to users or groups flagged for high-risk behaviors.9 

This enhanced layer of vigilance is necessary to combat the anonymity and sophistication that generative AI affords traffickers.
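
A brief sketch of what that heightened scrutiny might look like in practice follows; the control names are hypothetical and are not drawn from any existing platform's tooling.

```python
# Illustrative EDD-style controls a platform might apply to a flagged account.
# Action names are hypothetical and not drawn from any existing platform API.
def apply_enhanced_monitoring(account_id: str, risk_score: int) -> dict[str, list[str]]:
    """Return the heightened controls applied to an account flagged for AI-EDD."""
    actions = [
        "route_new_conversations_to_human_review",  # human-in-the-loop oversight
        "preserve_context_for_potential_ai_sar",    # retain evidence for reporting
    ]
    if risk_score >= 8:  # illustrative cutoff for the highest-risk accounts
        actions += [
            "rate_limit_outbound_first_contacts",   # blunt mass-grooming scripts
            "restrict_synthetic_media_generation",  # curb deepfake and AIG-CSAM abuse
        ]
    return {account_id: actions}


print(apply_enhanced_monitoring("acct-0001", risk_score=9))
```

The specific controls matter less than the escalation pattern borrowed from EDD: once risk indicators accumulate, the platform's obligations intensify rather than remain static.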

In Summary

Technology companies bear a significant responsibility to ensure their platforms do not become venues for exploitation.

Specifically, they must implement Safety by Design principles, evaluating how their tools could be misused to generate harmful or abusive content, and acting decisively to prevent harm before deployment.

This must include developing AI safety features designed to detect the creation of personas aligned with trafficking typologies and to scan for language containing threats or coercion.

AI technology presents a dangerous challenge, enabling criminals to operate at scale and with convincing deception.

However, applying the principles of financial crime compliance to the digital sphere—shifting the focus from detecting ill-gotten proceeds to detecting the pre-crime tactics of grooming and recruitment—is the necessary next step in protecting victims and holding institutional enablers accountable.

In this evolving digital landscape, prevention is the only true form of protection.

About the author

Jacqueline Kitzes, MBA, is a third-year law student at the Benjamin N. Cardozo School of Law and a co-founder of the school’s AI and the Law Society.

She is currently enrolled in Professor Barry Koch’s Global Corporate Compliance course, where her studies include issues at the intersection of emerging technologies, corporate governance, and financial crime compliance.

Jacqueline is interested in corporate compliance, governance, and the evolving role of technology in risk management and regulatory frameworks.
