
Federal regulators to banking industry: How do you use AI to fight financial crime, train data, mold models, secure cyber, serve customers?


The skinny:

  • The five top regulators of the country’s financial services sectors are querying the industry about the pros, cons, potential and prognostication tied to artificial intelligence, including in the areas of financial crime compliance.
  • The U.S. Treasury’s Office of the Comptroller of the Currency (OCC), the regulator of the country’s largest and most complex institutions, and others have released a request for information (RFI) to better understand how, why and why not when it comes to banks of all sizes using artificial intelligence (AI) – including a rare glimpse of risks Pollyannaish third-party vendors don’t want to discuss.
  • These operations have roughly the next two months to answer a bevy of questions tied to AI, including systems, models and automated machinations to better calibrate anti-money laundering (AML) risk, engage in deeper and more data-driven investigations, more quickly uncover frauds and counter rising cyber vulnerabilities and high-profile attacks.

By Brian Monroe
bmonroe@acfcs.org
April 7, 2021

The five top regulators of the country’s financial services sectors are querying the industry about the pros, cons, potential and prognostication tied to artificial intelligence, including in the areas of financial crime compliance.

The U.S. Treasury’s Office of the Comptroller of the Currency (OCC), the regulator of the country’s largest and most complex institutions, and the top oversight bodies for credit unions and consumer protection have released a request for information (RFI) to better understand how, why and why not when it comes to banks of all sizes using artificial intelligence (AI).

These operations have roughly the next two months to answer a bevy of questions tied to AI, including systems, models and automated machinations to better calibrate anti-money laundering (AML) risk, engage in deeper and more data-driven investigations, more quickly uncover frauds and counter rising cyber vulnerabilities and prevent high-profile attacks.

To read the full 23-page RFI and comment, click here.

Depending on the responses, regulators may issue more guidance on where AI can help institutions and pitfalls to avoid.

The interagency statement also gave a rare glimpse of AI apocalyptic scenarios, including disorganized data tainting conclusions, unvalidated models running amok or being “poisoned” by hackers, uncoached systems self-updating beyond human understanding and even a lack of “explainability” when it comes to engaging with examiners.

The RFI “seeks comments to better understand the use of AI, including machine learning, by financial institutions; appropriate governance, risk management, and controls over AI; challenges in developing, adopting, and managing AI; and whether any clarification would be helpful,” according to the notice.

Regulators also want to know if there are any barriers to entry in testing the waters of AI for smaller banks and credit unions that may not have the budgets, systems, internal expertise or clout to attract vendors willing to take a chance on them to upgrade technology for the greater good.

The RFI sought input on AI for a broad array of banking systems, from countercrime compliance to customer service, credit risk to automated chatbots for online banking and natural language processing for telephone transactions.

For many who follow the at-times collegial, at-times adversarial relationship banks have with their regulators, the RFI is somewhat comical.

In surveys, public statements and private grumblings, fincrime compliance teams have stated that the main reason they have not more widely adopted AI and other regulation technology, or regtech, systems is fear of regulatory reactions.

At issue: examiners excoriating you for missed AML minutiae over here even as you innovate for effectiveness over there.


When it comes to AI, institutions must weigh benefits, risks

Even so, the RFI detailed a host of potential benefits for taking a gamble on AI.

“AI has the potential to offer improved efficiency, enhanced performance, and cost reduction for financial institutions, as well as benefits to consumers and businesses,” according to regulators. “AI can identify relationships among variables that are not intuitive or not revealed by more traditional techniques.”

In that same vein, rather than a system being overwhelmed by data, AI generally thrives in such an environment.

“AI can better process certain forms of information, such as text, that may be impractical or difficult to process using traditional techniques,” according to the RFI. “AI also facilitates processing significantly large and detailed datasets, both structured and unstructured, by identifying patterns or correlations that would be impracticable to ascertain otherwise.”
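
To make that text-processing point concrete, here is a minimal sketch of turning unstructured text into model-ready numeric features, assuming scikit-learn is available; the sample emails and variable names are hypothetical illustrations, not drawn from the RFI.

```python
# A minimal sketch: converting free text into numeric features that
# downstream models (clustering, classifiers, anomaly detectors) can consume.
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical message snippets of the kind a monitoring system might ingest.
emails = [
    "Please wire the funds to the usual account today",
    "Quarterly compliance report attached for review",
    "Urgent: route payment through the new intermediary",
]

vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(emails)

print(features.shape)                          # (3 documents, N unique terms)
print(vectorizer.get_feature_names_out()[:10]) # a peek at the learned vocabulary
```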

Which is why so many large international banking groups are testing the waters with AI when it comes to fincrime compliance programs.

The RFI gives several examples of current AI use cases in that arena, including:

  • Flagging unusual transactions: This involves employing AI to identify potentially suspicious, anomalous, or outlier transactions (e.g., fraud detection and financial crime monitoring). It draws on different forms of data (e.g., email text, audio data – both structured and unstructured), with the aim of identifying fraud or anomalous transactions with greater accuracy and timeliness (see the sketch after this list).
  • Internal, external threat identifier, force multiplier: It also includes identifying transactions for AML investigations, monitoring employees for improper practices, and detecting data anomalies.
  • Cybersecurity sword and shield: AI may be used to detect threats and malicious activity, reveal attackers, identify compromised systems, and support threat mitigation. Examples include real-time investigation of potential attacks, the use of behavior-based detection to collect network metadata, flagging and blocking of new ransomware and other malicious attacks, identifying compromised accounts and files involved in exfiltration, and deep forensic analysis of malicious files.
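
On the first of those use cases, the sketch below shows one common unsupervised approach to flagging outlier transactions, assuming scikit-learn; the features (amount, hour of day) and the data are hypothetical, and real monitoring systems use far richer inputs.

```python
# A minimal sketch of outlier detection on transaction features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical routine activity: modest amounts during business hours...
normal = np.column_stack([rng.normal(120, 30, 500), rng.integers(8, 18, 500)])
# ...plus a few large, off-hours transfers a monitoring team might care about.
odd = np.array([[9500, 3], [12000, 2], [8700, 4]])
transactions = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 marks anomalies

print(transactions[labels == -1])  # the off-hours outliers surface for review
```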

What the AI regtech vendors don’t tell you can hurt you

But what the RFI did that many Pollyannaish reports on AI and AML don’t is explain that there are also risks when implementing technologies that can think for themselves.

The use of AI “could result in operational vulnerabilities, such as internal process or control breakdowns, cyber threats, information technology lapses, risks associated with the use of third parties, and model risk, all of which could affect a financial institution’s safety and soundness.”

The regulators also touched on the double-edged sword of data, noting it might not be the panacea third-party technology vendors make it out to be.

“Data plays a particularly important role in AI,” they said. “In many cases, AI algorithms identify patterns and correlations in training data without human context or intervention, and then use that information to generate predictions or categorizations.”

But because the AI algorithm is “dependent upon the training data, an AI system generally reflects any limitations of that dataset,” according to the RFI. “As a result, as with other systems, AI may perpetuate or even amplify bias or inaccuracies inherent in the training data, or make incorrect predictions if that data set is incomplete or non-representative.”

These concerns are not lost on fincrime compliance professionals.

Over the last decade, AML teams have had to create entire “model risk management validation” teams to better ensure not just the accuracy and veracity of the data, but the integrity and ingenuity of the backend models powering risk assessment, transaction monitoring and sanctions screening systems.
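
As one concrete example of what such validation teams check, below is a minimal sketch of the population stability index (PSI), a drift metric widely used to test whether the data a model scores today still resembles the data it was built on; the scores are synthetic and the threshold in the comment is a conventional rule of thumb, not a regulatory standard.

```python
# A minimal sketch of a population stability index (PSI) drift check.
import numpy as np

def psi(expected, actual, bins=10):
    """Compare two score distributions bucket by bucket."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero in sparse buckets.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.30, 0.1, 10_000)  # scores at model build time
current = rng.normal(0.45, 0.1, 10_000)   # production scores after drift

print(f"PSI = {psi(baseline, current):.3f}")  # > 0.25 conventionally signals material drift
```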

The RFI also highlighted a lesser-known cyber risk for AI: not just data theft or corruption, but data “poisoning,” which targets the original batch of historical data used to coach and craft an AI system, called “training data.”

“Like other data-intensive technologies, AI may be exposed to risk from a variety of criminal cybersecurity threats,” according to the RFI. “For example, AI can be vulnerable to ‘data poisoning attacks,’ which attempt to corrupt and contaminate training data to compromise the system’s performance.”
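
To make that poisoning risk concrete, here is a minimal sketch of a crude label-flipping attack on a simple classifier, assuming scikit-learn; the dataset is synthetic and purely illustrative.

```python
# A minimal sketch: an attacker who can contaminate training data flips
# labels, and the model trained on the poisoned set degrades.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Flip 30% of the training labels to simulate a poisoning attack.
rng = np.random.default_rng(1)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))  # typically lower
```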

But not all bad AI learning habits come from outside hack attacks.

In some cases, AI systems can be guilty of “overfitting,” which basically means they run into the same problems as married couples: taking something out of context and blowing it out of proportion.

“Overfitting” can occur “when an algorithm ‘learns’ from idiosyncratic patterns in the training data that are not representative of the population as a whole,” according to regulators.

“Overfitting is not unique to AI, but it can be more pronounced in AI than with traditional models. Undetected overfitting could result in incorrect predictions or categorizations.”
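
A minimal sketch of that failure mode, assuming scikit-learn: an unconstrained decision tree memorizes idiosyncrasies of its training data, including label noise, and then scores markedly worse on data it has not seen.

```python
# A minimal sketch of overfitting: perfect training accuracy,
# noticeably weaker accuracy on held-out data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# flip_y adds label noise, the "idiosyncratic patterns" a tree can memorize.
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No depth limit: the tree fits the training data perfectly.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("train accuracy:", tree.score(X_train, y_train))  # 1.0 (memorized)
print("test accuracy: ", tree.score(X_test, y_test))    # substantially lower
```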

More pressure to innovate, tinker with technology in quest for ‘effectiveness’

The timing of the regulatory inquiry on AI, and the potential and perils for compliance teams, is not an accident.

It comes just months after the U.S. unveiled the biggest update to the country’s fincrime compliance defenses in the past 20 years: the Anti-Money Laundering Act (AMLA), which many are calling a “once in a generation” event.

The AMLA is anchored in compliance teams creating richer and more relevant intelligence for law enforcement, providing more weapons and funding for FinCEN to analyze data and divine criminal trends and closing the loop with industry by forging stronger public-private information sharing partnerships.

The AML upgrades in Congress parallel initiatives at FinCEN, part of a multipronged approach to strengthen fincrime compliance countermeasures, shift the needle toward “effectiveness” and sharpen intelligence sent to federal investigators.

For banks, AML efforts would be shifting more toward creating “effective and reasonably designed” programs that produce filings with a “high degree of usefulness” to law enforcement, according to a September notice.

FinCEN is also engaging stakeholders to gauge whether they could better manage risks, resources and threat actors if the bureau created national AML priorities – a similar refrain as in the AMLA – which would be informed by other national illicit finance, proliferation and terror risk assessments published in recent years.

Taken together, the regulators’ RFI on AI and FinCEN’s push for the sector to better analyze and wield data mean fincrime compliance professionals should be more ready than ever to grill their vendors.

AML teams must be cognizant of current and future AI pain points to better allay examiner fears of potential automated data disasters, while not losing sight of the quickly encroaching standards under which investigative results will outweigh regulatory processes.
