AUSTRAC USING MACHINE LEARNING TO BETTER UNCOVER INTERCONNECTED CRIMINAL GROUPS, IMPROVE AML ALERTS

An Australian regulator is aiming to create a better way to identify money laundering and its underlying predicate crimes by using artificial intelligence and machine learning to widen the scope of analytical capabilities beyond individual transaction trails.

That is one of the goals of the Australian Transaction Reports and Analysis Centre (Austrac) in its partnership with RMIT University in Melbourne, detailed in a recently released report describing their work on a new transaction monitoring system, currently in testing, that would put less pressure on the experience of individual analysts, lower false positives and work on a near real-time basis.

The partnership is meant to improve on what most financial institutions around the world are already doing out of necessity, as part of best practices or as a result of explicit anti-money laundering (AML) obligations. Currently, many institutions use complex automated transaction monitoring systems – tuned with sophisticated scenarios, algorithms and models – to generate alerts of possible suspicious activity indicative of money laundering or other crimes.

But a key drawback of these systems is that they generate many false positives, putting enormous strain on the experience and decision-making prowess of human analysts and on the ratio of analysts to the overall volume of incoming alerts, in some cases leaving compliance teams overwhelmed and suspicious activity missed.

The partnership between Austrac and RMIT is also one of the rare instances in which a financial regulator has stated publicly that it is experimenting with machine learning to improve results in financial crime detection and prevention, a move that will no doubt be reviewed and potentially mirrored by examiners in other countries.

The move could also nudge banks to further their own fintech and artificial intelligence (AI) pursuits.

“Prevention of money laundering is seen as a high priority by many governments, however detection of money laundering without prior knowledge of predicate crimes remains a significant challenge,” according to the report.

But part and parcel of the current problem is that “previous detection systems have tended to focus on individuals, considering transaction histories and applying anomaly detection to identify suspicious behavior.”

However, money laundering “involves groups of collaborating individuals, and evidence of money laundering may only be apparent when the collective behavior of these groups is considered.”

The system being crafted by Austrac, however, “advances the current state-of-the-art by analyzing both explicit transaction relationships and implicit relationships derived from supplementary information.”

Analyzing smaller ‘communities’ for better results

The system “extracts small, meaningful communities from this network in a manner that allows existing business knowledge to be considered in the process. Supervised learning is then applied to these communities to obtain trained classifiers,” meaning the portion of the system trained to recognize a given set of known suspicious activities.

The Austrac system performs four main tasks:

  • Modeling of relationships derived from Austrac data as an attributed network.
  • Extraction of communities from the transaction network.
  • Calculation of features from extracted communities, capturing information related to transaction dynamics, party demographics and community structure (a brief sketch of this step follows the list).
  • Supervised machine learning, treating extracted communities as observations.
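
To make the third task concrete, the following is a minimal Python sketch of how features might be calculated from an extracted community, assuming each community is represented as a networkx subgraph. The specific features and the node and edge attributes used here (amount, a “foreign” flag) are illustrative assumptions only and are not the features Austrac actually computes.

    import networkx as nx
    import numpy as np

    def community_features(community: nx.DiGraph) -> dict:
        """Summarize an extracted community subgraph as a small set of features."""
        amounts = [d.get("amount", 0.0) for _, _, d in community.edges(data=True)]
        n_parties = community.number_of_nodes()
        return {
            # Transaction dynamics
            "txn_count": community.number_of_edges(),
            "total_amount": float(sum(amounts)),
            "mean_amount": float(np.mean(amounts)) if amounts else 0.0,
            # Party demographics (hypothetical "foreign" node attribute)
            "share_foreign_parties": float(np.mean(
                [community.nodes[n].get("foreign", False) for n in community.nodes]
            )) if n_parties else 0.0,
            # Community structure
            "party_count": n_parties,
            "density": nx.density(community),
        }

In the fourth task, dictionaries like these would be turned into fixed-length vectors and treated as the observations fed to the supervised learner.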

As a result, a “suitable level of accuracy is achieved at high levels of precision. This is an important characteristic for our system, as use in a live environment necessitates a low rate of false positives.”

A smaller number of higher-quality alerts is vital because human analysts will be following up on the flagged activity, the report said, adding that the project, as envisioned, will be able to analyze millions of transactions and distill them into meaningful data.


Step by Step: How the Austrac system, designed to run on an ongoing basis and analyze new activity as it occurs, works (a simplified code sketch follows the steps):

  1. Initially, Austrac extracts a random set of communities from the transaction network, and combines these with a set of known suspicious communities.
  2. That forms the training set for the supervised learning piece of the system and creates a “trained classifier.”
  3. Having obtained the “trained classifier,” the system is then employed to analyze new activity.
  4. For each new transaction reported, the initiating party is treated as a seed and the community containing this party is extracted from the network.
  5. Selected features are then calculated, and the community is classified as suspicious or non-suspicious using the previously trained classifier.
  6. Those communities that are deemed to be suspicious are then passed to intelligence analysts for further investigation.
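
As a rough illustration only, the six steps above might look something like the following in Python, using networkx and scikit-learn. The extraction and feature functions are crude placeholders, and the choice of a random-forest classifier is an assumption; the report does not disclose which supervised learner Austrac uses.

    import random
    import networkx as nx
    from sklearn.ensemble import RandomForestClassifier

    def extract_community(network: nx.Graph, seed) -> nx.Graph:
        # Placeholder extraction: the seed party plus its immediate neighbors.
        return network.subgraph([seed] + list(network.neighbors(seed)))

    def features(community: nx.Graph) -> list:
        # Placeholder feature vector: community size and density only.
        return [community.number_of_nodes(), nx.density(community)]

    # Steps 1-3: build a training set from random communities plus known
    # suspicious ones, then fit a classifier.
    def train_classifier(network: nx.Graph, suspicious_seeds, n_random=1000):
        suspicious_seeds = list(suspicious_seeds)
        random_seeds = random.sample(list(network.nodes),
                                     min(n_random, network.number_of_nodes()))
        X = [features(extract_community(network, s))
             for s in random_seeds + suspicious_seeds]
        y = [0] * len(random_seeds) + [1] * len(suspicious_seeds)
        return RandomForestClassifier().fit(X, y)

    # Steps 4-6: for each newly reported transaction, treat the initiating party
    # as the seed, extract and classify its community, and pass suspicious
    # communities to intelligence analysts.
    def score_new_transaction(network: nx.Graph, classifier, initiating_party):
        community = extract_community(network, initiating_party)
        if classifier.predict([features(community)])[0] == 1:
            return community  # flagged for analyst review
        return None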

For its project, Austrac looked only at transactions tied to wires and large cash deposits submitted to the financial intelligence unit (FIU) in 2012.

The core elements of the initiative include the “construction of a network model representing relationships derived from financial records held by Austrac, extraction of meaningful communities from this network, generation of features capturing the key characteristics of these communities, and finally, classification using a supervised learning approach.”

Weaving together transactions, supplementary relationships

The “major novelty of this system,” and its improvement over current systems, is described below, according to the research paper (a brief code sketch of the typed-edge, weighted network idea follows the list):

  • Network analysis that combines financial transactions and supplementary relationships: The network analyzed by our system contains multiple relationships, represented using typed edges. Examples of edge types would be transactions and associations. In addition to the actual remittance of funds, parties may be linked by shared accounts, shared use of agents, overlapping geolocations, etc.
  • Relationship weighting: In determining the strength of a connection between two parties, different types of relationships are weighted to reflect perceived importance. This allows business knowledge to be incorporated into the network model.
  • Treatment of groups as observations for supervised learning classifiers: Comparable systems described in the literature have focused on individual parties, typically analyzing each party’s transaction history in isolation. Our system considers groups of transacting parties as the basic unit of analysis, extending the notion of ‘know your customer’ to a network setting.
  • Tighter extractions, faster intelligence: Groups of tightly interacting parties may be extracted from a network in different ways. We have elected to use a bottom-up approach for this task, extracting relevant parties from a small region centered on each new transaction. This approach is particularly suited to the operational needs of a near-real-time intelligence environment.
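
A minimal sketch of the first two points, a single network holding typed, weighted edges for both transactions and supplementary associations, could look like this in Python with networkx. The edge types and weights below are invented for illustration and do not come from the report.

    import networkx as nx

    # Hypothetical per-type weights expressing perceived importance.
    EDGE_WEIGHTS = {
        "transaction": 1.0,
        "shared_account": 0.8,
        "shared_agent": 0.5,
        "shared_location": 0.2,
    }

    network = nx.MultiGraph()  # parallel edges allow several relationship types

    def add_relationship(party_a, party_b, edge_type, **attrs):
        network.add_edge(party_a, party_b, type=edge_type,
                         weight=EDGE_WEIGHTS[edge_type], **attrs)

    # An explicit transaction edge and an implicit association edge.
    add_relationship("party_1", "party_2", "transaction", amount=9500.0)
    add_relationship("party_1", "party_2", "shared_account")

    def connection_strength(a, b):
        """Total weighted strength of all relationships linking two parties."""
        if not network.has_edge(a, b):
            return 0.0
        return sum(d["weight"] for d in network[a][b].values())

Tuning the per-type weights is where the report’s “relationship weighting” idea would let business knowledge shape the network model.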

A system using those parameters would likely be able to improve on current financial institution initiatives – even those using rules-based scenarios, decision trees and cluster-based supervised and unsupervised learning – as it will be better able to identify and incorporate new methods of financial crime beyond previously established patterns already encoded in current systems, according to the report.

Moreover, the Austrac system could reduce the size of the networks of interlinked entities captured by transaction analysis, allowing the entities actually involved in money laundering to be identified more quickly.

“Another drawback of existing methods is that they often result in excessively large communities,” according to the report. “In general, meaningful communities are thought to contain less than 150 individuals…and published typologies indicate that investigation of money laundering operations often focuses on a relatively small number of key parties.”
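
One way to keep extracted communities small, consistent with that observation, is a bottom-up expansion from the seed party that follows the strongest relationships first and stops at a hard cap. The sketch below is a hypothetical illustration of that idea, not the extraction algorithm described in the paper.

    import heapq
    import itertools
    import networkx as nx

    def extract_bounded_community(network: nx.Graph, seed, max_size=150):
        """Grow a community outward from a seed party, strongest links first,
        stopping at a hard size cap."""
        tiebreak = itertools.count()  # avoids comparing node objects in the heap
        community = {seed}
        frontier = [(-d.get("weight", 1.0), next(tiebreak), nbr)
                    for nbr, d in network[seed].items()]
        heapq.heapify(frontier)
        while frontier and len(community) < max_size:
            _, _, party = heapq.heappop(frontier)
            if party in community:
                continue
            community.add(party)
            for nbr, d in network[party].items():
                if nbr not in community:
                    heapq.heappush(frontier,
                                   (-d.get("weight", 1.0), next(tiebreak), nbr))
        return network.subgraph(community)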

AI being looked at by more banks, regulators

Those sentiments, and a desire to improve outcomes, are supported by recent reports on AI and AML.

“The traditional approach of using rule-based software and large compliance teams are proving inadequate to meet with regulatory and business targets,” according to Arin Ray and Neil Katkov, analysts with Celent, in a new research report entitled “Artificial Intelligence in KYC-AML: Enabling the Next Level of Operational Efficiency.”

“Banks need to consider new tools and technology to better address the challenges plaguing their KYC-AML operations,” according to the report released in August.

AI solutions in KYC-AML can “bring in significant operational and cost benefits through automation of manual processes, and superior analysis and insights,” according to the report. It lists several examples, including:

  • A central data pool with “intelligent” data allows for holistic customer view and ease of data access, enabling faster investigation and better results.
  • Data can be trained to become context-aware by incorporating prior knowledge and rules. Additionally, the solution can be made to learn and update rules based on ongoing and new cases.
  • Unstructured data analysis capabilities help in tracking news, social media and web information, and performing linguistic analysis, enabling easier and more efficient parsing of long lists, other information sources, and employee communication.
  • Advanced analytical capabilities help in identifying patterns, links and networks of bad actors, and suspicious activities.
  • AI capabilities can also help in scanning documents and better understanding regulatory changes.

Spanning the financial sector, banks “across the board have expressed interest to explore and adopt AI-enabled solutions as part of their KYC-AML operations; actual adoption is likely to be led by large global banks,” according to the report.

Some banks are already using AI-enabled solutions, and “we are likely to see its further adoption [in] the next 18-24 months,” according to the report.

OCC creates new office to review innovation, AI

Innovation and how artificial intelligence can be used by banks, fintech firms and potentially even examiners are also on the minds of officials at the U.S. Treasury’s Office of the Comptroller of the Currency (OCC), the regulator of the country’s largest and most complex banks.

In October, the OCC stated in a report looking at the broader issue of how the agency would support a framework for responsible innovation that it had approved the creation of an “Office of Innovation.”

The agency stated the office will be headed by a Chief Innovation Officer assigned to OCC headquarters with staff located in Washington, New York and San Francisco.

The office “will be the central point of contact and clearing house for requests and information related to innovation,” according to the OCC, adding that it is slated to begin operating in the first quarter of next year.

It will also implement other aspects of the agency’s framework for responsible innovation, including:

  • establishing an outreach and technical assistance program for banks and nonbanks,
  • conducting awareness and training activities for OCC staff,
  • encouraging coordination and facilitation,
  • establishing an innovation research function, and
  • promoting interagency collaboration.

“Technological advances, together with evolving consumer preferences, are reshaping the financial services industry at an accelerated pace,” according to the OCC.

“Over the last several years, a large and growing number of nonbank financial technology companies (fintechs) have emerged to provide financial products and services through alternative platforms and delivery channels.”

On the financial crime compliance end, fintechs also “are leveraging new technologies and processes, such as cloud computing, application programming interfaces, distributed ledgers, artificial intelligence, and big data analytics,” the agency said.

*Editor’s Note: Certain words in the report, such as “behaviour” and “analysing,” were changed to U.S. spellings for consistency and clarity.