As ACFCS surveys the landscape of what new challenges and opportunities in financial crime 2018 will bring, we are continuing our “Lessons Learned” series, asking key thought leaders what last year taught the community and how that knowledge should help arm compliance professionals for the year ahead.

Not surprisingly, a good predictor of what will happen in 2018 is rooted in trends from 2017, a year where criminals made history with record hack attacks and equally massive data hauls that put millions of people and companies at risk.

These groups – whether large organized criminal outfits, rogue nation state regimes or small-time criminals – didn’t discriminate.

Their targets spanned the spectrum of small and large businesses alike, including banks, law firms, and household name companies, even gaining access to the vast treasure trove of information held by a credit reporting agency.

That likely will only continue this year, and potentially even get worse.

In one word, the sheer magnitude of the data obtained in 2017 was “unprecedented,” said Keith Furst, founder of Data Derivatives, a boutique consulting firm that helps institutions implement, fine-tune, and validate financial crime systems.

Furst was kind enough to lend his thoughts and insight on these issues and others in a chat with ACFCS Director of Content, Brian Monroe. Here is an edited transcript of that conversation.

What do you think were the biggest financial crime trends in 2017 and why?

One of the biggest financial crime trends of 2017 was the commoditization of sensitive data. While cyberattacks have been increasing in sophistication and frequency for the past few years, the sheer magnitude and quality of the data obtained in 2017 was unprecedented.

For example, the Equifax hack will undoubtedly change the rules of the identification game. In other words, how do I know you are who you say you are? Simply possessing the correct data is not enough anymore.

One way to address identification is knowledge-based authentication (KBA), where the person is asked questions that the real person, and not the average cybercriminal, should be able to answer.

Also, it implies that the answers to some of those questions may not be easily accessible in cyberspace. The other emerging trend is biometrics, which could help address part of the identity problem but could create other issues. For example, if one day your fingerprints can authorize a money transfer, open your car, and unlock your phone, then what happens when your fingerprint data is stolen?
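As a minimal sketch of the KBA idea (all names, questions, and values here are hypothetical, not any vendor’s implementation), an institution can store only a salted hash of the enrolled answer, never the answer itself, and compare candidate answers in constant time:

```python
import hashlib
import hmac

def normalize(answer: str) -> str:
    """Normalize free-text answers so formatting differences don't fail a match."""
    return " ".join(answer.strip().lower().split())

def hash_answer(answer: str, salt: bytes) -> bytes:
    """Store only a salted, slow hash of the enrolled answer."""
    return hashlib.pbkdf2_hmac("sha256", normalize(answer).encode(), salt, 100_000)

def verify_answer(candidate: str, stored_hash: bytes, salt: bytes) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(hash_answer(candidate, salt), stored_hash)

# Enrollment: record a hashed answer to an out-of-wallet question.
salt = b"per-user-random-salt"  # in practice, generate with os.urandom(16) per user
stored = hash_answer("Maple Street", salt)

# Authentication: normalize, hash, and compare the caller's answer.
assert verify_answer("  maple STREET ", stored, salt)
assert not verify_answer("Oak Avenue", stored, salt)
```

The weakness the Equifax breach exposes is exactly the one this sketch cannot fix: if the correct answer is already circulating in a darknet data dump, the hash check still passes for the fraudster.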

The sad thing about the current state of the world is that everything is for sale – including sensitive data – and nothing is off limits. There are various marketplaces on the darknet that specialize in the sale of sensitive data, credit card information, child sexual exploitation material, hacker-for-hire services, and more.

The fact that data breaches happened with greater frequency and success in 2017 fed the demand as ordinary criminals learned how to monetize these data sources.

For anyone who wants a comprehensive, readable, and non-technical account of cyberwarfare, cybersecurity, and cyberattacks then I highly recommend the book, “Virtual Terror: 21st Century Cyberwarfare,” by Daniel Wagner.

How did the industry respond to those vulnerabilities, regulatory focal points or criminal tactics?

Cybersecurity is a very complex issue because it involves many disparate and amorphous actors, including other nations initiating cyberattacks. Hence, imposing regulations on the private sector can help strengthen protections and controls, but it may not address all of the actors, issues, and challenges in a comprehensive way.

For example, it has been documented that the Chinese government initiates cyberattacks against companies in the United States and steals intellectual property, which it then shares with its private and academic sectors to help fuel China’s economic growth.

So, simply imposing regulatory requirements on private companies is not a fair fight when those companies must protect themselves from an adversary with the financial resources, technical expertise, and determination of a foreign government.

In other words, cybersecurity is also a topic of foreign policy, and the US government should clearly define parameters of what types of aggression fall into what category and what types of responses are permissible from the private sector.

That being said, there is a lot of value in creating a framework for cybersecurity best practices, and the New York Department of Financial Services (NYDFS) was the first US financial services regulator to propose one with its Part 500 regulation (23 NYCRR 500).

Let’s examine the case of Equifax to understand what responsibility it bears: the company was reportedly hacked because of a vulnerability that was identified but never patched.

Equifax failed to deploy a patch that could have prevented the hack, which points to an internal governance failure. The other major failure was that the company didn’t encrypt the Social Security numbers of millions of people, leaving them in plain text.

Hence, if a hacker did breach their system, accessing the data was that much easier. However, the one thing that Equifax can’t control is the quality of software available on the market.
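Full encryption at rest typically relies on a dedicated cryptography library and key management infrastructure, but the plain-text failure can be illustrated with a minimal standard-library sketch (the key name and SSN below are hypothetical examples): keyed hashing, or tokenization, means a dump of the database alone reveals nothing usable.

```python
import hashlib
import hmac

# Hypothetical key: in production this would live in an HSM or key management
# service, stored separately from the database that holds the tokens.
SECRET_KEY = b"key-held-outside-the-database"

def tokenize_ssn(ssn: str) -> str:
    """Replace a raw SSN with a keyed hash so a database dump alone exposes nothing."""
    digest = hmac.new(SECRET_KEY, ssn.encode(), hashlib.sha256)
    return digest.hexdigest()

# The application stores and matches tokens, never the raw value.
records = {tokenize_ssn("078-05-1120"): "customer-42"}

assert records[tokenize_ssn("078-05-1120")] == "customer-42"
assert tokenize_ssn("078-05-1120") != "078-05-1120"
```

The design point is separation: even if an attacker reaches the data store, the protected values are useless without the key held elsewhere, which is precisely the layer Equifax lacked.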

Could subjecting institutions to more stringent regulatory rules be unfair, to a degree, if the software industry is not also held accountable for the products it produces and the cybersecurity standards it adheres to?

In summary, it’s a good thing that the NYDFS created the Part 500 cybersecurity rule, but policymakers must not lose sight of the fact that this is a complex problem with many interrelated actors, and penalizing specific agents within the ecosystem could obscure the problem.

The financial crime resulting from data breaches also reemphasizes the urgent need for more robust information sharing mechanisms among foreign governments, financial intelligence units (FIU), corporations, and other law enforcement groups.

What else do you think financial crime compliance professionals, regulators and FIs should be doing to better detect and prevent financial crime?

The Clearing House published an excellent paper in February 2017 titled, “A New Paradigm: Redesigning the U.S. AML/CFT Framework to Protect National Security and Aid Law Enforcement,” where they outline some key recommendations.

I don’t agree with all of the recommendations proposed, but a good majority of them make a lot of sense.

The paper discusses information sharing, clarifying regulatory rules, the need for a central repository of beneficial ownership information, regulatory sandboxes, etc. I agree with the paper’s recommendation that regulators should offer institutions the option to participate in regulatory sandboxes under a safe harbor rule that prevents penalties if something goes wrong.

US regulators seem to be worried that allowing sandboxes will give institutions the opportunity to wiggle their way out of responsibility.

The reality is that identifying money laundering and other types of financial crime is very complex, and using more advanced technology, such as machine learning, natural language processing (NLP), and computer vision, can aid in that process.

Many enforcement actions cite governance as one of the main causes of serious compliance failures. But why are compliance programs so hard to govern effectively? Because they are complex systems, and managing complexity is not easy. This raises another question: can new technology reduce complexity and make governance easier?

Artificial intelligence (AI) and regulatory technology (regtech) are full of hype right now and sometimes it’s hard to parse out the prize from the promise. However, institutions should be cautiously optimistic, as am I, and should start by focusing on innovation with small use cases regardless of the regulatory environment they are in.

There have been some incredible advances and achievements of AI-embedded technology, so institutions need to start experimenting now so they don’t fall behind.

Also, big data platforms can help address one of the major issues plaguing financial crime programs for years, which is data integrity. In these central repositories, institutions can manage the enterprise meaning of their data and not only its movement.

What is an example you have seen using these technologies?

There was an AI vendor that helped a leading global financial institution reduce false-positive alerts from its transaction monitoring system (TMS) by 20%. This is an important step in the right direction because it frees up capital to invest in other areas of a compliance program, such as risk assessments, model risk management, and quality assurance.
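The vendor’s actual model isn’t public, but the general pattern can be sketched as a secondary scoring layer over TMS alerts (every feature, weight, and threshold below is a made-up illustration, not the vendor’s method): low-scoring alerts are routed away from manual review, which is where the false-positive savings come from.

```python
# Hypothetical secondary scoring layer over TMS alerts: each feature nudges a
# risk score, and alerts below a threshold are auto-closed instead of reviewed.
def score_alert(alert: dict) -> float:
    score = 0.0
    if alert["amount"] > 10_000:
        score += 0.4
    if alert["country_risk"] == "high":
        score += 0.4
    if alert["prior_sars"] > 0:          # prior suspicious activity reports on file
        score += 0.3
    if alert["years_as_customer"] > 5:   # long, clean relationships lower the score
        score -= 0.2
    return score

def triage(alerts: list[dict], threshold: float = 0.3) -> tuple[list[dict], list[dict]]:
    """Split alerts into (for-review, auto-closed) by the secondary score."""
    review = [a for a in alerts if score_alert(a) >= threshold]
    closed = [a for a in alerts if score_alert(a) < threshold]
    return review, closed

alerts = [
    {"amount": 50_000, "country_risk": "high", "prior_sars": 1, "years_as_customer": 2},
    {"amount": 2_000,  "country_risk": "low",  "prior_sars": 0, "years_as_customer": 8},
]
review, closed = triage(alerts)
assert len(review) == 1 and len(closed) == 1
```

In practice the weights would be learned from historical alert dispositions rather than hand-set, and the auto-close threshold would be tuned and back-tested so that genuinely suspicious activity is never suppressed.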

What do you think will be the big issues to tackle in 2018?

There will probably be a spike in new corporation registrations, including shell companies, as Trump’s new tax law, the Tax Cuts and Jobs Act (TCJA), incentivizes people to open corporations as vehicles to hold assets, shield income, pay dividends, and so on.

It’s ironic that, on one hand, US policymakers are pushing for more transparency on the beneficial owners of legal entities, as proposed by the TITLE Act and the Corporate Transparency Act, while on the other hand passing a law that will likely increase the number of legal entities designed to play tax games.

This actually creates more work for financial institutions because they will have to conduct more due diligence on opaque legal entities. Financial institutions should plan on using automated solutions and robust reference data to deal with the increasingly complex and burdensome problem of beneficial ownership.

Lastly, do you have any tips to help banks maximize resources and keep their teams strong in a time of tight budgets?

A colleague of mine once told me that some banks don’t have time to look at new technology because they are too busy managing their current program. Well, this is exactly the reason why innovation needs to be a top priority for compliance teams in 2018.

The regulatory requirements and the nature of the problem continue to increase in complexity, so doing things the same way is not sustainable.

While some regulatory regimes have embraced the notion of a regulatory sandbox, this should not prevent institutions operating within other jurisdictions from experimenting. This doesn’t mean that anything needs to get deployed into production, but what it does mean is there should be activity and proof of concepts (POCs) happening in the background.