A record number of anti-money laundering (AML) professionals, regulators and vendors recently converged in Hollywood, Florida, for the ACAMS 24th Annual AML & Financial Crime Conference, making it the largest ACAMS Hollywood conference to date.
There was a clear focus on topics such as artificial intelligence (AI), machine learning and robotic process automation (RPA), reflecting the industry’s interest in how these and other advanced technologies can help in the fight against financial crime.
Busting the Buzzwords
While the role of AI in compliance and financial crime screening continues to be validated by customers, there were questions about what constituted true AI. It was noted that vendors and practitioners alike use AI terminology in a broad, generic sense, but few can articulate the difference between AI, machine learning, and deep learning.
For instance, in a panel discussion titled ‘What Do We Mean When We Say Regtech’, Dave Loeser from Accuity joined other speakers in discussing the differences between various regtech tools, such as KYC and financial crime screening, and encouraged participants to gain a deeper understanding of the nuances of AI and RPA.
The session also explored how regtech processes can strengthen oversight and satisfy regulatory requirements. While attendees agreed that technology could help solve many financial crime compliance problems, there was not always agreement over where technology could have the maximum impact.
For instance, one presenter suggested that machine learning techniques would be best suited to sanctions and PEP screening, which are less complex than transaction monitoring or customer due diligence. However, an audience poll indicated otherwise: that transaction monitoring would derive even more benefit from machine learning.
Finding Common Ground in Three E’s
Efficiency, effectiveness and explainability are the pillars underpinning any successful financial crime compliance solution, and the ACAMS conference touched on all three.
The concept of effectiveness came up in several sessions. Financial institutions need to identify their challenges and gaps to help define what ‘effective’ means to them. Inefficient programmes are, by default, ineffective. Tools that cannot be explained to regulators or auditors are also ineffective.
Explainability is critical, particularly when it comes to introducing AI techniques to a financial crime screening programme. AI can be useful in a first-level match review, but organisations must ensure they understand the inner workings of their solutions so they can document exactly what is being done. One FDIC examiner indicated that it is “troubling” when everyone talks about AI, but few can explain it.
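The explainability point can be made concrete. The sketch below is a hypothetical, deliberately simple first-level name-match review: rather than an opaque model score, it returns the similarity value, the threshold applied, and the resulting decision, so the logic can be documented for an examiner. The function name, threshold, and use of `difflib` are illustrative assumptions, not any vendor’s actual method.

```python
from difflib import SequenceMatcher

def screen_name(candidate: str, watchlist_entry: str, threshold: float = 0.85) -> dict:
    """Score a candidate name against a watchlist entry.

    Returns every input to the decision alongside the outcome, so the
    review can be explained and documented for regulators or auditors.
    NOTE: illustrative only; real screening engines use richer matching.
    """
    # Case-insensitive character-level similarity in [0.0, 1.0]
    score = SequenceMatcher(None, candidate.lower(), watchlist_entry.lower()).ratio()
    return {
        "candidate": candidate,
        "watchlist_entry": watchlist_entry,
        "similarity": round(score, 3),
        "threshold": threshold,
        "match": score >= threshold,
    }

result = screen_name("Jon Smith", "John Smith")
```

Because the output records the score and the threshold together, a reviewer can state exactly why a hit was escalated or dismissed, which is the kind of transparency examiners were asking for.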
The truth is that AI, in its various forms, is finding genuine applications in regtech.
However, before blindly adopting AI, organisations must first review their existing data, people, and processes to get their internal house in order.
Any conversation about AI in the context of KYC or financial crime screening therefore needs to be realistic. And effective. And explainable.