Excellence in Sanctions Compliance: Moving from Transparency to Explainability


This final installment of our three-part sanctions compliance blog series looks at the industry shift from transparency to true explainability.

Explainable Compliance

As described in our recent white paper, Excellence in Sanctions Compliance: The Role of Effectiveness, Efficiency and Explainability (the 3E’s), explainability means being able to understand both the inputs and outputs of a sanctions screening system.

Inputs include not only the data itself, along with its quality, breadth and tagging, but also policies, procedures and system configurations. Outputs include the alerts triggered by screening activity as well as the final actions or outcomes, such as closing accounts or blocking funds.

Understanding the inputs and outputs is important both to users who need to clear alerts and to auditors and regulators who evaluate the strength and execution of an organisation’s sanctions compliance programme.

Screening outputs are often presented as a score, or multiple scores, that help to prioritise alerts and give users a sense of their severity. Yet, it is no longer acceptable to blindly trust the score – organisations must understand how the system determined it. This can be accomplished, for example, by listing the attributes associated with the alert and score or by demonstrating how much each attribute contributed to the decision the system made.
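For illustration, the sketch below shows one way a per-attribute breakdown might be presented alongside an alert score. The attribute names, contribution values and scoring approach are hypothetical assumptions for the example, not the output of any particular screening product.

```python
# Minimal sketch: presenting per-attribute contributions alongside an alert score.
# The attribute names and contribution values are hypothetical examples, not the
# output of any specific screening engine.

def explain_alert_score(contributions: dict) -> None:
    """Print the overall score and how much each matched attribute contributed."""
    total = sum(contributions.values())
    print(f"Alert score: {total:.2f}")
    for attribute, contribution in sorted(
        contributions.items(), key=lambda item: item[1], reverse=True
    ):
        share = contribution / total if total else 0.0
        print(f"  {attribute:<22} {contribution:.2f}  ({share:.0%} of score)")

# Example: a name-screening hit where the name match dominates the overall score.
explain_alert_score({
    "name_similarity": 0.62,
    "date_of_birth_match": 0.21,
    "country_of_residence": 0.12,
})
```

Presented this way, an analyst can see at a glance why the alert fired and which attribute drove the severity, rather than taking the number on trust.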

Shifting from Transparency to Explainability

Historically, the focus has been on transparency in vendor systems. This was highlighted in the 2011 model risk management guidance issued by the Office of the Comptroller of the Currency (OCC).

However, with the rise of artificial intelligence (AI) and other advanced technologies, there are often hidden variables and decisions that cannot be made fully transparent. The industry needs to move to explainable AI, where the elements of a decision can be explained clearly even if they are not fully transparent.

This shift from transparency to explainability does not stop at sanctions screening solutions – an organisation’s entire compliance programme needs to be explainable to auditors and regulators. That covers the strategies, policies and choices that go into the programme: how settings are configured and why, how workflows and reviews are set up, how daily updates are managed, and what validation is performed and by whom. These are the principles of sound governance.

For example, if an institution is planning to release a new cross-border payments product, the risk assessment should be reviewed for new risks, mitigating controls and residual risk. Once the risk assessment is updated, the next step is to look at the policies and procedures that support the mitigating controls and ensure they are documented and explainable.

Finally, attention should be given to how those policies are implemented with technology. The system must be tested, monitored and, most importantly, explainable to users and regulators.

Explainable AI

With advanced technology solutions, explainability can be more complex, as it requires clarifying how the solution’s rules, analytics or AI techniques reached the conclusions they did. Advanced technology enables organisations to understand how the system’s inputs relate to its outputs – a concept that sounds simple but is harder to achieve in practice.

Modern AI algorithms attempt to approximate the action of the human mind, which is notoriously difficult to explain. The closer algorithms get to that ideal, the more difficult it becomes to explain the algorithm, which in turn makes explaining the outputs even more important.

One way of doing this is to track how the model handles variable weighting and decision points and to reverse-engineer how outputs relate to inputs. In getting closer to the ideal of approximating the human mind, it is important to remember that humans make mistakes, and machines can as well – though hopefully not as often.
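As a simplified illustration of that idea, the sketch below estimates each input’s influence by removing it and re-scoring, one common attribution technique. The toy scoring function and its weights are stand-ins for a real model; they are assumptions made only to show the mechanics.

```python
# Simplified attribution sketch: estimate each input's influence on the score by
# removing (zeroing) it and re-scoring. The toy scoring function and its weights
# are stand-ins for a real model, used only to illustrate the technique.

from typing import Callable, Dict

def ablation_attribution(
    score_fn: Callable[[Dict[str, float]], float],
    inputs: Dict[str, float],
) -> Dict[str, float]:
    """Return how much the score drops when each input is removed in turn."""
    baseline = score_fn(inputs)
    influence = {}
    for name in inputs:
        perturbed = dict(inputs)
        perturbed[name] = 0.0
        influence[name] = round(baseline - score_fn(perturbed), 3)
    return influence

def toy_score(features: Dict[str, float]) -> float:
    # Hypothetical fixed weights, not a real screening model.
    weights = {"name_similarity": 0.7, "dob_match": 0.2, "country_risk": 0.1}
    return sum(weights[k] * v for k, v in features.items())

print(ablation_attribution(
    toy_score,
    {"name_similarity": 0.9, "dob_match": 1.0, "country_risk": 0.5},
))
```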

There are additional techniques that help make AI explainable to operational users: for example, a user interface that clearly highlights the attributes relevant to a decision, and reporting and audit trails that automatically monitor model health and reduce the risk of bias.
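The sketch below illustrates, in simplified form, what such an audit trail might capture: each decision is logged with its score and model version, and a basic check flags drift in the alert rate as one coarse model-health signal. The field names, baseline rate and tolerance are illustrative assumptions rather than features of any specific product.

```python
# Simplified audit-trail sketch: log every screening decision with its score and
# model version, and flag drift in the alert rate as one coarse model-health signal.
# Field names, the baseline rate and the tolerance are illustrative assumptions.

from datetime import datetime, timezone

audit_log = []

def record_decision(entity_id, score, alerted, model_version):
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "entity_id": entity_id,
        "score": score,
        "alerted": alerted,
        "model_version": model_version,
    })

def alert_rate_drifted(baseline_rate, tolerance=0.05):
    """True when the observed alert rate moves outside the expected band."""
    if not audit_log:
        return False
    observed = sum(entry["alerted"] for entry in audit_log) / len(audit_log)
    return abs(observed - baseline_rate) > tolerance

record_decision("CUST-001", 0.83, True, "model-v2.4")
record_decision("CUST-002", 0.12, False, "model-v2.4")
print("Alert-rate drift detected:", alert_rate_drifted(baseline_rate=0.02))
```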

Explainability Through Documentation and Records

While technology is crucial in implementing sanctions screening controls, the programme’s explainability starts with comprehensive documentation; risk assessments help explain the objectives, while policies and procedures provide formal details of how the mitigating controls are designed. Comprehensive and up-to-date procedures are necessary for regulators and auditors to assess the sanctions compliance programme’s adequacy and for the organisation to ensure the process is stable and efficient.

Finally, record keeping is instrumental in demonstrating the actual control execution. The greater the level of detail kept, the stronger the explanation that the institution can provide to the regulator.
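As an illustration of the level of detail that supports a strong explanation, the sketch below shows a hypothetical screening decision record. The fields and values are assumptions made for the example; an institution’s actual record-keeping schema would be driven by its own policies and retention requirements.

```python
# Hypothetical screening decision record, sketching the kind of detail that lets an
# institution reconstruct and explain a control execution after the fact. All field
# names and values are illustrative, not a prescribed schema.

from dataclasses import dataclass, asdict
import json

@dataclass
class ScreeningRecord:
    alert_id: str
    entity_id: str
    sanctions_list_version: str   # which list update was in force at screening time
    score: float
    matched_attributes: list
    disposition: str              # e.g. "false positive", "escalated", "blocked"
    rationale: str                # analyst's justification for the disposition
    reviewed_by: str
    reviewed_at: str

record = ScreeningRecord(
    alert_id="ALRT-0815",
    entity_id="CUST-001",
    sanctions_list_version="2024-06-30",
    score=0.83,
    matched_attributes=["name_similarity", "date_of_birth_match"],
    disposition="escalated",
    rationale="Strong name and date-of-birth match; escalated for secondary review.",
    reviewed_by="analyst.jdoe",
    reviewed_at="2024-07-01T09:14:00Z",
)
print(json.dumps(asdict(record), indent=2))
```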

Transparency in sanctions screening is a first step, but today explainability is the critical goal. Organisations must be able to explain the entire chain, from strategy and risk assessment to policies and procedures, and that documentation must be backed by comprehensive records. Explainability supports more efficient and effective sanctions screening controls because it is critical for staff and management to make the best use of their tools and to understand how those tools meet the organisation’s risk appetite.

Together, effectiveness, efficiency and explainability are the three primary dimensions to consider when pursuing excellence in sanctions screening programmes. They ensure that the right resources are applied to screening processes and that no technological black box spews results beyond the control of the institution. Regulators and auditors expect only glass boxes in sanctions screening.

Download our white paper, Excellence in Sanctions Compliance: The Role of Effectiveness, Efficiency and Explainability, to learn more.