Meet regulatory expectations on AI-led AML




Financial institutions increasingly use artificial intelligence to strengthen compliance with anti-money laundering (AML) rules such as know your customer (KYC) verification. However, many are concerned about how regulators view activity in this emerging field.

Organizations worry that their AI-led processes may miss potential risks, and that they will fall short of regulators' expectations that they be able to explain and defend their AI models' decisions and avoid bias.

There are no specific regulations about using AI for KYC, but authorities generally support firms experimenting with AI to strengthen compliance effectiveness, for example, by reducing time spent processing false positives so they can focus on genuine cases.

In the US, regulators published guidance welcoming institutions' experimentation with AI to strengthen their AML compliance. It said this experimentation will not necessarily or automatically open banks to increased regulatory scrutiny or expectations, even if it exposes gaps in existing processes.

This generally positive attitude is likely to be echoed worldwide following the Financial Action Task Force’s 2022 report, which commended new AML technologies and AI’s ability to “analyze data accurately, in real-time and help better identify emerging risks.”

But AI-led AML also presents multiple challenges. For example, AML Intelligence recently highlighted that the technology can be limited by inadequate data and processes, or it can tempt companies into over-reliance on the model with insufficient human supervision.




The benefits of machine learning 

Machine learning can dramatically increase compliance efficiency in AML activities such as screening customers against adverse media, sanctions, watchlists, and politically exposed persons lists. Generally speaking, regulators not only welcome but expect companies to use machine learning to reduce false positives and strengthen compliance.
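To make this concrete, below is a minimal, hypothetical sketch of how a supervised model might be trained to separate likely false positives from genuine screening hits. The file name, feature names, and label are illustrative assumptions rather than a description of any particular product; a real deployment would train on an institution's own historical alert dispositions.

```python
# Minimal sketch: supervised false-positive reduction for screening alerts.
# All data, features, and labels below are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical features describing each alert, e.g. name-match strength
# against a watchlist entry and date-of-birth/country agreement.
FEATURES = ["name_match_score", "dob_match", "country_match", "alias_count"]

alerts = pd.read_csv("historical_alerts.csv")  # hypothetical file
X = alerts[FEATURES]
y = alerts["analyst_confirmed"]  # 1 = genuine risk, 0 = false positive

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# A deliberately simple, inspectable model: every coefficient can be
# shown to and discussed with a regulator.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The design choice matters here: a simple linear model trades some accuracy for the inspectability that regulators look for.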

If your machine learning model produces different results from your existing processes, regulators generally accept that because you are trying a new approach. However, they will still want to know how your AI model works. For example, they may say, “Show me your program. You looked at one million AML alerts - can you explain how the machine learning algorithms filtered them?”
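As an illustration of the kind of answer that question invites, the sketch below (continuing the hypothetical model above) derives a per-alert explanation from the logistic regression's coefficients. This is one simple approach to explainability under the stated assumptions, not the only one.

```python
# For a linear model, each feature's contribution to the log-odds of an
# alert being genuine is simply coefficient * feature value. Ranking the
# contributions gives a plain-language answer to "why was this alert
# filtered?". Feature names are the hypothetical ones defined earlier.
import numpy as np

def explain_alert(model, feature_names, x_row):
    """Return per-feature log-odds contributions for one alert, largest first."""
    contributions = model.coef_[0] * np.asarray(x_row, dtype=float)
    return sorted(zip(feature_names, contributions),
                  key=lambda pair: abs(pair[1]), reverse=True)

# Explain the first alert in the test set.
for name, contrib in explain_alert(model, FEATURES, X_test.iloc[0]):
    print(f"{name}: {contrib:+.3f}")
```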

They will want to see your processes for rectifying any mistakes. They will also want you to show that your model is not missing anything important, such as a known criminal moving money through your bank. If it does miss something, you could still be fined and suffer reputational damage.

Institutions will also naturally want their AI to avoid bias, especially where that leads to discrimination. The EU’s upcoming AI Act contains specific proposals that artificial intelligence should not breach any fundamental rights, which would include unlawful discrimination. And the European Central Bank’s supervisory board recently warned that AI “can perpetuate or even stimulate racial bias if data to train algorithms does not reflect the diversity of EU society.”




Governance and AML

These factors all lead to AI governance becoming a critical topic for banks. They must ensure they have the right documentation to show the regulator, explaining how their model is trained and makes decisions; how it avoids missing important cases; and how they fix mistakes. Documentation also needs to show how the model avoids bias around gender or ethnicity, for example.
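For example, one bias check that such documentation might evidence is a comparison of false-positive rates across demographic groups. The sketch below is illustrative: the column names are assumptions, and the protected attribute is used only to test the model's outputs, never as a model input.

```python
# Minimal sketch of a group-wise bias check for model documentation.
# Column names are hypothetical: "group" is a protected attribute used
# only for testing, "label" is the analyst's final disposition
# (0 = no risk), and "prediction" is the model's output.
import pandas as pd

def false_positive_rate_by_group(df: pd.DataFrame) -> pd.Series:
    negatives = df[df["label"] == 0]  # alerts analysts closed as no-risk
    return negatives.groupby("group")["prediction"].mean()

# Toy data: group B is flagged far more often than group A on no-risk
# cases, which is exactly the kind of gap documentation should surface.
results = pd.DataFrame({
    "group": ["A", "A", "B", "B"],
    "label": [0, 0, 0, 0],
    "prediction": [1, 0, 1, 1],
})
print(false_positive_rate_by_group(results))
```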

This evidence can become particularly complicated for companies that customize their rules and models, as they must be able to defend their model’s decisions in each case. They also need to show that they treat information consistently across different parts of their organization.

In our experience, the key is transparency. Some organizations still associate artificial intelligence with an opaque “black box” approach, in which you can’t see how the model works or makes its decisions. This lack of transparency would be a major issue for a regulator, and banks must avoid it in the AML space.




How Moody's Analytics can help

We help organizations by providing data orchestration, human expertise, and explainable AI models.

We designed our intelligent screening solution to provide transparent AI that can help earn regulators’ confidence. It is powered by our Grid database, which increases automation and efficiency in screening and risk monitoring.

Our solution is designed to solve precise problems, in line with regulators’ expectations that companies’ use of AI should be specific and not overcomplicated. It uses a supervised learning model that provides accountability and traceability of its decisions.
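As a hypothetical illustration of what traceability can look like in practice, the sketch below records the model version, inputs, score, and outcome for every alert decision so that any individual decision can be reconstructed later. The record fields are assumptions for illustration, not a description of our product's internals.

```python
# Minimal sketch of decision traceability: an append-only audit log of
# alert decisions. Field names are hypothetical.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AlertDecisionRecord:
    alert_id: str
    model_version: str
    features: dict
    score: float
    decision: str  # e.g. "escalate" or "auto-close"
    timestamp: str

def log_decision(record: AlertDecisionRecord, path: str = "audit_log.jsonl"):
    # One JSON record per line keeps the log easy to search and replay.
    with open(path, "a") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

log_decision(AlertDecisionRecord(
    alert_id="A-1001",
    model_version="screening-v1.2",
    features={"name_match_score": 0.91, "dob_match": 1},
    score=0.87,
    decision="escalate",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```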

We emphasize the importance of human expertise working in combination with AI models: investigators review the alerts the models generate, providing assurance and oversight. This ability to demonstrate accountability and reliability is crucial for earning regulators' trust in your AI-led AML program.




Learn more

If you're interested in building an AI-led program, contact us to learn more about our intelligent screening solution.