The Financial Revolutionist

How banks can use AI for financial crime compliance

Peter Piatetsky is the President of Castellum.AI, a financial crime prevention solution. He has a decade of experience in risk and compliance.

Of the many use cases for AI, one of the most powerful is helping banks combat financial crime. Experts say the technology is most effective when banks apply it to narrowly scoped goals, particularly repetitive manual work such as summarizing and analyzing large amounts of data. Three financial crime use cases in particular lend themselves to well-structured applications of AI. The FR’s Chief Research Officer Dan Latimore sat down with Piatetsky to get his views.

What are the best use cases for AI to help combat financial crime?

First off, let’s look at data standardization, categorization, and enrichment. When cleaning data sourced from different regions, languages, and formats, AI can process this information rapidly and accurately, distinguishing, for example, between individuals, vessels, and entities. Given the volumes involved, AI outperforms traditional manual processes. A clean data set is the foundation for meaningful LLM utility.
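To make the standardization step concrete, here is a minimal Python sketch. The target schema, the prompt wording, and the `call_llm` stub are illustrative assumptions, not Castellum.AI’s implementation; in practice the stub would be replaced by a call to whatever model provider the bank has vetted.

```python
import json
from dataclasses import dataclass


@dataclass
class StandardRecord:
    """Target schema for cleaned watchlist data (illustrative, not a real standard)."""
    name: str
    record_type: str  # "individual", "vessel", or "entity"
    source: str


def call_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call so the sketch runs offline.
    # Replace with your vetted provider's client.
    return '{"record_type": "vessel", "name": "OCEAN PEARL"}'


def standardize(raw: dict, source: str) -> StandardRecord:
    # Ask the model to normalize one raw record into the target schema.
    prompt = (
        'Classify this watchlist record as individual, vessel, or entity, '
        'and return JSON with keys "record_type" and "name" only.\n'
        'Record: ' + json.dumps(raw, ensure_ascii=False)
    )
    parsed = json.loads(call_llm(prompt))
    return StandardRecord(
        name=parsed["name"],
        record_type=parsed["record_type"],
        source=source,
    )


# Example: a raw record whose format varies by source list.
raw = {"nm": "OCEAN PEARL", "type_code": "V", "flag": "PA"}
print(standardize(raw, source="OFAC SDN"))
```

Keeping the model’s job this narrow, one record in, one structured classification out, is what makes the output easy to validate against the raw data.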

AI can also improve how compliance alerts are handled. In the screening process, it helps distinguish true positives from false positives, following a structured approach to determine whether more information is needed. This defined decision-making process is where Large Language Models (LLMs) show significant value.
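A hedged sketch of what that structured adjudication could look like in code. The three-way disposition, the prompt, and the `call_llm` stub are assumptions for illustration; a production system would also log the full model exchange for audit.

```python
import json

# The defined decision set: anything else falls back to human review.
DISPOSITIONS = {"true_positive", "false_positive", "needs_more_info"}


def call_llm(prompt: str) -> str:
    # Offline stand-in for a real model call; swap in your provider's client.
    return json.dumps({
        "disposition": "needs_more_info",
        "rationale": "Name matches but date of birth is missing from the alert.",
    })


def triage_alert(alert: dict) -> dict:
    prompt = (
        'You are screening a sanctions alert. Compare the customer record to '
        'the watchlist hit and return JSON with keys "disposition" '
        '(true_positive, false_positive, or needs_more_info) and "rationale".\n'
        'Alert: ' + json.dumps(alert)
    )
    decision = json.loads(call_llm(prompt))
    # Fail closed: output outside the defined decision set goes to a human.
    if decision.get("disposition") not in DISPOSITIONS:
        decision = {
            "disposition": "needs_more_info",
            "rationale": "Model output did not match the decision schema.",
        }
    return decision


alert = {
    "customer": {"name": "John A. Smith"},
    "hit": {"name": "SMITH, John", "list": "OFAC SDN"},
}
print(triage_alert(alert))
```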

Finally, AI can play an important role in writing suspicious activity reports. It streamlines the process of compiling vast amounts of data into standardized reports, ensuring consistency and efficiency in reporting suspicious activities and saving an enormous amount of time for human analysts.
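As a sketch of the reporting step: the idea is to keep the case facts structured and let the model draft only the narrative, which an analyst then reviews before anything is filed. The field names and the `call_llm` stub are illustrative assumptions.

```python
import json


def call_llm(prompt: str) -> str:
    # Offline stand-in; in production this would be a model call whose
    # output a human analyst reviews before the report is filed.
    return (
        "Between 2024-01-03 and 2024-01-17, the subject sent 14 outbound "
        "wire transfers totaling 412,000 USD to counterparties in three "
        "jurisdictions, consistent with rapid movement of funds..."
    )


def draft_sar_narrative(case: dict) -> str:
    # Give the model only verified, structured case facts so the draft
    # narrative stays consistent with the underlying data.
    prompt = (
        'Draft a suspicious activity report narrative using only the facts '
        'below. Do not speculate beyond them.\n'
        'Case facts: ' + json.dumps(case)
    )
    return call_llm(prompt)


case = {
    "subject": "ACME TRADING LLC",
    "period": "2024-01-03 to 2024-01-17",
    "activity": "14 outbound wires totaling 412000 USD",
    "red_flags": ["rapid movement of funds", "counterparties in 3 jurisdictions"],
}
print(draft_sar_narrative(case))
```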

Where are we at with explainability?

One of the enduring concerns about AI, particularly in highly regulated industries like banking, is explainability. For the vast majority of companies, building their own LLM makes no sense, so choosing the right vendor is critical. Beyond general-purpose models like OpenAI’s ChatGPT or Anthropic’s Claude, some companies provide specialized LLMs whose workings can be explained in detail to auditors and regulators. Models are trained differently and can make mistakes, just like humans, so it’s crucial to work with vendors who can explain how their AI systems reach conclusions. Don’t use black-box models that can’t provide clear explanations, or whose vendors claim their system is proprietary; regulators can view that as negligence. And when we look at consent orders, the ones with the largest fines are those where no one could really explain how the process arrived at the answer.

How has the attitude of banks evolved over the last year? Are they becoming more comfortable with AI? How about smaller institutions? 

The adoption of AI in banking varies significantly between large and small institutions. Tier one banks are taking a cautious approach, forming internal task forces to identify suitable applications for AI. They are aware of the extensive scrutiny they face and the challenges inherent in bringing a new technology into a complex system. That being said, they’re obviously using the technology. 

Smaller banks, on the other hand, are more agile and keen on the cost savings and efficiencies that AI can offer, and interest among non-tier one banks is growing. These institutions are looking to leverage AI for immediate benefits, a shift in the industry’s landscape where even smaller players are recognizing the technology’s potential.

As banks navigate this transformative journey, they must balance innovation with regulatory compliance, ensuring that their AI systems are both effective and transparent.