How Saifr is overhauling compliance ops through AI

Saifr is a Boston-based regtech company whose AI-powered platform helps organizations comply with regulatory and risk requirements. Founded in 2020, Saifr officially launched to the public in 2022 after incubating in Fidelity Labs.

In an interview with The Financial Revolutionist, Vall Herard, CEO and Co-Founder of Saifr, outlines Saifr’s mission, describes the need for AI-powered regtech, and addresses concerns about the use of AI in finance. 

This interview has been edited for length and clarity.

The Financial Revolutionist: How do you describe Saifr?

Vall Herard, CEO and Co-Founder, Saifr: At heart, what Saifr is trying to do is reduce the compliance friction that exists in financial institutions. Go inside any financial institution and there's always some friction between compliance and other departments. In Saifr's case, the particular friction we're trying to reduce stems from the fact that in regulated industries, such as financial services, anything you put out that someone can see, hear, or read is governed by a set of rules around communications.

So if you are a marketing or content producer at a financial institution, and you're creating content to engage with customers, whether retail or institutional, you want to get that content out of the gate as quickly as possible. We can reduce some of the friction that exists right now and help you get that content to market five to 10 times faster. You might think of it as a grammar check for compliance.
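To make the analogy concrete, here is a minimal sketch of what a "grammar check for compliance" pass could look like. The patterns, risk labels, and function name below are hypothetical illustrations, not Saifr's actual rules or models:

```python
import re

# Hypothetical illustration of a "grammar check for compliance": scan marketing
# copy for phrases that communications rules commonly treat as problematic.
# These patterns and labels are made up; they are not Saifr's actual rules.
RISKY_PATTERNS = {
    r"\bguaranteed returns?\b": "promissory claim",
    r"\brisk[- ]free\b": "misleading risk characterization",
    r"\bbest[- ]performing fund\b": "unsubstantiated superlative",
}

def flag_compliance_risks(text: str) -> list[dict]:
    """Return flagged spans with positions and reasons, like a spellchecker."""
    findings = []
    for pattern, reason in RISKY_PATTERNS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            findings.append(
                {"span": match.group(0), "start": match.start(), "reason": reason}
            )
    return findings

copy = "Our best-performing fund delivers guaranteed returns, risk-free."
for finding in flag_compliance_risks(copy):
    print(finding)
```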

How did you come across that market gap?

I was doing AI before we were calling it AI, so old-fashioned regression econometrics, if you will. Through that, I started in trading and then went into risk management, primarily market and credit risk. In those spaces, we're always trying to quantify things. Eventually, we got to a point where we started talking about operational risk management, and how to model the losses a company can incur because of failures in its operational systems.

Somehow, no one ever thought about compliance as one of the risks you could measure and calculate. How do you go about doing that? One of the biggest problems we had in trying to model operational risk, when I was working in investment banking, was getting access to data, so that we could see what the data tells us about what kinds of models we can build.
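For readers unfamiliar with operational risk modeling, one common textbook method, once the data exists, is the loss distribution approach. The sketch below is purely illustrative, with made-up parameters; it is not a description of any model Herard built:

```python
import numpy as np

# Illustrative only: a textbook loss distribution approach (LDA) for operational
# risk. Event frequency is modeled as Poisson, loss severity as lognormal, and
# the annual loss distribution is built by Monte Carlo simulation.
rng = np.random.default_rng(seed=42)

N_SIMS = 100_000               # simulated years
EVENT_RATE = 3.0               # assumed mean number of loss events per year
SEV_MU, SEV_SIGMA = 12.0, 1.5  # assumed lognormal severity parameters (log-dollars)

annual_losses = np.empty(N_SIMS)
for i in range(N_SIMS):
    n_events = rng.poisson(EVENT_RATE)
    annual_losses[i] = rng.lognormal(SEV_MU, SEV_SIGMA, size=n_events).sum()

# Capital is often set at a high quantile of the simulated annual loss distribution.
var_999 = np.quantile(annual_losses, 0.999)
print(f"99.9% annual loss quantile: ${var_999:,.0f}")
```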

I remember being at a compliance conference about three years ago. I asked whether this is something people would be interested in, and a room full of chief compliance officers from some of the major financial institutions said yes, but the data doesn't exist. So when I got to Fidelity, we started kicking around this idea. And lo and behold, we had access to all of this data: private data, essentially, but really the collective experience of very large financial institutions.

We also went out and collected additional data and reviewed it, because that's one of the interesting aspects of what we do. You can't just go out and collect information that's sitting there on the web; there's a certain level of expertise that you need.

To what extent does that then rely upon qualitative feedback from people who are doing this work already in order to calibrate your models?

That's a very good question. There is this exercise around risk appetite that needs to take place. Once we collect all of the data, we have experts in the compliance space who look at the results of the models. When we go to a client, we ask them for a certain set of documents to help calibrate the model, because something that we may identify as a risk may be borderline. It may be something that they are comfortable with, but another company may not be. So that human-in-the-loop feedback and retraining of the model is something that we pay very close attention to, because the risk appetite varies across different organizations. And even the rules can be slightly different for communications going to an institutional investor versus a retail investor.
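As an illustration of what this kind of per-client calibration could look like, here is a minimal sketch. It assumes the model emits a risk score per passage and that each client supplies its own reviewers' past decisions; the function, scores, and decisions are all hypothetical:

```python
import numpy as np

# A minimal sketch of per-client threshold calibration, assuming the model emits
# a risk score in [0, 1] per passage. The same scores map to different flags
# under different risk appetites.
def calibrate_threshold(scores: np.ndarray, reviewer_flags: np.ndarray) -> float:
    """Pick the score cutoff that best reproduces a client's reviewers' decisions."""
    candidate_cutoffs = np.linspace(0.0, 1.0, 101)
    agreements = [
        np.mean((scores >= cutoff) == reviewer_flags) for cutoff in candidate_cutoffs
    ]
    return float(candidate_cutoffs[int(np.argmax(agreements))])

# Two clients review the same borderline content; a conservative client flags more.
scores = np.array([0.30, 0.45, 0.55, 0.70, 0.90])
conservative = np.array([False, True, True, True, True])
tolerant = np.array([False, False, False, True, True])

print(calibrate_threshold(scores, conservative))  # lower cutoff: flags borderline cases
print(calibrate_threshold(scores, tolerant))      # higher cutoff: lets borderline pass
```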

I imagine certain institutions that you want to work with see AI as a risk factor itself. How do you go about responding to that or addressing that concern?

It's a fair concern. The financial services industry is very accustomed to using models to try to understand risk. A lot of the legislation and regulations you have are risk-based. And when you look deep enough, the industry already uses first-generation AI models. For example, the need to stress-test your organization came out of the Dodd-Frank Act, but how you stress-test is essentially up to you. What most companies do is use a regression model to project all of the risk parameters on the bank's balance sheet over eight quarters. A lot of these financial institutions are already using first- or second-generation AI models to say, “Hey, we're going to put x billion dollars aside.”
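A stylized version of the regression-based stress test Herard describes might look like the following. The macro drivers, coefficients, and the loss-rate variable are invented for illustration; real stress tests involve far more parameters and supervisory scenarios:

```python
import numpy as np

# Stylized sketch: regress a hypothetical balance-sheet risk parameter (a loan
# loss rate) on macro drivers, then project it over eight quarters of stress.
# All data and coefficients here are fabricated for illustration.
rng = np.random.default_rng(0)

# Synthetic historical quarterly data: unemployment and GDP growth -> loss rate.
X_hist = np.column_stack([
    np.ones(40),               # intercept
    rng.normal(5.0, 1.0, 40),  # unemployment (%)
    rng.normal(2.0, 1.5, 40),  # GDP growth (%)
])
beta_true = np.array([0.5, 0.30, -0.10])
y_hist = X_hist @ beta_true + rng.normal(0, 0.1, 40)

# Fit by ordinary least squares.
beta_hat, *_ = np.linalg.lstsq(X_hist, y_hist, rcond=None)

# Supervisory-style scenario over eight quarters: rising unemployment, contracting GDP.
unemployment = np.linspace(5.0, 10.0, 8)
gdp_growth = np.linspace(1.5, -4.0, 8)
X_stress = np.column_stack([np.ones(8), unemployment, gdp_growth])
projected_loss_rate = X_stress @ beta_hat

for q, loss in enumerate(projected_loss_rate, start=1):
    print(f"Q{q}: projected loss rate {loss:.2f}%")
```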

We have a very defined model risk policy. What that means is that we sometimes look at the output of different models to see if the results converge, and test whether they agree with what a person would say. This is where the concept of backtesting comes in. We have former SEC and former FINRA staff attorneys on our team. However, we don't tell them what the models' results are. Instead, we ask what they would have flagged, and then compare their submissions against the models' output.
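In miniature, the backtesting exercise he describes could be computed like this. The flag vectors below are fabricated; the point is the two measurements: cross-model convergence, and agreement with blinded human reviewers, which is the source of the 93% to 95% figure quoted below:

```python
import numpy as np

# Sketch of the backtesting exercise with made-up data: several models flag the
# same documents, we check where they converge, and we compare the ensemble
# against what blinded human reviewers flagged independently.
model_a = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1], dtype=bool)
model_b = np.array([1, 1, 0, 1, 0, 0, 0, 0, 1, 1], dtype=bool)
human   = np.array([1, 1, 0, 1, 0, 1, 0, 1, 1, 1], dtype=bool)

# Convergence: how often the models agree with each other.
convergence = np.mean(model_a == model_b)

# Ensemble decision (here: flag if either model flags), compared to the humans.
ensemble = model_a | model_b
agreement_with_human = np.mean(ensemble == human)

print(f"Model convergence: {convergence:.0%}")
print(f"Agreement with blinded reviewers: {agreement_with_human:.0%}")
```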

How close are your models now?

We consistently catch about 93% to 95% of what a human compliance officer would typically have identified. One of the reasons for building Saifr is that a lot of compliance work is very repetitive. We want the human to catch the things a machine will never be able to catch, the things that require human judgment. The 5% to 7% we don't catch are the complex cases.