Overhauling lending fairness with FairPlay AI

This interview has been edited for length and clarity.

The Financial Revolutionist: What is FairPlay AI, and who uses it?

Kareem Saleh, Founder & CEO: We cheekily refer to ourselves as the world's first Fairness-as-a-Service company. Fundamentally, we make software that allows anybody using an algorithm to answer five questions: Is my algorithm fair? If not, why not? Could it be fairer? What's the economic impact to our business of being fairer? And finally: Did we give the folks we rejected a second look to see if we said no to somebody we ought to have approved?

Our customers are mostly financial institutions that want the economic, regulatory, and reputational benefits of being fair. We work with some of the biggest names in FinTech: Figure, Happy Money, Octane, and a bunch of others. They use us to automate their fair lending testing and reporting, and to run second-look and decline management programs designed to improve positive outcomes for historically disadvantaged groups.

What need does FairPlay look to address?

We have this very unfortunate history of redlining in America, where financial institutions refused to lend in certain majority-minority, low- and moderate-income, predominantly Black neighborhoods. Through the Fair Housing Act of 1968, Congress said, “Okay, you are prohibited from considering the race or the sex or the national origin of a loan applicant, and that's going to make things fair.” Effectively, we tried to achieve fairness through blindness, but more than 50 years in, fairness through blindness has not made a significant dent in homeownership: The Black homeownership rate today is no higher than it was when the Fair Housing Act passed.

We've been working with a technique that we call “fairness through awareness.”

For example, one of the variables that we often see being used in loan underwriting is consistency of employment. If you think about it, all things being equal, consistency of employment is necessarily going to discriminate against women between the ages of 18 and 45 who take time out of the workforce to raise a family. So maybe what we ought to do is tell these algorithms that women will sometimes exhibit inconsistent employment, and before rejecting someone on that basis, do a check to see whether they resemble good applicants on other dimensions. We call that fairness through awareness.
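
To make the idea concrete, here is a minimal sketch of what such a second-look check could look like. Everything in it is an illustrative assumption, including the function name, the similarity measure, and the thresholds; it is not FairPlay's actual method, just one plausible reading of "check whether a declined applicant resembles good applicants on other dimensions."

```python
import numpy as np

def second_look(applicant, good_approved, decline_feature_idx, k=25, threshold=0.6):
    """Hypothetical second-look check: before finalizing a decline driven by
    one feature (e.g., employment consistency), ask whether the applicant
    resembles approved loans that performed well, on the other dimensions.

    applicant:           1-D feature vector for the declined applicant
    good_approved:       2-D array, one row per approved loan that performed well
    decline_feature_idx: index of the feature that triggered the decline
    """
    # Mask out the feature that triggered the decline.
    keep = np.arange(applicant.shape[0]) != decline_feature_idx
    a = applicant[keep]
    g = good_approved[:, keep]

    # Cosine similarity to each good approved applicant on the remaining features.
    sims = (g @ a) / (np.linalg.norm(g, axis=1) * np.linalg.norm(a) + 1e-12)

    # If the applicant looks like the k most similar good performers,
    # flag the decline for human re-review rather than auto-rejecting.
    top_k = np.sort(sims)[-k:]
    return top_k.mean() >= threshold  # True -> give this decline a second look
```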

What does that accomplish?

So far, the work that we're doing shows that something like 25% to 33% of the highest-scoring Black, Brown, and female folks who get declined for loans would have performed as well as the riskiest white men who get approved.

To what extent do you actually anticipate fairness through awareness trickling upwards into regulatory policy, versus it being an opt-in product that you have to build out for lenders and other players who want it?

I was in Washington earlier this month, and we met with the CFPB on this stuff. Part of what's motivating the interest in fairness through awareness is the emergence of new algorithmic fairness techniques whose animating goal is to do a better job of being sensitive to populations that are not well represented in the data.

And I was super psyched to see that the Philly Fed recently put out a paper on fairness through awareness techniques, which basically said, “Hey, we acknowledge for the first time that maybe taking demographic class membership into account would have done a better job not only for those communities, but also for the lenders.”

Given the judicial system’s apparent turn against affirmative action, to what extent might fairness through awareness just get quashed in some circuit court?

When we talk about fairness, we mean for poor white people too; candidly, that's the argument made by folks on the left like Bernie Sanders. One of the key axes of US structural inequity runs through low- and moderate-income communities. One of the things that is cool about fairness through awareness is that you can expose the models to many different subpopulations. It's not just Black people; it can be rural white people in West Virginia, whose riskiness can also be overstated by things like credit scores. So fairness through awareness does not necessarily mean privileging one group over another. It just means recognizing that different groups have different characteristics and perhaps different performance from a credit risk perspective.
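
One way to picture running the same test along many axes: a common fair-lending screen is the adverse impact ratio, a group's approval rate divided by a reference group's. Below is a minimal sketch under assumed column names (`approved`, `race`, `income_band`, `geography` are hypothetical); it illustrates the general technique, not FairPlay's product.

```python
import pandas as pd

def adverse_impact_ratios(loans, group_col, approved_col="approved", reference=None):
    """Adverse impact ratio (AIR) for every subpopulation in group_col:
    each group's approval rate divided by the reference group's rate.
    Under the traditional 'four-fifths rule', an AIR below ~0.8 is a red flag.
    """
    rates = loans.groupby(group_col)[approved_col].mean()
    if reference is None:
        reference = rates.idxmax()  # compare against the most-approved group
    return (rates / rates[reference]).sort_values()

# The same screen can be run along any axis, not just race:
# adverse_impact_ratios(loans, "race")
# adverse_impact_ratios(loans, "income_band")
# adverse_impact_ratios(loans, "geography")  # e.g., rural vs. urban counties
```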

Why do banks and lenders agree to be part of this vanguard?

It’s very hard when you work in financial services to go to a lender and try to get them to be fairer purely for the sake of being fairer. We often have to go to them and say, “Hey, look, your unfairness is costing you money. If you took a fairer approach, not only would you get this humanitarian and regulatory dividend, but you'd actually benefit economically, too.”

Finally, what do you actually see as within the realm of possibility when it comes to really bringing fairness through awareness to housing and lending in the US?

We have a Black woman Vice President who's keenly focused not only on financial empowerment but also on historical discrimination. We have a Black Deputy Comptroller of the Currency who gave a speech earlier this month saying that the agency is laser-focused on fair lending issues. You've got states like New York and California that are also trying to be in the vanguard with respect to AI governance and harnessing AI systems to be fair. It’s an exciting time for those of us who believe in the promise of this technology, but not in a techno-utopian way.