AI/ML and equity in lending
Artificial intelligence has been around for decades, but its presence in public consciousness has increased dramatically since the advent of generative AI tools like ChatGPT. This shift in perception has brought both benefits and costs for AI-focused fintechs.
Jamie Twiss, Founder and CEO of Australia-based lending platform Beforepay, told The Financial Revolutionist that, while inbound interest from financial institutions has increased, many sales conversations are now more conceptual, with CEOs looking to implement AI in some shape or form, rather than discussions about concrete next steps using Beforepay’s solutions.
Regardless of some executives’ relative unfamiliarity with the topic, Twiss said he foresees a paradigm shift on the horizon, in which all lenders use AI and ML to make lending decisions. “I can say with a high level of confidence that in 10 years, everybody is going to be lending this way,” he said, likening the sea change to the shift from cathode-ray tube TVs to high-definition ones.
But this seismic shift hasn’t been accompanied by regulatory evolution. That gap between technology and law has left much of the industry in the lurch, anticipating and hedging against state action. For AI-driven lending platforms, for instance, these regulatory uncertainties have fomented an intra-industry debate about the kinds of technologies and algorithms that should be deployed.
In the eyes of NY-based Stratyfy, a startup offering AI-driven predictive analytics and decision-management solutions for financial institutions, regulations potentially coming down the pike should compel users of AI in lending and other financial operations to opt for interpretable algorithms rather than black-box or even explainable ones, which are a step less layperson-friendly than interpretable models.
“When you're in financial services… [there are] a lot of regulations, and when you're with the regulators, anytime you're using algorithms, you need to be able to explain what it is doing,” said Deniz A. Johnson, Stratyfy’s COO. “You have a very high risk of getting a fine if you can't explain what you're doing in an understandable way for all stakeholders, from your compliance team down to your regulators.”
Having interpretable lending algorithms, in Stratyfy’s view, creates staffing advantages; explainable models still require data-fluent employees to translate complex algorithms into plain language. Interpretability, Johnson argues, opens the door to more decision-makers who can work with qualitative or simpler quantitative data, rather than more numerically or structurally challenging flows.
“When you want to lend to more underrepresented groups, when you want to shift your credit box differently, you have the option to do that without needing a whole data science team,” Johnson said of interpretable algorithms.
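To make the distinction concrete, consider a minimal sketch of the kind of rule-based model Johnson is describing. The features and thresholds below are invented for illustration and are not Stratyfy’s product; the point is that every rule is legible to a non-technical reviewer, and the credit box can be shifted by editing a threshold rather than retraining a model.

```python
# Hypothetical interpretable scorecard (illustrative only, not Stratyfy's
# actual product). Each rule is plainly stated, and declined applicants
# receive human-readable reasons.

def score_applicant(monthly_income, debt_to_income, months_employed):
    """Return (decision, reasons) from plainly stated rules."""
    reasons = []
    points = 0

    if monthly_income >= 3000:  # threshold is an assumed example value
        points += 2
    else:
        reasons.append("monthly income below $3,000")

    if debt_to_income <= 0.35:
        points += 2
    else:
        reasons.append("debt-to-income ratio above 35%")

    if months_employed >= 12:
        points += 1
    else:
        reasons.append("less than 12 months of employment")

    decision = "approve" if points >= 4 else "decline"
    return decision, reasons

decision, reasons = score_applicant(2800, 0.30, 24)
print(decision, reasons)  # decline ['monthly income below $3,000']
```

Shifting the credit box to reach more underrepresented borrowers here means changing a single number, a task a credit officer can do and a compliance team can audit without a data science team in the loop.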
But Twiss of Beforepay thinks interpretability comes with tradeoffs, especially between a model’s interpretability and its accuracy. “The accuracy of the models is directly what drives fairness and inclusion,” he said, arguing for a balance between an algorithm’s accessibility and the accuracy that added complexity can bring. At the same time, he said, neural network-based models that are “totally unexplainable” aren’t viable either.
Instead, Twiss suggested that SHAP values (SHapley Additive exPlanations) can be used to identify the decisive variables in multifactor algorithms. Rather than flagging every part of a decision tree, SHAP values can surface the most critical reasons for a loan being approved or rejected: a lengthy history of gambling, sporadic income, or some other variable.
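In code, this might look like the following sketch using the open-source shap library. The model, training data, and feature names are synthetic stand-ins, not Beforepay’s system; the sketch only shows how per-decision Shapley attributions can be ranked to surface the top drivers.

```python
# Generic illustration of SHAP attribution on a tree model, not Beforepay's
# actual pipeline. Feature names and data are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["gambling_spend", "income_volatility", "tenure_months"]
X = rng.normal(size=(500, 3))
# Synthetic labels: risk driven mostly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes each feature's Shapley contribution to a prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

# Surface the most influential drivers of this single decision,
# rather than walking every split in every tree.
contributions = sorted(zip(features, shap_values[0]),
                       key=lambda kv: abs(kv[1]), reverse=True)
for name, value in contributions:
    print(f"{name}: {value:+.3f}")
```

Sorting by absolute contribution yields a short, ranked list of reasons behind a single approval or rejection, which is the kind of summary Twiss describes.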
“I think we as a society have to make a decision about how much discretion do we want to give lenders on these lending decisions,” he concluded, gesturing to the need for more regulation around lending and AI.