The Financial Revolutionist

AI and the next wave of fintech

Artificial intelligence is undoubtedly on the rise. ChatGPT and other AI-powered chatbots have gained prominence since the beginning of the year, highlighting the potential for AI to disrupt a range of sectors, including finance and fintech. But its shortcomings also point to the risks of overzealous or premature implementation:

Lending

AI has the capacity to rapidly reshape lending operations. Most notably, its ability to interpret alternative lending data and generate actionable insights can expand the pool of unbanked and underbanked applicants who can receive credit, including at favorable rates.

An ambitious cohort of fintechs and incumbents is already working on viable products in this domain. Up-and-coming players like Conductiv are working to aggregate alternative data sources and translate those inputs into clear action items for lenders. Still others, like Stratyfy, are looking to ensure that these AI models actually generate lending decisions that improve financial inclusion rather than exacerbate bias.
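
To make the lending use case concrete, the sketch below shows how alternative data might feed a simple credit model. It is a minimal illustration, not a description of any vendor’s product: the feature set (rent and utility payment history, cash-flow volatility), the synthetic data, and the logistic regression are all assumptions.

```python
# Hypothetical sketch: scoring thin-file applicants with alternative data.
# Feature names and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)
n = 1_000

# Alternative data a lender might aggregate (all assumed):
X = np.column_stack([
    rng.integers(0, 24, n),      # on-time rent payments, last 24 months
    rng.integers(0, 24, n),      # on-time utility payments, last 24 months
    rng.gamma(2.0, 150.0, n),    # cash-flow volatility, dollars
])

# Synthetic repayment outcomes, correlated with payment history.
logits = 0.15 * X[:, 0] + 0.10 * X[:, 1] - 0.004 * X[:, 2] - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)

# Score a new thin-file applicant: 20 on-time rent payments,
# 18 on-time utility payments, moderate cash-flow volatility.
applicant = np.array([[20, 18, 200.0]])
print(f"Estimated repayment probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```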

Support

AI chatbots have already been around for a number of years. Bank of America’s Erica chatbot, for example, has been used more than 1 billion times since its launch in 2018. The banking giant’s support product generates efficiencies by sparing customers long waits for phone support. It also highlights how support could expand into new, more complex areas as AI engines grow more sophisticated.

For the time being, however, many banks are holding off on integrating products like ChatGPT into their operations. Especially if implemented in the advisory space, AI-powered chatbots in their current iteration could generate false information with potentially catastrophic results. 

“You have machine-generated responses that have an air of authority about them because they came from a computer, so there must be some validity to them,” said Steve Rubinow, former chief information officer of the New York Stock Exchange, in an interview with American Banker. “And so it might have a false sense of credibility for some issues.”

Risks

Other risks remain as well. The data sets on which AI depends can reflect existing biases, whether based on gender, race, income, or some other variable, and can inadvertently amplify the effects of those potentially discriminatory variables if models are calibrated in certain ways. And because AI remains lightly regulated, there are few compliance standards that can certify the social effects of one model over another, though some private actors are working on these efforts.
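
One way such audits can work is to compare a model’s approval rates across groups. The sketch below applies the “four-fifths rule,” a common disparate-impact heuristic; the approval rates here are invented purely for illustration.

```python
# Hypothetical sketch: checking a credit model's approvals for disparate
# impact via the four-fifths rule. All data below is invented.
import numpy as np

rng = np.random.default_rng(seed=7)

# Simulated approval decisions (True = approved) for two applicant groups.
approvals_group_a = rng.random(500) < 0.60   # assumed 60% approval rate
approvals_group_b = rng.random(500) < 0.42   # assumed 42% approval rate

rate_a = approvals_group_a.mean()
rate_b = approvals_group_b.mean()

# Four-fifths rule: the disadvantaged group's approval rate should be
# at least 80% of the advantaged group's rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Approval rates: {rate_a:.2%} vs {rate_b:.2%}, ratio {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: ratio below the 0.8 threshold.")
```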

And, finally, AI-powered products can create cybersecurity vulnerabilities. Internal documents uploaded to external AI platforms can become part of those platforms’ training data, potentially compromising sensitive proprietary information. These concerns have compelled major banks to ban certain AI tools internally while they wait for, or build out, more appropriate products that can securely suit their needs.
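
For teams that do experiment with external models in the meantime, a common interim safeguard is to scrub obvious identifiers before any text leaves the firm. The regex patterns below are a minimal, assumed starting point, not a complete data-loss-prevention product.

```python
# Hypothetical sketch: redacting obvious sensitive identifiers from text
# before it is sent to an external AI service. The patterns below are a
# minimal, assumed starting point, not a complete DLP solution.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US Social Security numbers
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),         # likely card numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
]

def scrub(text: str) -> str:
    """Replace matches of each sensitive pattern with a placeholder tag."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = (
    "Summarize this memo for client john.doe@example.com, "
    "SSN 123-45-6789, card 4111 1111 1111 1111."
)
print(scrub(prompt))
# Only the scrubbed prompt would be forwarded to the external model.
```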