Inside banks’ AI strategies: Characteristics of leaders
/A publishing note from The FR: We'll be giving your inbox a much-deserved break through the end of the year — but rest assured we'll be back in your inbox with the latest in fintech on January 7. Happy Holidays, and thanks for being part of The FR community!
Dan Latimore, chief research officer of The FR, was recently a guest on the American Banker podcast, discussing his latest research on AI implementation at banks. It draws on The FR's latest Most Impactful report on AI and banking. Here are some excerpts drawn from the transcript of his interview with Penny Crosman, executive editor for technology at American Banker. We encourage you to check out the full interview here.
Penny Crosman:
Which are the [AI] use cases that you feel are really bringing about good results today in some of these banks?
Dan Latimore:
I would, in stepping back, think about big buckets of AI use cases. I talk about five of them, and there are interesting use cases in each of them. The first is just digesting data. You've got, say, a fraud analyst who has to potentially sort through hundreds, if not thousands, of pages of documents. If the AI assistant can help in digesting that and flagging the most salient points, and/or distilling that huge corpus of data into something much more manageable, that's a big help.
The second big bucket I'd put in there, and this in some ways applies more to the bigger banks that have huge legacy overhang and technical debt, is coding. It aids not just in writing, developing, and testing new code, but also in translating things like COBOL into more modern languages that have a much bigger developer base who can actually work with them.
The third bucket I'd categorize is pattern detection. So particularly in fraud, going out and looking for anomalies that should be investigated further by an analyst. And all of these, by the way, humans need to be in the loop at some stage.
The fourth one that is pretty interesting that goes into a squishier direction, if you will, is just generating first drafts. So on the marketing side or on the advisor side, they might be composing letters to clients, or even on the customer service rep side and the call center, putting together a first cut at what a customer response should be, and then giving that to the human to get over that blank page problem.
Then, I'd categorize the fifth as natural language processing, and it's just gotten so much better with gen AI than it was five years ago. And so people can get answers — both employees and customers — to basic questions and get pointed in the right direction for a certain piece of information rather than having to dig through lots of menus or compose in very carefully constructed syntax, some kind of query of a database. So those are the five buckets that I'd think about it in, and each of those has some pretty interesting use cases.
Penny Crosman:
That fourth one to me feels the riskiest, letting AI create the first draft of anything. Everyone always says there's a human in the loop, but what if that human only gives a cursory glance and misses something huge? I'm probably not taking an enlightened view in this case, but I just feel like a lot can go wrong with that first draft concept, but I'm happy to be proven wrong over time.
Dan Latimore:
I think it certainly can, and with all of these, you've got to have the same controls in place — whether it's random compliance checks or audits for one-to-one communication, or, for a marketing communication, a standard process where things are always reviewed by at least two people. You've still got to have those checks and balances in there and have people understand that there's a process they've got to follow.
Penny Crosman:
What about the people leading these projects? In your research, have you thought about the leadership traits and personality traits of the people who do this well? And do you have any thoughts on what kinds of personal qualities and leadership qualities need to be brought to bear to execute well in this area?
Dan Latimore:
Well, the first thing I'd observe is that you've got to have buy-in from the top of the house to really make this meaningful because you want to have resources devoted to the initiatives. You want to make sure that learnings from across the bank, whether it's a relatively small community bank where it's much easier or a behemoth or a titan, in our case, you want to make sure there's a mechanism where experiments and learnings can be propagated across the institution so that mistakes, which will be made…[are] rectified.
It's kind of like self-driving cars: once one encounters an anomaly, every other self-driving car picks up on that. But without that executive support mandating cross-institution collaboration, you're going to have a very tough time. I think the next part is that there's a huge element of change management here, and so change management principles apply to AI initiatives just as they do to any other kind of initiative. Keeping people informed, having checkpoints, publishing progress reports, having metrics and goals, and talking about how you're meeting them — all those are crucial. Just because gen AI in particular is a new technology doesn't mean that governance goes out the window.
Listen to the full interview at American Banker.