Five AI use cases for banks

Carey Ransom is a SaaS entrepreneur, executive, investor, advisor, and contributor for The FR, and has started, grown and/or led 8 B2B and consumer SaaS companies during startup and growth phases. He is currently President of Operate and the Managing Director of BankTech Ventures.

At BankTech Ventures, we spend a lot of time advising community banks on technology opportunities and changes. AI has been a particularly active topic of interest over the past two years.

Community banks can make tremendous efficiency and intelligence gains through machine learning and AI capabilities, which are showing up more frequently in the software they're using. Banks cannot ignore these capabilities as they try to keep up with their peers and competitors. What's more, team members increasingly expect better tools and solutions.

Community bankers often ask about key areas where they should be experimenting with AI today and how they should think about risks. Safety and soundness of AI rollouts is high on the priority list. Here are a few key use cases we think they should consider:

1. Fraud detection and prevention

AI can analyze patterns and anomalies in real time to flag suspicious activity, and it can support human analysts with additional layers of checks, scoring and bias detection.

Considerations: Banks need to ensure AI models don't generate excessive false positives that inconvenience customers or inadvertently target certain groups. Human oversight may be needed for complex cases.
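
For readers who want a concrete picture, the short Python sketch below shows one common approach: unsupervised anomaly scoring with an isolation forest over a few transaction features. The features, data and thresholds are invented for illustration; a production fraud model would be far richer and would sit behind the human review described above.

```python
# Minimal sketch: unsupervised anomaly scoring on transaction features.
# Feature choices, data and thresholds are illustrative, not a production fraud model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transactions: [amount, hour_of_day, merchant_risk_score]
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=1000),   # typical amounts
    rng.integers(7, 22, size=1000),                  # daytime activity
    rng.uniform(0.0, 0.3, size=1000),                # low-risk merchants
])
suspicious = np.array([[9500.0, 3, 0.9],             # large amount, 3am, risky merchant
                       [7200.0, 2, 0.8]])
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
scores = model.decision_function(transactions)       # lower score = more anomalous
flags = model.predict(transactions)                  # -1 = flagged for review

# Route flagged transactions to a human analyst rather than auto-declining,
# keeping a person in the loop for the complex cases noted above.
for idx in np.where(flags == -1)[0]:
    print(f"Review transaction {idx}: features={transactions[idx]}, score={scores[idx]:.3f}")
```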

2. Personalized financial advice and product recommendations

AI models for personalized and predictive recommendations can analyze customer data to provide tailored financial advice and product suggestions.

Considerations: Financial services firms should be transparent about the models they use so they can explain AI-sourced advice. It's important to ensure AI-generated recommendations put customers' best interests ahead of profitability.
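
As a simple illustration of propensity-based recommendations, the sketch below scores synthetic customers on their likelihood to benefit from a hypothetical savings product using logistic regression. The features, data and cutoff are assumptions made for the example, not a recommended production design.

```python
# Minimal sketch: propensity scoring for a product recommendation.
# Customers, features and the savings product are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Features per customer: [avg_monthly_balance, deposits_per_month, has_direct_deposit]
X = np.column_stack([
    rng.normal(5000, 2500, size=500).clip(min=0),
    rng.poisson(3, size=500),
    rng.integers(0, 2, size=500),
])
# Synthetic label: did similar customers historically open a savings product?
y = ((X[:, 0] > 6000) & (X[:, 2] == 1)) | (rng.random(500) < 0.05)
y = y.astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a few new customers; the final recommendation is left to a banker
# who can judge suitability, per the considerations above.
new_customers = np.array([[9000, 4, 1], [1200, 1, 0], [7500, 2, 1]])
propensity = model.predict_proba(new_customers)[:, 1]
for i, p in enumerate(propensity):
    action = "suggest a savings conversation" if p > 0.7 else "no recommendation"
    print(f"Customer {i}: propensity {p:.2f} -> {action}")
```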

3. Credit risk assessment

Although many bankers disagree, AI models can process troves of data more easily than traditional underwriting approaches, enabling fast, precise and personalized credit risk calculations.

Considerations: Banks need to regularly audit AI models for bias and ensure compliance with fair lending regulations.
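
One widely used screening check is the "four-fifths" adverse-impact ratio on approval rates across groups. The sketch below shows the arithmetic on synthetic decisions; the group labels and data are made up for illustration, and no single metric constitutes a fair-lending audit, which should involve compliance and legal teams.

```python
# Minimal sketch: a simple disparate-impact screen on model approval rates.
# Data and group labels are synthetic; real fair-lending audits go much further.
from collections import defaultdict

# (group, model_decision) pairs; decision 1 = approved, 0 = denied
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 1),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

rates = {g: approvals[g] / totals[g] for g in totals}
benchmark = max(rates.values())

# The "four-fifths rule" heuristic: flag any group whose approval rate falls
# below 80% of the highest group's rate for deeper review.
for group, rate in rates.items():
    ratio = rate / benchmark
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```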

4. Customer service chatbots and virtual assistants

AI-powered chatbots can handle routine inquiries and provide more customer self-service options. They can also improve response times and free up human agents to handle more complex questions.

Considerations: Banks need to implement robust security measures to protect customer data. They should also offer clear escalation paths to human agents when the issue can’t be resolved by a chatbot.
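
The escalation path can be as simple as a confidence-and-sensitivity gate in front of the bot. The sketch below is hypothetical: the intent classifier is a stand-in stub, and the intents and threshold are assumptions, but it shows the routing pattern.

```python
# Minimal sketch: route a chatbot conversation to a human agent when the
# model's confidence is low or the topic is sensitive. Intents, threshold
# and the classify() stub are hypothetical placeholders.
from dataclasses import dataclass

SENSITIVE_INTENTS = {"fraud_dispute", "account_closure", "complaint"}
CONFIDENCE_THRESHOLD = 0.75

@dataclass
class IntentPrediction:
    intent: str
    confidence: float

def classify(message: str) -> IntentPrediction:
    """Stand-in for a real intent classifier (assumed, not a real API)."""
    text = message.lower()
    if "unauthorized charge" in text:
        return IntentPrediction("fraud_dispute", 0.92)
    if "routing number" in text:
        return IntentPrediction("account_info", 0.88)
    return IntentPrediction("general_question", 0.41)

def route(message: str) -> str:
    pred = classify(message)
    if pred.intent in SENSITIVE_INTENTS or pred.confidence < CONFIDENCE_THRESHOLD:
        return f"escalate_to_human (intent={pred.intent}, conf={pred.confidence:.2f})"
    return f"self_service (intent={pred.intent}, conf={pred.confidence:.2f})"

for msg in ["What is my routing number?",
            "I see an unauthorized charge on my card",
            "Can you help me with something?"]:
    print(f"{msg!r} -> {route(msg)}")
```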

5. Regulatory compliance and reporting

AI can help banks keep up with changing regulations and automate reporting processes. AI assistants can also be agents of change, giving team members analysis of data and transactions; based on that analysis, teams can identify further opportunities to deploy AI agents.

Considerations: Banks need to ensure AI systems are explainable and auditable. These systems should be deployed alongside compliance teams, not against them, and they may still require human oversight to interpret complex regulatory requirements.
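
One way to make AI-assisted decisions auditable is to log every model output with its inputs, model identifier and rationale in an append-only record that compliance staff can review. The sketch below is a minimal, hypothetical example; the field names and sample decision are invented.

```python
# Minimal sketch: an append-only audit log for AI-assisted decisions, so
# compliance staff can trace what the model saw and why it flagged something.
# Field names, the model identifier and the example decision are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path: str, record: dict) -> None:
    """Append one decision record, chained to the prior log contents by hash
    so later edits to earlier entries become evident."""
    record = dict(record, timestamp=datetime.now(timezone.utc).isoformat())
    try:
        with open(log_path, "rb") as f:
            prev_hash = hashlib.sha256(f.read()).hexdigest()
    except FileNotFoundError:
        prev_hash = "genesis"
    record["prev_hash"] = prev_hash
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")

log_decision("ai_decisions.log", {
    "model": "transaction-monitoring-v3",   # hypothetical model identifier
    "inputs": {"transaction_id": "TX-1001", "amount": 9500.0},
    "output": "flagged_for_review",
    "reason": "amount exceeds customer's typical range",
    "reviewed_by_human": False,
})
```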

Getting ahead of the risks

When rolling out an AI system, banks should consider six risk-mitigation principles:

  • Data privacy and security: Implement strong encryption and access controls as key guardrails to protect sensitive customer information.

  • Transparency: Ensure AI decision-making processes can be understood and explained by bank staff, especially for regulatory purposes. Test AI models for potential biases or unintended consequences based on protected characteristics, including race, gender and age.

  • Oversight: Maintain appropriate human supervision and intervention capabilities for all AI systems.

  • Governance: Develop clear guidelines and governance structures for the ethical deployment of AI in banking and the reinforcement of each bank’s values. Ensure all AI applications adhere to relevant banking regulations and guidelines, which will continue to evolve as regulatory bodies better understand best practices for AI in banking.

  • Continuous monitoring and updates: Embrace the continuous iteration of AI models and ensure results continue to improve, both in accuracy and relevance.

  • Resilience and fallback mechanisms: Develop robust backup systems and processes to handle system disruptions and failures, while eliminating single points of failure.