How to regulate AI in financial services
In an address to the 27th Annual Symposium on Building the Financial System of the 21st Century in Washington, D.C. last week, Federal Reserve Governor Michelle Bowman highlighted the potential for AI to transform financial services. She stressed the need for regulators to be open to AI’s growing prominence and opportunities, despite its risks.
Bowman took office as a member of the Board of Governors of the Federal Reserve System in 2018. She was previously state bank commissioner of Kansas and served in the George W. Bush administration, as deputy assistant secretary and policy advisor to Homeland Security Secretary Tom Ridge and as director of congressional and intergovernmental affairs at the Federal Emergency Management Agency.
Below are five key takeaways from her talk, followed by relevant excerpts from her remarks.
1. Core use cases for AI in financial services include analyzing unstructured data, combating fraud and expanding access to credit.
One of the most common current use cases is reviewing and summarizing unstructured data.
AI use cases like this may present opportunities to improve operational efficiency without introducing substantial new risk into business processes. Pairing AI outputs with a human acting as a "filter" or "reality check" can capture efficiency gains while controlling for some AI risks. The relationship also runs in the other direction: AI can act as a "filter" or "reality check" on analysis produced by humans, checking it for potential errors or biases.
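To make the human-as-filter pattern concrete, here is a minimal sketch of such a workflow in Python. It is purely illustrative and not drawn from Bowman's remarks: the `ai_summarize` stub stands in for whatever model a bank might actually call, and the point is simply that no AI-drafted summary flows downstream without explicit human sign-off.

```python
from dataclasses import dataclass

@dataclass
class Summary:
    source_id: str
    text: str
    approved: bool = False  # flipped only by a human reviewer

def ai_summarize(document: str) -> str:
    """Hypothetical stand-in for a model call that condenses unstructured text."""
    return document[:120] + ("..." if len(document) > 120 else "")

def human_review(summary: Summary) -> Summary:
    """The human 'filter' or 'reality check' before the summary is used."""
    print(f"[REVIEW] {summary.source_id}: {summary.text}")
    summary.approved = True  # in practice, set only after analyst sign-off
    return summary

documents = {"loan-memo-001": "Borrower requests a working-capital line of credit secured by receivables."}
for doc_id, text in documents.items():
    draft = Summary(source_id=doc_id, text=ai_summarize(text))
    final = human_review(draft)  # downstream steps consume only approved summaries
```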
AI tools may also be leveraged to fight fraud. One such use is in combating check fraud, which has become more prevalent in the banking industry over the last several years.
Could AI tools offer banks a more effective way to fight this growing fraud trend? We already have some evidence that AI tools are powerful in fighting fraud. The U.S. Treasury Department recently announced that fraud detection tools, including machine learning AI, had resulted in fraud prevention and recovery totaling over $4 billion in fiscal year 2024, including $1 billion in recovery related to the identification of Treasury check fraud. While the nature of the fraud may be different in these cases, we should recognize that AI can be a strong anti-fraud tool and provide significant benefits for affected bank customers.
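Treasury has not disclosed the details of its detection tools, so the following is only a generic sketch of the kind of machine-learning technique commonly used for fraud screening: an anomaly detector (here, scikit-learn's IsolationForest) trained on invented check features, flagging an outlier deposit for human investigation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one deposited check: [amount, payee_account_age_days, checks_from_payer_this_week]
# (invented features for illustration; production fraud models use far richer signals)
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(500, 150, 1000),     # typical check amounts
    rng.integers(365, 3650, 1000),  # established payee accounts
    rng.integers(0, 3, 1000),       # low check velocity
])
suspicious = np.array([[9800.0, 12, 14]])  # large check, brand-new account, high velocity

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # [-1] marks the check as anomalous
```

A design point worth noting: a model like this only triages. The -1 flag queues the item for a human investigator rather than blocking the customer outright, consistent with the human-as-filter framing above.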
If our regulatory environment is not receptive to the use of AI in these circumstances, customers are the ones who suffer. AI will not completely "solve" the problem of fraud, particularly as fraudsters develop more sophisticated ways to exploit this technology. But it could be an important tool if the regulatory framework provides reasonable parameters for its use.
Another often-discussed use case for AI in financial services is in expanding the availability of credit.
AI could be used to further expand this access as financial entities mine more data sets and refine their understanding of creditworthiness. Of course, we also know that using AI in this context, where it has a more direct impact on credit decisions affecting individual customers, presents more substantial legal compliance challenges than other AI use cases.
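As a rough sketch of what "mining more data sets" can look like in practice, the following trains a scoring model on invented alternative-data features (rent payment history, cash flow, overdrafts). Everything here is synthetic, and it deliberately omits the fair-lending, explainability, and adverse-action-notice requirements that make this use case legally demanding, which is exactly the compliance challenge noted above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Invented alternative-data features for thin-file applicants:
# [months_of_on_time_rent, avg_monthly_cashflow_dollars, overdrafts_last_year]
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.integers(0, 48, 500),
    rng.normal(1200, 400, 500),
    rng.integers(0, 6, 500),
])
# Synthetic repayment outcomes loosely tied to the features (demonstration only)
logits = 0.08 * X[:, 0] + 0.002 * X[:, 1] - 0.6 * X[:, 2] - 2.0
y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
print(model.predict_proba([[24, 1500, 0]])[:, 1])  # estimated repayment probability
```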
AI also has promise to improve public sector operations, including in regulatory agencies. As I have often noted, the data relied on to inform the Federal Open Market Committee (FOMC) decision-making process is often subject to revision after the fact, requiring caution when using it to inform monetary policy. Broader use of AI could act as a check on data reliability, particularly for uncertain or frequently revised economic series, validating and improving the data on which monetary policymakers rely.
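One simple way to picture such a reliability check (my illustration with invented numbers, not an actual Fed tool): compare initial releases of a series against their later revised values and flag prints whose revisions run unusually large, so policymakers know to weight them cautiously.

```python
import pandas as pd

# Invented vintages of a monthly indicator: the initial print vs. its later revised value
data = pd.DataFrame({
    "month":   ["2024-01", "2024-02", "2024-03", "2024-04"],
    "initial": [310.0, 280.0, 300.0, 180.0],
    "revised": [250.0, 275.0, 305.0, 170.0],
})
data["revision"] = data["revised"] - data["initial"]

# Flag any print whose revision is large relative to the series' own revision history
threshold = 1.5 * data["revision"].abs().median()
data["unreliable"] = data["revision"].abs() > threshold
print(data)
```

A production version would feed many series and vintages into a richer model, but even this toy screen captures the idea of validating data before it informs policy.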
2. An overly cautious approach to AI regulation might get in the way of the benefits it could yield for organizations.
While these use cases represent only a subset of the possibilities for the financial system, they illustrate the breadth of AI’s potential benefits, as well as the risks of adopting an overly cautious approach that chills innovation in the banking system. Over-regulation of AI can itself present risks by foreclosing benefits such as improved efficiency, lower operational costs, and better fraud prevention and customer service.
3. Regulators should be careful not to hamper competition and push innovators outside of the regulated banking system.
An overly conservative regulatory approach can skew the competitive landscape by pushing activities outside of the regulated banking system or preventing the use of AI altogether. Inertia often causes regulators to reflexively prefer known practices and existing technology over process change and innovation. The banking sector often suffers from this regulatory skepticism, which can ultimately harm the competitiveness of U.S. banks.
We view institutions within the scope of federal banking regulation (banks and their affiliates) as being "in the perimeter," while entities that operate under other regulatory frameworks (including money transmitters licensed under state law) are "outside the perimeter."
We know that the regulatory perimeter is permeable, and there is always the risk that activities pushed outside the perimeter can transmit risk back into the system even as they garner less scrutiny and regulation than banks receive. Put differently, the overly conservative approach may present only a façade of safety, masking underlying risks to the financial system and those who rely on it.
Of course, there are also risks to being overly permissive in the regulatory approach to AI. As with any rapidly evolving technology, supervision of its use should be nimble, and its users must make sufficient risk-management and compliance investments to conduct activities in a safe and sound manner and in accordance with applicable laws and regulations. While the banking system has generally been cautious and deliberate in its AI development and rollout, others have not. When AI is poorly managed and unmonitored, it can produce unintended outcomes and customer harm.
4. Regulators should do more to understand AI’s opportunities.
Many banks have expanded AI adoption to a growing number of use cases. As this technology becomes more widely adopted throughout the financial system, it is critical that we have a coherent and rational policy approach. That starts with our ability to understand the technology, including the algorithms underlying its use and the possible implications, good and bad, for banks and their customers.
In suggesting that we grow our understanding and staff expertise as a baseline, I acknowledge that this has been, and is likely to remain, a challenge. The Federal Reserve and other banking regulators compete for the same limited pool of talent as private industry. But we must prioritize improving our understanding and capacity as this technology continues to become more widely adopted.
We must be open to the adoption of AI and receptive to the use of this technology, recognizing that successful adoption requires communication and transparency between regulated firms and regulators. One approach regulators can use to reframe questions around AI (and innovation generally) is to adopt a posture that I think of as technology agnosticism.
We should avoid fixating on the technology and instead focus on the risks presented by different use cases. These risks may be influenced by a number of factors, including the scope and consequences of the use case, the underlying data relied on, and a firm's capability to appropriately manage those risks. Aggregating activities may be a helpful way to get a sense of broad trends (for example, the speed of AI adoption in the industry), but it is an inefficient way to address regulatory concerns (like safety and soundness, and financial stability). This may seem like an obvious point, but at times regulators have fallen prey to overbroad categorizations, treating a diverse set of activities as uniformly and equally risky.
5. Openness to AI doesn’t always mean adding new regulations.
A posture of openness to AI requires caution when adding to the body of regulation. Specifically, I think we need a gap analysis to determine whether there are regulatory gaps or blind spots that could require additional regulation, and whether the current framework is fit for purpose. Fundamentally, though, the variability of the technology will almost certainly require a degree of flexibility in the regulatory approach.