Building a risk framework for the AI payments era

Dan Frechtling is a veteran risk leader serving as senior vice president of product and strategy at LegitScript, where he drives product innovation, grows merchant intelligence, and strengthens partnerships with platforms, marketplaces, and payment companies.

AI is reshaping every layer of the payments ecosystem: how merchants onboard, how transactions flow, and even how consumers shop. But fraud is accelerating in tandem with this technology. Global e-commerce fraud losses reached $44.3 billion in 2024 and are expected to double by 2029, increasingly driven by fraudsters using their own AI to mimic merchants and customers, steal credentials, and bypass checks.

When the tools that make payments faster also make fraud harder to catch, the industry has to ask itself: how much autonomy should these systems be given?

That question is why 2026 won’t be about plugging automation into old workflows but about making smarter choices about where AI plays a role. The most important step will be recognizing that human oversight is essential to maintaining trust, context, and compliance. If the last few years were about building AI tools, the near-term future requires learning how to supervise them.

The line between AI and judgment

AI excels in tasks that exceed human capacity. It can scan thousands of merchant records in seconds, flag mismatched category codes, detect sudden shifts in sales patterns, and monitor activity around the clock. It’s a powerful first filter for underwriting, Know Your Business (KYB) checks, and early-stage fraud detection, yet those capabilities stop short where contextual interpretation is required.
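
To make that first-pass role concrete, here’s a minimal sketch of the kind of automated screen described above, in Python. The merchant fields, thresholds, and category codes are illustrative assumptions, not any particular vendor’s logic.

```python
import statistics

# Hypothetical first-pass filter: flag merchants whose stated category
# code doesn't match observed activity, or whose sales suddenly spike.
# Fields and thresholds are illustrative assumptions.

def first_pass_flags(merchant: dict) -> list[str]:
    flags = []
    if merchant["stated_mcc"] != merchant["observed_mcc"]:
        flags.append("category code mismatch")
    sales = merchant["daily_sales"]
    mean = statistics.mean(sales[:-1])
    stdev = statistics.stdev(sales[:-1])
    if stdev and abs(sales[-1] - mean) > 3 * stdev:  # sudden shift in volume
        flags.append("anomalous sales shift")
    return flags  # anything flagged here is routed to deeper review

print(first_pass_flags({
    "stated_mcc": "5732",      # electronics store
    "observed_mcc": "7995",    # betting/gambling
    "daily_sales": [210, 195, 240, 225, 4800],
}))
```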

The most consequential vulnerabilities surface as false negatives, when outdated models miss emergent fraud behaviors, and as false positives, when legitimate merchants or customers are incorrectly flagged due to overly rigid rules or model drift.

Consider the case of transaction laundering, where fraudsters can now use innocuous-looking storefronts generated by AI to mask illegal payment processing. Algorithms can spot inconsistent data points, but they can’t always interpret product semantics. Human reviewers who understand ambiguity and nuance will always outperform models when determining intent. This is especially important in high-risk or highly regulated verticals such as firearms, gambling, crypto services, and telehealth.

The same challenge shows up in cross-border underwriting, where a product’s legal status changes by region. AI can detect anomalies, but humans can better interpret context and make reliable judgment calls.

Autonomous shopping agents push the conversation further

The newest twist is the rise of agentic commerce and AI shopping agents that compare prices in real time, fill carts, and place orders on a user’s behalf. Nearly 60% of consumers expect to use AI agents in their shopping journey within two years, which means payment providers now face new compliance questions that did not exist before:

  • How do we “identify” an AI agent the same way we identify a human customer or merchant?

  • If an agent initiates a purchase, how do we verify user intent? 

  • How do we determine if a user has properly authorized an agent’s behavior?


Early ideas like “Know Your Agent” checks are emerging as the next logical extension of Know Your Customer (KYC) and KYB checks. Come 2026, payments teams won’t just be vetting humans, but also the AI agents executing actions for them.
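
What a “Know Your Agent” check could look like in practice is still an open question, but as a sketch, it might bind an agent’s identity to a verified human principal, an explicit scope of authorized actions, and a spending mandate with an expiry. The field names and rules below are hypothetical illustrations, not an emerging standard.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical "Know Your Agent" sketch. Field names, scopes, and
# rules are illustrative assumptions, not an industry standard.

@dataclass
class AgentCredential:
    agent_id: str                # identifier the agent presents at checkout
    operator: str                # vendor that runs the agent
    principal_kyc_id: str        # verified human customer the agent acts for
    authorized_scopes: set[str]  # e.g. {"compare_prices", "place_order"}
    spend_limit: float           # per-transaction cap set by the principal
    expires_at: datetime         # the mandate should not be open-ended

def verify_agent_purchase(cred: AgentCredential, action: str, amount: float) -> bool:
    """Approve an agent-initiated purchase only if the mandate is current,
    the action is in scope, and the amount is within the user's limit."""
    if datetime.now(timezone.utc) >= cred.expires_at:
        return False  # stale mandate: re-confirm intent with the user
    if action not in cred.authorized_scopes:
        return False  # agent is acting outside what the user approved
    if amount > cred.spend_limit:
        return False  # escalate to the principal for step-up approval
    return True

cred = AgentCredential(
    agent_id="agent-123",
    operator="ExampleShopBot Inc.",
    principal_kyc_id="kyc-456",
    authorized_scopes={"compare_prices", "place_order"},
    spend_limit=200.00,
    expires_at=datetime(2027, 1, 1, tzinfo=timezone.utc),
)
print(verify_agent_purchase(cred, "place_order", 149.99))  # True until the mandate expires
```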

Hybrid AI models are the path forward

When evaluating risk, let AI handle the volume and human experts handle the contextual decisions that determine final outcomes:

  1. Automation takes the first pass: It screens merchants, runs continuous monitoring, and filters out the straightforward cases.

  2. Experts review edge cases: Humans focus on ambiguous content, complex merchant models, rapid behavior changes, and anything involving intent.

  3. Risk teams work more strategically: Instead of manually sorting routine signals, analysts spend their time on the problems that threaten compliance or consumer trust.

This approach addresses one of the biggest challenges with AI: most models can’t explain their decisions and often carry the risk of bias. Hybrid programs close that gap because the highest-impact calls still involve humans who can document how and why a decision was made.
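
As a rough illustration of that division of labor, the sketch below routes each case by model risk score: clear-cut cases are decided automatically, the ambiguous middle band goes to a human reviewer, and every final outcome is logged with a rationale. The thresholds, stand-in model, and audit-log shape are all illustrative assumptions.

```python
import random

# Illustrative hybrid triage sketch. The thresholds, stand-in scoring
# model, and audit-log shape are assumptions, not any vendor's system.

AUTO_CLEAR = 0.10   # below this risk score, approve automatically
AUTO_BLOCK = 0.95   # above this risk score, decline automatically

def model_risk_score(case: dict) -> float:
    """Stand-in for an ML risk model; a real one would score features
    like category-code mismatches, velocity shifts, and KYB signals."""
    return random.random()

def human_review(case: dict, score: float) -> tuple[str, str]:
    """Stand-in for an analyst queue: the ambiguous middle band gets a
    contextual human judgment with a documented rationale."""
    return "approve", f"analyst cleared after manual review (score {score:.2f})"

def triage(case: dict, audit_log: list) -> str:
    score = model_risk_score(case)  # AI takes the first pass at volume
    if score < AUTO_CLEAR:
        decision, rationale = "approve", f"score {score:.2f} under auto-clear threshold"
    elif score > AUTO_BLOCK:
        decision, rationale = "decline", f"score {score:.2f} over auto-block threshold"
    else:
        decision, rationale = human_review(case, score)  # humans own the edge cases
    # Every outcome is documented, so the decision stays explainable.
    audit_log.append({"merchant": case["merchant"], "score": round(score, 2),
                      "decision": decision, "rationale": rationale})
    return decision

log: list = []
print(triage({"merchant": "example-storefront"}, log))
```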

Balance speed with judgment

All payments innovation comes with governance obligations. AI delivers speed, and humans make regulatory and ethical decisions. As agentic commerce expands and fraud continues to evolve, that balance is what sustains trust. Explainable, defensible decisions keep merchants, consumers, and regulators aligned. This is how innovation becomes sustainable.