Inside Alloy’s 2026 State of Fraud report


Fraud isn’t happening in sudden waves anymore: it’s settling into a steady, grinding climb that’s affecting product roadmaps and risk budgets.

Alloy’s State of Fraud Report, released this week, shows 67% of institutions seeing more fraud events, with credit unions and regional/community banks absorbing the sharpest increases. AI has become both the accelerant and the unknown: 91% of decision-makers say criminals are using AI more often, especially for synthetic identities, smishing- and vishing-style coercion tactics, and fake or manipulated documents.

The FR editors sat down with Alloy’s VP of Product Marc Weisman to break down what the findings mean for financial institutions.

1. How is AI changing the economics of fraud?

For bad actors, AI makes it faster and cheaper to scale fraud attempts; what used to take a week of manual work is now a few minutes of prompt engineering and script automation. Generative models lower the skill required to produce convincing identities, documents, and outreach at scale. 

Financial institutions are keenly aware of this shift: 91% of FIs reported seeing more financial crimes committed with AI. And many are responding in kind, adopting machine learning models, agentic tools, and additional data sources that help them respond quickly to the new pace of threats. Some 82% of FIs said their investment in AI-driven fraud prevention increased this year.

2. Which AI-enabled fraud tactics are most misdiagnosed by banks right now? What common blind spots are you seeing in detection?

It’s difficult to make a blanket statement here because it depends on the controls each organization has in place to catch things like synthetic identity fraud, which most FIs (89%) ranked as the tactic that concerns them most as it evolves with AI. With the right solutions in place, such as behavioral analytics and AI-driven fraud models, FIs can better identify synthetic identities.

3. How should institutions rethink fraud models when the “attacker” is no longer human? Do traditional rules or scoring models fail under automated attack patterns?

The models and the approach are actually going to be the same. At the end of the day, financial institutions have to start by understanding the identity of the actor: is it a real person, a bad actor, or a bot?

If the attacker is a bot and not a human, it is critical that FIs have actionable AI models in place: ones that don’t just show an attack is happening, but that immediately flag how to remediate it.

That way, the FI can respond quickly enough to prevent mass losses. The alternative today, for FIs that don’t have actionable models in place, is often being forced to shut down channels completely until they can get a handle on the attack. That is a bad outcome for them and their legitimate customers.
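To make the “actionable” distinction concrete, here is a minimal sketch in Python of the difference between a model that merely alerts and one that pairs each classification with a recommended, targeted remediation. The signal names, thresholds, and action labels are hypothetical illustrations, not anything from Alloy’s report or products:

```python
from dataclasses import dataclass

# Hypothetical per-session signals an FI might collect; field names
# and thresholds are illustrative assumptions only.
@dataclass
class SessionSignals:
    requests_per_minute: float  # velocity across the channel
    device_reuse_count: int     # same device fingerprint seen across identities
    identity_verified: bool     # passed document/database identity checks

def triage(s: SessionSignals) -> tuple[str, str]:
    """Return (classification, recommended_action) instead of a bare alert,
    so the response can be surgical rather than a full channel shutdown."""
    if s.requests_per_minute > 60 and s.device_reuse_count > 5:
        return ("likely_bot", "block_device_and_rate_limit_channel")
    if not s.identity_verified:
        return ("unverified_identity", "step_up_verification")
    return ("likely_legitimate", "allow")

# A scripted attack gets a targeted response; legitimate traffic keeps flowing.
print(triage(SessionSignals(requests_per_minute=120,
                            device_reuse_count=9,
                            identity_verified=False)))
# -> ('likely_bot', 'block_device_and_rate_limit_channel')
```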

4. How is AI blurring the line between third-party fraud, first-party fraud, and scams?

It has always been challenging to accurately classify types of fraud, but AI is making it even harder to determine whether the fraud is truly first-party or a bad actor posing as a legitimate customer. AI-driven tools are also making intent much harder to understand and prove.

Liability is going to be a tricky new frontier as well. For example, if a customer uses an AI bot to buy an item when it hits a certain price point, who is responsible if something goes wrong and the bot overspends or makes an error: the customer, the merchant, or the maker of the bot? Those are the types of questions leaders are thinking through today.

5. What would a realistic, AI-forward fraud strategy look like in 2026, and how does that differ from what most banks are actually doing today?

To start, FIs need a strong foundation in place: the flexibility to adapt to the ever-changing nature of fraud risk with new risk models, new data solutions, rules, and policies. That’s table stakes, and it’s not new.


But to be AI-forward in 2026, FIs need to be using predictive AI so that they can not just react faster but also anticipate attacks and identify risk the moment it enters the system. Predictive AI models can detect patterns like subtle inconsistencies in application data, device behavior, writing style, or timing. These flags can then reveal synthetic identities or coerced consumers long before a transaction occurs. That means less fraud, fewer false positives, less friction, and faster approvals for legitimate customers.
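As a rough illustration of that idea, here is a minimal Python sketch of pre-transaction risk scoring: several weak signals from the application itself are combined into one score before any money moves. The feature names, weights, and threshold are assumptions made up for this example; a production system would use a trained model, not hand-set weights:

```python
# Illustrative weights for weak signals observable at application time.
# Nothing here reflects Alloy's actual models or published methodology.
WEIGHTS = {
    "ssn_name_mismatch": 0.40,          # inconsistency in application data
    "device_seen_on_other_ids": 0.30,   # device behavior
    "form_filled_in_under_5s": 0.20,    # timing (scripted, not typed)
    "templated_free_text": 0.10,        # writing style
}

def application_risk(flags: dict[str, bool], threshold: float = 0.5) -> str:
    """Score an application at submission time, before any transaction."""
    score = sum(weight for name, weight in WEIGHTS.items() if flags.get(name))
    return "review" if score >= threshold else "approve"

# A synthetic identity tends to trip several weak signals at once,
# each individually innocuous but collectively over the threshold.
print(application_risk({
    "ssn_name_mismatch": True,
    "device_seen_on_other_ids": True,
    "form_filled_in_under_5s": False,
    "templated_free_text": False,
}))  # -> 'review' (0.40 + 0.30 = 0.70 >= 0.5)
```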

Finally, we know that fraud risks are constantly evolving because bad actors are always trying new tools, methods, and technologies. So it is critical for organizations to have the flexibility to quickly update their models, data sources, and other tools to future-proof against the next wave of threats.