Why deepfakes are scaling faster than defenses
AI-powered fraud is scaling faster than institutions can respond. Only 7% of organizations consider themselves more than moderately prepared to detect and prevent it, a number that highlights a broader pattern in the ACFE and SAS's fourth benchmarking report on anti-fraud technology.
More than half of survey respondents reported that every AI-powered fraud scheme the survey tracked increased over the previous two years. Deepfake social engineering led the surge, flagged as having increased significantly by 44% of respondents, with consumer fraud and scams close behind at 38%. The outlook is worse: respondents expect generative AI document fraud and forgery, deepfake social engineering, and deepfake digital injection to increase significantly over the next two years.
The FR breaks down the key implications of the 2026 Anti-Fraud Technology Benchmarking Report for financial institutions.
Generative AI: promising tool, uneven deployment
Generative AI's footprint in anti-fraud programs is small but poised to grow fast. Only 16% of organizations currently use generative AI in their anti-fraud programs, though 58% plan to in the future. Among those already using it, phishing and scam detection leads at 49%, followed by risk identification and assessment at 46% and report writing at 45%.
The governance picture is where things get uncomfortable. More than 8 in 10 respondents say explainability and auditability are important factors in adopting generative AI, yet only 6% feel completely confident explaining how their AI and machine learning models actually make anti-fraud decisions.
The bias issue compounds it: only 18% of organizations test their AI models for bias or fairness, despite 75% of respondents flagging it as an important adoption factor.
"The biggest barrier to advanced AI adoption isn't interest — it's data readiness and governance," says Stu Bradley, SAS's SVP of risk, fraud and compliance solutions. "Until organizations trust their data and manage the full AI lifecycle, these technologies will stay experimental instead of delivering real value."
The stakes are clear to those tracking the threat closely.
"Understanding industry key benchmarking of anti-fraud measures and solutions is critical at a time when AI attacks are scaling, are contextually intelligent, consistent and targeted," says Mary Ann Miller, VP evangelist and fraud executive advisor at Prove. "Fraud-type classifications are expanding in the nature of the attacks and solutions required to respond."
Other technologies deployed in the fraud fight
AI and machine learning use in anti-fraud analytics grew from 18% of organizations in 2024 to 25% in 2026, with another 28% expecting to adopt it within two years. If that holds, more than half of organizations will be using AI/ML for fraud analytics by 2028.
Physical biometrics is the most widely deployed emerging technology, rising from 34% adoption in 2022 to 45% in 2026. Cloud-native fraud detection platforms remain a niche: only 10% of organizations currently use them. And despite the obvious efficiency case for automation, only 29% of organizations automate routine fraud investigation tasks.
Budget constraints are the most persistent obstacle, cited as a major challenge by 57% of respondents and a consistent finding since the report's first edition.
The 2026 Anti-Fraud Technology Benchmarking Report is based on survey responses from 713 ACFE members, collected in October 2025.