Identifying Document Fraud with AI: How Firms Can Fight Back Against a New Class of Attacks (The Demo Room #14)
- Michael Lawrence


Welcome to The Demo Room – your front-row seat to the future of RegTech, RiskTech, and AI innovation.
In this series, we document our research interviews with the most forward-thinking vendors tackling the industry's biggest challenges. Each post is built around a comprehensive product demo, providing clear insight into how these innovations work in practice.
On this occasion, we met with Martin Rehak, CEO of Resistant AI, to discuss how AI-powered document fraud detection can stop fake, tampered, or AI-generated documents from slipping through onboarding.
Parker & Lawrence’s 2025 Generative AI in Risk and Compliance study shows financial crime is the domain most financially exposed to generative AI (GenAI), both positively and negatively:
- 31% of respondents (n=224, senior risk and compliance professionals) named financial crime as the top domain benefiting from new opportunities or efficiency gains unlocked by GenAI, with 50% placing it in their top three.
- 38% named financial crime as the top domain negatively affected by new or heightened challenges, with 55% placing it in their top three.
When asked which GenAI risks were driving the greatest financial impacts, one area stood out: fraud, scams and targeted manipulation.
- 64% said this was the single biggest driver of financial impact overall, across risk domains.
- Among those who ranked financial crime as their most impacted domain, 68% said it was being driven by GenAI-powered fraud, scams and manipulation.
This aligns with a growing body of external research showing that document fraud has accelerated in both volume and sophistication since the emergence of easy-to-use generative tools.
- In 2024, digital document forgeries overtook physical forgeries for the first time, rising 244% year over year and 1,600% since 2021.
- False identity cases in credit applications rose 60% in 2024 compared to 2023, yet only 23% of firms reported confidence in dealing with AI and deepfake fraud.
- In the first half of 2025, a record 217,000 fraud cases were reported to the UK’s National Fraud Database, with AI being used “to create fake identities, forge documents, and bypass verification systems with alarming accuracy”.
Across studies, the same conclusion emerges: GenAI has collapsed the cost, effort and expertise required to create convincing fake documents, while dramatically increasing attack speed and volume. This has tipped the balance in favour of fraudsters, unless firms upgrade their detection capabilities.
The Problem for Firms
When fraudulent documents slip through, firms are opening the door to financial crime, money laundering, and the upstream criminal activity that generates illicit funds. They’re also exposing customers to downstream harm, from account takeovers to long-tail identity abuse.
To avoid these outcomes, and the regulatory and reputational fallout that follows, firms have little choice but to throw resources at the problem. At a minimum, they must demonstrate they are taking “reasonable steps” to detect and prevent fraud. In practice, that often means large, manual review teams, expensive technology stacks, and reactive processes that struggle to keep pace.
For too many firms, this approach has proven ineffective and costly:
- Synthetic identity fraud alone resulted in $20 billion in losses for U.S. financial institutions in 2020, a figure that can be expected to have risen given the significant growth in identity fraud cases since.
- In 2024, 95% of UK banks and fintechs reported a rise in compliance screening costs, with spending reaching £21,400 per hour to fight financial crime and fraud.
- In Q1 2025 alone, documented financial losses from deepfakes in financial fraud cases reportedly reached $46 million.
These numbers only confirm what was already known. First, that manual review cannot scale in line with the volume of fraud attempts, which may involve efforts to disguise identities behind any of the thousands of legitimate documents from around the world. And second, that scale isn’t the main issue; precision is. Modern document forgeries are already at, or beyond, the limits of human detection. Firms need new tools built for this new class of fraud. Identifying document fraud with AI is emerging as the most effective countermeasure.
A Solution: Identifying Document Fraud with AI
The overlap in AI’s conflicting effects is striking: 80% of Parker & Lawrence Research respondents who ranked financial crime as their biggest area of new AI opportunity also ranked it among their top two areas of new external threats. The scale of the GenAI challenge is motivating firms to fight fire with fire, deploying AI internally to counter rapidly evolving external threats. This is supported by separate findings that 98% of firms have either already integrated, or plan to integrate, AI and machine learning analytical tools into their financial crime screening processes by 2026.
Resistant AI’s document fraud detection flags fake documents in seconds: its AI engine draws on an evolving taxonomy of over 500 indicators, detecting tampered regions, editor fingerprints, metadata drift, and recompression artifacts across both PDFs and photos. Resistant focuses on how a document was constructed rather than what it says, making the approach language-agnostic and privacy-preserving.
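Resistant AI’s actual indicator taxonomy is proprietary, but the general idea of construction-level checks can be illustrated. The sketch below is a minimal, hypothetical example (the fingerprint list, date fields, and helper names are our own assumptions, not Resistant AI’s implementation): it flags an editing tool mentioned in a PDF’s Producer string, and "metadata drift" where a document was modified long after it was supposedly created.

```python
from datetime import datetime

# Illustrative only: two of the hundreds of construction-level signals a
# real engine might check. The tool names below are hypothetical examples
# of editor fingerprints, not Resistant AI's actual indicator set.
KNOWN_EDITOR_FINGERPRINTS = {"photoshop", "ilovepdf", "sejda", "pdfescape"}

def check_editor_fingerprint(producer: str) -> bool:
    """Flag if the PDF Producer string mentions a known editing tool."""
    return any(tool in producer.lower() for tool in KNOWN_EDITOR_FINGERPRINTS)

def check_metadata_drift(created: str, modified: str, max_days: int = 1) -> bool:
    """Flag if the document was modified long after it was created.
    A bank statement 'issued' in 2021 but edited last week is suspicious."""
    fmt = "%Y-%m-%d"
    drift = datetime.strptime(modified, fmt) - datetime.strptime(created, fmt)
    return drift.days > max_days

def score_document(meta: dict) -> list[str]:
    """Return the list of tripped indicators for a document's metadata."""
    flags = []
    if check_editor_fingerprint(meta.get("producer", "")):
        flags.append("editor_fingerprint")
    if check_metadata_drift(meta["created"], meta["modified"]):
        flags.append("metadata_drift")
    return flags
```

Crucially, none of these checks read the document’s content, which is what makes a construction-focused approach language-agnostic and privacy-preserving.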
With this approach, Resistant AI has already become an industry leader, analysing more than 150 million documents for fraud. It excels where document variability is high and attackers iterate quickly: KYB onboarding (business registrations, certificates of incorporation, and ownership documents), lending artifacts (bank statements, pay stubs, financials), insurance claims workflows (police reports, hotel bookings, flight tickets) and marketplace/seller vetting. The approach has also expanded to background-check workflows (employee qualification fraud and payslip manipulation), a risk vector that was niche two years ago but is now material in remote hiring pipelines.
Parker & Lawrence's View
Our research concluded that, broadly, GenAI is experiencing an ROI problem that is not inherent, but driven by the choice of use case and solution design. Resistant AI is named as a technology leader in our report not only for its ability to address a major and still-growing global problem, but for doing so with a clear appreciation of commercial impact:
- >90% fewer manual reviews
- 5x review speed
- <20 seconds per document
- 22x average ROI
That is, Resistant AI is not only helping firms improve compliance, but their commercial performance too.
We also appreciate that Resistant AI’s solution design is built for results and explainability, rather than marketing and perception. The team has made a conscious decision not to use GenAI for core fraud detection. It is used solely for summarising analysis results, never in the analytic core, where its probabilistic nature and susceptibility to hallucination risk undermining high-stakes decisions.
Instead, the stack layers forensic analysis, ensembles of machine learning models, neural networks, computer vision, and behavioral/device analytics to create hundreds of independent tripwires. AI exists within the Resistant AI philosophy of using the right model for the right job.
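The "independent tripwires" pattern described above — many narrow detectors whose verdicts are combined, rather than one monolithic model — can be sketched briefly. The detector names and thresholds below are hypothetical illustrations of the pattern, not Resistant AI's code: each check is deliberately simple and independent, and escalation depends on how many are tripped.

```python
from typing import Callable

# Each tripwire is a narrow, independent check returning True when tripped.
# Names and thresholds are illustrative, not Resistant AI's actual detectors.
Tripwire = Callable[[dict], bool]

TRIPWIRES: dict[str, Tripwire] = {
    "font_mismatch": lambda doc: doc.get("font_count", 1) > 3,
    "recompression": lambda doc: doc.get("jpeg_generations", 1) > 2,
    "template_reuse": lambda doc: doc.get("template_seen_before", False),
}

def evaluate(doc: dict, review_threshold: int = 1, reject_threshold: int = 2) -> str:
    """Combine independent verdicts: one tripped wire sends the document
    to manual review; two or more reject it outright."""
    tripped = [name for name, check in TRIPWIRES.items() if check(doc)]
    if len(tripped) >= reject_threshold:
        return "reject"
    if len(tripped) >= review_threshold:
        return "review"
    return "accept"
```

The appeal of this design is that a forger must evade every tripwire at once, while each individual check stays cheap, explainable, and auditable — unlike a single opaque model whose one verdict carries all the risk.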
IBM estimates that creating a deepfake costs as little as $1.33, while the global cost of deepfake fraud is projected to reach $1 trillion. With document fraud a major contributor to these figures, Resistant AI is set to play a major part in closing the gap between the cost of attack and the cost of defence.
Get Involved
Are you ready to become a thought leader? Reach out to discuss our ongoing research initiatives, how they impact your firm and where we can work together to position you at the forefront of your industry.

