Generative AI in risk and compliance 2025
Why risk management is the key to unlocking AI value
This report examines the role of risk and compliance in the AI revolution, supported by a comprehensive program of primary research:
- A survey of 224 senior risk and compliance professionals across the UK and US.
- Evaluations of 620 applications of GenAI across risk and compliance.
- Interview insights from eight named industry experts.
We identify an ROI paradox that explains why GenAI has so far fallen short of expectations, and we explain why risk management is the path to AI value, and to sustainable AI development too.
The report was produced by Parker & Lawrence Research, in collaboration with Executive IT Forums, with significant contributions from a number of leading companies and experts:
Research Contributors
Industry Experts
Emma Parry
Independent advisor and board member
Executive Summary
Risk and compliance have taken center stage in the AI revolution: firms must increase their GenAI risk exposure to overcome a lack of ROI, while simultaneously improving risk management to ensure that returns are sustainable.
There are many ways to frame the present AI moment: a societal and economic revolution, an evolutionary leap, or an existential threat. Groups are acting from each of these paradigms, but the prevailing, defining paradigm is that of an AI arms race.
The AI Futures Project’s “AI 2027” scenario envisions the emergence of superintelligence by 2027, not through deliberate coordination, but through the relentless acceleration of this arms race dynamic. While much of their analysis forecasts future events, the AI arms race is already upon us.
In government and industry, there is an urgency to act, born of a fear of being left behind, a fear which, in the context of superhuman, autonomous intelligence, could easily mean being left behind for good. This is dangerous, as incentives exist to overlook safety in the hope of gaining a competitive advantage.
“The AI race has become a prisoner’s dilemma: no one can step back without fear of being left behind, locking everyone into a path toward dangerous escalation.” – Su Cizem, AI Policy Fellow, Institute for AI Policy and Strategy
Global Call for AI Red Lines
We support the Global Call for AI Red Lines, organized by the Center for Human-Compatible AI, The Future Society and the French Center for AI Safety. Su Cizem contributed to the project from its inception through launch, during her time with both of the latter two organizations.
Our subsequent observations on firms’ struggles to achieve ROI with generative AI do not primarily reflect limitations in the underlying technology provided by model developers, but rather the approaches and use cases chosen by deployers.
We are not suggesting that developers such as OpenAI, Anthropic, Google, or Meta should address this challenge by taking on greater risk or further scaling capabilities. On the contrary, we believe that the technology, in its current form, already has the potential to be transformative across a range of business applications, including risk and compliance, as we explore later in this paper.
Our recommendations are directed toward deployers, who must balance commercial pressures to justify their investments with their responsibilities to manage risk.

While optimists will insist that guardrails can be developed and implemented alongside innovation, the asymmetry between innovation’s inherent dynamism and regulation’s practical rigidity raises concerns. In other words, effective regulation is difficult at the best of times, and borders on impossible with such a rapidly moving target.
“AI is moving at a pace that outstrips formal policymaking. Most companies are attentive to the risks, but governance mechanisms need to keep evolving in parallel with model releases.” – Su Cizem, AI Policy Fellow, Institute for AI Policy and Strategy
Yet, there is hope. The truth is that a safety incident helps nobody (unless it is adversarial) and hurts everyone involved: the institution at fault, the impacted customers, third and fourth parties, and industry regulators too.
Our research adds further grounds for optimism. We track thousands of technology companies building products to enhance risk and compliance. While some build solutions to mitigate AI risks directly (through AI governance, model risk management, or red teaming), many others mitigate downstream AI risks, ranging from deepfakes in financial crime and fraud to operational resilience challenges arising from rapid enterprise adoption. Many of these same companies are using AI to solve these problems, demonstrating that AI innovation can contribute to its own safety.
Crucially, the best of these vendors are aware of the broader commercial incentives. They’re able to deliver products that enhance risk and compliance outcomes while increasing operational efficiency and robustness, improving the bottom line.
Further, technology leaders in this space envision a world in which risk and compliance becomes an important strategic engine, turning the considerable visibility, analytics, recommendations, and forecasting their products generate into strategic insight.
The research points to a clear but difficult conclusion. The backdrop of an AI arms race would appear to require an equal and opposite reaction in the form of responsibility and safety, and firms’ uncertainties around managing AI risks should limit their adoption of high-risk use cases. Yet the failure of the prevailing low-risk AI use cases to deliver ROI simultaneously motivates a greater level of risk-taking.
Although these imperatives appear to be in conflict, they lead to the same conclusion: elite risk management is the only sustainable path forward. Beyond the direct benefit of limiting unnecessary, unjustifiable risks, this approach also enables firms to take more of the right risks. By supporting the visibility, understanding (including quantification), and mitigation of risk, firms can confidently operate closer to their risk appetite, strategically trading risk for reward.
In short, risk management is not the blocker, but the path to ROI on AI, and to sustainable AI development too. AI safety is AI progress, not because of what it restricts, but because of what it allows. It is now time for the industry to recognize risk and compliance as the strategic engine that it can be.
Research approach
This report examines the role of risk and compliance in the AI revolution, supported by a comprehensive program of primary research:
A survey of 224 senior risk and compliance professionals across the UK and US:
- 98.2% report that generative AI (GenAI) is presenting new or increased challenges, with financial crime, compliance management, and cybersecurity the most financially impacted areas.
- Fewer than half of risk and compliance professionals are “very confident” in their organization’s ability to control GenAI risks.
- 96.9% report the use (current or planned) of multiple GenAI capabilities in risk and compliance use cases, with data generation & structuring, summarization & comparison, and information retrieval the most common.
- Respondents face an average of four barriers to adoption. The most common is data quality and availability (45.3%), followed closely by a cluster of cost-related concerns, which appeared in three of the next five results.
Analyst evaluations of 620 applications of GenAI across risk and compliance, spanning 10 GenAI capabilities and 10 GenAI risks, mapped against 31 risk and compliance categories from 8 domains:
- From the 310 capability mappings, 47 transformative use cases were found, mostly involving GenAI's advanced reasoning and judgment capabilities.
- From the 310 risk mappings, 22 transformative challenges were found, mostly due to GenAI's fraud and cybersecurity risks.
- Seven technology leaders were identified for their ability to enhance risk and compliance within their category.
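For clarity, the 620 evaluations follow directly from this mapping structure: 10 GenAI capabilities × 31 categories yields the 310 capability mappings, and 10 GenAI risks × 31 categories yields the 310 risk mappings, for 620 evaluations in total.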
Interview insights from eight named industry experts:
- Confirmed firms’ challenges with data, including granular details on metadata completeness and quality, and the critical challenge of combining structured and unstructured data.
- Learned more about the ongoing struggles to achieve ROI on AI, including the cautious, low-risk approach to deployment and the lack of yield from these use cases.

