The State of AI Safety in 2025: Risk vs Reality According to Enterprise Leaders
- Michael Lawrence
For all the talk of AI safety in 2025, the most significant threats facing businesses are often not the ones dominating the headlines – and the disconnect between perception and reality could prove costly.
Our latest research explores the real-world risks facing businesses in 2025, from complex compliance challenges to the hidden pitfalls of scaling AI.
The Expanding AI Risk Landscape
AI risks are not just technical but span a wide range of ethical, operational, and societal dimensions. MIT’s AI Risk Taxonomy categorises these risks as follows:
Discrimination and Toxicity – Includes issues like unfair bias, exposure to toxic content, and unequal performance across demographic groups.
Privacy and Security – Encompasses data breaches, unauthorised access, and the compromise of confidential information.
Misinformation – Covers the spread of false or misleading information, pollution of the information ecosystem, and large-scale disinformation.
Malicious Actors and Misuse – Involves threats like cyberattacks, fraud, mass harm, and the malicious manipulation of AI systems.
Human-Computer Interaction – Considers risks like overreliance, loss of human agency, and unintended consequences from human-AI collaboration.
Socioeconomic and Environmental Harms – Reflects concerns about power centralisation, increased inequality, and environmental impacts.
AI System Safety, Failures, and Limitations – Addresses the risk of AI systems acting unpredictably or beyond their intended scope, including transparency and interpretability challenges.
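One practical step that follows from a taxonomy like this is keeping an internal incident register tagged against its domains, so that failures which don't fit the usual labels (a theme we return to later in this article) are still captured somewhere. The sketch below is a minimal illustration in Python, assuming a simple tagging scheme; the AIRiskDomain and AIIncident names are hypothetical and not part of any MIT or vendor tooling.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


# Hypothetical labels mirroring the MIT AI Risk Taxonomy domains listed above;
# the names here are our own, not part of any official tooling.
class AIRiskDomain(Enum):
    DISCRIMINATION_AND_TOXICITY = "Discrimination and Toxicity"
    PRIVACY_AND_SECURITY = "Privacy and Security"
    MISINFORMATION = "Misinformation"
    MALICIOUS_ACTORS_AND_MISUSE = "Malicious Actors and Misuse"
    HUMAN_COMPUTER_INTERACTION = "Human-Computer Interaction"
    SOCIOECONOMIC_AND_ENVIRONMENTAL = "Socioeconomic and Environmental Harms"
    SYSTEM_SAFETY_AND_LIMITATIONS = "AI System Safety, Failures, and Limitations"
    NONE_OF_THE_ABOVE = "None of the Above"  # residual bucket for incidents that fit no domain


@dataclass
class AIIncident:
    """One logged AI-related incident, tagged against a taxonomy domain."""
    description: str
    domain: AIRiskDomain
    reported_on: date


# Example usage: log an incident and count incidents per domain.
incidents = [
    AIIncident(
        "Support chatbot confidently cited a refund policy that does not exist",
        AIRiskDomain.SYSTEM_SAFETY_AND_LIMITATIONS,
        date(2025, 1, 14),
    ),
]

counts: dict[AIRiskDomain, int] = {}
for incident in incidents:
    counts[incident.domain] = counts.get(incident.domain, 0) + 1

for domain, n in counts.items():
    print(f"{domain.value}: {n}")
```

A plain enum keeps the taxonomy version-controlled alongside the code that logs incidents; organisations with established governance, risk, and compliance platforms would more likely map these domains onto an existing risk register instead.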
Business leaders are increasingly aware of these challenges. A recent McKinsey survey of more than 1,300 executives compared the AI risks leaders perceive as relevant with those that have actually manifested as negative incidents in their organisations:
[Chart: perceived relevance of AI risks vs. reported negative incidents]
Understanding the Gaps Between Perceived and Actual AI Risks
The data highlights several key narratives shaping the current landscape of AI risk perception versus actual incidents:
Increasing Perception of Certain Risks
Over the past year, the perceived relevance of several high-profile risks has surged. Inaccuracy saw a notable rise, from 56% in 2023 to 63% in 2024, reflecting high-profile cases where AI models have generated confidently incorrect outputs – a critical issue as more businesses integrate generative AI into customer-facing roles. Intellectual property infringement also climbed sharply, likely influenced by ongoing litigation over the use of copyrighted training data and the broader conversation around digital rights management in AI contexts. Concern about explainability grew as well, as organisations increasingly recognise that opaque AI models can undermine trust, create legal liabilities, and complicate regulatory compliance.
Stabilising or Declining Perception for Other Risks
Interestingly, some categories have seen their perceived relevance stabilise or decline. Regulatory compliance dropped slightly, from 45% to 42%, which might seem counterintuitive given the rapid growth in AI regulation. However, this could indicate that some firms are underestimating the complexity of the challenge, having not yet fully contended with the operational realities of scaling regulated AI systems. Our full report details the extensive, often overlapping requirements across jurisdictions and frameworks, from the EU AI Act to NIST SP 800-53, that can pose substantial compliance risks as firms transition from experimentation to production-scale deployment. Similarly, the perceived relevance of workforce displacement has dropped as speculative, big-picture concerns about AI-induced unemployment give way to the more immediate, tactical challenges of integrating AI into existing workflows – a more pressing issue for companies currently competing to deploy these systems at scale.
Actual Negative Incidents – Key Pain Points
Despite these perception shifts, the actual incidents reported paint a different picture. Inaccuracy remains the leading cause of negative consequences, affecting 23% of respondents. This may seem surprising given the rapid progress in model capabilities, but it underscores a critical reality: even the most advanced models can struggle with domain-specific accuracy and reliability under real-world business pressures. Intellectual property infringement (16%) and cybersecurity (12%) follow closely, reflecting the complex legal and technical challenges of integrating AI at scale – from data provenance and copyright claims to vulnerabilities in model architecture.
The Gap Between Perception and Reality
The chart reveals a striking gap between perceived and experienced risks, with the majority of categories showing much lower actual incident rates than their perceived threat levels. This might seem reassuring at first glance, but it also points to a potentially dangerous complacency. Risk is a forward-looking measure – just because a problem hasn't occurred at scale yet doesn't mean it won't in the near future.
The Elephant in the Room – 'None of the Above'
Perhaps the most intriguing data point is the 40% of respondents who reported negative consequences from a risk outside the named categories – the survey's "none of the above" option. This could indicate that many businesses simply lack the language or framework to accurately categorise their AI-related failures. Alternatively, it could reflect a more systemic, cross-cutting risk that doesn't fit neatly into predefined categories – a worrying possibility for organisations attempting to future-proof their AI strategies.
How Leaders are Approaching AI Safety in 2025
As AI adoption accelerates, companies that prioritise safety, compliance, and robust governance are emerging as leaders in this critical period.

Investing in structured AI risk management today will not only prevent costly incidents tomorrow but also build long-term trust and competitive advantage.
Ready to take control of your AI strategy? Check out A Business Leader's Guide to the AI Revolution for industry-leading insights into AI safety, governance, and risk management.