
Rethinking Vulnerability: Why Self-Disclosure is Failing Regulated Firms (Demo Room #9)

  • Writer: Nathan Parker
  • Jun 18
  • 5 min read

Welcome to The Demo Room – your front-row seat to the future of RegTech, RiskTech, and AI innovation. 

In this series, we document our research interviews with the most forward-thinking vendors tackling the industry's biggest challenges. Each post is built around a comprehensive product demo, giving clear insight into how these innovations address those challenges in practice.


On this occasion, we spoke to David Basson and Ralph Tucker, Co-founders of empath_AI.

-

Each year, one in four people in the UK will experience a mental health problem, and the pandemic has accelerated a persistent rise in anxiety, depression, and cognitive decline. In the final quarter of 2024, 22.6% of adults reported high levels of anxiety, still above pre-pandemic averages. At the same time, nearly one million people in the UK are now living with dementia, according to Alzheimer’s UK, with that number expected to rise sharply as the population continues to age.


These conditions often remain hidden, with up to 90% of mild cognitive impairment cases going undiagnosed in primary care. As mental health overtakes musculoskeletal conditions as the leading cause of disability, the implications for regulated firms are clear: vulnerability is more prevalent, more complex, and less likely to be disclosed than ever before.


The Problem for Firms

Regulators increasingly view customer vulnerability as a firm-wide responsibility, not just a frontline issue. That means systems, controls, and oversight must be designed to anticipate and mitigate harm before it occurs.

The FCA defines a vulnerable customer as someone who, due to their circumstances, is especially susceptible to harm, particularly when a firm fails to act with appropriate levels of care. 


Vulnerability is not a fixed category; it can be temporary, sporadic, or permanent, and includes factors such as mental health, cognitive decline, life events (like bereavement), financial difficulty, and low resilience to digital or financial complexity.

Why this regulatory focus? Because the evidence shows that vulnerable customers face worse outcomes across the board: they’re more likely to experience service exclusion, financial harm, mis-selling, and abuse, particularly from scams and fraud. 


In response, firms are expected to anticipate harm, not just react to it. But this expectation clashes with the stubborn operational reality that most vulnerabilities go undetected. 


While firms are investing heavily in staff training, data capture, and process redesign, these efforts rely on the fragile assumption that vulnerable customers will self-disclose. Yet, for many living with mental health challenges or any form of cognitive impairment, that assumption breaks down.


People experiencing conditions like anxiety, depression, or early cognitive impairment sometimes cannot, or will not, put their hand up and say “I need support”. As a result, these individuals are invisible to incumbent processes reliant on customer declaration.


This leaves firms unable to activate the internal systems and controls needed to properly support vulnerable customers, let alone guarantee the “good outcomes” that regulators now require. This gap creates a growing compliance and conduct risk. When vulnerabilities go unnoticed, it can render services inaccessible, leading to poor customer experiences and difficult interactions. In some cases, customers risk serious harm such as mis-selling, fraud, and scams.


It also places a significant operational burden on frontline staff, who are expected to detect complex psychological or cognitive issues in under five minutes, without clinical training and while under pressure to meet call time targets.


The scale of the issue is hard to ignore:


  • Mental health is now the leading cause of disability in the UK, overtaking physical conditions.

  • Up to 90% of mild cognitive impairment cases go undiagnosed in primary care, where clinicians have an hour, not a few minutes, to assess a patient.

  • Contact centres remain the frontline for many regulated services, yet they are not equipped to make these kinds of assessments.


The result is a widening gap between regulatory expectations and operational realities, a gap that leaves vulnerable customers unsupported and firms increasingly exposed.


A Solution: Passive, Proven, and Ready-to-Deploy


Operationally: What empath_AI does and where it fits

empath_AI addresses the core bottleneck in supporting vulnerable customers: early, accurate identification. It sits seamlessly within existing customer contact workflows, whether embedded in Interactive Voice Response systems, agent-assisted calls, or digital wellness journeys. With just 40 seconds of natural speech, it analyses vocal characteristics to assess for signs of anxiety, depression, mild cognitive impairment, and early Alzheimer’s, all without needing scripted questions, manual triage, or customer disclosure.


Rather than replacing agents or workflows, empath_AI augments them. In a contact centre, for example, empath_AI runs silently in the background and delivers real-time prompts to agents on how to better handle the call, such as slowing down, offering reassurance, or rerouting to a more suitable journey. In digital channels, it can trigger supportive journeys tailored to the customer’s emotional or cognitive state.
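To make that concrete, here is a minimal sketch of how a contact-centre integration could map background risk scores to real-time agent prompts. Every name, field, and threshold below is hypothetical; empath_AI's actual interface was not shown in the demo and is not reproduced here.

```python
# Hypothetical sketch only: illustrates the shape of an agent-prompting layer,
# not empath_AI's real API. All names and thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class AssessmentResult:
    """Risk scores (0.0-1.0) produced by a background voice assessment."""
    anxiety_risk: float
    cognitive_decline_risk: float


def agent_prompts(result: AssessmentResult, threshold: float = 0.7) -> list[str]:
    """Translate risk scores into simple, non-diagnostic guidance for the agent."""
    prompts = []
    if result.anxiety_risk >= threshold:
        prompts.append("Slow the pace of the call and offer reassurance.")
    if result.cognitive_decline_risk >= threshold:
        prompts.append("Summarise key points and check understanding before proceeding.")
    return prompts


# Example: a high anxiety score surfaces a pacing prompt on the agent's screen.
print(agent_prompts(AssessmentResult(anxiety_risk=0.82, cognitive_decline_risk=0.35)))
```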


The Technicals: How it works and why it’s different

empath_AI is built on a category of machine learning known as Emotion AI, distinct from sentiment analysis or voice biometrics. It doesn’t analyse what is said, but how it’s said, looking for patterns in rhythm, pitch, clarity, and flow that serve as vocal biomarkers of mental and cognitive health.
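As a rough illustration of the kind of vocal signals involved, the sketch below extracts a few simple prosodic features (pitch level, pitch variability, pause ratio) from a speech clip using the open-source librosa library. It is purely indicative: empath_AI's clinically validated engine, its features, and its models are proprietary and not reproduced here.

```python
# Illustrative only: simple prosodic features of the sort Emotion AI looks at.
# This is NOT empath_AI's engine; librosa and all choices here are assumptions.
import librosa
import numpy as np


def prosodic_features(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000)  # load ~40 seconds of natural speech

    # Pitch contour (fundamental frequency) estimated with probabilistic YIN
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )

    # Rhythm and flow: share of the clip spent in pauses rather than speech
    speech_intervals = librosa.effects.split(y, top_db=30)
    speech_samples = int(sum(end - start for start, end in speech_intervals))
    pause_ratio = 1.0 - speech_samples / len(y)

    return {
        "pitch_mean_hz": float(np.nanmean(f0)),
        "pitch_variability_hz": float(np.nanstd(f0)),
        "pause_ratio": float(pause_ratio),
    }


# In a real system, features like these would feed a trained, validated model;
# no such model is implied or provided here.
```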


Its engine is clinically validated and forms part of Microsoft’s medical version of Copilot, where it supports healthcare professionals in the early detection of mental health conditions. In trials, it has demonstrated the ability to detect cognitive decline up to two years earlier than clinical assessments. Unlike diagnostic tools, empath_AI is not making a medical judgment or creating labels; it’s a decision support tool that helps surface hidden risk and support better outcomes.


The Result: Four high-impact use cases

empath_AI enables several core applications across regulated and risk-sensitive industries, but four stood out to us as immediately tangible:


  • Vulnerability detection: Identifying customers who may be struggling but haven’t disclosed, surfacing that insight where it can make a difference.

  • Fraud prevention: Spotting individuals at higher risk of scam exploitation or cognitive manipulation, particularly in high-value transactions.

  • Wellness & claims triage: Helping insurers and employers prioritise mental health claims more accurately and manage return-to-work journeys with greater precision.

  • Employee assistance & monitoring: Powering daily check-ins and longitudinal wellbeing data to support staff and reduce long-term absence.


In each case, the same core assessment engine powers a different outcome, unlocking new levels of visibility, empathy, and control where traditional approaches fall short.


Parker & Lawrence’s view

The current regulatory push around customer vulnerability marks a shift from passive fairness to active responsibility. But while expectations have changed, capabilities haven’t kept pace. Most firms still rely on frontline staff to identify vulnerability based on what customers say, a method that is unreliable at best and dangerous at worst.


empath_AI offers a fundamentally different approach. It removes guesswork by passively detecting emotional and cognitive signals in natural speech, enabling early, accurate, and scalable vulnerability identification. It enhances human judgement, providing timely insights that help firms act with care before harm occurs.


This is an innovative opportunity to future-proof customer interactions, reduce the burden on overstretched frontline teams, and unlock new value in fraud prevention, wellness, and claims triage. Crucially, it empowers firms to live up to the spirit of the regulations, not just the letter.


Emotion AI has already proven its value in other major markets. In Asia, Ping An has deployed AI voice triage to transform underwriting; in the US, Cigna has integrated emotion AI into virtual mental health support; and in Israel, Vocalis Health powers voice-based wellness monitoring for employers. Firms such as SOMPO and Takeda are showing measurable impact, with improved early detection, reduced care costs, and clinical-grade accuracy in mental health screening. 


With rising mental health claims, an ageing population, and expanding regulatory pressure, the UK is poised to follow suit. The question is how quickly forward-thinking firms will move to adopt it.


Get involved

Are you ready to become a thought leader? Reach out to discuss our ongoing research initiatives, how they impact your firm and where we can work together to position you at the forefront of your industry.


