Consent: The next trust challenge for AI in CX
Do your customers understand how much data you capture and how, where, and why it is used? As AI changes how customer data is collected, Advisory Board member Ashlea Atigolo explains what it means for consent and trust
In my previous column, Empathetic AI: How emotional intelligence is reshaping CX in 2025, I explored how empathetic AI is changing customer experience. The focus was on emotional intelligence and how systems can recognize frustration, adapt tone, and respond in ways that feel more human. At its best, empathetic AI improves outcomes by making customers feel understood rather than processed.
As artificial intelligence (AI) becomes more deeply embedded across CX journeys, however, a quieter and less examined question is emerging. If empathetic AI is about how systems respond to customers, the next challenge is about whether they should, and under what conditions.
That challenge is AI consent.
It is not a legal footnote. It is becoming one of the most consequential design decisions in modern customer experience.
Why AI consent now sits at the center of CX
Customer experience has always been built on an implicit trust contract. Customers share information because they believe it will be used proportionately, respectfully, and for a clear purpose. CX teams have long understood this in human terms. What changes with AI is not intent, but scale and consequence.
For CX leaders responsible for end-to-end journeys across digital, service, loyalty, and brand, AI consent is not confined to one tool or channel. It cuts across chatbots, personalization engines, journey orchestration platforms, agent assist tools, voice analytics, vulnerability detection, and customer data platforms. Every AI-mediated interaction introduces decisions about how customer data is captured, inferred, reused, and retained.
In that context, consent becomes a CX governance issue, not simply a compliance or technology concern. It shapes trust, brand perception, and the organization’s ability to deliver consistent experiences at scale.
A recent FT Adviser article by my colleague Elemi Atigolo, The hidden AI reality behind “Can I record this meeting?”, captures this shift through a deceptively simple moment familiar to many professionals. His work, grounded in the development of agentic AI systems in highly regulated financial environments, highlights that recording itself is no longer a meaningful event. What matters is the chain of processing that follows.
This insight extends well beyond meetings. It applies directly to modern CX operations.
When data capture becomes a signal of a broader consent challenge
Across contact centers, chat interfaces, messaging platforms, video calls, and in-person appointments, customer interactions are increasingly captured and processed by AI-enabled tools. The rationale is clear. AI promises efficiency, consistency, insight, and scale. In many cases, it delivers real value.
What is less visible is how profoundly this changes the customer data lifecycle.
Historically, data capture implied storage. A recorded call, for example, might be reviewed for quality assurance or compliance, then retained or deleted according to defined policies.
Today, in many CX architectures, captured data is simply the starting point of a much broader AI pipeline. Audio is converted to text through automated speech recognition.
Text inputs from calls, chat interfaces, contact forms, and messages are fragmented, enriched, indexed, summarized, analyzed for sentiment or intent, and combined with other behavioral and contextual signals. Outputs are generated for agents, surfaced in dashboards, fed into analytics, or shared across platforms and vendors.
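To make that fan-out concrete, here is a minimal sketch of what a single recorded call can trigger. The function and field names are illustrative stubs, not any specific vendor's API; the point is that the one artifact the customer knows about becomes several derived artifacts they may not.

```python
# Illustrative sketch only: stub functions stand in for real ASR,
# summarization, sentiment, and embedding services.
from dataclasses import dataclass, field

def transcribe(audio: bytes) -> str:             # stand-in for ASR
    return "customer mentions a missed payment"

def summarize(text: str) -> str:                 # stand-in for an LLM summary
    return text[:40]

def score_sentiment(text: str) -> float:         # stand-in for a sentiment model
    return -0.6 if "missed" in text else 0.2

def embed(text: str) -> list[float]:             # stand-in for a vector-index entry
    return [float(len(word)) for word in text.split()]

@dataclass
class Interaction:
    customer_id: str
    audio: bytes                                  # what the customer agreed to
    derived: dict = field(default_factory=dict)   # what they may not know about

def run_pipeline(ix: Interaction) -> Interaction:
    transcript = transcribe(ix.audio)
    ix.derived["transcript"] = transcript         # retained beyond the call
    ix.derived["summary"] = summarize(transcript)
    ix.derived["sentiment"] = score_sentiment(transcript)
    ix.derived["embedding"] = embed(transcript)   # searchable long after the audio is gone
    return ix
```

Each derived entry can be stored, indexed, and shared on its own schedule, which is why deleting the original recording no longer deletes the interaction.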
From a technical perspective, this evolution is logical. From a customer perspective, it often creates a gap between expectation and reality.
Customers believe they agreed to the organization capturing their data, whether through audio, a form submission, or a message in a chat interface. In practice, they may be consenting to far more: inference, enrichment, reuse, extended retention, and in some cases model improvement. This expansion of use is rarely made explicit within the experience itself.
This is where the processing of customer data with AI becomes relevant. The issue is not AI in isolation: AI-driven processing is simply the most visible way that data use expands beyond what customers reasonably expect across the wider CX estate.
Why this matters specifically for CX leaders
CX functions sit at the most sensitive point of the organization’s data landscape. Customers disclose personal information, financial stress, health issues, family circumstances, identity details, and emotional context in service interactions that they would never share elsewhere.
AI systems do not interpret sensitivity in human terms. They operate on retrieval and prediction.
Once customer disclosures are converted into machine-readable assets, they can move across systems, teams, and vendors unless deliberately constrained.
This creates a distinct challenge for CX leaders. An interaction may feel empathetic in the moment, but trust is shaped by what happens after the conversation ends. When customers later discover how their data has been processed, combined, or retained, the experience can be reinterpreted in ways the organization did not anticipate.
There is also a practical operational consequence. As interaction data fragments across form inputs, transcripts, chat messages, summaries, embeddings, analytics layers, and third-party systems, organizations can struggle to honor access, correction, and deletion requests with confidence. Consent that cannot be operationalized across the full AI lifecycle is consent in name only.
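One way to keep consent operational, assuming the organization can intercept artifact creation at all, is to register every derived artifact against the customer at the moment it is produced, so a deletion request can at least enumerate what has to be purged. A minimal sketch, with hypothetical names:

```python
# Hypothetical artifact registry; not a reference to any real product.
from collections import defaultdict

class ArtifactRegistry:
    def __init__(self) -> None:
        self._by_customer: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def record(self, customer_id: str, system: str, artifact_id: str) -> None:
        # Called wherever a transcript, summary, or embedding is created.
        self._by_customer[customer_id].append((system, artifact_id))

    def delete_all(self, customer_id: str) -> list[tuple[str, str]]:
        # Returns every (system, artifact) pair that must be purged downstream.
        return self._by_customer.pop(customer_id, [])

registry = ArtifactRegistry()
registry.record("cust-42", "transcripts", "t-001")
registry.record("cust-42", "vector-index", "emb-001")
registry.record("cust-42", "analytics", "summary-007")
print(registry.delete_all("cust-42"))   # the purge list, not the purge itself
```

The registry deletes nothing by itself; every downstream system and vendor still has to honor the purge, which is exactly the operational standard discussed below.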
Lawsuits and class actions: How customers are taking back control
Recent legal cases underline why this issue is moving quickly from theoretical to practical.
In the United States, several lawsuits against AI transcription and meeting-assistance providers allege that conversations were recorded and processed without appropriate consent, including claims relating to model training and the capture of biometric voice information.
In healthcare, a proposed class action against Sharp HealthCare alleges that ambient AI was used to record clinical consultations without explicit patient permission, with data flowing to third-party vendors and deletion controls proving difficult to enforce.
These cases are not about whether AI should exist. They are about whether individuals reasonably understood how their data would be used.
For CX leaders, the relevance is clear. Transcription happens to be where this mismatch between expectation and reality is most visible today, but the same issue applies to chatbots retaining and further processing conversational history, journey analytics inferring vulnerability, agent assist tools resurfacing past interactions, and personalization systems combining behavioral signals across channels.
Courts and regulators are increasingly testing whether consent reflects reality, not whether a disclaimer existed.
This is not a rejection of AI-led CX
Having designed and deployed conversational AI and agentic AI systems in regulated industries, I am not suggesting any of this diminishes the opportunity AI brings to customer experience. When designed responsibly, AI can enable faster resolution, better accessibility, more consistent service, and more relevant personalization.
The point is that AI introduces a new requirement: consent-aware design.
Consent can no longer be treated as a static statement at the edge of the journey. It must shape how data is processed, where it flows, how long it is retained, and whether it can be reused. In practice, this requires closer alignment between CX, technology, legal, and data governance teams, and a shared understanding of what AI systems are actually doing behind the scenes.
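In code, consent-aware design can mean gating each processing purpose on an explicit, revocable scope rather than a single blanket opt-in. The scope names below are illustrative assumptions, not a standard taxonomy:

```python
# Sketch of purpose-level consent; scope names are invented for illustration.
from enum import Enum, auto

class Scope(Enum):
    TRANSCRIBE = auto()
    SUMMARIZE = auto()
    SENTIMENT = auto()
    MODEL_TRAINING = auto()

class ConsentRecord:
    def __init__(self, granted: set[Scope]) -> None:
        self.granted = set(granted)

    def allows(self, scope: Scope) -> bool:
        return scope in self.granted

    def withdraw(self, scope: Scope) -> None:
        # Withdrawal must propagate to every system acting on this scope.
        self.granted.discard(scope)

consent = ConsentRecord({Scope.TRANSCRIBE, Scope.SUMMARIZE})
assert not consent.allows(Scope.MODEL_TRAINING)  # off unless explicitly granted
consent.withdraw(Scope.SUMMARIZE)                # revocable without ending service
```

A structure like this also gives legal and governance teams something concrete to audit: which purposes exist, which were granted, and when each was withdrawn.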
Three priorities for CX leaders in 2026
First, map the full AI data journey. Document what happens from the moment customer data is captured to every system it touches, including analytics tools, AI models, subprocessors, and vendors. Without this visibility, meaningful consent is impossible; a sketch of what such a map can look like follows these priorities.
Second, update consent language to reflect AI reality. Customers should be told clearly whether AI is transcribing, summarizing, analyzing sentiment, or learning over time, and what choices they have. Consent should be specific enough to be understood and flexible enough to be withdrawn without degrading the experience.
Third, hold vendors to consent-operational standards. CX leaders should ask whether customer data is excluded from model training by default, how deletion requests propagate across systems, where data is processed geographically, and what evidence can be provided to demonstrate compliance. These are CX risk questions, not technical curiosities.
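To make the first of these priorities concrete, the data journey can be kept as a machine-readable map rather than a slide. Every system name, purpose, and retention period below is a hypothetical example, not a prescribed schema:

```python
# Hypothetical data-journey map for one capture point; all values are invented.
AI_DATA_JOURNEY = {
    "capture_point": "contact_center_call",
    "systems": [
        {"name": "asr_service", "purpose": "transcription", "retention_days": 30},
        {"name": "sentiment_engine", "purpose": "QA analytics", "retention_days": 90},
        {"name": "vector_store", "purpose": "agent assist retrieval", "retention_days": 365},
        {"name": "vendor_x", "purpose": "summarization", "retention_days": 30,
         "subprocessor": True, "excluded_from_training": True},  # verify, don't assume
    ],
}

# A map like this makes the vendor questions above answerable with evidence,
# not assurances: which systems hold derived data, for how long, and whether
# training exclusion is actually enforced.
for system in AI_DATA_JOURNEY["systems"]:
    print(system["name"], "-", system["purpose"], "-", system["retention_days"], "days")
```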
From empathetic AI to trusted AI
Empathetic AI will help CX leaders think more carefully about how systems speak to customers. AI consent forces us to think just as carefully about what systems do with customers’ words.
In 2026, trust will not be defined by how intelligent an AI interaction feels in the moment. It will be defined by whether customers believe the organization respected their agency once the conversation was over.
The next phase of CX maturity lies not only in better experiences, but in experiences customers can genuinely agree to.
