The AI readiness gap: Why CX leaders hold the key to successful AI implementation
Sue Duris outlines the questions all organizations need to ask before scaling AI, rather than waiting until problems emerge
Here's a scenario I keep seeing play out.
An organization runs a successful AI pilot. Customers respond well, metrics improve, leadership is excited. The business case gets approved for a wider rollout. And then — slowly at first, then all at once — things start going sideways. Response quality becomes inconsistent. A bias issue surfaces in the data. An AI-influenced decision affects a customer in a way nobody anticipated, and nobody is quite sure whose problem it is to fix. The pilot that worked beautifully in a controlled environment is now causing more problems than it solved.
Sound familiar? You're not alone.
I’ve had this conversation with CX leaders more times than I can count. In almost every case, the technology wasn't the problem. The organization simply wasn't ready for what successful artificial intelligence (AI) demands — and the most underestimated, under-addressed dimension of that readiness is governance.
That may not be what you want to hear when you're excited about an AI initiative and under pressure to show results. But the data is unambiguous, and understanding it is what separates CX leaders who scale AI successfully from those who spend months managing the fallout of a pilot that never should have been scaled in the first place.
This article covers:
- Launching AI is easy. Transforming with it is not
- Governance is the hidden variable — and here's why most get it wrong
- Why CX leaders are the right people to lead
- The readiness dimensions most organizations skip
- The AI organizational readiness self-assessment
- Readiness is a continuous discipline, not a one-time gate
Launching AI is easy. Transforming with it is not
Intercom's 2026 Customer Service Transformation Report surveyed more than 2,400 global customer service professionals and found that 82 percent of organizations invested in AI over the past year. By almost any measure, AI adoption in customer service has become mainstream.
But here's the number that should give every CX leader pause: only 10 percent have reached what the research calls mature deployment — where AI is genuinely embedded into core operations, given real responsibility, and delivering compounding returns. Declan Ivory, VP of customer support at Intercom, shared that the main factor consistently separating this 10 percent from everyone else is a strong conviction that you can deliver a vastly superior customer experience through a high-quality AI implementation. He further noted that “once you have this conviction, then there is a sense of urgency to move with speed and intentionality as it ultimately leads to significant top line benefits in terms of customer loyalty and retention.”
What about the rest? Stuck somewhere between enthusiasm and execution.
The outcomes that separate early-stage teams from mature ones are not marginal. Intercom also found that among teams at initial deployment, 62 percent report improved customer service metrics. Among mature teams, that rises to 87 percent. The ROI gap is even wider: 43 percent of early-stage teams can demonstrate return on investment, versus 70 percent of mature teams. Confidence that customer expectations can always be met — that critical feeling of being in control of the experience — triples as organizations move from initial to mature deployment.
These are not incremental differences. They describe fundamentally different organizational realities.
What distinguishes them? It is not which tools organizations chose. It is the intentionality and discipline with which they built the infrastructure to support, monitor, improve, and govern those tools as they scaled. Intercom puts it plainly: launching AI is easy, but transforming with it is not. And the thing that enables transformation — the connective tissue that holds everything together as AI moves from pilot to enterprise capability — is governance.
McKinsey found that 44 percent of organizations have experienced negative consequences from AI implementation. That's nearly half. And when you examine why, the causes are overwhelmingly organizational: absent accountability structures, inconsistent data that wasn't clean enough to automate, teams that weren't prepared to work alongside AI, governance frameworks that existed on paper but had no operational reality.
Organizations blame the technology because it’s more comfortable than acknowledging they weren't ready for what the technology demanded of them.
Governance is the hidden variable — and here's why most get it wrong
When organizations plan an AI pilot, preparation typically focuses on three things:
- Identifying the right use case
- Selecting the right vendor
- Building the technical infrastructure
All legitimate. All necessary. All insufficient.
The questions that determine whether a pilot can scale — and whether scaling creates value rather than chaos — are governance questions.
Who owns the decision when AI produces a wrong or harmful outcome? What threshold of declining accuracy triggers a review? How will the organization respond when a customer discovers that AI influenced a decision that affected them? If the vendor updates their model and AI behavior changes, who is responsible for detecting that and deciding whether it's acceptable?
These are not hypothetical edge cases. They are the everyday operational reality of AI at scale. And without answers established before deployment, every one of these situations becomes an unplanned crisis rather than a managed response.
Here's the pattern I see most often: a pilot succeeds in a controlled environment with a well-defined scope, clean data, and close monitoring. Leadership declares victory. The rollout is approved. And the moment scaling begins, every unresolved governance question surfaces simultaneously — who owns cross-functional escalation, what data can AI access across different markets and jurisdictions, how will bias be monitored at volume, what does the customer communication plan look like when something goes wrong.
Without pre-established answers, scaling becomes organizational firefighting — which consumes exactly the capacity that AI was supposed to free up.
The five-stage AI governance maturity model I outlined in a previous CX Network article maps the journey from ad hoc and ungoverned — where AI initiatives emerge from individual teams with no centralized oversight — through to adaptive and evolving, where governance anticipates emerging risks before they materialize.
The uncomfortable truth is that most organizations launching AI pilots are genuinely at Stage 1 or 2 of governance maturity, while behaving as though Stage 3 or 4 infrastructure is already in place.
They have enthusiasm, budget approval, and a vendor selected. They do not yet have the accountability structures, escalation protocols, measurement frameworks, or cross-functional alignment that responsible scaling requires.
This is not a criticism. It is a diagnosis. And the reason it matters is that CX leaders are uniquely positioned to address it — if they choose to step into that role deliberately.
Why CX leaders are the right people to lead
The Intercom research found that nearly a third of organizations now report their customer service teams are leading their company's AI transformation strategy. That is not a coincidence.
CX leaders sit at the intersection of what AI can do technically and what it means for real people — and that is precisely where governance lives. You see the full impact of AI decisions in a way that technology leaders, however capable, often cannot see from their vantage point.
Here's what I mean. A misconfigured routing algorithm is a technical problem. The customer who was classified incorrectly, routed to the wrong service tier, received a response that ignored their history, and then found no human willing or able to override the AI's decision — that is a trust and loyalty problem with compounding consequences. CX leaders see both sides. They understand the full cost. And that understanding gives them credibility as governance advocates that is genuinely rare.
When you raise a governance concern, you are not being cautious or bureaucratic. You are protecting the customer relationship that every other function in the business depends on.
That reframe matters. It creates permission to slow down when necessary and the authority to insist on standards that might otherwise be treated as optional.
You also have the cross-functional relationships that governance requires. AI governance does not live in one function — it requires technology, legal, operations, compliance, and product working together with clear roles and real coordination. CX leaders typically have working relationships across all those functions in a way that IT-led governance initiatives often struggle to build. You are natural conveners of the cross-functional governance committee that effective AI scaling depends on.
And because you see the full customer journey, you are well placed to anticipate where AI expansion will create friction rather than consistency. More than half of the organizations in the Intercom study plan to scale AI beyond customer support in 2026, and nearly half express a desire to create a unified customer experience as they do so.
But the risk the research flags is real: if sales deploys one AI agent, marketing launches another, and customer success builds a third — each optimized for its own workflows and operating in isolation — customers experience inconsistency, repetition, and the jarring sense of being processed rather than known.
CX leaders are the natural guardians against that fragmentation. But that role only works if you have already established governance credibility within the organization.
The readiness dimensions most organizations skip
Here's where the conversation needs to get concrete. Genuine AI readiness is not a pre-launch checklist completed once and filed away. It is an honest, ongoing assessment across five dimensions that need to be in place — or actively being created — before a pilot attempts to become an enterprise-wide capability.
1. Strategic and leadership alignment
This goes deeper than executive sign-off on a budget. It means leaders have reached genuine consensus — not just polite agreement in a meeting — on the organization's risk tolerance for AI, the primary objectives the initiative is meant to achieve, and how those objectives will be measured in terms that reflect customer value, not just operational efficiency.
I’ve worked with organizations where senior leaders enthusiastically approved AI initiatives in principle and then resisted — genuinely resisted — what governance required of them in practice: slowing a deployment, modifying a vendor contract, investing in human oversight they'd hoped AI would eliminate.
That resistance is not bad faith. It is the gap between what leaders agreed to and what they understood they were agreeing to. Strategic alignment means those conversations happen before the pilot launches, not during the crisis that reveals they never did.
The AI roadmap should be treated as a living business document — reviewed quarterly by cross-functional leaders, updated as regulations shift and customer expectations evolve, owned by the business as much as by technology. If your AI roadmap sits exclusively in a product or technology team, that is a readiness warning sign.
2. Governance and accountability infrastructure
This is where most organizations have the widest gap between what they believe is in place and what exists. Governance fails when everyone assumes someone else is responsible, or when policies exist on paper, but no one has real authority to act on them.
Ivory also pointed to the more than 50 percent of organizations planning to scale AI beyond customer support in 2026, and said the governance risk those organizations most easily overlook is the need to deliver a seamless CX when deploying AI beyond customer service. He noted this means “you need a single point of control for the deployment, operation and governance of customer-facing AI across multiple use cases along the customer journey. This avoids poor AI orchestration that can lead to a poor customer experience.”
Effective governance infrastructure means a cross-functional committee with clearly defined membership, regular meeting cadence, and — critically — actual decision-making authority, including the authority to pause or roll back AI deployments. It means defined roles: who provides customer intelligence, who owns technical performance monitoring, who ensures regulatory compliance, who manages escalation. It means predetermined decision frameworks for different incident types, so that when something goes wrong, the response is coordinated rather than improvised.
For organizations using third-party platforms — and most are — governance must extend to vendors. When your vendor's AI interacts with your customers, you own the outcomes regardless of who built the technology. Vendor contracts should include transparency requirements, bias audit access, data handling agreements, and incident notification protocols. This is not standard in most vendor relationships, which means it needs to be negotiated explicitly, upfront.
The governance committee needs more than structure — it needs a shared operating framework for metrics. Specifically: what are you measuring before AI goes live, what do you measure after, and what do you do when the numbers move?
Start with baselines. Before deployment, document your current performance across the metrics that matter — first-contact resolution, average handling time, customer satisfaction, complaint rate, escalation rate, and any segment-level breakdowns that might reveal bias post-deployment. Without these baselines, you cannot tell whether AI is genuinely improving performance or simply shifting problems. Speed improvements that coincide with rising complaint rates are not wins. They are warning signs that got missed because nobody was comparing against a meaningful baseline.
Post-deployment, define your thresholds in advance — not after something looks wrong. What percentage drop in customer satisfaction triggers a governance review? What escalation rate signals that human override is happening too frequently and the AI needs retraining? At what point does a bias signal in a customer segment move from a monitoring note to a formal incident? These thresholds should be documented before go-live and owned by the governance committee, not left to individual judgment in the moment.
Accountability mapping closes the loop. For each key metric, the committee should have defined answers to three questions: who monitors it on a continuous basis, who is responsible for escalating when a threshold is crossed, and who has the authority to act — including modifying, pausing, or rolling back the deployment. When those three roles are different people, you have a functional governance chain. When they collapse into one person or remain undefined, you have a gap that will surface at the worst possible time.
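To make that concrete, here is a minimal sketch of how a governance committee might record baselines, thresholds, and the monitor/escalate/decide chain for each metric in one shared artifact. Every metric name, figure, and role below is an illustrative placeholder rather than a recommended standard; the point is that the baseline, the trigger, and the three accountable roles are written down together before go-live.

```python
from dataclasses import dataclass

@dataclass
class MetricGovernance:
    """One row of a governance committee's measurement framework (illustrative only)."""
    baseline: float          # documented pre-deployment value
    alert_threshold: float   # change from baseline that triggers a governance review
    monitor: str             # who watches this metric continuously
    escalates: str           # who raises it when the threshold is crossed
    decides: str             # who can modify, pause, or roll back the deployment

# Hypothetical metrics, values, and roles; replace with your own baselines and owners.
framework = {
    "first_contact_resolution": MetricGovernance(0.74, -0.05, "CX analytics", "CX operations lead", "Governance committee"),
    "csat":                     MetricGovernance(4.20, -0.30, "CX analytics", "Head of CX",         "Governance committee"),
    "complaint_rate":           MetricGovernance(0.02, +0.01, "Quality team", "Compliance lead",    "Governance committee"),
}

def review_needed(metric: str, current_value: float) -> bool:
    """Return True when a metric has moved past its pre-agreed threshold."""
    rule = framework[metric]
    change = current_value - rule.baseline
    # Negative thresholds catch declines (CSAT, FCR); positive thresholds catch increases (complaints).
    return change <= rule.alert_threshold if rule.alert_threshold < 0 else change >= rule.alert_threshold

# Example: a complaint rate rising from 2 percent to 3.5 percent crosses the +1 point threshold.
print(review_needed("complaint_rate", 0.035))  # True
```

The specific structure matters far less than the discipline it encodes: no metric without a baseline, no threshold without an owner, and no owner without the authority to act.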
3. People and culture readiness
Here's the dimension that strategic planning almost always overlooks: does your organization have the psychological safety to surface problems before they become customer incidents?
Teams that operate in a culture where AI initiatives are celebrated as leadership priorities — and where raising concerns feels politically risky — will hide performance problems until they become crises. If your frontline teams cannot flag when AI is failing customers without worrying about how that reflects on them, you will always be the last to know. Building explicit channels and explicit permission for those conversations is not a soft people issue. It is a governance imperative.
Culture readiness also encompasses AI literacy across the organization. Not technical depth — but enough contextual understanding for each role to fulfill its governance responsibilities:
Leadership needs strategic literacy: the ability to evaluate risk classifications, question vendor claims, and make investment decisions grounded in realistic expectations rather than vendor slide decks.
Governance committee members need operational literacy: the ability to interpret performance metrics, identify potential bias signals, and evaluate whether customer impact measurements are capturing the right things.
Frontline teams need practical literacy: knowing how to explain AI-influenced decisions in plain language, recognizing when AI falls short, and documenting feedback in ways that improve the system.
One of the most preventable reasons AI initiatives stall — or quietly fail — is that staff feel blindsided by a technology that changes their work without their input. I've seen this play out even in organizations with strong leadership alignment and solid technical infrastructure. AI works. The metrics improve. And adoption is still patchy six months in because the people expected to use it every day were brought in too late and too superficially.
Change management in AI isn't a communications exercise. It's a design principle. The teams closest to your customers — your agents, your frontline supervisors, your operational leads — have knowledge of edge cases, customer behavior, and process nuances that no implementation team can fully capture upfront. Involving them early is not just good people practice. It produces better AI outcomes.
What that looks like in practice:
- Involve frontline staff in use case selection and pilot design, not just rollout.
- Create feedback mechanisms that give them a real channel to flag what's working and what isn't — and demonstrate that feedback is acted on.
- Define what AI adoption looks like in their roles and build usage metrics into performance conversations, not as surveillance, but as a shared commitment to making the system work.
When staff see their input shaping AI rather than AI shaping their work without them, ownership follows. And ownership is the difference between an AI system that continuously improves and one that slowly degrades as the people using it work around it.
Staff adoption metrics deserve the same attention as customer-facing performance metrics. Track not just whether the system is being used, but how it's being used, where human override rates are high, and what that tells you about gaps in AI performance or gaps in staff confidence. Both matter.
Both dimensions — literacy and active involvement — follow the same logic: the people expected to use and govern AI need to understand it and to have had a hand in shaping it.
None of this requires months of training programs. It requires intentional, role-appropriate education that treats governance capability as a core competency, not an optional enrichment activity.
4. Data and process readiness
This is the dimension most frequently underestimated — often because organizations discover the problem only after they have committed to deployment.
AI does not improve processes. It amplifies them. If the underlying processes are inconsistent, if the knowledge base that an AI agent will draw on is fragmented, outdated, or organized around team silos rather than customer needs, if the data contains historical bias — all these problems become faster and more visible at scale, not smaller. Before automating a process, that process needs to be documented, understood, and clean. Before deploying a customer-facing AI agent, the knowledge it will rely on needs to be current, accessible, governed for accuracy and audited for bias.
Organizations also need human performance baselines established before deployment — so that AI performance can be meaningfully compared. Saying AI is faster is not a governance insight. Instead, saying “AI achieves 82 percent first-contact resolution versus our human baseline of 74 percent, but generates three times the complaint rate on billing inquiries” is actionable intelligence.
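As a small illustration of why those baselines matter, the sketch below compares hypothetical post-deployment AI results against a documented human baseline, using the same illustrative figures as the example above. The numbers are placeholders; the comparison structure is the point.

```python
# Hypothetical figures mirroring the example in the text; real values come from
# your own pre-deployment measurement, not from this sketch.
human_baseline = {"first_contact_resolution": 0.74, "billing_complaint_rate": 0.010}
ai_performance = {"first_contact_resolution": 0.82, "billing_complaint_rate": 0.030}

for metric, human_value in human_baseline.items():
    ai_value = ai_performance[metric]
    print(f"{metric}: human {human_value:.1%} vs AI {ai_value:.1%} "
          f"({ai_value / human_value:.1f}x the human baseline)")
```

Run side by side like this, the same deployment shows both a genuine FCR improvement and a tripled billing complaint rate — exactly the kind of mixed signal that gets missed when no baseline was documented.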
5. Customer trust readiness
This is the dimension that most distinctly separates CX-led AI governance from technology-led AI governance — and the one most likely to be absent in organizations that treat AI primarily as an efficiency play.
Customer trust readiness means being able to answer, before deployment: How will customers know when AI has significantly influenced a decision that affected them? What is the pathway for a customer to request human review? Do you have pre-approved communication protocols — reviewed by legal, ready to deploy — for when something goes wrong? Or will you be drafting responses in real time during a live incident?
Research published by KPMG in 2025 found that while two-thirds of people use AI regularly and 83 percent believe its use will result in a wide range of benefits, trust remains a critical challenge: only around 46 percent of people globally are willing to trust AI systems. This aligns with Salesforce’s research, which found that customer trust in businesses to use AI ethically dropped from 58 percent to 42 percent between 2023 and 2024.
Declining trust isn’t inevitable. It reflects the gap between how organizations are deploying AI and what customers experience as transparent and fair. Organizations that address this proactively — building disclosure practices and human override pathways into the customer experience design rather than the terms and conditions — are building competitive advantage, not just managing compliance.
The AI organizational readiness self-assessment
Use these questions as a diagnostic — a structured conversation to have within your organization before you scale, not after problems emerge. Honest answers to hard questions now are far less costly than discovering the gaps through a customer incident.
Strategic & leadership alignment
- Has senior leadership reached a genuine consensus on AI risk tolerance — or just agreed to the business case?
- Are success metrics defined in customer terms, not only operational efficiency terms?
- Is there a documented AI roadmap reviewed at least quarterly with cross-functional input?
- Do leaders understand what governance will require of them, including the possibility of pausing or modifying a deployment?
- Is responsibility for AI strategy shared across CX, technology, and business leadership — or does it sit primarily in one function?
Governance & accountability
- Does a cross-functional AI governance committee exist with defined membership, authority, and meeting cadence?
- Are decision rights clearly defined — including who can pause, modify, or discontinue a deployment?
- Do escalation protocols exist in documented form before deployment, covering a range of incident severities?
- Does your vendor governance require transparency about model behavior, bias testing, and incident notification?
- Is there a clear process for keeping governance frameworks current as AI capabilities and regulations evolve?
- Are pre-deployment baselines documented across all key customer and operational metrics?
- Are performance thresholds defined in advance – specifying what level of change triggers a review, escalation, or intervention?
- Is accountability clearly mapped for each metric: who monitors, who escalates, and who has authority to act?
People & culture
- Do frontline teams feel safe raising concerns about AI performance without risking their standing?
- Do governance committee members have enough AI literacy to question performance data and recognize bias signals?
- Are AI-related responsibilities reflected in job descriptions and performance measures across relevant roles?
- Does leadership treat AI limitations and setbacks as learning opportunities — or as problems to minimize?
- Do frontline teams have the practical AI literacy to recognize when AI is underperforming and explain AI-influenced decisions to customers in plain language?
- Do middle managers and supervisors have enough AI literacy to coach their teams through AI-assisted workflows and identify performance issues before they escalate?
- Does leadership have the strategic AI literacy to evaluate vendor claims, interpret performance data, and make informed governance decisions?
- Were frontline staff involved in use case selection and pilot design – not just informed at rollout?
- Are there active feedback channels for staff to flag AI performance issues, and is there evidence that feedback is acted on?
- Are AI usage and adoption metrics tracked at the staff level and incorporated into performance conversations?
Data & process
- Is there a single source of truth for customer data — accessible to AI systems — rather than fragmented records across multiple platforms and teams?
- Are the processes you plan to automate documented, consistent, and clean enough to be automated?
- Is the knowledge base that customer-facing AI will draw on current, structured, and governed for accuracy?
- Have you established human performance baselines to compare meaningfully against AI performance post-deployment?
- Is there an ongoing data quality monitoring process — not just a pre-deployment data review?
- Is data governance — access controls, privacy compliance, and bias auditing — in place for the data that AI will use?
Customer trust
- Have customers been clearly informed — and given meaningful choice — about how their data is used within AI systems?
- Are there specific protections and alternative pathways in place for vulnerable customers or those who may be disadvantaged by AI-driven interactions?
- Can you explain to a customer, in plain language, how AI influenced a decision that affected them?
- Do customers have a clear and accessible pathway to request human review of AI-influenced decisions?
- Do you have pre-approved communication protocols for AI incidents — reviewed by legal and ready to deploy?
- Are proactive disclosure practices designed into the customer experience itself, not relegated to terms and conditions?
- Are customer trust indicators — satisfaction, complaint rates, sentiment around AI interactions — monitored and reported to your governance committee?
Readiness is a continuous discipline, not a one-time gate
The organizations scaling AI successfully are not the ones that passed every readiness question before launch. They are the ones that treat readiness as an ongoing discipline — assessing honestly at each stage, closing gaps methodically, and building governance infrastructure that evolves in step with the technology.
Intercom's research captures this well. What distinguishes mature deployments is a continuous improvement loop — training, testing, deploying, analyzing — with governance providing the discipline to ensure that the loop produces genuinely better outcomes over time, rather than just more automation.
The organizations that thrive are not necessarily those with the most advanced technology. They are the ones with the organizational maturity to deploy confidently, adapt quickly when performance falls short, and maintain customer trust through every iteration of a technology that will keep evolving whether organizations are ready or not.
The readiness gap is real. It is the most consistent variable separating those who transform with AI from those who just launch it. But it is also closeable — for CX leaders who are willing to name it honestly, engage the governance conversation early, and step forward as the organizational bridge that successful AI transformation genuinely requires.
That bridge is yours to build. Will you start before the next pilot, or after the next crisis?
Quick links
- AI Is exposing the CX authority gap
- Consent: The next trust challenge for AI in CX
- 101 practical principles from behavioral science