It was a big week for CX Network last week. We ran our All Access: The AI Revolution in CX webinar series, with 12 sessions and 21 speakers sharing case studies and discussing the most pressing issues surrounding AI in CX, such as voice AI, responsible AI and the impact of agentic AI on CX teams. During the series, we released our flagship CX Horizons report packed full of proprietary research and analyzed by our editor-in-chief. We also released our Top 50 AI in CX Leaders to Follow in 2026 list, several of whom featured in the series as speakers.
If you missed out on the excitement – not to worry! All sessions are available to watch on-demand on our CX+ platform here. If you're more the reading type, we've got you covered. Read on for the key takeaways from the event.
Key findings from our CX Horizons report: machine customers are here, but customers still don't trust brands with AI
Naturally, there is much focus on how brands are using AI. Yet as CX Network editor-in-chief Melanie Mingas explained in her session on the research behind the CX Horizons report, "the big news here is not about how organizations are using AI for CX, but how customers are now using AI for CX and service. We kind of see this as being bigger than the internet right now […] Consumers are now choosing AI-first rather than search-first journeys when they're buying things online. They might even be doing this in lieu of going direct to a brand or retailer's website or app".
This, of course, has huge ramifications for CX practitioners.
Despite using AI themselves, customers remain deeply distrustful of how brands use it. As a result, trust has become a strategic differentiator.
Sue Duris, principal consultant at M4 Communications, report contributor and one of our top AI in CX leaders to follow, who joined Mingas to analyze the report findings, put it this way: "I feel that 2026, trust is an inflection point, and I also believe that trust has become a differentiator. If a customer doesn't feel that they trust the brand, they're going to leave. And I think that tolerance level is lowering and lowering."
Another recurring alarm bell surfaced by the research was that organizations, while racing to deploy AI, are largely unready to implement it successfully. The data that acts as the foundation for AI is often fragmented and poorly maintained. Duris said "Ambiguity does not exist in AI. And if your CX program and your operating systems aren't where they should be, AI is going to be the magnifying glass and expose those things". Mingas agreed, saying "Tech cannot fix your culture or your people problem. Tech can only fix a tech problem. And overlaying it on top of processes that are broken is only going to magnify those broken processes".
Watch the full session here.
Moving from AI investment to CX impact
AI is no longer simply a bolt-on. Increasingly, companies are using it as the core infrastructure powering CX, from scheduling to quality assurance. "Every leading organization today is building their CX strategy around AI. They are building platforms that can deliver fully automated service, 24-7, with no human in the loop, solving customer intents and being able to resolve it in real time", said Dana Shalev, director of product marketing at NiCE.
This, of course, has profound implications for CX teams, but the future won't be AI versus humans; it will be humans and AI agents working together, with AI handling repetitive tasks and humans focusing on complex and emotional interactions. "The hybrid workforce is no longer a future concept — it's the reality. Humans and AI agents are all working together, they each play a distinct role, and that's the new operating model", said Jennifer Wilson, director of product marketing at NiCE. Wilson turned to Lufthansa to illustrate her point, saying "They're automating over 16 million conversations per year, with an 80 percent automation rate for refunds and rebookings alone, handling peak volumes of 12,000 messages per minute, including translation in real time. This is not experimentation – this is AI they truly operationalized."
The reality is that visibility and observability are what separate ROI from failure. "When organizations can't see how the AI is performing, where it's delivering value, where it's falling short, they can't optimize it. That's where things start to break down", said Wilson, "The difference between that AI investment you're making and real business impact comes down to three things: visibility, control, and continuous optimization".
Watch the full session here.
How AI can elevate live agents
In his provocative session, Mike Egli, CX transformation practice leader at RingCentral, was frank about the impact of AI in contact centers. "When you take the low-hanging fruit into AI, every single interaction that hits the human is a 100% guarantee of complexity. We've killed what we've called the 'breather call' for decades. There's no more simple status updates, no easy win to reset the brain – every pickup of a call is a high-stakes, high-emotion baseline". He emphasized the human cost of this, saying "87% of agents are reporting extreme stress. They're working in a pressure cooker that has no release valve – it's just call after call after call." Egli railed against the "full automation" myth, saying "we're spending billions to create a more expensive, often more irritating path to the same human agent that we were trying to avoid in the first place. We have to stop chasing the myth of total automation".
Egli advocated for investing in human agents, rather than seeking to replace them, explaining that companies that augment their agents see 32 percent higher CSAT scores and save between $10,000 and $15,000 on onboarding costs due to higher retention rates. The key to retention, he said, is recognition: "When agents feel like they're part of a winning team that actually sees their effort – they don't leave, they perform." Agent-assist AI has enormous potential. Egli explained "Agent-assist technologies can easily reduce handle times by a minute per call. If you couple that with identifying a few workflows where we're also seeing drag and streamlining those, you see 30-50% overall reductions in handle time."
Watch the full session here.
How MSU Federal Credit Union leverages AI for improved agent and customer experience
MSU Federal Credit Union (MSUFCU) shared its step-by-step approach to AI implementation, grounded in building trust with members and staff. Colleen Cole, senior vice president of member service and lending at MSUFCU, said "We actually looked at leveraging AI internally, with internal chatbots managed by that same team. When they find great experiences with the tools they're using, they're then more confident in the experience that our members are having as well". And it isn't just the agents enjoying the tools, as Cole explained "On day one, one of the agents said how smooth it was and how confident they felt talking about products and services immediately. We had a 10 percent lift in CSAT the following month just by removing some friction there".
Cole explained how MSUFCU decided what to automate: "We really looked at the first basic use case of automating the simple things. What are the requests we're getting consistently? How many times is our team answering the question of 'what is your routing number?', and how do we just take that off their plate right away? That's going to get our members a quicker answer and remove that repetitive, mundane task from the team".
Responsible AI in regulated environments and the build versus buy debate
Building an AI demo is relatively easy, but implementing it responsibly in a regulated, customer-facing environment is an entirely different matter. "The real question is not whether AI can produce a good demo. The real question is whether it can operate safely, where customers, regulators, privacy expectations, and operational realities all come into play", opined Mrunal Gangrade, vice president of data engineering at JPMorgan Chase. Like other speakers at the series, Gangrade doubled down on the trust element: "In regulated customer environments, trust and traceability are not optional add-ons. They are the foundational requirements". She continued, "prototype success is not the same as production readiness. The closer AI gets to the customer, the more the organization needs discipline around how it is designed, deployed, and governed".
The session also touched on the build versus buy debate that many organizations are having internally at present. Gangrade said "For regulated industries – because of the need for deep control, transparency and explainability – the preference tends toward building. For non-regulated environments, buying is often better because it saves time and reduces workforce burden. A hybrid model, where you build some and buy some, is often the best fit".
When AI goes beyond simply fixing pain points: the Nathan James story
Taylor Johnson, director of CX at Nathan James, when asked why the online furniture retailer implemented AI, emphasized the importance of putting CX first, saying "It's important to stay rooted in improving the customer experience. There weren't necessarily any pain points we were trying to solve. We saw specific areas where AI could actually give our customers a better experience than a human could […] If AI is making things easier for your customer, then everything else will follow after that". On the metrics that matter during deployment, she warned against leaning on containment rates as a single source of truth, advising "be careful about making containment rate one of the only numbers that you look at. Don't use AI as a band-aid to cover underlying issues. Try to get to the bottom of why these questions are coming up in the first place". Sam Chandler, director of customer success at Kustomer, recommended "Your executive leadership just wants to make sure they're seeing ROI in their investment. Stop thinking like a CX leader when you're talking about ROI on your AI investments. You need to think like a CIO".
Both panelists were emphatic about the human-in-the-loop approach, noting that while AI mistakes are manageable, those that no human catches can be brand-damaging. "A mistake made by AI that no human catches – that would be a failure in our model. That's exactly why we've put a lot of energy into keeping humans in the loop, not just about catching mistakes, but making sure that the system itself is designed so that those mistakes don't slip through unnoticed", said Johnson.
Watch the full session here.
True hyper-personalization must be based on trust
The shift from segmentation to real-time, individualized experiences represents one of the most exciting elements on the frontier of AI-driven CX. Banani Mohapatra, senior manager of AI, machine learning and data science at Walmart, said "Personalization was primarily segmentation – grouping different people into buckets, like personas: moms, techies, pet lovers. But today, we have moved from buckets to individual user journeys. We now have the ability to serve a unique experience to 100 million people simultaneously, where the system isn't just reacting to what you purchased yesterday, it's actually predicting what you need right now".
The line between creepy and cool, however, is thin. It comes down to the data the personalization is based on, and whether or not the customer knowingly shared it. "As long as the platform uses the information the user has shared with them, they feel comfortable. But when it starts using information customers haven't shared, it feels unsafe. If the value is greater than the discomfort, the user accepts it. If the discomfort is more, the trust gets broken", said Kailash Thiyagarajan, senior machine learning engineer at Apple. Mohapatra agreed, saying "Personalization can feel very helpful when you're anticipating a need. But it can easily feel intrusive when you're exposing a private data point that the user hasn't even realized they shared. That's where transparency kicks in".
Watch the full session here.
Responsible AI in practice and under pressure
For Aarohi Tripathi, senior data engineer at CVS, "responsible AI" isn't about policy statements; it becomes real when it's tracked through concrete metrics. "You need to have firm metrics, where you can actually track whether the fairness metrics are there, the bias metrics are there, the abuse metrics are in place", she said. She, like others during the series, advocated for a human-in-the-loop approach, saying "there should always be a check-in process, whether it is a human in the loop, or you're tracking things. When you see that the metrics are off, you can respond to it and get it corrected".
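The "firm metrics" check-in Tripathi describes can be sketched as a simple threshold gate: fairness, bias and abuse metrics are tracked against limits, and any breach is surfaced for a human to correct. The metric names and thresholds below are hypothetical, not from CVS or any speaker's stack.

```python
# Illustrative thresholds for a responsible-AI check-in.
# Metric names and limits are invented for this sketch.
THRESHOLDS = {
    "fairness_parity_gap": 0.05,  # max allowed outcome gap between groups
    "bias_score": 0.10,           # max allowed model bias score
    "abuse_rate": 0.01,           # max allowed rate of abusive outputs
}

def review_needed(metrics: dict) -> list:
    """Return the names of any metrics that are out of bounds,
    so a human in the loop can respond and correct them."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]
```

Run on every monitoring cycle, a check like this turns "responsible AI" from a policy statement into something that can actually trip an alert.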
The discussion addressed the underlying causes of bias. Historical data sets can carry the assumptions of the past, feeding them into modern AI systems and encoding historical biases. Tripathi used the example of the gender pay gap – which, although not yet closed, is less pronounced today – to illustrate this, explaining that "that's legacy data, but you're still training your system on it, and that can show up as inconsistencies when you finalise your model. We need to re-evaluate, re-balance the data."
Businesses are feeling the pressure to keep pace with AI developments. This can cause organizations to abandon AI initiatives mid-stream, which can have dire consequences for quality and outcomes. Tripathi said "Every two months there is a new model, and something you are doing currently, you are just dropping it and moving to the new thing. The difficulty at this point is how do you take to completion what you started, and not just drop it just because there is a new model out there." Staged rollouts could be the answer. She said "I am a firm believer of staged rollouts. You do not deploy your model to 100 percent of the population. You test on a smaller population first, get the feedback, and you continue doing that until you've given it out to all of them."
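The staged rollouts Tripathi advocates can be sketched as a deterministic traffic-splitting gate: the new model is exposed to a small share of users first, then widened as feedback comes in. The stage percentages and function names here are illustrative assumptions, not anyone's production setup.

```python
import hashlib

# Hypothetical rollout stages: percent of users served by the new model.
ROLLOUT_STAGES = [1, 5, 25, 100]

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically bucket a user into the first `percent` of traffic,
    so each user gets a stable assignment across sessions."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def serve(user_id: str, stage: int) -> str:
    """Pick which model a user sees at the current rollout stage."""
    percent = ROLLOUT_STAGES[stage]
    return "new_model" if in_rollout(user_id, percent) else "current_model"
```

Because the bucketing is hashed rather than random, a user who saw the new model at stage one keeps seeing it at every later stage, which keeps feedback consistent as the rollout widens.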
Voice AI is the fastest path to resolution
Voice AI is emerging in contact centers as the fastest path to resolution, and old-fashioned IVRs are on the way out. Lauren Kiefer, GTM at ElevenLabs, has a positive outlook: "We used to have the old friction era. I was very much guilty of slamming zero every time I called a help centre. The reason was that these IVR bots were never designed to help, they were designed for deflection. Now, in this new world of AI, AI can actually take the full resolution end-to-end. And that's really what customers are looking for – they want speed and customer experience to coexist."
Voice AI is now emotive, which can help build trust with customers. "[ElevenLabs] agents can actually detect frustration and respond in a calming, reassuring way. It's not just based on what they say, but how they say it. If I call and say, 'Where is my package? I have a big meeting tomorrow, it's incredibly important' I want whoever I'm talking to deeply understand that urgency. Now, through the power of smart AI, we can really do that."
The technology also has huge potential in training and development. Kiefer explained "There is nothing more powerful than simulated training. You can have AI calling in that are crying, that are yelling, that are frustrated and see how your CX agents handle those interactions. You can give them feedback live on the call of, 'You did a great job de-escalating here. During this part, you said this, you should have said this.' For me, it's the new wave of learning and development."
AI is opening up new possibilities of using video in CX strategy
AI is removing the barriers that previously made video an inaccessible tool for many CX teams. David Gunn, vice president of customer success at SundaySky, delved into this point: "With AI, it lowers the barrier to entry for creation, it lowers the time to creation, and it lowers the skill and the staff needed to create content. You're shifting from 'which of our initiatives are worth building a video for and investing 6 to 8 weeks into?' to 'what else can we create with video? What else do we have that is currently a document that no one's reading that we can turn into an engaging video?'" Video can be one of the most exciting pieces of the hyper-personalization trend. Gunn said "when people get a video that is tailored to them, it resonates, it gets watched, and it gets acted upon. It provides a fantastic experience for the customer that feels like, 'wow, this was built just for me', while on the back end, you're able to create that at scale for every single one of your customers."
For Gunn, the best moment to deploy video in CX is during onboarding, when the window to demonstrate value is narrow. "I think the biggest impact is at that first stage, really making sure that you're engaging with [customers] close to the time of purchase, personalizing with details around what you know about their reason for purchase and pain points, and giving them clear guided next-step actions when they need it most. That's where you're going to have really great impact."
The AI assurance gap and what this means for CX
While many organizations are busy deploying AI, testing is often neglected, jeopardizing any progress made. Unlike a traditional IVR system, AI must be continually tested and monitored to ensure it is having the desired effect. Hamish Graham, founder & CEO of Pump CX, explained "AI is not like a traditional IVR. It continually evolves, it learns, it adapts, and it drifts over time. What we're finding is these AI systems are degrading without you touching the system. Intent handling is subtly changing. Handoffs that used to be clean start to fail in edge cases. Sometimes it fails spectacularly, and we read about it in the news. Other times it fails quietly, and you don't realize you've got a problem – you're just getting silent failures." He delved deeper, sharing an example from an insurance company that had deployed AI in customer service: "The AI appeared to have adjusted its behaviour, having learned from previous interactions that most of the time when it asked the wellness check question, the customer was fine. So it actually started to deprioritize asking that question, and instead continued optimizing for containment and looking for an outcome when that should never have been the goal in this situation. When the systems looked at it, they said: response time's excellent, it's calling the right APIs, no errors flagged. Yet the outcome was just fundamentally wrong."
The non-deterministic nature of agentic and generative AI means that every conversation is different. Testing must be rigorous and continual, not treated as a one-and-done exercise. Not doing this can have major consequences. As Graham explained, "Most organizations don't have a way to continually assure and validate that the system's working right now. Without that, journeys degrade, edge cases come in, containment may actually go up as customers get frustrated and don't bother trying to exit the bot – instead, they exit your business. But it's a gradual problem, and you don't know until it's a real problem."
So how should testing be carried out? Trent Evans, vice president of business development at Pump CX, advised "With traditional IVRs, you could test it a thousand times and get the same outcome. What we're finding with agentic AI is, because it's non-deterministic, every conversation is different. You can be testing an AI voice bot, and for 500 calls it runs perfectly well and then we find on the 505th call it starts hallucinating, or it's advising customers to talk to their competitors. My personal belief is that you would want to test every single individual call journey at least a thousand times to be confident that it's working."
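The repeated-journey testing Evans recommends can be sketched as a harness that replays one journey many times and inspects the distribution of outcomes for rare failures. `run_journey` below is a stand-in for a real call to a voice bot, with an invented 0.2 percent failure rate purely to make the sketch runnable; nothing here reflects Pump CX's actual tooling.

```python
import random
from collections import Counter

def run_journey(seed: int) -> str:
    """Stand-in for one end-to-end call to a non-deterministic voice bot.
    The 0.2% failure rate is an invented illustration."""
    rng = random.Random(seed)
    return "resolved" if rng.random() > 0.002 else "hallucination"

def stress_test(journey, runs: int = 1000) -> Counter:
    """Replay a single journey `runs` times and tally the outcomes,
    since one clean run proves nothing about run 505."""
    return Counter(journey(seed) for seed in range(runs))

outcomes = stress_test(run_journey)
failure_rate = 1 - outcomes["resolved"] / sum(outcomes.values())
```

The point of the tally is exactly Evans' warning: a journey that looks perfect for 500 runs can still hide a rare hallucination, so the confidence comes from the outcome distribution, not from any single pass.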
Watch the full discussion here.
What it takes to build and roll out an agentic analytics platform: data, tiered governance and change management
Rajesh Sura, currently head of data engineering and analytics for North America stores at Amazon, joined the series to talk about his experience rolling out agentic analytics platforms. The barrier, he said, is not technology but data. "When you deploy enterprise-level AI solutions at scale into production, the first biggest thing you will realize very quickly is AI is not the hard part – the data is. Data quality issues which were acceptable at the time of human analysis become blockers for AI. AI simply starts exposing every flaw in your data foundation." His approach is to reframe agentic projects: "We took a step back and rethought the problem. Instead of saying 'let's build AI', we started saying 'let's build a data foundation that AI can actually trust and operate on'."
For Sura, tiered governance is the best framework for agentic AI deployments, and he offered a practical governance template for practitioners embarking on their agentic journey: "Not all decisions are equal. Tier 1 is fully autonomous – low risk, high frequency decisions, like generating a standard weekly report. Tier 2 is human in the loop – medium risk decisions like recommending a pricing change. AI produces the insights, but we require human approval before an action. Tier 3 is AI assistance only – very high-risk, high-strategic decisions. AI provides the analysis, but a human makes the final call." Explainability is also essential when working with agentic AI. "Every autonomous decision must answer three questions: What did you decide? Why did you decide that – can you explain the reasoning chain? And what data did you use, including the quality scores of those data sources? So end users understand: is this a definitive output, or is there logical reasoning that AI has applied on top of it?"
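Sura's three-tier template maps naturally to a routing layer that decides whether an agent's decision executes, waits for approval, or stays analysis-only. The sketch below is a minimal illustration of that routing; the decision names and the mapping of examples to tiers are taken from his description, while everything else is assumed.

```python
from enum import Enum

class Tier(Enum):
    AUTONOMOUS = 1     # Tier 1: low risk, high frequency (e.g. weekly report)
    HUMAN_IN_LOOP = 2  # Tier 2: medium risk (e.g. pricing recommendation)
    ASSIST_ONLY = 3    # Tier 3: high-risk, strategic decisions

def route(decision: str, tier: Tier) -> str:
    """Route an agent's decision according to its governance tier."""
    if tier is Tier.AUTONOMOUS:
        return f"execute: {decision}"
    if tier is Tier.HUMAN_IN_LOOP:
        return f"queue for approval: {decision}"
    return f"analysis only: {decision}"
```

In a real platform the routing would also attach the explainability record Sura insists on (the decision, the reasoning chain, and the data sources with their quality scores), but the tier check is the gate everything else hangs off.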
Change management is notoriously difficult and is, in most cases, more difficult than the technology itself. Drawing on his experience, Sura said "the biggest challenge is organizational, not technical. We've had to change incentive structures. You can't reward people for speed of report generation when AI does that automatically. You need to figure out how to rightly reward people so they're embracing AI and bringing more strategic value to your business." Communication must be a focus point for any teams implementing AI. "People just don't trust what they don't understand. You need to make sure change management starts from day one. You need to help them understand what AI is, how you will be leveraging it in the next 3 months, and get them prepared. You cannot just go in one fine morning and say, 'from today you will start using AI tooling'." Also helpful is identifying internal advocates. Sura said of this "We identify early adopters and make them our advocates. Bring success stories from them and share those with a broader audience within their job family or role because you can drive organic adoption when your peer is talking about it, rather than a tech team coming in and presenting about it."
Watch the full session here.
Quick links
Key insights from All Access: AI Revolution in CX and Contact Centers
The Future of Customer Listening: dynamic, active, and AI-powered
CX is chasing ROI – but still thinking reactively