There is a number that appears in almost every voice AI business case. It is presented as proof of success. It is celebrated in QBRs, included in board reports and used to justify expanding automation programs across more contact center queues.
That number is the containment rate.
And it is quietly misleading you.
Containment does not measure whether your customer's problem was solved. It measures whether they gave up trying to reach a human.
What containment actually means
When your voice AI achieves a 70 percent containment rate, it means 70 percent of callers did not transfer to a live agent. That sounds like progress. It might even feel like progress if you are primarily measuring cost per call.
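To make the definition concrete, here is a minimal sketch of how containment is typically computed. The record shape is hypothetical; the point is that the metric looks at exactly one field:

```python
# Minimal sketch of the standard containment calculation.
# `calls` is a hypothetical list of call records; the only field
# the metric inspects is whether a transfer to a human occurred.
calls = [
    {"call_id": "c1", "transferred": False},  # resolved by the AI
    {"call_id": "c2", "transferred": False},  # caller gave up and hung up
    {"call_id": "c3", "transferred": True},   # escalated to an agent
]

contained = [c for c in calls if not c["transferred"]]
containment_rate = len(contained) / len(calls)

# 67% - and the metric cannot distinguish c1 (a real resolution)
# from c2 (a silent failure)
print(f"Containment rate: {containment_rate:.0%}")
```

Note that c1 and c2 are indistinguishable to this calculation, which is precisely the problem the rest of this piece describes.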
But consider what is actually happening in those contained calls. Some customers genuinely resolved their issue – the AI answered their question, completed their transaction, gave them what they needed. That is a success worth measuring.
Others hit a wall. They could not get to the information they needed. The AI did not understand them, or understood them but could not help, or gave them an answer that was close but not quite right. They hung up. They did not transfer – so the system logged a containment. But they will call back tomorrow. Or they will not call back at all, and you will wonder why churn is rising in that customer segment.
The containment rate cannot tell you which of those two groups is larger. It does not know. It does not care. It just counts calls that ended without a transfer.
How we ended up measuring the wrong thing
Containment made sense as a metric in the IVR era. The goal of early automated phone systems was to reduce demand on human agents – to deflect simple calls so that more complex ones could receive proper attention. In that context, containment was a reasonable proxy for efficiency. If the system handled the call, it was cheaper than a human handling it.
Voice AI is capable of something categorically different. It can understand intent, hold context across a conversation, take action in back-end systems and resolve issues end to end. It is not a deflection tool. It is a resolution engine. And we are still measuring it as if it were a deflection tool.
The result is that many organizations are optimizing hard for a metric that actively conceals the quality of their customer experience. High containment can coexist with high frustration. The numbers look right. The customers feel wrong.
What true resolution looks like
A resolved interaction is one where the customer achieved what they called to achieve, without needing to call back, without needing to transfer, and without leaving the conversation feeling like they got the runaround.
Measuring resolution requires asking a different set of questions:
- Did the customer get a definitive answer?
- Did the AI take the action the customer needed, not just inform them an action was possible?
- If the customer called about a billing dispute, was the dispute resolved, or were they told to call back during business hours?
- If they called to change their delivery address, was the address changed, or were they given instructions for how to change it themselves?
These are harder questions to answer than containment. They require integrating your voice AI data with your CRM, your case management system, your repeat contact analysis. They require tracking what happens after the call, not just whether a transfer occurred during it.
But they are the questions that actually tell you whether your automation investment is delivering value to customers, not just reducing costs on a spreadsheet.
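One way to operationalize that join is sketched below. Everything here is illustrative: the record shapes, field names and the `action_completed` flag are hypothetical stand-ins for whatever your voice AI platform and case management system actually expose. The contrast it demonstrates, though, is the real one: the same calls can score 100 percent on containment and 50 percent on resolution.

```python
# Hypothetical records: AI call logs and the case-management outcomes
# they map to. In practice these would come from the voice AI platform
# and the CRM respectively.
ai_calls = [
    {"call_id": "a1", "customer": "cust-1", "intent": "address_change", "transferred": False},
    {"call_id": "a2", "customer": "cust-2", "intent": "billing_dispute", "transferred": False},
]
case_outcomes = {
    "a1": {"action_completed": True},   # the address was actually changed
    "a2": {"action_completed": False},  # customer was told to call back
}

def is_resolved(call: dict) -> bool:
    """Resolved = the requested action was completed, not merely contained."""
    outcome = case_outcomes.get(call["call_id"])
    return outcome is not None and outcome["action_completed"]

resolution_rate = sum(is_resolved(c) for c in ai_calls) / len(ai_calls)
containment_rate = sum(not c["transferred"] for c in ai_calls) / len(ai_calls)

# Both calls were "contained", but only one issue was resolved.
print(f"Containment: {containment_rate:.0%}, resolution: {resolution_rate:.0%}")
```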
The metrics shift is also a commercial shift
There is a secondary effect that CX leaders often overlook. When you optimize for containment, you are inadvertently training your AI program to prioritize deflection over resolution. The models and flows that score well are the ones that end calls efficiently, not necessarily the ones that serve customers well.
When you shift to measuring resolution, your entire improvement loop changes. You start asking: which types of interactions have low resolution rates, and why? Is it a capability gap – the AI cannot access the right system? Is it a conversation design problem – the AI is not asking the right questions? Is it a trust gap – customers do not believe the AI's answer and want a human to confirm it?
Each of those is a specific, fixable problem. Containment rate cannot surface any of them.
A practical starting point
For CX leaders who want to begin this transition, the most immediate step is to add a repeat contact dimension to your voice AI reporting. Track how many customers who were 'contained' called back within 48 or 72 hours on the same issue. That single addition will tell you more about your actual resolution rate than any containment figure.
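The repeat-contact check described above can be sketched directly. The field names and 72-hour window below are assumptions, not a standard; in practice the records would come from your voice AI logs joined with inbound call history:

```python
from datetime import datetime, timedelta

# Hypothetical call records: timestamp, customer ID, issue category,
# and whether the call was logged as contained.
calls = [
    {"customer": "cust-1", "issue": "billing", "time": datetime(2024, 5, 1, 9, 0), "contained": True},
    {"customer": "cust-1", "issue": "billing", "time": datetime(2024, 5, 2, 14, 0), "contained": True},
    {"customer": "cust-2", "issue": "delivery", "time": datetime(2024, 5, 1, 10, 0), "contained": True},
]

WINDOW = timedelta(hours=72)  # assumed callback window; tune to your business

def repeat_contact_rate(calls: list[dict]) -> float:
    """Share of contained calls followed by another call from the same
    customer, on the same issue, within the window."""
    contained = [c for c in calls if c["contained"]]
    repeats = 0
    for c in contained:
        if any(
            other is not c
            and other["customer"] == c["customer"]
            and other["issue"] == c["issue"]
            and timedelta(0) < other["time"] - c["time"] <= WINDOW
            for other in calls
        ):
            repeats += 1
    return repeats / len(contained) if contained else 0.0

print(f"Repeat contact rate: {repeat_contact_rate(calls):.0%}")
```

Here cust-1's first billing call looks contained on its own, but the callback 29 hours later flags it as an unresolved contact – the signal containment alone never surfaces.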
From there, build toward post-call surveys that ask specifically about resolution – not satisfaction, which is different – and integrate those results with your AI interaction logs. The patterns that emerge will almost certainly surprise you.
The goal is not to make containment disappear as a metric. Cost efficiency still matters. But it should sit downstream of resolution, not in front of it. An interaction that resolves the customer's issue and is efficient is the standard worth setting. An interaction that is efficient but resolves nothing is not a success. It is a deferred problem.
The industry needs a new benchmark
The voice AI platforms entering enterprise contact centers today are genuinely capable of achieving resolution at scale. The technology is no longer the limiting factor. What is limiting progress is the institutional habit of measuring deflection and calling it success.
The contact center industry has a long history of defining metrics that are easy to collect rather than meaningful to act on. Average handle time. First call resolution (which has always been harder to measure than vendors claim). Containment.
The leaders who will build the most differentiated customer experience in the next three years will be those who made the decision today to stop celebrating containment and start demanding resolution. The technology is ready. The question is whether the measurement culture is.