When your customer is a machine: Rethinking service design for AI agents
Are you prepared for machine customers? Sue Duris outlines six things all CX leaders need to do now
Sarah's AI agent wakes her up at 6:47am with a notification: “I've reviewed your mortgage against 47 lenders overnight. You're overpaying by $340/month. I've started three applications and scheduled calls with the top two. Approve?”
She taps “yes” while making coffee. By lunchtime, her agent has negotiated rates with two banks, flagged a better deal on her home insurance, and transferred her emergency fund to a higher-yield account.
Sarah never visited a bank website. Never compared rates in a spreadsheet. Never sat on hold. Her AI agent did it all.
This isn't science fiction. It's happening now – and most brands have no idea how to handle it.
The silent revolution in customer behavior
Gartner predicts that by 2028, 15 percent of day-to-day work decisions will be made autonomously by AI agents. But in customer experience, the shift is already here.
Consumers are using AI as an invisible layer between themselves and brands. Almost 60 percent of customers now use AI tools to help them make purchase decisions. They're drafting complaint emails, scripting service calls, comparing products, and negotiating terms – all before a brand even knows they're being evaluated. And according to Accenture, 55 percent of consumers would be open to using generative AI agents to purchase for them.
So, with machine customers here, is your service design prepared to work with them?
What breaks when empathy doesn't matter
Traditional service design is built on human psychology. Journey maps trace emotional states. Personas capture motivations. Friction is removed to reduce frustration.
But an AI agent doesn't get frustrated waiting on hold. It doesn't feel delight at a personalized offer. It doesn't form emotional attachment to your brand voice.
As Don Scheibenreif, Vice President – Distinguished Analyst, CIO and AI Leaders Research at Gartner, notes: “They are logical and will make decisions based on rules that may or may not be transparent. Second, they can also process large amounts of information. Lastly, machines focus on completing tasks efficiently and without emotion, and they can't be influenced by being ‘wine and dined’.”
This creates fundamentally different challenges depending on your industry.
Financial services: When loyalty becomes purely transactional
Banks have spent decades – and billions – building “trusted advisor” relationships. They've designed branches as warm spaces, trained staff in emotional intelligence, created loyalty programs that reward tenure.
None of this matters to an AI agent optimizing for its human principal.
The agent cares about machine-readable product terms, API response times, transparent fee structures, and instant execution. When Sarah's AI agent can switch her mortgage in an afternoon, what does “customer loyalty” even mean?
Traditional loyalty programs won't work.
Points, tiers, status – these are psychological nudges that don't affect AI decision-making. An AI agent doesn't care about gold status. It cares about the net present value of the relationship.
Barclays has started to recognize this shift. They've begun publishing structured API documentation for common products, allowing third-party apps to query rates and terms programmatically. But this is still early – most banks treat their APIs as afterthoughts, not as primary customer interfaces.
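To make that concrete, here's a minimal sketch of what a machine-readable product feed could look like – the endpoint, field names, and values are all hypothetical, not Barclays' actual API:

```typescript
// Hypothetical shape for a machine-readable mortgage product feed.
// Field names and values are illustrative, not a real bank API.
interface MortgageProductTerms {
  productId: string;
  aprPercent: number;           // annual percentage rate, e.g. 5.12
  fixedTermMonths: number;      // length of the fixed-rate period
  arrangementFee: number;       // upfront fee in the account currency
  earlyRepaymentCharge: string; // e.g. "2% of balance in year 1"
  lastUpdated: string;          // ISO 8601 timestamp
}

// An AI agent comparing lenders needs little more than a fetch and a sort.
async function rankByRate(feedUrl: string): Promise<MortgageProductTerms[]> {
  const res = await fetch(feedUrl);
  if (!res.ok) throw new Error(`Feed unavailable: ${res.status}`);
  const products: MortgageProductTerms[] = await res.json();
  return [...products].sort((a, b) => a.aprPercent - b.aprPercent);
}
```

When product terms are published in a structure like this, comparison stops being a research project and becomes a single function call – which is exactly why most banks haven't done it yet.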
The service design question becomes: How do you create loyalty when switching costs approach zero?
The answer might be what I call “functional loyalty” – API reliability, transparent pricing, agent-friendly policies, and data portability. The irony: true loyalty might emerge from making it effortless to leave. When switching is trivial, brands that retain customers must be genuinely offering the best value.
Healthcare: When your gatekeeper meets an AI advocate
If financial services faces a loyalty crisis, healthcare faces something more fundamental: a power shift.
Healthcare service design relies on patients not fully understanding the system. That complexity isn't a flaw – it's intentional. Patients are guided through processes designed to keep them dependent on expert navigation.
Then, an AI agent shows up that does understand the system – and is programmed to advocate relentlessly for its human.
Picture this: James has chronic back pain. His AI agent parses his insurance policy, cross-references symptoms against treatment databases, identifies that his GP's recommended approach has a 34 percent success rate for his condition while a specialist's approach has a 72 percent success rate, books the appointment, drafts an appeal when insurance denies coverage, and escalates through three levels until approved.
James never navigated the insurance maze. His AI agent did the advocating.
This breaks the current healthcare service model in several ways:
First, it eliminates the effectiveness of gatekeeping. Insurance companies rely on patients giving up when faced with denial letters. AI agents don't give up.
Second, it exposes quality gaps. When agents can easily compare provider outcomes, hospitals that rely on information asymmetry face a reckoning.
Third, it creates an arms race. If patients have AI advocates, do insurers deploy AI gatekeepers? Do they become more transparent to avoid the adversarial dynamic?
UnitedHealthcare's recent moves are instructive. After facing criticism for AI-driven claims denials, they've started publishing more structured coverage criteria – for transparency, but also because they know AI agents will be parsing these policies regardless.
The principal-agent problem no one's talking about
Here's where it gets messy: when an AI agent acts on behalf of a human, who's really the customer?
Sarah's mortgage AI agent optimized for the lowest rate. But what if Sarah values the bank where she's been a customer for 15 years, or prefers a provider with better customer service ratings even if it costs slightly more? The AI agent doesn't know these preferences unless Sarah explicitly programs them in – and most people won't.
This creates a fundamental tension. Brands are used to understanding customers through behavior, stated preferences, and relationship history. But when an AI agent intermediates, all of that context disappears.
The brand sees an agent optimizing for specific parameters, not the nuanced human behind it.
The question becomes: do you serve the agent's optimization function, or try to communicate value that might matter to the human principal? Get it wrong, and you either lose the business or erode trust with the actual customer.
The organizational readiness gap
Here's what makes this genuinely difficult: most CX teams are structured around capabilities that become less relevant.
Skills that matter less:
- Empathy and emotional intelligence (still needed for humans, not for agents)
- Qualitative research methods (AI agents don't sit for interviews)
- Journey mapping focused on emotional states
- Psychological nudging and behavioral design
Skills that become critical:
- Structured content design – creating documentation AI can reliably interpret
- API-first service design – treating programmatic access as primary
- Plain language policy writing – eliminating ambiguity
- Systems thinking – connecting internal operations to customer-facing outcomes
This isn't just a technical challenge. It's a capability crisis. Many CX professionals will need to develop skills they never expected to need.
Service design starts to look more like information architecture.
According to Nextiva's 2025 CX Trends report, 81 percent of companies admit they could improve CX by consolidating customer data from all touch points into one system. When the majority can't even handle basic data integration, how prepared are they for AI agents that can parse multiple systems simultaneously and exploit every gap in service design?
The hybrid reality nobody wants to talk about
You can't just optimize for machine customers and ignore humans. For the foreseeable future, you'll have all of these simultaneously:
- Humans interacting directly
- Humans using AI as research tools
- AI agents with human oversight
- Fully autonomous agents
- Mixed journeys where humans start and AI finishes
You need service design that works for all of these.
Your website needs to be compelling for humans and machine-readable for AI. Your service team needs to handle emotional calls and bot-submitted escalations. Your pricing needs to be psychologically appealing and programmatically transparent.
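One practical pattern is to serve both audiences from the same page: human-facing copy plus embedded structured data. A minimal sketch, using schema.org vocabulary with an invented product:

```typescript
// One page, two audiences: human-readable copy plus embedded schema.org
// JSON-LD an agent can parse directly. The product and terms are invented.
const productJsonLd = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Everyday Savings Account",
  description: "Instant-access savings with no monthly fee.",
  offers: {
    "@type": "Offer",
    price: "0.00",
    priceCurrency: "USD",
  },
};

const page = `
<h1>Everyday Savings Account</h1>
<p>No monthly fee. Withdraw any time.</p>
<script type="application/ld+json">
${JSON.stringify(productJsonLd, null, 2)}
</script>`;

console.log(page);
```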
But here's the pattern most organizations will see: the verification moment. The AI agent does the research, runs the comparisons, drafts the communication, then presents it to the human for approval. Sarah tapped “yes” on her mortgage recommendation. James reviewed his AI agent's insurance appeal before it was sent.
This is the critical handoff point that most brands haven't designed for. Do you communicate with the AI agent? The human? Both? When Sarah's AI agent presents three mortgage options, should your bank send additional context directly to Sarah, or trust that the AI agent has presented the relevant factors?
Design for the wrong moment, and you're either bombarding humans with information their AI agent already processed, or you're losing the chance to influence the decision before the human approves it.
Most organizations are trying to retrofit AI-compatibility onto human-centric systems – and it shows.
What CX leaders should do now
You don't need to rebuild everything overnight. But you should start to:
Audit for machine-readability. Take your three most common customer journeys. Could an AI agent complete them using only your documentation and public interfaces? Where would it get stuck? Those are your friction points.
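Part of that audit can be automated. A minimal sketch – the URLs are placeholders for the steps in one of your own journeys – that flags pages an agent would find opaque:

```typescript
// Minimal machine-readability smoke test for one customer journey.
// The URLs are placeholders; substitute the real steps of your journey.
const journeySteps = [
  "https://example.com/products/checking",
  "https://example.com/fees",
  "https://example.com/apply",
];

async function auditStep(url: string): Promise<string> {
  const res = await fetch(url);
  const contentType = res.headers.get("content-type") ?? "";
  if (contentType.includes("application/json")) {
    return "machine-readable (JSON response)";
  }
  const body = await res.text();
  if (body.includes("application/ld+json")) {
    return "machine-readable (embedded JSON-LD)";
  }
  return "HUMAN-ONLY: an agent would likely get stuck here";
}

journeySteps.forEach(async (url) => {
  console.log(`${url} -> ${await auditStep(url)}`);
});
```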
Map delegation likelihood. What are customers most likely to hand off to AI? Price shopping, routine reordering, subscription management, complaint drafting, appointment scheduling. Design these to work flawlessly for AI first.
Create structured content. Convert policies, product specifications, and FAQs into formats AI can reliably parse. Plain language. Unambiguous terms. Consistent structure. This helps humans too.
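To illustrate the difference – with invented terms – here's a refund policy expressed as data an agent can act on, rather than prose it has to interpret:

```typescript
// A refund policy expressed as data instead of prose. Every term is
// explicit and unambiguous; the specific values here are invented.
interface RefundPolicy {
  windowDays: number;           // days after delivery a return is accepted
  itemCondition: "unused" | "any";
  refundMethod: "original_payment" | "store_credit";
  restockingFeePercent: number; // 0 means no fee
  exclusions: string[];         // categories that cannot be returned
}

const refundPolicy: RefundPolicy = {
  windowDays: 30,
  itemCondition: "unused",
  refundMethod: "original_payment",
  restockingFeePercent: 0,
  exclusions: ["gift_cards", "perishables"],
};
```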
Ask the liability questions now. If an AI agent books the wrong appointment, orders the wrong product, or accepts unfavorable terms, who's responsible? You don't need all the answers yet, but you need to be thinking about how your refund policies, complaint processes, and terms of service handle AI-mediated transactions. Start those conversations with legal and product teams now.
Rethink your metrics. CSAT and NPS measure human satisfaction. What's the equivalent for AI agents? Task completion rate, API uptime, time-to-resolution, escalation frequency. You'll need both sets of metrics.
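As a sketch of what an agent-facing scorecard might track – the metric names and structure here are illustrative, not an industry standard:

```typescript
// Illustrative agent-facing service metrics, tracked per journey.
// Names and structure are a sketch, not an established standard.
interface AgentMetrics {
  tasksAttempted: number;
  tasksCompleted: number;   // finished without human intervention
  escalations: number;      // handoffs to a human channel
  apiUptimePercent: number; // availability of programmatic interfaces
  medianResolutionSeconds: number;
}

const taskCompletionRate = (m: AgentMetrics): number =>
  m.tasksAttempted === 0 ? 0 : m.tasksCompleted / m.tasksAttempted;

const escalationFrequency = (m: AgentMetrics): number =>
  m.tasksAttempted === 0 ? 0 : m.escalations / m.tasksAttempted;
```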
Think functionally about loyalty. If switching becomes trivial, what makes you genuinely better? Not emotionally appealing – better in ways that matter when AI is optimizing. Price? Speed? Reliability? Transparency?
The trust transfer problem
Here's something that should keep brand leaders up at night: for decades, you've built trust directly with customers. Marketing created awareness, sales built relationships, service reinforced loyalty. The brand owned the customer relationship.
But when an AI agent intermediates, that direct relationship fractures.
The human trusts their AI agent to act in their interest. The AI agent evaluates your brand based on structured data, API reliability, and optimization parameters. Where does brand trust fit in this equation?
Some brands will try to earn the AI agent's “trust” through perfect technical execution – flawless APIs, transparent data, instant responses. Others will try to reach around the AI agent to communicate directly with humans. Both approaches have merit, but they require fundamentally different strategies.
The brands that figure this out will recognize they're not building one relationship anymore – they're building two.
One with the AI agent (based on functional excellence), and one with the human principal (based on values, brand affinity, and trust). And these two relationships must work in concert, not in tension.
The organizations that will win
The brands that thrive won't be the ones with the most empathetic customer service or the most emotionally resonant branding – though both still matter for human customers.
They'll be the ones that make their entire operation AI-native: transparent, structured, efficient, reliable.
They'll recognize AI agents aren't a threat to customer relationships – they're a new type of customer relationship.
And they'll stop designing services around the assumption that friction and complexity create defensible moats. Because when your customer is a machine, friction is just lost business.
Sarah's AI agent is already shopping. Will it find what it's looking for when it arrives at your door?