This is a column by CX Network Advisory Board member Ashlea Atigolo on empathetic AI and the importance of emotional intelligence.
With generative artificial intelligence (AI) agents and systems now handling everything from refunds to mortgage approvals, chatbot missteps can no longer be dismissed as mere technical blips. They’re a sign that we’ve been asking the wrong question. Instead of “How smart can this be?” we need to ask, “Does this feel right?”
As someone who has spent years designing conversational AI assistants for highly‑regulated industries, I’ve seen tools evolve from simple chatbots to systems that summarise entire meetings.
What’s stuck with me isn’t the lines of code, but the reactions of the people using these tools. Watching their moments of frustration, confusion and relief up close has led me to a single conclusion: for meaningful, personal customer experiences, emotional intelligence and trust must be built into the core of every AI system we design and use.
Why “smart” isn’t enough
Generative models and agentic AI have unleashed a wave of automation. Articles, emails and even legal briefs can be produced in seconds. Contact‑center bots reset passwords and change delivery addresses without human intervention. But speed does not equal satisfaction.
According to an article on CX Network, most people still want to talk to a person:
More than 90 percent of US consumers prefer human agents for customer interactions, according to a recent Kinstra survey. A Five9 survey found that 75 percent prefer talking to a real human in person or over the phone for customer support.
One commentator within CX Network’s report went as far as to warn companies that they risk “losing customers if they replace humans with chatbots.”
Why the skepticism? When bots speak without empathy, they can sound uncaring or, worse, mechanical. A 2024 survey quoted by Zendesk found that 71 percent of customers believe AI can make service more empathetic, and 67 percent want AI that adjusts its tone depending on how they feel.
What empathetic AI means
“Empathetic AI” isn’t some fluffy concept. It describes sophisticated systems that recognize emotional cues and sentiment and respond in ways that feel considerate and supportive.
In my consulting work, I’ve seen how well‑designed AI can de‑escalate anger and make users feel heard. Unfortunately, I still regularly witness chatbots ploughing through rigid, scripted flows and completely missing the user’s mood.
With AI companies now developing what industry experts call "empathy technologies" that detect nuanced emotional states, I believe successful customer experience will need to evolve.
This means embracing unfiltered storytelling, emotional transparency and authentic community. It means ditching polished corporate scripts and letting users see the people behind the brand.
When my co-founder and I built our first financial‑AI assistant, we quickly realized we couldn’t start with the technology. We had to map the emotional journey of as many users as possible. So we asked ourselves questions like: What does a confused investor need to hear? How do we calm someone who is anxious?
Only then did we build the algorithms and frameworks behind our AI systems. We also discovered that building empathy requires infrastructure: real‑time feedback loops to learn from interactions, escalation points for complex cases, emotion‑detection modules, guardrails to prevent hallucinations, and processes for obtaining AI consent. Without these layers, generative AI models can hallucinate, deliver an inconsistent tone or expose your business to future claims over how user or customer data was used.
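As an illustrative sketch only (the labels, thresholds and function names are my own assumptions, not our production system), the layers described above — emotion detection, escalation points and guardrails — might be wired together like this:

```python
from dataclasses import dataclass

# Hypothetical labels an emotion-detection module might return;
# any real sentiment model or service could be substituted here.
NEGATIVE_EMOTIONS = {"anger", "anxiety", "frustration"}

@dataclass
class Interaction:
    user_message: str
    detected_emotion: str  # output of the emotion-detection module
    confidence: float      # model confidence in the drafted answer

def route(interaction: Interaction) -> str:
    """Decide how to handle a message: escalate, clarify, or reply."""
    # Escalation point: strong negative emotion goes to a human agent.
    if interaction.detected_emotion in NEGATIVE_EMOTIONS:
        return "escalate_to_human"
    # Guardrail: low-confidence answers are never sent unreviewed,
    # limiting the hallucination risk mentioned above.
    if interaction.confidence < 0.7:
        return "ask_clarifying_question"
    return "respond_with_adjusted_tone"
```

The feedback loop would then log each routing decision and its outcome so the thresholds can be tuned from real interactions over time.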
AI consent: Protecting both users and your business
These technical safeguards work hand in hand with ethical practices, particularly around AI consent. Improving customer experiences with AI chatbots or AI agents often means collecting more data from the user, so one important factor in the CX journey is obtaining AI consent from your users.
What does AI consent mean? Simply put, it’s about transparency and permission. Users should know they’re talking to AI and understand how their data will be handled. By securing explicit AI consent upfront, organizations respect user privacy, build the trust that drives better customer relationships, and can prove that users agreed to their information being used with AI.
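A minimal sketch of what “prove that users agreed” could mean in practice — the field names here are illustrative assumptions, not a standard or a specific product:

```python
from datetime import datetime, timezone

def record_ai_consent(user_id: str, purpose: str) -> dict:
    """Create a timestamped consent record before any AI processing begins."""
    return {
        "user_id": user_id,
        "purpose": purpose,            # e.g. "AI-assisted support chat"
        "disclosed_ai": True,          # user was told they are talking to AI
        "granted_at": datetime.now(timezone.utc).isoformat(),
    }

# Stored alongside the conversation, this record documents that the
# user knew AI was involved and consented to that specific purpose.
record = record_ai_consent("user-123", "AI-assisted support chat")
```

In a real deployment this record would live in an auditable store and be checked before each AI interaction, not just at signup.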
Learning from missteps and success stories
The Air Canada case demonstrates what happens when these principles aren’t followed. In this widely reported case, the airline’s chatbot told a bereaved passenger that they could apply for a refund. The statement was incorrect, and the airline later argued the bot’s answers were not legally binding. The court disagreed, ruling that what the AI said amounted to a representation from the company. The brand lost not just the case but also customer trust.
Another misfire I’ve seen involved a financial bot that confidently offered complex tax advice – well outside its remit. When challenged, the provider said the bot wasn’t authorized to give advice. Tell that to the customer who followed its instructions and filed incorrectly.
Positive examples show how empathetic AI can improve experiences. Take H&M’s virtual assistant. Instead of just parroting policy, it guides, suggests and adapts using context from the customer’s search and purchase history. Customers report that it feels more like a helpful store associate than a robot. Forbes recently noted that AI works best when used as a copilot.
This means that the AI handles repetitive tasks while humans focus on high‑value interactions where empathy and problem‑solving are essential. When AI summarizes interactions and automates note‑taking, agents can concentrate on delivering personalized service.
Surveys show that when companies over‑automate, customers notice. A recent CX Network article reported that a Five9 survey found that nearly half of consumers distrust information from AI bots. Another CX report warned that going all‑in on automation at the expense of human connection leads to declining satisfaction and long‑term loss of loyalty. The best outcomes come from blending human insight with machine intelligence.
Practical steps for CX leaders
So how do we build CX systems that feel human?
Start with emotion, not technology: Map out your customer’s emotional journey by understanding how customers feel at every stage and let these insights guide your CX design choices.
Ask emotional questions: For every interaction, ask what your customer is feeling (anxiety, anger, excitement, etc.) and define the response needed, both from people and AI systems.
Adopt AI‑aware governance: Develop internal policies and training that help staff understand how empathetic AI works. Assign someone to oversee AI ethics and ensure the responsible use of technology.
Maintain the human touch: Use AI for routine or repetitive tasks, but always provide customers with a clear and easy path to reach a human agent for complex or emotional needs.
Empower agents with real‑time insights: Give your human agents tools that offer live customer insights and emotional cues, so agents can respond empathetically in the moments that matter most.
Personalize every interaction: Use data from previous interactions to personalize responses, making each customer feel recognized and valued, not like just another case or ticket.
Solicit customer feedback continuously: Build customer satisfaction (CSAT) measurement into the user journey, then analyze and act on feedback to ensure CX systems remain aligned with customers’ actual emotional needs.
Promote transparency and honesty: Be clear with customers when they are interacting with AI, and set honest expectations about what the technology can and cannot do.
A human‑first future
I believe AI will continue to reshape CX. Voice notes, emotion‑aware summaries and generative reports are becoming mainstream across finance, healthcare and retail. But the organizations that succeed won’t be those who bolt generative models onto old processes. They’ll be the ones who build empathy into their AI from the ground up.
CX leaders have a choice: Embrace empathetic AI or risk becoming the next cautionary tale. For me, the choice is obvious. I want my technology to feel like a caring colleague, not a cold server rack. I want my AI to enhance loyalty and avoid the pitfalls of over‑automation. When automated systems act as your company's voice, they should demonstrate the same professionalism and care that defines your best customer interactions.