Last week, a colleague showed me their company's new AI chatbot with obvious pride. Within three interactions, it confidently told a frustrated customer that their legitimate complaint was "outside our policy parameters" and suggested they "explore our FAQ section". The customer had already read the FAQs. They needed a human. Instead, they got algorithmic stonewalling dressed up as innovation.
I've seen this movie before. In 1999, I watched companies spend millions on websites that were essentially digital brochures because "everyone needs to be online". The technology was real. The imperative was real. But the execution was often disastrous, driven more by fear of being left behind than by genuine strategy. Many of those companies didn't survive the correction that followed.
We're living through a remarkably similar moment with artificial intelligence (AI) in customer experience, and it's time for CX leaders to learn from history rather than repeat it.
The pattern is unmistakable
The parallels between the dot-com boom and today's AI frenzy are hard to ignore. In the late 1990s, every business "needed" a web presence – strategy optional. Consultants sold six-figure website projects to companies that barely understood email. Investors threw money at anything with ".com" in the name. Sound familiar?
Today, "AI transformation" has become the new corporate imperative. Boards demand AI strategies. Vendors promise revolutionary results. LinkedIn is flooded with AI success stories that conveniently omit the disasters. And just like companies once added "e-" to their name for credibility, now every product is "AI-powered," whether the AI adds value or not.
The underlying mistake is identical: implementing technology before understanding the problem it should solve.
The dot-com era gave us websites that replicated print catalogues online with no thought to user experience. Today, we're getting chatbots that replicate the worst aspects of phone trees, and "personalisation engines" that creepily reference your browsing history without actually understanding your needs.
Both eras share the same FOMO-driven adoption curve, the same consulting gold rush, and the same willingness to declare victory based on implementation rather than outcomes. We're not asking, "Did this improve our customers' lives?" We're asking, "Do we have AI yet?"
But this time, the stakes are higher
Here's what's different, and more dangerous: dot-com failures usually meant wasted money and bad websites. AI failures in customer experience can fundamentally damage trust and relationships at scale.
When an AI system hallucinates product information, mishandles sensitive customer data, or makes decisions with hidden bias, the consequences ripple far beyond a single interaction. When we automate empathy badly, we don't just frustrate customers—we signal that we don't value them enough to involve actual humans in their problems.
The technology itself is also more powerful and more opaque. A bad website was obviously bad. A bad AI system can seem impressively competent while making subtle errors that erode customer confidence over time. It can fail in ways that are harder to detect, harder to explain, and harder to fix.
This is precisely why CX leaders cannot sit on the sidelines of the AI conversation. We have both an opportunity and an obligation to shape how this technology gets deployed in our organisations.
The CX leader's responsibility in the age of AI
Customer experience professionals are uniquely positioned to be the voice of responsible AI implementation. We live closest to the customer impact. We see what works versus what sounds good in a boardroom. We understand that technology should serve human needs, not the other way around.
This means we need to become comfortable championing AI where it genuinely improves customer outcomes while pushing back hard on implementations that don't. That dual mandate is uncomfortable but essential.
AI excels at certain CX applications. Analysing sentiment across millions of interactions to identify systemic issues? Brilliant. Routing complex cases to the right specialist based on nuanced context? Extremely valuable. Providing customer service agents with real-time suggestions and information? Often transformative. Handling straightforward, high-volume transactions that customers prefer to self-serve? Absolutely.
But AI as a replacement for human judgment in emotionally charged situations? As a way to avoid staffing contact centres adequately? As a shield against listening to customer feedback? That's where CX leaders need to draw lines.
We must insist on real measurement. Not "we deployed AI" metrics, but outcomes that matter: Did customer satisfaction improve? Did resolution time decrease without sacrificing quality? Are we solving problems or just shifting them? Can we explain to customers how and why AI is involved in their experience?
We need to keep humans in the loop where it matters most. Some customer interactions require empathy, creativity, and judgment that AI simply cannot provide. Our job is to identify those moments and protect them from misguided automation.
Perhaps most importantly, we need to champion transparency and trust. Customers should know when they're interacting with AI. They should have clear paths to human assistance. And we should be honest about what our AI systems can and cannot do, rather than overselling capabilities that lead to disappointment.
A different kind of boom
The dot-com bubble burst because too many companies prioritised having technology over using it well. The survivors were those who focused relentlessly on creating genuine value for users – companies that understood the web was a tool, not a strategy.
The same principle applies now. AI is not a CX strategy. It's a powerful set of tools that, wielded responsibly, can dramatically improve how we serve customers. But only if we're honest about its limitations, thoughtful about its applications, and willing to say no to implementations that sound innovative but deliver poor experiences.
CX leaders didn't cause the current AI frenzy, but we're the ones who will live with the consequences of how it's deployed. That gives us both the authority and the responsibility to advocate for responsible AI—AI that genuinely serves customers rather than just checking a box for the board.
The question isn't whether your organisation will adopt AI in customer experience. It will. The question is whether you'll help ensure it's done in a way that builds trust rather than erodes it, that enhances human connection rather than replaces it, and that solves real problems rather than creating new ones.
That's the lesson from the dot-com era we can't afford to ignore: technology is only as good as the judgment behind its implementation. And in customer experience, that judgment must come from us.