AI and customer trust: Reflections from CCW 2026 Sydney

After chairing CCW 2026 Sydney, Claire Cunningham reflects on how excitement around automation and AI also demands restraint before customer trust takes a hit

Claire Cunningham on stage at CCW Sydney 2026

Customer Contact Week Australia New Zealand took place in Sydney on the 18th and 19th of March. This year’s theme was The Future of Contact: Seamless, Intelligent, Deeply Human.

The theme rang through loud and clear across the two days. Artificial intelligence (AI), automation, and agentic tools are on the top of everyone’s minds, but with that comes a deep concern for not only the changes required of our teams but also how customers are reacting to a quickly changing service landscape. 

While there was so much excitement about automation, the underlying concern was: how far is too far? At what point do speed and ease cross the line into frustration and, eventually, an irreparable erosion of trust?

Rest assured, there was a clear line. 


Stop worrying about being left behind

Across multiple sessions and discussions, it was apparent that organizations are at very different stages of the AI and automation journey.

It was heartening to see that the competition is not necessarily ahead of the curve. No one had perfected it. No one was fully automated. Some hadn’t even begun to use automation tools. Many shared their mistakes and successes. It was clear that everyone is still learning and still trying to get it right.

Many attendees were still in the exploration stages of their AI journey, while speakers were naturally further ahead – but not all, and for good reason!

A standout perspective came from the first panel, where Matthew Wormwold from Fire and Rescue NSW, who leads their Triple Zero call center, shared that, unlike most organizations focused on ROI or customer trust, his team’s constraints are shaped by legislation and an uncompromising need for zero-risk technology.

While they are working to make emergency services more accessible for those who are deaf or hard of hearing, the most impactful automation they’ve introduced is simple: GPS coordinates automatically sent the moment a call is made. It has reduced administrative effort and improved response times significantly.

But when it comes to introducing automation like IVR into emergency calls, the answer is clear: not anytime soon. And the room’s collective reaction said it all: some experiences must remain unmistakably human.

So, if you haven’t implemented AI or automation in your teams yet, don’t panic! The worst thing you can do is rush in and layer technology onto your processes without clear intent and strategy.

That said, many speakers reflected that they wished they had started earlier, not to get it perfect, but to fail faster, learn quicker, and improve over time.

Interestingly, while some organizations chose to build their own tools, many more opted to adopt existing solutions to avoid the cost and complexity of building from scratch. The exhibitor floor was a strong reminder of just how many sophisticated tools already exist and how quickly they’re evolving through real-world use across industries.

What the greatest success stories had in common though was a clear starting point: a genuine intent to solve a real problem.

Successful implementation starts with solving the right problem

The organizations sharing successful journeys worked hard and thought hard about which problem was the right one to solve. Most started right at the heart of the matter with their frontline teams. They asked:

  • What is the most time-consuming task preventing our team from creating more value?
  • What is the most common customer issue or request?
  • Where is the friction for the customer? And where is the friction for the team?
  • What would the ROI actually be?
  • Do we build, or do we buy?
  • What is the smallest environment we can test this in?

Alongside this, they had to weigh up risk. 

If a customer were to encounter an automated tool or bot, what would their reaction be? Finding the right friction point to reduce the risk is key. Many brought in external consultants, sometimes multiple, to pressure test their approach and find the right problem. 

They reduced risk at every turn and took it slow. 

Another common strategy was transparency with customers and teams about the new technology. Reducing the risk also came down to letting customers know they were interacting with AI and could revert to a team member at any point. 

The key foundations of trust were built early and intentionally:

  1. Choose the most frustrating problem
  2. Reduce risk
  3. Be transparent


And interestingly, what consistently captured the room’s attention were the failure-to-success stories. The organizations willing to share their journeys transparently were the ones that sparked the most energy and engagement.

On Day 2, a panel on Humanizing AI featured Sian Howatson from Swyftx, who shared a simple but powerful shift. Adoption of their AI improved significantly once they started treating it like a co-worker rather than a new tool.

Initially, the bot had been introduced as just another tool and engagement was low, with ROI falling short of expectations. But by giving it a (non-human) name and positioning it as an extension of the team, perceptions changed.

Listening to team feedback and adding more humanizing elements made it feel more approachable and ultimately drove much stronger adoption.

Team feedback, cross-collaboration, and communication are crucial

The first person to find a problem with new technology is the customer. 

The second is the frontline team. 

The last is the IT team. 

Across multiple panels, one thing was clear: a swift response to a customer problem built trust rather than eroded it. 

When a customer raises an issue with the frontline, they are speaking to the face of the business. They expect to be heard and see action. 

One delegate in a discussion group shared that their tools now enable real-time updates from IT on issue resolution, significantly improving customer satisfaction.

But this only works if you’re listening in the right places!

Over and over, speakers reinforced that the key to building customer trust with AI is to listen to the frontline team regularly. Wes Self, COO of GMHBA, reminded us that “Those closest to the customer understand them best.” The frontline team can flag what customers aren’t able to verbalize on a survey.

But listening to the team is only half the story. Hearing the feedback is one thing. Actioning it closes the loop. One organization went so far as to train and upskill their frontline agents to code fixes into the AI, enabling faster rectification. 

Almost every speaker, no matter the topic, reiterated that their AI tools would not have been so successful without team feedback. Being able to immediately fix the unforeseen problems that always arise was key to maintaining customer trust while trailblazing. Hint: Slack channels seemed to be the most popular way to stay on top of anything that popped up. 

And then came the reality check.

On a panel about Redesigning the Agent Experience, Kirsty Robinson from TAFE Queensland and Kim Phillips from Tyro Health reminded us that:

  • New tech won’t solve a process problem
  • New tech won’t solve a staffing problem
  • New tech won’t solve a capability problem
  • New tech won’t solve a leadership problem
  • New tech won’t solve a culture problem

New tech can solve a tech problem. 

You have to apply the right solution to the right problem. Throwing tech at a process or human problem won’t solve it.

Humans aren’t going anywhere

The best analogy to come out of the conference came right out of the gate on Day 1.

In a keynote, Win with Trusted Human Centered Intelligence, the CEO of CDM Direct, Kris Ram, reminded us that AI is only as good as the humans who use it.

He likened it to the Iron Man suit. Tony Stark without the suit is just a human with regular abilities. The suit without him is a lifeless hunk of tech. It needs direction, judgment, and a human mind behind it.

This is exactly how we should implement AI in our organizations, not a replacement for humans but something that amplifies them.

The worst thing we could possibly do is deploy a fleet of unmanned Iron Man suits to save the world. They just can’t do it.

We now have the technology to enable humans to be superhuman on the frontline. Customers can have their problems solved faster and more easily, while frontline staff no longer waste time moving appointments, sending emails, or taking notes. They are freed up to spend time with customers who have complex inquiries, without the rush of having to get to the next, possibly low-value task.

Allowing frontline staff the time and ability to use empathy, critical thinking, and problem solving for customers is where they are most effective and provide the most value, both commercially and in terms of customer experience. There is no need for humans to do repetitive, low-skill tasks like rescheduling or canceling an appointment when a robot can do that.

The real magic sits at the intersection of AI capability and human capability, where teams are empowered to use the technology and elevate the moments that require empathy, judgment and connection.

There’s a real concern from customers about job loss, but confidence increases when organizations invest in upskilling their teams, enabling more meaningful human interaction. That’s where trust is built.

Where is the line?

Customer trust in organizations and in AI is fragile.

Customers aren’t sure: 

  • Who they’re talking to
  • Whether their problem will actually be solved
  • If their feedback mattered
  • Whether the person who helped them today will still be there tomorrow if they use the automated option now
  • Whether they’ll be able to get through to a real person when it critically matters, or get stuck in an endless AI argument loop

Those moments that matter to customers are clear. When an issue is emotive and complex, humans need humans. They don’t want a bot. 

The fear is that overuse of these technologies will remove much-needed humans from the equation, and they won’t be available when it matters most.

If I’ve learned anything from my time rolling out a customer experience strategy across self-checkouts, it’s that having a human there to serve you is a much-needed comfort blanket.

Customers are forgiving for a long time. They will tolerate mistakes on your AI journey but they want to know that your organization still values them as humans. 

After you’ve nailed your foundation, an AI strategy that builds customer trust rather than erodes it comes down to:

  1. Listening to the frontline
  2. Fixing mistakes fast
  3. Upskilling the team, instead of losing them
  4. Introducing it thoughtfully while keeping human assistance available

We saw it at CCW. AI and customer trust don’t have to be mutually exclusive. When done right, everyone wins. Organizations that build thoughtfully and intentionally will be the clear winners in the AI race. 
