CX Network research found that consumer demand for data privacy is a top challenge for CX practitioners in 2026, and that consumer awareness of how AI uses their data is the top consumer behavior trend. Similarly, new research from YouGov and Pegasystems has found that 54 percent of customers do not trust businesses to use generative AI responsibly.
Wider developments, such as the emergence of the AI-only Reddit copycat Moltbook, have further highlighted the privacy risks associated with AI. As organizations push ahead with AI deployment across marketing, service and back-end operations, the gap between technological capability and customer trust is becoming harder to ignore.
In this session, we’ll explore what ethical AI looks like in practice, beyond high-level principles. From consent design and explainability to bias mitigation and human oversight, we’ll examine how CX leaders can lead the charge in embedding responsible practices into everyday processes. We’ll also discuss how to communicate AI usage transparently, so customers understand not just that AI is being used, but how and why it benefits them. In an era where personalization and automation depend on data access, trust is no longer just a matter of compliance, but a competitive differentiator.
Attendees will learn:
- How to design customer data and AI strategies that prioritize transparency and accountability.
- Practical approaches to balancing useful personalization with privacy expectations.
- How to turn responsible AI practices into a trust-building advantage rather than a regulatory burden.