Your top 5 FAQs on generative AI and CX, answered

We ask Dan O’Connell, chief AI and strategy officer at Dialpad, to answer some of the most frequently asked questions around the use of generative AI in the world of customer experience

Leila Hawkins
10/11/2023


Everybody in CX has a question about generative AI, whether it’s to do with its application, suitability or capabilities. To answer five of the most frequently asked questions, CX Network sat down with Dan O’Connell, chief AI and strategy officer at Dialpad.

1. How can organizations use large language models (LLMs) and generative AI to drive customer loyalty?

Dan O'Connell: Customer loyalty is of huge importance to businesses. Studies have shown that it costs more to acquire a new customer than it does to retain an existing one.

AI can improve customer intelligence, bolstering customer service agents' ability to provide real-time support and assistance and improving the experience as a whole. By using AI-driven predictive analytics to offer personalized recommendations, businesses can consistently deliver relevant suggestions, creating a sense of trust and loyalty with their customers.

LLMs and generative AI can build on our existing AI-powered customer satisfaction scores (CSAT) by offering an explanation of why a call or interaction was scored a specific value. Specifically, we can highlight and summarize the important parts of a customer interaction that influence the AI CSAT for the user. This process is pivotal to building trust with users and vital to the success of ML-powered features.

Organizations using Dialpad’s CSAT are able to move away from traditional survey response rates of 5 percent and instead have 100 percent of calls monitored for customer satisfaction. This works alongside features that support agents during the calls and interactions with real-time coaching and live sentiment analysis, helping them navigate conversations and reach a resolution with their customer faster.

Features like these mean organizations gain a richer picture of the customer experience at every stage of the journey.

2. One of the key limitations of ChatGPT is that it only draws on data from 2021 or earlier. How does Dialpad’s solution compare?

DOC: LLMs like OpenAI’s ChatGPT can be effective for general-purpose functionality, but they often struggle with reliability issues such as rate limits, hallucinations and precision. In contrast, DialpadGPT is specialized, leveraging more than five billion minutes of proprietary enterprise conversation data. Using this specific dataset ensures that DialpadGPT’s outputs are generated with an understanding of key industries such as customer service, sales and recruiting.

Developing a model that's specific to these types of conversations is difficult, but we were in a unique position to do it fast because we have the data and the capability.

3. Accuracy is a key concern with LLMs, especially in risk-averse sectors like financial services or healthcare. Do LLMs built on proprietary data provide more accurate information?

DOC: One thing that LLMs such as ChatGPT do not allow for is transparency and control over the knowledge they draw on: the ability to see the facts, tone of voice and information they will use to create content before it is delivered to end users.

We use several approaches to continuously train and tune our LLMs, including model parenting, reinforcement learning from human feedback and more. Without proper parenting or the use of industry-specific data, the chances of generating incorrect or inappropriate responses increase, which can result in negative customer experiences or significant brand damage.

4. We heard from speakers at our recent CX in Financial Services event that personalization must not be driven solely by a public LLM, but must be based on an organization’s own data. What are the advantages of taking this approach?

DOC: An LLM is only as smart as the data it draws from. So the better the dataset, the better the output. A specialized, domain-specific LLM will generate a more tailored and relevant output while a more generic open-source LLM has a higher likelihood of delivering hallucinations and biased results. Additional advantages of custom models include greater accuracy and fluency for specific business purposes, increased speed and reduced cost.

Industry-specific and contextualized data are important for specific LLM use cases. Organizations cannot rely on generic data from public sources if they want a competitive edge; by utilizing proprietary data, however, they can enhance model performance to generate personalized recommendations for customers and maximize the output of their LLM. Particularly in customer service, where real-time insights play a crucial role, this type of data can become a game changer, empowering businesses to provide exceptional service and stay ahead in a rapidly evolving market.

5. What is your outlook for the future of generative AI in CX?

DOC: Generative AI will become the focal point of most business operations and continue to be a key driver in improving customer experience in the near term. By utilizing AI to gain deeper insights into consumer needs and personalize interactions, businesses can foster customer loyalty and drive revenue growth.

As technology continues to advance, companies will need to understand how AI can drive the most engaging online experiences for their customers, building impactful features that really work for them, or risk losing those customers. Customers now demand a personalized digital experience with tailored recommendations and self-service options, making it essential for businesses to understand how AI can empower them to enhance their services.
