How to get generative AI governance in place
As CX Network research finds 48 percent of organizations have no governance model for generative AI, we ask the experts how to get started
Add bookmarkListen to this content
Audio conversion provided by OpenAI

Whether its agentic or generative AI, automation, data analytics or chatbots, artificial intelligence (AI) demands more than technical know-how and investment dollars; it demands an organization has strong leadership and robust processes and guardrails to ensure AI use is ethical, effective and legal. In short, AI requires governance.
While regulations such as the EU AI Act can stipulate how and where AI is used in certain situations, individual organizations still require a means by which to govern how, where, when and why they and their employees are using AI.
However, according to CX Network’s research to establish the Global State of CX in 2025, 48 percent of organisations lack a company-wide approach to generative AI best practices and governance.
A lack of governance can jeopardize the effectiveness of generative AI models, as well as their accuracy and impact on the wider organization’s reputation. With generative models in particular, a failure to ensure proper and ethical use can cause anything from hallucinations to a major PR disaster, undermining customer trust and even impacting the value of the company.
“The expanding use of AI, particularly generative AI, in enhancing CX presents significant challenges for organizational leadership and trust,” says Jaakko Lempinen, chief customer and portfolio officer YLE Finland and a CX Network Advisory Board member.
“This is not just a compliance issue but a key competitive advantage and trust builder in an increasingly digital world. The organizations that get this right will be better positioned to maintain customer trust and navigate the evolving regulatory landscape while innovating with AI technologies,” he says.
Michael Anderson, VP of client management for Davies consulting division says: “AI governance is an area that is very fast moving and incredibly unforgiving, yet so many organizations are really on the back foot. In the coming months we may see a lot more in the news from the companies that fail to get that piece in place quickly enough.”
Anderson’s top advice is to appoint a person responsible for overseeing AI use, however he says the organizations doing this are “few and far between” and mostly in regulated industries like finance.
Roles such as chief AI officer (CAIO) and AI ambassador are growing in popularity, but with AI developing so fast – and with so much to consider to ensure its use is watertight – not everybody has the right skills for the job.
Anderson says: “Obviously, there are only so many people who hold the skills relevant to have a role like that and be appointed to that position, which means there is also a huge skills gap that needs filling, and it needs filling very quickly.”
Don't miss any news, updates or insider tips from CX Network by getting them delivered to your inbox. Sign up to our newsletter and join our community of experts.
What is AI governance?
Not only does a robust governance model allow an organization to work within established frameworks and greatly reduce the risk of costly and embarrassing AI mistakes, as a company policy it supports strategic and people-related decisions, gives all employees a clear rulebook by which to operate, and builds trust with stakeholders, including customers.
The main components of AI governance can include but are not limited to:
Transparency: The foundation of trust, transparency is critical in any AI model. All stakeholders – including the end users and beneficiaries of AI systems – must be able to understand how a model works and makes decisions to hold it accountable. According to IBM, developers can share the underlying AI algorithm’s logic and reasoning, the data inputs used to train the model, the methods used for model evaluation and validation and more. However, transparency is about more than source code disclosure. It should allow stakeholders to assess the model’s predictive accuracy against fairness, drift and biases.
Human oversight and accountability: The EU AI Act recognizes human oversight and accountability as a critical element of AI governance – and it includes an act dedicated to this point. Part of how systems should be monitored and evaluated, oversight and accountability must be led by people with the relevant skills and experience as well as knowledge of the system.
Data protection: Guardrails, cyber security and data hygiene are just some of the essential steps to ensuring that AI models are only working with the data they are supposed to work with. If a model has access to sensitive company data and is allowed to converse directly with customers, there is a chance it can disclose that information. On the other hand, data uploaded to public LLMs also becomes public. This caused issues for Samsung when it was found employees had pasted confidential source code into ChatGPT.
Leadership: As highlighted by Anderson, effective leadership is a critical first step when implementing AI governance. A C-suite executive with the jurisdiction to devise and implement AI strategy is now vital for any company that wants to utilize AI, although many of the skills required may be learned on the job because of the rapid pace of AI development.
Aligning AI use with company values: It isn’t enough to simply ensure AI use aligns with strategic targets and company goals, it must also align with the organization’s values. These can be formed by industry best practices or regulations, ESG principles, the brand or simply “doing the right thing”.
Robust testing: The most effective way to ensure an AI tool is safe to be let loose on customers is to test it, particularly against worst-case scenarios to eliminate any weaknesses in the system. For example, a chatbot on the Chevrolet website was tricked into selling a vehicle to a customer for US$1. In another instance, a customer was able to trick DPD’s chatbot into swearing and writing a poem about how bad DPD was as a delivery partner.
AI governance in public service media
AI governance is essential in regulated industries and high-risk scenarios, but that isn’t just about banks and healthcare. Public-service media companies, such as the Public Broadcasting Service (PBS) in the US or the British Broadcasting Corporation (BBC) in the UK, also fall into this area.
As Lempinen explains, Finland’s public media service Yle has “recognized that responsible AI use requires a systematic approach and clear governance models,” and as such the organization has established an AI responsibility leadership team to ensure AI development adheres to high ethical standards.
From an organizational perspective, Lempinen says this highlights three critical challenges:
Building trust: Organizations prioritizing ethical AI development and operational transparency can strengthen customer relationships and differentiate themselves from competitors. The public service algorithm concept we've developed at Yle is an example, emphasizing content diversity, serendipity and respect for user autonomy.
The role of leadership: Effectively leveraging AI in CX requires a strategic approach that integrates technology into business objectives. Our AI strategy focuses on responsible applications of generative AI, such as personalization and disinformation detection. This demonstrates that AI must be embedded into the business model, not treated as an isolated function. Moreover, AI governance, combined with data governance practices, requires new types of tools.
Customer-centric perspective: Customer trust is built through transparency and governance of AI systems. Our work in promoting diversity and creating transparent processes, allowing users to influence content and recommendation algorithms, supports this approach.
Lempinen says that to succeed in implementing AI governance, organizations must prioritize four things:
- Creating clear AI strategy and governance models.
- Transparent communication about AI use to customers.
- Systematic approach to ethical impact assessment.
- Developing staff competencies in responsible AI use.
Quick links
- The ethical AI guide for customer experience
- From Taco Bell to DeepSeek: 4 ways AI is disrupting CX
- Agentic AI solutions to up your CX game