Written by Emily Florrick, Brian McGowan, and Tim Phelps
As the rapid rise of generative AI (GenAI) continues to accelerate changes in the way organizations work, unexpected paradoxes are also emerging.
The majority of leaders at multi-billion-dollar organizations KPMG recently surveyed say they intend to integrate more GenAI into new initiatives and business functions and to train more employees to use AI. Of these respondents, 71% said they use GenAI data in decision-making, 52% said the technology shapes their company’s competitive position, and 47% said it helps them uncover new opportunities.
Because AI can process vast amounts of data at incomprehensible speeds and enhance human capabilities, insights, and productivity, it offers these organizations enormous potential to generate powerful advantages in both operational efficiency and innovative strategy.
Yet even organizations eager to adopt AI approach the technology cautiously, weighing its risks as much as its benefits. Will AI lead to employee layoffs? Will it create cybersecurity risks or compromise data privacy?
That’s why the biggest challenge in implementing AI is not developing the technology itself, but creating a trusted environment.
To realize the potential of AI, organizations, their customers and employees, and regulators must trust that AI will produce only results that are useful, relevant, safe, and reliable. To build that trust, AI must be designed for reliability and high ethical standards. Deploying AI boldly, quickly, and responsibly means adhering to those standards and regulatory obligations from the beginning.
Guidelines and guardrails
Any organization adopting an AI strategy must put trust at the heart of its policies.
Organizations need an autonomous AI governance body to establish trust in tools and data sources and to develop ethical rules, guidelines, and procedures. An AI steering committee can manage AI across all teams and make clear to all employees, partners, and customers how and when to use AI (and when not to).
As AI becomes more pervasive in business models, AI-wary organizations must take first steps in governance to build trust and confidence and reduce technology risk. Even if an organization is not yet ready to establish a full governance structure, many are appointing a chief AI officer: a C-suite leader who understands the technology along with its broader opportunities and risks. Companies that are not ready to standardize AI practices and procedures across all business lines may instead designate an incubator team to undertake an initial deep dive.
Organizations including KPMG are now automating and extending parts of their AI governance, security, and risk management programs by connecting directly to the underlying infrastructure and capturing and extracting metadata, enabling configured guardrails and controls to be discovered and monitored more efficiently.
Another strategy is to take a risk-tiered approach, applying different governance standards to AI systems based on risk and impact to customers, partners, and employees.
Building a culture of trust
As we began our AI journey at KPMG, we took an approach that kept trust at the center of our plans.
We started with a commitment to trustworthy AI that outlines our strategy, anchored by ethical pillars to ensure our use of AI is always trustworthy and human-centered. We used those values to develop AI policies and guidelines for each phase of the AI lifecycle, including usage expectations for employees and partners and data considerations for what is and is not allowed; established an AI Council to actively shape those guidelines; and communicated our AI policies to our 39,000 employees.
With these guidelines and our team in place, we began AI learning and development across the organization, using personalized, persona-based training to give all employees the guidance they needed to adopt and use AI safely and responsibly.
KPMG’s AI and Digital Innovation Office has launched KPMG aIQ, a company-wide AI transformation program. It is focused on driving the adoption of AI across all areas of the business to create value for clients and improve the experience for employees. The program puts AI technology directly into the hands of all partners and experts, allowing employees to access use cases, new products, training courses, and personalized AI coaching.
Going AI-first
How do organizations advancing AI balance bold innovation with responsible use?
Establishing a governance team and learning infrastructure paves the way to becoming an AI-first organization, one that strives to unify AI adoption policies across teams and disciplines. Doing so requires continuous testing and constant vigilance.
Executives are recognizing the potential of AI to drive innovation, generate revenue, and optimize operations. According to a KPMG survey on executives’ outlook on GenAI, 54% of executives expect GenAI to support new business models, and 46% expect it to help develop new products and revenue streams. Ninety-five percent of executives said they believe training and education are essential for their organization to use GenAI ethically, and 91% believe regular audits and human oversight are important.
By keeping AI leaders in sync with the C-suite, organizations can ensure that their biggest concerns are addressed. One of the sensitive aspects of AI, especially for organizations in highly regulated sectors, is ensuring compliance. It is important to establish guardrails governing the use of AI so that IT and governance, risk, and compliance (GRC) leaders can ensure it is applied responsibly and ethically.
In 2023, our organization established an AI Center of Excellence (AI CoE) responsible for evaluating emerging products and platforms and determining which AI tools and technologies to deploy within and beyond the organization. The AI CoE is the core of GenAI-enabled technology experimentation, research, development, and deployment across the company. It informs our tools and technology approach and provides the foundation for executing our AI strategy company-wide.
In deploying our own AI-first infrastructure and programs, KPMG is building a training program that extends to our partners and clients. This is an effort to unify the network’s standards for AI governance and guidance, establishing best practices for building trust in AI by teaching people to control the technology itself.
KPMG also collaborates on product development with alliance partners, helping them improve existing products and design new ones.
We believe these differentiated AI strategies represent the leading solution areas for trusted AI.
“KPMG and ServiceNow have a strong partnership and collaboration focused on innovation, AI, and digital transformation,” said Michael Park, senior vice president and global head of AI Go to Market at ServiceNow. “Their approach to AI development and deployment demonstrates their commitment to transforming businesses and supporting their clients’ AI journeys by establishing a robust governance structure and a clear roadmap from the outset. This is the foundation for building trust and realizing the value of AI while rapidly scaling the technology.”
For these organizations, and for ourselves, the decision to go AI-first is a bold and responsible one. This requires a major shift in how we view the role of technology in governance.
The future belongs to those who establish trust in AI, treating it not just as a powerful tool but as a valuable component in a complex balance that supports innovation, business transformation, competitive advantage, and compliance.
Click here to learn more about KPMG’s Trusted AI approach and insights.
Emily Florrick is the risk service delivery model transformation and product management leader, Brian McGowan is the global Trusted AI leader, and Tim Phelps is the risk services leader at KPMG LLP.