Four Facets of Responsible Enterprise AI Governance
Insights from Leaders and Experts at Partnership on AI Enterprise AI Governance Forum
Enterprise adoption is more than deploying a model; it means embedding AI into organizational cultures, processes, decision flows, and public-interest missions. As enterprise AI adoption scales, the technology is set to impact millions of people across the globe.
Ahead of the India AI Impact Summit, Partnership on AI and JPMorganChase hosted an official Pre-Summit event in New York, the Enterprise AI Governance Forum. The Forum brought together stakeholders from across the global enterprise value chain for discussions on the question: How do we scale enterprise AI adoption in a way that is safe, responsible, and inclusive?
What emerged from a series of panel discussions was an emphasis on good AI governance and an exploration into the characteristics required to achieve it.
- Integration across enterprise functions
- Collaboration towards shared understanding of standards
- Dynamism to adapt to evolving capabilities
- Human-centricity and focus on use-case impacts
As the discussion turned toward the AI Impact Summit, experts and panelists stressed that effective AI policy requires policy-makers to recognize governance as more than regulation alone. While regulation is one important tool, governance encompasses the broader set of processes and norms that should engage multiple stakeholders across sectors.
Through the discussions at the Enterprise AI Governance Forum, experts identified four key facets of responsible enterprise AI governance:
INTEGRATION
While AI is a technological advancement, enterprise leaders should not treat it as a solely technical issue. Instead, they should consider AI a “paradigm shift” and focus on how it can enhance enterprise capabilities across functions. Just as you don’t need to understand the internal physics of a tool to harness its power, leaders do not need deep technical expertise to understand how AI can reshape core business functions and improve user experiences.
As enterprises integrate AI, so too must they integrate good AI governance. Creating leadership positions such as heads of data or product experience facilitates AI adoption across the enterprise and can operationalize good AI governance. Strong governance helps teams move faster with confidence and, ultimately, builds trust.
COLLABORATION
Global AI governance for safe, responsible, and trustworthy AI requires an ecosystem of actors. Responsible AI use is a competitive advantage for enterprises, but only if similar organizations are held to the same standard. Developing these standards and achieving regulatory harmonization requires aligning on what the problems are, developing a shared understanding of what good use and governance looks like, and defining roles for actors across the value chain.
While the concept of ‘sovereign AI’ has been a key theme in the lead-up to the India AI Impact Summit, it is important to define it within one’s own context and circumstances. For most countries, ‘sovereign AI’ cannot mean building every component of the AI stack domestically. Instead, countries and regions are defining ‘sovereign AI’ as having national agency, goals, and governance that work for them and their citizens, not as isolation.
DYNAMISM
As AI evolves, it is challenging traditional notions of enterprise risk evaluation and assessment. The previous thinking, that corporate governance consists of static boundaries around business-accepted risk, needs to change. Instead, leaders need to understand that governance must evolve alongside technological advancements and new use cases.
For example, the risks involved in using AI agents that can act autonomously without human oversight are significantly higher than those of generative AI applications that create text or images in response to human prompts.
HUMAN-CENTRICITY
When assessing AI safety, leaders must consider not just model risk but use-case risk and impact. Governance efforts need to consider the potential impact on customers and end-users should an AI application fail.
This focus on people and inclusivity is a key feature of the upcoming India AI Impact Summit, which centers Global South actors. However, while governments from the Global South will be involved at the Summit, the lack of civil society participation calls into question how well governance decisions made at a multinational level represent the public.
What’s Next
These insights point to a clear direction for enterprise leaders. When it comes to enterprise AI governance, it is time to move away from static compliance and toward dynamic governance systems that span enterprise functions, evolve with AI capabilities, align with agreed-upon ecosystem-wide standards, and center on human impact.
This week’s AI Impact Summit in New Delhi is a valuable opportunity for global stakeholders to come together and make a meaningful impact, including adopting governance frameworks that mitigate risks and promote inclusive growth. On Friday, February 20th, Partnership on AI will host the session “Monitoring AI Agents at Scale: Building Assurance Frameworks for Safe and Trusted Deployment” to further explore AI agent governance and building trust through an assurance ecosystem.
At PAI, we believe the effort to maintain momentum across regions and countries will be crucial to fostering the development and use of AI that is safe and responsible. To receive our recommendations for advancing national AI assurance ecosystems and closing the global AI assurance divide, sign up below.