Six AI Governance Priorities for 2026
Expert Insights from PAI’s Policy Steering Committee
2026 is a critical year for policy. With India’s AI Impact Summit, the EU’s Code of Practice on AI content labelling, the first UN Global Dialogue on AI Governance, and the G7 summit in the first few months alone, policymakers face a crowded calendar alongside US repositioning and geopolitical uncertainty. These forums will shape key decisions, including who gets access to AI compute and infrastructure, how synthetic content is labelled (amid growing pressure for legislative action, particularly around child safety), and whether capacity building narrows or widens the digital divide.
The stakes are high and the case for action is clear. AI is already accelerating our understanding of brain diseases, transforming learning and reshaping workflows. But 2025 also gave us warnings: AI therapists operating without guardrails, private conversations made public, and agentic tools wiping entire databases, twice. The gap between what AI can do and what governance can meaningfully oversee is widening.
With the help of our Policy Steering Committee, which was established to identify key questions that inform effective policymaking, we identified six key priorities for AI governance in 2026. Realizing these goals requires a multistakeholder approach including elected officials, government workers, civil society, academia, and industry.
1. Establish Foundational Infrastructure to Govern AI Agents with Security Protocols and Privacy Safeguards
Agentic systems introduce new challenges, including the potential non-reversibility of actions, open-ended decision-making pathways, and privacy vulnerabilities from expanded data access. Current infrastructure limitations around memory and context compound these risks and make oversight harder.
To enable responsible adoption, 2026 must prioritize evaluation frameworks that scale, accountability infrastructure for attribution and remediation, and assurance mechanisms that balance oversight with privacy.
How to accomplish this:
- Establish transparency standards and secure protocols: Cover inter-agent composition and merging properties, and drive forward protocols (like the Model Context Protocol, MCP) that are robust, secure, and fit for purpose.
- Clarify the value chain: Establish an agreed-upon value-chain taxonomy to govern agents as integrated systems.
- Develop monitoring and oversight for accountability, with privacy protections: Support and pilot AI agent monitoring methods across sectors, identify failure modes specific to agents, and resolve privacy questions regarding data flows. Oversight should be informed by the stakes, reversibility, and ‘affordances’ given for tasks.
- Improve testing infrastructure: Fund initiatives to improve evaluation validity and scalability, and develop assurance best practices for high-stakes sectors (e.g., finance).
- Create controlled environments where policymakers, researchers, and practitioners can test governance approaches: Learn what works, and iterate before scaling, particularly for novel challenges like agentic AI in public services.
- Deepen understanding of the need for liability frameworks.
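The monitoring bullet above suggests calibrating oversight to stakes, reversibility, and affordances. As a purely illustrative sketch (all field and class names here are hypothetical, not drawn from any existing standard or protocol), an agent action audit record capturing those attributes might look like:

```python
from dataclasses import dataclass
from enum import Enum

class Stakes(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AgentActionRecord:
    """Hypothetical audit-log entry for a single agent action,
    capturing the attributes the oversight bullet names."""
    agent_id: str          # which agent acted (attribution)
    delegating_user: str   # who granted the affordance
    action: str            # what was attempted
    reversible: bool       # can the action be undone?
    stakes: Stakes         # informs how much oversight applies
    requires_review: bool = False

    def __post_init__(self):
        # Irreversible or high-stakes actions are flagged for human review.
        if not self.reversible or self.stakes is Stakes.HIGH:
            self.requires_review = True

record = AgentActionRecord(
    agent_id="booking-agent-01",
    delegating_user="alice",
    action="delete_customer_record",
    reversible=False,
    stakes=Stakes.HIGH,
)
print(record.requires_review)  # flagged: irreversible and high-stakes
```

The point of the sketch is that attribution and remediation become tractable once every agent action carries structured metadata about who delegated it and whether it can be undone.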
2. Strengthen Documentation and Public Reporting Mechanisms
There is still a lack of coherent strategies for how information should flow across the AI value chain. While the EU AI Act’s transparency requirements and the OECD’s initiatives made progress in 2025, efforts remain fragmented. Gaps remain in how documentation artifacts connect across the value chain: upstream developers document models without clarity on what downstream deployers need, and deployers struggle to convey system behavior to end users.
Momentum is already underway, making documentation and transparency an easier challenge to tackle in 2026.
How to accomplish this:
- Coordinate documentation artifacts across the value chain: Define transparency artifacts needed at each layer (e.g., compute, cloud, model, deployment), and establish shared expectations and requirements for each deployment context (e.g., healthcare diagnostics).
- Standardize documentation: Develop templates tailored to system types and risks, document real-world performance and failure modes, and publish strong examples.
- Strengthen reporting frameworks that already have multistakeholder support: Prioritize and improve one or two established international reporting frameworks, such as the Hiroshima AI Process (HAIP) Framework, to ensure interoperability.
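To make the coordination problem concrete, here is a minimal sketch of how documentation artifacts at each layer could link to the layer above them, so a deployer’s record traces back to the model and compute documentation it depends on. The schema is hypothetical (these field names are not from HAIP or any existing template):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Artifact:
    """One documentation artifact in the AI value chain (hypothetical schema)."""
    layer: str                             # e.g., "compute", "model", "deployment"
    producer: str                          # who publishes this artifact
    summary: str                           # what the artifact documents
    upstream: Optional["Artifact"] = None  # link to the layer above

    def chain(self) -> list[str]:
        """Walk upstream links so a deployer or regulator can trace provenance."""
        node, layers = self, []
        while node is not None:
            layers.append(node.layer)
            node = node.upstream
        return layers

compute = Artifact("compute", "CloudCo", "Training cluster capacity and energy report")
model = Artifact("model", "LabX", "Model card: intended uses, eval results", upstream=compute)
deployment = Artifact("deployment", "HealthApp", "Deployment context: diagnostic triage", upstream=model)

print(deployment.chain())  # ['deployment', 'model', 'compute']
```

The design choice the sketch illustrates is that each layer only needs to know its immediate upstream artifact; end-to-end transparency then falls out of following the links, rather than requiring any single actor to document the whole chain.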
3. Coordinate Internationally Through Shared Baselines
As AI governance efforts multiply, the risk of fragmentation grows. Initiatives are proliferating at national, regional, and international levels without clear pathways toward convergence or mutual recognition. With the right coordination, divergent approaches become complementary rather than conflicting and offer coherence for government, companies, regulators, and the public alike. There is a chance to build an interoperable baseline that protects citizens across borders. The key is finding shared reference points.
How to accomplish this:
- Align on deployment challenges: Focus cooperation on concrete problem areas requiring convergence such as cross-border deployment of agents.
- Create a validated, open, evaluation repository: Develop shared evaluation infrastructure that is publicly accessible and validated across use cases, reducing duplication and enabling consistent assessment.
- Define mutual recognition: Establish processes for governments to recognize each other’s certification, audit requirements, and evaluation regimes.
- Invest in tools to map and link governance initiatives as the landscape shifts: Articulate where mandates or policies overlap, where bodies can coordinate, and which gaps still require new frameworks or institutions.
- Ensure AI Summits drive accountability and have measurable outcomes: Use summits like India’s to track prior commitments and measure progress via shared infrastructure.
- Drive interoperability and engagement between AI institutes: Connect the International Network for Advanced AI Measurement, Evaluation and Science with other global initiatives, such as the forthcoming Global South Network on AI Safety & Evaluations.
4. Preserve Human Voice and Epistemic Integrity
As AI increasingly mediates our information, preserving authentic human voice must become an explicit goal. While AI offers unprecedented accessibility and personalization, it risks eroding our ability to discern generated content from human creation. Protecting this ecosystem is essential to maintaining trust.
How to accomplish this:
- Balance credentialing and detection tooling with efforts to build public literacy: Move beyond detection tools by investing in content credentialing and public community preparedness.
- Protect human-mediated ecosystems: Develop frameworks that enhance epistemic confidence while ensuring visibility for human voice.
- Review media viability: Explore new economic models for media organizations in an AI-intermediated environment.
- Information access equity: Explore the divide between those accessing human-curated content and those accessing primarily AI-generated content.
5. Advance the Public Understanding of AI and Workforce Resilience
Workforce readiness represents a critical policy frontier, but literacy must move beyond “how to use tools.” A move towards “assurance literacy,” understanding when to rely on AI and how to evaluate its outputs, is essential.
Looking ahead, closing the data gap on AI task performance is equally important and presents a chance to put workforce policy on firmer empirical footing. There is currently a lack of rigorous, quantitative data on which specific tasks AI can effectively perform, leading to policy decisions based on speculation rather than evidence.
How to accomplish this:
- Quantify task-level capabilities: Invest in research to measure where AI adds value and where it falls short across sectors and job categories.
- Conduct workforce foresight: Analyze different scenarios for how AI could automate or reshape labor markets, identifying where workers and communities face the greatest vulnerability to guide interventions.
- Broaden AI adoption capacity: Develop skills programming for non-AI-first organizations to ensure no sector is left behind.
- Promote assurance literacy: Teach workers and students to evaluate AI outputs, recognize limitations, and understand accountability structures.
- Convene educators, companies, civil society, and academia: Refine principles and guidelines for AI in education, identifying what children need to learn for future preparedness and what skills they must retain regardless of AI’s capabilities.
- Unite ‘AI Policy’ and ‘Economic Policy’ communities: Bring together siloed communities around particular tools (workforce training, safety net benefits, unemployment insurance, labor and union policy) to share expertise.
6. Clarify AI Sovereignty Goals
The AI technology stack has evolved significantly since most national strategies were drafted. Policymakers now face difficult trade-offs: high infrastructure costs, complex strategic partnerships, and unavoidable dependencies on foreign providers. Sovereignty, while important, shouldn’t be measured solely by what a country owns or builds. It should also be measured by the tangible benefits these decisions deliver to citizens in the near and long term. This reframing raises critical questions: Which partnerships will genuinely serve national interests? And what does “delivering real value” actually look like for the people these strategies are meant to serve?
How to accomplish this:
- Map your AI supply chain and dependencies: Conduct national stocktakes of capabilities and dependencies, from critical minerals and energy to inference infrastructure and specific AI models.
- Map where greater sovereignty in the AI stack would add genuine value: Weigh the risks of existing dependencies (e.g. on foreign cloud providers or models) against new dependencies (e.g. infrastructure partnerships) over time.
- Develop novel legal arrangements: Explore multilateral frameworks for shared digital public infrastructure to support equitable regional access.
- Evaluate environmental tradeoffs: Assess the long-term impact of infrastructure needs on citizen wellbeing and the environment.
- Ensure public participation: Build transparent mechanisms and frameworks for public participation so governance is not driven solely by a small set of organizations or government entities, ensuring affected communities have a voice.
- Weigh up the geopolitical pros and cons of partnerships and dependencies.
2026 will not wait for perfect answers. The decisions made this year will set trajectories for years to come. Governance is an ecosystem, and strengthening it requires working together across borders and disciplines.
- Francesca Rossi, IBM
- Marc-Etienne Ouimette, Cardinal Policy
- Janet Haven, Data & Society
- Rumman Chowdhury, Humane Intelligence
- Sam Gregory, WITNESS
- Karine Perset, OECD
- Alexandra Givens, Center for Democracy & Technology
- Elham Tabassi, Brookings Institution
- Alondra Nelson, Institute for Advanced Study
- Sebastian Hallensleben, CEN/CENELEC and Resaro
- Valeria Milanes, Asociación por los Derechos Civiles (ADC)
- Antonia Kerle, BBC
- David Wakeling, Allen & Overy
- Lisa Pearlman, Apple
- Amanda Craig Deckard, Microsoft
- Andrew Reiskind, Mastercard
- Irene Solaiman, Hugging Face
- Deon Woods Bell, Gates Foundation
- Andrea Renda, Centre for European Policy Studies (CEPS)
- Alice Friend, Google