What will it take to build an AI assurance ecosystem that is credible, scalable, and globally relevant?
PAI’s Strengthening the AI Assurance Ecosystem initiative is a multiphase effort to identify and address the key needs, barriers, and opportunities shaping AI assurance across national and international contexts. Informed by virtual and in-person consultations through 2025, and guided by PAI’s Policy Steering Committee and AI Assurance Working Group, the project will produce a roadmap to help policymakers and assurance stakeholders close gaps, leverage existing efforts, and accelerate the development of robust assurance frameworks, tools, and providers.
Initiative Outputs
In 2026, PAI’s AI assurance work will focus on a core set of questions shaping how trust in AI is built and sustained in practice:
What does a functioning AI assurance ecosystem actually require?
We’ll start by mapping the essential components needed for assurance to work across sectors and jurisdictions, and identifying where today’s ecosystem is falling short.
Who is AI assurance being built for, and who is being left out?
As assurance tools and practices emerge largely in advanced economies, we’ll examine how this creates gaps for developing countries, and what policymakers and assurance actors can do to close them.
How do we establish justified trust in the people and institutions providing AI assurance?
Assurance only works if it is credible. We’ll explore what mechanisms are needed to demonstrate competence, independence, and legitimacy among AI assurance providers.
What creates real demand for external AI assurance, and what incentives make it stick?
We’ll look at how markets, regulation, and organizational incentives can be aligned to encourage meaningful, high-quality assurance rather than performative compliance.
How do we ensure AI systems remain accountable after deployment, not just before?
Finally, we’ll assess progress in post-deployment monitoring, evaluation, and reporting for foundation models—and what stronger post-deployment governance would mean for assurance overall.
Together, this body of work aims to move AI assurance from a fragmented, early-stage practice toward a durable, globally relevant system that supports accountability, trust, and effective governance.
AI Assurance Working Group
Parisa Assar, Intuit
Will Bartholomew, Microsoft
Miranda Bogen, Center for Democracy and Technology
Emily Campbell-Ratcliffe, Department for Science, Innovation and Technology (UK)
Rumman Chowdhury, Humane Intelligence
Amanda Craig Deckard, Microsoft
Ian Eisenberg, Credo AI
Max Gahntz, Mozilla
Gemma Galdon-Clavell, Eticas
Sebastian Hallensleben, Resaro, CEN/CENELEC
Alayna Kennedy, Mastercard
Cameron Kerry, Brookings Institution
Fion Lee-Madan, Asenion
Emily McReynolds, Adobe
Adam Leon Smith, AIQI Consortium
Oliver Smith, Eticas
Andrew Strait, AI Security Institute (UK)
Elham Tabassi, Brookings Institution
Peter Twieg, Intuit
Miriam Vogel, Equal AI
David Wakeling, A&O Shearman