
Strengthening the AI Assurance Ecosystem


Accelerated AI adoption is outpacing trust.

A 2025 survey found that only 17% of Americans believe AI will have a positive impact on the US over the next 20 years. At the same time, businesses lack confidence that their AI systems comply with the law or align with internal policies. This trust gap threatens both public confidence and responsible innovation.

AI assurance, or the process of measuring, evaluating and communicating the trustworthiness of AI systems, can help close this gap. When applied effectively, AI assurance enables organizations to deploy AI with confidence while delivering clear benefits, including:

Better products and innovation
Reduced AI risks
Accountability and consumer trust

A robust, multi-stakeholder ecosystem is needed for AI assurance to realize this potential. While elements of this exist or are in development, efforts remain fragmented and incomplete.

PAI’s white paper, Strengthening the AI Assurance Ecosystem, is the first product of its initiative of the same name. It maps the essential elements of the ecosystem to provide a foundation for concerted policy action and makes recommendations to address key ecosystem needs.

What the AI Assurance Ecosystem Is

The AI assurance ecosystem encompasses the actors, activities, tools, frameworks, and information flows needed to establish justified trust — trust that is well-founded and demonstrable.

Establishing justified trust must extend beyond those who procure and deploy AI to include governments and affected communities. This will require broader information flows about AI development, deployment, and assurance activities and outcomes.

Actors, activities, tools, frameworks, and information flows are interdependent and mutually reinforcing, each relying on the others to function effectively.

At the most basic level, AI assurance requires several core components, including:

  • Norms, frameworks, standards, and (codified) expectations. These set the criteria against which AI models and systems are assessed during assurance.
  • Processes, tools, techniques, and metrics. These are used to assess AI against the relevant criteria (or to conduct more open-ended evaluations of AI capabilities and impacts post-deployment).
  • Assurance providers. These are experts with the skills and resources needed to conduct assurance. They can be market-based entities supplying services for a fee, or independent researchers from academia, civil society, or other sectors.

A number of other inputs are needed for effective assurance, which we cover in depth in the full paper. An assurance ecosystem is key to delivering all of these elements and to ensuring that the necessary range of actors, with the appropriate skills and resources, is involved in assessing risks and/or performance using a range of methods over time.

Recommendations for Policy Makers

  1. Policymakers must invest at both the ecosystem and individual component levels to advance trust and innovation.
  2. Policymakers and others should protect and fund spaces for collaboration on shared challenges across industry, civil society, and academia.
  3. Policymakers should convene and support multi-stakeholder initiatives to develop a shared understanding of which categories of quality and safety are best assured by different categories of AI assurer, and at which levels of the value chain.
  4. Policy frameworks promoting assurance should reflect the fact that:
    • Assurance is most critical at the systems level;
    • Assurance is needed across the AI lifecycle.
  5. Policymakers should take steps to improve information sharing between assurance actors and to improve external assurer access for post-deployment assurance of AI models and systems.
  6. Policy frameworks should recognize the importance of both internal and external assurance.
  7. Policymakers should prioritize the development of mechanisms to establish justified trust in AI assurers.
  8. Governments should support and promote the roles of AI Safety Institutes and equivalent bodies in assessing and assuring frontier systems.
  9. Policymakers should continue to prioritize the development and adoption of international consensus standards, including by supporting the capacity of civil society and Global South stakeholders.
  10. Governments and policy actors should take steps to clarify and develop existing AI governance frameworks.
  11. Policymakers should develop policy frameworks supporting necessary access to AI systems, documentation, and components by external assurers.
  12. Governments should review their procurement, funding, and data access frameworks to ensure they support the development of the assurance ecosystem as a whole.