Building Justified Trust in AI Assurers
Trust in AI assurers isn’t guaranteed.
As AI systems become more complex and widely deployed, organizations increasingly rely on external experts to evaluate their safety, performance, and compliance. But for independent assurance to be effective, those experts must themselves be trusted.
Without confidence in the impartiality, competence, and professionalism of AI assurers, even the most rigorous evaluations risk falling short. This creates a critical gap in the broader effort to build justified trust in AI.
PAI’s report, Building Justified Trust in AI Assurers, addresses this challenge directly. As the second product of its initiative, Strengthening the AI Assurance Ecosystem, it outlines why independent assurance matters, what makes it trustworthy, and what actions are needed now to strengthen trust in those who assess AI systems.
Why Independent Assurance Matters
Independent AI assurers play a unique and essential role in the ecosystem. Complementing internal evaluations, external assurance can provide:
- Objective, impartial assessments of AI systems and their impacts
- Specialized expertise not always available within organizations
- Greater confidence for regulators, customers, and affected communities
Key Recommendations
To build trust in independent AI assurance, coordinated action is needed across stakeholders.
Policymakers
- Prioritize the development of a framework of skills and competencies for assurance providers
- Prioritize the development of professional standards for assurance providers
- Foster and actively monitor the development of any professional organizations
- Actively involve professional organizations in the development of ethical frameworks, core competencies, and professional skills
Overseeing Bodies
- Regularly assess members' competence against the skills framework
- Require ongoing professional development and skills maintenance
- Maintain public registers of members
- Provide oversight and disciplinary measures, including complaints and whistleblower mechanisms, peer-review processes, and publication of disciplinary outcomes
- Provided that the insurance market is sufficiently mature to include such offerings, require members to carry an appropriate level of professional indemnity insurance
Funders
- Support the development of independent assurance capacity, especially in nonprofit and public-interest contexts
- Invest in initiatives that strengthen trust frameworks and professionalization
Assurance Providers and Professional Associations
- Develop professional associations committed to shared, published ethical standards
- Publish and uphold your own ethical standards
- Formalize and promote consensus on competency and professional frameworks
- Publish membership criteria and actively vet applicants
- Maintain and regularly update public member lists
- Establish robust oversight mechanisms, including:
  - Clear complaints and whistleblower processes
  - Investigation of alleged violations
  - Disciplinary procedures, including deregistration when appropriate
- Consider standard provisions in member agreements to allow oversight access to client information for assurance purposes
This report is Part 2 of PAI's Strengthening the AI Assurance Ecosystem. Read the rest of the series below.