PAI Researchers Co-author Multistakeholder Report on Improving Verifiability in AI Development

Several staff members of the Partnership on AI, alongside co-authors from more than 26 institutions, contributed to the multistakeholder report “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims.” The report takes a broad look at the challenge of verifying the claims AI developers make about the systems they build and suggests ten mechanisms for addressing this challenge.

For AI to reliably help people and society, it is essential that those developing AI systems make clear, substantiated commitments to responsible AI development. Articulating ethical principles is a step in this direction, but it needs to be supplemented with concrete commitments and external accountability. The report analyzes both existing and potential “mechanisms” for substantiating AI developers’ claims about their AI systems and development processes. These mechanisms, in turn, can inform the demands made by civil society, users, regulators, and other stakeholders interested in assessing such claims.

The report distinguishes between institutional, software-related, and hardware-related aspects of AI development, and recommends ways to make each aspect more amenable to external scrutiny. In particular, the report highlights the following mechanisms for supporting verifiable claims:

  • Institutional mechanisms
    • Third-party auditing
    • Red team exercises
    • Bias and safety bounties
    • Sharing of AI incidents
  • Software mechanisms
    • Audit trails
    • Interpretability
    • Privacy-preserving machine learning
  • Hardware mechanisms
    • Secure hardware for machine learning
    • Novel methods for hardware analysis
    • Compute support for academia and civil society

For each mechanism listed above, the report makes a recommendation aimed at researching, piloting, or otherwise extending it. There is no one-size-fits-all approach to developing AI responsibly, and these mechanisms are insufficient on their own without appropriate formal regulation. Still, expanding the toolbox of mechanisms that developers can draw on will be essential as AI systems become more widely deployed in real-world contexts.
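To make one of the software mechanisms above more concrete, consider audit trails: tamper-evident records of key events in an AI system’s development. The sketch below is not from the report; it is a minimal, hypothetical illustration of the underlying idea using a hash chain, in which each log entry commits to the hash of the previous one so that any retroactive edit breaks the chain. All event names and fields are illustrative assumptions.

```python
import hashlib
import json
import time

# Minimal hash-chained audit trail (illustrative sketch, not the report's
# design). Each entry embeds the previous entry's hash, so altering or
# deleting an earlier entry invalidates every later hash.

class AuditTrail:
    def __init__(self):
        self.entries = []

    def append(self, event: str, details: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "timestamp": time.time(),
            "event": event,
            "details": details,
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash in order; one altered entry breaks the chain.
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev_hash = digest
        return True

# Hypothetical usage: logging milestones of a training run.
trail = AuditTrail()
trail.append("dataset_loaded", {"name": "training_corpus_v2", "rows": 120000})
trail.append("model_trained", {"arch": "resnet50", "epochs": 30})
assert trail.verify()
```

A production-grade audit trail would add authenticated timestamps and anchoring with an external party, but the hash chain captures the core property that makes such trails useful for verification: claims about what happened during development become checkable after the fact.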

As discussed further in the report, the AI community needs processes for sharing information about AI systems that behave in unexpected or undesired ways, so that others can learn from such incidents. PAI Program Lead Jingying Yang has pointed to the need for robust AI incident-sharing infrastructure, a project now in progress with our Partner community. Jingying also helped broaden participation in the report and helped host the first workshop behind the project.
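To illustrate what shareable incident information might look like in practice, here is a minimal, hypothetical sketch of a structured incident record. The field names and severity levels are our own illustrative assumptions, not a schema from PAI’s project or the report.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Hypothetical structure for a shareable AI incident record. A common,
# machine-readable format makes it easier for others to search incidents
# and learn from them; every field below is an illustrative assumption.

@dataclass
class AIIncident:
    title: str
    date_observed: date
    system_description: str
    harm_description: str
    severity: str                          # e.g. "low" / "medium" / "high"
    mitigations: list = field(default_factory=list)

    def to_json(self) -> str:
        record = asdict(self)
        record["date_observed"] = self.date_observed.isoformat()
        return json.dumps(record, indent=2)

# Hypothetical example record.
incident = AIIncident(
    title="Chatbot produced unsafe medical advice",
    date_observed=date(2020, 4, 1),
    system_description="Customer-facing dialogue system",
    harm_description="Recommended an incorrect drug dosage",
    severity="high",
    mitigations=["Added a safety filter", "Published a post-mortem"],
)
print(incident.to_json())
```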

The Partnership on AI exists to support collaboration across sectors and organizations. We were thus proud to have our staff contribute to this project, which involved more than 50 authors from over two dozen organizations spanning academia, civil society, and industry. To contribute to this initiative and learn more, visit: http://www.towardtrustworthyai.com/