Prioritizing Safety and Responsibility: A Call to Model Providers to Develop Guardrails with Civil Society and Academia
Multistakeholder Collaboration Key to Ensuring Accountability through PAI’s Shared Protocols for Large-scale AI
Recent years have seen rapid advances in the capabilities and adoption of large-scale AI models, also known as foundation models. These models are characterized by the large datasets they are trained on and their wide variety of applications, including generative AI tasks. Their early and significant impacts on society, both positive and negative, have sparked substantial debate about appropriate release strategies and the responsible development of such models.
Partnership on AI (PAI) believes that all AI model providers must prioritize safety and responsibility regardless of whether they pursue a more open or closed release approach. This requires collaboration between the AI industry and civil society representatives to jointly examine potential risks.
Only through collective action can we develop the mitigation strategies and mechanisms of accountability needed to responsibly pursue AI’s societal benefits. We welcome the White House announcement securing voluntary corporate commitments, but more voices are needed to ensure these commitments translate into actionable guardrails with real impact and accountability.
In the past year, PAI has been leading a multistakeholder process to establish Shared Protocols for the responsible deployment of large-scale models. This work is guided by our Safety Critical AI Steering Committee, made up of individuals from the ACLU, Anthropic, Google DeepMind, IBM, Meta, and Omidyar Network, as well as institutes and research centers in Canada, the US, and the UK. We appreciate their leadership, as well as the advice of the many who have attended working groups and contributed to date.
This coming October, PAI will release the first version of our collective Shared Protocols for public comment. We will actively seek feedback from individuals in civil society, industry, media, and academia internationally.
PAI Actions on Large-Scale and Generative AI
- December 2022, PAI formed the Safety Critical AI Steering Committee
- February 2023, PAI launched the Synthetic Media Framework, supported by over a dozen non-profit, media, and industry organizations.
- April 2023, PAI announced a new initiative on Generative AI.
- June 2023, PAI launched the Global Task Force for Inclusive AI with support from the White House.
- July 2023, the PAI Board of Directors consulted with experts in AI, public policy, civil rights, media, labor, and governance on ways to catalyze transparency and accountability in support of the Shared Protocols.
- October 2023, PAI will release the first version of the Shared Protocols for public comment.
We welcome feedback on our approach and encourage all stakeholders to get involved.
More information about the Shared Protocols
PAI is a global network with Partners drawn from more than 100 industry, academic, non-profit, and media organizations, and is uniquely suited to this effort. First announced earlier this year, the Shared Protocols will serve as adaptable guidelines for the deployment of large-scale models, intended to maximize the safety and responsibility of all foundation models. The Shared Protocols will define a taxonomy of risks and describe how to disclose, measure, and mitigate those risks throughout the deployment lifecycle, taking model type, capability, and release approach into account.
Recognizing that it is not possible to anticipate all of the risks associated with large-scale models, or the definitive best practices for addressing them, the Shared Protocols are designed as a living document: they will complement, not replace, regulatory approaches, and remain open to iteration as we learn more about these constantly evolving technologies and their real-world impact. PAI will support the evolution and accountability of the Shared Protocols through multistakeholder collective action, ongoing applied research, engagement with global policymakers, and inclusive communities of practice.