
Partnership on AI Releases Guidance for Safe Foundation Model Deployment, Takes the Lead to Drive Positive Outcomes and Help Inform AI Governance Ahead of AI Safety Summit in UK

Non-Profit Launches Public Comment Period for Highly Anticipated Model Guidance Developed in Collaboration Between Civil Society, Policy, Academia and Tech Experts from Meta, Microsoft, and Others

October 24, 2023 – London – Partnership on AI (PAI), a non-profit community of academic, civil society, industry, and media organizations addressing the most difficult questions on the future of AI, today released for public comment PAI’s Guidance for Safe Foundation Model Deployment (“Model Deployment Guidance”) at its AI Policy Forum in London. The framework provides multistakeholder guidance to foundation and frontier model providers, giving them the tools and knowledge needed to foster responsible development and deployment of AI models with a focus on safety for society and adaptability to support evolving capabilities and use cases. The public comment period is open now through January 15, 2024, with an updated version expected in Spring 2024.

“The foundation models that power generative AI and other tools are continuing to advance, and there is an urgent need for expert consensus on how to ensure this technology is safe,” said Rebecca Finlay, CEO, Partnership on AI. “PAI is a trusted convener of key stakeholders, and together with our Partners we’ve developed the first comprehensive guidance for model providers. Drawing from the diverse perspectives of our global community of collaborators, PAI’s Model Deployment Guidance offers the kind of collectively built guardrails the AI field desperately needs. Now it is time to hear from the public. We look forward to input from even more voices as we continue this effort.”

This inaugural version of PAI’s Guidance is the culmination of a yearlong series of meetings and workshops among experts from civil society organizations, technology companies, academic institutes, and policy think tanks for the purposes of developing a shared understanding of responsible model deployment. To keep pace with the acceleration of change in AI and large language model (LLM) innovation, these best practices are intentionally designed for flexibility to support current and future model provider decisions as the technology and regulatory landscape continues to evolve.

Over the past six months, discussions on AI ethics and governance have taken center stage with the two rounds of White House commitments to responsible AI and the UK preparing to host the world’s first summit on AI safety on November 1 and 2. Meanwhile, businesses are aiming to deploy AI even as a substantial share of large organizations (41%) express concern about how foundation models are trained and their potential downstream effects. The Model Deployment Guidance provides much-needed guidelines for voluntary action. The standards put forth in the Model Deployment Guidance are built to complement single-stakeholder approaches with a comprehensive framework that includes the following:

  • Customized, practical recommendations for specific model and release types with responsible best practices throughout the deployment lifecycle to advance transparency and mitigate risks
  • Visualization of risk landscape, including risks from the foundation models themselves and risks that can arise downstream when others build applications using the models
  • Categorization of model capability tiers and release type definitions previously lacking consensus from the AI community

“PAI’s Model Deployment Guidance is a unique framework that bridges different outlooks on risks among the AI scientific and ethics communities. This helps model providers ensure the safety of society as a whole while balancing the need for a variety of models to be deployed for user choice, innovation, and scientific advancement through both open and restricted release approaches,” said Madhulika Srikumar, Lead for AI Safety, Partnership on AI. “At the cusp of mass AI adoption, we need adaptable, consensus-based frameworks like PAI’s Model Guidance given the growing range of model deployment scenarios and unanswered questions around how these models will be monetized and their real world implications.”

Since April 2023, PAI has consulted and collaborated with experts from across the AI ecosystem, including representatives affiliated with institutions such as Anthropic, Berkeley Center for Long-Term Cybersecurity, Center for Governance of AI, Google, Google DeepMind, IBM, Meta, Microsoft, OpenAI, The Alan Turing Institute, and The Future Society, among other organizations, to do the vital work of distilling broad research and insights into concrete and impactful direction that ensures AI models make the greatest impact.

To learn more about Partnership on AI’s Guidance for Safe Foundation Model Deployment, access custom guidance, and provide public comment, please visit: https://partnershiponai.org/modeldeployment.

Messages of Support

“This is one of the most comprehensive, nuanced and inclusive frameworks for responsibly building and deploying AI models through an open approach. The Partnership on AI’s leadership has been invaluable in bringing together industry, civil society, and experts as companies like ours determine the best approach when looking at both open and closed releases. Feedback through public comments is going to be critical to advancing this framework, and I look forward to broadening the conversation across the community.”
Joelle Pineau, Vice President, AI Research at Meta and Vice-Chair of the PAI Board

“The Guidance for Safe Foundation Model Deployment is unique in its depth and scope. Equally distinctive is its origin—created through the collaboration of representatives from a diverse set of organizations spanning industry and non-profit organizations. This will be a living document, continually refreshed and updated to reflect AI advances and invaluable feedback from the community.”
Eric Horvitz, Chief Scientific Officer, Microsoft and Founding Board Chair of Partnership on AI

“I’m impressed by PAI’s collaborative process in convening a diverse team of experts including scientists, technologists, policy-makers, academics, and civil society leaders and, on behalf of the board, I look forward to broad participation in the public comment phase.”
Jerremy Holland, Board Chair, Partnership on AI

“In a rapidly evolving AI landscape, it’s more critical than ever to ensure diverse voices are heard and integrated. Partnership on AI’s multistakeholder approach ensures exactly that – the AI Safety Guidance stands not only on the foundations of sound science and innovation but also reflects the different perspectives of the community.”
Jatin Aythora, Director of Research & Development at BBC R&D and Vice Chair of the PAI Board

“As policymakers prepare to meet in the UK, we urge them to consider the successes and challenges of the multistakeholder process and bring additional voices to the table. PAI’s guidance is the result of a rigorous, multistakeholder consultation that is now seeking public comment. In honoring these principles, policymakers ensure that the resulting decisions are not only comprehensive but also representative of society’s collective wisdom and progress. To meet this moment, citizens deserve no less.”
Francesca Rossi, AI Ethics Global Leader at IBM and Board Member of Partnership on AI

“I really appreciate the thoughtful multistakeholder approach that PAI has taken to identify safety risks in foundation models, come up with guidelines that take into account the types of foundation models and how they are released, and enable continuous discovery and iteration through their process.”
Lama Nachman, Director of Intelligent Systems Research Lab at Intel Labs and Board Member of Partnership on AI

“While we urgently need updated regulation of the documented harms caused by automated systems, the Partnership on AI’s effort to codify best practices for development of foundation models is a critical contribution to public discussion. The framework is targeted at ensuring safe research and development by AI model providers, including where models are not yet tied to any particular use cases that clearly trigger regulation. PAI’s plan to invite ongoing feedback from civil society, industry, and regulators will ensure that this framework continually evolves to keep pace with research developments and newly identified harms.”
Esha Bhandari, Deputy Director, ACLU Speech, Privacy, and Technology Project and Member, PAI Safety-Critical AI Steering Committee

“With the speed of progress and diffusion of AI technologies, best practices for responsible AI development need to keep pace. We think this initiative can accelerate that process and set a normative baseline for responsible development.”
Markus Anderljung, Head of Policy at Centre for the Governance of AI (GovAI) and Member, PAI Working Group on Model Guidance

About Partnership on AI

Partnership on AI (PAI) is a non-profit organization that brings together diverse stakeholders from academia, civil society, industry, and the media to create solutions to ensure artificial intelligence (AI) advances positive outcomes for people and society. PAI develops tools, recommendations, and other resources by inviting voices from the AI community and beyond to share insights and perspectives. These insights are then synthesized into actionable guidance that can be used to drive adoption of responsible AI practices, inform public policy, and advance public understanding of AI. To learn more, visit www.partnershiponai.org.

Media Contact:
Jennifer Lyle
PAI@finnpartners.com