Knowing the effects any new technology will have on the world is critical to using it safely. Today, however, our understanding of AI's impact is being outpaced by the rapid advancement of large-scale models. Also referred to as general-purpose AI or foundation models, large-scale models use vast amounts of data and computing power to perform a wide variety of tasks, such as text, image, and audio generation. To prevent accidents, misuse, and other unintended consequences, the AI community needs to work together to better understand and anticipate the risks of large-scale models, and to agree on actions that will mitigate them in practice.
The Partnership on AI (PAI) is excited to announce that we are leading an effort to address both of these pressing needs. This month, we are kicking off a multistakeholder dialogue to develop shared protocols for responsible large-scale model deployment. PAI believes that collaboration among actors with diverse perspectives is crucial to fully understanding the risks of large-scale AI. Reflecting this multistakeholder approach, the work has been guided by PAI's Safety Critical AI steering committee. Made up of experts from the Alan Turing Institute, the American Civil Liberties Union, Anthropic, DeepMind, IBM, Meta, and the Schwartz Reisman Institute for Technology and Society, among other organizations, this recently formed committee has convened over the past six months to identify concrete interventions where community collaboration can make the greatest impact.
The rapid pace of large-scale AI deployment, and its potential to harm communities at scale, requires us to act now and align on best practices grounded in shared insights. In the absence of consensus on which risks matter most and how they should be mitigated, we believe this alignment requires input from a diverse set of stakeholders. Non-regulatory approaches to responsible deployment, such as voluntary norms or protocols, are also important to consider given that the full downstream impacts of this technology are not yet known.
Because deployment shapes proliferation, risk, and downstream consequences, our work on shared protocols starts with the deployment strategies for large-scale models. As the technical foundation of many consumer-facing AI applications (and those to come), large-scale models, and how they are released, can have an outsized impact on end users. This makes the steps on the path from development to deployment particularly important to consider; these may include pre-deployment testing, risk identification and mitigation, monitoring, and oversight, including the possibility of halting deployment altogether.
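To make that deployment path concrete, here is a minimal sketch of how such a staged release process might be represented in code. Everything in it, including the `Stage` names and the `DeploymentGate` structure, is a hypothetical illustration of the steps named above, not part of any PAI protocol.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    """Illustrative stages on the path from development to deployment."""
    PRE_DEPLOYMENT_TESTING = auto()
    RISK_IDENTIFICATION = auto()
    RISK_MITIGATION = auto()
    MONITORING = auto()
    OVERSIGHT = auto()


@dataclass
class DeploymentGate:
    """Hypothetical record of which stages a model release has cleared."""
    model_name: str
    cleared: set[Stage] = field(default_factory=set)
    halted: bool = False  # oversight can halt deployment at any point

    def clear(self, stage: Stage) -> None:
        self.cleared.add(stage)

    def halt(self, reason: str) -> None:
        print(f"Deployment of {self.model_name} halted: {reason}")
        self.halted = True

    def ready_to_deploy(self) -> bool:
        # Deploy only if every stage is cleared and no halt has been issued.
        return not self.halted and self.cleared == set(Stage)


gate = DeploymentGate("example-model")
for stage in Stage:
    gate.clear(stage)
print(gate.ready_to_deploy())  # True
```

The point of the sketch is simply that each step, including the option to halt, is an explicit checkpoint rather than an afterthought.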
To help meet this need for clear direction, PAI is bringing together dozens of experts from across the AI community in mid-April 2023 to begin developing industry-wide protocols for the responsible deployment of large-scale models. The workshop grows out of the steering committee's earlier deliberations, which highlighted an urgent need for collective interventions in the governance of large-scale models, a need that PAI and its Partner community, a global cohort of academic, industry, media, and non-profit organizations, are uniquely positioned to address.
It is vital that this work be informed by a truly diverse set of perspectives. In the coming months, PAI will offer more ways for stakeholders to connect and provide input on the shared protocols for large-scale models. To stay up to date on this effort and learn about future opportunities to engage, click here to sign up for PAI's Safety Critical AI program updates.
The development of shared protocols for large-scale AI deployment builds on years of PAI's past work on AI safety and responsible practices. In 2021, PAI released a white paper offering six recommendations (subsequently endorsed by the editors of Nature Machine Intelligence) for anticipating potential harms when publishing AI research. Most recently, PAI released Responsible Practices for Synthetic Media: A Framework for Collective Action, a set of recommendations supported by an inaugural cohort of 10 institutions across sectors.