24–25 October



The AI Policy Forum is invitation-only.
When you register, please use the email address to which your invitation was sent.



Partnership on AI (PAI) will host a Policy Forum bringing global policymakers into the conversation with PAI’s community of Partners and other collaborators. Recent, rapid advances pose urgent questions about AI governance — but we don’t have to answer them alone.

This invitation-only, in-person event will convene policymakers, AI practitioners, representatives of philanthropic and civil society organizations, and academic experts for a series of discussions on current developments in the global AI policy space and how best to promote AI safety.

At this event, PAI will launch our Safety Protocols for Foundation Models for public comment: a comprehensive, forward-looking set of guidelines for identifying and mitigating risks associated with large-scale AI deployment. The Protocols were developed collaboratively with input from PAI’s global community, including representatives of industry, civil society, and government.


BBC Television Centre
London, UK
The Policy Forum is hosted through the generosity of our Partner, BBC Research & Development.



The agenda will cover the most pressing topics in AI governance and policy today.

Sessions will include:

PAI’s Safety Protocols for Foundation Models: Multistakeholder Collaboration in Action

PAI led the development of the Safety Protocols for Foundation Models in close collaboration with leaders from industry, civil society, and academia. The cross-sectoral Steering Committee and Working Group exemplified the PAI multistakeholder process, reaching consensus on key issues: from identifying risks to defining novel categories for model releases. Hear from leaders involved in the creation of the Protocols on their significance and how they can inform policy.

Foundation Models: Social and Societal Impact

In the past year, we’ve seen numerous releases of large-scale AI models, with plans for even more to come. What exactly is a frontier or foundation model? How do they differ, and why do they require attention and action from across the AI policy community? What responsibilities and protocols might be needed for different model providers? What are the risks of delaying guardrails?

AI Safety Policy: Advancing and Operationalizing Solutions

From the G7 to the White House to 10 Downing Street, leaders from around the world are making AI safety a priority, but do all parties have the same understanding of the term? This session will dive into what policymakers mean when they discuss “AI safety” and whether their definitions and priorities align with those of industry, civil society, and academia.

Governing AI Globally: International Standards, Trade and Interoperability

AI tools reach across borders, and so will their potential impacts. As nation states grapple with how to govern this technology, multilateral bodies too are seeking solutions for safe, responsible, and ethical development and deployment. What is the right balance between domestic and international action, for both policy frameworks and technical approaches?

Looking Ahead — AI Policy Trends 2023: PAI Assessment of the Policy Landscape

This interactive presentation from Partnership on AI experts will map out the current policy landscape and the policy levers being used. What trends have emerged? What are the gaps, and what are the urgent opportunities for policymakers to minimize risk and maximize benefit for people and society?

Looking Ahead — Democracy by Design: Election Integrity in the Era of Generative AI

Campaign ads in the US have already featured AI-generated images, bringing the fabled “fake news” to life. How else might AI affect upcoming elections, in the US and around the globe? What policies can be put in place to strengthen democracy? What roles do industry, civil society, academia, and government have to play in protecting election integrity?