International cooperation is fundamental to our ability to respond to today’s global challenges, including climate change, global health, and now, artificial intelligence. From last year’s UK AI Safety Summit and the G7 Hiroshima AI Process to the UN Summit of the Future this past September, it has been heartening to see increased knowledge sharing and collective action from policy leaders. The Partnership on AI community has also been actively engaged, working together on safety protocols, risk management frameworks, and interoperability recommendations.
As part of the Steering Committee for the upcoming AI Action Summit in France, I am looking forward to the next phase of global action on AI governance. We kicked off this phase at our Policy Forum in September by welcoming the French Envoy, Anne Bouverot, and partnering with the French government to hold the Forum as an official event on the Road to the Summit.
This week, PAI will be gathering our UK and European partners in Paris alongside the regular fall OECD AI meetings and the first meeting of the Digital Trust Convention, which is co-hosted by PAI alongside organizations with extensive experience in digital trust: the Atlantic Council, the Bertelsmann Stiftung, KI Park, the Mila Quebec AI Institute, the OECD, the Association of Electrical, Electronic and Information Technologies (VDE), and the World Privacy Forum.
PAI’s mission to bring together diverse voices on pressing AI issues has led to collaboration with leading organizations across the Atlantic.
Across the UK and Europe, PAI counts more than 20 Partner organizations. These include private sector partners both within and beyond the technology sector, such as A&O Shearman, Google DeepMind, EY, Ingka Group | IKEA, and Prolific; civil society organizations, such as the Ada Lovelace Institute, Eticas, The Alan Turing Institute, and The Future Society; media organizations, such as the BBC and the Thomson Foundation; and academic institutions, such as Fraunhofer IAO and the Oxford Internet Institute. We also collaborate closely with policy bodies such as the UK AI Office and the European Parliament.
Incorporating perspectives from our community in Europe has become even more important since the European Union passed the historic EU AI Act earlier this year.
We are looking forward to taking part in the next phase of ensuring safe, responsible AI practices in the EU: developing the Codes of Practice. By contributing to the groups defining rules around transparency, systemic risk management, and internal governance measures for General Purpose AI model providers, we will ensure that our multistakeholder expertise informs this world-leading process.
Beyond the EU, national governments across the globe, as well as multilateral bodies from the UN to the G7, are developing their own AI governance frameworks. While we commend these efforts to drive transparency and accountability, it is essential that the resulting frameworks align with one another.
A recent PAI report assessed alignment and policy interoperability across many of these frameworks. A lack of coordination could not only create a fragmented policy landscape but also lead to divergent understandings of best practices, leaving companies unclear about compliance across jurisdictions.
As PAI works with our global community and shares our insights with policymakers across jurisdictions, we are able to build bridges toward increased understanding of the issues, alignment on best practices, and ultimately, action to ensure AI benefits people and society. We look forward to connecting with our partners in Europe, and beyond, as we continue on the road to the AI Action Summit.