It would be an understatement to say that it has been a busy time in AI policy. For those of us who have worked in the Responsible AI community for several years, the last 10 days brought a welcome surge of government attention and multistakeholder action.
It was an important moment of global momentum that culminated in the first AI Safety Summit at Bletchley Park, UK, where I chaired the roundtable on “What should national policymakers do in relation to the risk and opportunities of AI?” and set out six critical takeaways and actions to be driven forward following the summit.
A Week of Action
It began early last week with the release of US President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Billed as a “landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence,” the Order establishes a wide range of oversight mechanisms for government agencies and calls for bipartisan action.
We also saw the announcement of the G7 Hiroshima Process Comprehensive Policy Framework, including guiding principles, an international code of conduct for developing advanced AI systems, and a commitment to develop cooperative projects on tools and best practices.
This was followed by US Vice President Harris’ announcement in London of a $200 million philanthropic initiative supporting work across five pillars to advance rights-protecting, worker-focused and inclusive AI. The coalition comprises the David and Lucile Packard Foundation; Democracy Fund; the Ford Foundation; Heising-Simons Foundation; the John D. and Catherine T. MacArthur Foundation; Kapor Foundation; Mozilla Foundation; Omidyar Network; Open Society Foundations; and the Wallace Global Fund.
On Wednesday, November 1st, the inaugural AI Safety Summit began in Bletchley Park, UK. The Bletchley Declaration was published on Day One, with the 28 countries and the European Union in attendance committing “to support an internationally inclusive network of scientific research on frontier AI safety … to facilitate the provision of the best science available for policy making and the public good.”
On Day Two, we saw the announcement of “an international, independent and inclusive ‘State of the Science’ Report on the capabilities and risks of frontier AI”. Countries, most notably the US and UK, also announced national AI Safety Institutes.
Moving Forward
At Bletchley Park, I was impressed by the global participation of governments that are not often around the usual AI tables. In the dialogue I chaired, there was a clear recognition that we must not treat regulation and innovation, or current and future harms, as binary choices.
Governments have many levers, including laws, to protect citizens, enable responsible innovation and ensure that the benefits of AI accrue to the many. I was also heartened to see the commitment to multilateral processes, such as the G7 and the OECD, and the work to date to harmonize or align voluntary commitments across borders. International norm setting matters.
Building on this momentum, PAI will work with our community of 105 Partners to continue advancing open solutions for real-world action on AI safety today and tomorrow. Our Guidance for Safe Foundation Model Deployment has been published for public comment. Please let us know what you think.
I will also be working to ensure that the AI Safety Summit commitments are carried out with substantive multistakeholder dialogue, oversight and rigor. In this regard, the philanthropic commitment announced by Vice President Harris is crucial.
With this support, civil society organizations will be able to ensure that governments support open science and sociotechnical research in the public interest and that companies fulfill their commitments to safe product releases and external oversight.
At the AI Safety Summit, I was honored that PAI was part of the remarkable civil society contingent invited to participate. Listening to their interventions, it was clear that including civil society is not just a good thing to do; it is the smart thing to do.
Despite the hard work of the UK Government, whenever I am privileged to attend a gathering such as the Summit, I am reminded of who is not in the room. Thankfully, this time, an enthusiastic and diverse community of 25 organizations came together for the AI Fringe. As a Fringe Partner, it was wonderful for me and my PAI colleagues to participate in many of the events that brought people together to explore what’s next for AI safety for all. I hope this becomes an AI Safety Summit tradition.
Moving forward, it is incumbent on all of us to advance this work for the benefit of all. Governments, industry leaders, civil society and academia must work together to develop AI guidelines, share best practices, and promote knowledge sharing. Global cooperation across the broader AI ecosystem will be needed to tackle the challenges ahead. To protect society from the potential harms of AI technology and to realize its full benefits, policymakers should look to the collaborative approach PAI embraces.