Meaningful AI Policy Requires Inclusive, Multistakeholder Participation


For the second week in a row, we are seeing a flurry of activity around AI policy. On Tuesday, the EU AI Act received final sign off from lawmakers, paving the way for the Act to enter into force next month. Also this week, leaders from across the globe are connecting for the AI Seoul Summit.

Indeed, with more and more tools reaching the hands of the public, there is an urgent need for policies to protect people and communities from AI harms and to advance responsible innovation. What these policies include and how they should be developed and implemented, however, remains a topic of contention.

Last week, the US Bipartisan Senate AI Working Group released the Roadmap for AI Safety Policy in the US Senate, which called for billions of US dollars towards AI innovation and highlighted policy proposals around labor, safety, and election-related misinformation. The Roadmap was met with much criticism. More than a dozen civil society groups collaborated on a Shadow Report to the Roadmap, released Monday. Centered on the need to put the public interest ahead of industry's, the report calls for swift legislative action on AI-driven concerns ranging from racial justice to privacy and surveillance to climate change.

What was it about this Roadmap’s development that resulted in its misalignment with civil society’s priorities? What does this say about not just who is in the room, but how their perspectives are meaningfully reflected?

As policymakers at national and international levels work to govern AI development, deployment, and use, it is essential to bring ideas from across sectors and disciplines to the policy discussion to center solutions that work for people, not just companies. Since 2016, Partnership on AI has brought together a community of academic, civil society, and industry experts to co-create research and frameworks towards safe and responsible AI. We encourage policymakers to learn from this approach and collaborate with a diversity of experts, including our partner community, to make meaningful change.

Bring diverse perspectives to the policy table and integrate their insights

Already, many policy processes include gathering input from various experts and stakeholders. The Senate Roadmap, for example, was developed following nine AI Insight Forums featuring expert testimony from both industry and non-industry representatives; even so, the final report foregrounded the industry priority of innovation. At the UK AI Safety Summit in November, leaders from more than 20 nations came together to share their diverse ideas on AI safety policy; however, PAI CEO Rebecca Finlay was one of only a small number of civil society voices present at Bletchley Park.

As various US government agencies work towards implementing the White House Executive Order on AI, Partnership on AI has served as a convener by connecting agencies like NIST, NTIA, and USAID with the PAI community. In March alone, we hosted two listening sessions on Sections 4 and 11 of the Executive Order between the PAI community and NIST, a critical means of bringing multidisciplinary voices into this important process. By co-hosting roundtables with organizations like PAI, government agencies are able to gather feedback from a variety of perspectives on discrete issues such as synthetic media, international standards, and data enrichment labor.

Approach AI governance through a sociotechnical lens

As policymakers seek to govern a new, rapidly evolving technology, it is natural to turn to technical experts. However, because general purpose AI systems have a broad range of uses within a complex societal system, there are limits to what technical solutions alone can achieve, and AI policy should be approached through a sociotechnical lens: an increased focus on the interactions between AI systems, people, and society as a whole.

While we are glad to see the international collaboration that led to the recently released International Scientific Report on the Safety of Advanced AI, its proposals for mitigating risks are limited to technical solutions, which can fail when used in isolation. For example, labeling only a subset of synthetic content as “AI-generated” may lead viewers to believe that unlabeled content is authentic or, worse, more trustworthy than labeled content. This highlights the need for broader user education on how to evaluate synthetic content and the importance of a sociotechnical approach to transparency. PAI Partners Data & Society and the Center for Democracy and Technology (CDT) have both recently published resources on the importance of a sociotechnical approach.

As we look towards the next AI Safety Summit in France and the implementation of the EU AI Act, it is critical that sociotechnical expertise plays a central role. In the EU AI Act alone, the Code of Practice for GPAI and the establishment of the Advisory Forum and Scientific Panel are areas where sociotechnical, multidisciplinary input is vital for meaningful action.

Prioritize inclusivity along with pace

As we look towards next steps in the global AI governance space, including the AI Summit in France, monitoring of the G7’s Hiroshima code of conduct, delivery of the Seoul Commitments (including through the new network of safety institutes), and implementation of the EU AI Act, it is critical that sociotechnical expertise plays a central role.

The capabilities and power of AI systems are progressing rapidly, and there have long been calls for policy to move quickly in response. This is important, and we applaud the progress being made globally to respond quickly. Speed, however, should not mean the exclusion of certain voices or perspectives; proactive steps to ensure sociotechnical viewpoints are included should be a core part of policy plans for a more responsible, innovative future.

Leverage existing multistakeholder frameworks

While the proliferation of AI tools and applications over the past year and a half has spurred policymakers into action, organizations like PAI have been working on questions of AI governance for years.

The frameworks and guidelines that we have co-created with our partner community of academic, civil society, and industry experts can serve as a starting place for policymakers seeking sociotechnical solutions on safe, secure, and trustworthy AI. As an example, we led a cross-sectoral working group to develop PAI’s Guidance for Safe Foundation Model Deployment, specifically designed to be adaptable to AI’s evolving capabilities.

What’s next

Nearly every day, we see headlines announcing another AI development, tool, or application that promises to change the lives of millions. As this technology can impact every business sector and social issue, policymakers face many challenges as they seek to govern. Protecting people, our communities, and livelihoods from the adverse effects of AI should remain at the forefront of all policymaking. Inclusive policy development processes are one way to create effective legislation, policy frameworks, standards, and tools that center people and society.

PAI will continue to highlight the value and necessity of multistakeholder insights in developing meaningful policy as we engage with national governments and multilateral institutions. In addition, we see two other prerequisites for good global AI governance: designing policies and standards that foster interoperability across geographies, giving stakeholders clear requirements in all markets; and establishing tangible implementation mechanisms to assess progress against policy commitments.

PAI will continue to work with partners to share expertise with policymaking bodies to advance multistakeholder solutions that we have co-created with our broad cross-sectoral partner community. To stay up to date on PAI, including our public policy work, sign up here.