
PAI’s Response to the UN’s Governing AI for Humanity Interim Report

With only six years left until the 2030 deadline for achieving the UN Sustainable Development Goals (SDGs), advances in AI have the power to make a profound impact on people and the planet. From automating rote tasks and freeing workers to focus on more creative and meaningful work (SDG 8: Decent Work and Economic Growth) to helping understaffed hospitals read scans and X-rays (SDG 3: Good Health and Well-being), AI systems have already been deployed in ways that benefit humanity.

Like many national governments, civil society organizations, community advocates, academic researchers, and private sector ethics teams, the UN seeks to establish guardrails for the safe and responsible use of AI. The UN's AI Advisory Body released its interim report, Governing AI for Humanity, advocating for stronger international AI governance. The report outlines guiding principles and institutional functions to clarify the role an international AI governance body should play.

The report is a welcome addition to the international conversation on principles of good AI governance. However, given the rapid pace of AI development and adoption, there is an urgent need to translate these principles into practical guidance. This requires collective action and coordination. As a global nonprofit devoted to responsible AI, Partnership on AI (PAI) has led multistakeholder work to turn insights into action through real-world applications of responsible AI principles.

In response to the interim report, we share learnings from our practical experience, as well as resources developed and tested by our multistakeholder community. Our response highlights PAI's body of work on inclusion, safety, and labor and the economy, which corresponds to the efforts and vision proposed by the UN.

Community Engagement and Inclusion

Consistent with the interim report, PAI asserts that AI must be developed inclusively, in consultation with a diverse set of stakeholders, to ensure that AI benefits all people and society. To that end, PAI has launched the Global Task Force for Inclusive AI, a first-of-its-kind body created to ensure technology developers and deployers are better equipped to ethically engage with those who are impacted by AI. In doing so, we aim to see more direct involvement by impacted communities and individuals in the creation and use of AI tools. Later this year, the Task Force will release a framework for ethical and inclusive engagement practices in AI, which can be integrated into the UN’s global AI governance framework, while also serving as guidance for AI practitioners who seek better ways of working with their stakeholders. Additionally, we argue that supporting capacity building for both AI practitioners and communities is crucial to realizing equitable partnerships. ‘Making AI Inclusive: 4 Guiding Principles for Ethical Engagement’ is a helpful primer on ethical engagement for practitioners.

Workers and Working Conditions

In response to the report’s call to address AI’s disparate impact on labor and working conditions, we refer to our AI, Labor, and Economy (AILE) work, which addresses overlooked aspects of this topic, from including workers’ perspectives to improving the working conditions of data enrichment workers. To help steer AI toward shared prosperity, countries around the globe need effective levers of influence over the pace, depth, and distribution of AI’s impacts on labor demand. The Guidelines for AI & Shared Prosperity provide tools to assess an AI system’s impact on jobs and suggest responsible practices to balance the enablers and guardrails around AI.

Safety & Accountability

Global coordination for monitoring AI risks is critical. The “global alignment on implementation” mentioned in the report requires not only multistakeholder collaboration but also ongoing applied research, engagement with global policymakers, and communities of practice. In the past year, we led a multistakeholder collaboration to collectively develop the Guidance for Safe Foundation Model Deployment and translate safety principles into practical recommendations for model providers. In the guidance, we suggest specific accountability mechanisms for model providers to follow throughout the deployment process, such as providing synthetic media disclosures and supplying downstream use documentation. The UN can draw on these tangible accountability mechanisms to create risk monitoring approaches that are interoperable at the global level.

What’s Next

With many national governments wrestling with the questions the UN report seeks to answer, the report has the potential to influence policy around the globe. While AI has a global reach, major AI-developing companies are concentrated in a few countries. UN member states that may not have the resources to dedicate to responsible AI practices stand to benefit greatly from the recommendations and resources of the final report.

We look forward to tracking further developments in the UN’s work on AI and to opportunities to contribute the insights and frameworks PAI has co-created with our global community. To learn more about our work, sign up here.