2023 was a year of seismic shifts in AI. While advancements in generative AI technology were perhaps most notable, the field also witnessed growing public awareness of the potential impacts of AI in sectors such as education and entertainment, widespread access to new generative AI tools in the West, and energy and attention from policymakers aiming to spur innovation and competition while protecting their citizens. In just a year, these changes have significantly reshaped how we engage with technology and each other.
With these shifts, PAI’s mission to bring diverse voices together across global sectors, disciplines, and demographics so that AI developments advance positive outcomes for people and society is more important than ever. More relevant to the rapidly growing base of companies deploying AI across their businesses, more urgent to governments grappling with AI risk and safety, and more essential to individuals already impacted by AI misuse and communities striving to realize a people-centered future for AI.
As I reflect on a year of profound change in our field, I’m most proud of how PAI has been able to meet this moment. Last year, we were able to respond to rapidly shifting contexts with collectively defined guidelines for responsible practice, to convene and engage policy makers to turn these ideas into lasting change, and to keep people centered through it all.
But this work isn’t only a reflection of 2023. Our ability to achieve these impacts is the culmination of years of intentional foresight, thoughtful collaboration, and sustained effort by PAI staff, our Board members, and our invaluable constellation of partners representing 119 organizations across 17 countries. This year, Jerremy Holland was appointed Board Chair, and Jatin Aythora and Joelle Pineau were appointed Vice Chairs. I am truly thankful to our Board Directors, Staff Team, Steering Committee and Working Group Members, and Partners.
Our community knows that AI can be a force for good, if we develop, distribute, and use it responsibly. Together we have made important progress towards that goal. Thank you for investing in and advancing this community of change through the pivotal moments of 2023. I look forward to continuing to work together with you in 2024 towards our vision of a future where we all develop AI to build a more just, equitable, and prosperous world.
Sincerely,
Rebecca Finlay
CEO, Partnership on AI
The pace and scale of shifts across the AI landscape in 2023 underscored the critical role of PAI’s community and efforts amid these rapidly evolving contexts.
Since 2016, PAI has been convening a diverse and growing multistakeholder community to identify, investigate, and collaborate on the most relevant and pressing AI challenges and opportunities. Our work in 2023 exemplified how we can anticipate, prepare for, and respond to these contexts to proactively co-create a people-centered future of AI.
PAI’s work with partners and collaborators delivered time-sensitive guidance, informing changes in practice and policy innovation urgently needed to advance responsible AI.
In February 2023, PAI launched the Responsible Practices for Synthetic Media: A Framework for Collective Action, with 18 partners to date aligning their practices for the ethical and responsible development, creation, and sharing of AI-generated media.
As generative AI technology propelled realistic synthetic media from the lab to the laptops and devices of everyday people, PAI’s Framework was already collaboratively defined by and available to the key stakeholders whose decisions influence the societal impacts of these technologies: builders of infrastructure and technology, creators, and media distributors and publishers.
As synthetic media becomes one of the engines of digital content creation, the responsible practices advanced by Framework supporters have the power to improve consent, disclosure, and transparency for massive end-user bases. With 18 organizations signed on as institutional supporters of the Framework, we are seeing PAI’s principles of synthetic media governance in practice, scaling our impact to the billions of users reached by Framework supporters such as BBC, Google, Meta, and TikTok.
In 2024, PAI will publish in-depth case studies from Framework supporters on how they have applied Framework principles in practice, and use learnings to inform public policy.
PAI launched Guidance for Safe Foundation Model Deployment, drawing on input from 50+ experts and model providers on how to responsibly develop and deploy foundation models in ways that promote safety for society.
The foundation models powering this wave of innovation in generative AI represent a capability step-change from previous technologies. At the cusp of mass adoption, with a growing range of scenarios and unanswered questions about how these models will be monetized and what their real-world implications will be, PAI’s guidance fills a critical knowledge gap: it articulates the landscape of AI risks and how to address them while pursuing model innovation, encompassing different model types and release strategies.
As discussions on AI safety heated up among global policymakers, our Model Deployment Guidance laid the groundwork for convening our first-ever AI Policy Forum and for joining time-sensitive policy conversations, such as the UK AI Safety Summit, alongside global leaders working to build the foundation of good AI governance and safety for society.
In 2024, PAI will refine recommendations for open access models and publish an updated version of the Guidance.
PAI published Guidelines for AI and Shared Prosperity, with 13 endorsements from experts and stakeholders affirming the power of these recommendations to guide AI deployment in ways that benefit all.
2023 propelled us into a world where 79% of global business and technology leaders expect AI to significantly transform their organizations within the next 3 years, and more than 50% of those same leaders also expect generative AI to centralize economic power and increase inequality. But we don’t have to build that future.
PAI’s Guidelines summarize actionable ways that AI-creating and AI-using organizations, labor organizers, and policymakers can assess and ground their decisions, agendas, and interactions in a systematic understanding of AI’s potential impact on the labor market and job quality, centering worker voice in AI development and use.
In this moment of profound opportunity to intentionally design AI technology for a more just, equitable, and prosperous world, changes in practice like OpenAI’s commitment to PAI’s Data Enrichment Sourcing Guidelines for the development of ChatGPT4 and its iterations illuminate a path to shared prosperity: in this case, ensuring that data workers experience working conditions that respect and value their contributions.
In 2024, PAI will work closely with labor organizers and industry partners to increase adoption of the Guidelines and to develop standards of practice for the fair treatment of data workers.
AI is already changing how the news is reported, offering benefits such as taking on tedious tasks like transcription while posing risks such as spreading misinformation. In November 2023, PAI published AI Adoption for Newsrooms: A 10-Step Guide, a step-by-step roadmap to support newsrooms navigating the difficult questions posed by identifying, procuring, and using AI tools.
To protect user data and privacy, companies are deploying new techniques like ‘federated learning’ and ‘differential privacy’. In collaboration with PAI founding partner Apple, we researched and published a white paper investigating whether these techniques can help advance fair and ethical algorithmic decision-making while striking a balance between collecting sensitive demographic data and respecting individual privacy.
Ensuring that emerging policy and regulatory frameworks, standards, and technical tools are consistent with one another and support known best practices requires coordination among policymakers, civil society, academia, and industry. Seizing the opportunity to play a key role in facilitating this coordination, PAI convened policymakers and PAI Partners for critical discussions, including hosting the PAI Policy Forum and co-hosting a roundtable with the US Department of Commerce. PAI also contributed expertise to key policy conversations, including the UK AI Safety Summit and discussions at the OECD, and responded to US government agencies’ calls for expert testimony.
To ensure AI can benefit all people and society, it must be developed with processes and dialogue that are intentionally inclusive of a diverse set of stakeholders and the broader public. In pursuit of this goal, PAI launched the Global Task Force for Inclusive AI, an initiative cited by the White House Office of Science and Technology Policy Director in her remarks at the Summit for Democracy in May 2023.
PAI’s constellation of partners and collaborators is foundational to the progress we’ve made and to continued AI development that advances positive outcomes for people and society.
Our multi-stakeholder model powered each of the advancements we made in 2023, and it is what will push us to keep evolving with new technology developments, use cases, and stakeholders in mind. Thank you to each of our Partners who make this progress possible.