ANNUAL REPORT 2024

Partnership on AI (PAI) is a nonprofit community of academic, civil society, and industry organizations who come together to address the most important questions for the future of AI. In 2024, PAI and our Partners focused on real-world solutions for the responsible, safe, and inclusive development, deployment, and use of AI systems.
Aligned with its mission to ensure AI advances benefit people and society, PAI investigated AI’s impact on digital media and information, inspired evidence-based AI policy, leveraged AI for positive social impact, advanced safety in AI, and contributed to creating an inclusive economic future in AI. Our work in 2024 brought us several steps closer to realizing our vision of a future where AI is developed to empower humanity by contributing to a more just, equitable, and prosperous world.


From Our CEO

Since our founding, PAI has worked with our community of academic, civil society, and industry partners to jointly develop real-world guidance for the safe and responsible development, deployment, and use of AI. Together, we developed a clear, collective agenda for action in response to questions like: When and how should AI-generated media be labeled? How can workers share in the benefits of AI? How can frontier and foundation models be developed and deployed safely? How can citizens and communities benefit from AI?

In 2024, we worked with our Partners to take action on this agenda, driving impact in practice, policy, and public understanding.

It is with great pride in our community that I look back on the year and see our mission in action. We fostered changes in industry practice, informed emerging public policy, and advanced public understanding. Through case studies, we saw how partners like Adobe, OpenAI, and the BBC are implementing our Synthetic Media Framework. Government agencies from NIST in the US to the OECD have noted PAI’s guidance as inputs to public policies.

In addition, we continue to launch new initiatives in areas that most need the attention of our multistakeholder community. This year, we launched a new Philanthropy Steering Committee to better understand the opportunities and risks AI presents to this sector. We mapped the value chain for open foundation models, identifying key points of intervention to improve safety. We also released our first policy report, with recommendations to improve policy interoperability. PAI’s work continues to address the biggest questions about the future of this transformative technology.

As always, our work is grounded in our global community of partners. Bringing together stakeholders from across sectors at in-person forums and workshops and through online gatherings, we connected with more than 1,000 experts within our community and grew to 128 PAI Partners across 16 countries.

I am grateful to work with this community, along with our supportive Board, committed donors, and talented staff. Thank you to all who have contributed and helped drive impact in 2024. I look forward to continuing this momentum in 2025 and beyond.

In partnership,
Rebecca Finlay
CEO

Investigating AI’s Impact on Digital Media and Information

As AI-generated and manipulated media becomes commonplace, PAI acted as a catalyst for its community to explore and adopt responsible practices.

In 2024, a year in which AI-manipulated media was expected to play a role in elections across the globe, PAI continued its work to promote truth and transparency across the digital media and information ecosystem.

Early in the year, PAI launched a cross-sector Community of Practice to explore different approaches to the challenges posed by the use of AI tools in elections, creating space for shared learning.

Demonstrating the application of PAI’s Synthetic Media Framework, PAI published 16 in-depth case studies from AI-developing companies like Adobe and OpenAI, media organizations such as CBC and the BBC, platforms such as Meta and Microsoft, and civil society organizations like Thorn and WITNESS. The case studies were a requirement for Framework supporters, who explored how best practices for the responsible development, creation, and sharing of AI-generated media could be applied to real-world use cases.

The first set of case studies from Framework supporters, and the accompanying analysis, focused on transparency, consent, and harmful versus responsible use cases. The second set focused on an underexplored area of synthetic media governance: direct disclosure, meaning methods such as labels or other signals that convey to audiences how content has been modified or created with AI. PAI developed policy recommendations based on insights from these cases. If responsible synthetic media best practices, such as disclosure, are not implemented alongside safety recommendations for open source model builders, synthetic media may lead to real-world harm, such as the manipulation of democratic and political processes.


“The responsible use of AI is core to OpenAI’s mission, and the Synthetic Media Framework has been beneficial towards collectively working out ways to address the global challenges presented by AI progress.”

Lama Ahmad
Technical Program Manager, Policy Research
OpenAI

Inspiring Evidence-Based AI Policy

PAI builds bridges between the policy world and our multistakeholder community to support the development of impactful global AI policy.

By convening experts across sectors, building consensus-based frameworks, and facilitating global policy dialogue, PAI has been able to inspire and support the development and implementation of impactful AI policy. Throughout 2024, PAI’s work was cited in various policy guidance documents and initiatives, demonstrating the impact of our resources, research, and recommendations in inspiring innovations in public policy.

Coordination is critical to developing and inspiring good AI policy; without it, we risk creating an inconsistent patchwork of frameworks and divergent understandings of best practices. Ensuring these frameworks work together is critical, which is why we published the Alignment on AI Transparency, a comparative analysis of eight leading policy frameworks for foundation models, with a particular focus on documentation requirements, critical components of transparency and safety.

To further foster collaboration towards responsible AI policy, PAI hosted its AI Policy Forum in New York ahead of UN week in September. Bringing together representatives from the United Nations and national governments, alongside PAI’s community of academic, civil society, and industry partners, the Forum drove in-depth conversations about the need for people-centric policies.


“Technology that has global impact deserves global action and that is why convenings like [PAI’s Partner Forum] which bring together the full multi-stakeholder community are so vital to the future of AI.”

Alan Davidson
Assistant Secretary of Commerce for Communications and Information
and NTIA Administrator

Leveraging AI for Positive Social Impact

PAI empowers the philanthropic community, connecting them to academic, civil society, and industry experts, to guide its approach to managing the risk and harnessing the benefit of AI.

Launch of PAI’s Philanthropy Steering Committee

PAI recognizes that philanthropic organizations play a critical role in driving change toward equitable AI. By leveraging data-driven approaches, these organizations can support the development and use of safe, responsible, and inclusive AI tools and systems. That is why in 2024 PAI launched the AI & Philanthropy Steering Committee, a pivotal new initiative fostering collaboration, dialogue, and action at the intersection of AI and philanthropy. The committee’s primary purpose is to define and guide philanthropic initiatives that leverage AI for positive social impact. Comprising a diverse assembly of leaders from industry, philanthropy, the nonprofit community, and academia, along with subject matter experts, the committee dedicates its time to the broad spectrum of challenges and opportunities AI presents to the philanthropic and nonprofit landscape.

Philanthropy Steering Committee Members

Ruby Bolaria Shifrin
Chan Zuckerberg Initiative
Lilian Coral
New America
Anamitra Deb
Omidyar Network
Kay Firth-Butterfield
Good Tech Advisory

Jonathan Goldberg
Surdna Foundation
Brigitte Gosselink
Google.org
Joan Harrington
Markkula Center for Applied Ethics
Janet Haven
Data & Society

Tia Hodges
MetLife Foundation
Ynis Isimbi
Skoll Foundation
Jeff Jiménez-Kurlander
Surdna Foundation
Amba Kak
AI Now Institute

Lori McGlinchey
Ford Foundation
Aidan Peppin
Cohere For AI
Stephen Plank
Annie E. Casey Foundation
Andrew Strait
Ada Lovelace Institute

Martin Tisné
AI Collaborative
Sandra Topic
Amazon Web Services

Philanthropy Forum

PAI brought together the AI & Philanthropy Steering Committee and the broader community of PAI Partners at the AI & Philanthropy Forum in March 2024. The forum fostered meaningful conversations and collaborations, emphasizing strategic collective action to address the societal impacts of AI. It underscored the critical role of philanthropy in advancing social equity in the AI space; by continuing to prioritize inclusivity and collaboration, PAI has helped foster a landscape reflective of our values and aspirations.


“We recognize that artificial intelligence is a powerful force that can shape the future of our society and the well-being of our priority populations. That is why we are proud to support the Partnership on AI and its efforts to foster ethical and responsible AI development and use across the philanthropic sector and beyond.”

Jonathan Goldberg
Vice President of Learning & Impact
Surdna Foundation

Advancing Safety in AI

This year, PAI took proactive steps to address the deployment of AI systems in contexts where safety risks can have widespread consequences, including healthcare, finance, transportation, and media. Recognizing the urgent need for robust safety measures, we convened over 50 experts in a collaborative workshop with GitHub to explore safeguards for state-of-the-art open foundation models, as well as roles and responsibilities within the AI value chain. We also published three pilot study reports on our ABOUT ML initiative, which promotes the standardization of AI/ML documentation to enhance transparency and foster societal trust within organizations. Through comprehensive engagement, PAI continues to lead in establishing the foundation for responsible AI.

An Inclusive Economic Future for AI

In 2024, PAI championed the dialogue on AI’s role in shaping an equitable economic future. As AI continues to redefine automation and wealth distribution, we united partner organizations, economists, and worker representatives to craft actionable steps to ensure AI developers and policymakers are equipped to handle the challenges AI poses. Our work yielded the Path for Developing Responsible AI Supply Chains, a five-step guide to enhance accountability and governance. We also released resources for public comment, such as the Vendor Engagement Guidance and Transparency Template, essential tools for fostering responsible practices and improving transparency within AI-driven data supply chains. These initiatives mark strides toward shared responsibility and greater accountability in the evolving landscape of AI and labor.

2024 by the Numbers

Creating Community
that is international, inclusive, and equitable

16
countries
128
partners
92
convenings reaching 936 participants from 27 countries

Informing the public
about the social and societal impact of AI

58K+
social media followers
1.3B+
earned media & 3M+ social impressions
93K+
website visitors

Encouraging policy innovation
by governments

23
educational briefings for global policy makers
39
global governments and multilateral organizations participated in PAI forums
18
government participants across 8 countries attended PAI’s Policy Forum

Fostering changes in practice
in all sectors and broader communities

7
areas of focus
27
new resources
5000+
downloads of PAI resources

PAI Community

PAI continues to build a community of diverse voices across global sectors, disciplines, and demographics to serve as a powerful platform for fostering a future where AI technologies benefit people and society.

New Members of the Board of Directors

In 2024, PAI welcomed six new members to its Board of Directors. The new directors include leaders from the philanthropic, technology, and financial sectors, as well as civil society and academia, bringing a wealth of knowledge and expertise in advancing the responsible and ethical development and use of AI.

Esha Bhandari
ACLU

Natasha Crampton
Microsoft

Vukosi Marivate
University of Pretoria

Lori McGlinchey
Ford Foundation

Premkumar Natarajan
Capital One

Suresh Venkatasubramanian
Brown University


17 New Partners joined in 2024

PAI welcomed 17 new partners to our global network of more than 100 organizations to identify AI’s most important challenges and opportunities and co-develop practical, effective guidance for responsible AI.

Who We Are

PAI seeks to ensure that AI technologies benefit and empower as many people as possible.

At the heart of our work is a global community of partners committed to the responsible development and use of AI. With the help of our PAI community we are able to uphold the tenets that guide us in advancing positive outcomes for people and society.

  1. We will seek to ensure that AI technologies benefit and empower as many people as possible.
  2. We will educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.
  3. We are committed to open research and dialogue on the ethical, social, economic, and legal implications of AI.
  4. We believe that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.
  5. We will engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.
  6. We will work to maximize the benefits and address the potential challenges of AI technologies.
  7. We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.
  8. We strive to create a culture of cooperation, trust, and openness among AI scientists and engineers to help us all better achieve these goals.

 


PAI’s Staff

Thanks to our Funders and Partners

 

PAI is an independent, nonprofit 501(c)(3) organization funded by charitable contributions from philanthropic and corporate entities. Our 990 forms from previous years can be found on our funding page.

We thank all of our Partners who form this special community for their valuable insight and support.