Our Work

Our work contributes to the rigorous development of resources, recommendations, and best practices for AI.

Inclusive Research & Design

At PAI, equity and inclusion are core values that we seek to promote among our Partner organizations, in our own work, and throughout the greater AI field, including in machine learning and other automated decision-making systems. This Program explores the many barriers to inclusion in the AI space, as experienced both by those who work in technology and by those who are consistently excluded from key decision-making processes.

The Inclusive Research and Design Program is currently creating resources to help AI practitioners and impacted communities more effectively engage one another to develop AI responsibly. Ultimately, this work seeks to achieve a more holistic reimagining of how AI is developed and deployed around the world, leading to an AI industry that recognizes end users and impacted communities as essential expert groups.


AI and Media Integrity

While AI has ushered in an unprecedented era of knowledge-sharing online, it has also enabled novel forms of misinformation, manipulation, and harassment, creating new categories of harmful digital content and extending their potential reach. PAI’s AI and Media Integrity Program directly addresses these critical challenges to the quality of public discourse, researching timely subjects such as manipulated media detection, misinformation interventions, and content-ranking principles.

Through this Program, PAI works to ensure that AI systems bolster the quality of public discourse and online content around the world, which includes considering how we define quality in the first place. By convening a fit-for-purpose, multidisciplinary field of actors, including representatives from media, industry, academia, civil society, and users themselves, the AI and Media Integrity Program is developing best practices for AI to have a positive impact on the global information ecosystem.


AI, Labor, and the Economy

PAI believes that AI has tremendous potential to solve major societal problems and make people's lives better. At the same time, individuals and organizations must grapple with new forms of automation, wealth distribution, and economic decision-making. Whether AI promotes equality or deepens injustice, and whether it makes all of us richer or the poor poorer, is a choice we, as a world, must consciously make.

To advance a beneficial economic future from AI, the AI, Labor, and the Economy Program gathers Partner organizations, economists, and worker representative organizations. Together, these actors work to develop shared answers and recommendations for the actionable steps needed to ensure AI supports an inclusive economic future.


Fairness, Transparency, and Accountability & ABOUT ML

As AI systems are deployed across an ever-growing number of domains, the fairness, transparency, and accountability of these systems have become a critical societal concern. This Program examines the intersections between AI and some of humanity's most fundamental values, addressing urgent questions about algorithmic equity, explainability, responsibility, and inclusion.

Through original research and multistakeholder input, our Fairness, Transparency, and Accountability work asks how AI can build a world that is more (and not less) just than the one that came before it. And by offering actionable resources for implementing transparency at scale, ABOUT ML seeks to operationalize these insights with full-cycle documentation of machine learning systems.


Public Policy

The AI policy space has seen a rapidly maturing governance landscape, with the introduction of regulatory proposals in several jurisdictions, the ongoing development of a variety of international frameworks, specifications, and methods, and growth in research and in the creation of tools to advance responsible, trustworthy AI. Ensuring that these emerging frameworks, tools, and standards are consistent with one another and support known best practices will require greater coordination between policymakers, civil society, academia, and industry.

PAI’s Policy work seeks to facilitate this coordination by convening stakeholders to develop evidence-based frameworks, promoting a shared understanding of how policy can foster responsible AI practices, building connections across borders to support global equity and interoperability, and working with Partners to ensure policy implementation is impactful.


Safety-Critical AI

How can we ensure that AI and machine learning technologies are safe? This is an urgent short-term question, with applications in computer security, medicine, transportation, and other domains. It is also a pressing longer-term question, particularly with regard to environments that are uncertain, unanticipated, and potentially adversarial.

As our lives become increasingly saturated with artificial intelligence systems, the safety of these systems becomes a vital consideration. The Safety-Critical AI Program seeks to establish social and technical foundations that will support the safe development and deployment of AI.