We are a non-profit community of academic, civil society, industry, and media organizations addressing the most important and difficult questions concerning the future of AI.

Stories of Impact

By creating actionable resources for the AI community, PAI translates critical insights into positive impact on the world.

Investigating Challenges to Diversity in AI

Working in partnership with DeepMind, PAI researchers launched a study to investigate high attrition rates among women and minoritized individuals in tech.

EXPLORE

Tracking When AI Systems Fail

Composed of more than 1,200 reports of AI failures that caused harms or near-harms, the AI Incident Database (AIID) serves as a much-needed tool for AI researchers and developers, cataloging a wide variety of real-world risks posed by automated systems.

EXPLORE

Taking a Methodical Approach to Best Practices

PAI worked with First Draft to support information integrity, investigating what works (and what does not) when addressing deceptive content online.

EXPLORE

Learn More About Our Impact

Explore

Our Work

Currently organized under four Programs, our work contributes to the rigorous development of resources, recommendations, and best practices for AI.

AI, Labor, and the Economy

PAI believes that AI has tremendous potential to solve major societal problems and make people’s lives better. At the same time, individuals and organizations must grapple with new forms of automation, wealth distribution, and economic decision-making. Whether AI promotes equality or increases injustice, and whether it makes all of us richer or the poor poorer, is a choice we, as a world, must consciously make.

To advance a beneficial economic future with AI, the AI, Labor, and the Economy Program convenes Partner organizations, economists, and worker representative organizations. Together, they work toward shared answers and actionable recommendations to ensure AI supports an inclusive economic future.

Learn More

AI and Media Integrity

While AI has ushered in an unprecedented era of knowledge-sharing online, it has also enabled novel forms of misinformation, manipulation, and harassment and amplified the potential impact and reach of harmful digital content. PAI’s AI and Media Integrity Program directly addresses these critical challenges to the quality of public discourse by investigating AI’s impact on digital media and online information, researching timely subjects such as manipulated media detection, misinformation interventions, and content-ranking principles.

Through this Program, PAI works to ensure that AI systems bolster the quality of public discourse and online content around the world, which includes considering how we define quality in the first place. By convening a fit-for-purpose, multidisciplinary field of actors, including representatives from media, industry, academia, civil society, and the users who consume content, the AI and Media Integrity Program is developing best practices for AI to have a positive impact on the global information ecosystem.

Learn More

Fairness, Transparency, and Accountability & ABOUT ML

Fairness, Transparency, and Accountability encompasses PAI’s large body of research and programming around algorithmic fairness, explainability, criminal justice, and diversity and inclusion. In 2020 alone, this work examined the challenges organizations face when they seek to measure and mitigate algorithmic bias using demographic data, provide meaningful explanations to diverse stakeholders, address bias in recidivism risk assessment tools, and build more inclusive AI teams.

With ABOUT ML, PAI is leading a multistakeholder effort to develop guidelines for the documentation of machine learning systems, setting new industry norms for transparency in AI. This means not just identifying the necessary components of transparency, but releasing actionable resources to help organizations operationalize transparency at scale. Developed through an iterative, multistakeholder process, these resources pool the collective efforts and insights of academic researchers, industry practitioners, civil society organizations, and the impacted public.

Learn More

Safety-Critical AI

How can we ensure that AI and machine learning technologies are safe? This is an urgent short-term question, with applications in computer security, medicine, transportation, and other domains. It is also a pressing longer-term question, particularly with regard to environments that are uncertain, unanticipated, and potentially adversarial.

As our lives become increasingly saturated with artificial intelligence systems, the safety of these systems becomes a vital consideration. The Safety-Critical AI Program seeks to establish social and technical foundations that will support the safe development and deployment of AI.

Learn More

Learn More About Our Programs

Explore

Our Partners

With a diverse Partner community drawn from across the globe, PAI spans sectors, disciplines, and borders.

By gathering the leading companies, organizations, and people differently affected by artificial intelligence, PAI establishes common ground among entities that otherwise might not have cause to work together and, in so doing, serves as a uniting force for good in the AI ecosystem.

View all Partners