We are a non-profit community of academic, civil society, industry, and media organizations addressing the most important and difficult questions concerning the future of AI.
Stories of Impact
By creating actionable resources for the AI community, PAI translates critical insights into positive impact on the world.
Investigating Challenges to Diversity in AI
Working in partnership with DeepMind, PAI researchers launched a study to investigate high attrition rates among women and minoritized individuals in tech.
Convening Across Industries
In service of transparency and accountability goals, PAI hosted a one-day, in-person workshop around the deployment of “explainable artificial intelligence” (XAI).
Taking a Methodical Approach to Best Practices
PAI worked with First Draft to support information integrity, investigating what works (and what does not) when addressing deceptive content online.
Currently organized under Programs, our work contributes to the rigorous development of resources, recommendations, and best practices for AI.
Inclusive Research & Design
At PAI, equity and inclusion are core values that we seek to promote among our Partner organizations, in our own work, and throughout the greater AI field, including in machine learning and other automated decision-making systems. This Program explores the many barriers to inclusion in the AI space — as experienced both by those who work in technology and by those who are consistently excluded from key decision-making processes.
The Inclusive Research and Design Program is currently creating resources to help AI practitioners and impacted communities more effectively engage one another to develop AI responsibly. Ultimately, this work seeks to achieve a more holistic reimagining of how AI is developed and deployed around the world, leading to an AI industry that recognizes end users and impacted communities as essential expert groups.
AI, Labor, and the Economy
PAI believes that AI has tremendous potential to solve major societal problems and make people’s lives better. At the same time, individuals and organizations must grapple with new forms of automation, wealth distribution, and economic decision-making. Whether AI promotes equality or deepens injustice, and whether it makes all of us richer or the poor poorer, is a choice we, as a world, must consciously make.
To advance a beneficial economic future shaped by AI, the AI, Labor, and the Economy Program gathers Partner organizations, economists, and worker representative organizations. Together, these actors work to develop shared answers and actionable recommendations to ensure AI supports an inclusive economic future.
AI and Media Integrity
While AI has ushered in an unprecedented era of knowledge-sharing online, it has also enabled novel forms of misinformation, manipulation, and harassment, and amplified the potential impact and reach of harmful digital content. PAI’s AI and Media Integrity Program directly addresses these critical challenges to the quality of public discourse by investigating AI’s impact on digital media and online information, researching timely subjects such as manipulated media detection, misinformation interventions, and content-ranking principles.
Through this Program, PAI works to ensure that AI systems bolster the quality of public discourse and online content around the world, which includes considering how we define quality in the first place. By convening a fit-for-purpose, multidisciplinary field of actors — including representatives from media, industry, academia, civil society, and the users who consume content — the AI and Media Integrity Program is developing best practices for AI to have a positive impact on the global information ecosystem.
Fairness, Transparency, and Accountability & ABOUT ML
Fairness, Transparency, and Accountability encompasses PAI’s large body of research and programming around algorithmic fairness, explainability, criminal justice, and diversity and inclusion. In 2020 alone, this work examined the challenges organizations face when they seek to measure and mitigate algorithmic bias using demographic data, provide meaningful explanations to diverse stakeholders, address bias in recidivism risk assessment tools, and build more inclusive AI teams.
With ABOUT ML, PAI is leading a multistakeholder effort to develop guidelines for the documentation of machine learning systems, setting new industry norms for transparency in AI. This means not just identifying the necessary components of transparency, but releasing actionable resources to help organizations operationalize transparency at scale. Developed through an iterative, multistakeholder process, these resources pool the collective efforts and insights of academic researchers, industry practitioners, civil society organizations, and the impacted public.
Safety Critical AI
Given AI’s potential for misuse, how do we develop and deploy algorithmic systems responsibly? Increasingly, AI systems are being deployed in contexts where safety risks can have widespread consequences, including medicine, finance, transportation, and social media. This makes anticipating and mitigating such risks — in both the near and long term — an urgent societal need.
Our Safety Critical AI Program convenes Partners and other stakeholders to develop best practices that can help us avert likely accidents, misuses, and unintended consequences of AI technologies. We don’t have to wait for such incidents to arise. As our work shows, precautions can be taken as early as the research stage to ensure the development of safe AI systems.
With a diverse Partner community drawn from members across the globe, PAI spans sectors, disciplines, and borders.
By gathering the leading companies, organizations, and people differently affected by artificial intelligence, PAI establishes common ground between entities that otherwise may not have cause to work together and, in so doing, serves as a uniting force for good in the AI ecosystem.