AI and Media Integrity
While AI has ushered in an unprecedented era of knowledge-sharing online, it has also enabled novel forms of misinformation, manipulation, and harassment, creating new categories of harmful digital content and extending their potential reach. PAI’s AI and Media Integrity Program directly addresses these critical challenges to the quality of public discourse, researching timely subjects such as manipulated media detection, misinformation interventions, and content-ranking principles.
Through this Program, PAI works to ensure that AI systems bolster the quality of public discourse and online content around the world, which includes considering how we define quality in the first place. By convening a fit-for-purpose, multidisciplinary field of actors—including representatives from media, industry, academia, civil society, and users themselves—the AI and Media Integrity Program is developing best practices for AI to have a positive impact on the global information ecosystem.
AI, Labor, and the Economy
PAI believes that AI has tremendous potential to solve major societal problems and make people's lives better. At the same time, individuals and organizations must grapple with new forms of automation, wealth distribution, and economic decision-making. Whether AI promotes equality or increases injustice, and whether it makes all of us richer or the poor poorer, is a choice we, as a world, must consciously make.
To advance a beneficial economic future from AI, the AI, Labor, and the Economy Program gathers Partner organizations, economists, and worker representative organizations. Together, these actors work to form shared answers and actionable recommendations for ensuring AI supports an inclusive economic future.
Fairness, Transparency, and Accountability & ABOUT ML
As AI systems are deployed across an ever-growing number of domains, the fairness, transparency, and accountability of these systems have become critical societal concerns. This Program examines the intersections between AI and some of humanity's most fundamental values, addressing urgent questions about algorithmic equity, explainability, responsibility, and inclusion.
Through original research and multistakeholder input, our Fairness, Transparency, and Accountability work asks how AI can build a world that is more (and not less) just than the one that came before it. And by offering actionable resources for implementing transparency at scale, ABOUT ML seeks to operationalize these insights with full-cycle documentation of machine learning systems.
Safety-Critical AI
How can we ensure that AI and machine learning technologies are safe? This is an urgent short-term question, with applications in computer security, medicine, transportation, and other domains. It is also a pressing longer-term question, particularly with regard to environments that are uncertain, unanticipated, and potentially adversarial.
As our lives become increasingly saturated with artificial intelligence systems, the safety of these systems becomes a vital consideration. The Safety-Critical AI Program seeks to establish social and technical foundations that will support the safe development and deployment of AI.