AI & Media Integrity


While AI has ushered in an unprecedented era of knowledge-sharing online, it has also enabled novel forms of misinformation, manipulation, and harassment, and has amplified the potential impact and reach of harmful digital content. PAI’s AI and Media Integrity Program directly addresses these critical challenges to the quality of public discourse by investigating AI’s impact on digital media and online information, researching timely subjects such as manipulated media detection, misinformation interventions, and content-ranking principles.

Through this Program, PAI works to ensure that AI systems bolster the quality of public discourse and online content around the world, which includes considering how we define quality in the first place. By convening a fit-for-purpose, multidisciplinary field of actors — including representatives from media, industry, academia, civil society, and the users who consume content — the AI and Media Integrity Program is developing best practices for AI to have a positive impact on the global information ecosystem.

Our AI and Media Integrity Work

Since its inception in 2019, the AI and Media Integrity Program has focused on projects that empower the public to distinguish between credible information and mis/disinformation. This has ranged from inquiries into how audiences interpret internet content to concrete recommendations for the field of synthetic media detection.

Currently, the Program is pursuing four distinct Workstreams exploring different intervention points for improving the broader quality and integrity of information online. The lifecycle of any piece of internet content begins with its creation. It is then distributed, usually on social media platforms, before finally being interpreted by an end user, also typically on these platforms. Our Synthetic and Manipulated Content, Content Targeting and Ranking, Audience Explanations, and Local News Workstreams investigate what can be done to promote a healthy information ecosystem across these stages.

AI and Media Integrity Steering Committee

The AI and Media Integrity Steering Committee is a formal body of PAI Partner organizations focused on projects confronting the emergent threat of AI-generated mis/disinformation, synthetic media, and AI’s effects on public discourse.

Justin Arenstein


Code for Africa

Ed Bice



Chris Bregler

Director, Research


Laura Ellis

Head of Technology Forecasting


Emiliano Falcon-Morano

Technology for Liberty Policy Counsel

ACLU Massachusetts

Sam Gregory

Executive Director


Scott Lowenstein

Research and Development Strategist

New York Times

Bruce MacCormack

Senior Advisor, Business Strategy


Sean McGregor

Founding Director, Digital Safety Research Institute

UL Research Institutes

Simon Morrison

Senior Policy Manager


Jessica Young

Senior Program Manager, Science and Technology Policy


Polina Zvyagina

Global AI Policy & Governance Director


Program Workstreams

Audience Explanations
Content Targeting and Ranking
Local News
Synthetic and Manipulated Content