AI & Media Integrity

Overview

While AI has ushered in an unprecedented era of knowledge-sharing online, it has also enabled novel forms of misinformation, manipulation, and harassment, and amplified the potential impact and reach of harmful digital content. PAI’s AI and Media Integrity Program directly addresses these critical challenges to the quality of public discourse by investigating AI’s impact on digital media and online information, researching timely subjects such as manipulated media detection, misinformation interventions, and content-ranking principles.

Through this Program, PAI works to ensure that AI systems bolster the quality of public discourse and online content around the world, which includes considering how we define quality in the first place. By convening a fit-for-purpose, multidisciplinary field of actors — including representatives from media, industry, academia, civil society, and the users who consume content — the AI and Media Integrity Program is developing best practices for AI to have a positive impact on the global information ecosystem.

Our AI and Media Integrity Work

Since its inception in 2019, the AI and Media Integrity Program has focused on projects that empower the public to distinguish between credible information and mis/disinformation. This has ranged from inquiries into how audiences interpret internet content to concrete recommendations for the field of synthetic media detection.

Currently, the Program is pursuing four distinct Workstreams exploring different intervention points for improving the broader quality and integrity of information online. The lifecycle of any piece of internet content begins with its creation. It is then distributed, usually on social media platforms, before finally being interpreted by an end user, also typically on these platforms. Our Synthetic and Manipulated Content, Content Targeting and Ranking, Audience Explanations, and Local News Workstreams investigate what can be done to promote a healthy information ecosystem across these stages.

AI and Media Integrity Steering Committee

The AI and Media Integrity Steering Committee is a formal body of PAI Partner organizations focused on projects confronting the emergent threat of AI-generated mis/disinformation, synthetic media, and AI’s effects on public discourse.

Laura Ellis

Head of Technology Forecasting

BBC

Sam Gregory

Program Director

WITNESS

Scott Lowenstein

Research and Development Strategist

New York Times

Bruce MacCormack

Senior Advisor, Business Strategy

CBC

Sean McGregor

Technical Lead

IBM Watson AI XPRIZE

Simon Morrison

Senior Policy Manager

Amazon

Jacqueline Pan

Senior Program Manager

Facebook AI

Andy Parsons

Director of the Content Authenticity Initiative

Adobe

Jay Stokes

Research Software Engineer

Microsoft

Claire Wardle

US Director

First Draft

Program Workstreams

Audience Explanations (Project Status: Collecting Insights)

Content Targeting and Ranking (Project Status: Collecting Insights)

Local News (Project Status: Identifying Topics)

Synthetic and Manipulated Content (Project Status: Developing Resources)