Workstream: Audience Explanations

  • Fact-Checks, Info Hubs, and Shadow-Bans: A Landscape Review of Misinformation Interventions
    Claire Leibowicz, June 14, 2021
  • From Deepfakes to TikTok Filters: How Do You Label AI Content?
    PAI Staff, May 21, 2021
  • Labeling Misinformation Isn’t Enough. Here’s What Platforms Need to Do Next.
    Claire Leibowicz, March 11, 2021
  • Warning Labels Won’t Be Enough to Stop Vaccine Misinformation
    PAI Staff, February 18, 2021
  • It Matters How Platforms Label Manipulated Media. Here Are 12 Principles Designers Should Follow.
    Tommy Shane, June 9, 2020
  • Partnership on AI & First Draft Begin Investigating Labels for Manipulated Media
    Claire Leibowicz, March 25, 2020
  • The Partnership on AI Steering Committee on AI and Media Integrity
    Terah Lyons, September 5, 2019

© 2025 Partnership on AI | All Rights Reserved
