Workstream: Synthetic and Manipulated Content

Manipulated Media Detection Requires More Than Tools: Community Insights on What’s Needed

Jonathan Stray

July 13, 2020

5 Urgent Considerations for the Automated Categorization of Manipulated Media

PAI Staff

June 29, 2020

A Report on the Deepfake Detection Challenge

Claire Leibowicz

March 12, 2020

On AI & Media Integrity: Insights from the Deepfake Detection Challenge

Claire Leibowicz

December 11, 2019

PAI and First Draft Launch Research Fellowship on Media Manipulation

Claire Leibowicz

November 27, 2019

The Partnership on AI Steering Committee on AI and Media Integrity

Terah Lyons

September 5, 2019

Protecting Public Discourse from AI-Generated Mis/Disinformation

Penelope Sosa

June 17, 2019


© 2023 Partnership on AI | All Rights Reserved
