As technology companies take on a greater role in moderating misinformation on their platforms, they are tasked not only with evaluating what content is credible but also with deciding how to address content that is not. Ultimately, however, the efficacy of these interventions — whether they are warning labels, info hubs, or something else — is determined by how they are interpreted by their intended audiences.
The Audience Explanations Workstream is researching how to design interventions that are effective, empowering users to discern which content is credible and make better sense of information online. This Workstream’s research questions include:
- How do different audiences make sense of online information and content, including misinformation?
- What should platforms do to mitigate user belief in harmful or misleading content and amplify belief in credible content?
> “I have been incredibly impressed with PAI’s focus on media integrity issues as they relate to platform interventions. The work has been professional, rigorous, and much-needed.”
Misinformation interventions are common, divisive, and poorly understood
Every day, social media platforms intervene on thousands of posts containing misinformation. This paper quantifies how the public feels about interventions and offers four implications for intervention design.
- Fact-Checks, Info Hubs, and Shadow-Bans: A Landscape Review of Misinformation Interventions
- From Deepfakes to TikTok Filters: How Do You Label AI Content?
- Labeling Misinformation Isn’t Enough. Here’s What Platforms Need to Do Next.
- Warning Labels Won’t Be Enough to Stop Vaccine Misinformation
- It matters how platforms label manipulated media. Here are 12 principles designers should follow.
- Partnership on AI & First Draft Begin Investigating Labels for Manipulated Media