As technology companies take on a greater role in moderating misinformation on their platforms, they are tasked not only with evaluating what content is credible but also with deciding how to address content that is not. Ultimately, however, the efficacy of these interventions — whether they are warning labels, info hubs, or something else — is determined by how they are interpreted by their intended audiences.
The Audience Explanations Workstream is researching how to design effective interventions that empower users to discern what content is credible and to better make sense of information online. This Workstream’s research questions include:
- How do different audiences make sense of online information and content, including misinformation?
- What should platforms do to mitigate user belief in harmful or misleading content and amplify belief in credible content?