Championing accurate information became critical in 2020, when even minor misapprehensions about COVID-19 could threaten everyone’s well-being. Throughout the year, we worked with PAI Partner First Draft to support information integrity, investigating what works (and what does not) when addressing deceptive content online.
This collaboration bore its first fruit in June with the publication of “It matters how platforms label manipulated media. Here are 12 principles designers should follow.” Drawing on existing academic literature, original research, and interviews with industry experts, the guide offered decision-makers at social media platforms a concrete set of principles for minimizing the harms of manipulated media.
Amid rising awareness of their role in the spread of misinformation, social media companies have become increasingly proactive in moderating and labeling false content online. At the same time, the real-world impact of these interventions remains insufficiently studied. Platforms are now doing more, but what actions will actually reduce the internet’s mis- and disinformation problem?
Through our Media Integrity Issue Area, PAI has begun to answer this fundamental yet often neglected question. The design principles we published in June established a research-based foundation for the responsible labeling of manipulated media. Additional 2020 outputs gave social media platforms specific, immediately actionable recommendations for the automated categorization of manipulated media, and drew on interviews with end users to identify the limitations of common intervention strategies.
Like so many challenges currently facing the AI community, internet misinformation cannot be solved without inviting all stakeholders to share their varying needs and perspectives. At PAI, these stakeholders come together to work on urgent goals.