
Partnership on AI & First Draft Begin Investigating Labels for Manipulated Media


In the months since we announced the PAI/First Draft Media Manipulation Research Fellowship, the need for collaborative, rigorous research on whether to add descriptive labels to manipulated and synthetic media, and how to design those labels so audiences understand them, has become increasingly urgent.

Technology platforms and media entities face challenges not only in evaluating whether audio-visual content is manipulated or misleading, but also in determining how to act on that evaluation. Labeling is touted as an effective mechanism for providing adequate disclosure to audiences, thereby mitigating the impact of mis/disinformation; it has even been highlighted by platform users themselves as a particularly appealing strategy for dealing with manipulated and synthetic media. [1]

While labels have the potential to provide important signals to online content consumers, they may also lead to unintended consequences that amplify the effects of audio-visual mis/disinformation (both AI-generated and low-tech varieties). We must, therefore, work collaboratively to investigate the impact labeling has on audiences. Doing so can help preempt any unintended consequences of scattered, hasty deployment of manipulated and synthetic media labels and ensure they prove useful for mitigating the effects of mis/disinformation.

Why we must study the impact of media manipulation labels

Labeling could vastly change how audiences make sense of audio-visual content online. Academic research on the “implied truth effect” suggests that attaching warning labels to a portion of inaccurate headlines can actually increase the perceived accuracy of other headlines that lack warnings. [2] This line of research is still in its nascent stages and has focused on text-based mis/disinformation, but it raises an important question: might video and image labels similarly change how people make sense of, and trust, content that is not labeled?

Such early findings imply a need to consider the potential unintended consequences of deploying labeling interventions that do not draw on diverse input and testing. Rigorous academic studies can take up to two years to be published, yet organizations responding to mis/disinformation challenges must take swift action on manipulated and synthetic media. We must therefore work promptly alongside the academic community to better understand how labels, the language we use in them, and their dynamics across different platforms and online spaces impact perceptions of information integrity.

This is an area that is ripe for multistakeholder, coordinated effort from technology platforms, media entities, academics, and civil society organizations – within PAI and beyond. Each technology platform presents audio-visual information differently, and even within a single tech organization there may be visually distinct platforms; entirely disparate labeling methods across platforms could therefore sow confusion in an already complex and fluid online information ecosystem. As we’ve suggested previously, we need to evaluate a new visual language that is more universal and proves effective across different platforms and websites. This underscores the potential benefit of cross-sector collaboration in testing, sharing insights, and deploying interventions to deal with manipulated media.

Our Upcoming Work

PAI and First Draft are excited to begin work that furthers our community’s collective understanding of how the language and eventual labels we use to describe audio-visual manipulations impact audience perceptions of information integrity. It has been heartening to see related efforts from The Washington Post [3] and the Duke Reporter’s Lab/Schema.org [4] that attempt to drive towards shared language amongst fact-checkers, platforms, and researchers to capture the nuances of different audio-visual manipulations.

Our research seeks to understand how labels describing manipulated and synthetic media might ultimately be leveraged to help audiences and end-users recognize mis/disinformation and interpret content online. To do so, we plan to map the landscape of existing interventions, develop a series of manipulated media labels for testing, and then conduct audience research on the label designs and on how labeling impacts audiences more generally. This research will help illuminate whether a robust labeling mechanism can help audiences confidently and accurately recognize mis/disinformation. In doing so, we hope to improve the general understanding of how people interpret mis/disinformation and of how effective interventions are at helping them do so.

Introducing our Media Manipulation Research Fellow

Emily Saltz has joined the PAI team as the PAI/First Draft Media Manipulation Research Fellow to drive this timely work. Emily joins us from The New York Times (a PAI Partner), where she was the User Experience Lead on the News Provenance Project. Her previous research focused on how people assess news photography in their social media feeds, and what types of labels and contextual information might help audiences better make sense of the photo posts they see. Emily brings her design and research experience to this very human-centered work, work that will only become more integral to ensuring information integrity as techniques for manipulating audio-visual content become more widespread and varied.

PAI and First Draft look forward to collaborating with our Partners and beyond on this important project, one that requires consistent and collective attention across the media integrity community. We plan to share many of our findings as we work with a diverse suite of stakeholders to consider meaningful methods for promoting information integrity.

This article has been syndicated on First Draft.

[1] https://blog.twitter.com/en_us/topics/company/2020/new-approach-to-synthetic-and-manipulated-media.html

[2] Pennycook, G., Bear, A., Collins, E. T., & Rand, D. G. (2020). The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings. Management Science.

[3] https://www.washingtonpost.com/graphics/2019/politics/fact-checker/manipulated-video-guide/

[4] https://docs.google.com/document/d/1jRbX2IesVQrWvKpehb8ntSMKe0D88bZp3nK8ZAjq6E4/edit

[5] https://www.niemanlab.org/2020/01/is-this-video-missing-context-transformed-or-edited-this-effort-wants-to-standardize-how-we-categorize-visual-misinformation/