
PAI and First Draft Launch Research Fellowship on Media Manipulation

Co-authored by Claire Wardle, Founder and Co-Director, First Draft

AI technologies for media manipulation are becoming increasingly sophisticated, enabling the creation of fake videos that are difficult to detect with the naked eye. As we approach 2020, manipulated and synthesized videos threaten not only election integrity around the world, but also the promulgation of truth in general. It is therefore vital to consider how the media ecosystem and technology platforms can prevent audiences from being misled by deceptive information – by helping ensure that media consumers understand which content is manipulated and how.

On the basis of these concerns, the Partnership on AI (PAI) and First Draft, a leading non-profit tackling disinformation globally, are launching a Research Fellowship to investigate tactics for effectively communicating video manipulations to the public.

The Research Fellow will work within PAI’s AI and Media Research focus area, which examines the emergent threat of AI-generated mis/disinformation and synthetic media, and their impact on public discourse. While AI techniques provide new opportunities for video synthesis, many media manipulation techniques do not require AI. The Research Fellow will therefore consider the full spectrum of video manipulations that can impact information consumption. The Research Fellow will leverage PAI’s partner community of technology, media, civil society, and academic research organizations, and, specifically, First Draft’s expertise in the global fight against mis/disinformation.

First Draft has worked with news and civil society partners across the world since 2015, testing different methods for effectively slowing down mis/disinformation around elections. One critical question the organization has explored is how to label manipulated media, a challenge that requires a collective approach to solve.

In May 2019 – a week after a manipulated video of Nancy Pelosi, slowed to make her appear drunk and unfit for her role, went viral – PAI, the BBC, and WITNESS co-hosted a workshop focused on mis/disinformation with First Draft and other leading institutions. At the workshop, technology companies and journalistic entities grappled with how to react to videos and other content that, while derived from genuine media, are manipulated with the intent of deceiving viewers. They considered both AI-generated videos and those manipulated with less sophisticated techniques, like the Nancy Pelosi video. A breakout group focused on how to effectively communicate different video manipulation techniques to the public and, in the process, created an initial typology of possible video manipulations.

Mis/disinformation is much more than the text-based content that the term “fake news” might bring to mind. Disordered information ranges from genuine content used out of context to entirely fabricated deepfake videos generated with machine learning techniques. From psychology research, we know that people are more likely to remember images than words [1], which could have vast implications for how we label manipulated or synthesized videos. While many responses to mis/disinformation derive from research on text-based content, there is also a need to investigate how video and photo content should be treated to thwart the spread of mis/disinformation.

One limiting factor for previous research in this area is that much of the work on content manipulation and potential media responses has been conducted in universities in the United States, rather than tested “in the wild,” where information flows freely. Just as the blue verification tick and the hashtag became universally recognized symbols, we need a new visual language that is similarly universal and effective across different platforms and websites.

Would descriptive labels explaining how a video has been manipulated serve to educate viewers about the different ways videos can deceive and mislead? How can platforms and newsrooms collaborate on typologies and strategies for labeling videos that have been manipulated or synthesized? Answers to these questions must recognize the nuances inherent in different platforms and online contexts, as well as the fact that implementing disparate strategies across different platforms ignores the free-flowing nature of the online information ecosystem.

The PAI and First Draft Media Manipulation Research Fellowship is predicated on the belief that work on media manipulation (identifying, labeling, signaling) should be grounded in robust research that draws from the full range of stakeholders engaged in the fight against mis/disinformation, including civil society organizations, technology companies, media entities, and academic institutions.

We are seeking candidates from across the PAI partner community and beyond, specifically those with backgrounds in psychology, communication, political science, human-computer interaction, or other related disciplines. To learn more and apply for the fellowship, please see the detailed description here.


[1] See for instance: Curran, T., & Doyle, J. (2011). Picture superiority doubly dissociates the ERP correlates of recollection and familiarity. Journal of Cognitive Neuroscience, 23(5), 1247-1262.