On AI & Media Integrity: Insights from the Deepfake Detection Challenge

Coordinated, multistakeholder work that brings together researchers, technologists, advocates, and other experts in the AI and media ecosystem is vital for countering the emergent threat of AI-generated mis/disinformation. With our involvement in the Deepfake Detection Challenge (DFDC), the Partnership on AI (PAI) is helping shape the development of technologies built to identify video manipulations and promote media integrity.

PAI created the AI and Media Integrity Steering Committee as a formal body of experts and key organizations focused on developing and advising projects that strengthen media integrity, including work on how mis/disinformation is produced and detected.

For the DFDC, the Steering Committee sought to mobilize the global AI research community around this timely issue while keeping in mind the real-world implications and contexts in which deepfake videos are often weaponized. The group did so by guiding challenge governance and helping to shape the scoring tactics, the leaderboard, model access parameters, and the structure of entry requirements.

Through regular meetings and conversations that bridged disciplines, Steering Committee members honed elements of the DFDC through:

Improved Coordination Across the Media Ecosystem 

The Steering Committee brought together experts from technology companies, mainstream media, and civil society, enabling the designers of the machine learning challenge to consider their goals as part of a more holistic approach to information integrity. The group includes representatives from Amazon, the BBC, CBC/Radio-Canada, Facebook, First Draft, Microsoft, The New York Times, WITNESS, and XPRIZE. Coordination across the organizations affected by the emergent threat of synthetic media is essential for media integrity efforts to have maximum impact.

Bridging Technical and Social Considerations 

The challenges associated with information integrity in the AI age require solutions that include technical tactics (like detection) as well as attention to the behavioral, social, and organizational dynamics that affect how information is consumed and disseminated online. Many of the technical elements that the Steering Committee reviewed also prompted social considerations. How might we eventually construct a video dataset that reflects a realistic distribution of the types of deepfakes developed to sow discord or cause harm? How might model access options foster the development of tools that help the global journalistic and fact-checking community make decisions about video authenticity? At the same time, how might constraints on model access deter or prevent abuse by adversarial actors trying to evade detection? Answers to these questions guided elements of the challenge, including its scoring rules (sketched below) and the access options for model submissions. The group plans to create a best-practices document highlighting suggested future directions for machine learning challenges focused on synthetic media detection.
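
To make the scoring discussion concrete: detection challenges of this kind are typically scored with a log-loss metric over per-video predictions, which rewards well-calibrated confidence and heavily penalizes confident wrong answers. The sketch below is a minimal, hypothetical illustration of such a metric, not the DFDC's actual scoring code; the function name and data layout are assumptions.

```python
import math

def log_loss_score(predictions, labels, eps=1e-15):
    """Mean binary log loss over a set of videos: lower is better.

    predictions: probabilities that each video is a deepfake.
    labels: ground-truth labels (1 = fake, 0 = real).
    """
    assert len(predictions) == len(labels)
    total = 0.0
    for p, y in zip(predictions, labels):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(labels)

# A confident wrong answer (0.01 on a real fake) dominates the score.
print(log_loss_score([0.9, 0.2, 0.01], [1, 0, 1]))
```

A metric like this shapes participant behavior: because a single overconfident mistake can ruin a submission's score, entrants are pushed toward calibrated models, which matters when the downstream users are journalists weighing how much to trust a verdict.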

Advocating for the Development of Deepfake Detection Tools for Journalists and Fact-Checkers 

While the DFDC is a specific machine learning challenge aimed at producing technical models for deepfake detection, the Steering Committee emphasized that it might be possible to incorporate those models into useful video verification tools for journalists and fact-checkers, key constituencies in the fight against mis/disinformation. The Steering Committee will be exploring the development of such tools in the coming months as a follow-on from its work on the DFDC.

“Technical choices around deepfake detection have real-world implications for how likely it is that real people facing threats or challenging disinformation globally will have access to the right tools to help them. That is central to WITNESS’s work, and it’s been a key opportunity to have the AI and Media Integrity Steering Committee support a cross-disciplinary conversation to shape the Deepfake Detection Challenge and hopefully future efforts,” said Sam Gregory, program director at WITNESS. “We can ensure these efforts center the threats and the people working to counter them, not just in the US and Europe but globally, at the heart of efforts. The lesson of past negative impacts of technology globally is that we have to have these conversations early and inclusively.”

The complex socio-technical challenges around synthetic media detection require the type of multistakeholder input that PAI’s AI and Media Integrity Steering Committee brought to the DFDC. Projects like the DFDC demonstrate how the technology industry can effectively mobilize collective attention and cross-sectoral involvement on issues of media integrity, and how situating these efforts within the broader global community is integral to upholding public discourse.