Can responsible synthetic media practices prevent the negative impact of AI on elections?
- To highlight how generative AI tools are already playing a role in the 2024 global elections cycle and how Framework recommendations can be used to mitigate harm, PAI examined the use of deepfake audio in three global election contexts.
- PAI applied its Framework to deepfake audio use cases in Slovakia, Pakistan, and the United States. Only in Pakistan, where deepfake audio was layered over authentic footage, were disclosure and consent applied. In the other two cases, questions arose about whether politicians, as public figures, need to give consent at all.
- The Framework’s disclosure recommendations, if implemented, might have prevented the harm stemming from the misuse of audio deepfakes in two of these examples.
This is PAI’s Case Submission to the Synthetic Media Framework. Explore the other case studies.