In the dynamic world of hip-hop, rap battles are as common as collaborations, and the latest clash to grip the industry has been a long-simmering feud. In the battle between rap heavyweights Kendrick Lamar and Drake, a new contender has emerged to join the age-old tradition: artificial intelligence. What began as a time-honored exchange of barbs on the airwaves has quickly turned into a deeper conversation about the ethical use of AI in the music industry.
The long-standing feud between the rappers escalated in April and May as diss tracks, songs intentionally disrespecting another artist, were rapidly fired back and forth. Social media accelerated the exchange, with some tracks dropping just minutes apart. Throughout the feud, however, listeners struggled to decipher which tracks were real and which were deepfakes, a form of synthetic media. With many diss tracks released directly online rather than through record labels, it can be hard to verify whether a track is an authentic artist release. On April 15, the diss track “One Shot,” allegedly by Kendrick, surfaced online and worsened the tension between Kendrick and Drake. The track was later confirmed as a deepfake when its creator came forward to explain how he had made it using AI, but until then many listeners, and even Drake, wondered whether it was an authentic piece released by Kendrick.
The feud has raised important questions about consent, authorship, and disclosure in the use of AI in the music industry. The inclusion of late legendary rapper Tupac’s AI-generated voice in Drake’s “Taylor Made Freestyle” sparked controversy and prompted a cease-and-desist letter from Tupac’s estate, which stated that it had not given consent to use Tupac’s voice and that the track was a “flagrant violation of Tupac’s publicity and the estate’s legal rights,” as well as disrespectful to the late artist’s legacy. As AI-generated, or synthetic, content becomes more sophisticated and accessible, it raises significant challenges across many industries and sectors. PAI’s Responsible Practices for Synthetic Media Framework begins to address these concerns by identifying three groups involved in the life cycle of synthetic media (Builders of AI-generation tools, Creators of synthetic media, and Distributors/Publishers of synthetic media) and providing each group with tailored guidelines for the development, creation, and distribution of AI-generated or synthetic media. The Framework aims to ensure ethical practices are followed, particularly around consent and the transparent disclosure of AI usage in media.
Consent
In the case of Drake’s use of Tupac’s AI-generated voice, lack of consent was a major issue. The Synthetic Media Framework states that creators of synthetic media should be transparent in “how [they] receive informed consent from the subject(s) of a piece of manipulated content, appropriate to product and context, except for when used toward reasonable artistic, satirical, or expressive ends.” This dispute highlights the growing concern over deceased artists’ inability to consent to the use of their likeness and raises ethical questions about a person’s right to rest undisturbed after death.
In March, PAI published 11 case studies, ten from the initial Framework launch cohort and one submitted by PAI itself, applying Framework principles to existing synthetic media challenges. Two of the case studies addressed securing consent from the deceased to create an AI likeness. AI video startup D-ID’s case study explained how the company worked with the immediate families of victims of domestic violence to receive consent for using the victims’ likenesses in a domestic violence awareness campaign. Human rights organization WITNESS’s case study highlighted the importance of consent in the case of a social media account that was using AI to generate imagined images of children who disappeared during Argentina’s military junta. Both case studies emphasized the importance of collaboration with the families of the deceased when it comes to digital resurrection, which was clearly not the case with Drake and Tupac’s estate.
Transparency via Disclosure
The confirmed and alleged use of AI in the diss tracks also presents the challenge of how to disclose that audio has been generated or modified by AI. PAI’s Framework recommends that when creating or distributing synthetic media, the synthetic elements should always be disclosed, especially if non-disclosure could alter how the content is perceived. Disclosure is essential to maintaining transparency. Some forms of synthetic media lend themselves to direct disclosure methods, such as an AI label on a TikTok video. There are also indirect disclosure methods, like cryptographic provenance via the C2PA open standard, which can in turn help platforms like TikTok directly disclose, or label, AI-generated content. Audio, however, presents a unique challenge: how should AI use be labeled in audio, and how can it be done without detracting from the listening experience? Voice-cloning AI startup Respeecher explores these questions in its case study on preventing misuse in the voice-cloning space. While Respeecher has adopted the C2PA open standard as a best practice, the company highlights the complexity of audio disclosures in an artistic context and the need for continued exploration of this challenge. In the case of the AI-generated diss track “One Shot,” the creator should have explored ways to disclose that the track was not an authentic Kendrick piece, to avoid confusion.
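For readers curious what such indirect disclosure looks like in practice, below is a minimal sketch of the kind of provenance metadata a C2PA manifest carries. The field names follow publicly documented C2PA conventions, but the tool name and filename are hypothetical, and a real manifest would be cryptographically signed and embedded in the audio file by a C2PA SDK rather than printed as plain JSON.

```python
import json

# A minimal sketch of a C2PA-style manifest declaring a track as AI-generated.
# Illustrative only: production manifests are signed and bound to the media
# file by a C2PA implementation, not emitted as loose JSON.
manifest = {
    "claim_generator": "example-voice-clone-tool/1.0",  # hypothetical tool name
    "title": "one-shot-parody.mp3",  # hypothetical filename
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        # Declares the audio was created by a trained AI model,
                        # using the IPTC digital source type vocabulary.
                        "action": "c2pa.created",
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

print(json.dumps(manifest, indent=2))
```

Because this metadata travels with the file, a platform that inspects it can automatically surface a direct disclosure, such as an “AI-generated” label, without relying on the creator to self-report at upload time.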
Who’s Responsible?
The Kendrick and Drake feud has made it clear that multiple stakeholders must play a role in regulating AI use in the music industry:
- Public – Audiences should cultivate AI literacy to better distinguish between authentic and synthetic media, though the burden of detection should not rest on them alone.
- Artists – Musicians and other creators should follow ethical practices when using AI-generated content in their work, such as those in the Synthetic Media Framework, which emphasizes transparency and obtaining proper consent.
- Record Labels – Record labels, like the one to which both Kendrick and Drake are signed, should implement policies governing artists’ use of AI in their music and should support their artists in understanding and navigating its complexities.
- Government – As AI policies take shape, policymakers should consider legislation that protects individual rights. Recent efforts, such as Tennessee’s ELVIS Act, which prohibits the use of AI to mimic an artist’s voice without permission, show how lawmakers can protect individuals’ rights. Of course, they must also work out how such laws will be enforced.
The Kendrick and Drake feud has highlighted the many challenges that AI poses in the music industry. As synthetic media grows more sophisticated and AI-generation tools become more accessible, distinguishing authentic media from partially or fully synthetic media will become increasingly difficult. Concerns about consent and authenticity will need to be met with responsible guidance such as PAI’s Responsible Practices for Synthetic Media. By following such guidelines and enhancing AI literacy, the music industry can responsibly adapt to the changing landscape of music and technology while fostering creative expression and preserving artistic integrity.
To keep up with the latest on PAI’s work on AI-generated media, sign up here.