A Matter of Self-Regulation: the Music Industry’s Short-Term Response to AI’s Legal Obscurity

Vaughn Gendron – In 1999, the music industry’s business model was turned upside down. With the introduction of Napster, copyrighted music was suddenly being shared and downloaded for free by millions of Internet users. This level of piracy led to a massive decline in music sales. However, thanks to copyright law, Napster was forced to settle for $26 million, a blow that ultimately led to the company’s demise and ended that era of widespread piracy.

Today, the music industry faces another landscape-shifting disruptor: generative AI. This technology gives users the tools to seamlessly produce songs in the vocal likeness of their favorite artists via AI-generated voice models. As a result, YouTube, TikTok, and various music streaming platforms (e.g., Spotify) have become inundated with AI-generated deepfakes. In July, the AI music creation platform “Mubert” announced that its technology had produced 100 million songs, roughly the size of Spotify’s entire catalog. Similarly, AI-generated videos on YouTube were viewed more than 1.7 billion times this year.

This surge in AI-created music has caught the attention of the entire music industry. Because AI voice models are trained on copyrighted music to replicate artists’ voices, various industry players have highlighted potential legal issues with the technology, including copyright and right of publicity concerns. However, unlike with Napster, the law as it currently stands is ill-equipped to handle AI-generated music.

Regarding the training of AI voice models on unlicensed copyrighted music, both artists and record labels have insisted that such use constitutes copyright infringement. Nonetheless, the law itself remains undecided on the issue. This year, courts have seen a swell of copyright infringement complaints against generative AI companies. These lawsuits, brought by artists, open-source coders, and Getty Images, concern the use of copyrighted material to train generative AI models. While many of these lawsuits remain undecided, a federal judge in California recently stated that he would likely dismiss a class action copyright suit brought by a group of artists, highlighting the uphill battle that music copyright owners face.

Another chief complaint of the music industry is the unauthorized use of artists’ voices in AI-generated deepfakes. While a voice is not copyrightable, some vocal protections are offered via the right of publicity. However, no federal statute recognizes a right of publicity. Rather, right of publicity laws vary from state to state, and California is the only state that explicitly protects voice through its right of publicity laws. Therefore, as of now, legal avenues to combat the replication of an artist’s voice are severely restricted.

Due to the present legal limitations, various actors in the music industry have lobbied Congress to enact regulatory protections against AI. Two weeks ago, songwriters met with members of Congress and requested that AI companies be required to obtain licenses in order to use copyrighted musical works in the training of their models. At a hearing of the Senate Judiciary Committee’s subcommittee on intellectual property in July, Universal Music Group’s General Counsel, Jeffrey Harleston, outlined several regulations that the industry giant would like to see implemented. These include a federal right of publicity law that protects voice, allowing copyright owners to see what has gone into training AI models, and labeling fully AI-generated content.

While awaiting rulings from the courts and legislation from Congress, the music industry appears to have focused on self-regulation. Several weeks ago, Universal Music Group (UMG) and YouTube announced a partnership via the “YouTube Music AI Incubator.” The incubator plans on bringing together various UMG artists “to help inform YouTube’s approach to generative AI in music.” The program will provide exploration of and feedback on AI tools, with the ultimate goal of finding ways for artists to profit from the technology.

In addition to the incubator, the partnership spawned YouTube’s first set of AI Music Principles, which emphasize a commitment to the responsible use of AI, the protection of artists and YouTube’s music partners, and the implementation of industry-leading content policies. YouTube has been highly criticized for its content moderation processes regarding the overwhelming number of deepfakes on the platform. The company’s CEO, Neal Mohan, addressed this issue in his announcement of the partnership. He stated that YouTube intends to enforce copyrights, monitor for unauthorized uses of artists’ voices, and bolster its content moderation policies to “cover the challenges of AI.”

There have also been reports of negotiations between UMG and Google on a licensing deal. The potential agreement would govern how artists’ voices and melodies are licensed for AI-generated songs. Further, the talks reportedly included the possible development of a tool that would allow users to make AI-generated songs while ensuring that the relevant copyright owners are paid. Importantly, artists would have the choice to opt in or not.

These recent developments shed light on the music industry’s potential short-term strategy to address generative AI. Instead of waiting for Congress or the courts to provide legal clarity, major industry players such as UMG have taken the proactive approach of self-regulation. Through partnerships and negotiations with major tech companies and music platforms, record labels can promote the protection of copyrights, their artists’ voices, and human creativity. Until Congress or the courts take action, the music industry and its partners will likely be forced to collectively police themselves.
