Your Voice Matters
The ELVIS Act addresses the growing concern over AI's ability to replicate human voices without consent, extending existing protections for name, image, and likeness to include voice.
Let's look at how this legislation (barely over 3 pages) has significant implications for the music industry, particularly in light of the current state of AI technology, similar legislation in development, and the evolving interpretation of these laws.
The music industry has been making strides in embracing AI.
Several platforms offer authorized voice cloning services that ensure artists are compensated. These platforms balance AI innovation with ethical practices, protecting and rewarding creators.
Of course, where there’s a will to create, there’s a way to fake!
We have seen AI voice detection tools coming on the market; however, fewer specifically target AI-generated singing voices, which must be trained on different datasets than spoken word. There is also a need for systems that can distinguish an authorized AI clone from an unauthorized production.
Spotify opposes unauthorized AI-generated content that impersonates artists. CEO Daniel Ek said such music is "absolutely not acceptable." However, without a tagging system, it's hard for listeners to identify AI-generated content.
YouTube now allows artists and their representatives to request the removal of videos that use AI to imitate their voices without permission. This includes AI-generated music covers that mimic an artist’s unique singing or rapping voice.
Resemble AI's Resemble Detect is used by Spotify to scan for anomalies in its content library. It could identify unauthorized AI-generated voice clones, though how it distinguishes between authorized and unauthorized uses is unclear.
The SingFake Project (SVDD Challenge 2024) created a dataset and detection models for identifying AI-generated singing voices, tackling challenges like background music and singing-specific features such as pitch variation. It adapts speech countermeasure systems for singing detection, showing improvements when retrained on singing-specific data. However, the dataset is small and lacks diversity, and the challenge prioritizes model architecture over raw accuracy, drawing its data from music posted to YouTube.
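To make the detection idea concrete, here is a minimal sketch of the kind of pipeline such systems build on: extract a spectral representation from audio, then score it with a classifier. This is an illustration only, not the SingFake team's actual method; the feature (spectral flatness of a log spectrogram) and the scoring function are hypothetical stand-ins for a trained model.

```python
import numpy as np

def log_spectrogram(audio, frame_len=512, hop=256):
    """Frame the waveform and compute a log-magnitude spectrogram."""
    frames = [audio[i:i + frame_len]
              for i in range(0, len(audio) - frame_len + 1, hop)]
    window = np.hanning(frame_len)
    spec = np.abs(np.fft.rfft(np.array(frames) * window, axis=1))
    return np.log1p(spec)  # shape: (num_frames, frame_len // 2 + 1)

def toy_detector_score(audio):
    """Hypothetical score in [0, 1]; a real system uses a trained model here."""
    spec = log_spectrogram(audio)
    # Spectral flatness (geometric mean / arithmetic mean): peaked, harmonic
    # spectra score low, flat noise-like spectra score high.
    flatness = np.exp(np.mean(spec)) / (np.mean(np.exp(spec)) + 1e-9)
    return float(np.clip(flatness, 0.0, 1.0))

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 220 * t)                       # harmonic, voice-like
    noise = np.random.default_rng(0).standard_normal(sr) * 0.1  # flat spectrum
    print(toy_detector_score(tone))
    print(toy_detector_score(noise))
```

Real singing-voice detectors replace the hand-crafted flatness score with a neural network trained on labeled bonafide and deepfake vocals, which is exactly where singing-specific training data matters.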
Sound Ethics has developed its own AI singing voice detector. Their research shows that multiple models are needed to detect different vocal styles and to serve as an effective tool against unauthorized usage. They are now focused on frameworks and systems to distinguish between authorized and unauthorized voice clones.
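Distinguishing authorized from unauthorized clones is less a detection problem than a bookkeeping one: once content is flagged as AI-generated, the system needs to check whether a valid license exists. The sketch below illustrates that idea with a toy in-memory consent registry; every name, field, and data structure here is hypothetical, not Sound Ethics' or any platform's actual design.

```python
from dataclasses import dataclass

@dataclass
class License:
    """A toy record of an artist's consent to voice cloning by a licensee."""
    artist: str
    licensee: str
    expires: int  # Unix timestamp

# Hypothetical registry mapping (artist, licensee) -> License
REGISTRY = {
    ("Artist A", "Platform X"): License("Artist A", "Platform X", 2_000_000_000),
}

def classify_upload(is_ai_voice: bool, artist: str, uploader: str, now: int) -> str:
    """Return 'human', 'authorized-clone', or 'unauthorized-clone'."""
    if not is_ai_voice:
        return "human"
    lic = REGISTRY.get((artist, uploader))
    if lic is not None and lic.expires > now:
        return "authorized-clone"
    return "unauthorized-clone"

# An upload by the licensed platform passes; anyone else gets flagged.
print(classify_upload(True, "Artist A", "Platform X", now=1_700_000_000))
print(classify_upload(True, "Artist A", "Someone Else", now=1_700_000_000))
```

In practice such a registry would need verified artist identity, audio fingerprinting or watermarking to tie a recording to a license, and expiry/revocation handling, which is why frameworks in this space are still under development.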
The ELVIS Act is the first legislation of its kind to make “voice” a right of publicity protected from improper impersonation with AI technology.
Similar bills have recently been introduced in Congress.
No lawsuits have been reported yet. Experts worry the ELVIS Act's broad language could produce unintended consequences and invite litigation. The law aims to protect artists from unauthorized AI use of their voices and likenesses, but it could also affect traditional imitators like tribute bands and impersonators.
As for enforcing these laws? Well, that's where things get about as clear as Bob Dylan's lyrics. --Sound Ethics
The lack of uniformity among the states’ NIL laws complicates the enforcement of an individual's ownership over these rights. While Tennessee's ELVIS Act isn't the pioneer in including voice protection (NIL+V), as California has long-established NIL+V safeguards, it notably stands as the first to explicitly guard against AI-based infringements on an individual's rights to their own NIL+V.
For example, California introduced a bill earlier this year that would create liability for the “simulation of the voice or likeness” of a “readily identifiable” individual (i.e., a celebrity) through the use of digital technology. And, Kentucky has proposed a new law protecting commercial rights in “the use of names, voices, and likenesses.”
The National Association of Voice Actors (NAVA) is creating standardized contracts to address AI-generated voices, ensuring performers control and fair compensation for their voice use. This initiative safeguards voice actors’ and singers’ rights and prevents unauthorized and unethical use of their performances.
NAVA offers a contract rider to protect voice actors' and singers' recordings. This rider specifies usage terms, preventing exploitation or misuse. By setting baseline rates and protections, NAVA helps performers defend their rights in an AI-influenced industry. There is a link below to download the rider.
The ELVIS Act has had some impact on the music industry, but at this point it looks like it's more about the narrative, as there has been no direct enforcement. The industry is already seeing changes, with companies like Universal Music Group launching tools like MicDrop; however, larger frameworks for policing unauthorized use and protecting against it still need more development.
Check out NAVA’s riders and session contract help here:
https://navavoices.org/synth-ai/ai-voice-actor-resources/
You can download the ELVIS Act here:
https://www.capitol.tn.gov/Bills/113/Bill/HB2091.pdf