- Generative AI has already been used to create “deepfake” songs.
- These songs have been convincing or satisfying enough to gather millions of streams and views on platforms like Spotify and TikTok.
- The music industry is trying to contain the fallout, but digital media platforms are reluctant partners and AI is raising novel legal questions.
- It’s unclear how this affects the asset class in the long term, but it is a risk.
Generative AI Deepfakes
The emergence of ChatGPT just over a year ago kicked off a tsunami of generative AI content that no one was quite prepared for. Like other creative media, music has not been spared.
In April, a new single from Drake and The Weeknd began trending. It reached 600K streams on Spotify, 15M views on TikTok, and nearly 300K views on YouTube before being removed. You see, that single wasn’t actually from either artist. It was created by “Ghostwriter” using generative AI.
It’s not the only instance either. Other AI songs built on voice clones of major artists have gained traction elsewhere; one using Bad Bunny’s voice has over 1.2M views on YouTube, for example.
What Should The Music Industry Do About AI?
This is what many people and companies are currently asking themselves. So far, there’s no real consensus.
On one front, there’s a strategy of attempting to contain and control the fallout.
ASCAP is effectively petitioning the US Copyright Office for greater protections. They want AI models to be required to license the content they train on. They’re also seeking stronger “right-of-publicity” laws that would restrict the use of an artist’s voice without their permission.
Rights holders are also trying to contain the spread of these “deepfake” songs across streaming and social media platforms. However, Sony Music reports that platforms are reluctant partners in this process.
Removing content (especially popular content) is not generally in the best interest of companies like Spotify or YouTube, and the legal pressure on them to act is weak. Billboard reports that there are no federal laws that would obligate these platforms to remove the songs, and that right-of-publicity protections vary from state to state.
On the other front, there are artists who want to embrace this new technology. If someone can create hit songs with their voice and the artist gets paid for it, that sounds like a potential win-win, right? New income with no new effort.
Unfortunately, not even that is straightforward. In many cases, the “ownership” of an artist’s voice is a complicated legal topic for anyone who has signed with a record label. The label may effectively own the rights to the artist’s voice, limiting their options. In other cases, the contracts may be unclear, leaving artists in a legal gray area.
In February, Universal Music Group songs disappeared from TikTok. The removal stems from a dispute between the two companies, partly over UMG’s push to protect its artists from AI-generated music on TikTok’s platform.
How Does AI Affect The Future Of Music?
First, there is a lot of uncertainty as to how this will play out. That lack of clarity is a problem for an asset class that is valuable, in part, because of the predictability of song earnings.
Given that uncertainty, we can only speculate. But there are scenarios that should concern investors.
Attention is where the money is. A flood of new AI-generated songs getting uploaded to platforms every day is only going to make it harder to capture and monetize the attention of listeners.
If fans are barraged with a mix of “authentic” content from artists as well as AI-generated content, at some point, will they stop being able to tell the difference? What’s real and what’s fake? Will the distinction even matter to future generations? That could lead to less time spent listening to existing tracks and lower earnings.
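To make that dilution concrete, here’s a toy sketch of a pro-rata royalty pool, the broad mechanism most large streaming services use: every track is paid its share of total platform streams out of a fixed pot. All of the numbers below are invented, and real platform accounting is far more complex; the point is only the direction of the effect.

```python
# Toy model of pro-rata streaming royalties. The pool size and stream
# counts are invented for illustration; real accounting is more complex.

def track_payout(pool_dollars: float, track_streams: int, total_streams: int) -> float:
    """Pay the track its share of all platform streams, from a fixed pool."""
    return pool_dollars * track_streams / total_streams

POOL = 1_000_000_000         # hypothetical monthly royalty pool, in dollars
CATALOG_STREAMS = 1_000_000  # streams for one catalog track, held constant

# Same track, same listener attention. Only the platform-wide total grows
# as AI-generated uploads flood in, so the track's payout shrinks.
for total in (250_000_000_000, 300_000_000_000, 400_000_000_000):
    payout = track_payout(POOL, CATALOG_STREAMS, total)
    print(f"total streams {total:>15,} -> payout ${payout:,.2f}")
```

Under this model, a track doesn’t need to lose a single listener to earn less; the denominator just has to grow.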
What about personalization?
Today every service has a recommendation algorithm intended to keep people listening and watching as long as possible. One limitation of these algorithms is that they can only recommend content that already exists.
What if the perfect song for a given user at a given moment to keep them from closing Spotify is a Taylor Swift rap song or Lady Gaga singing in Portuguese? Even if the algorithm could theoretically figure that out, if those songs don’t exist then it has to look for other next-best options that do.
With AI, those algorithms could draw on an ever-growing library of music bounded only by imagination. Maybe they could even generate the songs themselves.
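As a sketch of that idea, imagine a recommender that embeds users and tracks in the same vector space and, when no existing track sits close enough to a user’s taste vector, asks a generative model to fill the gap. Everything here is hypothetical: `generate_track` is a stand-in for some future generative backend, and cosine similarity over embeddings is just one common way recommenders score matches; no service works exactly like this.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def next_song(user_vec: np.ndarray, catalog: dict[str, np.ndarray],
              generate_track, threshold: float = 0.9) -> str:
    """Recommend the closest existing track; if nothing in the catalog is
    close enough to the user's taste vector, synthesize a track on demand."""
    best_id, best_sim = max(
        ((track_id, cosine(user_vec, vec)) for track_id, vec in catalog.items()),
        key=lambda pair: pair[1],
    )
    if best_sim >= threshold:
        return best_id                   # today's world: the best *existing* match
    return generate_track(user_vec)      # speculative: fill the gap with AI
```

The unsettling part for rights holders is the last line: once that fallback exists, being the “next-best existing option” no longer guarantees the stream.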
Conclusion
How do you invest in royalties in a world where everyone just listens to a hyper-personalized individual universe of music? Where there aren’t mass hits, but only individual hits with every variable available for the tweaking?
Honestly, I don’t know. And that is why I wonder if AI-generated content is the biggest threat to music royalties as an asset class.
There’s still a ton of uncertainty about how both laws and industries will adapt to the rise of AI. Some of those adaptations may mitigate or resolve these issues. However, there’s enough cause for concern that investors should pay attention to how things continue to develop.
Originally sent to newsletter subscribers on December 11th.