
AI Music Copyright Laws: From AI Royalties to Minting NFTs

Disclaimer: This article does not contain legal advice. I don't advocate for the creation or exchange of NFTs, or the generation of AI music as a whole. I am a musician first, a journalist second. I am not a lawyer or machine learning expert.

The expression "AI music" is coming to the forefront this year, though it's difficult to summarize exactly what the phrase means. The topic of artificial intelligence and music could encompass hundreds of related subjects.

Like other emerging technology, copyright law may need some time to catch up with the latest trends and codify a system of royalties and remunerations.

Artists do partner with machine learning programmers to train neural networks on their music. Dadabots and the band Silverstein famously created a 26-hour emocore album of 1,000 songs. There's no copyright violation because it was a consensual partnership.

The RIAA made an official statement in October 2022 that AI-based extractors and mixer applications that used their music for training material were infringing on their rights.

To my knowledge, there have been no high profile AI music lawsuits to date. Sites like Uberduck offer audio deepfakes of artist voices. This technology, sometimes called AI voice transfer, has been embraced by several other companies.

In September 2023, Stability AI released a text-to-music web app called Stable Audio. As a commercial product, it ships with lengthy terms of service; I've summarized them and highlighted some of the most important points here.

Here's the bottom line: if you're a developer training models on datasets that include copyrighted material, you'd be well advised to get permission from the copyright owner of the song (the publisher) and the copyright owner of the particular recording of that song (the label) to avoid copyright infringement.

Table of Contents

In this article, we'll cover a number of important topics related to AI music copyright, including some basics about why the conversation is relevant this year.

  1. Why is AI music blowing up in 2023?

  2. Can AI make music?

  3. Is AI music copyrighted?

  4. How do AI music royalties work?

  5. Music NFT marketplaces

  6. Will AI take over music?

  7. Are there any good things about AI music?

Why is AI music blowing up in 2023?

You may have heard about people using ChatGPT to create music. The chatbot can only create text about music, like chord progression symbols and tablature. So for this reason, it's not really an AI music generator.

Engineers have been experimenting with AI and music for more than half a century. But they did so mostly behind closed doors, with the safety and funding of academic institutions or big tech companies backing their efforts. For this reason, the RIAA wasn't concerned about academic groups competing for a share of streaming revenue.

Can AI make music?

Yes, you can make music with AI, but it currently requires some knowledge of computer science. For the first time in history, cloud services like Google Colab are making it easier to train neural networks on musical datasets. Anyone with enough GPU power and time can (legally or illegally) train on material to generate new music.

It takes significant resources and knowledge to operate these machine learning models. An AI music generator has to be cultivated like a fine wine: the training and output parameters are adjusted gradually until the developer is satisfied with the musical output.

Here are some serious machine learning AI music generators in 2023:

  1. Rave 2 (colab by Hexorcismos)

  2. JukeBox by OpenAI

  3. Dance Diffusion by Harmonai

  4. Catch a Waveform

  5. Riffusion (text-to-music)

  6. VKTRS on HuggingFace (an AI music video generator)

Google presented a powerful text-to-music generator in 2023 called MusicLM, but it's not publicly available yet. The software will work similarly to DALL-E 2 and Midjourney. But as the demo shows, the music is still a long way from radio quality.

Then there are the more 'commercial' AI music sites like Mubert, Soundful, and AIVA. They serve a group that would otherwise shop for human-made music at sites like Epidemic Sound and AudioJungle.

Content creators use these music services to remix human-made music loops with AI, based on parameters like BPM, key signature, and other attributes.

Is AI music copyrighted?

Your right to use a song generated by AI will depend entirely on the service you use. Companies like Soundful and Mubert offer licensing options built into their pricing structure. They write their own loops and you use their AI to remix those loops. Then you buy a license from them to use that music commercially.

As a rule of thumb, whenever you're training on other people's music, the proper thing is to notify the original artists and pay a fair share of royalties.

When copyright law catches up with AI music generators, services will probably be required to come up with royalty structures that pay artists to train on their music and reference it in generative songs.

How do AI music royalties work?

Standards for AI music royalties are being actively negotiated in the public arena. There's no sign that record labels have established legal terms for fair use in AI training datasets.

For example, is it fair to combine two mainstream artists and create a deepfake parody, as long as you're not promoting it commercially?

The AI music group Dadabots generated an AI voice of Frank Sinatra and his band playing a cover of the Britney Spears song "Toxic."

During an interview on the Interdependence podcast, Dadabots talked about the use of AI to explore the latent space between known musical systems. Even systems as remote as Britney and Sinatra.

Knowing that AI music can reveal new sonic landscapes that exist between unrelated artists, and that it will get better at this task rapidly if we give it enough time to train, shouldn't we allow for some degree of experimentation?

Imagine the app that not only creates instantaneous artist hybrids, but also creates infinite variations of their music. Instead of burning out on your favorite album, you could hear new inventions in the same style.

In the future, royalties could be based on a song's genetics. For example, if 10% of a song was pulled from one particular artist's catalog, they would be given a small cut of stream revenue based on that royalty system.
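That genetics-based idea can be sketched in a few lines of Python. Everything here is hypothetical: the function name, the whole-percent attribution map, and the numbers are invented for illustration, since no such royalty system exists yet.

```python
# Hypothetical sketch of "genetic" royalty splitting: each source artist
# gets a cut of a track's stream revenue proportional to how much of the
# track was traced back to their catalog. Names and numbers are invented.

def split_royalties(revenue_cents, attribution):
    """Divide stream revenue (in integer cents) among attributed artists.

    attribution maps artist name -> whole-percent share of the track's
    "DNA"; whatever is unattributed stays with the platform or creator.
    """
    if sum(attribution.values()) > 100:
        raise ValueError("attribution shares cannot exceed 100%")
    payouts = {artist: revenue_cents * pct // 100
               for artist, pct in attribution.items()}
    payouts["(unattributed)"] = revenue_cents - sum(payouts.values())
    return payouts

# A generated track earns $100.00 in streaming revenue; 10% is traced
# to one artist's catalog and 5% to another's.
print(split_royalties(10_000, {"Artist A": 10, "Artist B": 5}))
# {'Artist A': 1000, 'Artist B': 500, '(unattributed)': 8500}
```

Working in integer cents sidesteps floating-point rounding; the hard part in practice would be measuring the attribution percentages, not splitting the money.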

For now, AI music generation still needs to mature a bit.

Music NFT Marketplaces for minting your music

There are several Music NFT Marketplaces listed on the Music NFT Bible where artists can stream music, mint their own tracks, and collect royalties without going through traditional streaming platforms. Here's a quick rundown:

Mint Songs is a music NFT platform that doesn't require an invitation to join, so it's more accessible and a higher volume of artists sell on the platform.

Catalog is both a streaming service and an NFT marketplace, combining minting music with streaming.

Async Music takes each layer of a track (guitar, piano, vocals, percussion) separately, in several different versions. This allows fans to select the layers they want and create custom material from them.

Royal is a music NFT marketplace that gives NFT owners the rights to songs sold on the platform. This means that those who collect a Royal NFT will be able to receive a portion of royalties generated by that song across traditional streaming platforms (Spotify, Apple, etc.).

The NFT Music royalties game isn't limited to generating and selling songs.

Independent AI musician Holly Herndon created a voice-transfer AI that lets other people sing and speak with a digital replica of her voice. Holly uses NFTs with the Tribute DAO Framework, which she has entrusted with rights to her own AI voice model.

"I feel more comfortable with distributed ownership of the rights to my voice model among a DAO of stewards who are invested in maintaining the value and reputation of my voice" - Holly Herndon

Here is an outline of the revenue share, according to her website:

  • Artist X produces a song using the Holly+ voice model.

  • Artist X uploads the song online, and submits the song as a proposal to the DAO through a public interface.

  • VOICE token holders vote to mint the song as an appropriate or inspiring usage of the Holly+ voice.

  • An NFT of the song is minted by the DAO, with 50% of sales going to the artist, 40% to DAO members, and 10% reserved for Holly.
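The 50/40/10 split above can be expressed in a few lines of Python. The function itself, the even per-member division of the DAO's share, and the treasury remainder are my own assumptions; Holly's site doesn't spell out how rounding is handled.

```python
# Sketch of the Holly+ NFT revenue split described above: 50% of a sale
# to the submitting artist, 40% to DAO members, 10% to Holly. The even
# per-member division and the "treasury" remainder are assumptions.

SPLIT = {"artist": 50, "dao": 40, "holly": 10}  # whole percents

def split_sale(sale_cents, dao_member_count):
    """Divide an NFT sale (in integer cents) per the 50/40/10 split.

    The DAO's share is divided evenly among members; leftover cents
    from integer division go to a hypothetical DAO treasury.
    """
    artist = sale_cents * SPLIT["artist"] // 100
    dao_total = sale_cents * SPLIT["dao"] // 100
    holly = sale_cents - artist - dao_total  # absorbs top-level rounding
    per_member = dao_total // dao_member_count
    treasury = dao_total - per_member * dao_member_count
    return {"artist": artist, "per_member": per_member,
            "treasury": treasury, "holly": holly}

# A sale valued at $3,000.00, with 7 DAO token holders:
print(split_sale(300_000, 7))
# {'artist': 150000, 'per_member': 17142, 'treasury': 6, 'holly': 30000}
```

Every cent is accounted for: the artist's and DAO's cuts, the per-member payouts, the rounding remainder, and Holly's reserve sum back to the sale price.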

Holly's revenue share offers a great model of how to operate in this new economy. There are many other companies exploring timbre transfer that could leverage the same system, if the crypto economy can find stable ground.

Will AI take over music?

Yes, AI will most likely take over music by 2028. As the cost of high-fidelity audio generation goes down, people with no musical training will use AI to generate new musical content, and it will consume a meaningful share of the listening hours currently devoted to human-made content, including music, podcasts, and audiobooks.

I predict that Spotify will eventually roll out AI music generation services for everyday consumers and generative music will flood the market.

To hear thoughts on artificial intelligence at Spotify, check out this interview with Sidney Madison Prescott, head of intelligent automation.


Imagine if Spotify's recommendation list had an "AI Variation" switch that created infinite AI versions of a song or album. Instead of recommending new artists, listeners would stay within the sonic ecosystem of a single track. For each minute of AI music streaming, the source artist would get some revenue share. But people would technically no longer be listening to their music.

Among the streaming companies, Spotify is the most clearly positioned for this takeover. Apple Music isn't ambitious enough, and nobody else has the world's music listening habits recorded in as much detail.

By upping the monthly subscription cost for premium users, Spotify could pay for the GPU compute needed to run an AI music generator.

Within a few years, I predict that Spotify will start offering labels a new contract that includes rights and royalties for participating in a large AI training dataset. The industry will wrestle for higher shares and eventually they'll strike a deal. A couple years after that, AI music generators will begin to roll out.

Meanwhile, internet pirates and hackers on the dark web will be able to train models privately and distribute generative works illegally. I predict that corporate control of AI music generation will also create a black market.

All of this will happen while conventional human music continues to evolve at its own pace. Human + AI collaborations will increase and blur the line until AI music becomes an everyday part of our lives.

Are there any good things about AI music?

There is, at least potentially, an infinite amount of AI music that could be created. By combining every sound in existence, artificial intelligence could wield immense creative and experimental power. As with DALL-E 2 and Midjourney, the possibilities are nearly limitless.

Neural nets can be used to create musical ideas that are later expanded upon by human musicians. Creating with the help of AI will make humans more prolific, able to express themselves faster and with more detail and granularity.

This will ultimately lead to new genres and styles of music, and open up new opportunities for musicians to express themselves.
