WarpSound and WVRPS: An Adaptive Music System Powered by AI
The idea of a recorded song hinges on the premise that there's a beginning and an end. Songs have a fixed length once they're recorded and distributed, and streaming platforms reinforce this model with their static discographies. We share songs knowing that the next person will hear the same track. That's how a hit song is born.
But what if the future of music was more fluid? Imagine a song that changes and adapts in real time to reflect something personal to you. It could respond to your body's movement through space or, if you're playing a video game, to the events unfolding on screen. Could we still use the word song to describe a piece of music that changes with every listen?
Interactive music like this already exists, but you won't find it trending on the Billboard Hot 100 or any of the major streaming platforms. In this article, we'll explore the meaning of adaptive music and take a look at how one company, WarpSound, is transforming the way we think about generative AI music.
What is adaptive music?
Adaptive music is the product of an interactive audio system that changes dynamically, in response to events beyond the music itself. In contrast to live improvised music, adaptive music implies some underlying digital architecture responsible for the interactivity.
Experimental composers like Brian Eno have been developing adaptive music systems for decades. Eno's mobile app Bloom generates ambient sound environments that play continuously and respond to user actions. He's brought the Bloom experience to live augmented reality events as well.
As gamers interact with virtual environments, adaptive audio elements like volume, rhythm, and key signature may change to reflect the player's choices. Game developers have been refining these immersive musical landscapes year after year, competing to keep players satisfied and engaged.
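To make that concrete, here's a minimal Python sketch of how a game might translate player state into adaptive audio parameters. The names and thresholds are our own invention; real games typically route this through audio middleware like FMOD or Wwise, but the mapping idea is the same:

```python
from dataclasses import dataclass

@dataclass
class GameState:
    """Hypothetical snapshot of what the player is doing."""
    enemies_nearby: int
    player_health: float  # 0.0 to 1.0
    in_combat: bool

def music_params_for(state: GameState) -> dict:
    """Map game events to adaptive audio parameters.

    Each return value would feed the game's audio engine
    (a crossfade layer, a tempo controller, a mode switch).
    """
    return {
        "intensity_layer_volume": min(1.0, state.enemies_nearby / 5),
        "tempo_bpm": 140 if state.in_combat else 90,
        # Shift to a minor key when the player is close to death.
        "key_mode": "minor" if state.player_health < 0.25 else "major",
    }

print(music_params_for(GameState(enemies_nearby=3, player_health=0.2, in_combat=True)))
```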
The rise of Twitch streaming has introduced a second layer of demand for adaptive music. Fans watching their favorite gamers play for hours don't necessarily want to listen to the game's soundtrack, and streamers would rather control the soundtrack themselves.
Companies have been stepping forward to satisfy streamers' need for endless audio, but only a few of these services offer adaptive music. Of the companies we've surveyed in this space, WarpSound currently takes the trophy.
Who is WarpSound?
WarpSound is a tech company focused on generative adaptive music for creators, interactive music experiences, and content for streaming, gaming, and more. Their WVRPS digital collectibles extend into the Web3 landscape, empowering users to get creative with the attached AI-composed loops and to use those NFTs as access passes for generating new music NFTs in the future.
There are currently four virtual artists in the WarpSound universe that serve as proof points for how adaptive AI music can be used: Nayomi, DJ Dragoon, Gnar Heart, and GLiTCH create live experiences in collaboration with the audience.
DJ Dragoon: YouTube / Spotify / SoundCloud
Nayomi: YouTube / Spotify / SoundCloud
Gnar Heart: YouTube / Spotify / SoundCloud
GLiTCH: YouTube / Spotify / SoundCloud
These virtual artists were developed by WarpSound’s Machine Arts Lab, each with their own musical style. Dig in a bit further and you'll find that this Machine Arts Lab is an incubator where real-life humans (visual artists, musicians and storytellers) collaborate to develop the virtual artists.
To date, WarpSound has attracted top-tier digital artists like Android Jones and the beat-making guru STLNDRMS. Other notable creatives include Young Guru, Mike Shinoda, Siana Park, DECAP, Jeff Nicholas, Stephen Candell, Steve Pardo, Emma Oliver, Jordan Coffer, and Chipzel.
WarpSound wants to put power into the hands of their audience, so everyone can co-create a live music performance. Check out this adaptive music demo of WarpSound Live ft. Gnar Heart, published in April 2023, to get a taste of the fully interactive music experience. You'll need to open the page to cast your vote and change the AI music and virtual stage in real time. They had done this on stage previously, but now you can do it from your web browser!
WarpSound's new adaptive music API
In May 2023, WarpSound announced a new adaptive music API that will allow developers to tap into their adaptive AI music system. An example of how the API might be used was provided through an integration with ChatGPT, where users can type in prompts to change the audio stream instantly. Unlike other continuous music services, the WarpSound adaptive AI music system doesn't stop the previous track and launch a new one. Instead, your music shape-shifts in real time to deliver the vibe you're looking for.
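The API wasn't public at the time of writing, so we can only guess at its shape. Here's a hypothetical Python sketch, with an invented endpoint, payload, and auth scheme, purely to illustrate the prompt-driven interaction model shown in the demo:

```python
import requests  # pip install requests

# Hypothetical endpoint: WarpSound's API was not public at the time of
# writing, so the URL and fields below are guesses meant only to
# illustrate the interaction model, not the real interface.
API_URL = "https://api.example.com/v1/stream/prompt"

def steer_stream(session_id: str, prompt: str, api_key: str) -> None:
    """Send a natural-language prompt to morph an already-playing stream.

    Note there's no stop/start: the same session keeps playing and the
    music shape-shifts toward the requested vibe.
    """
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"session": session_id, "prompt": prompt},
        timeout=10,
    )
    response.raise_for_status()

# e.g. steer_stream("abc123", "slow the tempo and bring in warm synth pads", key)
```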
Google's text-to-music app MusicLM also launched in May 2023, drawing a mixed response from the public. Music producer channels on social media apps like Instagram and Twitter have roasted MusicLM, highlighting the chasm between text prompts and the app's musical output.
When we first watched the WarpSound text-to-music demo, we were both impressed and skeptical. Compared to the GPT-4 MIDI composing capabilities of the AI DAW WavTool, the music sounded too good to be true. You can watch our video review of WavTool to better understand its strengths and weaknesses.
We reached out to WarpSound's creative director, Jeff Nicholas, to find out more about how WarpSound's adaptive AI music API works and what makes it unique. We cross-referenced his feedback with public information and demos from the company. Our takeaway is that WarpSound seems to have created a wholly unique adaptive music system that blows the competition out of the water.
How does WarpSound generate music?
Some of the biggest tech companies have taken a stab at AI MIDI generation, from OpenAI's MuseNet to Google's Magenta suite. WarpSound's MIDI intelligence is substantially better than either of those tools.
I chalk this up to the fact that WarpSound has been solely focused on music. They're motivated by more than the superficial race to push out a project and put something on the map. Management at big tech companies doesn't seem to be concerned with creating usable AI music generators.
For an example of how Google's new text-to-music tool works, watch the review from YouTube influencer Mike Russell, who live streamed his experience with the app and gave it a generally positive review. Other users have been less bullish on the app, pointing out its numerous artifacts and limitations.
Google's MusicLM was trained exclusively on audio files scraped from YouTube. These files are captured in the MusicCaps and MuLan music datasets, without permission from the original musicians. No attribution or remuneration model was put forward, underscoring Google's race to deliver a product at all costs.
In contrast, WarpSound has cultivated an artificially intelligent composer that creates music from scratch, in real-time, using a combination of generative MIDI and studio-grade production tools working in tandem.
Here's what creative director Jeff Nicholas had to say about WarpSound's adaptive AI music system and how it differs from their competitors. We discovered that text prompts are just one potential input source, among many others:
Our system is completely adaptive in real-time, at both the compositional and sound design layers. Natural language inputs can change the music in real time and, building on top of the upcoming API, developers will be able to map any other kind of input to it. So, text prompts, but also audience voting, game emotes or mocap data, voice, image, physical controls, etc. can all be mapped / built on top of it.
The whole idea is that we want to provide service and API layers so that folks can experiment with building/mapping different types of inputs that engage with the system. And then there are tons of other controls that – over time – can be put on or taken off to constrain things like bpm range, song structure (loop, infinite stream, X length song, etc). The idea there is that it can be used in a truly co-creative / collaborative way that a simple input —> fixed output method doesn’t allow for.
WarpSound's adaptive music system does not use tagged human-made loops like some of these other companies. It is generative, using AI and ML, composing midi in real-time (based off of its training) as well as a stack of metadata that is driving a studio-like environment for the sound design of the generated music.
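To illustrate the input-mapping idea from Jeff's quote, here's a minimal Python sketch of an adapter layer that funnels text prompts, audience votes, and mocap data into one control surface. Every name and mapping here is our own invention, not WarpSound's design:

```python
from typing import Callable

# Hypothetical adapter layer: text, votes, and mocap are interchangeable
# inputs, so each source is normalized into the same control dictionary
# before it reaches the music system.
ControlSignal = dict  # e.g. {"energy": 0.8, "style": "drum and bass"}

def from_text(prompt: str) -> ControlSignal:
    return {"style": prompt}

def from_votes(votes: dict) -> ControlSignal:
    winner = max(votes, key=votes.get)
    return {"style": winner}

def from_mocap(speed_m_s: float) -> ControlSignal:
    # Faster movement maps to higher musical energy, capped at 1.0.
    return {"energy": min(1.0, speed_m_s / 5.0)}

ADAPTERS: dict[str, Callable] = {
    "text": from_text,
    "votes": from_votes,
    "mocap": from_mocap,
}

def route(source: str, payload) -> ControlSignal:
    """Any input type funnels into one control surface for the composer."""
    return ADAPTERS[source](payload)

print(route("votes", {"lofi": 12, "synthwave": 30}))  # {'style': 'synthwave'}
```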
The other companies Jeff refers to have proliferated in the past two years, generating a glut of generic music. In the first week of May 2023, Spotify removed tens of thousands of AI-generated songs by Boomy, claiming the takedown was a response to creators using artificial streaming to game the system.

Not every AI music app is guilty of generating low-quality songs. One of our favorite companies in this cohort is Mubert, an AI music app that markets its service as a collaboration between AI and humans.
I previously spoke with a Mubert representative on LinkedIn and confirmed that their text-to-music app was running semantic analysis on users' text prompts, cross-referencing the interpreted meaning with labeled metadata on their human-made loops, and then synthesizing new variations with the assistance of machine learning.
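Here's a toy Python sketch of that loop-matching approach as we understand it. It uses naive keyword overlap where Mubert presumably runs real semantic analysis, and the loop library is invented, but it captures the retrieve-then-vary pipeline:

```python
# Illustrative only: a toy version of the loop-matching approach described
# above. Real semantic analysis would use embeddings, not keyword overlap.
LOOP_LIBRARY = [
    {"file": "loop_001.wav", "tags": {"chill", "lofi", "rainy", "piano"}},
    {"file": "loop_002.wav", "tags": {"energetic", "edm", "festival"}},
    {"file": "loop_003.wav", "tags": {"dark", "cinematic", "strings"}},
]

def match_loops(prompt: str, top_k: int = 1) -> list:
    """Score human-made loops against the prompt's keywords."""
    words = set(prompt.lower().split())
    scored = sorted(
        LOOP_LIBRARY,
        key=lambda loop: len(loop["tags"] & words),
        reverse=True,
    )
    return scored[:top_k]

# A prompt like "chill lofi beats for a rainy day" retrieves loop_001, which
# an ML stage could then vary (pitch, filter, re-sequence) for the output.
print(match_loops("chill lofi beats for a rainy day"))
```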
WarpSound operates at a more granular level, as Jeff explains here:
At its core, our [Warpsound] system specifically doesn’t [use "labeled MIDI loops modified algorithmically"] and instead uses machine learning to learn music theory, composition, different musical styles, etc over time based on the training data it receives. [It] uses that ML to compose.
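To show what "using ML to compose" means in miniature, here's a toy Python sketch that samples a melody from learned note-transition probabilities. This Markov-chain stand-in is entirely our own and nothing like WarpSound's actual model, but it highlights the difference from loop retrieval: the notes themselves are generated.

```python
import random

# Toy stand-in for learned composition: instead of retrieving loops, the
# model emits notes one step at a time from probabilities it picked up
# during training. The table below is hand-written for illustration.
TRANSITIONS = {  # P(next pitch | current pitch), C-major flavored
    60: [(62, 0.5), (64, 0.3), (67, 0.2)],
    62: [(60, 0.4), (64, 0.6)],
    64: [(62, 0.3), (65, 0.3), (67, 0.4)],
    65: [(64, 0.7), (60, 0.3)],
    67: [(64, 0.5), (60, 0.5)],
}

def compose(start: int = 60, length: int = 16) -> list[int]:
    """Sample a melody as a list of MIDI pitch numbers."""
    melody = [start]
    for _ in range(length - 1):
        choices, weights = zip(*TRANSITIONS[melody[-1]])
        melody.append(random.choices(choices, weights=weights)[0])
    return melody

print(compose())  # e.g. [60, 64, 67, 60, 62, ...]
```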
This ML-driven MIDI composition feature is one of the most exciting elements of WarpSound's system. Of course, we're a little biased as a text-to-MIDI company ourselves. In the second half of this article, we'll go deeper into their music NFTs and the creative experiences that they've rolled out for the public to enjoy.
WarpSound's audio fingerprint and metadata
When it comes to generating AI songs with MIDI, the notes are only half the battle. If your instruments don't sound good, even the best composition will sound cheesy. WarpSound has invested heavily in sound design, raising the bar for audio quality in the generative music space.
WarpSound seems to be concerned with more than profits and hypergrowth, acknowledging that the soul of music itself could be at stake here. Low quality MIDI and corny virtual instruments are not healthy for the global music ecosystem. Acknowledging the ease of generating and publishing AI music, WarpSound aims to deliver fun interfaces capable of creating music that... doesn't suck!
Jeff explained to us how the WarpSound adaptive AI music system achieves such high-fidelity audio.
The sound design side - the interpretation of the midi - uses a lot of human-curated sounds, samples and instruments which we call an “audio fingerprint” driven by the metadata that the AI has generated.
Our goal from day one was to develop a system that was studio-quality sound driven by AI/ML through the entire pipeline, but retaining the ability to stay general or get granular on the types of sounds and instruments it can build from.
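Here's our guess, as a Python sketch with invented names and fields, at what a metadata-driven "audio fingerprint" step might look like: the AI's generated metadata selects which human-curated instruments render each MIDI part.

```python
# Hypothetical rendering step: the AI emits metadata alongside the MIDI,
# and that metadata selects from a human-curated sound library (the
# "audio fingerprint"). Names and fields below are illustrative guesses.
SOUND_LIBRARY = {
    ("bass", "warm"): "analog_moog_bass.sfz",
    ("bass", "gritty"): "distorted_808.sfz",
    ("keys", "warm"): "felt_piano.sfz",
    ("keys", "bright"): "dx7_epiano.sfz",
}

def render_track(midi_parts: dict, metadata: dict) -> dict:
    """Pair each generated MIDI part with a curated instrument."""
    timbre = metadata.get("timbre", "warm")
    return {
        role: SOUND_LIBRARY[(role, timbre)]
        for role in midi_parts
        if (role, timbre) in SOUND_LIBRARY
    }

parts = {"bass": [36, 36, 43], "keys": [60, 64, 67]}
print(render_track(parts, {"timbre": "warm"}))
# {'bass': 'analog_moog_bass.sfz', 'keys': 'felt_piano.sfz'}
```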
You don't have to be a musician to create original material with WarpSound, but if you're DAW savvy, they won't leave you hanging either. For a variety of different uses, such as the WVRPS digital collectibles, users can request the audio stems for their tracks and follow this simple guide for a general primer on how to begin using them.
Our takeaway, after speaking with Jeff and watching numerous videos from their site, is that WarpSound could soon become one of the leading forces in generative music. They've pushed beyond adaptive audio and connected with musicians' deeper need to monetize their time and effort. That's where the music NFTs come in, as we'll explore next.
What is WVRPS by WarpSound?
The original WVRPS by WarpSound collection was created in partnership with the award-winning illustrator Andy Poon. They first developed artistic derivatives from their three main virtual artists at the time. Then they fed that art and all of its traits into the WarpSound adaptive AI music system, generating 9,999 8-bar loops that were specific to each character set in the NFT art.
Collecting one of these NFTs unlocks more for the holder, like claims and airdrops of additional music/art collectibles. Holders can also use tools like WarpSynth to generate new music/art based on their original NFT.
In the future, holding a WVRP will also open up new opportunities to play with the system more directly and mint those creations as NFTs once you've created something you're proud of.
For the past few years, WVRPS have been available on major music NFT marketplaces like OpenSea. Here's where you can find them:
OpenSea WVRPS Collections
OpenSea WVRPS: https://opensea.io/collection/wvrps-by-warpsound
WVRPSynths Collection: https://opensea.io/collection/wvrpsynths
MUSIC DROPS Collection: https://opensea.io/collection/music-drops-by-warpsound
WarpSound Bouquets Collection: https://opensea.io/collection/warpsound-bouquets/
WVRPS Honoraries Collection: https://opensea.io/collection/wvrps-honoraries
All 9,999 WVRPS have been minted, so anyone can purchase one of the NFTs from a secondary marketplace. Then you can connect to The Hub to access a variety of additional content including high resolution art, music videos, audio stems, etc.
WarpSound Lab: Free Immersive Music Experiences
WarpSound's early demos can be found at The Lab. Here you can explore some of the immersive music experiences they designed, right within your browser.
Try a free WarpSound demo with WarpSynth
One of our favorite demos from the Lab is WarpSynth. It's a limited set of six musical variations pre-composed by AI, but it lets you experiment by pushing buttons and pulling levers until you find the sound and vibe you like.
Once you dial in the right sound, hit the mint button and within moments you'll have a concrete example of what a WVRPSynth music NFT sounds like. This demo does not include the image-generating portion of their offering.
WarpSound SongSphere: An Interactive 3D Music Maker
For an advanced music-making experience that loads in your browser, try SongSphere. When you first boot up the web app, a track begins playing automatically. Click and drag the globe to change the mix, from adjusting reverb levels to swapping in entirely new musical ideas. As an adaptive system, the track never skips a beat.
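We don't know how SongSphere is implemented, but here's a speculative Python sketch of the kind of mapping that could sit behind the globe: drag coordinates become continuous mix parameters, so the music morphs instead of restarting.

```python
# A guess at the mapping behind SongSphere's globe: drag position becomes
# continuous mix parameters, so the music morphs without stopping.
def sphere_to_mix(longitude_deg: float, latitude_deg: float) -> dict:
    """Translate a point on the globe into mix settings (illustrative)."""
    return {
        # Horizontal drag sweeps through musical ideas/sections.
        "section_index": int((longitude_deg % 360) / 90),  # 4 sections
        # Vertical drag controls reverb amount, 0.0 (dry) to 1.0 (wet).
        "reverb_wet": (latitude_deg + 90) / 180,
    }

print(sphere_to_mix(135.0, 30.0))  # {'section_index': 1, 'reverb_wet': 0.666...}
```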
We've previously reviewed other VR music games and DAWs, but this has to be one of the most entertaining and accessible AI music apps we've seen online. If you don't have a VR headset, you'll still be able to use and experiment with SongSphere.
There are two more experiences in The Lab currently, but I don't want to spoil the fun. You've got all the links now, so check it out for yourself and see what you think!
How do you join the WarpSound Discord community?
You can join the WarpSound Discord server to connect with their community of musicians and music NFT enthusiasts. They provide a number of resources on the company's website, from an FAQ page to a detailed tutorial on NFT basics. Exploring these pages will help you get a better sense of music NFT culture as a whole, even if you're not ready to invest yet.
We hope you found this overview to be helpful. As WarpSound's music generation API goes public and more information becomes available, we'll update this article to make sure it stays current.
If you'd like to continue exploring software in the realm of adaptive music, check out our article on mixed reality music.