Next time you're at a cafe and the barista's got a good playlist, see if you notice any patrons scrambling to pull out their smartphones.
If you ever hear a great song and miss the opportunity to scan it, Google offers a free hum-to-search app as well. Try finding the song by singing the melody into your phone.
Music discovery apps are a common, everyday example of song analysis, but they're really just the tip of the iceberg. Musicians use this same kind of tech in their studios to analyze audio samples and figure out the BPM, key signature, or underlying chord progressions.
In this article we'll share a complete overview of song analyzers we found and explain how musicians use them. We tested each app to see how it performed, so that our opinions are backed up by personal experience.
Why would a musician use a song analyzer?
Song analyzers help musicians find the BPM and key signature of an audio file, so that they can successfully match it with other tracks in their DAW.
An experienced musician can usually pick out the tonal center of a song, identify the scale, and determine its key. They might use a metronome to pinpoint the BPM, or a tap-tempo feature in their DAW to speed up the process.
But many of today's digital audio producers focus on sampling, editing, layering effects, mixing, and mastering instead of learning music theory foundations. As a result, they might struggle to identify a song's key signature quickly and opt for a song analyzer instead.
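For readers curious what's under the hood of these tools, here's a minimal sketch of one classic key-finding approach, the Krumhansl-Schmuckler algorithm, which correlates a song's pitch-class histogram against rotated key profiles. This is a textbook illustration, not the exact method any particular app in this article uses.

```python
import math

# Krumhansl-Kessler key profiles: perceived weight of each pitch class
# (tonic first, then ascending semitones) for major and minor keys.
MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR_PROFILE = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NOTES = ["C", "C#", "D", "D#", "E", "F",
         "F#", "G", "G#", "A", "A#", "B"]

def _correlation(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def estimate_key(histogram):
    """Guess the key of a 12-bin pitch-class histogram (C first) by
    trying all 12 tonics in both modes and keeping the best match."""
    best_key, best_score = None, -2.0
    for tonic in range(12):
        rotated = histogram[tonic:] + histogram[:tonic]
        for profile, mode in ((MAJOR_PROFILE, "major"),
                              (MINOR_PROFILE, "minor")):
            score = _correlation(rotated, profile)
            if score > best_score:
                best_key, best_score = f"{NOTES[tonic]} {mode}", score
    return best_key

print(estimate_key(MAJOR_PROFILE))  # -> C major
```

Real analyzers work from audio rather than a ready-made histogram, but the matching step is conceptually similar.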
Mixed In Key: Industry standard for BPM and key analysis
Mixed In Key is the gold standard for analyzing BPM and key alongside a DAW. Studio musicians prefer the app because of its lightweight design, accuracy, and desktop format. The interface includes a distribution map of the notes in the song, so beyond simply naming a key, MIK provides granular insight into how that conclusion was reached.
Mixed In Key accepts files from the user's computer and is also able to listen to desktop music apps (like Spotify or Apple Music) to perform its analysis. Just be prepared to fork out $58 USD for a copy of the app.
Tunebat: Free BPM and key analysis web app (mid-grade)
Several free alternatives to Mixed In Key exist online. They tend to run inside a web browser instead of a standalone app.
Tunebat offers a BPM and key signature finder called Analyzer. We uploaded a few dozen audio samples to see how it performed. Despite being one of the highest ranking sites for this kind of service, the results were inconsistent.
As you'll see if you compare the File Name column with the Key and BPM columns above, Tunebat consistently made errors by a significant margin. It seemed to perform better when a song was dense and quantized, while underperforming on ambient pads, solo melodies, and other single-instrument samples.
The most common mistake the app made was labeling a key a perfect fifth away from the correct one. For example, a song in B minor was labeled F# minor, C minor was labeled G minor, and so forth.
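This kind of error is easy to understand: keys a fifth apart share six of their seven scale notes, so their pitch-class profiles look very similar to an analyzer. A quick sketch of the fifth relationship (a perfect fifth is seven semitones above the tonic):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F",
         "F#", "G", "G#", "A", "A#", "B"]

def fifth_of(tonic):
    """Return the note a perfect fifth (7 semitones) above `tonic`."""
    return NOTES[(NOTES.index(tonic) + 7) % 12]

print(fifth_of("C"))  # -> G
print(fifth_of("B"))  # -> F#
```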
Overall we had more success with VocalRemover's Key BPM Finder and Music Genre Finder. We'll cover those next.
VocalRemover's Key BPM Finder (higher accuracy)
If you're already following our blog, you may recognize the name VocalRemover from our previous review of their Splitter AI tool. We had a great experience with that web app and were eager to try their Key BPM Finder. The results were an improvement on Tunebat's service but still had some problems. You can view the breakdown below, with errors highlighted in red:
Users can upload audio files in bulk and extract the BPM and key signature for each track. Its interpretation of musical keys tended to be more accurate than Tunebat's, and the BPM data often fell within a few beats of the actual tempo.
Best app for analyzing a song's genre, BPM and key
Music Genre Finder is another popular, free web app that lets users identify the genre and subgenres of a published song. Search by artist or song title and you'll gain access to several layers of information about the music, including key signature and tempo data.
Finding the genre for a song can help musicians refine their AI text-to-music descriptions. A broad label like "rap" often won't get you close enough to the intended sound. If you analyze a track that represents the sound you're looking for, you may be able to pinpoint the right subgenre and use it in your AI music prompt.
Under the music genre tags, you'll find a track analysis section with the duration, BPM, and key signature. Other features like loudness, happiness, and danceability are also included, though these are really intended for machine algorithms. Listeners can pick up on these attributes without help from artificial intelligence.
Moises.AI (Paid service - Includes stem separation)
We ran some follow-up experiments with Moises, a celebrated AI music app that's been on the scene for several years. Tempo and key signature detection are something of a secondary feature, but we found them to be accurate in the majority of our tests.
The downside of using Moises for BPM and key analysis is mostly financial. Stem splitting is an expensive rendering task, and the app paywalls users early on. So if all you need is basic music data, VocalRemover and Music Genre Finder are probably better options.
How to check if songs & samples are too loud
Did you know that streaming platforms are configured to penalize tracks when the mix is too loud? We previously reviewed AI mastering software and recommended LANDR for people who need a quality, affordable solution.
If you've got mastering covered and just need a quick sense of how streaming platforms will respond to the decibel levels of your track, check out Loudness Penalty. Their web app lets you upload a file and get instant recommendations on how to adjust it to meet the standards of each host (YouTube, Spotify, Tidal, and Apple).
The Loudness Penalty app is also available in VST/AAX/AU plugin formats. This makes it easier to adjust the mix in your DAW without having to bounce and upload tracks back to a web browser each time you want to run a test.
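For a rough sense of what these tools measure: streaming services normalize playback toward a loudness target (commonly cited as around -14 LUFS for Spotify and YouTube), and tracks mastered hotter than that get turned down. True LUFS measurement requires K-weighting and gating per the ITU-R BS.1770 spec, but a plain RMS level in dBFS is a simple stand-in for a sketch:

```python
import math

def rms_dbfs(samples):
    """RMS level of float samples (-1.0..1.0) in dB relative to full scale.
    NOTE: this is not true LUFS (which needs K-weighting and gating per
    ITU-R BS.1770); it's only a rough proxy for illustration."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# A full-scale sine wave measures about -3 dBFS RMS.
rate = 8000
sine = [math.sin(2 * math.pi * 440 * n / rate) for n in range(rate)]
level = rms_dbfs(sine)
print(round(level, 2))  # -> -3.01
```

A dedicated meter (or a service like Loudness Penalty) will give you the platform-accurate numbers; this just shows the general shape of the calculation.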
Analyzing the BPM & key of songs on streaming platforms
All three of these web apps allow users to search by artist or song name. It's a superior experience to sites like Sonoteller.ai, where users are forced to copy and paste URLs from YouTube into the search field.
Anyway, here's what SongBPM looks like - the search field is at the top of the page and the results are returned immediately below. It's probably the most straightforward of the three pages.
Pulse Music has one important differentiator from the others: users can analyze the time signature of a song. If you're trying to figure out a tune in a meter other than 4/4, like 3/4 or even 5/4, Pulse can help with that. Just don't expect it to be much help on complex tunes with changing time signatures!
Aside from these apps, we also found a company called MusicGateway offering a BPM and Key finder for published songs, but they locked the service behind a paywall immediately. Wah wah.
We also found a collection of free web apps that use the Spotify API to retrieve BPM and machine-learning features, but no key signature. These included Spotify Song Analyzer, Sort Your Music, SongData.io, and SongBPM Finder.
Machine learning charts like the one above from Spotify Song Analyzer are rarely useful to musicians. This kind of data is really intended for music-matching algorithms. Developers seem to include these charts to make their apps feel more robust and feature-rich, but we think it's done without much consideration for the usefulness or value to end users.
Best song analyzer for detecting chord progressions
There are two excellent resources available for analyzing the chord progressions of an existing song. If it's popular music, you can learn a lot from HookTheory's catalog. They provide a chord progression builder that aggregates all of the songs with the same chord progression. Here's what that service looks like for a four-chord sequence in A minor:
HookTheory's interface is great if you already have a progression in mind, say from one of your own songs. But if you need to figure out the chords in an existing track, the chord analyzer at MazMazika is a better option. It lets users upload an audio file or reference songs on YouTube and SoundCloud.
During our experiments, we found that it analyzed the BPM accurately and struggled slightly with naming chords correctly. It still works pretty well for a free web app!
The music reference above was an accordion tune in A minor. It followed a simple i - iv - V progression (Am, Dm, E major) with a Picardy third at the end of the phrase (A major). MazMazika got most of that right, but made a few mistakes.
The second chord in the progression was labeled F major, which, for the music theory nerds out there, is the relative major of the actual chord, D minor. The app was close but not fully accurate.
MazMazika also printed out chords that were not part of the progression, like C major and Ab diminished. This seems to happen because the song's lead melody is played on the same instrument as the chords, so the analyzer took every note into account. The melody features chromatic notes, which combine with the underlying chords to form diminished-sounding voicings.
Fortunately, the audio playback feature on MazMazika includes a real-time display of the chord being played, so it's easier to figure out why the software is making certain decisions.
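The relative major/minor mix-up is easy to see in pitch classes: an F major and a D minor triad share two of their three notes, so a detector working from note content alone has very little to distinguish them.

```python
# Pitch classes: C=0, C#=1, D=2, ... B=11
F_MAJOR = {5, 9, 0}   # F, A, C
D_MINOR = {2, 5, 9}   # D, F, A

shared = F_MAJOR & D_MINOR
print(sorted(shared))  # -> [5, 9]  (F and A)
```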
Auto-tagging & AI captions for your music collection
Now for the final and most advanced challenge that musicians face: how do you generate musical tags at scale and attach them to your audio file collection without punching the data in manually?
Some of the apps we've mentioned so far can accept audio files in bulk, but the metadata remains trapped in a web browser. Human operators still have to go through and manually enter all of that data into their sample manager.
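As a rough sketch of what automating that step could look like: a small script that reads a bulk export from an analyzer and writes a JSON sidecar file for each audio file, ready for a sample manager to pick up. The CSV column names here are hypothetical; adapt them to whatever your analyzer actually exports.

```python
import csv
import io
import json
import os
import tempfile

def write_sidecars(csv_text, out_dir):
    """Parse analyzer export rows (filename, bpm, key) and write a JSON
    sidecar per entry. Column names are hypothetical placeholders."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        sidecar = os.path.join(out_dir, row["filename"] + ".json")
        with open(sidecar, "w") as f:
            json.dump({"bpm": float(row["bpm"]), "key": row["key"]}, f)

# Hypothetical bulk export from one of the web apps above:
export = ("filename,bpm,key\n"
          "kick_loop.wav,120,C minor\n"
          "pad_swell.wav,80,A major\n")
out_dir = tempfile.mkdtemp()
write_sidecars(export, out_dir)
print(sorted(os.listdir(out_dir)))  # -> ['kick_loop.wav.json', 'pad_swell.wav.json']
```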
Cyanite is the most popular industry solution to this problem, providing AI auto-tagging for large audio libraries. Users can pull out a diverse range of labels like genre, mood, and movement, and even generate full-length, natural-language captions about a song.
Captions are particularly helpful for companies with large audio libraries who want to publish them online with rich descriptions. The screenshot above shows an example of what these captions could look like and how they might be used. It comes from AudioSparx, a massive audio library that was used to train the text-to-music service Stable Audio.
When users type in a music prompt, Stability AI runs the request against a model trained on all of these captions and metadata. So descriptions are helpful for both marketplaces and generative AI models.
Cyanite isn't the only option available. Software developers can use music classification tools like the open source C++ library Essentia to build their own labeling services. These tools analyze songs for the usual properties like key signature and tempo, along with audio markers like onsets and transients.
Essentia infers the mood of a track and classifies its genre, but the library can also perform melody extraction, voice analysis, and spectral analysis.
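To give a flavor of what onset detection means, here's a crude pure-Python sketch of the energy-based idea: flag the moments where short-time energy jumps from quiet to loud. This is an illustration of the concept, not Essentia's actual API or algorithm.

```python
import math

def detect_onsets(samples, frame=256, threshold=0.01):
    """Return sample positions where short-time energy rises from below
    `threshold` to above it. A toy stand-in for real onset detectors."""
    energies = []
    for start in range(0, len(samples) - frame, frame):
        chunk = samples[start:start + frame]
        energies.append(sum(s * s for s in chunk) / frame)
    onsets = []
    for i in range(1, len(energies)):
        if energies[i - 1] < threshold <= energies[i]:
            onsets.append(i * frame)  # start of the frame where energy rose
    return onsets

# Synthetic test signal: silence with two short sine-wave bursts.
rate = 8000
signal = [0.0] * rate
for burst_start in (1600, 4800):          # bursts at 0.2 s and 0.6 s
    for n in range(400):                  # 50 ms each
        signal[burst_start + n] = 0.8 * math.sin(2 * math.pi * 440 * n / rate)

print(len(detect_onsets(signal)))  # -> 2
```

Production libraries use spectral flux, adaptive thresholds, and peak picking instead of a fixed energy cutoff, but the underlying question is the same: where does something new start?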
On a less technical note, if readers want to test out AI music-to-text captions without getting into C++, I recommend checking out LP-MusicCaps on Hugging Face.
We recently published an article describing an experimental music technique that ping-pongs between captions and AI music generation to create infinite songs. This is one of many creative use cases you could explore if you're not trying to label a large dataset.
Current limitations in music information retrieval
Cyanite, LP-MusicCaps, and every other music information retrieval system is subject to the same core problem: their capacity for audio mining is limited by their training dataset. If a subgenre is missing or underrepresented, that data scarcity makes it difficult to analyze such a song accurately.
The majority of music datasets today were created for either academic or commercial purposes. Academic institutions tend to be cautious about training on music that's protected by copyright law. Corporations have a bigger store of modern commercial music that is rich with metadata, but are usually deficient in underground subgenres and obscure world music styles.
This means that while auto-tagging can help scale up new datasets, it only works if the scanned audio is similar enough to music in the primary model. Enriched audio libraries will become increasingly valuable over the coming years, as AI music generators continue to rise in popularity and usefulness.