Mastering Guidelines & Metrics For Music Streaming

Mastering levels and metrics for posting a track on Apple Music, Spotify, YouTube, SoundCloud, Tidal, and more.

Understanding how to process a track for streaming platforms is more important than ever, with streaming holding an 83% market share in 2022. This article explores platform-specific specifications and how to comply with them.

Mastering For Streaming

With some understanding of loudness, LUFS, and normalization in place, let’s dive into the details of mastering for streaming platforms and the specific issues that affect streaming.

The purpose of loudness normalization has never been to force, or even encourage, mastering engineers to work at a certain set level. Loudness normalization exists purely for the convenience of the end user: it lets listeners move between program material from various sources without constantly reaching for the volume control.

Once you think about loudness normalization in this way, you realize how much freedom there is. If you want to master close to -14 LUFS and keep enough headroom for dynamic impact, you can do so, knowing that the platform will only reduce the level by a few dB, and a less aggressively limited master retains more of its dynamics.

Although there has been a convergence towards -14 LUFS over the past few years, there are still platforms that use different reference levels. For example, Apple Music uses -16 LUFS, Deezer uses -15 LUFS, and Pandora doesn’t actually use LUFS at all. The Spotify reference level is user selectable between -23, -14, and -11 LUFS! To further muddy the waters, there is nothing stopping any of the streaming services from changing their reference level, the normalization method, or both, in the future.
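To make the arithmetic concrete: under simple track normalization, the gain a platform applies is just the difference between its reference level and the track’s integrated loudness. A minimal Python sketch of this idea (the function name is illustrative, not any platform’s actual API):

```python
def normalization_gain_db(track_lufs: float, reference_lufs: float) -> float:
    """Gain (in dB) a platform would apply so the track's integrated
    loudness matches its reference level."""
    return reference_lufs - track_lufs

# A track mastered to -9 LUFS, played on a -14 LUFS platform:
print(normalization_gain_db(-9.0, -14.0))   # -5.0: the track is turned down
# The same track on Spotify's -23 LUFS setting is turned down further:
print(normalization_gain_db(-9.0, -23.0))   # -14.0
```

Note that the same master lands at very different playback gains depending on which reference level is in force, which is exactly why chasing one magic number is futile.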

Focus on making the track sound as good as possible, whatever level that requires. There is no single number that works for everything: it all depends on the genre, the content of the song, and the intent of the artist. A mastering engineer’s goal is to make sure the track performs well across a variety of devices and systems. Finally, there are two other factors to consider when mastering: peak level and album balance.

Peak Level

A notable change came in the second half of 2021, when almost all major streaming platforms began offering lossless audio streaming. Notable exceptions are Spotify (whose Spotify HiFi tier has been announced but not yet launched) and SoundCloud. When lossless streaming was the exception rather than the norm, it was important to leave some headroom for peaks to avoid distortion during encoding and decoding. A good rule of thumb was to leave at least 1 dB of True Peak headroom.

A good way to audition this is to use the Codec Preview module in the Ozone plugin by iZotope. Spotify and SoundCloud don’t always use MP3 or AAC, but these two codecs can certainly give you a good idea of where other formats might fall short.

iZotope Ozone Codec Preview

When you use Codec Preview, you may notice that the peak level on your output meter is slightly higher than before. This is a natural side effect of converting a WAV or AIF file into a lossy format. You can’t avoid it, but you can prepare for it by lowering the level of your track so that the lossy version doesn’t overload the output.

However, with lossless streaming, this is not a problem. As long as your True Peak levels stay below about -0.3 dBTP, you should be fine. Which path you choose is largely up to you: you can keep the extra loudness, knowing that listeners on lossy streams may hear a little extra distortion, or play it safe and cater to the lowest common denominator.
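Checking that headroom numerically is simple once a linear true-peak value is converted to dBTP with a logarithm. A small sketch, assuming the -0.3 dBTP ceiling mentioned above (the helper names are hypothetical, not from any plugin’s API):

```python
import math

def dbtp(linear_peak: float) -> float:
    """Convert a linear true-peak sample value (0..1] to dBTP."""
    return 20.0 * math.log10(linear_peak)

def has_headroom(true_peak_dbtp: float, ceiling_dbtp: float = -0.3) -> bool:
    """True if the measured true peak sits at or below the ceiling."""
    return true_peak_dbtp <= ceiling_dbtp

print(has_headroom(dbtp(0.95)))  # about -0.45 dBTP -> True
print(has_headroom(dbtp(0.99)))  # about -0.09 dBTP -> False
```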

Album Balance

Another question that sometimes comes up is: “Should I master all the songs on an album to the same level?” If streaming platforms turn your songs down to a reference level, and different songs on your album sit at different levels, doesn’t that mean they will be turned down by different amounts, changing the balance of your album?

Luckily, the answer is mostly no. Amazon, Deezer, Pandora, and YouTube use track normalization exclusively, meaning every track is set to the reference level individually. For platforms like these, where users mostly listen to singles or radio-style streams, this makes sense. However, these platforms also hold a relatively smaller market share.

Apple Music and Spotify, on the other hand, have an album normalization mode. The technique is to take either the level of the loudest song on the album (or EP) or the average level of the entire album, and set it to the platform’s reference level. The same gain offset is then applied to all the other songs on the album. For Spotify and Apple Music, this is triggered when two or more songs from an album are played in sequence.
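The loudest-track variant of album normalization can be sketched in a few lines: one gain offset is computed from the loudest song and applied to every track, which preserves the level differences between them (names and values are illustrative):

```python
def album_gain_offset(track_lufs_levels, reference_lufs=-14.0):
    """Loudest-track album normalization: a single offset for the whole
    album, derived from its loudest track."""
    loudest = max(track_lufs_levels)
    return reference_lufs - loudest

album = [-9.5, -11.0, -13.2]   # integrated LUFS per track
offset = album_gain_offset(album)
normalized = [lufs + offset for lufs in album]
print(offset)                            # -4.5
print([round(x, 1) for x in normalized]) # [-14.0, -15.5, -17.7]
```

Because every track gets the same offset, the quieter songs stay quieter by exactly the amount the mastering engineer intended.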

Interestingly, Tidal has chosen to use album normalization for all songs, even when they appear in the same playlist. This approach was implemented after Eelco Grimm published a study on the subject in 2017, providing strong evidence that most users prefer album normalization for both album and playlist listening. This points to another important principle: we should not let normalization reference levels dictate how we balance songs on an album; instead, the artistic intent and natural flow of the music should be our guide.

Below are some platform-by-platform specifications to help you understand the variables we are dealing with (when mixing and mastering, our studio also gives the client an extra version of the track at around -14 LUFS integrated to use).

Apple Music uses a -16 LUFS reference level with normalization turned on; it makes quieter songs louder only as far as peak levels allow, never clips, and applies either track or album normalization depending on whether a playlist or an album is being played. The caveat here is that older versions of macOS and iOS may still use Sound Check, a normalization method that is not based on LUFS measurements.

Spotify uses a default reference level of -14 LUFS, with additional user-selectable levels of -23 and -11 LUFS. Normalization is on by default, and at the -23 and -14 LUFS settings quieter songs are only turned up as far as their peak levels allow; a limiter is used for the -11 LUFS setting. Over 87% of Spotify users never change the default settings. Spotify also switches between track and album normalization depending on whether you’re playing a playlist or an album.
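The “quieter songs are only turned up as far as peaks allow” behaviour amounts to clamping any positive gain by the available true-peak headroom. A sketch of that logic, assuming a -1 dBTP ceiling purely for illustration (actual ceilings vary by platform and setting):

```python
def applied_gain_db(track_lufs, true_peak_dbtp,
                    reference_lufs=-14.0, ceiling_dbtp=-1.0):
    """Positive normalization gain is clamped so the true peak stays
    under the ceiling; negative gain (turning down) is applied in full."""
    gain = reference_lufs - track_lufs
    if gain > 0:
        headroom = ceiling_dbtp - true_peak_dbtp
        gain = min(gain, max(headroom, 0.0))
    return gain

# A quiet track (-18 LUFS) with peaks at -2 dBTP can only come up 1 dB,
# not the full 4 dB the reference level would suggest:
print(applied_gain_db(-18.0, -2.0))   # 1.0
# A loud track (-9 LUFS) is turned down in full:
print(applied_gain_db(-9.0, -0.5))    # -5.0
```

This is why heavily limited quiet masters gain nothing from normalization, while dynamic loud masters simply come down.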

YouTube uses a reference level of -14 LUFS and normalization is always on. The platform never uses clipping and uses track normalization exclusively.

SoundCloud does not use normalization and does not offer lossless playback (it compresses the uploaded material to lossy formats, reducing quality). Also, artists usually upload music directly to SoundCloud rather than through an aggregator. For these reasons, you may want to prepare a separate master for SoundCloud.

Amazon Music and Tidal use -14 LUFS while Deezer uses -15 LUFS and Pandora is close to -14 LUFS but doesn’t actually use LUFS at all. Tidal and Amazon have normalization enabled by default, while Deezer and Pandora don’t let you turn it off. Amazon, Pandora, and Deezer only use track normalization, while Tidal only normalizes albums.

AES official recommendation

In addition to all this, it should be noted that the engineers from the Audio Engineering Society (AES) have prepared a set of recommendations in the form of AES TD1008. This is a comprehensive document, but here are some of the highlights:

  • Use album normalization whenever possible, even for playlists.
  • Normalize speech to -18 LUFS.
  • Normalize music to -16 LUFS. When using album normalization, normalize the loudest track in the album to -14 LUFS.

Checking the characteristics of your track after mastering

If you want to check the characteristics of your track after mastering to see how it will be processed, you can use the Loudness panel in iZotope’s Insight plugin. If you choose this method, you will need to play the song from beginning to end without interruption. If you are looking for a faster method, check out the Waveform Statistics window in iZotope RX.

How to master, on what equipment, and which plug-ins to use for analysis and correction is a very broad topic; a mastering engineer should weigh it against all the subtleties and features of a particular track.
