Make Your Guitars LOUD!!!

[Editor's Note: This article was written by Chris Gilroy, producer and house engineer at Brooklyn-based Douglass Recording. Chris earned his degree in Sound Recording Technology from UMass Lowell. He has worked with a diverse range of artists including Ron Carter, Mike Stern, The Harlem Gospel Choir, and Christian McBride, to name a few.]

I love guitars. Something about them excites all my nerve endings, from softly picked acoustics to a mountain of amps at full blast. These nuanced instruments can be tricky to record. Luckily for you, I'm setting up for a session right now where we will be tracking distorted guitars for the next three days. Let's talk a bit about getting the best results you can while recording, and the things I will be doing for this session.

Before you even get into the studio to shred, find a few different recordings that inspire you, the artist, the producer, or whoever is in charge of the project, for this session. Guy Picciotto of Fugazi has a very different tone than Matt Pike of Sleep/High on Fire. Talk to your engineer about how these different sounds speak to you and how they were achieved: what amps, guitars, pedals, etc. were used for tracking.

If you are engineering, you need to learn how different guitars sound. Why grab a Fender Stratocaster over a Telecaster? What's the draw of a hollow-body guitar? Each instrument sounds very different. Then there are amps! A Fender Deluxe sounds AMAZING when cranked, but very different from a Marshall JCM50. It is a never-ending task for us to learn these differences. I'm not a guitarist (my mind was simpler and could only handle smashing two pieces of wood against a drum), so on every session I work on I make sure we try a few amps and guitars: mostly so we can make sure we have a sound we are happy with in the room, but partially so I can listen to different combinations of instruments and amps, learning them and internalizing them.

Luckily, I am fortunate enough to work in a place that has a bunch of great-sounding amps. When you turn up the gain until the preamp starts to clip, you reach a magical land, one that so many pedals try to emulate. To get geeky for a second, a lot of distortion pedals are trying to recreate the sound of tube amps distorting. Housed in much smaller and cheaper enclosures, they are great for throwing a few extra flavors in your bag for a gig.

But these boxes use transistors and diodes to compress and clip your sound, which will flatten your dynamics and take a ton of life out of your guitar. Live they totally rule, but if you are in the studio and have a Marshall Bluesbreaker, you probably don't need that OCD pedal on. Turn up the amp and rock out.

A hard balancing act while tracking distorted guitars is not OVER-distorting. When we play live, we have the benefit of watching the player's hands on the instrument. We don't get that same luxury through a recording, so our guitar sound must be clear enough to make out all the notes and harmonies played. For a listening example, blink-182's Enema of the State is laden with giant, punchy-sounding guitars, and we can hear everything Tom DeLonge is playing. Go back a few albums to Cheshire Cat and it is much more difficult to hear exactly what he is playing; his sound is muddied and a bit too crunchy to fully hear everything. When we are tracking, back the amount of distortion down a little from where it sits live. The clarity will come through, but we still keep the amp growl.

Kurt Ballou of Converge is a master at getting an insanely aggressive sound while still maintaining note clarity. Don't get me wrong, I LOVE horribly recorded black metal records. But after a short period of time my ears get fatigued, because the guitars are basically white noise (and then I wonder why I didn't just put on a Merzbow record).

When I double guitars, I first make sure I know why we are doubling. Recently I finished mixing the new Nihiloceros EP. I wasn't involved in tracking, so during mixing I heard sections that wanted a slight energy boost, like after a bridge into the final chorus of a song. To solve this, we tracked a meatier guitar sound to blend in slightly behind the rest of the guitar assault. Mixed in, you can't quite tell that there is another guitar; it just feels like the part swells a little more.

For another record, by a new band from Philadelphia called Puriden, we wanted a massive wall of hard-panned guitars. They had recorded an SG through a Vox AC30 as the main guitar. Since that rig is the guitarist's tone, we didn't want to lose the Vox sound, so we doubled using the same amp and a Telecaster. This gave us enough sonic difference to hear that there were two guitars, but without phasing issues between the two.

Steve Albini spoke about this very eloquently in Mix with the Masters. In short, if you start with a different sound source that has a different timbre, you decrease the chances of having phase issues. If all you change is the amp, mic, etc., the initial harmonic character stays the same. For the most clarity and the fewest phase-related issues down the line, change your instrument. If you have the ability, change your whole rig, but at the very least try a different guitar.
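
To see why two takes of a nearly identical source can fight each other, here's a quick illustrative Python sketch (my own, not from Albini's talk): summing a signal with a slightly delayed copy of itself carves comb-filter notches out of the blend.

```python
import numpy as np

# Illustrative sketch (not from the article): summing a take with a slightly
# delayed identical copy creates comb-filter notches. Frequencies near odd
# multiples of 1/(2 * delay) cancel; others reinforce.
sr = 48000                       # sample rate, Hz
t = np.arange(sr) / sr           # one second of time
delay = 0.0005                   # 0.5 ms offset between the two "takes"

for freq in [440, 990, 2000]:
    take1 = np.sin(2 * np.pi * freq * t)
    take2 = np.sin(2 * np.pi * freq * (t - delay))  # same sound, arriving late
    blend = take1 + take2
    # Level of the blend relative to a single take
    gain_db = 20 * np.log10(np.sqrt(np.mean(blend**2)) / np.sqrt(np.mean(take1**2)))
    print(f"{freq} Hz: {gain_db:+.1f} dB")

# With delay = 0.5 ms the notch sits near 1 kHz: 990 Hz nearly vanishes
# (about -30 dB) while 2 kHz reinforces (+6 dB). Two different guitars put
# their energy at different harmonics, so no single notch guts the blend.
```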

Miking amps is a whole other beast. This topic alone could fill a book, so I will only briefly gloss over some ideas here. Or buy me a beer at a show and we can chat all night.

The placement of an amp in the room affects your sound dramatically. Putting an open-back amp against a wall will increase the amount of low frequencies in your sound. Putting a small amp on the floor will increase first-order reflections. Is the room large and live (reverberant), or tight and dry? The room sound will often slip into your mics and affect your recordings. Speaking of mics, each type of mic responds differently and adds to or subtracts from your sound.

The SM57, love it or hate it, will always be around and serve its duties wonderfully. Learn it and how to use it. Ribbon mics, like the Royer R-121, will add extra low mids to your sound and often tame harshness. Condenser mics also sound incredible on amps. I love the sound of a Schoeps M22 (a tube small-diaphragm mic) on amps like a Fender Deluxe, and the Soyuz 017 slays on guitar amps, as do so many other large-diaphragm options.

Be mindful that each mic has a limit to how much level it can handle. If you have a Marshall Plexi at full blast, some mics won't be happy and will give you thin or distorted tones. You could also damage the microphone, particularly sensitive ribbon mics, rendering it a very expensive doorstop.

Placement of the microphone on the cabinet makes a big change to the sound. The closer to the center of the speaker cone you get, the brighter the sound you capture. As you move off-axis, the sound gets a little darker, or warmer. How far or close your mic is will also change the timbre and room tone. Among other things, if you place a cardioid mic too close you will get a bass bump known as the proximity effect; listen to talk radio to hear this overused. Justin Colletti of SonicScoop has a wonderful video exploring the different sounds we get from this principle alone.

Originally I was hoping to get into mixing guitars, but that must wait until next time. The last point I want to drive home is that this is a skill set we can always improve on. We are constantly learning. Go to conferences (AES), workshops, and talks. Read magazines (Tape Op!) and watch videos. Talk to peers at all levels. Whenever possible, I try to assist other engineers. It lets me see how other people do things and handle situations. The amount I have learned from that, and from post-session conversations about the techniques and decisions used, has been monumental.

Music Streaming Platforms & Mastering – 3 Guiding Concepts

[Editor's Note: This blog was written by Alex Sterling, an audio engineer and music producer based in New York City. He runs a commercial studio in Manhattan called Precision Sound where he provides recording, mixing, and mastering services.]

Background:

As an audio engineer and music producer, I am constantly striving to help my clients' music sound the best that it can for as many listeners as possible. With music streaming services like Apple Music/iTunes Radio, Spotify, Tidal, and YouTube continuing to dominate how people consume music, making sure the listener is getting the best possible sonic experience from these platforms is very important.

Over the last several years, a new technology called Loudness Normalization has been developed and integrated into the streaming services' playback systems.

Loudness Normalization is the automatic process of adjusting the perceived loudness of all the songs on the service so that they sound approximately the same as you listen from track to track.

The idea is that the listener should not have to adjust the volume control on their playback system from song to song and therefore the listening experience is more consistent. This is generally a good and useful thing and can save you from damaging your ears if a loud song comes on right after a quiet one and you had the volume control way up.

The playback system within each streaming service has an algorithm that measures the perceived loudness of your music and adjusts its level to match a loudness target level they have established. By adjusting all the songs in the service to match this target the overall loudness experience is made more consistent as people jump between songs and artists in playlists or browsing.

If your song is louder than the target, it gets turned down to match; if it is softer, it is sometimes made louder with peak limiting, depending on the service (currently Spotify only).
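
The rule is simple enough to sketch in a few lines of Python. This is a rough model of the behavior described here, not any service's actual code:

```python
def normalization_gain_db(measured_lufs: float, target_lufs: float,
                          gains_up: bool = False) -> float:
    """Playback gain, in dB, a service would apply to hit its loudness target."""
    offset = target_lufs - measured_lufs
    if offset < 0:
        return offset                        # louder than target: turned down
    return offset if gains_up else 0.0       # softer: only some services gain up

print(normalization_gain_db(-9.5, -14.0))                  # -4.5 (turned down)
print(normalization_gain_db(-18.0, -14.0))                 #  0.0 (left alone)
print(normalization_gain_db(-18.0, -14.0, gains_up=True))  # +4.0 (plus limiting)
```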

So how do we use this knowledge to make our music sound better?

The simple answer is that we want to master our music taking into account the loudness standards being used to normalize it when streaming, and prepare a master that generally complies with those standards.

Concept 1: Master for sound quality, not maximum loudness.

If possible, work with a professional mastering engineer who understands how to balance loudness issues along with the traditional mastering goals of tonal balance, final polish, etc.

If you’re mastering your own music then try to keep this in mind while you work:

Don’t pursue absolute loudness maximization, instead pursue conscious loudness targeting.

If we master our music to be as loud as possible and use a lot of peak limiting to get the loudness level very high, then we are most likely sacrificing some dynamic range, transient punch, and impact to get our music to sound loud.

The mechanism of loudness maximization intentionally reduces the dynamic range of our music so the average level can be made higher. There are benefits to this such as increasing the weight and density of a mix, but there are also negatives such as the loss of punch and an increase in distortion. It’s a fine line to walk between loud enough and too loud.
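
To see the mechanism in miniature, here's an illustrative Python sketch (mine, not the author's): a crude peak limiter shrinks the crest factor, the peak-to-RMS ratio, which is exactly the dynamic range being traded for loudness.

```python
import numpy as np

# Illustrative sketch: clamping peaks lets the average level rise relative
# to the peak level, shrinking the crest factor (peak-to-RMS ratio).
rng = np.random.default_rng(0)
mix = rng.normal(0, 0.2, 48000)   # stand-in for one second of a punchy mix

def crest_factor_db(x):
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x**2)))

for ceiling in [1.0, 0.4, 0.2]:
    limited = np.clip(mix, -ceiling, ceiling)   # crude peak limiter
    limited /= np.max(np.abs(limited))          # peaks back at full scale
    print(f"ceiling {ceiling}: crest factor {crest_factor_db(limited):.1f} dB")

# Harder limiting -> lower crest factor -> a "louder" waveform at the same
# peak level, but fewer transients survive.
```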

Here is where loudness normalization comes in:

If our song is mastered louder than the streaming target loudness level, then our song will be gained down by the service as a result. If you are mastering louder than the target level, you are throwing away potential dynamic range and punch for no benefit, and your song will sound smaller, less punchy, and more dynamically constrained in comparison to a song that was mastered more conservatively with regard to loudness.

If we master softer than the target level, then in some cases (Spotify) the streaming service actually adds gain and peak limiting to bring up the level. This is potentially sonically adverse because we don't know what that limiting process will do to our music. Will it sound good or not? It will most likely create some loss of punch, but how much is lost depends on the content that was put in.

Some music is more sensitive to this limiting process. High-dynamic-range jazz or classical music with pristine acoustic instruments might be more sonically damaged than, say, a rock song with distorted guitars, so the result is not predictable from the loudness measurement alone; it also depends on musical style.

Thankfully, the main platforms other than Spotify don't add gain and peak limiting as of this writing, so they are potentially less destructive to sound quality for below-target content.

Concept 2: Measure loudness using a LUFS/LKFS meter.

The different streaming services have different loudness standards and algorithms to take measurements and apply the normalization, but for the most part they use the same basic unit of loudness measurement, called LUFS or LKFS (two names for the same ITU-R BS.1770 measurement). This metering system allows engineers to numerically meter how loud content is and adjust the dynamic range accordingly.

Being able to see how our masters meter on this scale is useful for predicting what will happen when they are streamed on different services (i.e., will the algorithm gain them up or down to meet the target, or not?).
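
If you want to check a master yourself, free LUFS meters abound. As one example, here is a minimal sketch using the open-source pyloudnorm Python library; the file name is a placeholder:

```python
# Minimal sketch using pyloudnorm (pip install pyloudnorm soundfile);
# "master.wav" is a placeholder for your finished master.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("master.wav")            # load the audio file
meter = pyln.Meter(rate)                      # ITU-R BS.1770 meter
loudness = meter.integrated_loudness(data)    # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")
```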

Concept 3: Choose which loudness standard to master to.

If you are working with a mastering engineer, direct them to master to a target loudness level, and consult with them about what they feel is an appropriate target for your music. If you are mastering jazz or classical music, you probably don't want to make a very loud master, for sound quality and dynamic range reasons; but if you are making a heavy rock, pop, or hip-hop master that wants to be more intense, then a louder target may be more suitable.

iTunes Sound Check and Apple Music/iTunes Radio use a target level of -16 LUFS, and this would be a suitable target for more dynamic material.

Tidal uses a target level of -14 LUFS, a nice middle ground for most music that wants to be somewhat dynamic.

YouTube uses a target level of -13 LUFS, a tiny bit less dynamic than Tidal.

Spotify uses a loudness target of -11 LUFS, and as you can see this is 5 dB louder than iTunes/Apple Music. This is more in the territory of low-dynamic-range, heavily limited content.

Somewhere between -16 LUFS and -11 LUFS might be the best target loudness for your music, based on your desired dynamic range, but the goal is not to go above the chosen target; otherwise your content gets gained down on playback and dynamic range is lost.
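
Putting the targets above together, a small sketch (again mine, using the levels cited in this article, which the services may revise) shows what each platform would do to a hypothetical master:

```python
# Target levels as cited in this article; services revise them over time.
def playback_gain_db(measured, target, gains_up):
    offset = target - measured
    return offset if (offset < 0 or gains_up) else 0.0

targets = {
    "Apple Music / iTunes": (-16.0, False),
    "Tidal":                (-14.0, False),
    "YouTube":              (-13.0, False),
    "Spotify":              (-11.0, True),   # only Spotify gains quiet tracks up
}

master_lufs = -12.5   # hypothetical master loudness
for service, (target, gains_up) in targets.items():
    print(f"{service}: {playback_gain_db(master_lufs, target, gains_up):+.1f} dB")

# Prints -3.5, -1.5, -0.5, and +1.5 dB respectively: this master is turned
# down everywhere except Spotify, which would gain it up (with limiting).
```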

In all services except Spotify, content that measures lower than the target loudness is not gained up. So for people working with very dynamic classical music or film soundtracks, those big dynamic movements will not be lost on most streaming platforms.

However, since Spotify is unique in adding gain and peak limiting when your content is below target, it is potentially the most destructive sonically. So should you master to -11 LUFS and save your music from Spotify's peak limiting, but lose dynamic range on the other platforms? It's a compromise you have to decide on for yourself, in consultation with your mastering engineer.

You might want to test out what -11 LUFS sounds like in the studio and hear what the effect of that limiting is. Is it better to master that loud yourself and compensate in other ways for the lost punch and lower dynamic range? Or should you accept that Spotify users get a different dynamic range than iTunes users, and let your music be more dynamic for the rest of the platforms?

In all cases there is no benefit to going above -11 LUFS, because that is the loudest target level used by any service. If you go louder than -11 LUFS, your music will be turned down and dynamic range and punch will be lost on all the services, needlessly and permanently.

Further Reading:

Great infographic on the different streaming loudness targets.

More info on LUFS/LKFS metering.