How To Make Your Vocal Tracks POP

[Editor’s Note: This blog was written by our friends at Soundfly – learn more about their online course series and how you can get a discount at the bottom of this article!]

 

These days, producing your own demos essentially means the same thing as making a fully produced record of your song. It’s expected that your demo will sound full, warm, and professional, and your vocal performance has to POP to grab the interest of potential labels, bandmates, booking agents, or whomever you might be trying to impress.

If you choose to sing on your own tracks, or work with a vocalist in a band, and don’t know how to make them pop like your favorite records, it can be tough to know where to start. Soundfly’s new online course series, Faders Up I: Modern Mix Techniques and Faders Up II: Advanced Mix Techniques, is taught by today’s top sound engineers, who will help you get the professional sound you’re looking for in just six weeks. (Scroll to the bottom of this article for a special discount code!)

The next session starts on February 6, 2018, but for now, here are a few tried-and-true methods for getting your vocal tracks to sit confidently in your productions.

VOCAL POP TIP 1: NO MATTER THE MIC YOU USE, USE EQ

If you’re just starting to record and process your own vocals for the first time, you might not have a $15,000 vintage Neumann microphone at the ready. Perhaps you’re working with a stage mic like an SM58, or a USB mic like the Yeti from Blue Microphones. But even if you’ve saved up to rent something nice, your voice and the mic can’t do all the work.

Equalization (EQ) is an incredibly powerful tool, and often a necessary one to really make your vocal track pop. Here are a few common moves I make frequently when processing vocals:

1. High-pass filter. Sometimes also called a low-cut filter, this gets the muddy low-end rumble and background noise out of your vocal.

Plosives from sounds like Ps and Bs can send an excessive amount of air into the microphone and cause a low-end rumble below 100 Hz. Sometimes electrical noise from a building’s power can create a hum around 50 or 60 Hz.

You can fix a lot of this problem by cutting out the low end! Try a high-pass around 100–150 Hz. (There’s a quick code sketch of this move at the end of this tip.)

Generally, you can get away with a slightly higher cut for female vocalists, whereas you don’t want to kill a male vocalist’s lowest notes. Be sure to listen for the lowest note in your song and make sure you’re not totally gutting it!

2. High boost for “air.” A common characteristic of high-quality microphones is a boost in the 6–10 kHz range. This adds a pleasant “airiness” to a vocal that really grabs the ear and adds clarity.

If your mic doesn’t achieve this for you, or you want to over-emphasize this effect, consider giving your vocal a small bell-curve boost around 6 kHz.

3. Cut the “honk.” Sometimes your vocal might pop out a little more than desirable, somewhere in the 2–5 kHz range. This is the area most responsible for achieving intelligibility of the human voice, but sometimes we end up with moments of too much intelligibility.

If you take an EQ scalpel to the offending frequency area and carve this area precisely, you make room to boost the whole vocal and help it stand out even more. (Note: This is an effect perhaps best achieved using a multi-band compressor to isolate the frequency range, so if you have something like that available to you, use it!)
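To make the first move above (the high-pass filter) concrete, here’s a minimal sketch in Python using SciPy and the soundfile library. The file name is a placeholder, and any EQ plugin does the same job interactively; this only illustrates the idea of a gentle low cut around 120 Hz on a mono vocal take.

```python
import soundfile as sf
from scipy.signal import butter, sosfilt

# load a mono vocal take (placeholder file name)
vocal, sr = sf.read("lead_vocal.wav")

# 2nd-order (12 dB/octave) high-pass at 120 Hz -- inside the 100-150 Hz range
# suggested above; lower the cutoff if it starts thinning out low male notes
sos = butter(2, 120, btype="highpass", fs=sr, output="sos")
filtered = sosfilt(sos, vocal)

sf.write("lead_vocal_highpassed.wav", filtered, sr)
```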


VOCAL POP TIP 2: LEVEL OUT YOUR VOCAL WITH COMPRESSION

In the realm of pop and electronic music production, I cannot think of a single time I didn’t use at least a little bit of compression on every vocal track in my mixes.

Even in the most acoustic of settings, compression is usually involved to some degree at one or more stages of processing the vocal.

Let’s assume for now that you’re not working with a hardware compressor in between your mic and your interface — so you’re probably compressing after you’ve recorded. Here are a few notes for getting the most out of your compressor.

1. Before you compress, automate! Automation is a severely underused and often underappreciated ally in making vocals and instruments pop in a production. If there’s an obvious offending peak in volume in your vocal track, try to even it out before you stick a compressor on and try to get it to do the work for you.

Note that if you do this, you might want to “bounce in place” your automated track. Otherwise, the compressor on your channel strip will process the signal before your fader automation is applied, which defeats the whole purpose.

2. Less is more. Compression is a good way to give your vocal take consistency and add a nice, warm color to it. However, it’s very easy to go overboard with it.

It can be tempting to work with the built-in presets of a compressor, but frequently those will squash your vocal way more than you’d actually want. That said, Logic’s default settings are actually a pretty good starting point:

  1. A small or medium ratio, something like 2:1 or 3:1
  2. A quick (but not instant) attack time, around 10–15 ms
  3. A moderate release time, around 50–60 ms

After that, it’s about adjusting the threshold until you achieve the desired gain reduction. Start subtle, and try to keep things at or below 6 dB of gain reduction. Any more than that, and your track might start to sound flat and lifeless. (There’s a small code sketch of these settings at the end of this tip.)

3. Double it up! Parallel compression is the technique of sending your vocal track to a second location (via a send or bus), and compressing only the duplicated signal, usually in an extreme way.

You can gain a lot of presence and “body” by compressing a vocal signal really hard, but it’ll dull the top end and make it feel lifeless and overly aggressive. Instead, if you compress a copy of the signal really heavily, and mix only a little bit in, you get some of the body and presence without killing your vocal performance.

The settings for the parallel compressor can be pretty extreme. Keeping everything else the same, raise your ratio to somewhere between 10:1 and 20:1, and then lower your threshold until you achieve gain reduction in the 15–20 dB range.

That’s a punishing crush, but mixed in tastefully, it can add a lot of pop to the vocal!
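If it helps to see those starting settings as math, here’s a rough sketch of a feed-forward compressor gain computer in Python/NumPy. Real plugin detectors are more sophisticated, and the threshold value here is just an assumption you’d adjust until you see roughly 6 dB of gain reduction or less.

```python
import numpy as np

def compress(x, sr, threshold_db=-18.0, ratio=3.0, attack_ms=12.0, release_ms=55.0):
    """Very simplified compressor: smooths a dB-domain gain-reduction signal."""
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))    # attack smoothing coefficient
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))   # release smoothing coefficient

    level_db = 20.0 * np.log10(np.abs(x) + 1e-9)      # instantaneous level in dB
    gain_db = np.zeros_like(x)
    env = 0.0
    for i, lvl in enumerate(level_db):
        over = max(lvl - threshold_db, 0.0)           # dB over the threshold
        target = over * (1.0 - 1.0 / ratio)           # desired reduction at this ratio
        coeff = att if target > env else rel          # attack when clamping, release when relaxing
        env = coeff * env + (1.0 - coeff) * target
        gain_db[i] = -env
    return x * 10.0 ** (gain_db / 20.0)
```

The parallel trick from point 3, using the same helper, is then just the dry vocal plus a small amount of a heavily crushed copy (assuming `vocal` and `sr` are a mono take and its sample rate, as in the earlier EQ sketch; the 20% blend is an assumption, so mix to taste):

```python
# crush a copy hard: high ratio, low threshold, aiming at ~15-20 dB of gain reduction
crushed = compress(vocal, sr, threshold_db=-35.0, ratio=15.0)

# blend only a little of the crushed copy under the untouched vocal
parallel = vocal + 0.2 * crushed
```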

VOCAL POP TIP 3: SPACE IS THE PLACE

These days, the use of reverb on vocal tracks is on a bit of a downward trend overall. Dry vocals are great for the intimate and/or aggressive sound of rap tracks and alternative rock, but might not work for bigger pop productions or folk numbers. Here’s a reliable way to create a larger-than-life space for your vocal.

1. Set the scene with sends. Set up two separate sends for your vocal. One will go to a mono, plate-style reverb that gives the vocal some general resonance, and helps the vocal sit into a space with the other instrument(s) in the track. The other will be a large and wide hall verb to give your vocal bravado and gravitas.

2. Make it tight. Your first vocal reverb, the plate verb, is more musical in function. It’s more about creating sheen and coherence with the band or production than about defining a “space” that the vocal is in.

Some good starter settings for a plate reverb are a decay time around 1–2 seconds, with a short predelay, around 30–50 ms.

Be sure to filter out some of the low end “mud” from your reverbs! You don’t want the reverberated signal muddying up your lead vocal sound. A high-pass filter in the 200 Hz range is a good place to start. Follow that up with a low shelf that reduces the low end below 600 Hz or so, by as much as necessary to clean things up.

Some reverbs, like the UAD EMT 140, will have built-in filters and shelves for exactly these purposes, but you can achieve the same results with a simple EQ plugin or two.

3. Make it huge. Your vocal wasn’t recorded in a void, and you probably don’t want to present it in a void, either. A great way to make your vocal sound larger than life is to create a larger-than-life space for it to resonate in! A wide hall reverb is a great tool for this.

A good setting for this is somewhere between 3–8 seconds of decay time (we’re talking Grand Canyon-sized space), but obviously size your space to taste. Also, make sure that you give the reverb some predelay, on the order of 60–100 ms. Your vocal doesn’t leave the mouth and instantly hit the other side of a canyon!

Since these reflections are much farther away, you’ll want to take some high end out of the signal. Again, many reverbs will have some built-in shelving options available, but you can always take away more through EQ, if need be.

Just remember that, like with much of this processing, less is more. Always be sure to listen to your spatial effects in headphones as well as speakers, to make sure you’re truly hearing the space you’re creating, and not just the sound of the room you’re in.
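One way to jot down the two-send setup described above is as a simple config sketch. The parameter names below are made up, so map them onto whatever your DAW and reverb plugins call them; the values are the starting points suggested in the text.

```python
vocal_reverb_sends = {
    "plate": {                          # Tip 2: tight, musical "sheen"
        "width": "mono",
        "decay_s": 1.5,                 # roughly 1-2 seconds
        "predelay_ms": 40,              # roughly 30-50 ms
        "return_highpass_hz": 200,      # clear the low-end mud from the return
        "return_low_shelf_hz": 600,     # shelf down the lows below ~600 Hz as needed
    },
    "hall": {                           # Tip 3: larger-than-life space
        "width": "wide stereo",
        "decay_s": 5.0,                 # roughly 3-8 seconds, to taste
        "predelay_ms": 80,              # roughly 60-100 ms
        "return_highpass_hz": 200,
        "return_high_shelf": "reduce high end to taste (distant reflections are darker)",
    },
}
```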

VOCAL POP TIP 4: DOUBLE IT UP

When it comes to getting a “full” and “big” vocal sound, nothing quite gets the job done like simply laying down a second take and mixing it in, also known as doubling. In modern pop production, it’s not uncommon for a chorus part to have two, three, four, five, or more doubles of the main vocal part, just to thicken up the sound.

For a new vocalist, replicating an exact performance as closely as possible can seem daunting, if not impossible. However, some of the quirks and intricacies of each individual take, stacked together, can make a lead vocal that much more interesting, and can smooth out any individual mistakes or variations.

If you’re up for the challenge (and I highly recommend you give it a try!), here are a few suggestions for configuring doublings in a way that builds up your lead vocal into a fuller sound.

  1. The classic double. Take your lead and sing it twice. Run both vocals straight up the middle, in mono, or try just barely widening them by 5–10%, one in each direction.
  2. The wide triple. Record your lead three times. Pick a favorite and stick that in the middle. Pan the other two hard left and hard right, respectively, and turn those doublings down in the mix.
  3. The whisper triple. Similar to the wide triple, but the second and third takes are the lead performance “whispered” and mixed low in both ears. Great for achieving an ASMR effect, if that’s what you’re going for. This can also be used in conjunction with the wide triple.
  4. The unbound quartet (or quintet, etc.). Record four or more leads, and space them out evenly across the stereo spectrum.
  5. The crunchy chorus. Record three or more “loose” doubles, each with a slightly different timbral approach, and pan them all within 5–10% of center. This is great for creating a chorus or church choir effect.

These are just a few of the techniques with which I’ve found success. Bring layered harmonies into the fold, and you can really go crazy with stacking up a huge vocal sound!

Beware of the pitfalls

If you do choose to record and produce real doubles, there are a few pitfalls to be aware of to get the tightest, fullest sound.

1. Watch out for hard consonants. Sounds at the end of words with a lot of high end, like Ts and Ks, can sound really messy if they aren’t perfectly tight. While it’s possible to manipulate just the very end of a sound to perfectly align with the main lead, it’s often better to have only the lead take care of these sounds. The same thing goes for the front of sounds, including breaths.

When recording the doubles, try to get softer end consonants, or edit them out altogether in your DAW.

2. Don’t be afraid to cut things out. It’s easy to think that merely doubling the entire lead is the right idea, and that’s that. If you’re really digging the full sound you get from your two or more solid leads, voiced fully together, great! But also consider:

  1. High- and/or low-passing your double(s)
  2. Putting your double(s) through a 100% reverb, and mixing to taste
  3. Hard-compressing your doubles only, to emulate parallel compression
  4. Only keeping the doubles on lyrics or moments of great importance
  5. Cutting out all the extra noise and empty content of the audio waveforms when the double isn’t singing

Get creative with your doublings, and only take what you want from them!

3. Less is more. Once again, it’s important to recognize whether you’re adding something to the mix because it’s important, or because you just think it should sound better if you do. Be honest with yourself — are there too many doubles? Do you need a double at all, or is it just muddying up your track?


Lastly, if you’re less comfortable with recreating a perfect lead vocal performance, here are a few tips and techniques for approximating a lead vocal doubling effect.

  1. The chorus effect. This kind of effect doubles your vocal track inside a plugin, giving it a slight pitch shift and/or modulation over time, and messing very slightly with the timing of the original and the doubling. There are lots of different options for a plugin like this, such as Soundtoys’ MicroShift plugin.
  2. Slap it. Use a slapback delay to create a quick and quickly decaying doubling effect. Your delay time should be extremely short, on the order of 10–30 ms (with a tiny bit of predelay to separate it from the lead), with a low feedback to avoid a ringing, modulated type of sound.
  3. Fake it. You can also create a chorus plugin type effect manually. Just copy your lead, use a pitch shifting tool to alter it up or down by somewhere around 5 cents, and slightly delay the timing of the double. Pan to taste. You can only get away with a couple of these, at most!
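For the “fake it” option, here’s a rough sketch of the move in Python, assuming the librosa and soundfile packages are installed and using placeholder file names. A pitch-shift plugin plus a short delay in your DAW accomplishes the same thing.

```python
import numpy as np
import librosa
import soundfile as sf

lead, sr = librosa.load("lead_vocal.wav", sr=None, mono=True)

# detune the copy by 5 cents (librosa's n_steps is in semitones; 5 cents = 0.05)
double = librosa.effects.pitch_shift(lead, sr=sr, n_steps=0.05)

# delay the double by ~15 ms so it doesn't sit exactly on top of the lead
offset = int(0.015 * sr)
double = np.concatenate([np.zeros(offset), double])[: len(lead)]

# simple panning: lead leaning left, double leaning right
stereo = np.stack([0.8 * lead + 0.4 * double,
                   0.4 * lead + 0.8 * double], axis=1)
sf.write("lead_plus_fake_double.wav", stereo, sr)
```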

The world of vocal production is vast, and opinions vary widely between producers, genres, and generations about what a “correct” vocal production technique looks like. The truth is that whatever sounds good to you will probably sound good to someone else, but you won’t know until you try. So get experimenting, and make some music!


Gain more control over your next mix. Preview Soundfly’s Faders Up course series, Modern Mix Techniques and Advanced Mix Techniques, for free today! Both courses are taught by today’s leading sound engineers, and come with six weeks of personal mentorship and mix feedback from an expert who works in the field.

You’ll gain hands-on experience with modern mixing techniques such as EQ, compression, level and pan setting, digital signal processing, FX sends, and more. If you’d like to reserve a spot in the next session, use the promo code TUNECORE at checkout to get 25% off (that’s $125!).

Mix Buss Compression Made Easy!

[Author: Scott Wiggins]

How many of you are completely terrified of doing anything to the mix buss, aka the “stereo buss” or “2-buss”?

It is really easy to mess up an entire mix with too much processing, in particular mix buss compression.

Over years of searching the internet and creeping on my favorite mixers’ (Jacquire King, Dave Pensado, Chris Lord-Alge, and many more) mix buss compression settings, I’ve found that a little goes a long way.


Mix Buss Compression Glue

Have you ever heard the term “glue” in a conversation of recording and mixing?

No, I’m not talking about the kind you used to put on your hands in elementary school so you could peel it off when it dried.

Am I the only one who did that?

I’m talking about the way compression can make tracks seem like they fit together a little better.

When set up correctly, it makes the whole song feel glued together in subtle ways, giving it a nice, musical, polished, cohesive sound.

The goal with mix buss compression is just to tame any transients that spike up in volume a little too much, and then bring the overall volume of the rest of the tracks up juuuuuust a bit.

We’re just trying to add a little more energy and fullness to the mix.


Mix Buss Compression Settings

The Attack:

The attack setting you use for mix buss compression is important just like using a compressor on any other track.

With a faster attack the compressor will clamp down sooner on the transients that tend to be a little louder than the rest of the audio coming through.

A slower attack will wait a few more milliseconds before it clamps down on the audio and starts compressing.

I tend to use a faster attack, BUT I’m not crushing those transients with a ton of compression, so I still keep the dynamics in my mix.

If I found I was killing the transients too much and there was no excitement in my mix, I would probably make it a slower attack setting.

Release:

I tend to use a medium to fast release setting.

I’ve heard a lot of famous mixers say they set the release with the tempo of the song.

So they would watch the gain reduction needle and have it release on beat with the song.

I try my best to use this method.

Ratio:

I use a really small ratio of around 1.5 to 1.

This means that once my audio passes the threshold I’ve set, there is very little compression happening to that audio.

It’s just a little bit. I’m not trying to squash the life out of it.

You can experiment with a little bit higher of a ratio, but know that the lower the ratio the less compression (more dynamics), and the higher the ratio the more compression (less dynamics).

Threshold:

I dial the threshold to where I’m only getting about 1 to 3 dB of gain reduction on the peaks of the audio.

I tend to keep it on the lower side, at 1 to 2 dB of gain reduction.

You just want to kiss the needle. You don’t want too much mix buss compression happening.

Remember, we are going for a subtle, “glue”-like effect.

Make up Gain:

Just like on any other compressor, I turn the makeup gain up to match the amount of gain reduction happening.

Be careful here. Don’t turn it up too loud and fool yourself into thinking you like the result just because it’s louder.

Do your best to match the input volume with the output volume of the compressor.

We tend to think louder is better when it’s not really better, it’s just louder.
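To put numbers on a couple of the ideas above, here’s some simple arithmetic. The peak level and tempo are made-up examples, not recommendations.

```python
ratio = 1.5
threshold_db = -14.0
peak_db = -11.0                                # an example transient, 3 dB over the threshold

overshoot = peak_db - threshold_db             # 3 dB over
gain_reduction = overshoot * (1 - 1 / ratio)   # 3 * (1 - 1/1.5) = 1 dB of reduction
makeup_gain = gain_reduction                   # match makeup to reduction so "louder" doesn't fool you

# "release on beat with the song": one quarter note at 90 BPM lasts
bpm = 90
beat_ms = 60000 / bpm                          # about 667 ms per beat
```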

I’ve shot a video tutorial below to show all of this in action on a mix I’ve started. Check it out!

Conclusion

Mix buss compression is a great way to add a little bit of excitement and glue to your mix.

Some people like to slap it on the master buss AFTER they have mixed (Ryan West, whose credits include Jay-Z, Eminem, Kid Cudi, Maroon 5, T.I., Rihanna, and Kanye West, is one).

And some engineers like to slap a little bit of compression on in the beginning and mix through it.

I don’t think there is a right or wrong way when it comes to when to put it on.

The key is to be subtle and don’t kill a good mix with too much mix buss compression.

Use your ears like always. They are your biggest weapons.

Good luck and happy mixing!

If you want to learn the 1st step to a successful mix even before you think about adding mix buss compression, check this post out about “The Static Mix”.


[Editor’s Note: This blog was written by Scott Wiggins and originally appeared on his site, The Recording Solution, which is dedicated to helping producers, engineers and artists make better music from their home studios.]

From the Stage to the Studio: How To Adapt Vocals For Recording

[Editor’s Note: This blog was written by Sabrina Bucknole. Sabrina has been singing in musical theater for over eight years. This piece is a deep dive into how live and theatrical singers can adapt their vocals for the studio, and it offers five practical tips for singers recording in the studio.]

 

Singers who have a lot of experience performing live can often find difficulty in bringing the same level of performance to the studio. Whether this is because of the space itself, the lack of an audience, the different approaches to singing techniques, or the range of equipment found in the studio, singers must learn to adapt their vocals for the studio if they want to create the “right” sound.

Introducing the Stage to the Studio

There are many elements about the studio that cannot be re-created on stage, but with technology advancing, this gap is closing, especially where vocals are concerned. For instance, loop pedals are becoming increasingly popular in live performances among the likes of famous artists including Ed Sheeran, Radiohead, and Imogen Heap. Loop pedals are used to create layers of sound and add texture to the performance, allowing a solo artist to become anything from a three-person band to an entire choir.

With a loop pedal, vocals are captured much as they are in a studio, except they are recorded in the moment, during the live performance. It could be said that recording vocals in a studio is more intimate and requires more focus due to the enhanced sensitivity of the mics used in these spaces.

Both dynamic and condenser mics usually come with a specially designed acoustic foam windshield which absorbs the soundwaves coming from the voice. Duncan Geddes, MD of Technical Foam Services emphasises the importance of choosing the correct type of foam for the microphone windshield when recording in the studio. He explains that “having the right microphone windshield is essential to ensure an effective barrier against specific background noise while still allowing acoustic transparency. The critical aspect is the consistent pore size and density of the foam, to ensure complete sound transparency”.

To avoid picking up any unwanted sounds including plosives (“b” and “p” sounds created by a short blast of air from the mouth), acoustic windshields can be very effective. These air blasts strike the diaphragm of the mic and create a thump-like sound known as “popping”.


From Broadway to Booth: Vocal Differences

Singing in a recording studio can be daunting, especially for those who are used to singing live in a theatre. This could be because every tiny imperfection of the voice is picked up in the studio, including things that go unnoticed when performing live. Faced with these imperfections, some singers try to smooth out every little bump or crack in the voice in the pursuit of “perfection”.

Others embrace the “flaws” of the voice to create a sound unique to them. For instance, the well-known artist Sia embraces the natural cracks of her voice. This is apparent in most of her songs, especially in the song “Alive” on her 2016 album, This Is Acting. At 4 minutes 10 seconds you can hear her slide up to a higher note. To some, this might sound a little strained, but to others and Sia, this may simply be a natural and welcome part of her sound and performance.

Volume control can also be something to think about when entering the studio from the stage. Theatrical singers are taught to project their voices even in soft, quiet parts so they can still be heard. It could be argued that belting high and powerful notes becomes almost second nature to them, which is why they may find themselves having to rein it in slightly when adapting their voice for the studio.

For instance, according to multiplatinum songwriter and producer Xandy Barry, vocalists need to tone down their performance when recording in a studio. He reveals, “In certain quiet passages [singers] may need to bring it down, because in the studio a whisper can be clearly heard.”

It could also be argued that when performing live, the stage is a space where a certain type of energy is released, something that cannot be re-created in the studio. Playing to a crowd may bring something out of an artist. Some performers feel they can express themselves more on stage than in the studio. A live performance is, ultimately, a performance after all.

This does not mean that the studio is restricting; instead it could be argued that other techniques are evoked when recording in this space. For instance, some singers display more finesse and subtlety in their work, something that cannot always be re-created on stage.


Five practical tips for singers recording in the studio:

1. Warm up

Studio time can be expensive which is why it’s best to warm up before entering the studio. As well as being prepared vocally, make sure you’re prepared with how you’re going to approach the piece. Some recommend knowing precisely how you’re going to sing every section, but this can come across as being over-rehearsed and may not sound natural. To avoid this, approach the piece differently each time and try experimenting with different sounds, textures, and volumes.

2. Record, record, record

Try to capture everything you can. If you vocalise something you like the sound of but no one hit “record,” it can be frustrating to try to re-create that same sound.

3. Keep cool and have fun

If you feel like you’re getting frustrated because a take isn’t going well or you’re not hitting the right notes, or you’re sounding rather flat, take a break. Take some time to clear your head and start afresh, so the next time you hit record, you’ll almost certainly get the results you were after!

4. Be emotional

Conjuring up emotions in the studio can be harder than on stage. This can be due to the lack of atmosphere and people, and the confined space. To avoid lyrics coming across as bland or meaningless, try to focus on the lyrics themselves and decode them.

To stir the emotions you’re looking for, personalise the material by asking yourself “What is the meaning behind these words?”, “How are these lyrics making me feel?”, and “How can I relate these lyrics to my own life or the life of someone I care about?”. Like an actor and their script, discovering and analysing the intention of the words can have a great effect on the performance.

5. Manage the microphone

Singers with experience behind a mic know how to handle one. Skilled singers know where and how to move their head to create different volumes and sounds. For instance, by moving closer to the mic as they get softer and further away as they get louder, they can manipulate the volume of their vocals, reducing the amount of compression required later in editing.

Singing into a mic when recording can be different from singing into a mic on stage. The positioning, mounting, and angle of the mic, and its distance from the singer, can all affect the captured vocal sound. Live singers usually hold the mic close to their mouth, especially for softer parts, but in a studio the mic is usually more sensitive to sound. This is why it’s best to keep more distance between yourself and the mic, especially for louder sections.


14 of the Most Commonly Confused Terms in Music and Audio

[Editor’s Note: This article was written by Brad Allen Williams and it originally appeared on the Flypaper Blog. Brad is a NYC-based guitarist, writer/composer, producer, and mixer.]

Once upon a time, remixing a song meant actually redoing the mix. Many vintage consoles (some Neve 80-series, for example) have a button labeled “remix” that changes a few functions on the desk to optimize it for mixing rather than recording.

But sometime in the late 20th century, the word “remix” began to take on a new meaning: creating a new arrangement of an existing song using parts of the original recording. Into the 21st century, it’s evolved again and is now sometimes used as a synonym for “cover.” The latter two definitions remain in common use, while the first has largely disappeared.

Language is constantly evolving, and musical terms are obviously no exception. In fact, in music, language seems to evolve particularly fast, most likely owing to lots of interdisciplinary collaboration and the rapid growth of DIY.

Ambiguous or unorthodox use of language has the potential to seriously impede communication between collaborators. In order to avoid an unclear situation, let’s break down standard usage of some of the most commonly conflated, misused, or misunderstood music-related terms.

GAIN / DISTORTION

Gain, as it’s used in music electronics, is defined by Merriam-Webster as, “An increase in amount, magnitude, or degree — a gain in efficiency,” or, “The increase (of voltage or signal intensity) caused by an amplifier; especially: the ratio of output over input.”

To put it in less formal terms, gain is just an increase in strength. If an amplifier makes a signal stronger, then it causes that signal to gain intensity. Gain is usually expressed as a ratio. If an amplifier makes a signal 10 times as strong, then that amplifier has a “gain of 10.”
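For reference, voltage gain expressed as a ratio converts to decibels with 20 × log10(ratio), so a “gain of 10” works out to 20 dB:

```python
import math

ratio = 10
gain_db = 20 * math.log10(ratio)   # 20.0 dB
```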

On the other hand, harmonic distortion is that crunchy or fuzzy sound that occurs when an amplifier clips (as a result of its inability to handle the amount of signal thrown at it).

In the 1970s, some guitar amp manufacturers began employing extra gain stages in their designs to generate harmonic distortion on purpose. In other words, they’d amplify the signal, then amplify it again, and that second gain stage — having been given more than it could handle — would distort. These became known as “high-gain amplifiers.” Because of this, many guitarists just assumed that gain was synonymous with distortion. This was cemented when later amps like the Marshall JCM900 had knobs labeled “gain” that, by design, increased the amount of harmonic distortion when turned up!

Outside the realm of electric guitar, though, gain is still most typically used in a conventional way. When a recording engineer talks about “structuring gain,” for example, he or she is usually specifically trying to avoid harmonic distortion. It’s easy to see how this might cause confusion!

TONALITY / TONE

Not to pick on guitarists, but this is another one that trips us up. Tone has many music-related definitions, but the one of interest at the moment is (again, per Merriam-Webster), “Vocal or musical sound of a specific quality…musical sound with respect to timbre and manner of expression.”

On the other hand, the dictionary definition of tonality is:

1. Tonal quality.

2a. Key.

2b. The organization of all the tones and harmonies of a piece of music in relation to a tonic.

It’s important to note that “tonal quality” here refers to “the quality of being tonal,” or the quality of being in a particular key (in other words, not atonal). This is a different matter from “tone quality,” which is commonly understood to mean “timbre.” Most musicians with formal training understand tonality either as a synonym for key or as the quality of being in a key.

If you’re trying to sound fancy, it can be tempting to reach for words with more syllables, but using tonality as a synonym for timbre can be confusing. Imagine you’re recording two piano pieces — one utilizing 20th-century serial composition techniques and the other utilizing functional harmony. If you express concerns about the piano’s “tonality” while recording the second piece, the composer would probably think you were criticizing his or her work!

OVERDUB / PUNCH-IN

Most musicians in the modern era understand the difference between these two concepts, but they still occasionally confuse folks relatively new to the process of recording.

Overdubbing is adding an additional layer to an existing recording.

“Punching in” is replacing a portion of an already-recorded track with a new performance.

To do a “punch-in” (in order to fix a mistake, for example), the performer plays along with the old performance until, at the appropriate moment, the recordist presses record, thus recording over the mistake. The recordist can then “punch out” to preserve the remainder of the original performance once the correction is made.

GLISSANDO / PORTAMENTO

A portamento is a continuous, steady glide between two pitches without stopping at any point along the way.

A glissando is a glide between two pitches that stair-steps at each intermediate note along the way. A glissando amounts, in essence, to a really fast chromatic scale.

To play a glissando on guitar, you’d simply pluck a string and slide one finger up the fretboard. The frets would make distinct intermediate pitches, creating the stair-stepped effect. If you wished to play a portamento on guitar, you could either bend the string or slip a metal or glass slide over one of the fingers of your fretting hand.
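If you want to hear the two side by side, here’s a small sketch (assuming NumPy and soundfile are installed) that renders a smooth portamento from A3 to A4 next to a glissando that steps through the chromatic notes on the way up.

```python
import numpy as np
import soundfile as sf

sr, dur = 44100, 2.0
t = np.linspace(0, dur, int(sr * dur), endpoint=False)
f_start, f_end = 220.0, 440.0                     # A3 up to A4

# portamento: the frequency glides continuously, so integrate it for the phase
freq_porta = f_start * (f_end / f_start) ** (t / dur)
porta = np.sin(2 * np.pi * np.cumsum(freq_porta) / sr)

# glissando: 13 equal time slices covering the 12 semitone steps plus the destination
steps = np.minimum(np.floor(13 * t / dur), 12)
freq_gliss = f_start * 2 ** (steps / 12)
gliss = np.sin(2 * np.pi * np.cumsum(freq_gliss) / sr)

sf.write("portamento.wav", 0.3 * porta, sr)
sf.write("glissando.wav", 0.3 * gliss, sr)
```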

VIBRATO / TREMOLO

While often used interchangeably in modern practice, vibrato and tremolo are actually distinct kinds of wiggle. In most cases, tremolo is amplitude modulation (varying the loudness of the signal), whereas vibrato is frequency modulation (varying the pitch of the signal).

But over the past few hundred years, tremolo has commonly referred to many different performative actions. On string instruments, tremolo is used to refer to the rapid repetition of a single note, and in percussion, tremolo is often used to describe a roll. Singers use it for even crazier things, like a pulsing of the diaphragm while singing¹.

Leo Fender must’ve had his terms confused — he labeled the vibrato bridges on his guitars “synchronized tremolo,” and the tremolo circuits on his amps “vibrato.” Confusion has reigned ever since.
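To keep the two straight by ear, here’s a similar NumPy sketch (soundfile assumed installed): the first tone wobbles only in loudness (tremolo, amplitude modulation), the second only in pitch (vibrato, frequency modulation).

```python
import numpy as np
import soundfile as sf

sr, dur = 44100, 2.0
t = np.linspace(0, dur, int(sr * dur), endpoint=False)
carrier_hz, rate_hz = 220.0, 5.0                  # tone and wobble rate

# tremolo: amplitude wobbles between ~0.5 and 1.0, pitch stays put
tremolo = (0.75 + 0.25 * np.sin(2 * np.pi * rate_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

# vibrato: pitch wobbles about +/- 6 Hz (roughly half a semitone), amplitude stays put
freq = carrier_hz + 6.0 * np.sin(2 * np.pi * rate_hz * t)
vibrato = np.sin(2 * np.pi * np.cumsum(freq) / sr)

sf.write("tremolo.wav", 0.3 * tremolo, sr)
sf.write("vibrato.wav", 0.3 * vibrato, sr)
```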

ANALOG / DIGITAL

Analog and digital are perhaps the most confused pair of words in the 21st-century musical lexicon. I once had a somewhat older musician tell me that my 1960s-era fuzz pedal and tape echo made my guitar sound “too digital” for his music. Likewise, countless younger musicians claim to prefer the “analog sound” of the original AKAI MPC (an early digital sampler) and the Yamaha DX-7 (an early digital FM synthesizer). But “analog” and “digital” are not simply stand-ins for “vintage” and “modern,” nor for “hardware” and “software.” They’re entirely different mechanisms for storing and generating sounds. Let’s learn a little more!

Merriam-Webster’s most relevant definition of analog is, “Of, relating to, or being a mechanism in which data is represented by continuously variable physical quantities.”

Also relevant is its first definition of analogue: “Something that is analogous or similar to something else.”

Now, how does this relate to music technology? It all goes back to humans’ longstanding search for a way to capture and store sound. Sound, on a basic scientific level, is nothing more than compression and rarefaction (decompression) of air that our ears can sense. Since air pressure fluctuations can’t really be stored, recording sound proved elusive for a long time.

Scientists and engineers of the 19th and 20th centuries, however, brilliantly figured out that recording sound might be possible if they could accurately transfer that sound into something that could be preserved. They needed something storable that would represent the sound; an analogue to stand in for the sound that would allow it to be captured and kept.

First, they used mechanically generated squiggles on a wax cylinder as the analogue. Eventually, they figured out that they could use alternating-current electricity (which oscillates between positive and negative voltage), as an analogue of sound waves (which oscillate between positive and negative air pressure). From there, it was a relatively short leap to figuring out that they could, through electromagnetism, store that information as positively and negatively charged magnetic domains, which exist on magnetic tape.

This is analog recording!

Since electric voltage is continuously variable, any process — including synthesis — that represents air pressure fluctuations exclusively using alternating current electricity is analog, per Merriam-Webster’s first definition above.

Digital, on the other hand, is defined as, “Of, relating to, or using calculation by numerical methods or by discrete units,” and, “Of, relating to, or being data in the form of especially binary digits (‘digital images,’ ‘a digital readout’); especially: of, relating to, or employing digital communications signals (‘a digital broadcast’).”

That’s a little arcane, so let’s put it this way: Rather than relying directly on continuous analog voltages, a digital recorder or synthesizer computes numerical values that represent analog voltages at various slices of time, called samples. These will then be “decoded” into a smooth analog signal later in order to be accurately transferred back into actual air pressure variations at the speaker. If that’s a blur, don’t worry — you only need to understand that this is a fundamentally different process of storing or generating sound.
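As a tiny illustration of those “numerical values at slices of time”: sampling one millisecond of a 1 kHz tone at 48 kHz yields 48 numbers standing in for the continuously varying analog voltage.

```python
import numpy as np

sample_rate = 48000                       # samples (slices of time) per second
t = np.arange(48) / sample_rate           # 1 ms worth of sample instants
samples = np.sin(2 * np.pi * 1000 * t)    # 48 discrete values standing in for the signal
print(np.round(samples[:8], 3))           # e.g. [0. 0.131 0.259 0.383 ...]
```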

Absent a real acquaintance with the technology of an individual piece of equipment or process, it’s probably safer to avoid leaping to conclusions about whether it’s analog or digital. For example, there are reel-to-reel magnetic tape machines (like the Sony PCM 3348 DASH) that don’t record analog voltage-based signal at all, but rather use the tape to store digital information (as simple ones and zeroes).

Since you can’t judge whether a piece of gear is analog or digital with your eyes, it’s probably best to only use these terms when you need to refer to the specific technologies as outlined above. In other words, next time you’re recording in a studio with a cool-looking piece of old gear, it’s probably safer to use #vintage instead of #analog to caption your in-studio Instagram photo!

PHASE / POLARITY

Phase is defined by Merriam-Webster as… (deep breath):

“The point or stage in a period of uniform circular motion, harmonic motion, or the periodic changes of any magnitude varying according to a simple harmonic law to which the rotation, oscillation, or variation has advanced from its standard position or assumed instant of starting.”

That’s a mouthful! This is a concept that’s easier understood with an example, so let’s imagine that you have a swinging pendulum:

If you were to freeze that pendulum at two different times, the dot at the end would be in two different locations. The pendulum’s swing occurs over time, so the location of the pendulum depends on when you stop it. We’d refer to the phase of the pendulum in order to describe this phenomenon and where the pendulum is in its cycle relative to time. And since it’s always moving in a continuous, smooth arc, there are an infinite number of possibilities!

Phase becomes potentially relevant for anything that’s oscillating or undulating — like the pendulum above or a sound wave.

Polarity, on the other hand, is defined as, “The particular state, either positive or negative, with reference to the two poles or to electrification.”

To put it in very simple terms, you’re dealing with polarity any time you install a battery. The battery has a positive terminal and a negative one. You have to make sure it’s installed the right way. While phase is infinitely variable, polarity has only two choices — it’s one or the other.

In our brief explanation of analog audio above, we mentioned that positive and negative swings of voltage are used to represent positive and negative changes in air pressure. If we switch polarity of a signal, we swap all the positive voltages for negative ones, and vice-versa. +1v becomes -1v, +0.5v becomes -0.5v, etc. This is usually accomplished with a button marked with the Greek letter theta or “Ø.”

Interestingly, if you have one signal alone, it’s usually the case that our ear can’t really tell the difference between positive or negative polarity. It’s when you combine two or more similar signals (like two microphones on one drum for instance) that a polarity flip of one or the other can have a dramatic influence on the sound.

Confusingly, this influence is a result of phase differences between the two sources, and switching polarity can often improve (or worsen!) the sound of two combined sources which are slightly out of phase. For this reason, the polarity switch is often called a “phase switch,” and depressing it is often colloquially referred to as “flipping phase.”

In the graphic below, you’ll see a brief, zoomed-in snapshot of two waveforms. A single bass performance was simultaneously recorded into both a direct box (blue) and through a mic on its amplifier (green).

In the first graphic, you can notice that the two are slightly out of phase. The blue direct-in wave swings negative ever so slightly before the green mic–on–amp one does. This is because the amp’s sound had to travel through the air briefly before being picked up by the microphone. Since sound in air travels much more slowly than electricity does, this creates a slight time delay or phase discrepancy.

In the second example below, I’ve flipped the polarity of the amp track. You can see that the time delay still exists, but now the amp track’s wave is inverted or “upside down.” As the DI track swings negative, the amp track swings positive.

In this case, the switch made the combined sound noticeably thinner, so I quickly flipped it back. Occasionally though, flipping polarity improves the combined sound of two sources which are slightly out of phase.
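Here’s the same phenomenon in a synthetic NumPy sketch: a 100 Hz “DI” tone plus a copy delayed by about a millisecond (standing in for the mic on the amp), summed with and without a polarity flip. The numbers are made up; the point is just that the flip only matters once the two signals combine.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
di = np.sin(2 * np.pi * 100 * t)                  # the "DI" signal

delay = int(0.001 * sr)                           # ~1 ms of acoustic travel time
amp = np.concatenate([np.zeros(delay), di])[:sr]  # the delayed "mic on the amp" copy

normal = di + amp          # slightly out of phase, but still reinforcing
flipped = di - amp         # the same copy with its polarity inverted

print(np.abs(normal).max(), np.abs(flipped).max())
# roughly 1.9 versus 0.6 -- the polarity-flipped combination is noticeably thinner
```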

In practice, most recordists will understand what you mean if you say “flip the phase,” but should there happen to be a physicist in the room, you might get a raised eyebrow! Generally, though, this is a classic example of how unorthodox usage sometimes becomes accepted over time.

Which raises the point: any of the musical and audio terms above may eventually, like “remix” before them, evolve to incorporate new shades of meaning (or even have some earlier “correct” definitions fall into disuse). In the meantime, though, the more precise your grasp on the language of music, the less likely you are to misunderstand or be misunderstood.


¹ In performance, for both singers and many instrumentalists, pure tremolo is almost impossible to achieve without taking on some characteristics of vibrato; that is, a passage is rarely played or sung with variation in only volume or only pitch.

Music Streaming Platforms & Mastering – 3 Guiding Concepts

[Editor’s Note: This blog was written by Alex Sterling, an audio engineer and music producer based in New York City. He runs a commercial studio in Manhattan called Precision Sound where he provides recording, mixing, and mastering services.]

Background:

As an audio engineer and music producer, I am constantly striving to help my clients’ music sound the best that it can for as many listeners as possible. With music streaming services like Apple Music/iTunes Radio, Spotify, Tidal, and YouTube continuing to dominate how people consume music, making sure that the listener is getting the best possible sonic experience from these platforms is very important.

Over the last several years, a technology called Loudness Normalization has been developed and integrated into the streaming services’ playback systems.

Loudness Normalization is the automatic process of adjusting the perceived loudness of all the songs on the service to sound approximately the same as you listen from track to track.

The idea is that the listener should not have to adjust the volume control on their playback system from song to song and therefore the listening experience is more consistent. This is generally a good and useful thing and can save you from damaging your ears if a loud song comes on right after a quiet one and you had the volume control way up.

The playback system within each streaming service has an algorithm that measures the perceived loudness of your music and adjusts its level to match a loudness target level they have established. By adjusting all the songs in the service to match this target the overall loudness experience is made more consistent as people jump between songs and artists in playlists or browsing.

If your song is louder than the target it gets turned down to match and if it is softer it is sometimes made louder with peak limiting depending on the service (Spotify only).

So how do we use this knowledge to make our music sound better?

The simple answer is that we want to master our music to take into account the loudness standards that are being used to normalize our music when streaming, and prepare a master that generally complies with these new loudness standards.

Concept 1: Master for sound quality, not maximum loudness.

If possible, work with a professional mastering engineer who understands how to balance loudness issues along with the traditional mastering goals of tonal balance and final polish.

If you’re mastering your own music then try to keep this in mind while you work:

Don’t pursue absolute loudness maximization; instead, pursue conscious loudness targeting.

If we master our music to be as loud as possible and use a lot of peak limiting to get the loudness level very high then we are most likely sacrificing some dynamic range, transient punch, and impact to get our music to sound loud.

The mechanism of loudness maximization intentionally reduces the dynamic range of our music so the average level can be made higher. There are benefits to this such as increasing the weight and density of a mix, but there are also negatives such as the loss of punch and an increase in distortion. It’s a fine line to walk between loud enough and too loud.

Here is where loudness normalization comes in:

If our song is mastered louder than the streaming target loudness level then our song will be gained down (by the service) as a result. If you are mastering louder than the target level then you are throwing away potential dynamic range and punch for no benefit and your song will sound smaller, less punchy, and more dynamically constrained in comparison to a song that was mastered more conservatively in regards to loudness.

If we master softer than the target level then in some cases (Spotify) the streaming service actually adds gain and peak limiting to bring up the level. This is potentially sonically adverse because we don’t know what that limiting process will do to our music. Will it sound good or not? It most likely will create some loss of punch but how much is lost will be based on what content was put in.

Some music is more sensitive to this limiting process. High-dynamic-range jazz or classical music with pristine acoustic instruments might be more sonically damaged than, say, a rock song with distorted guitars, so the result is not entirely predictable from the loudness measurement alone; it also depends on musical style.

Thankfully, the main platforms other than Spotify don’t add gain and peak limiting as of this writing, so they are less potentially destructive to sound quality for below-target content.

Concept 2: Measure loudness using a LUFS/LKFS meter.

The different streaming services have different loudness standards and algorithms for taking measurements and applying normalization, but for the most part they use the same basic unit of loudness measurement, called LUFS or LKFS. This metering system allows engineers to numerically meter how loud content is and make adjustments to the dynamic range accordingly.

Being able to see how our masters measure on this scale is useful for understanding what will happen when they are streamed on different services (i.e., will the algorithm gain them up or down to meet the target, or not?).
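One way to take that measurement outside of a DAW meter is the open-source pyloudnorm package for Python; this assumes you have pyloudnorm and soundfile installed, and the file name is a placeholder.

```python
import soundfile as sf
import pyloudnorm as pyln

audio, sr = sf.read("my_master.wav")          # the finished master (placeholder name)
meter = pyln.Meter(sr)                        # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(audio)   # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")
```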

Concept 3: Choose which loudness standard to master to.

If you are working with a mastering engineer, direct them to master to a target loudness level, and consult with them about what they feel is an appropriate target level for your music. If you are mastering jazz or classical music, you probably don’t want to make a very loud master, for sound quality and dynamic range reasons; but if you are making a heavy rock, pop, or hip-hop master that wants to be more intense, then a louder target may be more suitable.

iTunes Sound Check and Apple Music/iTunes Radio use a target level of -16 LUFS, and this would be a suitable target for more dynamic material.

Tidal uses a target level of -14 LUFS, which is a nice middle ground for most music that wants to be somewhat dynamic.

YouTube uses a target level of -13 LUFS, a tiny bit less dynamic than Tidal.

Spotify uses a loudness target of -11 LUFS, and as you can see this is 5 dB louder than iTunes/Apple Music. This is more in the territory of low-dynamic-range, heavily limited content.

Somewhere between -16 LUFS and -11 LUFS might be the best target loudness for your music, based on your desired dynamic range, but the goal is not to go above the chosen target; otherwise your content gets gained down on playback and dynamic range is lost.

In all services except Spotify, content that measures lower than target loudness is not gained up. So for people working with very dynamic classical music or film soundtracks those big dynamic movements will not be lost on most streaming platforms.

However, since Spotify is unique in adding gain and peak limiting when your content is below target, it is potentially the most destructive sonically. So should you master to -11 LUFS and save your music from Spotify’s peak limiting, but lose dynamic range on the other platforms? It’s a compromise that you have to decide for yourself in consultation with your mastering engineer.

You might want to test out what -11LUFS sounds like in the studio and hear what the effect of that limiting is. Is it better to master that loud yourself and compensate in other ways for the lost punch and lower dynamic range? Or should you accept that Spotify users get a different dynamic range than iTunes users and let your music be more dynamic for the rest of the platforms?

In all cases there is no benefit to going above -11 LUFS, because that is the loudest target level used by any service. If you go louder than -11 LUFS, then your music will be turned down and dynamic range and punch will be lost on all the services, needlessly and permanently.
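To see what normalization will do to a given master, you can simply subtract its measured loudness from each platform’s target; negative numbers mean the service turns it down. The -9.5 LUFS figure below is just an example of a very loud master.

```python
targets_lufs = {
    "Apple Music / iTunes": -16,
    "Tidal": -14,
    "YouTube": -13,
    "Spotify": -11,
}

master_lufs = -9.5   # example: a very loud, heavily limited master

for service, target in targets_lufs.items():
    offset = target - master_lufs       # playback gain the service applies
    print(f"{service}: {offset:+.1f} dB")
# Apple Music / iTunes: -6.5 dB, Tidal: -4.5 dB, YouTube: -3.5 dB, Spotify: -1.5 dB
```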

Further Reading:

A great infographic on the different streaming loudness targets.

More info on LUFS/LKFS metering.

Slapback Delay – A Must Have On Vocals & Guitars

[Editor’s Note: This blog was written by Scott Wiggins and originally appeared on his site, The Recording Solution, which is dedicated to helping producers, engineers and artists make better music from their home studios.]

Slapback delay is a very common effect on tons of hit records. It’s really easy to set up!

When you think of delay, you probably think of yelling down a long canyon and hearing your voice repeat over and over. In my mind that’s an echo.

That’s what a slapback delay is, except it’s one single echo. One single repeat of the original signal.

It’s more like clapping while standing in a small alley between two buildings and hearing a very quick repeat of your clap.

It’s a super fast repeat that adds a sense of space.

Guitar players love it when playing live, and I love using it on guitars and vocals in the context of a mix.

It just adds some energy and sense of depth without having to use a reverb and run the risk of washing out your dry signal.

I tend to use more effects after the slapback delay, but more times than not I start with it to set the foundation of the sound I’m trying to achieve.
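Here’s a bare-bones version of the effect in Python/NumPy with soundfile, assuming a mono source file (placeholder name); any delay plugin with the feedback near zero does the same thing. The 90 ms time is only an assumption to start from, so set it to taste.

```python
import numpy as np
import soundfile as sf

audio, sr = sf.read("vocal_or_guitar.wav")    # placeholder mono file

delay_ms = 90        # one short repeat; tweak to taste (and see the timing tip below)
level = 0.4          # how loud the single repeat sits under the dry signal

d = int(sr * delay_ms / 1000)
dry = np.concatenate([audio, np.zeros(d)])            # pad the dry signal to full length
echo = np.concatenate([np.zeros(d), audio]) * level   # the one delayed repeat
slapback = dry + echo

sf.write("with_slapback.wav", slapback, sr)
```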

A Little Goes A Long Way

This effect is used more as a subtle effect on vocals or guitars.

It can be used on anything you like, but those tend to be the most popular in my opinion.

BUT… there are no rules, so if subtle bores you, then go crazy!

Also you can start with a preset on most delay plugins, and then tweak to taste.

If you are tweaking your own slap delays, just make sure your delay times are not set in even increments of each other.

For example: 32 ms and then 64 ms.

That would put the delay on the beat, and that’s not technically a slap delay.

I learned that tip from the great mixer and teacher Dave Pensado, so I wanted to pass it on to you.

Watch the video above to see how I set all this up inside a real mix.

Comment below and let me know your thoughts.