From the Stage to the Studio: How To Adapt Vocals For Recording

[Editor's Note: This blog was written by Sabrina Bucknole. Sabrina has been singing in musical theater for over eight years. Here she takes a deep dive into how live and theatrical singers can adapt their vocals for the studio and offers five practical tips for singers recording in the studio.]

 

Singers with a lot of experience performing live often find it difficult to bring the same level of performance to the studio. Whether this is because of the space itself, the lack of an audience, the different approaches to singing technique, or the range of equipment found in the studio, singers must learn to adapt their vocals for the studio if they want to create the “right” sound.

Introducing the Stage to the Studio

There are many elements of the studio that cannot be re-created on stage, but with technology advancing, this gap is closing, especially where vocals are concerned. For instance, loop pedals are becoming increasingly popular in live performance among famous artists including Ed Sheeran, Radiohead, and Imogen Heap. Loop pedals are used to create layers of sound and add texture to the performance, allowing a solo artist to become anything from a three-person band to an entire choir.

Vocals looped this way are recorded similarly to how they are recorded in a studio, except they are captured in the moment during a live performance. It could be said that recording vocals in a studio is more intimate and requires more focus due to the enhanced sensitivity of the mics used in these spaces.

Both dynamic and condenser mics usually come with a specially designed acoustic foam windshield which absorbs the soundwaves coming from the voice. Duncan Geddes, MD of Technical Foam Services, emphasises the importance of choosing the correct type of foam for the microphone windshield when recording in the studio. He explains that “having the right microphone windshield is essential to ensure an effective barrier against specific background noise while still allowing acoustic transparency. The critical aspect is the consistent pore size and density of the foam, to ensure complete sound transparency”.

Acoustic windshields are also very effective at keeping out unwanted sounds such as plosives (“b” and “p” sounds created by a short blast of air from the mouth). These air blasts strike the diaphragm of the mic and create a thump-like sound known as “popping”.

From Broadway to Booth: Vocal Differences

Singing in a recording studio can be daunting, especially for those who are used to singing live in a theatre. This could be because every tiny imperfection of the voice is picked up in the studio, including things that go unnoticed when performing live. Faced with these imperfections, some singers try to smooth out every little bump or crack in the voice in the pursuit of “perfection”.

Others embrace the “flaws” of the voice to create a sound unique to them. For instance, the well-known artist Sia embraces the natural cracks of her voice. This is apparent in most of her songs, especially in the song “Alive” on her 2016 album, This Is Acting. At 4 minutes 10 seconds you can hear her slide up to a higher note. To some, this might sound a little strained, but to others and Sia, this may simply be a natural and welcome part of her sound and performance.

Volume control is also something to think about when entering the studio from the stage. Theatrical singers are taught to project their voices even in soft, quiet parts so they can still be heard. It could be argued that belting high and powerful notes becomes almost second nature to them, which is why they may find themselves having to rein it in slightly when adapting their voice for the studio.

For instance, according to multiplatinum songwriter and producer Xandy Barry, vocalists need to tone down their performance when recording in a studio. He reveals, “In certain quiet passages [singers] may need to bring it down, because in the studio a whisper can be clearly heard.”

It could also be argued that when performing live, the stage is a space where a certain type of energy is released, something that cannot be re-created in the studio. Playing to a crowd may bring something out of an artist. Some performers feel they can express themselves more on stage than in the studio. A live performance is, ultimately, a performance after all.

This does not mean that the studio is restrictive; instead, it could be argued that other techniques are evoked when recording in this space. For instance, some singers display more finesse and subtlety in their studio work, something that cannot always be re-created on stage.

Five practical tips for singers recording in the studio:

1. Warm up

Studio time can be expensive, which is why it's best to warm up before entering the studio. As well as being prepared vocally, make sure you've thought about how you're going to approach the piece. Some recommend knowing precisely how you're going to sing every section, but this can come across as over-rehearsed and may not sound natural. To avoid this, approach the piece differently each time and try experimenting with different sounds, textures, and volumes.

2. Record, record, record

Try to capture everything you can. If you vocalise something you like the sound of but no one hit “record”, it can be frustrating to try to re-create that same sound.

3. Keep cool and have fun

If you feel like you're getting frustrated because a take isn't going well, you're not hitting the right notes, or you're sounding rather flat, take a break. Take some time to clear your head and start afresh, and the next time you hit record you're far more likely to get the results you were after!

4. Be emotional

Conjuring up emotions in the studio can be harder than on stage. This can be due to the lack of atmosphere and people, and the confined space. To avoid lyrics coming across as bland or meaningless, try to focus on the lyrics themselves and decode them.

To stir the emotions you’re looking for, personalise the material by asking yourself “What is the meaning behind these words?”, “How are these lyrics making me feel?”, and “How can I relate these lyrics to my own life or the life of someone I care about?”. Like an actor and their script, discovering and analysing the intention of the words can have a great effect on the performance.

5. Manage the microphone

Singers with experience behind a mic know how to handle one. Skilled singers know where and how to move their head to create different volumes and sounds. For instance, by moving closer to the mic as they get softer and further away as they get louder, they can manipulate the volume of their vocals, reducing the amount of compression required in editing later.

Singing into a mic when recording can be different from singing into a mic on stage. The positioning, mounting, and angle of the mic, and its distance from the singer, can all affect the captured vocal sound. Live singers usually hold the mic close to their mouth, especially for softer parts, but in a studio the mic is usually more sensitive to sound. This is why it's best to keep more distance between yourself and the mic, especially for louder sections.

14 of the Most Commonly Confused Terms in Music and Audio

[Editor's Note: This article was written by Brad Allen Williams and it originally appeared on the Flypaper Blog. Brad is a NYC-based guitarist, writer/composer, producer, and mixer.]

Once upon a time, remixing a song meant actually redoing the mix. Many vintage consoles (some Neve 80-series, for example) have a button labeled “remix” that changes a few functions on the desk to optimize it for mixing rather than recording.

But sometime in the late 20th century, the word “remix” began to take on a new meaning: creating a new arrangement of an existing song using parts of the original recording. Into the 21st century, it’s evolved again and is now sometimes used as a synonym for “cover.” The latter two definitions remain in common use, while the first has largely disappeared.

Language is constantly evolving, and musical terms are obviously no exception. In fact, in music, language seems to evolve particularly fast, most likely owing to lots of interdisciplinary collaboration and the rapid growth of DIY.

Ambiguous or unorthodox use of language has the potential to seriously impede communication between collaborators. To head off that confusion, let's break down standard usage of some of the most commonly conflated, misused, or misunderstood music-related terms.

GAIN / DISTORTION

Gain, as it’s used in music electronics, is defined by Merriam-Webster as, “An increase in amount, magnitude, or degree — a gain in efficiency,” or, “The increase (of voltage or signal intensity) caused by an amplifier; especially: the ratio of output over input.”

To put it in less formal terms, gain is just an increase in strength. If an amplifier makes a signal stronger, then it causes that signal to gain intensity. Gain is usually expressed as a ratio. If an amplifier makes a signal 10 times as loud, then that amplifier has a “gain of 10.”
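
For readers who like numbers, here is a minimal sketch (my own illustration, not from the original article) of how that ratio maps onto the decibel figures printed on faders and preamps, assuming the standard 20 * log10 convention for voltage gain:

```python
import math

# Illustrative only: a voltage gain ratio converts to decibels via 20 * log10(ratio).
# The "gain of 10" figure comes from the paragraph above.
def voltage_gain_db(v_out: float, v_in: float) -> float:
    """Gain in decibels for a given output/input voltage ratio."""
    return 20 * math.log10(v_out / v_in)

print(voltage_gain_db(10.0, 1.0))  # a gain of 10 works out to roughly 20 dB
```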

On the other hand, harmonic distortion is that crunchy or fuzzy sound that occurs when an amplifier clips (as a result of its inability to handle the amount of signal thrown at it).

In the 1970s, some guitar amp manufacturers began employing extra gain stages in their designs to generate harmonic distortion on purpose. In other words, they’d amplify the signal, then amplify it again, and that second gain stage — having been given more than it could handle — would distort. These became known as “high-gain amplifiers.” Because of this, many guitarists just assumed that gain was synonymous with distortion. This was cemented when later amps like the Marshall JCM900 had knobs labeled “gain” that, by design, increased the amount of harmonic distortion when turned up!

Outside the realm of electric guitar, though, gain is still most typically used in a conventional way. When a recording engineer talks about “structuring gain,” for example, he or she is usually specifically trying to avoid harmonic distortion. It’s easy to see how this might cause confusion!

TONALITY / TONE

Not to pick on guitarists, but this is another one that trips us up. Tone has many music-related definitions, but the one of interest at the moment is (again, per Merriam-Webster), “Vocal or musical sound of a specific quality…musical sound with respect to timbre and manner of expression.”

On the other hand, the dictionary definition of tonality is:

1. Tonal quality.

2a. Key.

2b. The organization of all the tones and harmonies of a piece of music in relation to a tonic.

It’s important to note that “tonal quality” here refers to “the quality of being tonal,” or the quality of being in a particular key (in other words, not atonal). This is a different matter from “tone quality,” which is commonly understood to mean “timbre.” Most musicians with formal training understand tonality either as a synonym for key or as the quality of being in a key.

If you’re trying to sound fancy, it can be tempting to reach for words with more syllables, but using tonality as a synonym for timbre can be confusing. Imagine you’re recording two piano pieces — one utilizing 20th-century serial composition techniques and the other utilizing functional harmony. If you express concerns about the piano’s “tonality” while recording the second piece, the composer would probably think you were criticizing his or her work!

OVERDUB / PUNCH-IN

Most musicians in the modern era understand the difference between these two concepts, but they still occasionally confuse folks relatively new to the process of recording.

Overdubbing is adding an additional layer to an existing recording.

“Punching in” is replacing a portion of an already-recorded track with a new performance.

To do a “punch-in” (in order to fix a mistake, for example), the performer plays along with the old performance until, at the appropriate moment, the recordist presses record, thus recording over the mistake. The recordist can then “punch out” to preserve the remainder of the original performance once the correction is made.

GLISSANDO / PORTAMENTO

A portamento is a continuous, steady glide between two pitches without stopping at any point along the way.

A glissando is a glide between two pitches that stair-steps at each intermediate note along the way. A glissando amounts, in essence, to a really fast chromatic scale.

To play a glissando on guitar, you’d simply pluck a string and slide one finger up the fretboard. The frets would make distinct intermediate pitches, creating the stair-stepped effect. If you wished to play a portamento on guitar, you could either bend the string or slip a metal or glass slide over one of the fingers of your fretting hand.

VIBRATO / TREMOLO

While often used interchangeably in modern practice, vibrato and tremolo are actually distinct kinds of wiggle. In most cases, tremolo is amplitude modulation (varying the loudness of the signal), whereas vibrato is frequency modulation (varying the pitch of the signal).
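
If it helps to see that distinction concretely, here is a minimal sketch (my own illustration, not from the article) that builds both effects on a plain sine tone with NumPy; the 440 Hz carrier and 5 Hz wobble rate are arbitrary choices:

```python
import numpy as np

# Contrasting the two kinds of wiggle on a sine tone. Rates and depths are illustrative.
sr = 44100                                  # sample rate in Hz
t = np.arange(sr) / sr                      # one second of time
carrier_hz, wobble_hz = 440.0, 5.0          # an A4 tone, wobbled 5 times per second

# Tremolo: amplitude modulation. The loudness varies, the pitch stays put.
tremolo = (1.0 + 0.5 * np.sin(2 * np.pi * wobble_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

# Vibrato: frequency modulation. The pitch varies (via a wobbling phase term),
# while the loudness stays put.
vibrato = np.sin(2 * np.pi * carrier_hz * t + 2.0 * np.sin(2 * np.pi * wobble_hz * t))
```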

But over the past few hundred years, tremolo has commonly referred to many different performative actions. On string instruments, tremolo is used to refer to the rapid repetition of a single note, and in percussion, tremolo is often used to describe a roll. Singers use it for even crazier things, like a pulsing of the diaphragm while singing¹.

Leo Fender must’ve had his terms confused — he labeled the vibrato bridges on his guitars “synchronized tremolo,” and the tremolo circuits on his amps “vibrato.” Confusion has reigned ever since.

ANALOG / DIGITAL

Analog and digital are perhaps the most confused pair of words in the 21st-century musical lexicon. I once had a somewhat older musician tell me that my 1960s-era fuzz pedal and tape echo made my guitar sound “too digital” for his music. Likewise, countless younger musicians claim to prefer the “analog sound” of the original AKAI MPC (an early digital sampler) and the Yamaha DX-7 (an early digital FM synthesizer). But “analog” and “digital” are not simply stand-ins for “vintage” and “modern,” nor for “hardware” and “software.” They’re entirely different mechanisms for storing and generating sounds. Let’s learn a little more!

Merriam-Webster’s most relevant definition of analog is, “Of, relating to, or being a mechanism in which data is represented by continuously variable physical quantities.”

Also relevant is its first definition of analogue: “Something that is analogous or similar to something else.”

Now, how does this relate to music technology? It all goes back to humans’ longstanding search for a way to capture and store sound. Sound, on a basic scientific level, is nothing more than compression and rarefaction (decompression) of air that our ears can sense. Since air pressure fluctuations can’t really be stored, recording sound proved elusive for a long time.

20th-century scientists and engineers, however, brilliantly figured out that recording sound might be possible if they could accurately transfer that sound into something that could be preserved. They needed something storable that would represent the sound; an analogue to stand in for the sound that would allow it to be captured and kept.

First, they used mechanically generated squiggles on a wax cylinder as the analogue. Eventually, they figured out that they could use alternating-current electricity (which oscillates between positive and negative voltage), as an analogue of sound waves (which oscillate between positive and negative air pressure). From there, it was a relatively short leap to figuring out that they could, through electromagnetism, store that information as positively and negatively charged magnetic domains, which exist on magnetic tape.

This is analog recording!

Since electric voltage is continuously variable, any process — including synthesis — that represents air pressure fluctuations exclusively using alternating current electricity is analog, per Merriam-Webster’s first definition above.

Digital, on the other hand, is defined as, “Of, relating to, or using calculation by numerical methods or by discrete units,” and, “Of, relating to, or being data in the form of especially binary digits (digital images, a digital readout); especially: of, relating to, or employing digital communications signals (a digital broadcast).”

That’s a little arcane, so let’s put it this way: Rather than relying directly on continuous analog voltages, a digital recorder or synthesizer computes numerical values that represent analog voltages at various slices of time, called samples. These will then be “decoded” into a smooth analog signal later in order to be accurately transferred back into actual air pressure variations at the speaker. If that’s a blur, don’t worry — you only need to understand that this is a fundamentally different process of storing or generating sound.
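
As a toy example of those “slices of time” (my own illustration, with nothing here drawn from the article), here is how a smoothly varying signal becomes a list of numbers:

```python
import numpy as np

# A continuously varying signal is measured 44,100 times per second
# and each measurement is stored as a number.
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate            # one second of sample instants
samples = np.sin(2 * np.pi * 440.0 * t)             # numbers standing in for voltage

# Stored as 16-bit integers, the way a CD or a WAV file would keep them.
pcm16 = np.round(samples * 32767).astype(np.int16)

# Playback reverses the process: the numbers are decoded back into a smooth
# voltage, which the speaker turns back into air pressure variations.
```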

Absent a real acquaintance with the technology of an individual piece of equipment or process, it’s probably safer to avoid leaping to conclusions about whether it’s analog or digital. For example, there are reel-to-reel magnetic tape machines (like the Sony PCM 3348 DASH) that don’t record analog voltage-based signal at all, but rather use the tape to store digital information (as simple ones and zeroes).

Since you can’t judge whether a piece of gear is analog or digital with your eyes, it’s probably best to only use these terms when you need to refer to the specific technologies as outlined above. In other words, next time you’re recording in a studio with a cool-looking piece of old gear, it’s probably safer to use #vintage instead of #analog to caption your in-studio Instagram photo!

PHASE / POLARITY

Phase is defined by Merriam-Webster as… (deep breath):

“The point or stage in a period of uniform circular motion, harmonic motion, or the periodic changes of any magnitude varying according to a simple harmonic law to which the rotation, oscillation, or variation has advanced from its standard position or assumed instant of starting.”

That’s a mouthful! This is a concept that’s easier understood with an example, so let’s imagine that you have a swinging pendulum:

If you were to freeze that pendulum at two different times, the dot at the end would be in two different locations. The pendulum’s swing occurs over time, so the location of the pendulum depends on when you stop it. We’d refer to the phase of the pendulum in order to describe this phenomenon and where the pendulum is in its cycle relative to time. And since it’s always moving in a continuous, smooth arc, there are an infinite number of possibilities!

Phase becomes potentially relevant for anything that’s oscillating or undulating — like the pendulum above or a sound wave.

Polarity, on the other hand, is defined as, “The particular state, either positive or negative, with reference to the two poles or to electrification.”

To put it in very simple terms, you’re dealing with polarity any time you install a battery. The battery has a positive terminal and a negative one. You have to make sure it’s installed the right way. While phase is infinitely variable, polarity has only two choices — it’s one or the other.

In our brief explanation of analog audio above, we mentioned that positive and negative swings of voltage are used to represent positive and negative changes in air pressure. If we switch polarity of a signal, we swap all the positive voltages for negative ones, and vice-versa. +1v becomes -1v, +0.5v becomes -0.5v, etc. This is usually accomplished with a button marked with the Greek letter theta or “Ø.”
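
In code terms, a polarity flip is about as simple as signal processing gets. A minimal sketch (my own illustration, not from the article):

```python
import numpy as np

# A polarity flip: every sample is multiplied by -1, so positive swings
# become negative and vice versa (what the "Ø" button does).
signal = np.array([0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5])
flipped = -signal                     # +0.5 becomes -0.5, -1.0 becomes +1.0, etc.

# Heard alone, the flipped copy sounds essentially the same; summed with a
# second, similar signal, the flip can dramatically change the combined sound.
```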

Interestingly, if you have one signal alone, it’s usually the case that our ears can’t really tell the difference between positive and negative polarity. It’s when you combine two or more similar signals (like two microphones on one drum, for instance) that a polarity flip of one or the other can have a dramatic influence on the sound.

Confusingly, this influence is a result of phase differences between the two sources, and switching polarity can often improve (or worsen!) the sound of two combined sources which are slightly out of phase. For this reason, the polarity switch is often called a “phase switch,” and depressing it is often colloquially referred to as “flipping phase.”

In the graphic below, you’ll see a brief, zoomed-in snapshot of two waveforms. A single bass performance was simultaneously recorded into both a direct box (blue) and through a mic on its amplifier (green).

In the first graphic, you can notice that the two are slightly out of phase. The blue direct-in wave swings negative ever so slightly before the green mic–on–amp one does. This is because the amp’s sound had to travel through the air briefly before being picked up by the microphone. Since sound in air travels much more slowly than electricity does, this creates a slight time delay or phase discrepancy.
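
To put a rough number on that lag (my own figures, since the article doesn't say how far the mic was from the amp):

```python
# Sound crawls through air at roughly 343 m/s, while the DI signal arrives
# effectively instantly. The mic distance below is a hypothetical example.
speed_of_sound_m_per_s = 343.0
mic_distance_m = 0.3                  # hypothetical: mic about 30 cm from the speaker

delay_ms = mic_distance_m / speed_of_sound_m_per_s * 1000.0
print(f"{delay_ms:.2f} ms")           # about 0.87 ms of extra delay on the amp track
```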

In the second example below, I’ve flipped the polarity of the amp track. You can see that the time delay still exists, but now the amp track’s wave is inverted or “upside down.” As the DI track swings negative, the amp track swings positive.

In this case, the switch made the combined sound noticeably thinner, so I quickly flipped it back. Occasionally though, flipping polarity improves the combined sound of two sources which are slightly out of phase.

In practice, most recordists will understand what you mean if you say “flip the phase,” but should there happen to be a physicist in the room, you might get a raised eyebrow! Generally, though, this is a classic example of how unorthodox usage sometimes becomes accepted over time.

Which raises the point: any of the musical and audio terms above may eventually, like “remix” before them, evolve to incorporate new shades of meaning (or even have some earlier “correct” definitions fall into disuse). In the meantime, though, the more precise your grasp on the language of music, the less likely you are to misunderstand or be misunderstood.


¹ In performance, for both singers and many instrumentalists, pure tremolo is almost impossible to achieve without taking on some characteristics of vibrato — that is to say, it is rare for a passage to be played or sung with variation in only volume or only pitch.

Music Streaming Platforms & Mastering – 3 Guiding Concepts

[Editor's Note: This blog was written by Alex Sterling, an audio engineer and music producer based in New York City. He runs a commercial studio in Manhattan called Precision Sound where he provides recording, mixing, and mastering services.]

Background:

As an audio engineer and music producer, I am constantly striving to help my clients' music sound the best that it can for as many listeners as possible. With music streaming services like Apple Music/iTunes Radio, Spotify, Tidal, and YouTube continuing to dominate how people consume music, making sure that the listener is getting the best possible sonic experience from these platforms is very important.

Over the last several years, a technology called Loudness Normalization has been developed and integrated into the streaming services' playback systems.

Loudness Normalization is the automatic process of adjusting the perceived loudness of all the songs on a service so that they sound approximately the same as you listen from track to track.

The idea is that the listener should not have to adjust the volume control on their playback system from song to song and therefore the listening experience is more consistent. This is generally a good and useful thing and can save you from damaging your ears if a loud song comes on right after a quiet one and you had the volume control way up.

The playback system within each streaming service has an algorithm that measures the perceived loudness of your music and adjusts its level to match a loudness target level they have established. By adjusting all the songs in the service to match this target the overall loudness experience is made more consistent as people jump between songs and artists in playlists or browsing.

If your song is louder than the target, it gets turned down to match; if it is softer, it is sometimes made louder with peak limiting, depending on the service (Spotify only).
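
To make the mechanics concrete, here is a simplified sketch (my own illustration, with made-up loudness figures) of the gain offset a service would apply to hit its target. The real algorithms are more involved, and as noted, only Spotify adds gain and limiting to quiet material:

```python
# Simplified model of loudness normalization: the service measures a track's
# loudness and applies the gain needed to land on its target.
def normalization_gain_db(measured_lufs: float, target_lufs: float) -> float:
    """Gain, in dB, needed to move a track from its measured loudness to the target."""
    return target_lufs - measured_lufs

print(normalization_gain_db(-9.0, -14.0))    # -5.0 -> a hot master gets turned down 5 dB
print(normalization_gain_db(-18.0, -14.0))   # +4.0 -> a quiet master would need 4 dB of gain
```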

So how do we use this knowledge to make our music sound better?

The simple answer is that we want to master our music to take into account the loudness standards that are being used to normalize our music when streaming, and prepare a master that generally complies with these new loudness standards.

Concept 1: Master for sound quality, not maximum loudness.

If possible, work with a professional mastering engineer who understands how to balance loudness issues along with the traditional mastering goals of tonal balance, final polish, etc.

If you’re mastering your own music then try to keep this in mind while you work:

Don’t pursue absolute loudness maximization, instead pursue conscious loudness targeting.

If we master our music to be as loud as possible and use a lot of peak limiting to get the loudness level very high then we are most likely sacrificing some dynamic range, transient punch, and impact to get our music to sound loud.

The mechanism of loudness maximization intentionally reduces the dynamic range of our music so the average level can be made higher. There are benefits to this such as increasing the weight and density of a mix, but there are also negatives such as the loss of punch and an increase in distortion. It’s a fine line to walk between loud enough and too loud.

Here is where loudness normalization comes in:

If our song is mastered louder than the streaming target loudness level, then our song will be gained down (by the service) as a result. If you are mastering louder than the target level, you are throwing away potential dynamic range and punch for no benefit, and your song will sound smaller, less punchy, and more dynamically constrained in comparison to a song that was mastered more conservatively with regard to loudness.

If we master softer than the target level, then in some cases (Spotify) the streaming service actually adds gain and peak limiting to bring up the level. This is potentially adverse sonically because we don't know what that limiting process will do to our music. Will it sound good or not? It will most likely create some loss of punch, but how much is lost depends on the content that goes in.

Some music is more sensitive to this limiting process than others. High-dynamic-range jazz or classical music with pristine acoustic instruments might be more sonically damaged than a rock song with distorted guitars, for example, so the result is not entirely predictable from the loudness measurement alone; it also depends on musical style.

Thankfully, the main platforms other than Spotify don't add gain and peak limiting as of this writing, so they are less potentially destructive to sound quality for below-target content.

Concept 2: Measure loudness using a LUFS/LKFS meter.

The different streaming services have different loudness standards and algorithms for taking measurements and applying the normalization, but for the most part they use the same basic unit of loudness measurement, called LUFS or LKFS. This metering system allows engineers to numerically meter how loud content is and make adjustments to the dynamic range accordingly.

Being able to understand how our music masters are metering on this scale is useful for seeing what will happen when they are streamed on different services (i.e., will the algorithm gain them up or down to meet the target, or not?).
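
Most DAWs and mastering plugins now include LUFS meters. For a rough idea of how you might check a bounce yourself, here is a minimal sketch using the open-source soundfile and pyloudnorm Python packages (my own example; the file name is hypothetical):

```python
import soundfile as sf       # assumes the soundfile package is installed
import pyloudnorm as pyln    # assumes the pyloudnorm package is installed

# Read a master (hypothetical file name) and measure its integrated loudness.
data, rate = sf.read("my_master.wav")
meter = pyln.Meter(rate)                      # BS.1770-style loudness meter
loudness = meter.integrated_loudness(data)    # result in LUFS

print(f"Integrated loudness: {loudness:.1f} LUFS")
# Compare against a platform target (say, -14 LUFS) to predict whether the
# service will gain the track up or down.
```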

Concept 3: Choose which loudness standard to master to.

If you are working with a mastering engineer, direct them to master to a target loudness level, and consult with them about what they feel is an appropriate target for your music. If you are mastering jazz or classical music, you probably don't want to make a very loud master, for sound quality and dynamic range reasons; but if you are making a heavy rock, pop, or hip-hop master that wants to be more intense, then a louder target may be more suitable.

iTunes Sound Check and Apple Music/iTunes Radio use a target level of -16 LUFS, which is a suitable target for more dynamic material.

Tidal uses a target level of -14 LUFS, a nice middle ground for most music that wants to be somewhat dynamic.

YouTube uses a target level of -13 LUFS, a tiny bit less dynamic than Tidal.

Spotify uses a loudness target of -11 LUFS, and as you can see this is 5 dB louder than iTunes/Apple Music. This is more in the territory of low-dynamic-range, heavily limited content.

Somewhere between -16 LUFS and -11 LUFS might be the best target loudness for your music, based on your desired dynamic range, but the goal is not to go above the chosen target; otherwise your content gets gained down on playback and dynamic range is lost.

In all services except Spotify, content that measures lower than target loudness is not gained up. So for people working with very dynamic classical music or film soundtracks those big dynamic movements will not be lost on most streaming platforms.

However, since Spotify is unique in adding gain and peak limiting if your content is below target, it is potentially the most destructive sonically. So should you master to -11 LUFS and save your music from Spotify's peak limiting, but lose dynamic range on the other platforms? It's a compromise that you have to decide for yourself, in consultation with your mastering engineer.

You might want to test out what -11LUFS sounds like in the studio and hear what the effect of that limiting is. Is it better to master that loud yourself and compensate in other ways for the lost punch and lower dynamic range? Or should you accept that Spotify users get a different dynamic range than iTunes users and let your music be more dynamic for the rest of the platforms?

In all cases there is no benefit to going above -11 LUFS, because that is the loudest target level used by any service. If you go louder than -11 LUFS, your music will be turned down and dynamic range and punch will be lost on all the services, needlessly and permanently.

Further Reading:

A great infographic on the different streaming loudness targets.

More info on LUFS/LKFS metering.

Slapback Delay – A Must Have On Vocals & Guitars

[Editor's Note: This blog was written by Scott Wiggins and it originally appeared on his site, The Recording Solution, which is dedicated to helping producers, engineers and artists make better music from their home studios.]

Slapback delay is a very common effect on tons of hit records. It’s really easy to set up!

When you think of delay, you probably think of yelling down a long canyon and hearing your voice repeat over and over. In my mind that’s an echo.

That’s what a slapback delay is, except it’s one single echo. One single repeat of the original signal.

It's more like clapping while standing in a small alley between two buildings and hearing a very quick repeat of your clap.

It’s a super fast repeat that adds a sense of space.

Guitar players love it when playing live, and I love using it on guitars and vocals in the context of a mix.

It just adds some energy and a sense of depth without having to use a reverb and run the risk of washing out your dry signal.

I tend to use more effects after the slapback delay, but more times than not I start with it to set the foundation of the sound I'm trying to achieve.

A Little Goes A Long Way

This effect is used more as a subtle effect on vocals or guitars.

It can be used on anything you like, but those tend to be the most popular in my opinion.

BUT… there are no rules, so if subtle bores you, then go crazy!

Also, you can start with a preset on most delay plugins and then tweak to taste.

If you are tweaking your own slap delays, just make sure your delay times aren't set in even increments of each other.

For example: 32 ms and then 64 ms.

That would put the delay on the beat, and that's not technically a slap delay.

I learned that tip from the great mixer and teacher Dave Pensado, so I wanted to pass it on to you.
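
For anyone curious what a slapback looks like under the hood, here is a minimal sketch (my own example, not from Scott's video) of a single-repeat slap applied to an audio buffer, plus the quick math for checking your slap time against the tempo. The 90 ms time and 50% level are illustrative starting points, not rules:

```python
import numpy as np

def slapback(dry: np.ndarray, sample_rate: int,
             delay_ms: float = 90.0, level: float = 0.5) -> np.ndarray:
    """One single, quick repeat of the dry signal mixed back in, with no feedback."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    out = np.zeros(len(dry) + delay_samples)
    out[:len(dry)] += dry                    # the dry signal
    out[delay_samples:] += level * dry       # one single echo
    return out

# For context: a quarter note at 120 BPM lasts 60000 / 120 = 500 ms, so a
# 90 ms slap sits well away from the beat.
```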

Watch the video above to see how I set all this up inside a real mix.

Comment below and let me know your thoughts.

The Business of Making a Record (Part II)

[Editor's Note: This is the second in a three-part series of guest articles from Coury Palermo. Over the next few months, he'll break down what it means to grind it out and write, record, release and promote a DIY album early in your musical career. Coury is a songwriter, producer and musician who is currently one-half of the duo love+war.]


Read “The Business of Making a Record (Part I)” here.

It’s time. The most exciting part of the process is here. You’re recording the material you’ve written or a collection of songs you feel best articulates where you are as a musician. You’ve spent countless hours arranging, tweaking, and rehearsing the material, and now you’re ready – or are you?

I will never forget my first real experience in the studio. I spent years working in the industry, trying to stumble upon an opportunity that would find me behind the glass – sketching out the ideas that would become my first “Masterpiece.” With each recording experience that followed, those delusions of grandeur never disappeared.

As artists, if we aren't aiming for greatness, what's the point? Many musicians think “completed material” equals good material – not necessarily. I've long believed that a good song is truly a good song if it stands on its own; if, when the bells and whistles are stripped away, the melody and lyric lose none of their magic.

Always go for great. If the songs are “there,” you’ve jumped the first hurdle as you begin the sometimes arduous, but always rewarding, journey of making a record.

Don’t forgo the magic to fit into the box.

There was once an industry standard for making a record – or more accurately “a folklore” attached to the process. As an independent, you would find a producer, pick a studio, and usually work with the engineer said studio provided. Though this practice still exists in some instances, the last ten or so years have brought about a very different school of thought.

We are no longer tethered to the “way it has to be done.” One of my favorite albums of the past decade, Early in the Morning, is a testament to the less conventional rulebook of recording.

Singer-songwriter James Vincent McMorrow recorded his debut in a small house off the Irish coast – completely alone. No engineer – no producer – no carefully sound-proofed vocal booth – just a microphone and a handful of instruments.

This “no-frills” approach to recording has been used to varying degrees of success on albums by artists such as Bon Iver, Eurythmics, Bruce Springsteen, and Peter Gabriel, just to name a few. Some of the most successful indie acts in recent years created most, if not all, of their widely blogged-about tracks in the comfort of their bedrooms.

I've recorded everywhere from famed Nashville favorite Ocean Way Studios to the top floor of an abandoned law office in Lincoln, Nebraska. Don't limit your excitement or creativity to the space. Though recording in a “major studio” was an experience I will never forget, it is not one of my favorite projects I've been a part of. Not because of the space (Ocean Way is a beautiful recording facility), but because of the environment the space created.

I remember being extremely stressed about budgets and time constraints while recording that album. This is never a recipe for success and can lead to a piece of work that is never fully realized.

Personally, I respond best to intimate spaces when recording. You don't have to record on an SSL console to produce a great album. You DO, however, need to align yourself with capable collaborators who understand your vision and believe in you as an artist.

Is this a safe place?

The recording studio can be one of the most intimidating spaces in the world. Make sure it’s a safe space to create. From the equipment to the engineers and producers at the helm of your creation, this environment will determine how and what you create. Choosing your team is one of the most important steps in the record making process.

In the event an elaborate, fully produced record seems overwhelming or is not in the cards right now, be creative. Compile your three best songs and strip them down. If the “bones” are great, you may find the extra layers unnecessary. Use this recording as a product or a tool to fund your fully realized creation. There is no end to the ways in which you can achieve your project goals – it simply takes a step out of the box.

Who’s in charge?

Producers are a key element of any project. They help in a wide array of areas. From honing each song to picking the right engineer, producers are involved in almost every aspect of making a record. I learned very early on that finding a collaborative “partner” is much more important than securing a producer with a long list of production credits. Don't let the insecurity of “this is my first time” stop you from going after your dream collaborator – they are an essential part of the equation.

A few years back, the band I was in began throwing around ideas for our first full-length album. We had recorded an EP the year before, and our manager gave us the simple task of putting together a list of producers we would like to work with on the new project.

Being the dreamer that I am, I listed Pierre Marchand of Sarah McLachlan fame as my number one pick. There was a part of me that wrote his name with a “you asked for it” smirk, never believing our manager would approach one of my heroes. The next thing I knew, I was on a plane to Montreal to meet Mr. Marchand and have what is still one of the most unforgettable experiences of my life.

Don't shortchange yourself with limitations. The greatest adventures I've had in this business have come from believing in possibility. Never be afraid to go after what you believe will make your creation its best. The road is long, my friends, but the end result is priceless.


In my final piece of this series, I’ll talk about what you can do after the songs have been recorded, the mix is complete and your masters are “in the can”. This is where the real work begins. Until next time!


love+war is the brainchild of writer-producer-guitarist team Coury Palermo & Ron Robinson. The two began working together in the fall of 2014 with no other intention than writing material for possible pitches in TV/film. Once the sessions began, the two realized the collaboration was destined for much more than their original hopes for commercial sync opportunities.

Grounded in the traditions of R&B, pop, and minimalist electronica, love+war turns the ear with their infectious blend of singer-songwriter soul. Check out their recent video for their cover of the Eurythmics' “Missionary Man”!

Get Out of the Garage with Converse, Guitar Center & TuneCore

Hey Indie Rockers!

Submit your music to Guitar Center and Converse’s “Get Out of the Garage” Contest for a chance to win a slew of awesome prizes (including free worldwide digital distribution from TuneCore)!


Here’s how it works:

Submit a recorded live performance or music video here. Once you submit, get your fans to watch and share your channel through social media—views and shares help boost you up in the contest rank. Five finalists will be hand-picked to perform live at Converse Rubber Tracks in Brooklyn where one winner will be chosen.

Click here for more details about how judging works.

And now to the good stuff, the prize package…

The grand prize winner will get:

  • A 3 Song EP produced by Dev Hynes at Converse Rubber Tracks in Brooklyn, NY.
  • $25,000 Cash
  • Live performance at The FADER FORT Presented by Converse in Austin
  • New gear from top music instrument brands – Fender, Shure, Martin, Ernie Ball, Evans, Pro-Mark, Dunlop, Gretsch, Zildjian, Vox
  • Free worldwide digital distribution from TuneCore
  • Feature on an AT: GC Podcast with Nic Harcourt.

Ready to enter your music and start spreading the word? Start here.

Good luck!