14 of the Most Commonly Confused Terms in Music and Audio

[Editor’s Note: This article was written by Brad Allen Williams and it originally appeared on the Flypaper Blog. Brad is a NYC-based guitarist, writer/composer, producer, and mixer.]

Once upon a time, remixing a song meant actually redoing the mix. Many vintage consoles (some Neve 80-series, for example) have a button labeled “remix” that changes a few functions on the desk to optimize it for mixing rather than recording.

But sometime in the late 20th century, the word “remix” began to take on a new meaning: creating a new arrangement of an existing song using parts of the original recording. Into the 21st century, it’s evolved again and is now sometimes used as a synonym for “cover.” The latter two definitions remain in common use, while the first has largely disappeared.

Language is constantly evolving, and musical terms are obviously no exception. In fact, in music, language seems to evolve particularly fast, most likely owing to lots of interdisciplinary collaboration and the rapid growth of DIY.

Ambiguous or unorthodox use of language has the potential to seriously impede communication between collaborators. To keep things clear, let’s break down standard usage of some of the most commonly conflated, misused, or misunderstood music-related terms.

GAIN / DISTORTION

Gain, as it’s used in music electronics, is defined by Merriam-Webster as, “An increase in amount, magnitude, or degree — a gain in efficiency,” or, “The increase (of voltage or signal intensity) caused by an amplifier; especially: the ratio of output over input.”

To put it in less formal terms, gain is just an increase in strength. If an amplifier makes a signal stronger, then it causes that signal to gain intensity. Gain is usually expressed as a ratio of output to input. If an amplifier makes a signal 10 times as strong, then that amplifier has a “gain of 10.”
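To make the ratio concrete, here’s a minimal sketch (the function name and voltages are purely illustrative) of converting a gain ratio into decibels, the unit engineers usually quote:

```python
import math

def gain_db(v_in, v_out):
    """Express a voltage gain (output over input) in decibels."""
    return 20 * math.log10(v_out / v_in)

# An amplifier with a "gain of 10" boosts 0.1 V to 1.0 V,
# which works out to 20 dB:
print(gain_db(0.1, 1.0))
```

A voltage gain of 10 is 20 dB; a gain of 2 is about 6 dB.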

On the other hand, harmonic distortion is that crunchy or fuzzy sound that occurs when an amplifier clips (as a result of its inability to handle the amount of signal thrown at it).

In the 1970s, some guitar amp manufacturers began employing extra gain stages in their designs to generate harmonic distortion on purpose. In other words, they’d amplify the signal, then amplify it again, and that second gain stage — having been given more than it could handle — would distort. These became known as “high-gain amplifiers.” Because of this, many guitarists just assumed that gain was synonymous with distortion. This was cemented when later amps like the Marshall JCM900 had knobs labeled “gain” that, by design, increased the amount of harmonic distortion when turned up!
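The behavior of that overdriven second stage can be sketched as a simple hard limiter (the names and numbers here are illustrative, not a model of any particular amp):

```python
def hard_clip(sample, limit=1.0):
    """Clamp a sample to the stage's maximum output swing."""
    return max(-limit, min(limit, sample))

# A first stage applies a gain of 10; the second stage can't swing
# past +/-1.0, so the biggest peaks get flattened off (distorted):
driven = [s * 10 for s in [0.2, -0.5, 0.05]]
print([hard_clip(s) for s in driven])  # [1.0, -1.0, 0.5]
```

The flattened peaks contain harmonics that weren’t in the input signal — that’s the “crunch.”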

Outside the realm of electric guitar, though, gain is still most typically used in a conventional way. When a recording engineer talks about “structuring gain,” for example, he or she is usually specifically trying to avoid harmonic distortion. It’s easy to see how this might cause confusion!

TONALITY / TONE

Not to pick on guitarists, but this is another one that trips us up. Tone has many music-related definitions, but the one of interest at the moment is (again, per Merriam-Webster), “Vocal or musical sound of a specific quality…musical sound with respect to timbre and manner of expression.”

On the other hand, the dictionary definition of tonality is:

1. Tonal quality.

2a. Key.

2b. The organization of all the tones and harmonies of a piece of music in relation to a tonic.

It’s important to note that “tonal quality” here refers to “the quality of being tonal,” or the quality of being in a particular key (in other words, not atonal). This is a different matter from “tone quality,” which is commonly understood to mean “timbre.” Most musicians with formal training understand tonality either as a synonym for key or as the quality of being in a key.

If you’re trying to sound fancy, it can be tempting to reach for words with more syllables, but using tonality as a synonym for timbre can be confusing. Imagine you’re recording two piano pieces — one utilizing 20th-century serial composition techniques and the other utilizing functional harmony. If you express concerns about the piano’s “tonality” while recording the second piece, the composer would probably think you were criticizing his or her work!

OVERDUB / PUNCH-IN

Most musicians in the modern era understand the difference between these two concepts, but they still occasionally confuse folks relatively new to the process of recording.

Overdubbing is adding an additional layer to an existing recording.

“Punching in” is replacing a portion of an already-recorded track with a new performance.

To do a “punch-in” (in order to fix a mistake, for example), the performer plays along with the old performance until, at the appropriate moment, the recordist presses record, thus recording over the mistake. The recordist can then “punch out” to preserve the remainder of the original performance once the correction is made.

GLISSANDO / PORTAMENTO

A portamento is a continuous, steady glide between two pitches without stopping at any point along the way.

A glissando is a glide between two pitches that stair-steps at each intermediate note along the way. A glissando amounts, in essence, to a really fast chromatic scale.

To play a glissando on guitar, you’d simply pluck a string and slide one finger up the fretboard. The frets would make distinct intermediate pitches, creating the stair-stepped effect. If you wished to play a portamento on guitar, you could either bend the string or slip a metal or glass slide over one of the fingers of your fretting hand.
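As a rough sketch in code (using MIDI note numbers, where each step is one fret or semitone; the function names are made up for illustration), the difference looks like this:

```python
def glissando(start, end):
    """Stair-stepped glide: hits every chromatic note along the way."""
    step = 1 if end >= start else -1
    return list(range(start, end + step, step))

def portamento(start, end, points=9):
    """Continuous glide: a smooth sweep with no discrete steps,
    shown here sampled at a few in-between instants."""
    return [start + (end - start) * i / (points - 1) for i in range(points)]

print(glissando(60, 64))   # [60, 61, 62, 63, 64] -- distinct frets
print(portamento(60, 64))  # fractional pitches between the frets
```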

VIBRATO / TREMOLO

While often used interchangeably in modern practice, vibrato and tremolo are actually distinct kinds of wiggle. In most cases, tremolo is amplitude modulation (varying the loudness of the signal), whereas vibrato is frequency modulation (varying the pitch of the signal).
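A minimal sketch of the two kinds of wiggle (the parameter values are arbitrary; `t` is time in seconds):

```python
import math

def tremolo(t, carrier=220.0, lfo=5.0, depth=0.5):
    """Amplitude modulation: the loudness wobbles, the pitch stays put."""
    amp = 1.0 - depth * (0.5 + 0.5 * math.sin(2 * math.pi * lfo * t))
    return amp * math.sin(2 * math.pi * carrier * t)

def vibrato(t, carrier=220.0, lfo=5.0, depth=3.0):
    """Frequency modulation: the pitch wobbles around the carrier."""
    phase = 2 * math.pi * carrier * t + depth * math.sin(2 * math.pi * lfo * t)
    return math.sin(phase)
```

In `tremolo` the low-frequency oscillator scales the output level; in `vibrato` it bends the phase, and therefore the instantaneous frequency.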

But over the past few hundred years, tremolo has commonly referred to many different performative actions. On string instruments, tremolo is used to refer to the rapid repetition of a single note, and in percussion, tremolo is often used to describe a roll. Singers use it for even crazier things, like a pulsing of the diaphragm while singing¹.

Leo Fender must’ve had his terms confused — he labeled the vibrato bridges on his guitars “synchronized tremolo,” and the tremolo circuits on his amps “vibrato.” Confusion has reigned ever since.

ANALOG / DIGITAL

Analog and digital are perhaps the most confused pair of words in the 21st-century musical lexicon. I once had a somewhat older musician tell me that my 1960s-era fuzz pedal and tape echo made my guitar sound “too digital” for his music. Likewise, countless younger musicians claim to prefer the “analog sound” of the original AKAI MPC (an early digital sampler) and the Yamaha DX-7 (an early digital FM synthesizer). But “analog” and “digital” are not simply stand-ins for “vintage” and “modern,” nor for “hardware” and “software.” They’re entirely different mechanisms for storing and generating sounds. Let’s learn a little more!

Merriam-Webster’s most relevant definition of analog is, “Of, relating to, or being a mechanism in which data is represented by continuously variable physical quantities.”

Also relevant is its first definition of analogue: “Something that is analogous or similar to something else.”

Now, how does this relate to music technology? It all goes back to humans’ longstanding search for a way to capture and store sound. Sound, on a basic scientific level, is nothing more than compression and rarefaction (decompression) of air that our ears can sense. Since air pressure fluctuations can’t really be stored, recording sound proved elusive for a long time.

20th-century scientists and engineers, however, brilliantly figured out that recording sound might be possible if they could accurately transfer that sound into something that could be preserved. They needed something storable that would represent the sound; an analogue to stand in for the sound that would allow it to be captured and kept.

First, they used mechanically generated squiggles on a wax cylinder as the analogue. Eventually, they figured out that they could use alternating-current electricity (which oscillates between positive and negative voltage), as an analogue of sound waves (which oscillate between positive and negative air pressure). From there, it was a relatively short leap to figuring out that they could, through electromagnetism, store that information as positively and negatively charged magnetic domains, which exist on magnetic tape.

This is analog recording!

Since electric voltage is continuously variable, any process — including synthesis — that represents air pressure fluctuations exclusively using alternating current electricity is analog, per Merriam-Webster’s first definition above.

Digital, on the other hand, is defined as, “Of, relating to, or using calculation by numerical methods or by discrete units,” and, “Of, relating to, or being data in the form of especially binary digits (‘digital images,’ ‘a digital readout’); especially: of, relating to, or employing digital communications signals (‘a digital broadcast’).”

That’s a little arcane, so let’s put it this way: Rather than relying directly on continuous analog voltages, a digital recorder or synthesizer computes numerical values that represent analog voltages at various slices of time, called samples. These will then be “decoded” into a smooth analog signal later in order to be accurately transferred back into actual air pressure variations at the speaker. If that’s a blur, don’t worry — you only need to understand that this is a fundamentally different process of storing or generating sound.
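Here’s a toy sketch of that sampling process (the sample rate is absurdly low so the numbers fit on one line; real digital audio uses 44,100 samples per second or more):

```python
import math

def sample(signal, seconds, rate=8):
    """A digital recorder measures the 'analog' signal at discrete instants."""
    return [signal(i / rate) for i in range(int(seconds * rate))]

# One cycle of a 1 Hz wave captured as a list of numbers -- eight slices of time:
wave = lambda t: math.sin(2 * math.pi * t)
print([round(v, 2) for v in sample(wave, 1.0)])
# [0.0, 0.71, 1.0, 0.71, 0.0, -0.71, -1.0, -0.71]
```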

Absent a real acquaintance with the technology of an individual piece of equipment or process, it’s probably safer to avoid leaping to conclusions about whether it’s analog or digital. For example, there are reel-to-reel magnetic tape machines (like the Sony PCM 3348 DASH) that don’t record analog voltage-based signal at all, but rather use the tape to store digital information (as simple ones and zeroes).

Since you can’t judge whether a piece of gear is analog or digital with your eyes, it’s probably best to only use these terms when you need to refer to the specific technologies as outlined above. In other words, next time you’re recording in a studio with a cool-looking piece of old gear, it’s probably safer to use #vintage instead of #analog to caption your in-studio Instagram photo!

PHASE / POLARITY

Phase is defined by Merriam-Webster as… (deep breath):

“The point or stage in a period of uniform circular motion, harmonic motion, or the periodic changes of any magnitude varying according to a simple harmonic law to which the rotation, oscillation, or variation has advanced from its standard position or assumed instant of starting.”

That’s a mouthful! This is a concept that’s easier understood with an example, so let’s imagine that you have a swinging pendulum:

If you were to freeze that pendulum at two different times, the dot at the end would be in two different locations. The pendulum’s swing occurs over time, so the location of the pendulum depends on when you stop it. We’d refer to the phase of the pendulum in order to describe this phenomenon and where the pendulum is in its cycle relative to time. And since it’s always moving in a continuous, smooth arc, there are an infinite number of possibilities!

Phase becomes potentially relevant for anything that’s oscillating or undulating — like the pendulum above or a sound wave.

Polarity, on the other hand, is defined as, “The particular state, either positive or negative, with reference to the two poles or to electrification.”

To put it in very simple terms, you’re dealing with polarity any time you install a battery. The battery has a positive terminal and a negative one. You have to make sure it’s installed the right way. While phase is infinitely variable, polarity has only two choices — it’s one or the other.

In our brief explanation of analog audio above, we mentioned that positive and negative swings of voltage are used to represent positive and negative changes in air pressure. If we switch polarity of a signal, we swap all the positive voltages for negative ones, and vice-versa. +1v becomes -1v, +0.5v becomes -0.5v, etc. This is usually accomplished with a button marked with the Greek letter theta or “Ø.”
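In code, a polarity flip is nothing more than a sign change on every sample — a sketch:

```python
def flip_polarity(samples):
    """Swap every positive voltage for a negative one, and vice versa."""
    return [-s for s in samples]

print(flip_polarity([1.0, 0.5, -0.25]))  # [-1.0, -0.5, 0.25]
```

Contrast this with a phase shift, which would slide the samples in time; a polarity flip only mirrors them about zero.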

Interestingly, if you have one signal alone, it’s usually the case that our ear can’t really tell the difference between positive or negative polarity. It’s when you combine two or more similar signals (like two microphones on one drum for instance) that a polarity flip of one or the other can have a dramatic influence on the sound.

Confusingly, this influence is a result of phase differences between the two sources, and switching polarity can often improve (or worsen!) the sound of two combined sources which are slightly out of phase. For this reason, the polarity switch is often called a “phase switch,” and depressing it is often colloquially referred to as “flipping phase.”

In the graphic below, you’ll see a brief, zoomed-in snapshot of two waveforms. A single bass performance was simultaneously recorded into both a direct box (blue) and through a mic on its amplifier (green).

In the first graphic, you can notice that the two are slightly out of phase. The blue direct-in wave swings negative ever so slightly before the green mic–on–amp one does. This is because the amp’s sound had to travel through the air briefly before being picked up by the microphone. Since sound in air travels much more slowly than electricity does, this creates a slight time delay or phase discrepancy.
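You can estimate the size of that delay from the mic distance (a back-of-the-envelope sketch; 343 m/s is a typical speed of sound in room-temperature air):

```python
SPEED_OF_SOUND = 343.0  # metres per second, roughly, in air at room temperature

def mic_delay_ms(distance_m):
    """Extra time the sound takes to reach a mic distance_m from the speaker."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# A mic 30 cm from the amp's speaker hears the signal almost 0.9 ms late:
print(round(mic_delay_ms(0.3), 2))  # 0.87
```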

In the second example below, I’ve flipped the polarity of the amp track. You can see that the time delay still exists, but now the amp track’s wave is inverted or “upside down.” As the DI track swings negative, the amp track swings positive.

In this case, the switch made the combined sound noticeably thinner, so I quickly flipped it back. Occasionally though, flipping polarity improves the combined sound of two sources which are slightly out of phase.

In practice, most recordists will understand what you mean if you say “flip the phase,” but should there happen to be a physicist in the room, you might get a raised eyebrow! Generally, though, this is a classic example of how unorthodox usage sometimes becomes accepted over time.

Which raises the point: any of the musical and audio terms above may eventually, like “remix” before them, evolve to incorporate new shades of meaning (or even have some earlier “correct” definitions fall into disuse). In the meantime, though, the more precise your grasp on the language of music, the less likely you are to misunderstand or be misunderstood.


¹ In performance, for both singers and many instrumentalists, pure tremolo is almost impossible to achieve without taking on some characteristics of vibrato — that is to say, it is very difficult to play or sing a passage with variations in volume alone and no variation in pitch (or vice versa).

Studio Spotlight: Lakehouse Recording Studios Contribute to the Lasting Legacy of Asbury Park’s Music Scene

Continuing our monthly look at awesome recording studios – from the scenes they serve to the atmosphere they cultivate for independent artists – we find ourselves in the seaside town of Asbury Park, New Jersey. Known for legends like the bandleader/trombonist Arthur Pryor and rock idol Bruce Springsteen, on top of some notable music venues, the Jersey Shore city has a proud history of celebrating its musical roots.

A few years back, musician and career engineer Jon Leidersdorff opened Lakehouse Recording Studios. Feeling the need to expand his offerings, he designed and built Lakehouse in a building that also features the reputable Russo Music store, as well as Lakehouse Music Academy, a music school for students of all ages and levels. It only makes sense that this complex features a state-of-the-art, two-studio recording facility, right?

We talked to Jon about getting the studio up and running, what sets it apart from the rest, and what it means to be providing recording solutions to the musicians of his hometown:

Tell me about how you made the transition from home studio to opening up Lakehouse. What kind of projects had you been working on leading up to that point?

Jon Leidersdorff: I was recording and developing local artists that started to see some success and working with newer bands that I met through the industry. Some of my producer friends also were bringing artists in to work there. And from that, the studio and I got very busy. I realized that I wasn’t going to be able to be involved with more projects if I didn’t have a larger commercial space.

What makes the design and layout (of the studio, specifically), unique and what can artists look forward to getting out of it in a session?

For the new recording studios, I wanted to have everything I was previously missing. I wanted a space where every musician in the group could see each other and set up the rig of their dreams to record with simultaneously. I wanted everyone to have the sound that they wanted to hear and to be able to play together. I wanted more of a live performance for tracking.

I really missed hearing the magic of when the entire group plays together. The whole group playing at the same time really pushes each musician individually and has a huge impact on the composition. I also wanted it to sound amazing in the space.

We hired WSDG. John Storyk has done this thousands of times before, and I realized that there would be no substitute for that type of experience. His rooms sound great. One thing that I hear often from the producers and artists that come through our studios is that they love the feeling in the space, and how we have so much of the gear that they never get to play or that they just see as virtual instruments or plug-ins. We spend a lot of time and energy making sure that we have a lot of unique and vintage instruments that the musicians can use to feel more creative.

Outside of just the studio, elaborate a bit on the overall complex that Lakehouse is situated in and its significance to the neighborhood.

We are located in the downtown of Asbury Park, New Jersey. The city has an amazing musical heritage: early days with Arthur Pryor and the John Philip Sousa big bands, the west side jazz scene of the 1930s and ’40s, the Jersey Shore rock scene of the 1970s and ’80s, and the amazing punk scene at the Lanes in recent years. People believe in music here. They trust it, they support it, they live it. You can see it everywhere. It’s a great place to be when you come to record. There are great art galleries, restaurants and atmosphere, live music venues and of course there’s the boardwalk and the Atlantic Ocean. It’s a great backdrop to ignite the creative flow.

In our building we have Russo Music, the largest and coolest independent music store in NJ. They have the best equipment and do repairs and set ups on site. It’s really helpful for the musicians that are recording here. There is an amazing music academy with very progressive programming. Most of the teachers have really cool gigs and credits.

Monmouth University has their music industry program and record label here as well. They bring really great guests here.

We also have our own small DIY venue. It’s the home for the Asbury Park Music Foundation. They have a killer PA in there and anyone coming through town can book their own show. I’ve seen a lot of great acts there. They are a nonprofit that do tremendous work for the community here.

There is a great photographer and videographer, Andrew Holtz. Upstairs is Bands on a Budget, who do merchandise for so many different artists. There’s CoWerks, a great shared office space.

There are also some great, well-known producers who will have their own mix rooms on the premises. It really creates a great community having so many different creative people in the same space.

What inspired you to start Lakehouse Music Academy? What was the reaction from residents?

The idea for the Music Academy really came from need. So many of the artists that I was working with really needed support. They needed experts around them and educators who could help them to accomplish their goals. Having relevant mentors opens up so many possibilities. There are really great programs at the Academy that help the students directly and specifically with their aims.

We are fortunate to be in an area where so much of the music industry lives and plays. We have some of the biggest artists and music industry professionals teach at our Academy. The community has been the best supporters. We have a huge student body now in just a few years.

Have you been able to establish a sort of ‘path’ between the academy and the studio?  

We have set up programming that helps young musicians develop into songwriters and artists. There are programs that teach songwriting, audio engineering and connect the students to the music industry. They even have their own record label.

Between watching students come in the doors to the academy, bands through the studio, and everything in between, what makes you excited about Asbury Park’s music scene?

It’s a very exciting time to be in Asbury Park. The music scene is really turning into a ‘music community’. There is so much going on and there are many great collaborations happening everywhere.

It makes you feel good to see these artists helping each other and taking it to the next level.

What inside advice would you give to independent artists who are getting ready to step into a professional recording studio for the first time?

Ask yourself what you want to accomplish. What do you want to sound like? It’s important to find a studio and someone who understands what you’re trying to get done.

Studio Spotlight: Degraw Sound’s Ben Rice On the Brooklyn Recording Landscape & Degraw Fest

Creating, releasing, and promoting your music as an independent artist requires a lot of moving parts and team members. For artists who are at the stage in their career when they’ve moved out of the home studio and are ready to dedicate some of their budget to sessions with an engineer, there’s plenty to take into consideration.

That’s why we’re opening up the floor to highlight some recording studios in our backyard of New York City and beyond each month on the TuneCore Blog! Studio owners and engineers work with indie artists who use TuneCore for distribution and more every day, so it only makes sense for us to give them a platform to talk about the cool stuff happening in the control room.

To kick it off, we chatted with Ben Rice, owner of Degraw Sound located in the Gowanus neighborhood of Brooklyn. Ben’s been active in the music scene since he was in his teens, and the studio is in its fifth year of existence. Next weekend, on June 3rd, Ben and his cohorts are throwing the inaugural “Degraw Fest” – a mini full-day music festival taking place at Littlefield just down the road from the studio, filled with bands, beer, and food.

Learn more about what makes Degraw Sound special, and if you’re an NYC-based TuneCore Artist, make sure to check out Degraw Fest and say hello!

First, give us a little bit of your background as a musician/producer in New York City.

Ben Rice, Owner: I’m originally from Brooklyn — like I actually grew up here. My family lived in Park Slope in the late 80’s and early 90’s and then we moved out to Ditmas Park. Growing up I was obsessed with baseball and music and as luck would have it, music won out!

The first job I ever got when I was a teenager was working in a recording studio. It was this place called Clinton Recording Studios, which was one of the last major facilities in New York. I did all the fun jobs like cleaning the bathroom and making coffee, and that was my introduction to studio life. I loved the feeling of being in the studio, going in and turning on the lights in the morning and watching their huge live room light up. The old console and racks of gear fascinated me. I would work there during the days over the summer or after school and then go home and play with my Tascam 4-track and make demos of the songs I was writing.

I played in bands and toured and got to experience the early 2000’s music scene here in NYC, which was really incredible. During that time I started producing records for other bands on the scene, and at a certain point I decided that I wanted to focus solely on producing and went all in on building a studio.

If I may say, it’s a beautiful studio. What went into its design and how did you keep artists in mind during its construction?

Thanks Kevin, I appreciate that man. When I set out to build Degraw Sound I wanted to create a space that artists would feel inspired and comfortable in. I wanted it to feel warm and inviting — kind of like an extension of the whisky bars that me and my friends liked to hang out at, almost like there was a secret back door that would lead you into another room that somehow magically was full of sick gear.

I met with a few different studio designers and through a couple different friends I got connected with a guy named Dave Ellis who had built some beautiful spaces around Brooklyn. When I met Dave it was instantly clear to me that he understood my vision for the studio and he just seemed like a cool guy — he had a sick car and liked a lot of the same music that I did. I put a lot of trust in him to take my idea and turn it into a reality. In a lot of ways I think of him as the studio’s “producer”, meaning he had the experience, skills and tools to turn my idea into something tangible.

How do you feel that Degraw Sound contributes to the Brooklyn/NYC musical landscape? In what ways do you collaborate or connect with artists outside of production and engineering?

I think after five years of making music here we’re starting to feel that we’ve become part of the city’s musical landscape, which is a really cool feeling. Growing up in New York you learn about all the different studios in the city and to have Degraw get to the point it has where it’s become part of that conversation and musicians think of us as a place to come make records is pretty special.

The artists that we work with here have become like family. When you spend countless hours in-studio with someone collaborating on a creative project you wind up getting pretty close with them.

One of the things I appreciate most about producing records is you get to be a part of significant moments in other peoples’ lives. That often extends beyond the studio; for instance, I just got back from Austin, Texas from my buddy Will’s wedding. He’s in a band Elliot & the Ghost and we’ve made some awesome records together at Degraw Sound.

Tell us more about how you came to organize Degraw Fest, and what are you looking forward to most about it?

A couple months ago, Harper and I were breaking down gear after a session and somehow we wound up riffing on the idea of putting together a show with a few of the artists that we were working with. When we’re brainstorming, the ideas can grow pretty quickly, and before we knew it the idea had evolved into a full-day mini music festival!

The timing just felt right to do something like this. This month is our fifth anniversary so it seemed like a fun way to get everyone together that has been a part of building Degraw and putting it on the map. Now that all the heavy lifting and planning is done I’m just looking forward to hanging with everyone. When I think about the perfect early summer Saturday it involves good friends, music, beer and food – and I think we’ve got all those boxes checked! (Ed. note – buy tickets for that here!)

Given that as a business owner you’re always looking to foster a community with your neighbors, do you feel Degraw Fest will help enhance those efforts?

Oh yeah definitely. Julie and Scott over at Littlefield (where we’re hosting Degraw Fest) have always been great to us. They were super welcoming when we moved into the neighborhood and we’ve built a great relationship with them over the years. They’ve been here for a decade now and are such a big part of the scene and community that is growing here in Gowanus so we’re really pumped to be working with them on this!

Everyone that we’ve talked to about Degraw Fest has loved the idea. Marshall and Eric who own Braven Brewing in Bushwick jumped on board to help sponsor the festival. Cheech A Cini’s, a local Italian food truck and Yankees fans, are going to be joining us, too!

A lot of the artists already know each other from seeing each other around the studio or meeting at some of the other parties that we’ve thrown, so I think getting everyone together is going to feel like a really fun family reunion.

How do you recommend that your fellow studio owners/engineers take steps to connect with artists in a similar fashion as you have with Degraw Fest?

For me it’s really about having fun and doing things that you’re pumped about. I have a ton of respect for all the studio owners in this city. It’s a tough business and we all put in long hours, so anytime there’s an opportunity to do something like this that’s a little different and can help the artists that you make music with I think you have to jump on it.

What do you think makes Degraw Sound unique in terms of how studios in New York operate?

To me the thing that makes Degraw Sound unique is the people who work here. Gian, Harper and myself… we’re a bunch of weirdos who love making records and are obsessed with every aspect of it.

I think that we’re bridging the gap between commercial studios and independent producers. We can each function independently as producers and collectively as a team. We have a really beautiful and well-built professional recording studio here that is flexible and can accommodate whatever type of project people bring to us.

What we’ve found over the past few years is that the majority of the projects that we’re working on are those where the artist will hire one of us, or a couple of us, to produce their record and help them take the project from start to finish. This just seems to work out best because it allows us to really invest ourselves in the records that we’re making and help artists create music that’s authentic and realize their vision and potential.

If you HAD to choose, what’s your favorite piece of gear or recording equipment that Degraw Sound boasts?

Oh man, that’s a tough one… I mean I have my “desert island” list of toys… I love our Trident console, it’s a great British desk and it’s super fun to work on. I’d box that up and put it on a boat and take it with me. My rack of 1176 compressors and Pultecs has become a staple. I have a couple Jazzmasters that I’ll never get rid of, and we just got a Mellotron which is probably the coolest instrument ever!

What inside advice would you give to independent artists who are getting ready to step into a professional recording studio for the first time?

Find a producer that you dig and who loves your music and let them help you. No matter what stage of your career you’re at I think this is key.

Whether you grew up listening to The Beatles or Michael Jackson one of the key ingredients to those records is that there was someone who helped foster the artists’ creativity and develop those sounds.

Music Streaming Platforms & Mastering – 3 Guiding Concepts

[Editors Note: This blog was written by Alex Sterling, an audio engineer and music producer based in New York City. He runs a commercial studio in Manhattan called Precision Sound where he provides recording, mixing, and mastering services.]

Background:

As an audio engineer and music producer, I am constantly striving to help my clients’ music sound the best that it can for as many listeners as possible. With music streaming services like Apple Music/iTunes Radio, Spotify, Tidal, and YouTube continuing to dominate how people consume music, making sure that the listener is getting the best possible sonic experience from these platforms is very important.

Over the last several years, a new technology called Loudness Normalization has been developed and integrated into the streaming services’ playback systems.

Loudness Normalization is the automatic process of adjusting the perceived loudness of all the songs on a service so that they sound approximately the same as you listen from track to track.

The idea is that the listener should not have to adjust the volume control on their playback system from song to song, making the listening experience more consistent. This is generally a good and useful thing, and it can save you from damaging your ears if a loud song comes on right after a quiet one while you have the volume way up.

The playback system within each streaming service has an algorithm that measures the perceived loudness of your music and adjusts its level to match a loudness target the service has established. By adjusting all the songs on the service to match this target, the overall loudness experience is made more consistent as people jump between songs and artists in playlists or while browsing.

If your song is louder than the target, it gets turned down to match; if it is softer, it is sometimes made louder with peak limiting, depending on the service (currently Spotify only).
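To make that adjustment concrete, here is a short illustrative Python sketch of the logic described above. The function name and simplified behavior are mine, not any service’s actual (proprietary) algorithm:

```python
# Illustrative sketch of streaming loudness normalization, simplified
# from the article. Real services use proprietary algorithms.

def playback_gain_db(measured_lufs, target_lufs, turns_up=False):
    """Return the gain (in dB) a service would apply to a track."""
    offset = target_lufs - measured_lufs
    if offset < 0:
        return offset  # louder than target: always turned down
    # Quieter than target: only some services (e.g. Spotify) add gain,
    # and they do so with peak limiting.
    return offset if turns_up else 0.0

# A -9 LUFS master on a service with a -14 LUFS target is turned down 5 dB:
print(playback_gain_db(-9, -14))                  # -5
# A -18 LUFS master is left alone unless the service gains content up:
print(playback_gain_db(-18, -14))                 # 0.0
print(playback_gain_db(-18, -14, turns_up=True))  # 4
```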

So how do we use this knowledge to make our music sound better?

The simple answer is that we want to master our music taking into account the loudness standards being used to normalize it when streaming, and to prepare a master that generally complies with them.

Concept 1: Master for sound quality, not maximum loudness.

If possible, work with a professional mastering engineer who understands how to balance loudness issues along with the traditional mastering goals of tonal balance, final polish, and so on.

If you’re mastering your own music then try to keep this in mind while you work:

Don’t pursue absolute loudness maximization; instead, pursue conscious loudness targeting.

If we master our music to be as loud as possible and use a lot of peak limiting to push the loudness level very high, then we are most likely sacrificing some dynamic range, transient punch, and impact to make our music sound loud.

The mechanism of loudness maximization intentionally reduces the dynamic range of our music so the average level can be made higher. There are benefits to this, such as increasing the weight and density of a mix, but there are also negatives, such as a loss of punch and an increase in distortion. It’s a fine line to walk between loud enough and too loud.
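To see the trade-off in numbers, here is a toy Python illustration (hard clipping, not a real limiter) of how pushing level into a ceiling raises the average while flattening the peaks:

```python
import math

# Toy illustration: "maximizing" loudness shrinks the gap between a
# signal's peaks and its average (RMS) level, i.e. its crest factor.

def crest_factor_db(signal):
    peak = max(abs(s) for s in signal)
    rms = math.sqrt(sum(s * s for s in signal) / len(signal))
    return 20 * math.log10(peak / rms)

# A quiet sine with a couple of loud transient peaks:
signal = [0.5 * math.sin(2 * math.pi * i / 64) for i in range(640)]
signal[100] = signal[300] = 1.0  # transients

# "Maximize": gain up 6 dB, then hard-clip at the old ceiling.
gained = [min(max(s * 2.0, -1.0), 1.0) for s in signal]

print(round(crest_factor_db(signal), 1))  # larger: transients intact
print(round(crest_factor_db(gained), 1))  # smaller: transients flattened
```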

Here is where loudness normalization comes in:

If our song is mastered louder than the streaming target loudness level, then our song will be gained down by the service as a result. If you are mastering louder than the target level, you are throwing away potential dynamic range and punch for no benefit, and your song will sound smaller, less punchy, and more dynamically constrained compared to a song that was mastered more conservatively with regard to loudness.

If we master softer than the target level, then in some cases (Spotify) the streaming service actually adds gain and peak limiting to bring up the level. This is potentially sonically adverse because we don’t know what that limiting process will do to our music. Will it sound good or not? It will most likely create some loss of punch, but how much is lost depends on the content that was put in.

Some music is more sensitive to this limiting process than others. High-dynamic-range jazz or classical music with pristine acoustic instruments might be more sonically damaged than a rock song with distorted guitars, for example, so the result is not entirely predictable from the loudness measurement alone; it also depends on musical style.

Thankfully, the main platforms other than Spotify don’t add gain and peak limiting as of this writing, so they are less potentially destructive to sound quality for below-target content.

Concept 2: Measure loudness using a LUFS/LKFS meter.

The different streaming services have different loudness standards and different algorithms for taking measurements and applying the normalization, but for the most part they use the same basic unit of loudness measurement, called LUFS or LKFS. This metering system allows engineers to meter numerically how loud content is and adjust the dynamic range accordingly.

Understanding how our masters meter on this scale lets us see what will happen when they are streamed on different services (i.e., will the algorithm gain them up or down to meet the target, or not?).
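As a rough illustration of reading a level and comparing it to a target, here is a plain-RMS meter in Python. Note this is a crude stand-in: real LUFS/LKFS metering (ITU-R BS.1770) adds K-weighting filters and gating, so use a proper loudness meter or plugin for actual mastering decisions.

```python
import math

# Crude stand-in for a loudness meter: plain RMS level in dBFS.
# Real LUFS/LKFS metering adds K-weighting and gating; this only
# illustrates the idea of reading a level against a streaming target.

def rms_dbfs(samples):
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# Full-scale sine: its RMS sits about 3 dB below its peak.
sine = [math.sin(2 * math.pi * i / 100) for i in range(1000)]
level = rms_dbfs(sine)
print(round(level, 1))  # -3.0

# Compare against a hypothetical -14 target: positive means "too loud".
print(round(level - (-14), 1))
```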

Concept 3: Choose which loudness standard to master to.

If you are working with a mastering engineer, direct them to master to a target loudness level, and consult with them about what they feel is an appropriate target for your music. If you are mastering jazz or classical music, you probably don’t want a very loud master, for sound quality and dynamic range reasons; but if you are making a heavy rock, pop, or hip hop master that wants to be more intense, then a louder target may be more suitable.

iTunes Sound Check and Apple Music/iTunes Radio use a target level of -16 LUFS, which would be a suitable target for more dynamic material.

Tidal uses a target level of -14 LUFS, a nice middle ground for most music that wants to be somewhat dynamic.

YouTube uses a target level of -13 LUFS, a tiny bit less dynamic than Tidal.

Spotify uses a loudness target of -11 LUFS, which, as you can see, is 5 dB louder than iTunes/Apple Music. This is more in the territory of low-dynamic-range, heavily limited content.
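Plugging in the targets quoted above, a quick sketch shows how much each service would turn down masters of different loudness (this models only the turn-down; Spotify’s turn-up with limiting is deliberately ignored here):

```python
# Targets as quoted in this article; services adjust these over time.
targets = {"Apple": -16, "Tidal": -14, "YouTube": -13, "Spotify": -11}

for master_lufs in (-8, -11, -14):
    # min(..., 0) models turn-down only; Spotify would also gain *up*
    # below-target masters (with peak limiting), ignored here.
    adjustments = {name: min(t - master_lufs, 0)
                   for name, t in targets.items()}
    print(master_lufs, adjustments)
```

A -8 LUFS master gets turned down on every platform, which illustrates that mastering louder than the loudest target buys nothing.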

Somewhere between -16 LUFS and -11 LUFS might be the best target loudness for your music, based on your desired dynamic range, but the goal is not to go above the chosen target; otherwise your content gets gained down on playback and dynamic range is lost.

In all services except Spotify, content that measures lower than the target loudness is not gained up. So for people working with very dynamic classical music or film soundtracks, those big dynamic movements will not be lost on most streaming platforms.

However, since Spotify is unique in adding gain and peak limiting when your content is below target, it is potentially the most destructive sonically. So should you master to -11 LUFS and save your music from Spotify’s peak limiting, but lose dynamic range on the other platforms? It’s a compromise you have to decide for yourself, in consultation with your mastering engineer.

You might want to test what -11 LUFS sounds like in the studio and hear the effect of that limiting. Is it better to master that loud yourself and compensate in other ways for the lost punch and reduced dynamic range? Or should you accept that Spotify users get a different dynamic range than iTunes users, and let your music be more dynamic on the rest of the platforms?

In all cases, there is no benefit to going above -11 LUFS, because that is the loudest target level used by any service. If you go louder than -11 LUFS, your music will be turned down on every service, and dynamic range and punch will be lost needlessly and permanently.

Further Reading:

Great infographic on the different streaming loudness targets.

More info on LUFS/LKFS metering.

How To Set Up a Home Recording Studio: The Complete Guide

[Editors Note: This is a guest blog written by Jason Moss. Jason is an LA-based mixer, producer and engineer. His clients include Sabrina Carpenter, Madilyn Bailey, GIVERS and Dylan Owen. Check out his mixing tips at Behind The Speakers.]

Setting up a home recording studio can be overwhelming.

How do you know what equipment to buy? Which software is best? How can you make sure everything will work together?

Take a breath. This guide will walk you through the process, step by step. It contains everything you need to know, including equipment recommendations. Make your way to the bottom of this page, and you’ll have your home recording studio up and running in no time. This way, you can get on to the good stuff—making great recordings!

Table Of Contents:

Chapter 1: How To Find The Ultimate Home Studio Computer

Chapter 2: How To Choose The Ideal Audio Interface

Chapter 3: How To Find A Mic That Makes You Sound Radio-Ready

Chapter 4: How To Choose Studio Monitors That Supercharge Your Tracks

Chapter 5: How To Pick The Perfect Pair Of Headphones

Chapter 6: How To Find A DAW That Makes Recording Easy

Chapter 7: The Extra Stuff Most People Forget

Chapter 8: How To Set Up Your Room For Studio-Quality Sound

How To Find The Ultimate Home Studio Computer


Your computer is the command center of your home recording studio. It’s the brains and brawn behind the entire operation.

This is one area where you don’t want to skimp.

Recording will place high demands on your computer, and you’ll need a machine that can keep up. If you plan on tackling projects with lots of tracks or producing electronic music, this is even more important. The last thing you want is your computer to slow you down. There’s no faster way to kill a moment of musical inspiration.

Laptop Or Desktop?

Laptop and desktop computers

If you absolutely need to record on the go, a laptop may be your only choice. But be prepared to pay more and walk away with a less capable machine.

Go for a desktop whenever possible. Dollar for dollar, they’re faster, more powerful, and offer more storage. They also last longer and fail less, because their internal components don’t overheat as easily. And since a desktop doesn’t sit in front of your face, the noise from its fans will be less of an issue. (Microphones are super sensitive, so a noisy room will lead to noisy recordings. I worked on a laptop for years, and fan noise was a constant problem.)

PC Or Mac?

While my first computers were PCs, I’m now a Mac guy through and through. Macs crash less. They’re also the computer of choice for music-makers (you’ll find them in most home recording studios). Because of this, updates and bug fixes for recording software will often be released for Mac users first.

With that being said, most recording software and hardware are compatible with both platforms. Macs are also more expensive, so this may influence your decision. If you’re more comfortable using a PC, you can make it work. Just make sure your audio interface and software are compatible with whatever you choose.

4 Computer Specs That Really Matter

When you’re trying to find the right computer for your home recording studio, it’s easy to get lost in techno-speak. The following 4 specs are what count. Hit the guidelines below, and your computer will handle nearly any recording session with ease.

CPU (Clock Speed & Number Of Cores)

CPU

If a computer were a car, the CPU would be its engine. Clock speed is like the number of cylinders an engine has: the higher the number, the faster the CPU. A fast CPU will handle large recording sessions gracefully.

If the CPU has multiple cores, this is even better. Multiple cores will allow it to multitask more effectively.

It can be difficult to compare CPUs (especially those with different numbers of cores). To make this easier, you can use sites like CPUBoss or CPU Benchmark.

Recommendations:

  • Good: 2.6 GHz dual-core
  • Better: 2.8 GHz dual-core
  • Best: 3+ GHz quad-core

RAM

RAM is your computer’s short-term memory. More RAM will make your computer run faster, particularly when working with large, complex projects.

Recommendations:

  • Good: 8 GB
  • Better: 12 GB
  • Best: 16+ GB

Hard Drive (Space & Type)

Hard drive

A computer’s hard drive is its long-term memory. This is where your recordings will be stored. Recorded audio takes up lots of space, so you’ll want plenty to spare. If you end up filling your hard drive, you can always buy an external one. However, it’s always better to start with more space.

But when it comes to hard drives, space isn’t all that matters. In fact, speed is even more important.

The best hard drives are solid-state. While they typically offer less storage space, they’re worth every penny. Solid-state drives use flash memory (the same technology you’ll find in a USB thumb drive) and have no moving parts. They’re much faster than their mechanical predecessors. If your computer has a solid-state drive, it will be much snappier when playing back and recording projects with large track counts.

If you can’t avoid a mechanical drive, opt for one that spins at 7,200 RPM. It will deliver data about 33% faster than a 5,400 RPM drive. This really matters if you plan on tackling projects with 30+ tracks.
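For the curious, that rough "33% faster" figure comes straight from the ratio of rotational speeds, all else being equal:

```python
# 7,200 RPM vs 5,400 RPM: the faster platter passes ~33% more data
# under the read head per second, all else being equal.
speedup = 7200 / 5400 - 1
print(f"{speedup:.0%}")  # 33%
```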

Recommendations:

  • Good: 500 GB 7,200 RPM mechanical drive
  • Better: 1 TB 7,200 RPM mechanical drive
  • Best: 500+ GB solid-state drive

Ports

Your audio interface (see below) will connect to your computer using USB, Thunderbolt, or FireWire. Make sure there’s a port available on your computer for it. If you plan on using a MIDI keyboard or other accessories, make sure you’ve got enough free ports to accommodate them too.

Computer Recommendations

Best Bang For Your Buck: Mac Mini

The Mac Mini is seriously underrated. This is what I use in my home recording studio, and it’s more than enough. Opt for a solid-state drive and maxed-out memory for even more power. And don’t forget—you’ll need a keyboard, mouse, and monitor too.

For Mobile Music-Makers: MacBook Pro

If you need to be mobile, the MacBook Pro is a great choice. Just be prepared for fan noise.

For Those Who Want The Best: Mac Pro

It isn’t cheap, but you’ll find the Mac Pro in most professional recording studios. Even the baseline unit is more than enough.

Additional Resources

Back To Table Of Contents

How To Choose The Ideal Audio Interface


Focusrite audio interface

Your audio interface is the heart of your home recording studio. While it may look intimidating, it’s nothing more than a fancy routing box. This is where you’ll plug in microphones, speakers, and headphones. It’s also where the signal from your microphones gets converted into ones and zeros, so your computer can make use of it.

Interfaces vary widely in features. Some have knobs to adjust the volume of your speakers and microphones. Others accomplish this through a software control panel. However, all great interfaces are transparent—they don’t add any noise or distortion to the sound. This is where high-end interfaces often differ from cheaper ones.

Here are some things to keep in mind when choosing an interface:

Number Of Mic Preamps

The more preamps, the more microphones you can record at once. If you’re only recording vocals, one may be all you need. To record instruments with multiple mics (such as acoustic guitar in stereo), you’ll need at least 2. To record drums or people playing together, go for 4 or more.

Quality Of Mic Preamps

When it comes to mic preamps, people get distracted by quantity. They think more is better, so they buy cheap interfaces with 8 preamps.

This is a rookie mistake.

Cheap preamps will add noise and distortion to your recordings. This will become a permanent part of your tracks, and it can add a harsh, brittle quality to your music.

Quality is more important than quantity. Avoid cheap interfaces with 8 preamps. Instead, go for an interface with 4 or 2. You’ll walk away with a higher-quality interface, often at the same price.

1/4″ Input

Bass guitar

With a 1/4″ input, you can record electric guitar or bass without an amp. You can then use software to shape the tone. This isn’t an essential feature, but it’s handy (especially if you’re a guitarist or bassist).

Pro Tip: If your interface doesn’t have a 1/4″ input, a direct box will do the same thing.

Speaker Outputs

Make sure your interface has the same type of outputs your speakers use (either XLR, 1/4″, or RCA). If there’s a mismatch, you’ll have to use an adapter or special cable to connect them. While this isn’t a huge deal, it’s best avoided.

Headphone Jack

With a headphone jack, you’ll be able to plug in a pair of headphones and listen back while recording. This is an essential feature, and almost all interfaces have one.

Pro Tip: Most interfaces have a 1/4″ headphone jack. This is larger than the 1/8″ plug on most consumer headphones. To use consumer headphones with your interface, you’ll need an 1/8″ to 1/4″ adapter.

Compatibility

Most interfaces will connect to your computer using USB, FireWire, or Thunderbolt. Make sure your computer has a free port of that type available.

You’ll also want to make sure your interface is compatible with your recording software. You can find this information on the interface manufacturer’s website.

Interface Recommendations

1 Mic Preamp

2 Mic Preamps

4 Mic Preamps

8 Mic Preamps

Additional Resources

Back To Table Of Contents

How To Find A Mic That Makes You Sound Radio-Ready


Microphone with pop filter

Microphones are the ears of your home recording studio. They convert sound into electricity (which gets sent to your interface).

If you’re a guitarist, you know that every guitar sounds different. You might reach for a Tele over a Strat, depending on the part you’re playing. Microphones work the same way. One might sound better than another in a specific situation. But if you’re starting out, you don’t need a dozen mics to cover your bases…

This Type Of Mic Will Always Get The Job Done

There’s one type of microphone that sounds great on just about anything (including vocals).

It’s called a large-diaphragm, cardioid condenser.

If you’re only going to get one for your home recording studio, this should be it. Here’s why:

  • Large diaphragm: The diaphragm is the part of the mic that picks up sound. A large diaphragm makes the mic better at picking up low frequencies (like the body and warmth of your voice). This means it will faithfully capture the full tonal range of sounds.
  • Cardioid: This is the microphone’s polar pattern. It dictates what the mic will pick up, and more importantly, what it won’t. A cardioid mic will pick up what’s in front of it, but almost nothing to the sides or behind it. You can use this feature to reduce the level of unwanted noise in your recordings (like air conditioning rumble, noisy neighbors, or chirping birds). Just position the back of the mic towards the source of the noise!
  • Condenser: Refers to the technology the mic uses to capture sound. Condenser mics do a better job at picking up high frequencies (like the sizzle of cymbals or the crispness of a voice) than any other type of mic.
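If you’re curious about the math behind the cardioid pattern described above, the idealized textbook curve has a simple closed form. This Python sketch is illustrative only; real microphones deviate from it, especially at very high and very low frequencies:

```python
import math

# Idealized cardioid pickup pattern: sensitivity = (1 + cos(angle)) / 2,
# where 0 degrees is directly in front of the mic.

def cardioid_sensitivity(angle_degrees):
    return (1 + math.cos(math.radians(angle_degrees))) / 2

print(cardioid_sensitivity(0))              # 1.0 (full pickup in front)
print(round(cardioid_sensitivity(90), 2))   # 0.5 (half from the sides)
print(round(cardioid_sensitivity(180), 2))  # 0.0 (rejection at the rear)
```

That rear null at 180 degrees is exactly why pointing the back of the mic at a noise source works.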

What About USB Mics?

Avoid them. While you won’t need an interface to use one, they are of lower quality than most traditional mics. They also aren’t future-proof; if USB ports become obsolete, you’ll need to buy a new mic.

Recommendations For Large-Diaphragm Cardioid Condenser Mics

Under $100

Under $250

Under $500

An Electric Guitarist’s Dream Mic For Under $100

If you plan on recording lots of guitar amps, you may want to invest in an additional microphone.

Why?

Because condenser mics don’t sound that great on amps.

But don’t worry—there’s a go-to mic that’s used to record guitar amps in multi-million dollar studios every day. And it costs less than $100.

Which one is it?

The Shure SM57.

Shure SM57

If you’re just getting started, this isn’t a necessity. But if you’ll be recording lots of guitar amps, you may want to consider it.

(You can use the SM57 to record other things too, but it shines on guitar amps!)

Additional Resources

Back To Table Of Contents

How To Choose Studio Monitors That Supercharge Your Tracks


Studio monitor and LCD screen

Studio monitors are speakers designed for use in home recording studios. You’ll need these to play back and mix your recordings.

These are different from the speakers you might buy for your living room. Whereas consumer speakers often flatter and enhance the sound, studio monitors are neutral and uncolored. They won’t sound as pretty as typical speakers—in fact, they may even sound dull.

Listen on speakers like these, and you’ll hear what’s really going on in your music. Great studio monitors will force you to work harder to craft a mix that sounds good. This will lead to tracks that sound great on a variety of different speakers, not just ones that sweeten or hype up the sound.

Can’t I Just Use Headphones?

Headphones are notoriously difficult to mix on, and tracks mixed on headphones often don’t hold up on speakers. (There are, however, other uses for headphones. You’ll learn more about this below.) If you’re doing basic voiceover work, you may be able to forgo studio monitors. But if you’re recording music, it’s crucial to invest in them.

4 Studio Monitor Specs That Really Matter

When choosing studio monitors for your home recording studio, it’s easy to get distracted by frequency plots and technical jargon. Here’s what really counts:

Active Vs. Passive

Speakers need an amplifier to produce sound. If a speaker is active, it means the amplifier is built-in. This makes active speakers completely self-contained—you just need to plug them into the wall and your interface. On the other hand, passive speakers need a separate power amp to function. I would avoid them, as they add another piece of equipment to your home recording studio.

Near-Field Vs. Mid/Far-Field

Near-field monitors are built to be used in close quarters, like a home studio. Mid-field and far-field monitors are built to be placed farther away from your ears, and are more suitable for larger spaces. Go for a pair of near-fields (unless you live in a castle).

Frequency Response

Most studio monitors have a fairly flat frequency response. This means they sound neutral—the bass isn’t louder than the treble, and everything is well-balanced. However, even the flattest studio monitors will sound different in your home recording studio (room acoustics affect speakers dramatically). For this reason, I wouldn’t obsess over the frequency response of your speakers. You can always use software like Sonarworks Reference 3 to flatten things out later on.

Pay attention to how far the speakers extend down the frequency spectrum. This will often be quoted as the bottom number in a range (from 40 Hz to 20 kHz, for example). Smaller speakers won’t extend down as far. This will make it harder to hear what’s going on in your recordings. Try to find speakers that extend to 40 Hz or below.

Connectivity

Your studio monitors will have XLR, 1/4″, or RCA inputs. Make sure these are the same type of connectors your interface uses. If the two don’t match up, you’ll need a special adapter or cable to connect them. This isn’t a big deal, but it’s best avoided.

Studio Monitor Recommendations

Under $300 (Pair)

Under $600 (Pair)

Back To Table Of Contents

How To Pick The Perfect Pair Of Headphones


Pair of headphones

Headphones are an invaluable studio ally. You can use them while overdubbing, mixing, or to avoid disturbing your neighbors.

Like studio monitors, studio headphones are designed to be tonally neutral. While I don’t recommend mixing on them exclusively, headphones like these will offer you an accurate, unbiased perspective on your recordings.

When trying to find the right pair, here are some things to keep in mind:

Open-Back Vs. Closed-Back

Open-back headphones have perforations on the outside of each cup which allow sound to pass through easily. They typically sound better than closed-back headphones, and are the preferred choice for mixing. However, since sound leaks out of them so easily, they’re not ideal for recording (mics pick them up).

On the other hand, closed-back headphones have a hard enclosure that prevents sound from escaping. This makes them a better choice for recording, when maximum isolation is needed.

If you’re only going to buy a single pair for your home recording studio, go for closed-back. They’re more versatile.

Connectivity

Most pro studio headphones use a 1/4″ plug. This is thicker than the 1/8″ plug you’ll find on most consumer headphones. If you want to plug your studio headphones into an iPhone or laptop, you’ll need a 1/4″ to 1/8″ adapter.

Comfort And Fit

You’ll be wearing these for hours on end, so you want them to be comfortable. Cushy foam padding makes a big difference. Also, look for headphones that rest over, not on your ears. And if possible, try them on before you purchase!

Recommendations For Headphones

Under $100

Under $250

Under $500

Additional Resources

Back To Table Of Contents

How To Find A DAW That Makes Recording Easy


Ever seen one of these?

Large format recording console

While they may look cool, consoles like these are now collecting dust in top-tier studios across the globe.

Why?

You don’t need them anymore. In many cases, they’ve been replaced by digital audio workstations.

A digital audio workstation, or DAW, is the software that will power your home recording studio. It’s what you’ll use to record, play back, and manipulate audio inside your computer. Arm yourself with a great DAW, and you’ll be able to do everything you can do on that hunk of junk above (and more).

What’s The Best-Sounding DAW?

Visit any online audio forum and you’ll find people that claim one DAW (usually the one they use) sounds better than the rest.

This isn’t true. In fact, all DAWs sound exactly the same. The differences between them have more to do with workflow than anything else.

My 3 Favorite DAWs

When choosing a DAW, there are tons of great options. Here are my favorites:

Pro Tools

Pro Tools logo

As a mixer, Pro Tools is my DAW of choice. I’ve been using it for nearly a decade.

You’ll find Pro Tools in most recording studios. This is helpful if you ever end up recording in a commercial studio, because you’ll be able to open the projects you save on your own rig. This means you’ll be able to record drums in a professional studio, for example, and then edit them later in your home recording studio.

Pro Tools excels as a recording platform. Its audio-editing features are second-to-none. However, beatmakers or EDM producers may be better off with one of the DAWs below.

Logic

Logic is the preferred choice for many producers. It features a fantastic library of sounds and plugins—one of the most comprehensive packages available. When I’m not mixing, it’s my favorite DAW.

Unfortunately, Logic is Mac-only.

Ableton Live

Ableton Live is great for loop and sample-based producers. In fact, many EDM producers swear by it. Its audio manipulation tools are flexible and innovative, and it can be easily integrated into a live performance. If I was an electronic music producer, Ableton Live would be my choice.

Other DAWs Worth Exploring

Your search shouldn’t stop here. Here are some other DAWs worth exploring:

  • Cubase
  • Studio One
  • Digital Performer
  • Adobe Audition
  • SONAR

How To Choose The Perfect DAW For You

Choosing a DAW is like dating. Download a few trial versions and take them for a spin. Explore your options and make sure things fit before committing. While all major DAWs have similar features, some do certain things better than others.

If you’ll be collaborating, check out what DAW your collaborators use. It’s much easier to work together if you’re both using the same software. But in the end, the choice is yours.

Don’t get too hung up here. Remember, The Beatles recorded Sgt. Pepper on a 4-track tape machine. Even the most basic DAW has infinitely more power. Go with your gut and move on.

Save Hundreds By Avoiding Unnecessary Plugins

Too many plugins!

As you start to explore the world of home recording, you’re going to run across plugins.

These are pieces of third-party software that extend the functionality of your DAW. They allow you to manipulate sound in different ways.

Most people invest in plugins too early. If you’re just getting started, your DAW’s stock tools are more than enough to make great recordings. Master what you have first—more plugins won’t necessarily lead to better-sounding tracks.

Back To Table Of Contents

The Extra Stuff Most People Forget


We’ve covered the basics, but there are a couple of extras you’ll probably need too…

Cables

You’ll need an XLR cable to connect your mic to your audio interface.

You’ll also need a pair of cables to connect your speakers to your interface. These will be either 1/4″, XLR, or RCA—depending on which connectors your speakers and interface use.

Mic Stand

Go for quality here. Cheap, flimsy stands will be the bane of your existence. I prefer ones with three legs over those with a circular, weighted base. They tend to be more stable and don’t fall over as much.

What I Recommend: On-Stage Stands MS7701B

Pop Filter

A mesh screen that sits between your microphone and vocalist. It helps diffuse the blasts of air that accompany certain consonants (like “p” and “b” sounds). Left alone, these blasts will overload your microphone’s diaphragm, leading to boomy, muddy recordings. This essential accessory will significantly improve the quality of your tracks.

Pro Tip: For a pop filter to work well, there needs to be a few inches between the filter and the mic, as well as the filter and the singer. If you push the filter right up against the mic or put your mouth on it, it won’t be able to do its job.

What I Recommend: On-Stage Stands ASFSS6GB

Speaker Stands

As you’ll learn below, it’s best to get your speakers off a desk and onto stands. This is an easy move that will lead to a significant improvement in sound quality.

What I Recommend: On-Stage Stands SMS6000

MIDI Keyboard

Akai MPK49 MIDI keyboard

With a MIDI keyboard, you’ll be able to “play” any instrument imaginable. You can use it to fill out and orchestrate your recordings. If you’ll only be recording real instruments or vocalists, you won’t need one. But if you’re a beatmaker or electronic music producer, it’s almost essential.

What I Recommend: Akai MPK249 (don’t forget the sustain pedal)

Desk

You may have a desk that works already. If not, I’m a big fan of the On-Stage Stands WS7500. This is what I use in my home recording studio now. It’s a great way to get started!

Comfortable Chair

If you’re going to be logging some serious hours in your home recording studio, it makes sense to be comfortable, right?

Invest in a comfy chair with good support. You and your back will thank me later.

What I Recommend: Alera Elusion Mesh Mid-Back Office Chair

Back To Table Of Contents

How To Set Up Your Room For Studio-Quality Sound


Every decision you make while recording will be based on what you hear. If what you’re hearing isn’t accurate, you won’t make the right decisions. This will lead to recordings that sound good in your studio, but fall apart on other speakers.

You can avoid this by setting up your home recording studio properly. Don’t overlook this crucial step! If you follow the guidelines in the video below, you’ll be well ahead of most home studio owners. Your recordings will sound better too!

Taking Your Room To The Next Level With Acoustic Treatment

After your home recording studio is up and running, you’ll want to invest in acoustic treatment panels. These will improve the sound of your room by evening out acoustic problems. While acoustic treatment is beyond the scope of this article, I’ve put together a PDF with resources that will help you get started.

It’s Time To Build The Home Recording Studio Of Your Dreams

MacBook and mixer

There will be nothing more satisfying than hearing your own recordings play over the speakers in your new home studio. You now have everything you need to make this happen.

The next step is for you to take action. Order the equipment you need, set up your room using the guidelines above, and start recording! Remember, once you get all this out of the way, you can get on to the good stuff—making great music!

But before you go, leave a comment below and tell me—what will you use your home recording studio for?

I wish you the best of luck on your home recording journey!

Production: Creating the Perfect Bass Sound

[Editors Note: This bass production guide was written by our friends over at Point Blank London, and was originally featured on their site. Check it out here for audio samples and more.]

 

Searching for the perfect bass patch can be an arduous task. With such a plethora of synths and libraries out there, flicking through the almost endless presets to find what’s right for you is like finding a needle in a haystack.

Getting something that works with any samples or chord progressions you’ve got, that sits nicely with your kick drum and still carries enough weight to shake those subwoofer cones can seem like a juggling act.

In this tutorial we’re going to explain how to create bass sounds and lines with powerful subs, thick mids and tops that cut through on any system. Download the project used in this tutorial here.


There are a myriad of dos and don’ts out there and you can spend more time tweaking than actually making music. In this article we’ll take a forensic look at how to build your basslines from the bottom up, from creating a penetrating sub bass, layering the mids and tops, getting it to bite in all the right places and processing it with your kick and rest of the mix.

Due to the low bass frequencies in these audio examples we suggest listening through good headphones or studio monitors to appreciate the nuanced programming.

Low-End Theory

Depending on which genre of music you’re working on, the bass might perform a different function; in house and techno, a weightier kick drives the track along, dictating the pace and feel. Basslines in these genres might contain more mid-range frequencies to cut through the mixes.

Drum ’n’ bass, dubstep and other bass-heavy music can contain much more bottom end and sub frequencies, underpinning your loop. Balancing your kick and bass can be an essential part of getting your track working. With weak foundations, you’re going to struggle to get the rest of the mix sitting comfortably.

To understand bass properly, there are a few key terms you will want to get your head around: amplitude, harmonics and phase. Amplitude is simply a term for volume, but it’s not to be confused with decibels (dB). It’s more akin to relative volume or power.

Harmonics is the name given to the individual frequencies that make up a sound. The lowest, loudest frequency in your bass sound is the first harmonic (or fundamental). Any frequency above this will normally be a harmonic.

Harmonics are integer multiples of the fundamental frequency. If that didn’t mean anything to you, don’t worry: the maths is simple. Let’s have a look at Live’s Operator instrument using Osc A. Below is a low A note (110 Hz), and I’ve adjusted the Waveform Editor, bringing in the next four harmonics one at a time. They are 220, 330, 440 and 550 Hz.
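To make the integer-multiple idea concrete, here’s a minimal Python sketch (illustrative only; the `harmonics` helper is a name invented for this example, not part of any synth’s API):

```python
def harmonics(fundamental_hz, count):
    # Harmonics are integer multiples of the fundamental frequency
    return [fundamental_hz * n for n in range(1, count + 1)]

# Low A at 110 Hz: the fundamental plus the next four harmonics
print(harmonics(110, 5))  # [110, 220, 330, 440, 550]
```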

First Five Harmonics

Just above Operator we can see Voxengo SPAN mapping frequency across our X-axis and amplitude across our Y-axis: you can quite clearly see each harmonic creeping in relative to the fundamental. To the right is an oscilloscope by Laidman & Katsura, this displays time across the X-axis and amplitude across the Y-axis.

The other concept we need to familiarise ourselves with is phase. There are primarily two places we’ll come up against this, the first of which is the start phase of an oscillator. Below we can see eight notes with their phase free running (the default of most synths) and with their phase locked to restart at 0º when a note is played:

Free

Restarted

As you can see, without restarting the phase each note has a different start position within the oscillator’s cycle, causing irregularities in volume and nasty clicks and pops.
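A quick way to see why free-running phase clicks is to sample a sine oscillator at the moment of note-on. This is an illustrative sketch, assuming a 44.1 kHz sample rate; `osc_value` is a hypothetical helper invented for the example:

```python
import math

SAMPLE_RATE = 44100

def osc_value(freq_hz, phase_offset, sample_index):
    # Instantaneous value of a sine oscillator at a given sample
    return math.sin(2 * math.pi * freq_hz * sample_index / SAMPLE_RATE + phase_offset)

# Phase restarted at 0 degrees: every note begins at amplitude zero (no click)
print(osc_value(110, 0.0, 0))  # 0.0

# Free-running: note-on lands mid-cycle, so the output jumps straight to a
# nonzero value, and that abrupt step is heard as a click or pop
print(round(osc_value(110, math.pi / 2, 0), 2))  # 1.0
```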

The second instance of phase we’re likely to come across is the relationship between the left and right channels. It’s highly recommended to keep your frequencies below about 100 Hz in mono: any disparity in the stereo spectrum here can be very noticeable, causing phasing issues when summed to mono and more irregularities in volume.
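The worst case for out-of-phase low end is easy to demonstrate numerically. Below is a sketch (not from the tutorial; `mono_sum` is an invented helper) where the left and right channels carry the same low sine 180° out of phase; summed to mono, the bass cancels almost completely:

```python
import math

def mono_sum(left, right):
    # Sum stereo samples to mono, as a club PA or phone speaker might
    return [(l + r) / 2 for l, r in zip(left, right)]

n = 64
left = [math.sin(2 * math.pi * i / n) for i in range(n)]             # 0 degrees
right = [math.sin(2 * math.pi * i / n + math.pi) for i in range(n)]  # 180 degrees

peak = max(abs(s) for s in mono_sum(left, right))
print(round(peak, 6))  # 0.0 -- the low end vanishes when summed
```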

Creating a Sub Bed

While we might tend to think of basses as one sound, we can sometimes separate their spectra further into complex composites – containing as many as three or even four layers – each requiring different programming, processing and treatment.

Flexibility with the sub, low-mids and mid range can be key in getting the right amount of punch, the bass cutting through the mix and retaining that all-important stereo image. Let’s start off with our sub frequencies.

The only way your bass is going to move air on the dancefloor is with a good, meaty sub. Making a competent sub isn’t rocket science and requires very little understanding of synthesis and sound, but making a great sub takes a little more. Let’s stay with Operator for now.

Osc A defaults to a sine wave, a waveform that contains only the first harmonic. This is good for sub bass as it’s clear and uncluttered. Ensure the phase restarts on 0º (0%) and change the Voices to 1 in the Global Shell.

Phase Restart

If you’re leaving the sub as the sole layer for the bass part then you can almost leave it untouched. I’ve added in -30dB from Osc B, which is modulating the frequency of Osc A. This adds just a few harmonics into the sound helping it cut through a busier mix and on smaller speakers.

Do this by enabling Osc B and turning the Level up to -30dB, or wherever you feel the sweet spot is. It’s good to check on a spectral analyser, though, as frequency modulation can sometimes overpower the fundamental frequency if you add too much in.
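For intuition, this style of FM can be approximated in a few lines of Python. It’s an illustrative sketch only (strictly speaking it’s phase modulation, which is what Operator-style FM synths actually do); `fm_sample` and its parameters are invented for the example:

```python
import math

SAMPLE_RATE = 44100

def fm_sample(i, carrier_hz, mod_hz, mod_index):
    # One sample of simple two-operator FM: the modulator (Osc B's role)
    # phase-modulates the carrier (Osc A's sine)
    t = i / SAMPLE_RATE
    modulator = math.sin(2 * math.pi * mod_hz * t)
    return math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator)

# mod_index = 0 is a pure sine sub; raising it adds harmonics, roughly
# what turning up Osc B's level does
pure = [fm_sample(i, 110, 110, 0.0) for i in range(256)]
bright = [fm_sample(i, 110, 110, 2.0) for i in range(256)]
print(pure[1] != bright[1])  # True -- the modulated version deviates from the sine
```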

Osc B introduced

By increasing the level of Osc B we can create a brighter, sharper tone. You can shape the overall FM by reducing the sustain of the amplitude envelope of Osc B. With the level around -13dB, and changing the Coarse tuning to 4 (fourth harmonic), we can get an archetypal garage/UK house sound:

Garage Bass

Shaping the Low-Mid Tone

Once we’ve got our foundations laid we need to move on to the lower mid range, which is going to shape the body of our bass. Click on the Operator and hit cmd + G (or ctrl + G if you’re on a PC) to group the Operator into an Instrument Rack. Instrument Racks allow MIDI to be distributed to various different chains of synths and samplers and their combined signals to be processed and mixed individually.

Click on the Show/Hide Chain List and rename the Operator “Sub”. It can be muted for now while we concentrate on our midrange.

Show Chain List

Ctrl + right-click in the panel where it says Drop an Instrument or Sample Here, click Create Chain and name it “Mids”. We’re looking for a synth that has a couple of oscillators and, while almost any subtractive synth will do, I’m opting for Native Instruments’ Massive. Drag and drop it on to the Mids chain.

Massive’s default preset is using Oscillator 1 with a wave that’s harmonically halfway between a square and a sawtooth. It’s running into filters 1 and 2, and Envelope 4 is controlling our amplitude. Let’s set about getting it to a place where we can design our sound.

Move the WT-Pos (wavetable position, highlighted in green) fully clockwise to Squ and set the routing of the oscillator to F1 (yellow). Now click on the 4 Env panel and reduce the Attack to minimum and increase the Level to maximum (blue and red).

Massive Reset

You can repeat these steps for oscillators 2 and 3 if you want.

In the Osc panel, click to Restart via Gate in the Oscillator Phases box. Much like Operator, Massive allows us to select the start phase of our oscillators each time a new note is received. If we were designing a pad or poly synth patch with unison detune it might not be necessary to take these steps, but for a lot of modern bass sounds it’s recommended.

Finally, in the Voicing tab, change the Voicing from Polyphon to Monorotate and the Trigger from Always to Legato Triller. These steps ensure the bass is monophonic and that envelopes won’t retrigger if two notes overlap.

Next I’m going to enable Osc 2 and load a sawtooth in. There are two choices here: the Squ-Saw and the Squ-Sw II. Ensure the WT-Pos is in the right place and turn the amplitude up to just halfway. This gives us a richer sound that is dominated by the odd harmonics provided by the square wave – plenty of middle and top end for our filters to bite on to.

Route Osc 2 to F1 and turn your attention to the filter section. I’m going to add the Lowpass 2 filter – this has a weaker slope than the Lowpass 4 giving it a smoother sound – which will sound great later on down the line when we start modulating it.

Set the Cutoff to about 8 o’clock and leave the Resonance as is. Before moving on I’ve added the Ktr (keyboard tracking) Macro to modulate our filter. This tracks the position of the filter according to the pitch, opening it as the pitch gets higher. Lastly set the >F2 to Series and the Mix to Mix1.

At this stage you can add a third oscillator in tuned up an octave or two if you want to. This won’t really add anything to the weight of the bassline but it might help it come across on smaller speakers.

In addition you could add a Sine Shaper from the Inserts. Experiment with its position before or after the filter in the Routing panel.

Filter

Filter Envelopes for Bite and Punch

Modulation comes in all shapes and sizes and by far the two most common sources are LFOs and envelopes. Let’s look at each in turn, starting off with LFOs.

LFO stands for low frequency oscillator: an oscillator whose rate falls roughly within the 0.01 Hz to 20 Hz range. We wouldn’t be able to hear these waves on their own, as they’re subsonic, but when applied to filter cutoff or volume we can hear their effect.

Their speed is determined by a ‘rate’, and their modulation is bipolar, i.e. it has a positive and a negative part to the cycle. LFOs are great for tempo-synced modulation like dubstep wobbles, filter and frequency modulation, as well as stereo tremolo on pads and Rhodes-type instruments.
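Tempo-synced LFO rates are just arithmetic on the BPM. Here’s a small sketch (the `lfo_rate_hz` helper is a name invented for illustration):

```python
def lfo_rate_hz(bpm, beats_per_cycle):
    # Convert a tempo-synced division to an LFO rate in Hz.
    # beats_per_cycle: quarter-note beats per LFO cycle
    # (0.5 = eighth-note wobble, 4 = one-bar sweep in 4/4)
    seconds_per_beat = 60.0 / bpm
    return 1.0 / (seconds_per_beat * beats_per_cycle)

# A classic eighth-note dubstep wobble at 140 BPM:
print(round(lfo_rate_hz(140, 0.5), 3))  # 4.667
```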

Envelopes, on the other hand, are unipolar, and whereas LFOs are free-running, envelopes are gate-triggered. Massive contains four envelopes, and number 4 defaults to modulating the amplitude.

Commonly there are four stages in an envelope: the attack (the time in milliseconds it takes to reach the maximum level after a MIDI note-on); the decay (the time in ms it takes, once the attack has passed, to reach the sustain stage); the sustain (the level at which the note holds); and the release (the time in ms the sound takes to reach zero again after a MIDI note-off is received).
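Those four stages can be sketched as a simple function of time. This is an illustrative linear model only (real envelopes such as Massive’s use curved segments); all names are invented for the example:

```python
def adsr_level(t_ms, attack_ms, decay_ms, sustain_level, release_ms, note_off_ms=None):
    # Linear ADSR level at time t_ms after a MIDI note-on
    if note_off_ms is None or t_ms < note_off_ms:
        if t_ms < attack_ms:                      # attack: ramp 0 -> 1
            return t_ms / attack_ms
        if t_ms < attack_ms + decay_ms:           # decay: ramp 1 -> sustain
            frac = (t_ms - attack_ms) / decay_ms
            return 1.0 + frac * (sustain_level - 1.0)
        return sustain_level                      # sustain: hold until note-off
    # release: ramp from the level at note-off down to 0
    level_at_off = adsr_level(note_off_ms, attack_ms, decay_ms, sustain_level, release_ms)
    frac = (t_ms - note_off_ms) / release_ms
    return max(0.0, level_at_off * (1.0 - frac))

# 10 ms attack, 200 ms decay, sustain at 0.6, 100 ms release:
print(adsr_level(5, 10, 200, 0.6, 100))    # 0.5 (halfway through the attack)
print(adsr_level(500, 10, 200, 0.6, 100))  # 0.6 (holding at the sustain level)
```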

ADSR-2

I’ve used envelope 1 to control several parameters in our mid-layer. Here, I’ve used the shortest attack available, dropped the level (Massive’s terminology for sustain) and set the decay parameter to a value of 11 o’clock. The decay time might differ drastically depending on your tempo: at higher bpms you might want a shorter decay time, while at slower tempos you could get away with letting the envelope’s modulation breathe a little more.
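The tempo/decay relationship is easy to quantify: a given rhythmic division gets shorter in milliseconds as the BPM rises. A quick sketch (the helper name is invented for illustration):

```python
def note_length_ms(bpm, beats):
    # Length in ms of a rhythmic division; beats = 0.25 is a sixteenth note in 4/4
    return 60000.0 / bpm * beats

# A sixteenth-note decay shrinks as the tempo climbs:
print(note_length_ms(120, 0.25))         # 125.0
print(round(note_length_ms(174, 0.25)))  # 86
```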

Decay envelope

We can add this envelope to as many different parameters as we like. Firstly, let’s add it to our filter (which, if you remember, already has some modulation from the keyboard tracking). Setting the amount of modulation is key to controlling the harmonics that come through, and therefore sets the tone of your transient. More modulation means a brighter initial hit; less means a duller one.

I’m also adding the same modulation to the Drive circuits on the two Inserts, for which I’ve used Parabolic and Sine Shapers. These add harmonics into the signal by folding over the upper portions of a waveform. One of these is placed before the filter and one after.

Lastly I’ve used Massive’s powerful Modulation Oscillator tuned up 19 semitones (one octave plus a perfect fifth above the MIDI input) and set to Phase modulate Osc 2. Sonically phase modulation is very similar to frequency modulation, and again adds a nice blast of complex high frequencies to our transient.
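The choice of 19 semitones isn’t arbitrary: in equal temperament it lands almost exactly on the third harmonic of the note being played, so the modulation reinforces the existing harmonic series. A quick check (the helper is invented for illustration):

```python
def semitones_to_ratio(semitones):
    # Equal-temperament frequency ratio for an interval in semitones
    return 2 ** (semitones / 12)

# 19 semitones = an octave (12) plus a perfect fifth (7)
ratio = semitones_to_ratio(19)
print(round(ratio, 3))        # 2.997 -- almost exactly 3x, i.e. the third harmonic
print(round(110 * ratio, 1))  # 329.6 (Hz, above a 110 Hz fundamental)
```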

envelope mod

Macro Managing

We want this bass to be as flexible as possible so I’m going to set up some Macros within Live’s Instrument Rack to control our mids. Click on the Unfold Device Parameters and then click Configure.

Unfold

Configure

Now, anything you touch in Massive will populate this list. I’m going to add the filter cutoff, the drive and dry/wet from both inserts, the phase from our modulation oscillator and the level from envelope 1. If you’ve done that correctly it should look like this:

Config 2

Unclick the Configure button and assign these to Macros. I’m going to give the filter cutoff its own Macro named “Cutoff”, and the dry/wet and drives of both inserts will be mapped to Macro 2, “Drive”.

The envelope level will be mapped to Macro 3, named “Env Mod” (conveniently, reducing this Macro to 0 will remove all of the envelope modulation), and lastly the phase will be mapped to Macro 4, named “FM”.

colour code

Once they’re named and colour-coded, click Map and carefully set the ranges for each parameter. It’s good to have a MIDI loop running in the background whilst you do this. You want to set a minimum and maximum that are musical but allow some space for interesting automation later on down the line.

Macros

Top Layer

Now we’ve put the work into our mid-range let’s concentrate on the top layer. I’m going to duplicate my instance of Massive for mids by clicking on the chain and hitting cmd + d (or ctrl + d for a PC). Rename this new chain “Top” and solo it.

Aside from the patch being duplicated, you’ll notice all of our hard work tweaking the Macros has been retained. Let’s edit this patch to get a more suitable top end. Firstly I’m going to disable Restart via Gate in the Oscillator Phases box, as I’m going to experiment with Unison Detune in this patch, and restarting the oscillators’ phases can sometimes create a nasty flanging sound when combined with unison detune.

I’m setting both oscillators to sawtooths now, matching their amplitudes and detuning them ever so slightly. The wider the detune amount, the faster the beating we get. Beating is a fluctuation we hear when two oscillators play the same note slightly out of tune (you hear a similar effect when tuning two adjacent strings on a guitar together).

I’ve opted for +/- 20 cents. Next add in Osc 3 selecting the Scrim (Screamer) wavetable. Use envelope 1 to modulate the wavetable readout. I’ve gone for a range of 10 o’clock-5 o’clock.
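The ±20-cent detune implies a specific beat rate, since beating speed equals the frequency difference between the two oscillators. An illustrative calculation (the function name is invented):

```python
def cents_to_ratio(cents):
    # Frequency ratio for a detune amount in cents (100 cents = 1 semitone)
    return 2 ** (cents / 1200)

fundamental = 110.0  # a low A
up = fundamental * cents_to_ratio(20)
down = fundamental * cents_to_ratio(-20)
beat_hz = up - down  # beating speed = frequency difference
print(round(beat_hz, 2))  # 2.54 -- about two and a half 'wobbles' per second
```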

Lastly for our oscillators, add in the Noise oscillator with envelope 1 controlling the amplitude. We want a blast of noise at the transient of the sound but having too much noise in the sustain stage will quickly muddy the sound up. I’ve chosen the Tape Hiss option here.

Oscs

Let’s turn our attention to the filter. I’ve left the settings intact but changed the algorithm to Bandpass. This works by isolating a band of frequencies, leaving us with a more aggressive but thinner sound perfect for our top layer. Set the Bandwidth and Resonance to about 9 o’clock.
In the voicing tab change the number of Unison Voices from 1 to 4 and enable the Pitch Cutoff and Pan Position, adjusting their values to taste. Pitch Cutoff will add some detuning to each voice and Pan Position will spread those around the stereo spectrum. Now our layer is starting to sound the part.

Bandpass

voicings

There’s not much more to do but turn our attention to the FX tab. I’m adding in a Classic Tube and Dimension Expander while shelving off some bottom end in the EQ tab. Keep a close eye on the Master as all of these distortions and unison effects can easily clip the sound unpleasantly.

Processing Layers Together

Now we have our three layers in place, we need to think about separating them so there’s as little overlap as possible and each part occupies its own space in the frequency and stereo spectrum. As our sub is fine, let’s start with the mid layer. Solo it and add Live’s EQ Eight.

I’ve high-pass filtered it fairly abruptly at 80Hz using the 48dB/Oct slope: this stops it interfering with our sub. I’m also going to add some compression to even out the level a little more and some limiting to deliberately clip the layer. You could add more distortions and modulations here but I’m going to reserve them for our top layer.

Solo the top and add an EQ Eight. Add Live’s Pitch plug-in from the MIDI Effects tab and tune it up an octave. This will transpose any incoming MIDI up an octave automatically – a great time-saving device! I’m again going to high-pass the sound, this time using the standard 12dB/Oct slope and high-passing at 180Hz.

I’ve also added Live’s Auto Filter (adding some extra low-pass filter envelope modulation), the Simple Delay (using short unsynced values of 30 and 80ms), some Reverb, Compression and Limiting. Here’s the top layer on its own now.

FX 1


Lastly I’m going to map the levels of each chain to a Macro, allowing me easier control over each layer, and the dry/wet of the top layer’s FX to my last remaining Macro.

Macros final

Multi-Band and Parallel Processing

Now our synth is balanced internally, we can think about processing it as a whole. The way Ableton nests Instrument Racks is clever, but it means that in order for any effects we now apply to act on all three existing layers, we’ll need to re-group (cmd + G / ctrl + G) the current layers into another Instrument Rack. Alternatively, add an Audio Effects Rack after it.

While we can use filters or EQ to separate frequency bands, it’s safer to use Live’s Multiband Dynamics, as the bands are phase coherent and this will minimise the amount of delay applied to any part of the spectrum. I’ve added three chains, each with a Multiband Dynamics, each soloing one of the Low, Medium and High bands. Ensure you label your chains for ease of use at a later date.
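The phase-coherence point is worth unpacking: if the high band is derived as the input minus the low band, the bands are guaranteed to sum back to the original signal. Here’s a naive one-pole sketch of that idea (not how Live implements it; all names are invented for illustration):

```python
def split_bands(samples, alpha=0.1):
    # Naive complementary crossover: a one-pole low-pass plus its residual.
    # Because high = input - low, the two bands sum back to the input exactly.
    low, high, state = [], [], 0.0
    for s in samples:
        state += alpha * (s - state)  # one-pole low-pass
        low.append(state)
        high.append(s - state)        # residual carries what the low band missed
    return low, high

signal = [0.0, 1.0, 0.5, -0.3, -1.0, 0.2]
low, high = split_bands(signal)
error = max(abs(l + h - s) for l, h, s in zip(low, high, signal))
print(error < 1e-12)  # True -- splitting then summing is lossless
```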

Now we can process these bands individually and adjust their crossovers if we choose. Start by adding a Utility to the Low chain and reducing the Width to 0%. It’s recommended to keep your bottom end in mono for nearly all applications, and this plug-in can ensure that. I’ve also added Live’s Compressor with a slow attack and release and a high ratio to tame the dynamic range a bit.

Multiband 1

On the Mids chain I’ve adjusted the high crossover band to 1.5 kHz to narrow this range a little. Adding another Utility I’ve kept the Width at 60% and added some more compression with a much faster attack and release to match the quick envelope modulation of this band.

Lastly in the High band I’ve adjusted the Width of a Utility to 120% to spread the sound a little and added some light low-pass filtering around 8.5 kHz. After the Audio Effects Rack you can add in any further EQ you might want (to balance the patch specifically with your track), any compression, limiting and sidechain compression.

The patch is designed to be a jack of all trades and will require some tweaking of the Massive instruments and processing to get it to sit just right, so be liberal with adjustments. Hopefully this acts as a springboard to inspire you to create your own bass sounds too. Download the project used in this tutorial here.