[Editors Note: This blog was written by Scott Wiggins and originally appeared on his site, The Recording Solution, which is dedicated to helping producers, engineers and artists make better music from their home studios.]
How many of you are completely terrified of doing anything to the mix buss, aka the “stereo buss” or “2 buss”?
It is really easy to mess up an entire mix with too much processing, in particular mix buss compression.
Over years of scouring the internet for my favorite mixers’ (Jacquire King, Dave Pensado, Chris Lord-Alge, and many more) mix buss compression settings, I’ve found that a little goes a long way.
Mix Buss Compression Glue
Have you ever heard the term “glue” in a conversation about recording and mixing?
No, I’m not talking about the kind you used to put on your hands in elementary school so you could peel it off when it dried.
Am I the only one who did that?
I’m talking about the way compression can make tracks seem like they fit together a little better.
When set up correctly, it makes the whole song feel like it’s glued together in subtle ways, giving it a nice, musical, polished, cohesive sound.
The goal with mix buss compression is to tame any transients that spike up in volume just a little too much, and then bring the overall volume of the rest of the tracks up juuuuuust a bit.
We’re just trying to add a little more energy and fullness to the mix.
Mix Buss Compression Settings
The attack setting you use for mix buss compression is important just like using a compressor on any other track.
With a faster attack the compressor will clamp down sooner on the transients that tend to be a little louder than the rest of the audio coming through.
A slower attack will wait milliseconds before it clamps down on the audio and starts compressing.
I tend to use a faster attack, BUT I’m not crushing those transients with a ton of compression, so I still keep the dynamics in my mix.
If I found I was killing the transients too much and there was no excitement in my mix, I would probably make it a slower attack setting.
I tend to use a medium to fast release setting.
I’ve heard a lot of famous mixers say they set the release with the tempo of the song.
So they would watch the gain reduction needle and have it release on beat with the song.
I try my best to use this method.
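The arithmetic behind that trick is simple. Here’s a minimal Python sketch of it (the function name and the 120 BPM example are my own illustration, not from any particular mixer) that converts a song’s tempo into a release time so the gain-reduction needle can recover roughly on the beat:

```python
# Rough sketch: derive a compressor release time from song tempo,
# so the gain-reduction meter "breathes" in time with the track.
# Timing the release to the beat is the method described above;
# the exact numbers here are illustrative assumptions.

def release_ms_per_beat(bpm: float, note_fraction: float = 1.0) -> float:
    """Milliseconds per beat (or a fraction of it, e.g. 0.5 for an 8th note)."""
    return 60_000.0 / bpm * note_fraction

# In a 120 BPM song, one beat lasts 500 ms, so a release somewhere
# between 250 and 500 ms lets the compressor recover on the pulse.
print(release_ms_per_beat(120))        # 500.0
print(release_ms_per_beat(120, 0.5))   # 250.0
```

Use this only as a starting point; watching the needle and listening still beats the math.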
I use a really small ratio of around 1.5 to 1.
This means that once my audio passes the threshold I’ve set, there is very little compression happening to that audio.
It’s just a little bit. I’m not trying to squash the life out of it.
You can experiment with a little bit higher of a ratio, but know that the lower the ratio the less compression (more dynamics), and the higher the ratio the more compression (less dynamics).
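To make the numbers concrete, here’s a rough Python sketch of the static gain curve a ratio describes (a simplification: real compressors also smooth the result with attack and release, and the example levels are hypothetical):

```python
# Illustrative sketch of what a 1.5:1 ratio means in dB terms.
# Above the threshold, the output rises only 1 dB for every
# `ratio` dB of input.

def compressed_level_db(input_db: float, threshold_db: float, ratio: float) -> float:
    """Static compressor curve (no attack/release smoothing)."""
    if input_db <= threshold_db:
        return input_db  # below threshold: signal passes untouched
    return threshold_db + (input_db - threshold_db) / ratio

# A peak 6 dB over a -10 dB threshold at 1.5:1 comes out only
# 4 dB over it: just 2 dB of gain reduction -- subtle, as intended.
print(compressed_level_db(-4.0, -10.0, 1.5))  # -6.0
```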
I dial the threshold to where I’m only getting about 1 to 3 dB of gain reduction on the peaks of the audio.
I tend to keep it on the lower side, at 1 to 2 dB of gain reduction.
You just want to kiss the needle. You don’t want too much mix buss compression happening.
Remember, we are going for a subtle “glue”-like effect.
Make up Gain:
Just like on any other compressor, I turn the make up gain to match the amount of gain reduction happening.
Be careful here. Don’t turn it up too loud and fool yourself into thinking you like the result just because it’s louder.
Do your best to match the input volume with the output volume of the compressor.
We tend to think louder is better when it’s not really better, it’s just louder.
I’ve shot a video tutorial below to show all of this in action on a mix I’ve started. Check it out!
Mix buss compression is a great way to add a little bit of excitement and glue to your mix.
Some people like to slap it on the master buss AFTER they have mixed the song (Ryan West, whose credits include Jay-Z, Eminem, Kid Cudi, Maroon 5, T.I., Rihanna and Kanye West, works this way).
And some engineers like to slap a little bit of compression on in the beginning and mix through it.
I don’t think there is a right or wrong way when it comes to when to put it on.
The key is to be subtle and don’t kill a good mix with too much mix buss compression.
Use your ears like always. They are your biggest weapons.
If you’ve been using TuneCore or reading our blog for the past few years, you know that we’ve tried to highlight the benefits of well-mastered releases. Mastering is an art that can vastly improve the sound of your recorded music – and it once took knowing an audio engineer who specializes in this process to make it a possibility. It could also be more costly for those releasing on an indie budget.
In an effort to solve this, a little while back TuneCore began offering Aftermaster in our suite of Artist Services – a brilliant program that connects independent artists in need of mastering with GRAMMY Award winning engineers at a reasonable cost. Artists without the local resources or connections to high quality mastering options could now use Aftermaster to easily coordinate this studio magic-making in advance of their release.
Now, we’re psyched to announce a new and even more cost-friendly solution for instant mastering: Promaster by Aftermaster. TuneCore has expanded its partnership with Aftermaster in order to offer our artists the finest instant mastering tool on the market right now to polish their recently recorded tracks.
Promaster by Aftermaster is unlike any other instant mastering product in that it was wholly developed by artists, producers and audio engineers to streamline the mastering and storage process without compromising quality or creative intent. It uses cutting edge audio processing systems developed internally to instantly master audio to the highest standards.
Since we’re in the business of hooking artists up, Promaster by Aftermaster is ready to offer special rates exclusively for TuneCore Artists. For $9.95 per single and $24.95 per album, TuneCore Artists can also enjoy the following features:
Receive four versions of each track: ‘Powerful’, ‘Radio Ready’, ‘Bass Enhanced’ and ‘Vocal Enhanced’
Free preview of your masters before you purchase
Master an entire album at once, ensuring consistency
Monthly and annual subscriptions available
Whether you just wrapped up in the studio, recorded a new single in your bedroom, or feel like revisiting some never-released tracks – polish ‘em up fast at a price that doesn’t break the bank.
Once upon a time, remixing a song meant actually redoing the mix. Many vintage consoles (some Neve 80-series, for example) have a button labeled “remix” that changes a few functions on the desk to optimize it for mixing rather than recording.
But sometime in the late 20th century, the word “remix” began to take on a new meaning: creating a new arrangement of an existing song using parts of the original recording. Into the 21st century, it’s evolved again and is now sometimes used as a synonym for “cover.” The latter two definitions remain in common use, while the first has largely disappeared.
Language is constantly evolving, and musical terms are obviously no exception. In fact, in music, language seems to evolve particularly fast, most likely owing to lots of interdisciplinary collaboration and the rapid growth of DIY.
Ambiguous or unorthodox use of language has the potential to seriously impede communication between collaborators. To avoid confusion, let’s break down standard usage of some of the most commonly conflated, misused, or misunderstood music-related terms.
GAIN / DISTORTION
Gain, as it’s used in music electronics, is defined by Merriam-Webster as, “An increase in amount, magnitude, or degree — a gain in efficiency,” or, “The increase (of voltage or signal intensity) caused by an amplifier; especially: the ratio of output over input.”
To put it in less formal terms, gain is just an increase in strength. If an amplifier makes a signal stronger, then it causes that signal to gain intensity. Gain is usually expressed as a ratio. If an amplifier makes a signal 10 times as loud, then that amplifier has a “gain of 10.”
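If you want the decibel version of that ratio, amplitude (voltage) gain in dB is 20 times the base-10 logarithm of the ratio. A tiny illustrative sketch:

```python
import math

# Converting a gain ratio to decibels. For voltage/amplitude gain,
# dB = 20 * log10(ratio) -- so the "gain of 10" amplifier described
# above is, equivalently, a 20 dB gain stage.

def ratio_to_db(ratio: float) -> float:
    return 20.0 * math.log10(ratio)

print(ratio_to_db(10))  # 20.0
print(ratio_to_db(1))   # 0.0 -- unity gain: no increase at all
```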
On the other hand, harmonic distortion is that crunchy or fuzzy sound that occurs when an amplifier clips (as a result of its inability to handle the amount of signal thrown at it).
In the 1970s, some guitar amp manufacturers began employing extra gain stages in their designs to generate harmonic distortion on purpose. In other words, they’d amplify the signal, then amplify it again, and that second gain stage — having been given more than it could handle — would distort. These became known as “high-gain amplifiers.” Because of this, many guitarists just assumed that gain was synonymous with distortion. This was cemented when later amps like the Marshall JCM900 had knobs labeled “gain” that, by design, increased the amount of harmonic distortion when turned up!
Outside the realm of electric guitar, though, gain is still most typically used in a conventional way. When a recording engineer talks about “structuring gain,” for example, he or she is usually specifically trying to avoid harmonic distortion. It’s easy to see how this might cause confusion!
TONALITY / TONE
Not to pick on guitarists, but this is another one that trips us up. Tone has many music-related definitions, but the one of interest at the moment is (again, per Merriam-Webster), “Vocal or musical sound of a specific quality…musical sound with respect to timbre and manner of expression.”
The dictionary definition of tonality, on the other hand, centers on “tonal quality.”
It’s important to note that “tonal quality” here refers to “the quality of being tonal,” or the quality of being in a particular key (in other words, not atonal). This is a different matter from “tone quality,” which is commonly understood to mean “timbre.” Most musicians with formal training understand tonality either as a synonym for key or as the quality of being in a key.
If you’re trying to sound fancy, it can be tempting to reach for words with more syllables, but using tonality as a synonym for timbre can be confusing. Imagine you’re recording two piano pieces — one utilizing 20th-century serial composition techniques and the other utilizing functional harmony. If you express concerns about the piano’s “tonality” while recording the second piece, the composer would probably think you were criticizing his or her work!
OVERDUB / PUNCH-IN
Most musicians in the modern era understand the difference between these two concepts, but they still occasionally confuse folks relatively new to the process of recording.
Overdubbing is adding an additional layer to an existing recording.
“Punching in” is replacing a portion of an already-recorded track with a new performance.
To do a “punch-in” (in order to fix a mistake, for example), the performer plays along with the old performance until, at the appropriate moment, the recordist presses record, thus recording over the mistake. The recordist can then “punch out” to preserve the remainder of the original performance once the correction is made.
GLISSANDO / PORTAMENTO
A portamento is a continuous, steady glide between two pitches without stopping at any point along the way.
A glissando is a glide between two pitches that stair-steps at each intermediate note along the way. A glissando amounts, in essence, to a really fast chromatic scale.
To play a glissando on guitar, you’d simply pluck a string and slide one finger up the fretboard. The frets would make distinct intermediate pitches, creating the stair-stepped effect. If you wished to play a portamento on guitar, you could either bend the string or slip a metal or glass slide over one of the fingers of your fretting hand.
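For the mathematically inclined, the stair-steps of a glissando can be sketched with equal-tempered semitone spacing, where each step is a factor of 2^(1/12) above the last (the A4 = 440 Hz example is my own illustration):

```python
# A glissando stair-steps through discrete semitones; a portamento
# sweeps through every frequency in between. In equal temperament,
# each semitone multiplies the frequency by 2 ** (1/12).

def glissando_freqs(start_hz: float, semitones: int) -> list[float]:
    """Frequencies of each discrete step in an upward glissando."""
    return [start_hz * 2 ** (n / 12) for n in range(semitones + 1)]

# Sliding from A4 (440 Hz) up 12 frets lands on A5, one octave higher:
freqs = glissando_freqs(440.0, 12)
print(round(freqs[-1]))  # 880
```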
VIBRATO / TREMOLO
While often used interchangeably in modern practice, vibrato and tremolo are actually distinct kinds of wiggle. In most cases, tremolo is amplitude modulation (varying the loudness of the signal), whereas vibrato is frequency modulation (varying the pitch of the signal).
But over the past few hundred years, tremolo has commonly referred to many different performative actions. On string instruments, tremolo is used to refer to the rapid repetition of a single note, and in percussion, tremolo is often used to describe a roll. Singers use it for even crazier things, like a pulsing of the diaphragm while singing¹.
Leo Fender must’ve had his terms confused — he labeled the vibrato bridges on his guitars “synchronized tremolo,” and the tremolo circuits on his amps “vibrato.” Confusion has reigned ever since.
ANALOG / DIGITAL
Analog and digital are perhaps the most confused pair of words in the 21st-century musical lexicon. I once had a somewhat older musician tell me that my 1960s-era fuzz pedal and tape echo made my guitar sound “too digital” for his music. Likewise, countless younger musicians claim to prefer the “analog sound” of the original AKAI MPC (an early digital sampler) and the Yamaha DX-7 (an early digital FM synthesizer). But “analog” and “digital” are not simply stand-ins for “vintage” and “modern,” nor for “hardware” and “software.” They’re entirely different mechanisms for storing and generating sounds. Let’s learn a little more!
Merriam-Webster’s most relevant definition of analog is, “Of, relating to, or being a mechanism in which data is represented by continuously variable physical quantities.”
Also relevant is its first definition of analogue: “Something that is analogous or similar to something else.”
Now, how does this relate to music technology? It all goes back to humans’ longstanding search for a way to capture and store sound. Sound, on a basic scientific level, is nothing more than compression and rarefaction (decompression) of air that our ears can sense. Since air pressure fluctuations can’t really be stored, recording sound proved elusive for a long time.
20th-century scientists and engineers, however, brilliantly figured out that recording sound might be possible if they could accurately transfer that sound into something that could be preserved. They needed something storable that would represent the sound; an analogue to stand in for the sound that would allow it to be captured and kept.
First, they used mechanically generated squiggles on a wax cylinder as the analogue. Eventually, they figured out that they could use alternating-current electricity (which oscillates between positive and negative voltage), as an analogue of sound waves (which oscillate between positive and negative air pressure). From there, it was a relatively short leap to figuring out that they could, through electromagnetism, store that information as positively and negatively charged magnetic domains, which exist on magnetic tape.
This is analog recording!
Since electric voltage is continuously variable, any process — including synthesis — that represents air pressure fluctuations exclusively using alternating current electricity is analog, per Merriam-Webster’s first definition above.
Digital, on the other hand, is defined as, “Of, relating to, or using calculation by numerical methods or by discrete units,” and, “Of, relating to, or being data in the form of especially binary digits, digital images, a digital readout; especially : Of, relating to, or employing digital communications signals, a digital broadcast.”
That’s a little arcane, so let’s put it this way: Rather than relying directly on continuous analog voltages, a digital recorder or synthesizer computes numerical values that represent analog voltages at various slices of time, called samples. These will then be “decoded” into a smooth analog signal later in order to be accurately transferred back into actual air pressure variations at the speaker. If that’s a blur, don’t worry — you only need to understand that this is a fundamentally different process of storing or generating sound.
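As a toy illustration of that sampling idea (not how any real converter is implemented), here is a Python sketch that stores discrete numeric snapshots of a continuous waveform; the 1 kHz tone is just an example signal:

```python
import math

# A digital system stores numerical values that represent the
# signal at fixed slices of time. 44,100 Hz is the CD-standard
# sample rate.

SAMPLE_RATE = 44_100  # samples per second

def sample_sine(freq_hz: float, n_samples: int) -> list[float]:
    """Return n discrete samples of a sine wave -- the 'numerical
    values at various slices of time' described above."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

samples = sample_sine(1000.0, 8)  # first 8 samples of a 1 kHz tone
print(len(samples))  # 8
```

On playback, a digital-to-analog converter smooths these numbers back into a continuous voltage for the speaker.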
Absent a real acquaintance with the technology of an individual piece of equipment or process, it’s probably safer to avoid leaping to conclusions about whether it’s analog or digital. For example, there are reel-to-reel magnetic tape machines (like the Sony PCM 3348 DASH) that don’t record analog voltage-based signal at all, but rather use the tape to store digital information (as simple ones and zeroes).
Since you can’t judge whether a piece of gear is analog or digital with your eyes, it’s probably best to only use these terms when you need to refer to the specific technologies as outlined above. In other words, next time you’re recording in a studio with a cool-looking piece of old gear, it’s probably safer to use #vintage instead of #analog to caption your in-studio Instagram photo!
PHASE / POLARITY
Phase is defined by Merriam-Webster as… (deep breath):
“The point or stage in a period of uniform circular motion, harmonic motion, or the periodic changes of any magnitude varying according to a simple harmonic law to which the rotation, oscillation, or variation has advanced from its standard position or assumed instant of starting.”
That’s a mouthful! This is a concept that’s easier understood with an example, so let’s imagine that you have a swinging pendulum:
If you were to freeze that pendulum at two different times, the dot at the end would be in two different locations. The pendulum’s swing occurs over time, so the location of the pendulum depends on when you stop it. We’d refer to the phase of the pendulum in order to describe this phenomenon and where the pendulum is in its cycle relative to time. And since it’s always moving in a continuous, smooth arc, there are an infinite number of possibilities!
Phase becomes potentially relevant for anything that’s oscillating or undulating — like the pendulum above or a sound wave.
Polarity, on the other hand, is defined as, “The particular state, either positive or negative, with reference to the two poles or electrification.”
To put it in very simple terms, you’re dealing with polarity any time you install a battery. The battery has a positive terminal and a negative one. You have to make sure it’s installed the right way. While phase is infinitely variable, polarity has only two choices — it’s one or the other.
In our brief explanation of analog audio above, we mentioned that positive and negative swings of voltage are used to represent positive and negative changes in air pressure. If we switch polarity of a signal, we swap all the positive voltages for negative ones, and vice-versa. +1v becomes -1v, +0.5v becomes -0.5v, etc. This is usually accomplished with a button marked with the Greek letter theta or “Ø.”
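In code terms, that polarity flip is nothing more than a sign change on every sample value. A minimal sketch:

```python
# Polarity inversion is just a sign flip on each sample -- exactly
# the +1v -> -1v swap described above. The sample values here are
# arbitrary examples.

def invert_polarity(samples: list[float]) -> list[float]:
    return [-s for s in samples]

print(invert_polarity([1.0, 0.5, -0.25]))  # [-1.0, -0.5, 0.25]
```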
Interestingly, if you have one signal alone, it’s usually the case that our ear can’t really tell the difference between positive or negative polarity. It’s when you combine two or more similar signals (like two microphones on one drum for instance) that a polarity flip of one or the other can have a dramatic influence on the sound.
Confusingly, this influence is a result of phase differences between the two sources, and switching polarity can often improve (or worsen!) the sound of two combined sources which are slightly out of phase. For this reason, the polarity switch is often called a “phase switch,” and depressing it is often colloquially referred to as “flipping phase.”
In the graphic below, you’ll see a brief, zoomed-in snapshot of two waveforms. A single bass performance was simultaneously recorded into both a direct box (blue) and through a mic on its amplifier (green).
In the first graphic, you can notice that the two are slightly out of phase. The blue direct-in wave swings negative ever so slightly before the green mic–on–amp one does. This is because the amp’s sound had to travel through the air briefly before being picked up by the microphone. Since sound in air travels much more slowly than electricity does, this creates a slight time delay or phase discrepancy.
In the second example below, I’ve flipped the polarity of the amp track. You can see that the time delay still exists, but now the amp track’s wave is inverted or “upside down.” As the DI track swings negative, the amp track swings positive.
In this case, the switch made the combined sound noticeably thinner, so I quickly flipped it back. Occasionally though, flipping polarity improves the combined sound of two sources which are slightly out of phase.
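If you’re curious how big that timing offset actually is, here’s a back-of-the-envelope Python sketch (the 30 cm mic distance is an assumed example, not a measurement from this session):

```python
# Why the mic'd amp lags the DI: sound travels through air at
# roughly 343 m/s, while the DI signal arrives effectively
# instantaneously over the cable.

SPEED_OF_SOUND_M_S = 343.0

def acoustic_delay_ms(distance_m: float) -> float:
    """Time in milliseconds for sound to cross the given distance."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

# A mic 30 cm from the speaker cone hears the sound almost a
# millisecond late -- enough to visibly offset the two waveforms.
print(round(acoustic_delay_ms(0.30), 2))  # 0.87
```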
In practice, most recordists will understand what you mean if you say “flip the phase,” but should there happen to be a physicist in the room, you might get a raised eyebrow! Generally, though, this is a classic example of how unorthodox usage sometimes becomes accepted over time.
Which raises the point: any of the musical and audio terms above may eventually, like “remix” before them, evolve to incorporate new shades of meaning (or even have some earlier “correct” definitions fall into disuse). In the meantime, though, the more precise your grasp on the language of music, the less likely you are to misunderstand or be misunderstood.
¹ In performance, for both singers and many instrumentalists, pure tremolo is almost impossible to achieve without taking on some characteristics of vibrato — that is to say that a passage is played or sung with only variations of either pitch or volume.
[Editors Note: This blog was written by Alex Sterling, an audio engineer and music producer based in New York City. He runs a commercial studio in Manhattan called Precision Sound, where he provides recording, mixing, and mastering services.]
As an audio engineer and music producer, I am constantly striving to help my clients’ music sound the best that it can for as many listeners as possible. With music streaming services like Apple Music/iTunes Radio, Spotify, Tidal, and YouTube continuing to dominate how people consume music, making sure that the listener is getting the best possible sonic experience from these platforms is very important.
Over the last several years, a new technology called Loudness Normalization has been developed and integrated into the streaming services’ playback systems.
Loudness Normalization is the automatic process of adjusting the perceived loudness of all the songs on the service to sound approximately the same as you listen from track to track.
The idea is that the listener should not have to adjust the volume control on their playback system from song to song and therefore the listening experience is more consistent. This is generally a good and useful thing and can save you from damaging your ears if a loud song comes on right after a quiet one and you had the volume control way up.
The playback system within each streaming service has an algorithm that measures the perceived loudness of your music and adjusts its level to match a loudness target level they have established. By adjusting all the songs in the service to match this target the overall loudness experience is made more consistent as people jump between songs and artists in playlists or browsing.
If your song is louder than the target it gets turned down to match and if it is softer it is sometimes made louder with peak limiting depending on the service (Spotify only).
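In other words, the adjustment is just the dB difference between your master’s measured loudness and the service’s target. A simplified sketch (the measured value is a made-up example, and real services measure integrated loudness over the whole track rather than taking a single number at face value):

```python
# Sketch of the normalization step: the service measures a track's
# loudness in LUFS and applies whatever dB offset is needed to hit
# its target level.

def normalization_gain_db(measured_lufs: float, target_lufs: float) -> float:
    """Gain the service applies: negative = turned down, positive = turned up."""
    return target_lufs - measured_lufs

# A hot master at -8 LUFS played on a -14 LUFS service gets pulled
# down by 6 dB -- loudness gained in mastering is simply undone.
print(normalization_gain_db(-8.0, -14.0))  # -6.0
```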
So how do we use this knowledge to make our music sound better?
The simple answer is that we want to master our music to take into account the loudness standards that are being used to normalize our music when streaming, and prepare a master that generally complies with these new loudness standards.
Concept 1: Master for sound quality, not maximum loudness.
If possible work with a professional Mastering Engineer who understands how to balance loudness issues along with the traditional mastering goals of tonal balance and final polish etc.
If you’re mastering your own music then try to keep this in mind while you work:
If we master our music to be as loud as possible and use a lot of peak limiting to get the loudness level very high then we are most likely sacrificing some dynamic range, transient punch, and impact to get our music to sound loud.
The mechanism of loudness maximization intentionally reduces the dynamic range of our music so the average level can be made higher. There are benefits to this such as increasing the weight and density of a mix, but there are also negatives such as the loss of punch and an increase in distortion. It’s a fine line to walk between loud enough and too loud.
Here is where loudness normalization comes in:
If our song is mastered louder than the streaming target loudness level then our song will be gained down (by the service) as a result. If you are mastering louder than the target level then you are throwing away potential dynamic range and punch for no benefit and your song will sound smaller, less punchy, and more dynamically constrained in comparison to a song that was mastered more conservatively in regards to loudness.
If we master softer than the target level then in some cases (Spotify) the streaming service actually adds gain and peak limiting to bring up the level. This is potentially sonically adverse because we don’t know what that limiting process will do to our music. Will it sound good or not? It most likely will create some loss of punch but how much is lost will be based on what content was put in.
Some music is more sensitive to this limiting process. High dynamic range jazz or classical music with pristine acoustic instruments might be more sonically damaged than a rock band song with distorted guitars for example so the result is not entirely predictable just on loudness measurement but also on musical style.
Thankfully the main platforms other than Spotify don’t add gain and peak limiting as of this writing so they are less potentially destructive to sound quality for below target content.
Concept 2: Measure loudness using a LUFS/LKFS meter.
The different streaming services have different loudness standards and algorithms to take measurements and apply the normalization but for the most part they use the basic unit system of loudness measurement called LUFS or LKFS. This metering system allows engineers to numerically meter how loud content is and make adjustments to the dynamic range accordingly.
Being able to understand how our music masters are metering with this scale is useful to see what will happen when they are streamed on different services (i.e. will the algorithm gain them up or down to meet the target or not?)
Concept 3: Choose which loudness standard to master to.
Direct your mastering engineer, if you are working with one, to master to a target loudness level, and consult with them about what they feel is an appropriate target for your music. If you are mastering jazz or classical music, you probably don’t want to make a very loud master, for sound quality and dynamic range reasons, but if you are making a heavy rock, pop, or hip-hop master that wants to be more intense, then a louder target may be more suitable.
iTunes Sound Check and Apple Music/iTunes Radio use a target level of -16 LUFS, and this would be a suitable target for more dynamic material.
Tidal uses a target level of -14 LUFS, which is a nice middle ground for most music that wants to be somewhat dynamic.
YouTube uses a target level of -13 LUFS, a tiny bit less dynamic than Tidal.
Spotify uses a loudness target of -11 LUFS, and as you can see, this is 5 dB louder than iTunes/Apple Music. This is more in the territory of low dynamic range, heavily limited content.
Somewhere between -16 LUFS and -11 LUFS might be the best target loudness for your music, based on your desired dynamic range, but the goal is not to go above the chosen target; otherwise your content gets gained down on playback and dynamic range is lost.
In all services except Spotify, content that measures lower than target loudness is not gained up. So for people working with very dynamic classical music or film soundtracks those big dynamic movements will not be lost on most streaming platforms.
However since Spotify is unique and adds gain and peak limiting if your content is below target it is potentially the most destructive sonically. So should you master to -11LUFS and save your music from Spotify’s peak limiting but lose dynamic range on the other platforms? It’s a compromise that you have to decide for yourself in consultation with your mastering engineer.
You might want to test out what -11LUFS sounds like in the studio and hear what the effect of that limiting is. Is it better to master that loud yourself and compensate in other ways for the lost punch and lower dynamic range? Or should you accept that Spotify users get a different dynamic range than iTunes users and let your music be more dynamic for the rest of the platforms?
In all cases there is no benefit to going above -11 LUFS because that is the loudest target level used by any service. If you go louder than -11LUFS then your music will be turned down and dynamic range and punch will be lost on all the services needlessly and permanently.
[Editors Note: This is a guest blog written by Jason Moss. Jason is an LA-based mixer, producer and engineer. His clients include Sabrina Carpenter, Madilyn Bailey, GIVERS and Dylan Owen. Check out his mixing tips at Behind The Speakers.]
Setting up a home recording studio can be overwhelming.
How do you know what equipment to buy? Which software is best? How can you make sure everything will work together?
Take a breath. This guide will walk you through the process, step by step. It contains everything you need to know, including equipment recommendations. Make your way to the bottom of this page, and you’ll have your home recording studio up and running in no time. This way, you can get on to the good stuff—making great recordings!
Your computer is the command center of your home recording studio. It’s the brains and brawn behind the entire operation.
This is one area where you don’t want to skimp.
Recording will place high demands on your computer, and you’ll need a machine that can keep up. If you plan on tackling projects with lots of tracks or producing electronic music, this is even more important. The last thing you want is your computer to slow you down. There’s no faster way to kill a moment of musical inspiration.
Laptop Or Desktop?
If you absolutely need to record on the go, a laptop may be your only choice. But be prepared to pay more and walk away with a less capable machine.
Go for a desktop whenever possible. Dollar for dollar, they’re faster, more powerful, and offer more storage. They also last longer and fail less, because their internal components don’t overheat as easily. And since a desktop doesn’t sit in front of your face, the noise from its fans will be less of an issue. (Microphones are super sensitive, so a noisy room will lead to noisy recordings. I worked on a laptop for years, and fan noise was a constant problem.)
PC Or Mac?
While my first computers were PCs, I’m now a Mac guy through and through. Macs crash less. They’re also the computer of choice for music-makers (you’ll find them in most home recording studios). Because of this, updates and bug fixes for recording software will often be released for Mac users first.
With that being said, most recording software and hardware is compatible with both platforms. Macs are also more expensive, so this may influence your decision. If you’re more comfortable using a PC, you can make it work. Just make sure your audio interface and software are compatible with whatever you choose.
4 Computer Specs That Really Matter
When you’re trying to find the right computer for your home recording studio, it’s easy to get lost in techno-speak. The following 4 specs are what count. Hit the guidelines below, and your computer will handle nearly any recording session with ease.
CPU (Clock Speed & Number Of Cores)
If a computer were a car, the CPU would be its engine. Clock speed is like the number of cylinders an engine has. The higher the number, the faster the CPU. A fast CPU will handle large recording sessions gracefully.
If the CPU has multiple cores, this is even better. Multiple cores will allow it to multitask more effectively.
It can be difficult to compare CPUs (especially those with a different number of cores). To make this easier, you can use sites like CPUBoss or CPU Benchmark.
Good: 2.6 GHz dual-core
Better: 2.8 GHz dual-core
Best: 3+ GHz quad-core
RAM
RAM is your computer’s short-term memory. More RAM will make your computer run faster, particularly when working with large, complex projects.
Good: 8 GB
Better: 12 GB
Best: 16+ GB
Hard Drive (Space & Type)
A computer’s hard drive is its long-term memory. This is where your recordings will be stored. Recorded audio takes up lots of space, so you’ll want plenty to spare. If you end up filling your hard drive, you can always buy an external one. However, it’s always better to start with more space.
But when it comes to hard drives, space isn’t all that matters. In fact, speed is even more important.
The best hard drives are solid-state. While they typically offer less storage space, they’re worth every penny. Solid-state drives use flash memory (the same technology you’ll find in a USB thumb drive) and have no moving parts. They’re much faster than their mechanical predecessors. If your computer has a solid-state drive, it will be much snappier when playing back and recording projects with large track counts.
If you can’t avoid a mechanical drive, opt for one that spins at 7,200 RPM. It will deliver data about 33% faster than a 5,400 RPM drive. This really matters if you plan on tackling projects with 30+ tracks.
Good: 500 GB 7,200 RPM mechanical drive
Better: 1 TB 7,200 RPM mechanical drive
Best: 500+ GB solid-state drive
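If you want to sanity-check how much drive space you actually need, the math is simple. Here’s a back-of-the-envelope sketch in Python. It assumes uncompressed 24-bit / 48 kHz mono tracks (a common recording format); the 30-track, 4-minute song is a made-up example, not a recommendation:

```python
# Rough disk-space math for uncompressed multitrack audio.
# Assumes 24-bit / 48 kHz mono tracks; adjust for your own format.

SAMPLE_RATE = 48_000   # samples per second
BYTES_PER_SAMPLE = 3   # 24-bit audio = 3 bytes per sample

def session_size_mb(tracks, minutes):
    """Approximate size of a multitrack session, in megabytes."""
    total_bytes = tracks * minutes * 60 * SAMPLE_RATE * BYTES_PER_SAMPLE
    return total_bytes / 1_000_000

print(round(session_size_mb(1, 1), 2))   # one track-minute ≈ 8.64 MB
print(round(session_size_mb(30, 4), 1))  # a 30-track, 4-minute song ≈ 1 GB
```

In other words, a single song can eat a gigabyte before you’ve added a single alternate take, which is why I suggest starting with more space than you think you need.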
Connections
Your audio interface (see below) will connect to your computer using USB, Thunderbolt, or FireWire. Make sure there’s a port available on your computer for it. If you plan on using a MIDI keyboard or other accessories, make sure you’ve got enough free ports to accommodate them too.
Best Bang For Your Buck: Mac Mini
The Mac Mini is seriously underrated. This is what I use in my home recording studio, and it’s more than enough. Opt for a solid-state drive and maxed-out memory for even more power. And don’t forget—you’ll need a keyboard, mouse, and monitor too.
For Mobile Music-Makers: MacBook Pro
If you need to be mobile, the MacBook Pro is a great choice. Just be prepared for fan noise.
For Those Who Want The Best: Mac Pro
It isn’t cheap, but you’ll find the Mac Pro in most professional recording studios. Even the baseline unit is more than enough.
Your audio interface is the heart of your home recording studio. While it may look intimidating, it’s nothing more than a fancy routing box. This is where you’ll plug in microphones, speakers, and headphones. It’s also where the signal from your microphones gets converted into ones and zeros, so your computer can make use of it.
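That "ones and zeros" conversion is easier to picture with a toy example. The sketch below is purely illustrative (it isn’t how any real interface’s converter is coded): it samples a 440 Hz tone at a fixed rate and quantizes each sample to a 16-bit integer, which is all "digitizing audio" really means:

```python
import math

# Toy illustration of analog-to-digital conversion:
# sample a continuous signal at a fixed rate, then quantize
# each sample to a 16-bit integer. The 440 Hz tone and these
# settings are illustrative, not from any particular interface.

SAMPLE_RATE = 44_100                 # samples per second
MAX_INT = 2 ** 15 - 1                # 32767, the largest 16-bit sample value

def digitize(freq_hz, n_samples):
    """Return the first n_samples of a sine tone as 16-bit integers."""
    return [
        round(math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE) * MAX_INT)
        for n in range(n_samples)
    ]

print(digitize(440, 5))  # the first few "ones and zeros" of an A440 tone
```

Every microphone signal that passes through your interface ends up as a stream of numbers like these, which is what lets your computer record, edit, and play it back.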
Interfaces vary widely in features. Some have knobs to adjust the volume of your speakers and microphones. Others accomplish this through a software control panel. However, all great interfaces are transparent—they don’t add any noise or distortion to the sound. This is where high-end interfaces often differ from cheaper ones.
Here are some things to keep in mind when choosing an interface:
Number Of Mic Preamps
The more preamps, the more microphones you can record at once. If you’re only recording vocals, one may be all you need. To record instruments with multiple mics (such as acoustic guitar in stereo), you’ll need at least 2. To record drums or people playing together, go for 4 or more.
Quality Of Mic Preamps
When it comes to mic preamps, people get distracted by quantity. They think more is better, so they buy cheap interfaces with 8 preamps.
This is a rookie mistake.
Cheap preamps will add noise and distortion to your recordings. This will become a permanent part of your tracks, and it can add a harsh, brittle quality to your music.
Quality is more important than quantity. Avoid cheap interfaces with 8 preamps. Instead, go for an interface with 2 or 4. You’ll walk away with a higher-quality interface, often at the same price.
1/4″ Instrument Input
With a 1/4″ input, you can record electric guitar or bass without an amp. You can then use software to shape the tone. This isn’t an essential feature, but it’s handy (especially if you’re a guitarist or bassist).
Pro Tip: If your interface doesn’t have a 1/4″ input, a direct box will do the same thing.
Outputs
Make sure your interface has the same type of outputs your speakers use (either XLR, 1/4″, or RCA). If there’s a mismatch, you’ll have to use an adapter or special cable to connect them. While this isn’t a huge deal, it’s best avoided.
Headphone Jack
With a headphone jack, you’ll be able to plug in a pair of headphones and listen back while recording. This is an essential feature, and almost all interfaces have one.
Pro Tip: Most interfaces have a 1/4″ headphone jack. This is larger than the 1/8″ plug on most consumer headphones. To use consumer headphones with your interface, you’ll need a 1/8″ to 1/4″ adapter.
Compatibility
Most interfaces will connect to your computer using USB, FireWire, or Thunderbolt. Make sure your computer has a free port of that type available.
You’ll also want to make sure your interface is compatible with your recording software. You can find this information on the interface manufacturer’s website.
How To Find A Mic That Makes You Sound Radio-Ready
Microphones are the ears of your home recording studio. They convert sound into electricity (which gets sent to your interface).
If you’re a guitarist, you know that every guitar sounds different. You might reach for a Tele over a Strat, depending on the part you’re playing. Microphones work the same way. One might sound better than another in a specific situation. But if you’re starting out, you don’t need a dozen mics to cover your bases…
This Type Of Mic Will Always Get The Job Done
There’s one type of microphone that sounds great on just about anything (including vocals).
It’s called a large-diaphragm, cardioid condenser.
If you’re only going to get one for your home recording studio, this should be it. Here’s why:
Large diaphragm: The diaphragm is the part of the mic that picks up sound. A large diaphragm makes the mic better at picking up low frequencies (like the body and warmth of your voice). This means it will faithfully capture the full tonal range of sounds.
Cardioid: This is the microphone’s polar pattern. It dictates what the mic will pick up, and more importantly, what it won’t. A cardioid mic will pick up what’s in front of it, but almost nothing to the sides or behind it. You can use this feature to reduce the level of unwanted noise in your recordings (like air conditioning rumble, noisy neighbors, or chirping birds). Just position the back of the mic towards the source of the noise!
Condenser: Refers to the technology the mic uses to capture sound. Condenser mics do a better job at picking up high frequencies (like the sizzle of cymbals or the crispness of a voice) than any other type of mic.
What About USB Mics?
Avoid them. While you won’t need an interface to use one, they are of lower quality than most traditional mics. They also aren’t future-proof; if USB ports become obsolete, you’ll need to buy a new mic.
Recommendations For Large-Diaphragm Cardioid Condenser Mics
How To Choose Studio Monitors That Supercharge Your Tracks
Studio monitors are speakers designed for use in home recording studios. You’ll need these to play back and mix your recordings.
These are different than the speakers you might buy for your living room. Whereas consumer speakers often flatter and enhance the sound, studio monitors are neutral and uncolored. They won’t sound as pretty as typical speakers—in fact, they may even sound dull.
Listen on speakers like these, and you’ll hear what’s really going on in your music. Great studio monitors will force you to work harder to craft a mix that sounds good. This will lead to tracks that sound great on a variety of different speakers, not just ones that sweeten or hype up the sound.
Can’t I Just Use Headphones?
Headphones are notoriously difficult to mix on, and tracks mixed on headphones often don’t hold up on speakers. (There are, however, other uses for headphones. You’ll learn more about this below.) If you’re doing basic voiceover work, you may be able to forgo studio monitors. But if you’re recording music, it’s crucial to invest in them.
4 Studio Monitor Specs That Really Matter
When choosing studio monitors for your home recording studio, it’s easy to get distracted by frequency plots and technical jargon. Here’s what really counts:
Active Vs. Passive
Speakers need an amplifier to produce sound. If a speaker is active, it means the amplifier is built-in. This makes active speakers completely self-contained—you just need to plug them into the wall and your interface. On the other hand, passive speakers need a separate power amp to function. I would avoid them, as they add another piece of equipment to your home recording studio.
Near-Field Vs. Mid/Far-Field
Near-field monitors are built to be used in close quarters, like a home studio. Mid-field and far-field monitors are built to be placed farther away from your ears, and are more suitable for larger spaces. Go for a pair of near-fields (unless you live in a castle).
Frequency Response & Range
Most studio monitors have a fairly flat frequency response. This means they sound neutral—the bass isn’t louder than the treble, and everything is well-balanced. However, even the flattest studio monitors will sound different in your home recording studio (room acoustics affect speakers dramatically). For this reason, I wouldn’t obsess over the frequency response of your speakers. You can always use software like Sonarworks Reference 3 to flatten things out later on.
Pay attention to how far the speakers extend down the frequency spectrum. This will often be quoted as the bottom number in a range (from 40 Hz to 20 kHz, for example). Smaller speakers won’t extend down as far. This will make it harder to hear what’s going on in your recordings. Try to find speakers that extend to 40 Hz or below.
Connections
Your studio monitors will have XLR, 1/4″, or RCA inputs. Make sure these are the same type of connectors your interface uses. If the two don’t match up, you’ll need a special adapter or cable to connect them. This isn’t a big deal, but it’s best avoided.
Headphones are an invaluable studio ally. You can use them while overdubbing, mixing, or to avoid disturbing your neighbors.
Like studio monitors, studio headphones are designed to be tonally neutral. While I don’t recommend mixing on them exclusively, headphones like these will offer you an accurate, unbiased perspective on your recordings.
When trying to find the right pair, here are some things to keep in mind:
Open-Back Vs. Closed-Back
Open-back headphones have perforations on the outside of each cup which allow sound to pass through easily. They typically sound better than closed-back headphones, and are the preferred choice for mixing. However, since sound leaks out of them so easily, they’re not ideal for recording (microphones will pick up the leakage).
On the other hand, closed-back headphones have a hard enclosure that prevents sound from escaping. This makes them a better choice for recording, when maximum isolation is needed.
If you’re only going to buy a single pair for your home recording studio, go for closed-back. They’re more versatile.
Plug Size
Most pro studio headphones use a 1/4″ plug. This is thicker than the 1/8″ plug you’ll find on most consumer headphones. If you want to plug your studio headphones into an iPhone or laptop, you’ll need a 1/4″ to 1/8″ adapter.
Comfort And Fit
You’ll be wearing these for hours on end, so you want them to be comfortable. Cushy foam padding makes a big difference. Also, look for headphones that rest over, not on your ears. And if possible, try them on before you purchase!
While they may look cool, massive mixing consoles are now collecting dust in top-tier studios across the globe.
You don’t need them anymore. In many cases, they’ve been replaced by digital audio workstations.
A digital audio workstation, or DAW, is the software that will power your home recording studio. It’s what you’ll use to record, play back, and manipulate audio inside your computer. Arm yourself with a great DAW, and you’ll be able to do everything you can do on one of those hulking consoles (and more).
What’s The Best-Sounding DAW?
Visit any online audio forum and you’ll find people who claim one DAW (usually the one they use) sounds better than the rest.
This isn’t true. In fact, all DAWs sound exactly the same. The differences between them have more to do with workflow than anything else.
My 3 Favorite DAWs
When choosing a DAW, there are tons of great options. Here are my favorites:
As a mixer, I reach for Pro Tools. It’s been my DAW of choice for nearly a decade.
You’ll find Pro Tools in most recording studios. This is helpful if you ever end up recording in a commercial studio, because you’ll be able to open the projects you save on your own rig. This means you’ll be able to record drums in a professional studio, for example, and then edit them later in your home recording studio.
Pro Tools excels as a recording platform. Its audio-editing features are second-to-none. However, beatmakers or EDM producers may be better off with one of the DAWs below.
Logic is the preferred choice for many producers. It features a fantastic library of sounds and plugins—one of the most comprehensive packages available. When I’m not mixing, it’s my favorite DAW.
Unfortunately, Logic is Mac-only.
Ableton Live is great for loop and sample-based producers. In fact, many EDM producers swear by it. Its audio manipulation tools are flexible and innovative, and it can be easily integrated into a live performance. If I was an electronic music producer, Ableton Live would be my choice.
Other DAWs Worth Exploring
Your search shouldn’t stop here. Here are some other DAWs worth exploring:
How To Choose The Perfect DAW For You
Choosing a DAW is like dating. Download a few trial versions and take them for a spin. Explore your options and make sure things fit before committing. While all major DAWs have similar features, some do certain things better than others.
If you’ll be collaborating, check out what DAW your collaborators use. It’s much easier to work together if you’re both using the same software. But in the end, the choice is yours.
Don’t get too hung up here. Remember, The Beatles recorded Sgt. Pepper on a 4-track tape machine. Even the most basic DAW has infinitely more power. Go with your gut and move on.
Save Hundreds By Avoiding Unnecessary Plugins
As you start to explore the world of home recording, you’re going to run across plugins.
These are pieces of third-party software that extend the functionality of your DAW. They allow you to manipulate sound in different ways. Most people invest in plugins too early. If you’re just getting started, your DAW’s stock tools are more than enough to make great recordings. Master what you have first—more plugins won’t necessarily lead to better-sounding tracks.
We’ve covered the basics, but there are a couple of extras you’ll probably need too…
Cables
You’ll need an XLR cable to connect your mic to your audio interface.
You’ll also need a pair of cables to connect your speakers to your interface. These will be either 1/4″, XLR, or RCA—depending on which connectors your speakers and interface use.
Mic Stand
Go for quality here. Cheap, flimsy stands will be the bane of your existence. I prefer ones with three legs over those with a circular, weighted base. They tend to be more stable and are less likely to tip over.
Pop Filter
A pop filter is a mesh screen that sits between your microphone and vocalist. It helps diffuse the blasts of air that accompany certain consonants (like “p” and “b” sounds). Left alone, these blasts will overload your microphone’s diaphragm, leading to boomy, muddy recordings. This essential accessory will significantly improve the quality of your tracks.
Pro Tip: For a pop filter to work well, there needs to be a few inches between the filter and the mic, as well as the filter and the singer. If you push the filter right up against the mic or put your mouth on it, it won’t be able to do its job.
MIDI Keyboard
With a MIDI keyboard, you’ll be able to “play” any instrument imaginable. You can use it to fill out and orchestrate your recordings. If you’ll only be recording real instruments or vocalists, you won’t need one. But if you’re a beatmaker or electronic music producer, it’s almost essential.
Every decision you make while recording will be based on what you hear. If what you’re hearing isn’t accurate, you won’t make the right decisions. This will lead to recordings that sound good in your studio, but fall apart on other speakers.
You can avoid this by setting up your home recording studio properly. Don’t overlook this crucial step! If you follow the guidelines in the video below, you’ll be well ahead of most home studio owners. Your recordings will sound better too!
Taking Your Room To The Next Level With Acoustic Treatment
After your home recording studio is up and running, you’ll want to invest in acoustic treatment panels. These will improve the sound of your room by evening out acoustic problems. While acoustic treatment is beyond the scope of this article, I’ve put together a PDF with resources that will help you get started.
It’s Time To Build The Home Recording Studio Of Your Dreams
There will be nothing more satisfying than hearing your own recordings play over the speakers in your new home studio. You now have everything you need to make this happen.
The next step is for you to take action. Order the equipment you need, set up your room using the guidelines above, and start recording! Remember, once you get all this out of the way, you can get on to the good stuff—making great music!
But before you go, leave a comment below and tell me—what will you use your home recording studio for?
I wish you the best of luck on your home recording journey!
[Editors Note: This is a guest blog written by Jason Moss. Jason is an LA-based mixer, producer and engineer. His clients include Sabrina Carpenter, Madilyn Bailey, GIVERS and Dylan Owen. Check out his mixing tips at Behind The Speakers.]
Last year, the U.S. music industry made more money from streaming than CDs or digital downloads.
The times, they are a-changin’.
In case you haven’t noticed, the way we consume music is shifting. You’ve likely read about how this is impacting artists. But no one’s talking about how it will impact the sound of pop music.
Streaming won’t just change the way pop music is consumed, but also the way it’s created. This shouldn’t be surprising. In fact, there’s always been a relationship between music, medium, and distribution. For proof, look to the past.
In the ’60s, Motown built records for radio. Short song lengths allowed for the regular interjection of ads, and long intros gave DJs the freedom to talk over tracks. In the 1980s, the dawn of the CD gave way to longer-form content. The average album’s length increased from 40 minutes to well over an hour. And since it was no longer important to maintain the integrity of vinyl grooves, records started sporting wider low ends and louder levels. (Is it any surprise that hip hop emerged as a dominant genre during this time?) In the 2000s, Apple’s decision to unbundle the album and offer single-track downloads on iTunes shifted the trajectory of the music industry once again. After an album-oriented era that lasted decades, singles once again became the primary focus.
Throughout the history of the music business, the goal was always the same: get people to purchase records. Once that purchase was made, it didn’t matter whether the record was played or not.
The traditional pop music-making process evolved to serve these intentions. Infectious, hook-heavy records were crafted to drive listeners to the checkout aisle. The biggest hits seemed inescapable for a month or two, but often disappeared as quickly as they emerged. But as far as the music industry was concerned, this was irrelevant. As long as people bought the CD or downloaded the song, we were happy.
But streaming has completely changed the game. For the first time, financial success is no longer based on one-time sales, but on ongoing streams. The more a track is played, the bigger the payout. The implications of this shift are massive.
On streaming platforms, flash-in-the-pan tracks that burn bright and fade fast are less lucrative than ever. The most profitable pop songs instead burrow their way into the hearts of listeners, inspiring millions of streams for years to come. Success is no longer about the hit, but the replay.
This shift introduces a powerful new incentive to foster deeper, longer-lasting relationships with listeners. While tracks will still need to be hook-laden enough to inspire an immediate connection, they must also be worth listening to hundreds, if not thousands of times.
What will this mean for the pop hits of the future? We can only guess. As terrestrial radio continues to become less relevant, song structures and arrangements will likely become more fluid. New, innovative mediums may even emerge. Who says a recording has to present the same experience with every play? What if tracks evolved over time? What if, after one hundred plays, a bonus verse emerged? As play count becomes a dominant metric for measuring the success of tracks, ideas like these are fair game.
One thing’s for sure—as streaming continues to emerge as the dominant platform for music consumption, the sound of pop music will change. Will you change with it?