Studio Spotlight: The Record Co. Focuses on Access Over Profit in Boston

Boston, Massachusetts is home to over 250,000 college students. With institutions like Harvard, M.I.T., Berklee College of Music, Emerson, Boston College and a slew of others, it’s a given that you’d see plenty of artists and bands finding their legs in a major U.S. city – whether they’re undergrads meeting at a local party or show, or a grad student furthering their music career by way of education. Growing up in the area, I recall being obsessed with bands in the ‘local scene’ – catching the T to see bands play in places from Elks Lodges to 18+ venues that I had to ‘borrow’ an ID to get into. But even then I noticed a turnover, as bands would migrate to other parts like New York and L.A., or venues with all-ages access would close unexpectedly.

While this turnover isn’t uncommon, there’s still a lot to love about Boston’s music scene. Even so, it can be a difficult place to live and survive as a musician or engineer. And what about the potential fans who don’t know what’s in their backyard?

Enter The Record Co. – a Boston-based non-profit facility that provides affordable access to a quality recording space, along with opportunities for freelance engineers and producers. The result is a much-praised collaborative atmosphere that is helping to change the landscape of Boston’s independent music scene. Not to mention, The Record Co. does a wonderful job of showing off all Boston has to offer with their Boston Sessions collaborative mixtape series, with Vol. 2 coming out soon!

In this month’s Studio Spotlight, I spoke to Jesse Vengrove, Program Director (and engineer/musician) at The Record Co., to discuss the non-profit’s approach to offering this kind of access and how it’s been paying off:

First and foremost, what inspired you to start The Record Co. and do so as a non-profit?

Go up to any studio owner and ask them the following two questions and you’ll probably get similar responses:

1) “Are you making a large profit?” – “No”

2) “Why are you doing this then?” – “I love the work and I think it’s important/has cultural and/or artistic value.”

And there you have the most informal definition of a non-profit organization.

The Record Co. was founded in 2009 and, after a false start (the first location flooded), we moved to our current facility in 2010. The non-profit angle came out of a realization that no one really needs to own a studio; people just need access to one.

We wanted to create a space that was accessible to everyone, regardless of socioeconomic status, race, or gender, and a space that was part of the community and gave back to the city. We charge our clients to use the facility like any other studio, but the rates are subsidized by foundations/grants and individual donors who believe it’s important to cultivate a vibrant and creative scene in Boston.

We’ve found a way to allow artists to come in and use the facility at a price point that works for small/non-existent budgets while relying on other sources of funding to keep daily operations running. In 2017 we’re on track to host 1,100 sessions between the two rooms, so needless to say there’s a demand that we’re filling (while still seeing new studios pop up and legacy studios stay in business).

Give our readers a little bit of a breakdown of the facility overall. What sets your studios apart from others in the area?

We currently have about 5,000 sq/ft split up over 2 floors which gives us a fair amount of space. We have two studios, Studio A and Studio B (yeah, super creative!).  Studio A is 2,500 sq/ft and includes a full kitchen and a lounge (with an ever-growing homage to the amazing art collection at Goodwill). We wanted it to feel like you’re walking into your friend’s living room, warm and homey.  We kept a lot of the windows up there so there’s a lot of natural light, which really makes the room comfortable.  There are two iso booths in there and a large live room.  You can get giant drum sounds up there (and we once squeezed a 45-person orchestra in there) or you can control/segment the room with gobos.  It’s a large space but we did our best to keep sightlines open so no one feels disconnected.

Studio B is our smaller vocal/overdub room.  This room is a little more chic than Studio A; no windows to the outside, color-changing LED lights, a leather couch.  It’s small but spacious enough that it doesn’t ever feel crowded, and everyone always loves the homemade absorption panels covering the wall.  Studio B definitely has a more traditional feel to it compared to A, but it’s by no means sterile; it’s still a comfortable room to work in.  There’s a lounge outside the studio so there’s lots of space to spread out.  Studio B has its own private bathroom, which sounds most excellent for re-amping.

Obviously you provide a space for the many artists of Boston to record, but tell us a little more about how your setup has benefited freelance engineers over your seven-year history.

TRC is a 100% freelance studio, which means that we don’t have any staff engineers.  We think it’s really important for artists to work with technical professionals that they get along with (both personally and musically), so we require that every client brings in their own engineer.  At this point we have 1,100 gigs for freelancers every year happening in our facility, and we’ve priced our studios in a way that leaves room for engineers to charge a reasonable rate for their services.

When clients need referrals we refer to our staff, who are all great engineers as well (but they still negotiate their own rates and get paid directly by the client as a freelancer).  We also see a lot of engineers coming in from other studios around town (Q Division, Mad Oak, Zippah, Futura…) which we love.

Has the way you operate fostered its own community within the greater music scene? Do you feel you’re providing a space for collaboration and networking?

We see thousands of musicians/artists/engineers through our doors every year, so I’m happy to say that it feels like we have a large community surrounding the work that we do.  We really value the face-to-face interaction that takes place in a recording studio and are happy to see so many people coming out of their basements or bedrooms and collaborating.  The best music doesn’t get made in a vacuum; it usually takes a team.

How do you feel that The Record Co. has contributed to the ever-changing landscape of the arts in Boston?

We’ve contributed in two ways: through direct support to artists/musicians and through an effort to raise general awareness about the great music that is being made in our city.  There is an obvious need for the programming we do as there are thousands of people that have taken advantage of our studios.  We have had bands and engineers tell us that we are the reason they stayed in Boston instead of moving to NYC or LA which is extremely meaningful to us and shows that there is a need for the work that we are doing.

We have also made an effort to engage music fans in Boston and let them know that you don’t need to look to NYC/LA or Pitchfork/Rolling Stone to find good new music; there’s actually tons of it being made all around you.  Raising the reputation and awareness of what’s happening here in Boston is a long process, but it only serves to make the city feel more like home for all of the musicians/artists that struggle to live and work here.

For a city home to a quarter of a million college students and a mayoral administration hoping to retain this population after graduation, what else does Boston need to be a happier home to working musicians and engineers?

That’s a tough one and is something we talk about regularly.  All-ages music venues, more (well-maintained) rehearsal spaces, better public transportation, affordable housing inside city limits… None of these things are easy problems to solve, but all would go a long way towards making the city a more hospitable place for artists and engineers.

Speaking of those college students, how does The Record Co. interact with student artists and engineers-in-training from local colleges and universities?

We wanted to price our studio rates in such a way that artists could afford to rent an appropriate amount of time to actually accomplish what they set out to.  These days the only way for artists to develop themselves is to act as their own A&R and just keep recording and tweaking until they finally land on something good.

Because we also cater to a lot of engineers who are just getting their start or haven’t worked in a studio outside of a college setting, we host orientations every other week. These consist of a conversation about expectations and best practices while working in a professional setting, how to avoid pitfalls that have the potential to kill the vibe for the players, and then a full technical walkthrough of the facility.  We always have staff around to assist with any technical questions/issues, and we have a great crew of part-time assistants who are able to help out as well.

After six years in business you dropped Boston Sessions, Volume 1 – which resulted in a very cool development in the Rock Band video game franchise! – what led you to releasing this? What was the reaction from artists and labels involved?

We really wanted to tackle both raising the reputation of what’s happening in our music scene and also provide an economic opportunity for the artists involved.  ‘Vol.1 – Beast’ featured 13 brand new tracks by 13 Boston-based artists.  In total we paid 63 artists/engineers/producers to make the record, which we’re really proud of.

Artists and sponsors alike loved the project.  It was unique in that it was all brand-new material (not pre-recorded content) and really provided a cool cross-section of the diverse scene in Boston.  We were really happy to work with Harmonix to get the album featured in Rock Band, which is by far one of the craziest things to come from the project.  We also just finished up a large donated outdoor ad campaign around the city and on the trains called “Boston Music Is,” which features pictures of artists from the comp.  It’s great to see the city showing some love for the artists that make it a cool place to be.

The album is available for streaming on Bandcamp and Spotify and vinyl is in our web store.

What can we expect on the upcoming volume of Boston Sessions? Beyond promoting the Record Co. and the artists featured, what hopes do you have for the release?

Vol. 2 is going to be an awesome collection of new music from some great artists around the city.  We really hope this go around that we not only turn heads in Boston but in other cities as well.  Ultimately we want Boston to be seen as a music destination and the Boston Sessions program is just one step along that path to get there.


14 of the Most Commonly Confused Terms in Music and Audio

[Editors Note: This article was written by Brad Allen Williams and it originally appeared on the Flypaper Blog. Brad is a NYC-based guitarist, writer/composer, producer, and mixer.]

Once upon a time, remixing a song meant actually redoing the mix. Many vintage consoles (some Neve 80-series, for example) have a button labeled “remix” that changes a few functions on the desk to optimize it for mixing rather than recording.

But sometime in the late 20th century, the word “remix” began to take on a new meaning: creating a new arrangement of an existing song using parts of the original recording. Into the 21st century, it’s evolved again and is now sometimes used as a synonym for “cover.” The latter two definitions remain in common use, while the first has largely disappeared.

Language is constantly evolving, and musical terms are obviously no exception. In fact, in music, language seems to evolve particularly fast, most likely owing to lots of interdisciplinary collaboration and the rapid growth of DIY.

Ambiguous or unorthodox use of language has the potential to seriously impede communication between collaborators. In order to avoid an unclear situation, let’s break down standard usage of some of the most commonly conflated, misused, or misunderstood music-related terms.

Gain vs. Distortion

Gain, as it’s used in music electronics, is defined by Merriam-Webster as, “An increase in amount, magnitude, or degree — a gain in efficiency,” or, “The increase (of voltage or signal intensity) caused by an amplifier; especially: the ratio of output over input.”

To put it in less formal terms, gain is just an increase in strength. If an amplifier makes a signal stronger, then it causes that signal to gain intensity. Gain is usually expressed as a ratio. If an amplifier makes a signal 10 times as loud, then that amplifier has a “gain of 10.”
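If you want to relate gain ratios to the decibel figures printed on most gear, the standard conversions are easy to sketch. The helper names below are made up for illustration; only the formulas themselves are standard:

```python
import math

def voltage_gain_db(ratio):
    """Convert a voltage-gain ratio to decibels (20 * log10 of the ratio)."""
    return 20 * math.log10(ratio)

def power_gain_db(ratio):
    """Convert a power-gain ratio to decibels (10 * log10 of the ratio)."""
    return 10 * math.log10(ratio)

# A "gain of 10" measured in voltage is a 20 dB boost;
# doubling power is the familiar ~3 dB step.
print(voltage_gain_db(10))        # 20.0
print(round(power_gain_db(2), 2))  # 3.01
```

Note that the 20-vs-10 multiplier depends on whether you're measuring voltage or power; quoting the wrong one is its own common source of confusion.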

On the other hand, harmonic distortion is that crunchy or fuzzy sound that occurs when an amplifier clips (as a result of its inability to handle the amount of signal thrown at it).

In the 1970s, some guitar amp manufacturers began employing extra gain stages in their designs to generate harmonic distortion on purpose. In other words, they’d amplify the signal, then amplify it again, and that second gain stage — having been given more than it could handle — would distort. These became known as “high-gain amplifiers.” Because of this, many guitarists just assumed that gain was synonymous with distortion. This was cemented when later amps like the Marshall JCM900 had knobs labeled “gain” that, by design, increased the amount of harmonic distortion when turned up!

Outside the realm of electric guitar, though, gain is still most typically used in a conventional way. When a recording engineer talks about “structuring gain,” for example, he or she is usually specifically trying to avoid harmonic distortion. It’s easy to see how this might cause confusion!

Tone vs. Tonality

Not to pick on guitarists, but this is another one that trips us up. Tone has many music-related definitions, but the one of interest at the moment is (again, per Merriam-Webster), “Vocal or musical sound of a specific quality…musical sound with respect to timbre and manner of expression.”

On the other hand, the dictionary definition of tonality is:

1. Tonal quality.

2a. Key.

2b. The organization of all the tones and harmonies of a piece of music in relation to a tonic.

It’s important to note that “tonal quality” here refers to “the quality of being tonal,” or the quality of being in a particular key (in other words, not atonal). This is a different matter from “tone quality,” which is commonly understood to mean “timbre.” Most musicians with formal training understand tonality either as a synonym for key or as the quality of being in a key.

If you’re trying to sound fancy, it can be tempting to reach for words with more syllables, but using tonality as a synonym for timbre can be confusing. Imagine you’re recording two piano pieces — one utilizing 20th-century serial composition techniques and the other utilizing functional harmony. If you express concerns about the piano’s “tonality” while recording the second piece, the composer would probably think you were criticizing his or her work!

Overdubbing vs. Punching In

Most musicians in the modern era understand the difference between these two concepts, but they still occasionally confuse folks relatively new to the process of recording.

Overdubbing is adding an additional layer to an existing recording.

“Punching in” is replacing a portion of an already-recorded track with a new performance.

To do a “punch-in” (in order to fix a mistake, for example), the performer plays along with the old performance until, at the appropriate moment, the recordist presses record, thus recording over the mistake. The recordist can then “punch out” to preserve the remainder of the original performance once the correction is made.

Portamento vs. Glissando

A portamento is a continuous, steady glide between two pitches without stopping at any point along the way.

A glissando is a glide between two pitches that stair-steps at each intermediate note along the way. A glissando amounts, in essence, to a really fast chromatic scale.

To play a glissando on guitar, you’d simply pluck a string and slide one finger up the fretboard. The frets would make distinct intermediate pitches, creating the stair-stepped effect. If you wished to play a portamento on guitar, you could either bend the string or slip a metal or glass slide over one of the fingers of your fretting hand.
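To make the stair-step concrete, here's a small sketch (the helper name is made up) of the equal-tempered pitches a fretted glissando passes through, each fret multiplying the frequency by a factor of 2^(1/12):

```python
SEMITONE = 2 ** (1 / 12)  # equal-tempered semitone ratio

def glissando_pitches(start_hz, frets):
    """Distinct pitches a glissando stair-steps through, one per fret."""
    return [round(start_hz * SEMITONE ** n, 2) for n in range(frets + 1)]

# Sliding from the open A string (110 Hz) up 12 frets lands an octave higher,
# sounding every chromatic step on the way; a portamento would instead glide
# smoothly through all the frequencies in between.
print(glissando_pitches(110.0, 12))
```

The discrete list is the glissando; the portamento is everything between the list entries.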

Vibrato vs. Tremolo

While often used interchangeably in modern practice, vibrato and tremolo are actually distinct kinds of wiggle. In most cases, tremolo is amplitude modulation (varying the loudness of the signal), whereas vibrato is frequency modulation (varying the pitch of the signal).
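In code the distinction is easy to see. The sketch below uses illustrative numbers (not any particular pedal or plugin): a 440 Hz sine wobbled at 5 Hz, with loudness varying for tremolo and pitch varying for vibrato:

```python
import math

SR = 44100        # sample rate, samples per second
CARRIER = 440.0   # tone frequency in Hz
RATE = 5.0        # wobble speed in Hz

def tremolo_sample(n, depth=0.5):
    """Amplitude modulation: loudness wobbles, pitch stays fixed."""
    t = n / SR
    amp = 1.0 - depth * (0.5 + 0.5 * math.sin(2 * math.pi * RATE * t))
    return amp * math.sin(2 * math.pi * CARRIER * t)

def vibrato_sample(n, depth_hz=6.0):
    """Frequency modulation: pitch wobbles, loudness stays fixed."""
    t = n / SR
    wobble = (depth_hz / RATE) * math.sin(2 * math.pi * RATE * t)
    return math.sin(2 * math.pi * CARRIER * t + wobble)

# A few samples of each; note vibrato's amplitude never exceeds 1.0,
# while tremolo's overall level rises and falls with the 5 Hz wobble.
print([round(tremolo_sample(n), 3) for n in range(4)])
print([round(vibrato_sample(n), 3) for n in range(4)])
```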

But over the past few hundred years, tremolo has commonly referred to many different performative actions. On string instruments, tremolo is used to refer to the rapid repetition of a single note, and in percussion, tremolo is often used to describe a roll. Singers use it for even crazier things, like a pulsing of the diaphragm while singing¹.

Leo Fender must’ve had his terms confused — he labeled the vibrato bridges on his guitars “synchronized tremolo,” and the tremolo circuits on his amps “vibrato.” Confusion has reigned ever since.

Analog vs. Digital

Analog and digital are perhaps the most confused pair of words in the 21st-century musical lexicon. I once had a somewhat older musician tell me that my 1960s-era fuzz pedal and tape echo made my guitar sound “too digital” for his music. Likewise, countless younger musicians claim to prefer the “analog sound” of the original AKAI MPC (an early digital sampler) and the Yamaha DX-7 (an early digital FM synthesizer). But “analog” and “digital” are not simply stand-ins for “vintage” and “modern,” nor for “hardware” and “software.” They’re entirely different mechanisms for storing and generating sounds. Let’s learn a little more!

Merriam-Webster’s most relevant definition of analog is, “Of, relating to, or being a mechanism in which data is represented by continuously variable physical quantities.”

Also relevant is its first definition of analogue: “Something that is analogous or similar to something else.”

Now, how does this relate to music technology? It all goes back to humans’ longstanding search for a way to capture and store sound. Sound, on a basic scientific level, is nothing more than compression and rarefaction (decompression) of air that our ears can sense. Since air pressure fluctuations can’t really be stored, recording sound proved elusive for a long time.

Nineteenth- and 20th-century scientists and engineers, however, brilliantly figured out that recording sound might be possible if they could accurately transfer that sound into something that could be preserved. They needed something storable that would represent the sound; an analogue to stand in for the sound that would allow it to be captured and kept.

First, they used mechanically generated squiggles on a wax cylinder as the analogue. Eventually, they figured out that they could use alternating-current electricity (which oscillates between positive and negative voltage), as an analogue of sound waves (which oscillate between positive and negative air pressure). From there, it was a relatively short leap to figuring out that they could, through electromagnetism, store that information as positively and negatively charged magnetic domains, which exist on magnetic tape.

This is analog recording!

Since electric voltage is continuously variable, any process — including synthesis — that represents air pressure fluctuations exclusively using alternating current electricity is analog, per Merriam-Webster’s first definition above.

Digital, on the other hand, is defined as, “Of, relating to, or using calculation by numerical methods or by discrete units,” and, “Of, relating to, or being data in the form of especially binary digits, digital images, a digital readout; especially : Of, relating to, or employing digital communications signals, a digital broadcast.”

That’s a little arcane, so let’s put it this way: Rather than relying directly on continuous analog voltages, a digital recorder or synthesizer computes numerical values that represent analog voltages at various slices of time, called samples. These will then be “decoded” into a smooth analog signal later in order to be accurately transferred back into actual air pressure variations at the speaker. If that’s a blur, don’t worry — you only need to understand that this is a fundamentally different process of storing or generating sound.
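As a toy illustration (not how any real converter works internally), here's a 1 kHz tone reduced to the discrete numerical values a digital recorder would store at a hypothetical 8 kHz sample rate:

```python
import math

SAMPLE_RATE = 8000  # samples per second (illustrative; CDs use 44,100)
FREQ = 1000         # tone frequency in Hz

def sample_tone(freq, sample_rate, n_samples):
    """The discrete values a digital recorder stores for a sine tone."""
    return [math.sin(2 * math.pi * freq * n / sample_rate)
            for n in range(n_samples)]

# One full cycle of the tone fits in exactly 8 samples at these settings.
one_cycle = [round(s, 3) for s in sample_tone(FREQ, SAMPLE_RATE, 8)]
print(one_cycle)  # [0.0, 0.707, 1.0, 0.707, 0.0, -0.707, -1.0, -0.707]
```

The smooth analog wave has been replaced by eight numbers; the digital-to-analog converter's job is to turn that list back into a continuous voltage.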

Absent a real acquaintance with the technology of an individual piece of equipment or process, it’s probably safer to avoid leaping to conclusions about whether it’s analog or digital. For example, there are reel-to-reel magnetic tape machines (like the Sony PCM 3348 DASH) that don’t record analog voltage-based signal at all, but rather use the tape to store digital information (as simple ones and zeroes).

Since you can’t judge whether a piece of gear is analog or digital with your eyes, it’s probably best to only use these terms when you need to refer to the specific technologies as outlined above. In other words, next time you’re recording in a studio with a cool-looking piece of old gear, it’s probably safer to use #vintage instead of #analog to caption your in-studio Instagram photo!

Phase vs. Polarity

Phase is defined by Merriam-Webster as… (deep breath):

“The point or stage in a period of uniform circular motion, harmonic motion, or the periodic changes of any magnitude varying according to a simple harmonic law to which the rotation, oscillation, or variation has advanced from its standard position or assumed instant of starting.”

That’s a mouthful! This is a concept that’s easier understood with an example, so let’s imagine that you have a swinging pendulum:

If you were to freeze that pendulum at two different times, the dot at the end would be in two different locations. The pendulum’s swing occurs over time, so the location of the pendulum depends on when you stop it. We’d refer to the phase of the pendulum in order to describe this phenomenon and where the pendulum is in its cycle relative to time. And since it’s always moving in a continuous, smooth arc, there are an infinite number of possibilities!

Phase becomes potentially relevant for anything that’s oscillating or undulating — like the pendulum above or a sound wave.

Polarity, on the other hand, is defined as, “The particular state, either positive or negative, with reference to the two poles or to electrification.”

To put it in very simple terms, you’re dealing with polarity any time you install a battery. The battery has a positive terminal and a negative one. You have to make sure it’s installed the right way. While phase is infinitely variable, polarity has only two choices — it’s one or the other.

In our brief explanation of analog audio above, we mentioned that positive and negative swings of voltage are used to represent positive and negative changes in air pressure. If we switch polarity of a signal, we swap all the positive voltages for negative ones, and vice-versa. +1v becomes -1v, +0.5v becomes -0.5v, etc. This is usually accomplished with a button marked with the Greek letter theta or “Ø.”
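In digital terms, the Ø button amounts to nothing more than negating every sample. A sketch with made-up sample values:

```python
def flip_polarity(samples):
    """Swap every positive sample value for a negative one and vice versa."""
    return [-s for s in samples]

signal = [0.5, 1.0, 0.25, -0.5, -1.0]  # illustrative waveform values
print(flip_polarity(signal))  # [-0.5, -1.0, -0.25, 0.5, 1.0]
```

Notice there are only two states, flipped or not, while a phase shift could move the waveform in time by any amount.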

Interestingly, if you have one signal alone, it’s usually the case that our ear can’t really tell the difference between positive and negative polarity. It’s when you combine two or more similar signals (like two microphones on one drum, for instance) that a polarity flip of one or the other can have a dramatic influence on the sound.

Confusingly, this influence is a result of phase differences between the two sources, and switching polarity can often improve (or worsen!) the sound of two combined sources which are slightly out of phase. For this reason, the polarity switch is often called a “phase switch,” and depressing it is often colloquially referred to as “flipping phase.”

In the graphic below, you’ll see a brief, zoomed-in snapshot of two waveforms. A single bass performance was simultaneously recorded into both a direct box (blue) and through a mic on its amplifier (green).

In the first graphic, you can notice that the two are slightly out of phase. The blue direct-in wave swings negative ever so slightly before the green mic–on–amp one does. This is because the amp’s sound had to travel through the air briefly before being picked up by the microphone. Since sound in air travels much more slowly than electricity does, this creates a slight time delay or phase discrepancy.
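The size of that delay is easy to estimate: sound covers roughly 343 metres per second in room-temperature air, so a mic placed, say, 30 cm from the speaker cone (a hypothetical distance; the recording above doesn't specify one) lags the DI by just under a millisecond:

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at roughly room temperature

def mic_delay_ms(distance_m):
    """Time sound takes to cover distance_m, in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000.0

print(round(mic_delay_ms(0.30), 2))  # ~0.87 ms for a mic 30 cm from the cone
```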

In the second example below, I’ve flipped the polarity of the amp track. You can see that the time delay still exists, but now the amp track’s wave is inverted or “upside down.” As the DI track swings negative, the amp track swings positive.

In this case, the switch made the combined sound noticeably thinner, so I quickly flipped it back. Occasionally though, flipping polarity improves the combined sound of two sources which are slightly out of phase.

In practice, most recordists will understand what you mean if you say “flip the phase,” but should there happen to be a physicist in the room, you might get a raised eyebrow! Generally, though, this is a classic example of how unorthodox usage sometimes becomes accepted over time.

Which raises the point: any of the musical and audio terms above may eventually, like “remix” before them, evolve to incorporate new shades of meaning (or even have some earlier “correct” definitions fall into disuse). In the meantime, though, the more precise your grasp on the language of music, the less likely you are to misunderstand or be misunderstood.

¹ In performance, for both singers and many instrumentalists, pure tremolo is almost impossible to achieve without taking on some characteristics of vibrato — that is to say, it’s rare for a passage to be played or sung with variation in only volume and none in pitch.

Production: Creating the Perfect Bass Sound

[Editors Note: This bass production guide was written by our friends over at Point Blank London, and was originally featured on their site. Check it out here for audio samples and more.]


Searching for the perfect bass patch can be an arduous task. With such a plethora of synths and libraries out there, flicking through the almost endless presets to find what’s right for you is like finding a needle in a haystack.

Getting something that works with any samples or chord progressions you’ve got, that sits nicely with your kick drum and still carries enough weight to shake those subwoofer cones can seem like a juggling act.

In this tutorial we’re going to explain how to create bass sounds and lines with powerful subs, thick mids and tops that cut through on any system. Download the project used in this tutorial here.


There are a myriad of dos and don’ts out there and you can spend more time tweaking than actually making music. In this article we’ll take a forensic look at how to build your basslines from the bottom up, from creating a penetrating sub bass, layering the mids and tops, getting it to bite in all the right places and processing it with your kick and rest of the mix.

Due to the low bass frequencies in these audio examples we suggest listening through good headphones or studio monitors to appreciate the nuanced programming.

Low-End Theory

Depending on which genre of music you’re working on, the bass might perform a different function; in house and techno, a weightier kick drives the track along, dictating the pace and feel. Basslines in these genres might contain more mid-range frequencies to cut through the mixes.

Drum ’n’ bass, dubstep and other bass-heavy music can contain much more bottom end and sub frequencies, underpinning your loop. Balancing your kick and bass can be an essential part of getting your track working. With weak foundations, you’re going to struggle to get the rest of the mix sitting comfortably.

To understand bass properly, there are a few key terms you will want to get your head around: amplitude, harmonics and phase. Amplitude is simply a term for volume, but it’s not to be confused with decibels (dB). It’s more akin to relative volume or power.

Harmonics are the frequencies that go into making up a sound. The lowest, loudest note in your bass sound is the first harmonic (or fundamental). Any frequency above this will normally be a harmonic.

Harmonics are integer multiples of the fundamental frequency. If that didn’t mean anything, don’t worry – the maths is simple. Let’s have a look at Live’s Operator instrument using Osc A. Below is a low A note (110 Hz); I’ve adjusted the Waveform Editor, bringing in the next four harmonics one at a time: 220, 330, 440 and 550 Hz.

First Five Harmonics

Just above Operator we can see Voxengo SPAN mapping frequency across our X-axis and amplitude across our Y-axis: you can quite clearly see each harmonic creeping in relative to the fundamental. To the right is an oscilloscope by Laidman & Katsura, this displays time across the X-axis and amplitude across the Y-axis.
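The arithmetic behind that harmonic series takes only a couple of lines of Python (the helper name is made up, mirroring the figure above):

```python
def harmonic_series(fundamental_hz, count):
    """First `count` harmonics: integer multiples of the fundamental."""
    return [fundamental_hz * n for n in range(1, count + 1)]

# The low A note from the figure, with its next four harmonics
print(harmonic_series(110, 5))  # [110, 220, 330, 440, 550]
```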

The other concept we need to familiarise ourselves with is phase. There are primarily two places we’ll come up against this, the first of which is the start phase of an oscillator. Below we can see eight notes with their phase free running (the default of most synths) and with their phase locked to restart at 0° when a note is played:

As you can see, without restarting the phase each note has a different start position within the oscillator’s cycle, causing irregularities in volume and nasty clicks and pops.

The second instance of phase we’re likely to come across is the relationship between the left and right channels. It’s highly recommended to keep your frequencies below about 100Hz in mono: any disparity in the stereo spectrum here can be very noticeable, causing phasing issues when summed to mono and more irregularities in volume.

Creating a Sub Bed

While we might tend to think of a bass as one sound, we can sometimes separate its spectrum further into complex composites – containing as many as three or even four layers – each requiring different programming, processing and treatment.

Flexibility with the sub, low-mids and mid range can be key in getting the right amount of punch, the bass cutting through the mix and retaining that all-important stereo image. Let’s start off with our sub frequencies.

The only way your bass is going to move air on the dancefloor is with a good, meaty sub. Making a competent sub isn’t rocket science, as it requires very little understanding of synthesis and sound, but making a great sub takes a little more. Let’s stay with Operator for now.

Osc A defaults to a sine wave, a waveform that contains only the first harmonic. This is good for sub bass as it’s clear and uncluttered. Ensure the phase restarts at 0° (0%) and change the Voices to 1 in the Global Shell.

Phase Restart

If you’re leaving the sub as the sole layer for the bass part then you can almost leave it untouched. I’ve added in -30dB from Osc B, which is modulating the frequency of Osc A. This adds just a few harmonics into the sound helping it cut through a busier mix and on smaller speakers.

Do this by enabling Osc B and turning the Level up to -30dB, or wherever you feel the sweet spot is. It’s good to check on a spectral analyser, though, as frequency modulation can sometimes overpower the fundamental frequency if you add too much in.

Osc B introduced

By increasing the level of Osc B we can create a brighter, sharper tone. You can shape the overall FM by reducing the sustain of the amplitude envelope of Osc B. With the level around -13dB, and changing the Coarse tuning to 4 (fourth harmonic), we can get an archetypal garage/UK house sound:

Garage Bass
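Away from Operator, the same two-oscillator FM idea can be sketched in plain Python (the numbers below are illustrative, not Operator's actual internals): a modulator wiggles the phase of a sine carrier, and raising the modulation index or the modulator's coarse ratio brightens the tone.

```python
import math

SR = 44100  # sample rate, samples per second

def fm_sample(n, carrier_hz=55.0, ratio=1.0, index=0.0):
    """One sample of simple two-operator FM.

    index stands in for Osc B's level: 0 gives a pure sine sub, higher
    values add harmonics; ratio stands in for Osc B's coarse tuning
    (4 = fourth harmonic, as in the garage patch above).
    """
    t = n / SR
    modulator = math.sin(2 * math.pi * carrier_hz * ratio * t)
    return math.sin(2 * math.pi * carrier_hz * t + index * modulator)

pure_sub = [fm_sample(n) for n in range(64)]                      # plain sine
brighter = [fm_sample(n, ratio=4.0, index=1.5) for n in range(64)]  # more harmonics
print(round(pure_sub[32], 3), round(brighter[32], 3))
```

As in Operator, pushing the index too far lets the sidebands overpower the fundamental, which is why checking on a spectral analyser is worthwhile.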

Shaping the Low-Mid Tone

Once we’ve got our foundations laid we need to move on to the lower mid range, which is going to shape the body of our bass. Click on the Operator and hit cmd + G (or ctrl + G if you’re on a PC) to group the Operator into an Instrument Rack. Instrument Racks allow MIDI to be distributed to various different chains of synths and samplers and their combined signals to be processed and mixed individually.

Click on the Show/Hide Chain List and rename the Operator “Sub”. It can be muted for now while we concentrate on our midrange.

Show Chain List

Ctrl + right-click in the panel where it says Drop an Instrument or Sample Here, click Create Chain and name it “Mids”. We’re looking for a synth that has a couple of oscillators and, while most any subtractive synth will do, I’m opting for Native Instruments’ Massive. Drag and drop it on to the Mids chain.

Massive’s default preset is using Oscillator 1 with a wave that’s harmonically halfway between a square and a sawtooth. It’s running into filters 1 and 2, and Envelope 4 is controlling our amplitude. Let’s set about getting it to a place where we can design our sound.

Move the WT-Pos (wavetable position, highlighted in green) fully clockwise to Squ and set the routing of the oscillator to F1 (yellow). Now click on the 4 Env panel and reduce the Attack to minimum and increase the Level to maximum (blue and red).

Massive Reset

You can repeat these steps for oscillators 2 and 3 if you want.

In the Osc panel, click to Restart via Gate in the Oscillator Phases box. Much like Operator, Massive allows us to select the start phase of our oscillators each time a new note is received. If we were designing a pad or poly synth patch with unison detune it might not be necessary to take these steps, but for a lot of modern bass sounds it’s recommended.

Finally, in the Voicing tab, change the Voicing from Polyphon to Monorotate and the Trigger from Always to Legato Triller. These steps ensure the bass is monophonic and that envelopes won’t retrigger if two notes overlap.

Next I’m going to enable Osc 2 and load a sawtooth in. There are two choices here, the Squ-Saw and Squ-Sw II. Ensure the WT-Pos is in the right place and turn the amplitude up to just half way. This gives us a richer sound dominated by the odd harmonics of the square wave – plenty of middle and top end for our filters to bite on to.

Route Osc 2 to F1 and turn your attention to the filter section. I’m going to add the Lowpass 2 filter – this has a weaker slope than the Lowpass 4 giving it a smoother sound – which will sound great later on down the line when we start modulating it.

Set the Cutoff to about 8 o’clock and leave the Resonance as is. Before moving on I’ve added the Ktr (keyboard tracking) Macro to modulate our filter. This tracks the position of the filter according to the pitch, opening it as the pitch gets higher. Lastly set the >F2 to Series and the Mix to Mix1.

At this stage you can add in a third oscillator tuned up an octave or two if you want to. This won’t add much weight to the bassline, but it can help it come across on smaller speakers.

In addition you could add a Sine Shaper from the Inserts. Experiment with its position before or after the filter in the Routing panel.


Filter Envelopes for Bite and Punch

Modulation comes in all shapes and sizes and by far the two most common sources are LFOs and envelopes. Let’s look at each in turn, starting off with LFOs.

LFO stands for low frequency oscillator – a control signal whose rate typically falls within the 0.01 Hz to 20 Hz range. We wouldn’t be able to hear these waves on their own, as they’re subsonic, but when applied to filter cutoff or volume we can hear their effect.

Their value is determined by a ‘rate’, and their modulation is bipolar, i.e. it has a positive and a negative part to the cycle. LFOs are great for tempo-synced modulation like dubstep wobbles, filter and frequency modulation, as well as stereo tremolo on pads and Rhodes-type instruments.
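The bipolar behaviour is easy to sketch (Python/NumPy; the centre, depth and rate values here are made up for illustration):

```python
import numpy as np

def lfo_cutoff(centre_hz, depth_hz, rate_hz, duration_s, sample_rate=1000):
    """A bipolar sine LFO swinging a filter cutoff above AND below its centre."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    lfo = np.sin(2 * np.pi * rate_hz * t)   # ranges -1..+1: the bipolar control signal
    return centre_hz + depth_hz * lfo       # cutoff sweeps centre +/- depth

# An 800 Hz cutoff swept +/- 400 Hz by a 2 Hz LFO
cutoff = lfo_cutoff(centre_hz=800.0, depth_hz=400.0, rate_hz=2.0, duration_s=1.0)
```

Note the cutoff spends half of each cycle below its centre value – that’s the “negative part of the cycle” in practice.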

Envelopes, on the other hand, are unipolar, and whereas LFOs are free-running, envelopes are gate-triggered. Massive contains four envelopes, and number 4 defaults to modulating the amplitude.

Commonly there are four stages in an envelope: the attack (time in milliseconds it takes to reach the maximum level from a MIDI note on signal); decay (time in ms after the attack has passed to reach the sustain stage); sustain (value at which the note sustains at); and release (time in ms the sound takes to reach zero again after a MIDI note off is received).
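Those four stages can be sketched as a piecewise-linear envelope (a simplified linear model for illustration – real synth envelopes usually use curved segments):

```python
import numpy as np

def adsr(attack_ms, decay_ms, sustain_level, release_ms, hold_ms, sample_rate=1000):
    """Piecewise-linear ADSR: the note is held for hold_ms after attack+decay,
    then released when the (simulated) MIDI note off arrives."""
    a = np.linspace(0.0, 1.0, int(attack_ms / 1000 * sample_rate), endpoint=False)
    d = np.linspace(1.0, sustain_level, int(decay_ms / 1000 * sample_rate), endpoint=False)
    s = np.full(int(hold_ms / 1000 * sample_rate), sustain_level)
    r = np.linspace(sustain_level, 0.0, int(release_ms / 1000 * sample_rate))
    return np.concatenate([a, d, s, r])

env = adsr(attack_ms=10, decay_ms=200, sustain_level=0.4, release_ms=300, hold_ms=500)
```

Multiply this against an oscillator’s output and you have amplitude modulation; feed it to a cutoff and you have filter modulation.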


I’ve used envelope 1 to control several parameters in our mid layer. Here, I’ve used the shortest attack available, dropped the Level (Massive’s terminology for sustain) and set the Decay parameter to around 11 o’clock. The decay time might differ drastically depending on your tempo: at higher BPMs you might want a shorter decay, while at slower tempos you can let the envelope’s modulation breathe a little more.

Decay envelope

We can add this envelope to as many different parameters as we like. Firstly let’s add it to our filter (which, if you remember, already has some modulation from the keyboard tracking). Setting the amount of modulation is key to controlling the harmonics that come through, and therefore sets the tone of your transient. More modulation makes the initial hit brighter; less makes it duller.

I’m also adding the same modulation to the Drive circuits on the two Inserts, for which I’ve used Parabolic and Sine Shapers. These add harmonics into the signal by folding over the upper portions of a waveform. One of these is placed before the filter and one after.
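As an illustration of that folding, here’s a generic sine-based shaper (a common formulation, not necessarily Massive’s exact algorithm):

```python
import numpy as np

def sine_shaper(x, drive=1.0):
    """Generic sine waveshaper: at low drive it gently saturates; at higher
    drive the peaks of the input fold back over themselves, adding harmonics."""
    return np.sin(np.pi / 2 * drive * np.clip(x, -1.0, 1.0))

ramp = np.linspace(-1.0, 1.0, 5)
soft = sine_shaper(ramp, drive=1.0)    # gentle saturation, still monotonic
folded = sine_shaper(ramp, drive=3.0)  # peaks fold over: extra harmonics appear
```

Driving the envelope into this transfer curve means the folding (and the harmonics it adds) is strongest right at the transient, then relaxes as the note sustains.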

Lastly I’ve used Massive’s powerful Modulation Oscillator tuned up 19 semitones (one octave plus a perfect fifth above the MIDI input) and set to Phase modulate Osc 2. Sonically phase modulation is very similar to frequency modulation, and again adds a nice blast of complex high frequencies to our transient.

envelope mod

Macro Managing

We want this bass to be as flexible as possible so I’m going to set up some Macros within Live’s Instrument Rack to control our mids. Click on the Unfold Device Parameters and then click Configure.



Now, anything you touch in Massive will populate this list. I’m going to add the filter cutoff, the drive and dry/wet from both inserts, the phase from our modulation oscillator and the level from envelope 1. If you’ve done that correctly it should look like this:

Config 2

Unclick the Configure button and assign these to Macros. I’m going to give the filter cutoff its own Macro named “Cutoff”, while the dry/wet and drives of both inserts will be mapped to Macro 2, “Drive”.

The envelope level will be mapped to Macro 3, named “Env Mod” (because cleverly reducing this Macro to 0 will remove all of the envelope modulation), and lastly the phase will be mapped to Macro 4, named “FM”.

colour code

Once they’re named and colour-coded, click Map and carefully set the ranges for each parameter. It’s good to have a MIDI loop running in the background whilst you do this. You want to set a minimum and maximum that are musical but allow some space for interesting automation later on down the line.


Top Layer

Now we’ve put the work into our mid-range let’s concentrate on the top layer. I’m going to duplicate my instance of Massive for mids by clicking on the chain and hitting cmd + d (or ctrl + d for a PC). Rename this new chain “Top” and solo it.

Aside from the patch being duplicated, you’ll notice all of the hard work that’s gone into tweaking the Macros has been retained. Let’s edit this patch to get a more suitable top end. Firstly I’m going to disable Restart via Gate in the Oscillator Phases box: I’m going to experiment with Unison Detune in this patch, and restarting the oscillators’ phases can sometimes create a nasty flanging sound when combined with unison detune.

I’m setting both oscillators to sawtooths now, matching their amplitudes and detuning them ever so slightly. The wider the detune amount, the faster the beating we get. Beating is the fluctuation we hear when two oscillators play the same note slightly out of tune (you hear a similar effect when tuning two adjacent strings on a guitar together).

I’ve opted for +/- 20 cents. Next add in Osc 3, selecting the Scrim (Screamer) wavetable, and use envelope 1 to modulate the wavetable readout. I’ve gone for a range of 10 o’clock to 5 o’clock.
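If you’re curious how fast the beating will be, you can estimate it from the detune amount (a quick sketch – 220 Hz is just an assumed note for the calculation):

```python
def detune_hz(freq_hz, cents):
    """Frequency after detuning by a number of cents (100 cents = 1 semitone)."""
    return freq_hz * 2 ** (cents / 1200)

def beat_rate_hz(freq_hz, cents_up, cents_down):
    """Beat frequency between two oscillators detuned either side of freq_hz."""
    return detune_hz(freq_hz, cents_up) - detune_hz(freq_hz, cents_down)

# Two saws at +/- 20 cents around 220 Hz beat at roughly 5 Hz
rate = beat_rate_hz(220.0, 20, -20)
```

Because the beat rate scales with the note’s frequency, the same cent detune beats faster higher up the keyboard – worth remembering if the top layer sounds calmer on low notes.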

Lastly for our oscillators, add in the Noise oscillator with envelope 1 controlling the amplitude. We want a blast of noise at the transient of the sound but having too much noise in the sustain stage will quickly muddy the sound up. I’ve chosen the Tape Hiss option here.


Let’s turn our attention to the filter. I’ve left the settings intact but changed the algorithm to Bandpass. This works by isolating a band of frequencies, leaving us with a more aggressive but thinner sound perfect for our top layer. Set the Bandwidth and Resonance to about 9 o’clock.
In the voicing tab change the number of Unison Voices from 1 to 4 and enable the Pitch Cutoff and Pan Position, adjusting their values to taste. Pitch Cutoff will add some detuning to each voice and Pan Position will spread those around the stereo spectrum. Now our layer is starting to sound the part.



There’s not much more to do but turn our attention to the FX tab. I’m adding in a Classic Tube and Dimension Expander while shelving off some bottom end in the EQ tab. Keep a close eye on the Master as all of these distortions and unison effects can easily clip the sound unpleasantly.

Processing Layers Together

Now we have our three layers in place, we need to think about separating them so there’s as little overlap as possible and each part occupies its own space in the frequency and stereo spectrum. As our sub is fine, let’s start with the mid layer. Solo it and add Live’s EQ Eight.

I’ve high-pass filtered it fairly abruptly at 80Hz using the 48dB/Oct slope: this stops it interfering with our sub. I’m also going to add some compression to even out the level a little more and some limiting to deliberately clip the layer. You could add more distortions and modulations here but I’m going to reserve them for our top layer.

Solo the top and add an EQ Eight. Add Live’s Pitch plug-in from the MIDI Effects tab and tune it up an octave. This will transpose any incoming MIDI up an octave automatically – a great time-saving device! I’m again going to high-pass the sound, this time using the standard 12dB/Oct slope and high-passing at 180Hz.

I’ve also added Live’s Auto Filter (adding some extra low-pass filter envelope modulation), the Simple Delay (using short unsynced values of 30 and 80ms), some Reverb, Compression and Limiting. Here’s the top layer on its own now.

FX 1


Lastly I’m going to map the levels of each chain to a Macro, allowing me easier control over each layer, and the dry/wet of the top layer’s FX to my last remaining Macro.

Macros final

Multi-Band and Parallel Processing

Now our synth is balanced internally, we can think about processing it as a whole. The way Ableton nests Instrument Racks is clever, but it means that to keep any effects we now apply together with our three existing layers, we’ll need to re-group (cmd + G / ctrl + G) the current three layers into another Instrument Rack. Alternatively, add an Audio Effects Rack after.

While we can use filters or EQ to separate frequency bands, it’s safer to use Live’s Multiband Dynamics, as its bands are phase-coherent and will minimise the amount of delay introduced to any part of the spectrum. I’ve added three chains, each with a Multiband Dynamics soloing one of the Low, Mid and High bands. Ensure you label your chains for ease of use at a later date.

Now we can process these bands individually and adjust their crossovers if we choose. Start by adding a Utility to the Low chain and reducing the Width to 0%. It’s recommended to keep your bottom end in mono for nearly all applications, and this plug-in can ensure that. I’ve also added Live’s Compressor with a slow attack and release and a high ratio to tame the dynamic range a bit.

Multiband 1

On the Mids chain I’ve adjusted the high crossover band to 1.5 kHz to narrow this range a little. Adding another Utility I’ve kept the Width at 60% and added some more compression with a much faster attack and release to match the quick envelope modulation of this band.

Lastly in the High band I’ve adjusted the Width of a Utility to 120% to spread the sound a little and added some light low-pass filtering around 8.5 kHz. After the Audio Effects Rack you can add in any further EQ you might want (to balance the patch specifically with your track), any compression, limiting and sidechain compression.

The patch is designed to be a jack of all trades and will require some tweaking of the Massive instruments and processing to get it to sit just right, so be liberal with adjustments. Hopefully this acts as a springboard to inspire you to create your own bass sounds too. Download the project used in this tutorial here.


Slapback Delay – A Must Have On Vocals & Guitars

[Editors Note: This blog was written by Scott Wiggins and originally appeared on his site, The Recording Solution, which is dedicated to helping producers, engineers and artists make better music from their home studios.]

Slapback delay is a very common effect on tons of hit records. It’s really easy to set up!

When you think of delay, you probably think of yelling down a long canyon and hearing your voice repeat over and over. In my mind that’s an echo.

That’s what a slapback delay is, except it’s one single echo. One single repeat of the original signal.

It’s more like you clap while standing in a small alley between 2 buildings, and hearing a very quick repeat of your clap.

It’s a super fast repeat that adds a sense of space.
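In code terms, a slapback is just the dry signal plus one delayed copy – no feedback, no repeats (a minimal sketch; the 90 ms time and half-level echo are assumed values):

```python
import numpy as np

def slapback(dry, delay_ms, echo_level=0.5, sample_rate=44100):
    """Single-repeat slapback: one delayed copy mixed under the dry signal."""
    delay_samples = int(delay_ms / 1000 * sample_rate)
    wet = np.zeros(len(dry) + delay_samples)
    wet[:len(dry)] += dry                                        # the dry signal
    wet[delay_samples:delay_samples + len(dry)] += dry * echo_level  # the one echo
    return wet

click = np.zeros(1000)
click[0] = 1.0
out = slapback(click, delay_ms=90, echo_level=0.5)
```

A feedback delay would keep feeding the echo back into the line; here the echo path runs once, which is exactly what makes it a “slap” rather than a canyon.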

Guitar players love it when playing live, and I love using it on guitars and vocals in the context of a mix.

It just adds some energy and sense of depth without having to use a reverb and running the risk of washing out your dry signal.

I tend to use more effects after the slapback delay, but more times than not I start with it to set the foundation of the sound I’m trying to achieve.

A Little Goes A Long Way

This effect is used more as a subtle effect on vocals or guitars.

It can be used on anything you like, but those tend to be the most popular in my opinion.

BUT… there are no rules, so if subtle bores you, then go crazy!

Also you can start with a preset on most delay plugins, and then tweak to taste.

If you are tweaking your own slap delays, just make sure your delay times don’t fall on exact beat increments.

For example: 32ms and then 64ms.

That would put the delay on the beat, and that’s not technically a slap delay.
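If you want to know where the beat grid actually sits at your tempo, you can compute it (a quick sketch; 120 BPM is just an assumed tempo):

```python
def beat_divisions_ms(bpm):
    """Delay times in ms for common note values at a tempo --
    the values a slapback should steer clear of."""
    quarter = 60000.0 / bpm  # ms per quarter note
    return {name: quarter * mult
            for name, mult in [("1/4", 1.0), ("1/8", 0.5),
                               ("1/16", 0.25), ("1/32", 0.125)]}

grid = beat_divisions_ms(120)
# at 120 BPM: 1/4 = 500 ms, 1/8 = 250 ms, 1/16 = 125 ms, 1/32 = 62.5 ms
```

Pick a slapback time that dodges those values (somewhere in the 60–120 ms range often works) and the echo will read as space rather than rhythm.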

I learned that tip from the great mixer and teacher Dave Pensado, so I wanted to pass it on to you.

Watch the video above to see how I set all this up inside a real mix.

Comment below and let me know your thoughts.

What is the Best Microphone for Recording Vocals? $3,000 vs. $100 Mics

[Editors Note: This blog was written by Scott Wiggins and originally appeared on his site, The Recording Solution, which is dedicated to helping producers, engineers and artists make better music from their home studios.]

You may be asking yourself: What is the best microphone for recording vocals?

I want to ask you a few questions first.

Does expensive gear really matter anymore?

Is the reason your recordings and mixes don’t sound the way you want them to because you don’t have expensive gear?


Yes, it’s nice to have great gear, and I’m not against it, BUT I wholeheartedly disagree that you can’t make GREAT recordings and mixes on budget gear these days.

The question, “What is the best microphone for recording vocals?”, is a lot more complicated than that.

You see, different vocalists sound different on different mics. If you have a bunch of mics to try out on your vocalist the day of your recording, then by all means go for it and pick the best one.

The thing is, this is really subjective. One mic may sound different than the other, but is it better? Maybe… maybe not. Maybe it’s just different and they are both good.

I have a really good buddy named Pat Manske, a Grammy-nominated professional audio engineer who works at a studio in Wimberley, TX called the ZONE.

Did you know the famous engineer Rupert Neve lives in Wimberley?! Pretty cool.

Anyways… Pat let me borrow a really expensive Neumann U87 mic, an RE20, and an SM7B.

All classic, famous mics that can be heard on all kinds of hit records over the years. The RE20 and SM7B are more affordable, but still in the $400 range. The SM7B is what Michael Jackson sang into for the whole Thriller album.

Neumann U87
I put them up against my $300 MXL 4000, which I’ve had for years and use on EVERYTHING, as well as two $100 mics: the Audio Technica 2020 (AT2020) and the workhorse of studios worldwide, the SM57.

MXL 4000

Audio Technica 2020

Shure SM57

In the video below, I sing the same part of a verse into all 6 mics, and I was surprised at the results I found.

All of them sound different. All have little things I like and dislike about them. The thing is, I’m now convinced that a lack of expensive mics is not the reason for not having a great vocal sound.

I am completely sold on “It’s the ear, not the gear, that makes a great engineer”.

I’m the only one standing in my way from getting my mixes to sound like the top dogs in the industry.

It’s not the million dollar studios, it’s not the super vintage analogue gear they have that I don’t, and it’s not the $100,000 mic locker they have to choose from. It’s their years of hard work and putting in the time. It’s their knowledge of how EQ and compression work.

It’s the taste they’ve developed over months and years of finishing mix after mix. That’s it!

With today’s technology and digital plugins that sound just as good as the true analogue gear, the playing field is even. Tons of professional mixers are going completely “IN THE BOX”, meaning they are strictly using DAWs and digital gear.

So to answer your question “What is the best microphone for recording vocals?”:

I’d say the one you already have. If you need help recording a great vocal sound, then check out this post/video I created.

Take a look at the video below and see the results of the mic shoot out for yourself. I’d love to read your comments below on which mic you liked the best and the reasons why.

How To Get Your Audio Files Ready for Distribution

By Jacqueline Rosokoff

Not too long ago we were on a formatting kick. We went over how to correctly enter your release information, and then we covered artwork. Now we’re talking audio, arguably your most important distribution asset.

Stay with us as we tackle everything audio. Continue reading “How To Get Your Audio Files Ready for Distribution”