It’s never been so easy to get stuck in with mixing your own music. We’ll show you how to get started with mixing, regardless of your current skill level.
What’s the difference between a professionally produced piece of music and something cooked up by a beginner? Alongside the quality of the recorded elements and the attention to detail given to a song’s arrangement, a key factor will always lie in the impact and presentation of the mix. Do all the tracks and elements of the mix work together in balance? Does each instrument have its place? Does everything audible retain a sense of individuality yet still ‘glue together’ as a complete whole? If those factors ring true in a song or production you enjoy listening to, then those are the hallmarks of a professional mix.
Today, the tools needed to fully mix a track to a professional standard are within most people’s reach, and they might be more affordable than you would expect. The only thing missing? The knowledge of how to use them to get the sound you’re looking for.
In this article, we’ll give you a primer in mixing music, explaining the essential information you need to make a start on mixing music yourself. We won’t just blindly tell you what to do – we’ll show you how it works and we’ll provide examples so you can hear it in action. Give us an hour, or likely less, and we’ll set you on your first steps towards mixing to a professional standard.
What is Mixing in Music Production?
The simple answer: in music production, mixing is the process of taking the elements of a recorded song and making them fit together in sonically pleasing ways, using level balancing, panning and audio processing, guided by a blend of creative and technical thinking. The end result gives each instrument or channel its own space to be heard alongside the others.
The Mixing Conundrum
For this article, we’ll be working with a ‘model mix’. In this case, it contains 13 channels including 7 individual drum channels, a bass channel, 2 guitar channels, and 3 synth channels.
When we play all the raw channels of our mix together, it sounds OK, but it has a long way to go before it sounds commercial. For example, the bass channel is too quiet in the mix.
To improve this, the first temptation you might have is to turn the Bass channel up. The problem is that, if we do so, we may drown something else out – in this case, the drums. Hear below how our bass is more present, but the drums have suffered and are getting lost behind the bass.
We could continue this sequence by turning the drums up to compensate, BUT we could then easily drown out another element in the process, and so on. You can probably already see the problem here. By the end, we’ll be exactly where we started in the relative levels of each channel, just louder in absolute terms. All we would have done is chase our tail, while improving very little, if anything at all.
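If you’re numerically minded, the dead end is easy to demonstrate: gains in dB simply add, so raising every fader by the same amount changes nothing about the balance. Here’s a tiny Python sketch – the channel names and levels are made up purely for illustration:

```python
# Hypothetical fader levels in dB (not the actual levels in our model mix)
levels_db = {"drums": -6.0, "bass": -12.0, "guitars": -9.0}

# Turn the bass up 6 dB... then the drums feel buried, so we raise them 6 dB,
# then the guitars, and so on. The end result of that chase:
boosted = {name: db + 6.0 for name, db in levels_db.items()}

# The differences between channels are identical either way:
for name in levels_db:
    print(name,
          levels_db[name] - levels_db["drums"],
          boosted[name] - boosted["drums"])
# Every channel is 6 dB louder in absolute terms, but the relative balance
# of the mix is exactly where it started.
```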
Mixing with audio processing is the best solution to this exact problem. Instead of chasing our tail by turning things up with level balancing, we instead remove and sculpt parts of one track (parts that may not be needed) to give another track some sonic space to breathe more easily. This gives both elements – indeed, multiple tracks in a mix – a fairer way to exist together.
By the end of this article, the mixing techniques you are about to learn will result in the finished mix example below. Read on to learn what we did and our reasons behind why we chose these mixing techniques to get this mix sounding the way it does.
Mixing - A Broader Understanding
Mixing is the process and the art of fitting multiple elements together so that instrument and vocal tracks don’t compete sonically. Decisions need to be made about which elements should take precedence and priority, and which should be pushed into the background. Most elements in a mix will have some form of audio processing applied – but to different extents.
Another aim of mixing music is to make sure that the final ‘mixdown’ (‘bounce’, ‘render’ or ‘output’) sounds balanced. This ‘balance’ is typically achieved across three dimensions: frequency and tone, dynamics and loudness, and stereo width and depth.
With that overall intention in mind, over the coming sections we’re going to take you slowly and simply through the first steps: what to look out for, and what to do, to achieve all three.
Using EQ to Fit Elements Together
An EQ (short for equalizer) is a powerful tool for mixing. It sculpts different frequencies by boosting or cutting their power. Even if you’ve never used a DAW before, you may have encountered a similar effect with the graphic EQ on a home stereo or the Bass and Treble controls on a basic car stereo. EQs are all about tone shaping: making things brighter, darker, less harsh, more bass-heavy, less ringy, and so on.
A mixing engineer can go deeper than these controls, using a parametric EQ such as our own F6 Dynamic EQ to boost or cut in a completely custom way.
What are Frequencies in Music?
Most people are used to describing a sound or musical note as being ‘lower’ or ‘higher’ in pitch. Frequency and pitch can be used interchangeably in some ways, but with frequency we are often discussing bands and ranges of frequencies in the context of tone. Frequencies are measured in Hertz (Hz), which gives us far more precision than pitch, which usually describes exact note values.
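As a concrete bridge between the two ideas: in standard equal-temperament tuning, every note maps to an exact frequency – each octave doubles the frequency, and A4 is fixed at 440 Hz. Here’s the relationship as a small Python sketch:

```python
# Equal-temperament pitch-to-frequency: each semitone multiplies the
# frequency by 2**(1/12), anchored to A4 (MIDI note 69) = 440 Hz.
def note_to_hz(midi_note: int) -> float:
    return 440.0 * 2 ** ((midi_note - 69) / 12)

print(note_to_hz(69))  # 440.0 (A4)
print(note_to_hz(33))  # 55.0  (A1, deep in bass-guitar territory)
```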
How EQ Works
An EQ takes a particular band of frequencies and makes it louder or quieter (AKA, applies a boost or cut). This boost or cut can be set to be shallow or deep, and the frequency range it affects can be wide or narrow. It can be applied as a high-pass, low-pass, bell or shelf filter.
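If you’re curious what’s under the hood of that bell shape, here’s a minimal Python sketch using the widely shared RBJ ‘Audio EQ Cookbook’ coefficients. The track, sample rate and settings below are stand-ins for illustration – a DAW EQ plugin does all of this for you:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    """Bell ('peaking') EQ: boost or cut by gain_db around f0 Hz.
    Coefficients follow the RBJ 'Audio EQ Cookbook' peaking filter."""
    a_gain = 10 ** (gain_db / 40)                 # square root of linear gain
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)                  # q sets how wide or narrow the bell is
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return lfilter(b / a[0], a / a[0], x)

# Stand-in for a real track: one second of noise at 44.1 kHz
fs = 44100
track = np.random.randn(fs)
brighter = peaking_eq(track, fs, f0=3000, gain_db=4)    # gentle presence boost
less_muddy = peaking_eq(track, fs, f0=250, gain_db=-6)  # cut in the 'mud' region
```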
How EQ is Used in Mixing
In our model mix, we noticed that the Bass and Synth 1 were competing for tonal space. We can’t simply turn one up as the other will be swamped. Let’s listen to these two elements in isolation now.
The reason these two elements clash so badly is that they are masking each other with tonal energy. Their frequencies overlap and clash, so neither is clearly audible – but remember, we want them to be. So what do we do? In the example below, we’ve isolated the frequency range where the two instruments clash. Listen here, then go back to the full-range example above to hear the clashing frequencies in context.
This is exactly where EQ will shine. By removing these frequencies from Synth 1 and leaving them in the Bass channel, there’s no more clash, and we hear the Bass more clearly when both are played together. We could also choose to compensate for the loss in the Synth 1 by strengthening or boosting it elsewhere in its frequency spectrum, although we may have to make room for it by shifting a third channel somewhere else later in the mix… we’ll see.
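In code terms, the move we just made might look like the sketch below, reusing the peaking_eq helper from the sketch above. The 120 Hz clash point is a made-up figure – in practice you find yours by ear and with an analyzer:

```python
# Continuing from the sketch above (peaking_eq and fs already defined).
# Stand-ins for the real channel audio:
bass = np.random.randn(fs)
synth_1 = np.random.randn(fs)

# Suppose the clash sits around 120 Hz. Carve it out of the synth only:
synth_1 = peaking_eq(synth_1, fs, f0=120, gain_db=-8, q=1.4)

# Optionally give the synth some energy back where nothing competes:
synth_1 = peaking_eq(synth_1, fs, f0=2000, gain_db=3, q=0.8)

# The bass is left untouched, so it now owns the 120 Hz region.
```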
This EQ principle can also be applied when mixing all the other channels together into a coherent whole. In the example below, we’ve taken our mix, noticed where other frequencies clash and made decisions about which frequencies should be reduced from certain instruments.
There’s a lot more that can be done with EQ, but this is its fundamental power in most mixing situations. If you’re starting out on your mixing journey, we recommend you grasp this EQ workflow first.
Using Compression to Control Dynamic Range
Compression is one huge topic in music. We’ve written extensively about this subject, which gets as deep as you let it. On the surface, though, compression is always the same thing.
A compressor reduces the dynamic range of audio passing through it. By dynamic range, we mean the difference in level (volume) between a signal’s quietest and loudest parts. Typically, a compressor does this by reducing the level of (turning down) any part of a signal that crosses a set threshold. With the loud parts turned down, the whole signal can then be turned back up with makeup gain, raising the quieter parts and the average level, which makes it much easier to set track levels in a mix.
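For the curious, here’s a deliberately simplified Python sketch of that idea. Real compressors differ in how they detect level and shape the gain curve, but the threshold-and-ratio logic is the same:

```python
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0, makeup_db=0.0):
    """Feed-forward compressor sketch: follow the signal level, then turn
    down anything that crosses the threshold, scaled by the ratio."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for i, sample in enumerate(x):
        level = abs(sample)
        coeff = atk if level > env else rel          # fast rise, slower fall
        env = coeff * env + (1.0 - coeff) * level    # envelope follower
        level_db = 20 * np.log10(max(env, 1e-9))
        over_db = max(level_db - threshold_db, 0.0)
        reduction_db = over_db * (1.0 - 1.0 / ratio)  # gain reduction above threshold
        out[i] = sample * 10 ** ((makeup_db - reduction_db) / 20)
    return out
```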
How Compression is Used in Mixing
Depending on the source material a compressor is working on, dynamic range compression is used in subtly different ways.
A compressor may be used to reduce transients on an overly dynamic source – for example, drums that have been recorded without any subsequent processing. A compressor with a fast attack time (a FET model like our CLA-76 is ideal for this) can be used to reduce transients, whether or not the whole track will be turned up on output. Usually, other settings will include a fast release time, a relatively high threshold and quite a high ratio.
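In terms of our earlier compressor sketch, that recipe translates roughly as follows – the values are illustrative starting points, not magic numbers:

```python
# Continuing from the compressor sketch above. The snare is a stand-in signal;
# the settings echo the transient-taming recipe described in the text:
fs = 44100
snare = np.random.randn(fs)
tamed = compress(snare, fs,
                 threshold_db=-10.0,  # relatively high threshold
                 ratio=8.0,           # quite a high ratio
                 attack_ms=1.0,       # fast attack catches the transient
                 release_ms=50.0)     # fast release lets the body through
```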
Check out this example of our snare before and after transient compression.
Another use of a compressor is to add character and power to a track. This will usually be done using a slower, more characterful compressor like CLA-2A, which mostly needs the input and output gain controls set to work its magic.
Have a listen to our guitar before and after processing using CLA-2A with its slow, characterful compression.
One other key use case for a compressor is on a bus channel. This is a channel that has multiple channels routed to it and is used to affect all of them as a single output (also known as a sum). Your DAW’s master output channel is an example of a bus, but you might also have other buses, ‘sub’ or ‘submix’ channels that group multiple elements.
For example, a drum bus will group all drum kit elements together on a single output channel, allowing you to add processing to the output – using a compressor for example!
A bus compressor’s settings will vary, but you can expect to see a quick attack and a medium-slow release, coupled with a low threshold and a medium ratio. This kind of ‘glue’ action brings a group of tracks together and makes them sound more cohesive.
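Reusing our earlier compressor sketch, a bus works out to nothing more than a sum of channels processed as one. All the signals below are stand-ins:

```python
# Continuing from the compressor sketch above. Stand-ins for the kit's channels:
fs = 44100
kick, snare, hats, toms = (np.random.randn(fs) for _ in range(4))

# Route everything to one bus by summing, then compress the sum gently:
drum_bus = kick + snare + hats + toms
glued = compress(drum_bus, fs,
                 threshold_db=-25.0,  # low threshold
                 ratio=2.0,           # medium ratio
                 attack_ms=3.0,       # quick attack
                 release_ms=200.0)    # medium-slow release
```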
Check out our drum bus before and after compression with the example below.
Considering Width, Depth and Reverb
If correct EQ usage lets you balance a song in terms of its frequencies and tone, and compression lets you balance a song in terms of its power, dynamic range and loudness, then there’s a third essential type of balance to get your head around in mixing: a mix’s width and depth.
Reverb: a Sense of Depth and Space
No mix is complete without reverb. Unless your recording was made in a large space, you’ll likely have to simulate one instead. You’re in good company: artificial reverb has been used for decades to give songs an impressive sense of space.
The history of studio reverb moves from dedicated chambers, through huge plates and physical springs carrying the sound, to early programmable digital reverb units. Today’s plugin reverbs give a huge amount of control over every aspect of a reverb’s character and behavior, and they come with ready-made presets for vocals, drums, guitars and many more signals.
How Reverb is Used in Mixing
You can simply drop a reverb plugin onto a channel and start playing around with flavors you like the sound of… but a much wiser way to work with reverb is to drag the reverb plugin onto a Return, Aux or Bus channel and route multiple mix channels to it. This gives you the flexibility to feed multiple things into one reverb, plus the ability to process the reverb signal with dedicated effects.
To use a reverb processor, set it loud before dialing in your ideal reverb character, then slowly reduce the level of the reverb to taste. Once you can no longer hear the reverb signal against the original channels that fed it, turn the reverb back up slightly – this will ensure you have enough reverb presence without overcooking things. Here’s our model mix with reverb added tastefully.
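If you’d like to see the send/return idea in code form, here’s a minimal Python sketch using a toy impulse response. A real plugin reverb is far more sophisticated, but the routing is identical:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
vocal = np.random.randn(fs)   # stand-in dry channels
snare = np.random.randn(fs)

# A toy 'room': half a second of exponentially decaying noise.
ir = np.random.randn(fs // 2) * np.exp(-np.linspace(0, 8, fs // 2))

# Each channel sends some of its signal to the one shared reverb...
send = 0.3 * vocal + 0.2 * snare
wet = fftconvolve(send, ir)[: len(vocal)]

# ...and a single return fader sets how much reverb reaches the mix.
# Push it up to hear the character, then back it off to taste.
return_level = 0.2
mix = vocal + snare + return_level * wet
```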
Certain reverb types suit certain instruments: plate reverb sounds great on vocals; room or hall reverb is a go-to for drums; spring reverb is traditional with guitars… but these are rules of thumb only.
For some inspiration, check out 4 Famous Vocal Mixing Tricks Using Reverb and Delay.
Why Music Comes in Stereo
Most music is played back in stereo. (We’ll ignore the fancy-schmancy ‘immersive’ and ‘3D’ audio formats for now.) But not everyone understands what the true effect of stereo is.
By having two speakers in front of you – one on the front-left and one on the front-right – we effectively create a virtual sound stage or ‘stereo field’. A mixing engineer can place an instrument at any point on this horizontal line. Close your eyes and you’ll be able to point to a guitar “just about there”, or to a backing vocal “coming from over there”.
Having control over this width, and knowing what to do with it, is part of the mixing engineer’s job. Since this is an introductory article, we won’t go too deep but will note a few attitudes to stereo that you should adopt.
Basic Panning and Width in Stereo
Panning is the positioning of an element (or many) at a distinct place (or places) within the stereo field – at a certain point between a listener’s left and right speakers. If an element is panned to the extreme left, its sound will only come out of the left speaker. As it moves towards the center, more energy starts to come from the right speaker, until the two speakers have equal signals at the center.
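One common way a pan control achieves this is the equal-power pan law, sketched below in Python – the guitar is a stand-in signal, and your DAW may use a slightly different law:

```python
import numpy as np

def pan(x, position):
    """Equal-power pan: position -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    At center, each speaker gets ~0.707x the signal (a -3 dB dip), so perceived
    loudness stays roughly constant as the source moves across the field."""
    angle = (position + 1.0) * np.pi / 4.0       # map [-1, 1] onto [0, pi/2]
    return np.cos(angle) * x, np.sin(angle) * x  # (left, right)

guitar = np.random.randn(44100)    # stand-in mono channel
left, right = pan(guitar, -0.4)    # a little left of center
```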
Panning something to the extreme left or right is rarely done in modern mixing, although it’s not been entirely abandoned. More often, you’ll find instruments placed at a point between hard left and hard right.
Width is a slightly harder concept. A source that’s been recorded with a single microphone will have a precise location when panned. A source that’s been recorded using two microphones is a stereo source to start off with. This stereo pair can itself be panned, but the result will be a signal spread over a wider space – for example, from near the extreme left to a third of the way from the center to the left.
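One common way to manipulate width after the fact is mid/side processing – not something specific to our model mix, but a useful mental model. Here’s a minimal sketch:

```python
import numpy as np

def set_width(left, right, width=1.0):
    """Mid/side width control: 0.0 collapses a stereo source to mono,
    1.0 leaves it untouched, and values above 1.0 widen it."""
    mid = (left + right) / 2.0     # what the two channels share
    side = (left - right) / 2.0    # what makes them different
    side *= width
    return mid + side, mid - side  # back to left/right

# e.g. narrow a stereo synth pair so it doesn't crowd the extremes:
left, right = np.random.randn(44100), np.random.randn(44100)  # stand-ins
narrower_l, narrower_r = set_width(left, right, width=0.6)
```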
Mono and Central Elements in the Stereo Field
Anything placed dead center in the stereo field is effectively mono – not stereo at all. Perhaps the first rule of mixing for stereo is knowing which elements can and should be kept in mono.
Lead vocals, for example, are generally kept in the center of the stereo image. Kick drums, snares and bass guitars will often feature here too; the idea is that these elements are so vital to any song that they should remain right down the center line, with equal power in both speakers.
Bass is another element that usually stays in mono, although this is for compatibility reasons. Low frequencies are said to be less tolerant of stereo, though the exact reasons are beyond the scope of this article.
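If you ever need to enforce this yourself, one common trick is ‘bass mono-ing’: split the signal at a crossover and sum everything below it to mono. A minimal sketch – the 120 Hz crossover here is an assumption, not a rule:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def mono_below(left, right, fs, crossover_hz=120):
    """Keep everything below crossover_hz in mono; leave the highs in stereo."""
    sos_lo = butter(4, crossover_hz, btype="lowpass", fs=fs, output="sos")
    sos_hi = butter(4, crossover_hz, btype="highpass", fs=fs, output="sos")
    mono_lows = sosfilt(sos_lo, (left + right) / 2.0)  # shared low end
    return mono_lows + sosfilt(sos_hi, left), mono_lows + sosfilt(sos_hi, right)
```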
All that being said, a vocal or snare can still be drenched in stereo reverb, so long as the original dry signal stays clear and central.
How Panning is Used in Mixing
Panning and width have no hard rules, but below we’ve prepared our model mix using the rules of thumb above: we’ve kept all the most important elements, including the bass, in the center, while placing other elements further out in the stereo field. This gives them some separation without changing their levels or frequency profiles – meaning that panning is yet another tool in your toolkit for making professional-sounding mixes.
More Mixing Knowledge Awaits
In this article, we’ve covered three fundamental parts of mixing, explaining not just how EQ, compression and reverb work – but also why they work. We’ve taken you from a raw, recorded set of tracks to a far more coherent ‘mix’ – even if we’ve only used the basic processors to get things started.
This is simply the beginning of learning the ropes of mix engineering. To go further, be sure to check out more from Waves Learn and sign up for the Waves Newsletter to get more mix wisdom and tips delivered straight to your inbox.
So what comes next? Anyone wanting to move forward on their own should check out these potential next steps for their mixing journey…