
Mix Rescue: Simon Fitzpatrick

Remixing Your Track By Paul White
Published November 2006

Careful EQ and some judicious compression, using Apple Logic's built-in processors, helped bring out the best from the acoustic guitar parts.

This month we help reader Simon Fitzpatrick to bind his synthesized, programmed, and acoustic sounds together into a cohesive mix.

This month's song 'Take My Time' was sent in by Simon Fitzpatrick, an SOS reader who is based near Dublin. He told me that the song was inspired by the pressures of modern life, and went on to say that these same pressures also affected the scheduling of the recording, with the result that the vocals were recorded over several sessions at different levels, though he did measure the mic distance to keep that consistent from session to session.

A BLUE Baby Bottle was used as the main recording mic for vocalist Rian, at a distance of 20cm, and the preamp was a rather nice Avalon VT737SP, with a small amount of EQ added during recording. Simon tried to compensate for slight tonality changes between sessions using EQ, and although he says he can still hear small differences, you eventually reach a point in any production where you have to ask: 'will it affect sales?'

Original Mix & Arrangement

The general line-up comprised programmed drums, synth bass, steel- and nylon-strung acoustic guitars, sampled strings, synth pad (from an Alesis Ion), and sampled acoustic piano. The lead vocal was augmented by two harmony vocal tracks comprising left and right channels. That meant that everything except the vocals and acoustic guitars was either sampled or synthesized.

Simon's original mix sounded very respectable, but he was concerned that he wasn't achieving the transparency and clarity he wanted, and I could see what he was getting at. To me the problem seemed to be that he'd tried to give every part of the mix maximum clarity, which in turn meant that all the parts were pushing for front position. This wasn't disastrous, by any means, but I felt that by re-balancing the song and choosing different reverbs it would be possible to get closer to the result he wanted. I liked the way both steel-strung and classical acoustic guitars had been used to give the otherwise electronic backing a real sense of sounding like a band, and the vocal parts were well recorded and strong.

I didn't change the arrangement of the song in any way, as that seemed to me to be very well thought out and sympathetic, but I did push the pad and string keyboard parts a little lower in the mix, as well as treating one vocal section to a 'transistor radio' effect courtesy of Altiverb's Small Speaker impulse response, downloaded from Audio Ease's web site. Simon said he probably wouldn't use this effect in his final version, but was perfectly happy for me to leave it in my version for this article to demonstrate the production technique. He also sent me some alternative piano sections and string parts a couple of days before our deadline, so I replaced the pianos as instructed, but layered the new string sections with a lower amount of the old string part, as I really liked the effect.

The main vocal reverb that I set up was Universal Audio's Plate 140 plate emulation with an Altiverb ballroom on the second send to add life to the acoustic guitar and piano samples without washing them out. I added a little plate to the pianos too. A third send was set up to feed the UAD1 Roland Space Echo plug-in, and I added this to the main and backing vocals to help make them more lush. In isolation, the echo plus plate reverb sounded pretty heavy-handed, but in the context of the mix the effect balance seemed subjectively right for the song. Vocal treatments of this kind are very much down to personal preference, so there are any number of alternative 'right' ways to approach this.

Rescued This Month

The artist Rian is the product of a collaboration between 24-year-old singer/actor Rian Sheehy Kelly, songwriter/producer Simon Fitzpatrick (of Red Sky Music), and accomplished session guitarist Karl Breen. Rian met songwriter and producer Simon Fitzpatrick in 2005, and following a successful debut collaboration with a song called 'Red Sky' (eventually produced by renowned producer Billy Farrell), they are now working on other tracks to be released in the future.

Drums & Percussion

My first step was to create subgroups for all the tracks, which in the main comprised drums/percussion, acoustic guitars, keyboards, and vocals. The bass was adjusted individually, but I grouped the piano parts with the other keyboard elements. I believe the drum samples came from the single-hits section of Spectrasonics Stylus, and the kick had a bit of an odd ending to its decay, so I gated it. I also used EQ to dip out some boxiness in the 200Hz region and to add some definition at around 4.3kHz.

Flanging was added to just the introduction to the main drum part using automation, making an interesting spot effect.

This improved the sound a lot, but it still didn't sound 'acoustic' enough for my liking, and the snare sample was also a bit lightweight. In the end I resorted to my increasingly common trick of using Apple Logic's Audio To Score facility to extract MIDI hits from the kick and snare tracks, then assigned these to trigger drum samples, in this case from Toontrack's EZ Drummer. Mixed a few decibels below the existing parts, the EZ Drummer hits were just what I needed to make the kick and snare appear a bit more real. I can honestly say I'd never taken this approach to beefing up drum tracks until I started doing these Mix Rescue projects, but it is a very worthwhile and effective quick fix, provided that the drums are on separate tracks and aren't suffering from too much spill.
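
If you fancy experimenting with the same trick outside your sequencer, here's a minimal Python sketch of the idea, with straightforward onset detection standing in for Logic's Audio To Score. It assumes the librosa and pretty_midi libraries, and the file names and MIDI note number are just placeholders rather than anything from this session.

```python
# A rough stand-in for the hit-extraction step described above: detect the hits
# on a bounced kick or snare track and write them out as General MIDI drum notes
# for a sampler to trigger. librosa and pretty_midi are assumed; file names and
# the note number are placeholders.
import librosa
import pretty_midi

KICK_NOTE = 36  # General MIDI kick drum

def drum_track_to_midi(audio_path, midi_path, note=KICK_NOTE, velocity=90):
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    # Onset detection stands in for Logic's Audio To Score hit extraction.
    hits = librosa.onset.onset_detect(y=y, sr=sr, units='time', backtrack=True)
    pm = pretty_midi.PrettyMIDI()
    drums = pretty_midi.Instrument(program=0, is_drum=True)
    for t in hits:
        drums.notes.append(
            pretty_midi.Note(velocity=velocity, pitch=note, start=t, end=t + 0.1))
    pm.instruments.append(drums)
    pm.write(midi_path)

# For example: drum_track_to_midi('kick_bounce.wav', 'kick_hits.mid')
```

The resulting MIDI file can then drive a drum sampler, mixed a few decibels below the original hits, just as described above.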

Very little processing was used on individual tracks, though at Simon's suggestion I did flange the intro to the main drum entry under automation control. The kick gate and EQ I've already mentioned, and I also compressed the high piano line to try to even up the level, as some notes were tending to dominate. I also did a little destructive editing to match the level of some overly quiet notes.

The synth-bass sound seemed to be trying to take over the role of a bass guitar to the extent that I wondered why they hadn't chosen a bass guitar in the first place, so I used Noveltech's Character plug-in to give this a bit more mid-range definition, following it up with a Waves L1 limiter to catch the odd hot note. This gave a strong sound with a lot of depth, and it didn't seem to get in the way of any of the other mix elements, which is always encouraging.

To diffuse the Alesis Ion pad sound, I chose another Universal Audio plug-in, this time the Roland Dimension D, as its very subtle chorus adds a gentle sense of movement and also helps push the sound back in the mix. For the string part, reverb was all that was needed. If you don't have the Dimension D plug-in, a gentle stereo chorus, or a pitch shifter adding parts a few cents up and down, will create a nice spatial effect.
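
The 'few cents up and down' alternative can also be roughed out in code. The sketch below is only an illustration under assumed tools (librosa and soundfile); the seven-cent detune, the wet level and the file names are example values, not settings used on this mix. It pans a slightly sharp copy left and a slightly flat copy right around the dry signal.

```python
# Detune widening as a rough chorus/Dimension D substitute: slightly sharp and
# flat copies panned hard left and right around the dry signal. librosa and
# soundfile are assumed; the 7-cent shift, wet level and file names are examples.
import numpy as np
import librosa
import soundfile as sf

def detune_widen(in_path, out_path, cents=7.0, wet=0.5):
    y, sr = librosa.load(in_path, sr=None, mono=True)
    up = librosa.effects.pitch_shift(y, sr=sr, n_steps=cents / 100.0)
    down = librosa.effects.pitch_shift(y, sr=sr, n_steps=-cents / 100.0)
    left = (1 - wet) * y + wet * up      # dry stays centred, sharp copy left
    right = (1 - wet) * y + wet * down   # flat copy right
    sf.write(out_path, np.stack([left, right], axis=1), sr)

# For example: detune_widen('ion_pad.wav', 'ion_pad_wide.wav')
```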

Need Help With Your Mix?

If you're having trouble with a mix, then you can submit your track for the Mix Rescue treatment, but please contact us first via the email address below. Please include a daytime contact telephone number, some information about how you recorded and mixed your version of the track, and your views about what aspects of your mix are causing you most concern.

mixrescue@soundonsound.com

Lead Vocal

Moving on to the lead vocal, I used a little pitch-correction, setting it to operate quite slowly in chromatic mode just to pull in any sustained notes that might tend to wander off. The lead vocal was actually very well sung, but subtle pitch-correction can help tighten up the sections where the harmonies join in. Both the main and backing vocals were treated to a mixture of plate reverb and tape echo, with the lead vocal also warmed up very slightly in the mid-range using the UAD1 Neve EQ plug-in and levelled using Logic's own compressor with a 3.2:1 ratio. The Altiverb small-speaker effect was inserted directly on this channel and brought in for the required section using automation.
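
Slow chromatic correction is best left to a dedicated plug-in, but the underlying arithmetic can be illustrated very crudely: given a bounced clip containing a single sustained note, measure how far its median pitch sits from the nearest semitone and nudge the whole clip by a fraction of that amount. The Python sketch below assumes librosa and soundfile, and is nowhere near a real corrector, which works note by note and glides into its corrections rather than jumping.

```python
# A deliberately crude, whole-clip take on chromatic pitch correction: estimate
# the median pitch of one bounced sustained note and shift the clip towards the
# nearest semitone. librosa and soundfile are assumed; file names are placeholders.
import numpy as np
import librosa
import soundfile as sf

def correct_sustained_note(in_path, out_path, strength=0.7):
    y, sr = librosa.load(in_path, sr=None, mono=True)
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz('C2'),
                            fmax=librosa.note_to_hz('C6'), sr=sr)
    midi = librosa.hz_to_midi(f0)
    # How far (in semitones) the note sits from the nearest chromatic pitch.
    deviation = np.nanmedian(midi - np.round(midi))
    corrected = librosa.effects.pitch_shift(y, sr=sr,
                                            n_steps=-deviation * strength)
    sf.write(out_path, corrected, sr)

# For example: correct_sustained_note('vocal_note.wav', 'vocal_note_tuned.wav')
```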

Although I didn't process or EQ any of the individual acoustic guitars, I did process the guitar subgroup using some very gentle EQ to take out mid-range boxiness and honkiness at 175Hz and 1.2kHz, while adding a slight boost at 8.5kHz to open out the high end. Noveltech's Character was again used sparingly to add a bit more life to the guitar sound, and I used a similar Character setting to open up the keyboards without making them sound too 'in your face'. I'm still not sure exactly how this plug-in works, but rather than push just the top end as an exciter does, it seems to take account of the frequency spectrum of the sound being processed (dynamically), then accentuates the peaks that give it its tonal signature. Overall guitar group compression was provided by Logic's compressor, this time with a 6.3:1 ratio, just to keep the overall level more constant.

Once the main and backing vocal parts had been mixed, I used Noveltech's new Vocal Enhancer plug-in, which is a vocal-specific processor using the same technology as Character, over the whole vocal group. Currently this, like Character, is only available for TC Electronic's Powercore DSP platform. This was applied only above 5kHz to add sizzle without edginess and it worked really well in pushing the vocal to the front of the mix and creating a lively, contemporary sound. If you don't have one of these, you can add air and transparency to a vocal line simply by using a broad, gentle boost centred at 10-12kHz.
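
If you want to try that kind of 'air' lift outside the DAW, the sketch below implements a wide, gentle peaking EQ using the standard RBJ 'cookbook' biquad coefficients, centred at 11kHz. It assumes the scipy and soundfile libraries, and the gain, centre frequency and Q values are simply plausible starting points, not settings taken from this mix.

```python
# A broad, gentle 'air' boost: an RBJ 'cookbook' peaking EQ centred at 11kHz
# with a low Q and a couple of dB of gain. scipy and soundfile are assumed;
# gain, frequency and Q are example settings, and file names are placeholders.
import numpy as np
from scipy.signal import lfilter
import soundfile as sf

def peaking_eq_coeffs(fs, f0=11000.0, gain_db=2.5, q=0.7):
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def add_air(in_path, out_path):
    y, fs = sf.read(in_path)           # works for mono or stereo files
    b, a = peaking_eq_coeffs(fs)
    sf.write(out_path, lfilter(b, a, y, axis=0), fs)

# For example: add_air('vocal_group.wav', 'vocal_group_air.wav')
```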

Remix Reactions

Simon Fitzpatrick: "Right from the start I noticed a difference: the picked acoustic now has a far better stereo image, and Paul has managed to get the guitar sounding crystal clear throughout. In fact, everything is much clearer — the vocals, the drums, and the strings. The relative volumes have also been improved, for example the piano in the middle section now stands out more, which is perfect in the absence of the lead vocal, and the strings sit perfectly behind the mix. On reflection, I think I had just about everything in my mix faded up to the same level, and was compensating with heavy subtractive EQ, which was never going to work. The drums work better now, with just the right amount of ambience. Although mine were clear, they always sounded either slightly sequenced or a tad synthetic.

Mix Rescue"The vocal sound is definitely more pleasing, and the harmonies sit well behind the lead vocal. There is more reverb on the harmony than I would normally consider using — I guess you read so many articles warning of the dangers of using too much reverb that it is refreshing to see longer reverbs actually working. I had originally used a bass guitar sound, but couldn't get it to sit well. My background for many years was in dance music, and dealing with real bass has been a nightmare recently so I ended up going for a sustained analogue bass sound here. Had I known Paul was going to do the mix I probably would have settled on the real thing!

"For this Mix Rescue I had to bounce each of the mix tracks down, and although this was undoubtedly a chore, it had some unexpected benefits. Several bad edits were far more obvious when the levels were normalised, especially on the vocals. I noticed a wide array of faults — wobbly legato on the pads, patchy velocity on the strings, places where the acoustic guitar performance could be best replaced by another section from elsewhere, as well as the usual clicks and pops. As tedious as this was, it can make quite an impact, so in future I'm considering listening to each track carefully in solo before mixing.

"This has been an extremely worthwhile experience. I've been reading Paul's tutorials for many years (including a book or two) and it was great to see his principles applied to one of my own tracks."

Mix Processing

For processing the whole mix, I eventually settled on my Drawmer DC2476 Masterflow processor, as I've never yet found a plug-in that gets close to it for polishing a mix. This was used to add the merest hint of 'air' EQ to the whole mix up at around 12kHz, and also to add some very gentle multi-band compression with the threshold at -40dB and ratios in the 1.1:1-1.2:1 range. This adds density and loudness to the mix without obviously affecting the dynamics, as mild compression is being applied to signals at all levels.
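
To see why such a low ratio leaves the dynamics essentially untouched, here's a quick single-band arithmetic sketch of a static compressor gain computer (purely illustrative, and not how the DC2476 itself is implemented): with a -40dB threshold and a 1.15:1 ratio, even a full-scale peak only loses around 5dB, and quieter material proportionally less.

```python
# Static gain computer for a downward compressor, single band only, to show the
# scale of the gain reduction at a -40dB threshold and a 1.15:1 ratio.
def gain_reduction_db(level_db, threshold_db=-40.0, ratio=1.15):
    if level_db <= threshold_db:
        return 0.0
    over = level_db - threshold_db
    return over - over / ratio  # dB by which the signal is turned down

for lvl in (-30, -20, -10, 0):
    print(f"{lvl:>4} dB in -> {gain_reduction_db(lvl):.1f} dB of gain reduction")
# Prints roughly 1.3, 2.6, 3.9 and 5.2dB of reduction respectively.
```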

Rian's vocals were brought to the front of the mix using a combination of Noveltech's new Vocal Enhancer and Apple Logic's built-in Compressor.

Perhaps most important was Drawmer's three-band tube-saturation emulation section, where the low end was warmed up below 120Hz, the mid-range processed barely at all, and the high end livened up a little above 3.2kHz. This seemed to crystallise the mix by emphasising the space between the sounds and by reducing mid-range clutter, and it also warmed up the bass end. This processor also includes a peak limiter to catch transients, but I tried to keep below the limiter threshold so that I could follow the DC2476 with TC Electronic's Brickwall Limiter — one of the best limiters I've heard. Because Logic is still seemingly incapable of working properly with external hardware inserted into its main output using the I/O plug-in, I routed everything via a buss and inserted Logic's I/O plug-in into the buss insert point. This was then used to route the digital in and out of my MOTU 828 MkII interface to the Drawmer's S/PDIF sockets. This setup allows you to mix the song via the external hardware in real time.

Once balanced, the song virtually mixed itself, though I applied an automated fade-out, as the song had no real structured ending. Some of the parts had already been faded, so I had to match my own fade to these to get a natural result. If this were to be a commercial release I might spend time automating odd vocal phrases and syllables to achieve absolute consistency, but sometimes too much polishing robs a mix of its magic. Ultimately, if a mix sounds good using little or no mix automation, you shouldn't feel guilty about it! I did do a little tweaking, though, for example reducing the level of an acoustic guitar string squeak that fell just out of time with a hi-hat, but nothing too surgical was required.

My final balance had the acoustic guitars, bass and (after their entry later in the song) drums providing the main support for the vocals, with the pianos adding interest and punctuation. The strings and pads played more of a supporting role and also helped the song build towards its end section.

The Big Picture

Although I use Logic to tackle these mix projects, many of the plug-ins I use are available for all plug-in formats and those Logic plug-ins that I do use tend to have similar counterparts in other sequencers. Usually the approach is more important than the actual tools used, though you do need a decent set of monitors — I used M-Audio's EX66s for this session. I find it helps to set up the groups, name the tracks and check for odd noises in the first session, then leave the mix alone for a few hours before you come back to set an initial balance. Once you think you have something that works, work on something else for a while then come back to it later and see if you still like what you have got.

No two mixing jobs are the same, and in this case an important part of the decision-making process was deciding which parts could be dropped in level (specifically the strings and pad keyboard) to keep the mix sounding open. Often you can spend too much time worrying about what to put in a mix when it might be more productive to think about what can be left out. 

Hear The Differences For Yourself!

Listen to the changes I made by checking out the following audio examples, captured during the mixing session. They're available for download at www.soundonsound.com/sos/nov06:

The original bass synth sound, without any extra processing.

The same synth sound processed with Noveltech Character and the Waves L1 Ultramaximizer.

This file shows how I made one of the drum fills into a feature by flanging it selectively under automation control.

The synth backing pad was treated to subtle chorusing courtesy of Universal Audio's Roland Dimension D plug-in.

On this submix of the guitars you can hear the effects of my processing choices: EQ cut at 175Hz and 1.2kHz, EQ boost at 8.5kHz, some subtle Noveltech Character, a small amount of overall compression, and a dash of Audio Ease Altiverb.

The final lead vocal benefited from a little pitch-correction, and a combination of plate reverb and tape echo effects. I also boosted a little with EQ in the mid-range and applied some 3.2:1-ratio compression.

Using automation I inserted a radio preset from Audio Ease's Altiverb just for selected sections of the lead vocal.

The mix that Simon Fitzpatrick sent in when asking for help.

My completed remix.