
Scott Martin Gershin on the Cinematic Adventure

Jul 19, 2009

High profile sound designer & mixer Scott Martin Gershin took a few minutes to speak with Waves about his current projects as well as his beginnings in the business.

THE SONIC ROAD MAP

What are you up to at the moment?

A lot’s going on today: finishing a mix, prepping for my next project, building an audio playroom, and I got rear-ended this morning! Just a normal day.

How has Waves been solving problems for you lately?

On one of my films coming out this fall, Gamer, I did a mix in Pro Tools that we used for the first temp, so we could figure out the style, “the sonic road map of the film”. It is visually frantic at times and has emotional stops and starts that can turn on a dime. With such extreme sonic dynamics, I needed to find a way to get the dialogue through. I ended up using MaxxVolume and that actually worked really well, much better than I would have suspected.

Originally, I thought of MaxxVolume as a tool to keep full mixes from getting too dynamic. I never thought I would use it all that much, because my goal is to make my mixes as dynamic as I can. But what’s interesting is that I had a problem: my dialogue was too dynamic, and I needed to fit it into a very small hole at times to make it cut through the battle sequences. Having control over upper and lower thresholds was really helpful, especially since they can be automated. Because I had to find a quick solution that sounded good, I ended up using MaxxVolume on the temp, and it worked incredibly well.
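Gershin doesn’t spell out his settings, and the sketch below is not the MaxxVolume algorithm; it is only a minimal illustration of the dual-threshold idea he describes, where quiet dialogue is pushed up toward a lower threshold and loud dialogue is pulled down toward an upper one. The function name and every parameter value are hypothetical.

```python
# Minimal dual-threshold leveler sketch (illustration only, not MaxxVolume).
# Quiet material is boosted toward low_db, loud material is compressed toward high_db.
import numpy as np

def level_dialogue(x, sr, low_db=-40.0, high_db=-12.0, ratio=3.0, win_ms=50.0):
    win = max(1, int(sr * win_ms / 1000.0))
    # Short-term RMS envelope, converted to dB
    env = np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same"))
    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))

    gain_db = np.zeros_like(env_db)
    quiet = env_db < low_db                      # lower threshold: bring it up
    loud = env_db > high_db                      # upper threshold: bring it down
    gain_db[quiet] = (low_db - env_db[quiet]) * (1.0 - 1.0 / ratio)
    gain_db[loud] = (high_db - env_db[loud]) * (1.0 - 1.0 / ratio)

    return x * 10.0 ** (gain_db / 20.0)

# Usage (hypothetical): leveled = level_dialogue(dialogue_mono, sr=48000)
```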

Also, I just finished re-mastering a lot of the stereo mixes I’ve done over the last five years for video games. The 5.1 mixes held up, but when playing back the stereo mixes through various playback systems at different facilities in different countries—consumer, some very low-fi, even laptop speakers—I noticed differences I wanted to re-address.

Since newer plugins have come out, I thought it would be cool to re-master some of those mixes using the latest technology and tricks I didn’t have at the time. So I went back into my stems, listened on a ton of consumer speakers as well as several high-end systems, and started nipping and tucking. After my adventures on the latest movie, I thought I would try MaxxVolume again and see if lightning would strike twice. I was very happy with the results.

Whether it’s in movies or in games, we still face the same sonic challenges, and a lot of it has to do not so much with limiting from the top as with bringing up the volume from the bottom, so that everything finds its own dynamic window.

I’ve used many compressors before but MaxxVolume seems to respond in a way that’s easy and very pleasant, especially for dialogue. I’m still experimenting with MaxxVolume in other areas like in-game dialogue and elements within the score and design. It’s definitely an interesting tool.

THE ART OF SONIC STORYTELLING

Can you give us an overview of your career path?

I started as a musician. I went to Berklee College of Music as a guitar player. I went there to learn audio engineering and mixing. Interestingly enough, I tried to get a work-study gig in their mix studios, but all the positions were filled! There was, however, an opening in the synth lab.

Now this was in the days of Moog modulars and ARP 2500s and 2600s, when they were new. In the late ’70s and early ’80s, I started learning synthesis. It was more of a novelty; the keyboards never played very well. So as a guitar player who worked in the synth lab, I had to teach people how to use an ARP 2500, 2600, or ARP Odyssey. I started my student job not knowing that much about synths, so a teacher—Chris Noyes—took me under his wing and taught me during off hours. The clouds parted and I saw the light. I think this was pivotal, because as I was learning about mixing and recording, I was simultaneously learning synthesis, and in those days the two were not tied together. They were two different programs. You had one guy doing one or the other, but very few people did both. And what I started realizing is that a VCA is similar to a noise gate, and a VCF is similar to an automated EQ. And ring modulators were just cool… You can actually control and modulate things in real time in ways that may not have been as common on the mixer side.

So I started merging my synthesis and my audio into one practice very early on: busing out of the console through different modules, Moogs or ARPs, whatever. I was also heavily into microphones and weird recording techniques. Those were some great years of experimenting.

What did you do when you first came out to L.A.?

I started working the L.A. studio circuit as a mixer and occasionally as a synthesist, programming DX7s, Prophets, Jupiter 8s, Oberheims. I worked for a lot of studio players in L.A., programming, but there’s only so much satisfaction in trying to get trumpet or grand piano sounds out of a Jupiter 8. I was always more into “Well, why don’t I recreate a brass sound on a Jupiter 8 and combine it with a percussive element from a DX7, adding it as a sweetener for real recorded brass?” So I started doing music design very early on. That’s when MIDI was just coming out. It was more fun to grab several synthesizers, tie them together, and create something new than to use one synth to do it all.

I’ve always been a huge fan of movies, very much inspired by Star Wars and Apocalypse Now as a kid, and I started finding it was cooler to make lasers and weird tones on samplers and other synths than anything else. While struggling and growing in L.A. as a mixer/synthesist, a friend of mine said, “Hey, have you ever thought about doing post production, like cartoons?” I thought it was great that I could combine my mixing and synthesist knowledge into one task with no politics, because at that time in music you were either on one side of the glass or the other. So I started doing all these Saturday morning cartoons. I realized that I could grab all the stuff I knew in music and bring it into editorial! I started using Akai S612s and S900s and E-mus, then made my way to learning the Synclavier.

What was your “big break”?

While working on cartoons, Gene Gillette introduced me to Wylie Stateman, who would later become my mentor and partner. He had a company called Wallaworks, which turned into Soundelux a couple of years later. After doing cartoons, I moved over to Todd-AO, which used to be known as Glen Glenn Sound, where I worked on a TV show called Beauty and the Beast using a Synclavier system. Over those years, Wylie and I stayed in touch, and after the first season of Beauty and the Beast, he offered me a job as lead sound designer on a brand new movie, Honey, I Shrunk the Kids, at the newly formed Soundelux.

At that time, a company called Hybrid Arts came out with a brand new device called the ADAP, running on an Atari computer, similar to Sound Forge today. Soundelux had just bought two AMS AudioFiles, but I needed something that could manipulate sound, so I cut Honey, I Shrunk the Kids on an ADAP system, laying back each sound to multiple 24-track machines. Shortly after Honey, I did another movie called Born on the Fourth of July. Some buddies of mine had created a company called Waveframe. I asked if I could test their system out on this movie, so they dropped one off. I think I was the first person in post production to use the Waveframe 1000.

What was great about the editorial side of sound design in the early years was that it was a way to be at the mixing console and doing synthesis at the same time. A lot of my design was very musical. On Oliver Stone’s Heaven and Earth, I got on the “cue sheet” as a composer, because I was designing in a very musical way, with all sorts of exotic instruments, vocalizations, and metals blended with organic recordings of animals, sound fx, and tones. I also started realizing that mic’ing a kick drum and mic’ing a gunshot were very similar, so I started taking the techniques I had learned from recording drums and percussion and using them to record things like guns.

BREAKING OUT THE BIG GUNS

How did it feel when everything moved to DAWs?

When I was on the Waveframe and the ADAP, I would use them to create and manipulate sounds, laying them back to a 24-track analog machine with Dolby SR. Once we got beyond 24 tracks within the DAW, that’s when the fun began, because 8 or 16 tracks at a time wasn’t close to enough to mix off of. I mean, when I do my mixes now, I’m somewhere between 200 and 1600 tracks (6 to 12 Pro Tools systems with 160 to 192 tracks per system, coming out of 64 outputs per system). It’s a big change.

The reason I go so wide with my track count is that I have to keep up with always-changing CG (computer graphics). If I’ve got my elements virtual (mostly virtual, depending on the plugins available on a mixing facility’s playback system), I can update my sounds to the latest CG versions, react quickly, and turn on a dime. I need to be fast and cost-effective for my clients. They used to say tape is cheap. Now tracks are cheap. Most of the time, I play back on seven Pro Tools systems all running simultaneously. All my tracks are categorized, so while they may not all be playing back at once, I try to make it so that, let’s say, our hero’s gun will always be on fader or group 1 throughout the whole movie. I want my tracks to be very anticipatory and help out the mixer, so they know exactly where their hand needs to go.

Also, you have to remember that the reason for the high track count is that many of the elements and categories are divided into multi-channels (5.1, LCR, 5.0, etc.). Main character weapons, vehicles, explosions, car crashes, good guy guns, bad guy guns, design sweeteners, with each possibly broken into sub-elements… you get the idea. You break it apart so you have total access. It’s the same thing as doing a drum mix; you want control over the different mics on a drum set. In this case, because I don’t always know how the music will sound, it allows me to push and pull different elements within a given category so they better cut through the music.
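As a rough illustration of the kind of track map he describes, a fixed category-to-group layout with a channel format per category might look like the sketch below. The categories, group numbers, and formats are invented examples, not Gershin’s actual session layout.

```python
# Hypothetical track map: each category keeps a fixed group/fader number for the
# whole movie and carries its own channel format, so the mixer always knows
# where a sound will show up. Names and numbers are invented for illustration.
TRACK_MAP = {
    "hero_gun":         {"group": 1, "format": "5.1"},
    "bad_guy_guns":     {"group": 2, "format": "5.1"},
    "vehicles":         {"group": 3, "format": "5.0"},
    "explosions":       {"group": 4, "format": "5.1"},
    "design_sweetener": {"group": 5, "format": "LCR"},
}

CHANNELS = {
    "5.1": ["L", "R", "C", "LFE", "Ls", "Rs"],
    "5.0": ["L", "R", "C", "Ls", "Rs"],
    "LCR": ["L", "C", "R"],
}

def tracks_for(category):
    """Expand one category into per-channel track names, e.g. hero_gun.L, hero_gun.C ..."""
    fmt = TRACK_MAP[category]["format"]
    return [f"{category}.{ch}" for ch in CHANNELS[fmt]]

# Example: tracks_for("hero_gun") -> ['hero_gun.L', 'hero_gun.R', 'hero_gun.C', ...]
```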

With all those tracks, how do you know which processor to pick for which sound?

Because of the lack of pre-dub time, or the lack of access to certain plugins on the dubbing stage, a lot can be done in editorial design. To create a sound like a football tackle or a gunshot, one that has a lot of energy at higher volumes but needs to “pop” and cut through a mix at lower volumes, I like using different types of saturation plugins in conjunction with mastering plugins. That’s all done in the design phase. What we do is master every sound effect before it ever goes to the stage, and we do that either virtually or we render it. The last movie I worked on was a little challenging because it was so stylized. At any given point, we had to figure out what role each sound was going to play. While the urge is to make everything really cool and big, everything it could be, that may not be the role of that sound at that point. I equate it with arranging a score for an orchestra: you have to hear it in your head before you record it. You’ve got to know what each sound element is and what role it’s going to play in the full concert. Some sounds are subtle and add detail, while others do the heavy lifting and add mass to your mix. The choice of DSP processing is based on what we are trying to achieve.
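He doesn’t name the specific plugins or settings, but the pre-mastering idea of giving each effect density and a controlled peak before it reaches the stage can be roughed out as below. This is a minimal sketch under assumed processing (soft saturation plus a peak ceiling), not a description of his actual chain.

```python
# Hypothetical offline "pre-master" pass for a single sound effect:
# soft saturation so the transient still reads at low playback levels,
# then a hard peak ceiling so the rendered file never exceeds a target.
import numpy as np

def premaster_fx(x, drive=2.0, ceiling_db=-1.0):
    # Soft-clip saturation (tanh) adds harmonics and density.
    saturated = np.tanh(drive * x) / np.tanh(drive)
    # Scale down only if the peak exceeds the ceiling.
    ceiling = 10.0 ** (ceiling_db / 20.0)
    peak = np.max(np.abs(saturated))
    if peak > ceiling:
        saturated *= ceiling / peak
    return saturated

# Usage (hypothetical): tackle_mastered = premaster_fx(tackle_hit, drive=3.0)
```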

How do you foresee using UM and Center in your process?

The UM tools are familiar to me because I’ve been using similar tools for years, so it’s cool to have them within Pro Tools. On some of my game mixes, I get the music in stereo, but because I am creating a 5.1 mix, I need to find a way to spread out the music while retaining its sonic quality. Sometimes spreading out a stereo mix this way can take out some of the punch. I like that UM gives me control over the punchiness as well as other parameters.
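UM’s internals aren’t described here, but the trade-off he mentions, spreading a stereo stem into 5.1 without losing punch, can be illustrated with a naive mid/side split in which the mid (punch-carrying) component stays anchored in the front speakers and only the side component feeds the surrounds. This is an invented sketch, not the UM algorithm, and all parameters are assumptions.

```python
# Naive stereo-to-5.1 spread for illustration only (not the Waves UM algorithm).
# Assumed channel order: L, R, C, LFE, Ls, Rs. All parameters are hypothetical.
import numpy as np

def spread_to_51(left, right, punch=1.0, surround_db=-6.0):
    mid = 0.5 * (left + right)       # center/"punch" content, kept up front
    side = 0.5 * (left - right)      # width content, shared with the surrounds
    g = 10.0 ** (surround_db / 20.0)

    L = punch * mid + side
    R = punch * mid - side
    C = np.zeros_like(mid)           # keep the music out of the dialogue channel
    LFE = np.zeros_like(mid)
    Ls = g * side
    Rs = -g * side                   # opposite polarity keeps the rear diffuse
    return np.stack([L, R, C, LFE, Ls, Rs])
```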

With Center, here’s one great feature: a lot of times we go and record sound in stereo, let’s say a gunshot. Well, as you know, stereo is very different in film because you basically have no phantom center. The speakers are too far apart. What you really want is a recording in LCR, which I occasionally do—stereo with a mono center. In the past, we’ve derived the center by either toeing in the pan a little bit, using a Dolby box, or many other techniques.

To test out the plugin, I took one of my stereo guns and put it through Waves Center. I doubled up the tracks, split out the stereo and mono components, and assigned the mono material to the center speaker and the remaining stereo material to left and right. Having a stereo sound divided into LCR allows me to EQ and process each part separately. I can think of a million applications for it, and I am excited about going down that path on so many different fronts. It’s endless.
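Waves Center performs its own mono/stereo extraction, but the routing idea he describes, mono component to the center speaker and the residual stereo to left and right, can be approximated with a crude mid-based split like the one below. This is only an illustration; the function name and the extraction method are assumptions.

```python
# Crude LCR split of a stereo recording: the mono (mid) component goes to the
# center speaker, and the residual left/right goes to the L and R speakers.
# Waves Center uses a more sophisticated extraction; this is illustration only.
import numpy as np

def stereo_to_lcr(left, right, center_amount=1.0):
    center = center_amount * 0.5 * (left + right)    # mono component -> C
    l_resid = left - center                          # remaining stereo -> L
    r_resid = right - center                         # remaining stereo -> R
    return l_resid, center, r_resid

# Each of the three outputs can then be EQ'd and processed separately
# before being assigned to the L, C, and R speakers.
```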

DESCRIBE RAIN

How long does a feature take to sound design?

I’m on films anywhere from 12 to 30 weeks, though I’ve done some shorter and some up to a year. Twelve weeks is fast, really fast, because you have to remember that we’re going to start initial design and edit, do a temp mix, test it with an audience 1 to 3 times, finish the edit including design, Foley, ADR, and dialogue, and complete the mix. It usually takes me 4 to 6 weeks to create an initial temp pass, then another 4 to 6 weeks to design and edit for the final with a full crew. In between, we are working with the director, the studio, and the picture cutter, supporting their picture and concept changes. It’s like working on a demo for a song: you try out ideas, you test them with audiences, and you see which ones work. Sometimes films come together very quickly and sometimes they don’t. Usually the shows that I’m on for a year tend to be CG-heavy shows with much more development and concept, like The Chronicles of Riddick. When I did Underworld 2, we were only on it for 6 to 8 weeks, without temps; it was ready, set, go.

What makes a great sound designer?

It’s simple: listening. I know that sounds weird, but I’ve been around since the late ‘70s and lots of gear has come and gone, and a lot of things have changed, but like a painter, it’s not about the paint or the brushes you use, it’s about what you want to create first in your head. And then finding the right tools to achieve that goal.

What you have to do is start listening. When I interview people, the ongoing joke in L.A., for the people that know me, is the question I used to ask young sound designers: “Describe rain to me.” It would give me insight into the person and the way they listen and think, because a lot of people go, “Rain? I dunno, it’s heavy, it’s wet, it’s heavy.” And I say, “Yeah, but what about rain on windows? What about rain on water? Slow drips, fast drips, rain down gutters? What about all of the details that rain can give you, and the emotion that it brings? And the rhythm.” Everything’s got rhythm. Even a very slow drip on a hot, smoggy southern day has a sound, a feeling where you can almost smell it. And the only way to know those things is to listen.

Of course, you’ve got to have built up an audio vocabulary throughout your life of what things sound like. What does a punch sound like? What does a Hollywood punch sound like? What does a punch in real life sound like? If you’re doing a movie like Black Hawk Down, where the goal is realism, it’s going to be one sound. If you’re doing a movie like Hellboy II, where it’s over the top and an E-ticket ride, it’s going to be another. When you’re communicating with directors, they’ve got it in their heads what their favorite movies sound like. So you’ve got to have an audio vocabulary: of real life, of fantasy life, of how Hollywood sounds. I equate it to playing jazz, blues, classical, and heavy metal. You’ve got to know the different styles and what to do.

Because ultimately, it’s not about the sound, it’s about the storytelling. I am an audio storyteller, the man behind the curtain. My job is to help enhance the experience of the moviegoer or the gameplayer.
