Our collaborator and electroacoustic geek Sylvain Paradis sat down with one of the most recognizable and versatile names in sound design, and a true master synthesist: Richard Devine, an electronic force of nature.
When you approach a synth, especially one you’ve never touched before, do you have pre-conceived sounds in mind that you then try to reach, or do you just let the synth do its thing and determine the direction?
I’m working on a project right now; I can’t say what it is, but it’s pretty crazy. This synthesizer has new technology that’s never been implemented before, so it’s totally alien to anything I’ve ever used. I’ve had to spend a few weeks just understanding its architecture. Before that I was doing work for Dave Smith on the Prophet 6 keyboard. I was familiar with the Prophet 5, but they implemented all these new things in the Prophet 6 engine – lots of newer technology incorporated within an older design – so I had to get acquainted with the new technology first. Before I had even thought about what I was going to do creatively with it, I just had to understand what the instrument could do. I basically start from scratch and go down to the most basic level – what do the basic sound generators do? The different wave shapes, frequency ranges, etc. It’s important that I fully understand all the basics before I start designing a palette of sounds.
What’s interesting is, when I get an instrument, there are no sounds in it. Maybe it’ll have one or two basic presets, and they’ll tell me basically how it’s designed and what they know it’s capable of, but they come to me to really bring the instrument to life. This means I have to really understand what everything does, so that I can figure out how to utilize any unique features of the system to the point where they’re all working harmoniously to express something really cool. It’s like telling a story. I try to do things that perk interest; that make people wonder “how did he do that?”. I always tell synthesizer manufacturers that I’m not going to make them a bunch of meat-and-potatoes piano-type sounds, I’d rather go into much more expressive things.
You work a lot with analog modular systems, but also with virtual modular environments like Max and Reaktor. What is the difference?
PureData, let’s see… Bidule, SynthEdit – I don’t use SynthEdit much any more. It’s interesting, I use a lot of those environments because they are very modular. Most people don’t know this, but I actually started out using modular synths, right from the beginning. When I first started getting into electronic music, I bought a lot of second-hand gear at pawnshops. In Atlanta, where I live, there are a lot of pawnshops near the city, and I would hit them up every weekend and just pick up tons of gear. This was in the late ’80s and early ’90s. I was able to get so much stuff, and get it cheap, because no one wanted analog gear back then! One of the first synths I got was an ARP 2600; I think I paid maybe $250 for it! It was a bit beat-up, but the guy at the shop didn’t know what it was.
So I studied a lot of these synthesizers, and I was in high school, at an age where I was just like a sponge for information. I’m still a sponge, but back then I was just obsessed with learning as much about this stuff as I could. Getting the ARP 2600 at that point totally changed my direction. I was like “I’m gonna do all modular!” Because that’s where the flexibility is. Then, when it went to computers, I got into Max in my first year of college. I bought it at my school bookstore, for MIDI patching and stuff. I wanted an environment where I could control my hardware samplers. I didn’t want my sequencing to always be in steps of 8, 16, 32. I wanted to be all over the place, not on a timeline. A friend told me to check out Max, so I studied it, and learned that I could build whatever I wanted. I could have the BPM jump from 100 to 1000, then back to 20. I could set a randomizer, have things jump and jack around… It’s almost like patching on an analog modular, the approach is the same.
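The kind of patch described here – a sequencer whose tempo leaps from 100 to 1000 and back to 20 instead of sitting on a fixed grid – is easy to sketch outside Max as well. The following is purely an illustration of the idea, not anything from Devine’s actual patches; the tempo values and note range are arbitrary:

```python
import random

def jumpy_sequencer(steps=8, seed=42):
    """Generate (note, bpm, duration) events whose tempo jumps
    around unpredictably rather than staying on a fixed grid."""
    rng = random.Random(seed)  # seeded so the run is repeatable
    events = []
    for _ in range(steps):
        bpm = rng.choice([20, 100, 250, 1000])  # tempo can leap anywhere
        note = rng.randrange(36, 84)            # MIDI note in a wide range
        duration = 60.0 / bpm                   # seconds per beat at that tempo
        events.append((note, bpm, duration))
    return events

for note, bpm, dur in jumpy_sequencer():
    print(f"note {note} at {bpm} BPM ({dur:.3f}s per beat)")
```

In Max the same idea would be a metro object whose interval is driven by a random object – the randomizer “jumping and jacking around” that Devine describes.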
When you’re making a Max patch, it’s a lot more labour-intensive than an analog modular patch, and you have to do a lot more planning. How do different interfaces affect the music that you produce; working on a computer and using a mouse to do everything versus patching a modular synth with your hands?
I think time is the issue. When you’re doing single-point manipulation on a computer, it takes a lot of time, so you don’t end up experimenting as much. You set a few things up, like you would in Max – set up a few objects, connect things together – and then you kinda have to wait. The process of finding these sweet spots where little spontaneous things happen takes longer, whereas on a modular it’s very physical. The feedback you get is much quicker; it’s instantaneous. When I’m adjusting things in a patch on my modular system, the same operations I can do in under 20 seconds with my hands would take me about 5-6 minutes on a computer, using a mouse, or even a controller. I use my iPad a lot with my MacBook when I’m making music; I use software called Lemur to create GUIs that interact with Max or Reaktor or Kyma.
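Tablet controllers like Lemur typically talk to environments such as Max, Reaktor, and Kyma over OSC (Open Sound Control). As a hedged illustration of what actually travels over the wire – none of this is specific to Devine’s setup, and the `/fader1` address is invented – a minimal OSC packet carrying a single fader value can be packed by hand:

```python
import struct

def osc_message(address, value):
    """Pack a minimal OSC message with one float argument –
    the kind of packet a tablet fader sends to a patch."""
    def pad(b):
        # OSC strings are null-terminated, then zero-padded
        # so their total length is a multiple of 4 bytes.
        return b + b"\x00" * (4 - len(b) % 4)
    # address pattern + type-tag string ",f" + big-endian float32
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

msg = osc_message("/fader1", 0.5)  # 16-byte UDP payload
```

In practice the packet would be sent over UDP to whatever port the receiving patch listens on; libraries handle this, but the format itself is this simple.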
That’s a big part of it, isn’t it – how you control it. With electronic instruments, there’s often this physical alienation versus acoustic instruments, where there’s a direct interface with your body.
Yeah totally, there’s a wall! I mean, I still think that the laptop is by far the most advanced instrument ever. I tell people all the time, the MacBook Pro is like the Fender Strat of our generation. The only problem is this interfacing aspect, that physical, instant feedback you get when you’re playing an instrument that resonates, where you feel the vibrations with your body. You just don’t get that looking at a computer, working with a mouse. I have all these devices like Wacom tablets and controllers that help me bring more physical interaction with the computer, but still, there’s a lot of setup involved, a lot of logistics. Especially when you’re working with Max or Reaktor, or even Kyma, you have to think about things. You’re thinking “alright, I have x quadrants; I want this quadrant to be assigned to this value, this value, this value; the pen pressure will change this value…”
Is that what drew you back to analog modular synths?
Yeah, I went back. It’s weird, I started out making music using only hardware. Then, I started getting into computers, at first only to do sequencing. Then, I met a friend of mine named Don Hassler, a professor at the Atlanta College of Art. He had an experimental sound class he was teaching, and he showed me SuperCollider and Csound. At this point, I didn’t know you could do any kind of advanced sound generation on the computer; all I knew it for was basically recording/DAW and MIDI sequencing stuff. When I heard SuperCollider, when Don played me the “FM Landscape” patch in version 1.0, I literally sat in my chair like “holy shit, that is the most organic, beautiful texture I’ve ever heard. I need to look into this”. That was the turning point for me. For about 8 or 9 years, I delved really deeply into computer synthesis. I made entire albums using PureData and Csound. They were all free, open-source stuff, so I was able to learn them really easily; there was a lot of documentation, and a lot of other people using them. Eventually I bought into SuperCollider, and from there went to Max, doing more and more complex stuff within the computer. Eventually it all moved into the computer. I actually sold a lot of my modular stuff because I was thinking the curve would continue upwards and we would have better and better ways of interfacing with this new technology. I thought it would eventually go back to where we had that instant feedback I mentioned before, but we never really got there. These new controllers would come out, but they would all go stale pretty fast. I was really yearning to go back to the fun, playful aspect of what I had been doing at the beginning; the computers never quite got there, in spite of all the power you can harness.
Do you think digital technology will break through this wall?
Well that’s sort of what’s happening with Eurorack right now, with all the digital microprocessors getting smaller. I’m using two Raspberry Pi-based modules in my rig tonight, as well as two Arduino-based modules – I’m running my own Arduino sketches on one of them. It’s kind of sneaking back in as people are realizing “whoa, this is such a fun way of interfacing with sound, why don’t we pair a microprocessor with control voltage”, and then you can get the best of both worlds. This is what’s got me really excited these days.
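Pairing a microprocessor with control voltage usually comes down to mapping digital values onto the analog world’s 1 V-per-octave pitch standard. A minimal sketch of that mapping, assuming a hypothetical 12-bit DAC spanning 0–5 V (a common but by no means universal module configuration):

```python
DAC_BITS = 12
DAC_RANGE_V = 5.0  # assumed 0-5 V output range for this sketch
CODES_PER_VOLT = (2**DAC_BITS - 1) / DAC_RANGE_V  # 819 codes per volt

def note_to_dac_code(semitones):
    """Convert a pitch in semitones above 0 V to a DAC code,
    following the 1 V/octave standard (12 semitones per volt)."""
    volts = semitones / 12.0
    code = round(volts * CODES_PER_VOLT)
    return max(0, min(2**DAC_BITS - 1, code))  # clamp to the DAC's range

print(note_to_dac_code(12))  # one octave up = 1 V
```

An Arduino or Pi-based module does essentially this on every trigger: quantize some digital process to a code, write it to the DAC, and the downstream analog oscillator tracks it.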
With the massive prevalence of a certain kind of technology, like Apple products, do you think the collective demands for a certain type of user interface are affected and changed? For instance, if synthesizers had never been created and Bob Moog were alive to build a modular system today, it probably wouldn’t look or work like the one he built in the 1960s.
I was recently on a panel with Morton Subotnick at MoogFest, talking about alternative controllers. Morton was asked why he went with Buchla over Moog, and he said “it was simple, I hate the keyboard,” and that’s a direct quote. I think he had a very good point. If you look at the traditional 88-key piano keyboard, that’s a format that hasn’t changed in hundreds of years, while music has completely changed. Or look at the computer keyboard in relation to the typewriter keyboard – another format that has remained the same while its application has completely changed. Morton was not interested in using traditional interfaces to control his music. He wanted micro-tunings, pressure control, getting into organic, continuous textures; that’s why he worked with Don [Buchla]. Morton also said how baffled he was that generation after generation keeps recycling the same outdated systems of control, over and over. No one’s really thinking outside the box. There needs to be a movement to create more inventive devices!
Do you think there is a risk of something being lost when technology replaces human functions to a great extent? Like if we all became soulless robots?
Technology is moving in this direction – like the online mastering thing, where it’s all algorithm-driven, and computers are mastering these tracks. You can’t replace that human touch. I mean, these algorithms are mostly going to just be doing corrections at best, and it’s all very cold and clinical. It’s not thinking about what the music should sound like. I see where this could go. The more things we have that are automated, it forces us to lose this consciousness. I mean, look at the iPhone! Since I’ve had this thing, my typing skills have gone back to a 10-year-old’s level because of autocorrect. You don’t even have to do basic math any more… all these things we used to have to do. It’s making us stupider in a lot of ways, and desensitizing us a lot! Socially, we don’t interact the same way any more. I communicate with all my friends via instant messaging, to the point where I hate if I even have to pick up the phone to talk to someone. It’s strange where we’re headed. The technology is changing every part of us, whether we like it or not. I think it’s good that we’re all connected, but there are negative aspects for sure.
What are you trying to achieve using this technology? What experience do you try to deliver to the audience with your music?
Just something different, from my own weird brain’s way of doing things. I guess I’m trying to take them to these strange places. Even tonight, my patch is some pretty abstract stuff. I mean, it’s rhythmic, but then it goes into these strange, alien… I don’t even know what to call it really. I just want them to experience what I experience. It’s so fascinating to hear a patch that’s like this environment that starts playing, goes into all these different movements, it’s like watching architecture. It’s like an architectural object unfolding and re-glueing and re-suctioning to different areas. I don’t even know what to call that, but that’s what I really love about the modular, it’s all live. My set tonight is going to be all modular, and it is going to sound like it’s all modular. I feel that’s the only way to do it if you’re going to use one.
The patch you’ve prepared for tonight, is it something you intend to be tweaking a lot live, or is it something you’ll just sort of push in the right direction, but will mostly be operating by itself?
No, when I do my live performances, it’s driven by my hands the entire time. Tonight, I’m trying to get around 8 different “songs” out of the one patch, so there are all kinds of things I’ll have to do to move through the parts. It won’t do anything if I just lift my hands away. It can “run”, but it’ll just idle, and that’s what I like about it. I have to perform everything. Based on the energy in the room and what’s happening, I can make decisions, as opposed to a patch I might do in the studio, which is much less hands-on. When I post videos online, I’ll make little minor adjustments, but those patches are more regenerative. They sound like they could play almost forever, almost like a self-recomposing piece of music, and I don’t really have to do anything. For the live shows, I want people to know that it’s live. I don’t want anyone to feel like it’s just a spacebar-linear set where the BPM doesn’t change. It can be a lot more fun when people feel that energy; it’s a different energy altogether. I hope it goes well! The thing is, it could go really bad too! When you have that many open variables in play, there are things that just spontaneously go wrong. Pretty much every show I’ve played, there are at least 2 or 3 things that go wrong, and then you’re trying to retrace how it went wrong… It’s part of it, I guess – you’re flying by the seat of your pants, doing everything live; no safety net.
These days in art, do you think it’s important for an artist to do research in order to express himself authentically?
I think it is. I do a lot of research in order to do what I do, mainly because I have to. Learning new tools, adapting techniques to these new tools, to utilize them in the best way possible. I don’t know if everyone’s like me… being a sound designer you’re keenly listening to all sorts of things. I always bring a little recorder with me, because I’m always looking for sounds; researching different ways to get sounds. I’ll look for different surfaces, different textures, different animals, plant life, it doesn’t matter, I’m interested in all things that can give me something I wouldn’t be able to get in the studio. This also involves researching microphones and microphone techniques… It requires knowledge of many different fields to get what you want. This isn’t necessarily something I think everyone has to do, but for my work it’s needed. But I think there is a certain amount of research you need to do to achieve anything. If you want to make a certain type of music, it’s important to find the artists you really like, figure out what they’re doing, and adapt it into your workflow, if that’s the kind of thing you really want to do.
I didn’t really follow that path, actually. I didn’t choose a very popular or well-known form of expression. I was just a kid tripping in my garage, playing with these weird instruments, going “what the hell this is totally ffffrreaking my brain out!!” I never had my heart set on doing this thing or that thing. I was in it for my own personal enjoyment, and that’s still, to this day, why I do this. It’s like painting or drawing or any other form of expression. It’s very therapeutic for me to just be in a studio, doing what I do.
Would you say that you do art out of necessity?
I do it because I’d probably go crazy if I wasn’t. I don’t even know what I’d do if I wasn’t doing art… getting into trouble or something, who knows. I’ve always been a creative person. I’m also a visual artist. I went to college for visual art, doing graphic design; that was my first true passion. Music was just something I did for fun. I never looked at it as anything more than that. But by my junior year in college, I was doing stuff with labels, all this stuff started happening, and music was paying for everything! Paying my tuition, I could buy a car, buy a house, then my parents were like “maybe you should just focus on music!” I was like “really!? It feels wrong!” It just felt so weird making it my job. That’s when you realize you’ve found your calling, when you’re getting paid to do work you’d happily do for free. Sometimes I still feel bad taking money for this. My wife gets so pissed at me. She has a regular job, and she’ll get up at 6:00am and I’ll still be up playing synths, and she’ll be like “oh you little –! you just get to do what you want all the time!!” and I say hey, I didn’t ask for this, it just happened! It all just fell into place.
So finally, what’s your biggest source of inspiration?
Everything! I’d say art… I like going to museums; I draw a lot of inspiration from visual art. Also other music, or any kind of sound. I draw a lot of inspiration from nature. Me and my wife go hiking a lot, we go kayaking… we do a lot of outdoor stuff. She has her own portable recording rig as well, so we go recording together. She has really good ideas too, about how to capture sounds. It’s really cool to have a partner with me who’s just as into it, and who thinks differently from me about techniques; we always get really cool sounds when we go out on field recording trips together. We’ve gone to caves, we’ve done hydrophone recordings in crazy locations! If you go to these weird spaces you’ll get really, really interesting sounds. Nature is a huge inspiration to me. Some of the most interesting and bizarre sounds I’ve ever gotten have been in nature – far more interesting than anything I’ve ever made in a computer or had to program. I tell people all the time that the best sounds are all around us. They’re right under your nose, in your backyard! And they’re free! You don’t have to pay for them or anything, you just have to go find them.