Thursday, November 24, 2011

Five Things About Audio-Recording I Wish I'd Known Earlier



1. It Is Only Stereo If The Sound From The Left Speaker Is Different From The Sound Coming From The Right.

I THOUGHT I knew what "stereo" meant. Heck, when I was a kid that's what we called our music players. "Hey, turn on the stereo and play some Queen." It just means coming out of both speakers (left and right), right? Now this may sound silly, but just because you have sound coming from both left AND right speakers does NOT mean you have a stereo signal. Forget about dictionary definitions for a second. THIS is what you need to know: It is only useful stereo if the sound from the left speaker is different from that coming from the right. And get this... to be truly effective that difference must be so slight that we aren't consciously aware of it!

Let me clarify. If you listen closely through both speakers, you'll hear that most music has different stuff coming from different sides, like the lead guitar part coming completely from the right speaker and the rhythm guitar coming completely from the left. If you turn the "pan" knob on the stereo all the way to one side or the other, you can't hear the other guitar at all... or maybe only very faintly. But everyone knows that, right? Well, that still isn't the COOLEST thing about stereo. So "what is?" I hear you cry. This: a single instrument (or voice) that comes out of the speakers with subtle differences between its left and right parts is WICKED cool! You've heard us talk about the magic of audio? Well, THIS is one of those magical things. An entire chapter (or book) could probably be written about this... and we will go into lots of depth in the tutorials. But the basic idea is that the human brain picks up on differences between the left and right sounds UNCONSCIOUSLY if those differences are subtle enough. This is NOT some freaky-deaky new-age hooey! Modern recorded music (in fact our very survival as a species... but that's another book) depends on it.

Let's say you have a recording of a piano that, for some reason, was recorded in mono (maybe you only had one microphone). Even though it is possible to tell your recorder to "record in stereo," that just means it is going to split the audio into two identical copies and send one to the right channel and the other to the left. This is NOT a stereo signal, based on our definition above. Remember why? Correct! There are no differences between the sounds coming from left and right. If you were to play this recording through headphones, you'd notice piano sound coming from both speakers, but your brain would tell you that there was only one lonely, rather thin-sounding piano, right in front of you. If you remember your vocabulary, or just read a lot of Reader's Digest, you'll know that the word "mono" comes from the Greek "monos," meaning "single" or "alone," which here also means "not stereo." BUT if you take one of the two identical piano signals (one still going left, the other right) and delay it in time by just 10 milliseconds, you'll still THINK you're hearing only one piano, but suddenly it will sound bigger, fuller, nicer. You can increase the delay by up to 40 milliseconds for an even wider sound, and still your brain will think it's just one single piano that sounds much better than before. THAT is magic (at least I think so)! It isn't until the two sounds get well over 40 milliseconds apart in time that we start to hear two different and distinct sounds.

Discovering this principle of audio enhanced the quality of my recordings immensely, as it will yours when you learn all the ways you can make use of it. The above example used only "timing differences" to create this man-made (fake?) stereo signal. But other differences like pitch, EQ, etc. can also be used.
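If you like to tinker, this "delay trick" is simple enough to sketch in a few lines of code. Here's a minimal pure-Python version (the function name `haas_stereo` and the numbers in the usage example are mine, not from any particular tool): take a mono signal, keep it as the left channel, and make the right channel the same signal delayed by 10-40 milliseconds.

```python
import math

def haas_stereo(mono, sample_rate=44100, delay_ms=20.0):
    """Fake a stereo image from a mono signal via the delay trick:
    right channel = the same audio delayed by delay_ms (kept in the
    10-40 ms window, so the brain hears one wider source, not an echo)."""
    delay = int(sample_rate * delay_ms / 1000.0)
    left = list(mono)
    # Pad the front of the right channel with silence, keep lengths equal.
    right = ([0.0] * delay + list(mono))[:len(mono)]
    return left, right

# Usage: a 440 Hz mono "piano" note, one tenth of a second long.
sr = 44100
mono = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 10)]
left, right = haas_stereo(mono, sr, delay_ms=20.0)
```

At 44,100 samples per second, a 20 ms delay works out to 882 samples of silence padded onto the front of the right channel. Play `left` and `right` out of real speakers and you get that "bigger, fuller" single piano.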

2. Record As Loud As You Possibly Can.

Noise, like the tools in your garage that you haven't used since last century, will be with us always. Technically, noise is any sound in your recording OTHER than the thing you tried to record. It could be a bird outside your window, computer fans, lawnmowers, or all the electrical junk that is a fact of life when you use electric stuff. Generally speaking, the less noise, the better your recording. OK, so why do I wish someone had told me this at the beginning? I mean, it's obvious isn't it? As with so many things, "knowing about it" and "knowing what to do about it" do not always go hand-in-hand.

A LOT of programs are available for reducing noise in audio recordings. But the truth is, once the noise is already in your recording, it is very difficult to get rid of. Allow me to give you an example. Podcaster Pete goes to lots of trade shows and records interviews on his iPod to publish on his website monthly. Before publishing, however, he sends his raw audio to an "audio guy" to have it turned into something publishable. But when said guy opens the audio and sees the signal on the computer, he is amazed by how quiet the recording is! It barely even registers on the playback meters. In order to even hear it at a reasonable volume, he has to increase the volume of the entire file, noise and all. Now he can hear the voices talking, but against a backdrop of god-awful hiss and crackle. The only way forward is to use digital noise-reduction tools to filter out as much of the hiss as possible, which yields a passably listenable interview, but one that sounds a little like it was recorded underwater, a typical artifact of noise reduction.

You can only do so much to FIX a noisy recording after the fact. So what could Podcaster Pete have done to make a supremely better recording to start with? Some might say, "dude, just stop recording with that inexpensive mic going into your iPod!" Sure, he could buy a dedicated "digital field recorder." Or he could find a way to make the recording LOUDER in the first place by feeding his iPod more signal, which is a faster and cheaper solution. My guess is that Pete wanted to avoid being TOO loud and overloading the input (a good idea!), but in doing so, he erred on the side of recording at too low a volume. THAT is almost as bad, since you have to crank the volume on both the tiny signal AND the noise in order to hear anything.

The bottom line is this: The best way to fight noise is to limit how much of it gets into your recording in the first place. In order to get the best quality possible when recording, make sure you feed the recorder a loud enough signal. But you have to be careful. Too loud, and you'll get distortion; too quiet, and you'll get too much noise.
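The math behind "you can't fix it in post" is worth seeing once. Engineers measure this as a signal-to-noise ratio (SNR) in decibels, and the point is that multiplying everything by a gain factor multiplies the signal AND the noise by the same amount, so the ratio never budges. A quick sketch (the toy "voice" and "hiss" signals here are made up for illustration):

```python
import math

def rms(signal):
    """Root-mean-square level: the 'average loudness' of a signal."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels: higher = cleaner recording."""
    return 20 * math.log10(rms(signal) / rms(noise))

# A too-quiet recording: a faint "voice" over a constant noise floor.
voice = [0.01 * math.sin(2 * math.pi * n / 50) for n in range(1000)]
hiss = [0.001 * math.sin(2 * math.pi * n / 7) for n in range(1000)]

snr_before = snr_db(voice, hiss)

# "Fixing it in post": crank everything 40x - the hiss comes up 40x too.
gain = 40.0
snr_after = snr_db([gain * v for v in voice], [gain * h for h in hiss])
```

`snr_before` and `snr_after` come out identical. The only thing that improves the ratio is more voice relative to hiss at record time, which is exactly what "feed the recorder a loud enough signal" accomplishes.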

3. EQ...That Thing You Never Knew Quite How To Use

Have you ever seen one of these things? They used to be common in "entertainment systems." Right along with your CD Player, Cassette Player, Record Player, amplifier, and "tuner" (meaning... "radio"), would be this other big boxy thing with nothing but a row of like 30 vertical (meaning "uppy-downy") slider controls across the face of it. These sliders had little square button-like thingies that you could slide up or down. They usually started out in the middle (at the "zero" mark). About the only thing they seemed good for was making funny patterns, like smiley faces, or mountain ranges, by moving the sliders up or down in the right way. Besides maybe making you feel better by having your stereo "smile" at you, I really don't think anybody ever knew what to do with one of these things. With a scary name like "graphic equalizer," it sounded so important. It also sounded like a good name for a "rated-R" action movie, but that's another story. Anyway, you had to make everyone think you knew why you had one, so you pretended to know what it did. But in reality, you felt safer just leaving it alone to sit there with its straight row of slider buttons right down the center, the way it was the day you brought it home because it came with all the other stuff.

You are probably familiar with some kinds of EQ without realizing it. You know those controls on your music player labeled "bass" and "treble"? That's a crude EQ! If your graphic EQ box had only 2 little sliders on it, it would be the same thing. One control makes the sound "bassier" (the low sounds) and the other makes the sound "treblier" (the high sounds). I always laugh when I see someone turn both controls all the way up or down. They have accomplished absolutely nothing that the volume knob wouldn't do. If both (all) the sliders are up, you just turned the radio up. Congratulations. An EQ is only useful if you can make shapes OTHER than straight lines with the sliders.

So now you're wondering what the heck an EQ IS good for, aren't you? Well for one thing, it turns out that our ears lie to us! Can you believe it? I know! It's crazy, right? Every human has lying ears. I was floored when I first heard. See, it turns out that there are all kinds of sounds out there in the world that we can't hear! As a matter of fact, MOST sounds are inaudible to humans. The only sounds humans CAN hear are those in the range between 20 hertz (abbreviated as "Hz") and 20,000 Hz. A "hertz" is a measure of how often (or "frequently") something shakes back and forth (or "vibrates") in one second. Sound is just energy that makes air particles shake back and forth. When these air particles vibrate with a frequency of between 20 times and 20-thousand times per second, it makes a sound that is in the "range of human hearing." Though if you're 21 or older, good luck hearing things above 16,000 Hz ;) That's where the so-called "teen buzz" or "mosquito frequency" lives. Confused? Ask a teenager.
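If "vibrations per second" feels abstract, here's a little sketch that makes it concrete (the helper names `tone` and `estimate_hz` are mine): synthesize a pure tone at some frequency, then recover that frequency just by counting how many times per second the wave swings from negative to positive.

```python
import math

SAMPLE_RATE = 44100  # samples per second, the CD standard

def tone(freq_hz, seconds, sample_rate=SAMPLE_RATE):
    """A pure sine tone: freq_hz back-and-forth vibrations per second."""
    n = int(seconds * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate) for t in range(n)]

def estimate_hz(samples, sample_rate=SAMPLE_RATE):
    """Recover the vibration rate by counting rising zero crossings,
    i.e. how many times per second the wave swings from - to +."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return crossings / (len(samples) / sample_rate)

one_khz = tone(1000, 1.0)          # comfortably inside 20 Hz - 20,000 Hz
measured = estimate_hz(one_khz)    # roughly 1000 vibrations per second
```

One second of a 1,000 Hz tone really does cross zero (going up) about a thousand times. Swap in 25,000 for 1000 and you'd have a perfectly real vibration that no human ear will ever register.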

4. Certain Sounds Can Always Be Found At the Same Place On The EQ Spectrum

Within that range of 20 Hz - 20 KHz (the "K" meaning "thousand"), all manner of fascinating things happen. For example, do you know where on that spectrum a baby's cry lives? You got it; at the most annoying frequency there is, around 3 KHz. I say "annoying" because it is the frequency we humans are MOST sensitive to. Some think it is a survival thing for our species, being able to hear a baby crying in the woods. But since we don't usually have a "crying baby" solo in our audio recordings, why is this helpful? It tells us that everything we can hear vibrates around certain predictable frequencies. And that knowledge gives us access to some pretty cool superpowers.

For example, I know just where to find the signal of a bass guitar on the spectrum. It'll be down around the low frequencies of 80-100 Hz. So will the kick drum. The electric guitar will usually be in the mid-range between 500 Hz and 1 KHz, along with keyboards, violas, acoustic guitars, and human voices. Clarinets, violins, and harmonicas tend to hang out mainly in the upper-mid range around 2-5 KHz, and things like cymbals, castanets, and tambourines like to be in the "highs" up around 6 KHz.

Just ignore the fact that, in a range that tops out at 20 KHz, we call things at 7 or 8 KHz "highs." It's because we hear the spectrum logarithmically, not linearly: every octave doubles the frequency, so the entire top half of the numbers (10-20 KHz) is only one octave of pitch. If that makes your brain hurt, just ignore it. It's sometimes better that way.

Now that we know where specific things live in the range of hearing, we can adjust volumes at JUST those frequencies without affecting the rest of the audio.

Knowledge of this "range of human hearing," and of how to use (or NOT use) an EQ, will come in handy more often than almost anything else you'll learn about recording. Once we can quickly find where a sound lives on the EQ spectrum, we can surgically enhance, remove, or otherwise shape sounds at JUST their own frequencies, without affecting other sounds at other frequencies.

But how in the world can we adjust the volume in just one narrow frequency range, say around 100 Hz, without also changing the volume at all the other frequencies? Hmm, wasn't there some discussion about a thing called an "EQ" that had a whole bunch of sliders on it? Could it be that those sliders were located at specific frequencies, and could turn the volume up or down just at those frequencies without affecting the rest of the sound? Why, yes. It could be. Now you know.
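For the curious, one of those "sliders" can be sketched in code. This is a minimal pure-Python version of a peaking filter, built from the standard biquad recipe found in the Audio EQ Cookbook; the function name and the test signals are my own choices, not any product's API. It boosts (or cuts) by a chosen number of decibels at a chosen center frequency and leaves the rest of the spectrum mostly alone.

```python
import math

def peaking_eq(samples, f0, gain_db, q=1.0, fs=44100):
    """One EQ 'slider': boost (or cut) gain_db decibels centered at
    f0 hertz, using the standard biquad peaking-filter recipe."""
    big_a = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    b0, b1, b2 = 1 + alpha * big_a, -2 * cos_w0, 1 - alpha * big_a
    a0, a1, a2 = 1 + alpha / big_a, -2 * cos_w0, 1 - alpha / big_a
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:  # direct-form difference equation, one sample at a time
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out

fs = 44100
sine = lambda f: [math.sin(2 * math.pi * f * t / fs) for t in range(fs)]
rms = lambda s: math.sqrt(sum(v * v for v in s) / len(s))

# Push the "bass guitar" slider (100 Hz) up 12 dB...
bass_in, treble_in = sine(100), sine(5000)
bass_out = peaking_eq(bass_in, f0=100, gain_db=12.0)
# ...and note that a 5 KHz "cymbal" sails through the same filter.
treble_out = peaking_eq(treble_in, f0=100, gain_db=12.0)
```

Measure the levels and the 100 Hz tone comes out roughly four times louder (+12 dB is a 4x voltage gain), while the 5 KHz tone is essentially untouched. That selectivity is the whole point of an EQ.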

5. Mixing With EQ Instead of Volume Controls

Once you know where the frequencies of certain instruments are likely to live, you can use an EQ to keep those sounds from stepping all over each other in a mix and sounding like a jumbled mess, with the bass guitar covering the sound of the kick drum, or the keyboard drowning out the guitar. "But you wouldn't need an EQ to do this if you use a 'multi-track' recorder, right?" That's what recording engineers and producers usually use to record bands and other musical acts, so each instrument and voice gets recorded onto its very own "track." Since every different sound has its own volume control, it seems obvious what to do if something is too loud or too quiet, right? "Don't you just use the mixer to turn the 'too loud' track down, and vice versa? I mean, isn't that what 'mixing' means?" That's what I used to think too. The answer is... "only sometimes." For example, even after spending hours mixing a song one day, I simply could NOT hear the harmonies over the other instruments unless I turned them up so loud that they sounded way out of balance with the lead vocal. It was like a bad arcade game. There was simply no volume I could find for the harmonies that was "right." They were either lost in the crowd of other sounds, or too loud in the mix.

Then I learned about the best use of EQ, which is to "shape" different sounds so that they don't live in the same over-crowded small car. Let's say you have one really, really fat guy and one skinny guy trying to fit into the back seat of a Volkswagen Bug. There is only enough room for 2 average-sized people, and the fat guy takes up the space of both of those average people already. Somebody is going to be sitting on TOP of someone else! If the fat guy is sitting on the skinny guy, Jack Sprat disappears almost completely. If Jack sits on top of Fat Albert, he gets shoved into the ceiling and has no way to put a seatbelt on. It's just all kinds of ugly no matter which way you shove 'em in.

But if I had a "People Equalizer" (PE?), I could use it to "shape" Albert's girth, scooping away fat until he fit nicely into one side of the seat, making plenty of room for Jack. Then if I wanted to, I could shape Jack a bit in the other direction, maybe adding some padding to his bony arse so he could more comfortably sit in his seat. Jack just played the role of the "harmonies" from my earlier mixing disaster. Albert was the acoustic guitar. Just trying to "mix" the track volumes in my song was like moving Jack and Albert around in the back seat. There was no right answer. But knowing that skinny guys who sing harmony usually take up space primarily between 500 and 3,000 Hz, while fat guitar players can take up a huge space between 100 and 5,000 hertz, I can afford to slim the guitar down by scooping some of it out between, say, 1-2 KHz, and then push the harmonies through that hole I just made by boosting their EQ in the same spot (1-2 KHz). Nobody would be able to tell that there was any piece of the guitar sound missing, because there was so much of it left over that it could still be heard just fine. But now, so can the harmonies... because we gave them their own space! And we did all this without even touching the volume controls on the mixer. So it turns out the EQ does have its uses!
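The cut-here, boost-there move can also be sketched in code. Here's a minimal pure-Python illustration using the standard biquad peaking-filter recipe (the function name, the 1.5 KHz center, and the 4 dB amounts are my own illustrative choices): scoop a hole out of the "guitar" at the frequency where the "harmonies" live, and push the harmonies into that hole with a matching boost.

```python
import math

def peak_filter(samples, f0, gain_db, q=1.0, fs=44100):
    """Parametric EQ band (standard biquad peaking recipe):
    boost or cut gain_db decibels centered at f0 hertz."""
    big_a = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha, cw = math.sin(w0) / (2 * q), math.cos(w0)
    b0, b1, b2 = 1 + alpha * big_a, -2 * cw, 1 - alpha * big_a
    a0, a1, a2 = 1 + alpha / big_a, -2 * cw, 1 - alpha / big_a
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out

fs = 44100
tone = lambda f: [math.sin(2 * math.pi * f * t / fs) for t in range(fs)]
rms = lambda s: math.sqrt(sum(v * v for v in s) / len(s))

# Stand-ins: both "Fat Albert" (the guitar) and "Jack" (the harmonies)
# have energy around 1.5 KHz, so they mask each other in the mix.
guitar, harmonies = tone(1500), tone(1500)

# Scoop a 4 dB hole out of the guitar at 1.5 KHz, and push the
# harmonies into that hole with a matching 4 dB boost.
guitar_carved = peak_filter(guitar, f0=1500, gain_db=-4.0)
harmonies_pushed = peak_filter(harmonies, f0=1500, gain_db=+4.0)
```

At that frequency the guitar comes out quieter and the harmonies louder, while neither track's fader ever moved. In a real mix you'd pick the cut band by ear, right where the buried part needs to poke through.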

So those are the "big five" as I see them. If I had read an article like this when I first started down the path of audio recording, my learning curve could probably have been shortened by a decade or so! I hope some young would-be recording engineers out there can benefit from this article the way I could not.

