I was watching a demonstration of a combined preamplifier/audio interface on YouTube. The equipment was mildly interesting, but what really caught my attention was how wrong the recording method was.
The presenter of the video was recording an acoustic guitarist who would later also sing. The first thing to do when recording acoustic guitar, presuming the player has already tuned, is to find the best place in the room to make the recording. This can be done simply by ear, listening to the guitar directly as it is played. In the video, this was not done.
The next step in recording acoustic guitar is to choose a microphone. This can be done by experience, picking a mic that you have found to work well on acoustic guitar in the past. There is a lot of leeway on this and you would have to pick something quite odd for there to be a significant problem. I can accept that the presenter chose a mic he knew he would like. But…
The position of the microphone is of absolute importance. You can go by experience, rule of thumb, or the '12th fret rule', which isn't actually a rule at all – just something else to try out. But unless you have some kind of extra-sensory perception, you need to try the mic in a variety of positions. The best way to do this is to monitor in your control room while your assistant moves the mic around in the studio, according to your instructions given via talkback. If you're on your own with the guitarist, perhaps working in one room, it's a reasonable compromise to listen on closed-back headphones while you move the mic yourself. The presenter of the video isn't seen to experiment with the microphone position at all.
Shockingly, what happens next is that before even one note has been played, the presenter inserts a compression plug-in. Unless a compressed sound is the aim right from the start, there is absolutely no way to know that the guitar needs compression before you have heard it uncompressed.
The presenter describes the compressor as 'awesome'. Maybe it is, but it isn't as awesome as having an amazing player right there in the studio. Musicians always come first, techniques second, equipment and software third. He goes on to admit that he is cheating by choosing a preset setting rather than adjusting the controls himself. And still not a note has been played.
With the particular interface being demonstrated, it is possible to record through the plug-in, rather than record clean and apply processes and effects later. This is what he decides to do, before hearing anything from the guitar. The actual sound he achieves isn't bad. But who knows how much better it could have been?
Moving on to vocals, we don't see the presenter experiment with different microphones, which would be considered professional studio practice, particularly with vocalists. However, in the home studio environment it isn't uncommon to have only one vocal mic, maybe even only one mic, so this perhaps isn't too unrealistic. There is no experimenting with distance, or with whether a pop shield is necessary, but maybe the presenter has worked with this singer before. This time he uses an EQ plug-in, but he does compare the before and after sound, which is the correct thing to do. Some would say that it is better to record a flat signal and EQ later, but recording with EQ and/or compression is quite common, so I'll let that pass. He uses compression too, but then a strange thing happens…
The presenter inserts a reverb plug-in into the recording path so that the signal is printed onto the track. There is absolutely no benefit in doing this, unless you deliberately want to burn your bridges. Reverb can always be added to a dry signal, but it can never be taken away from a signal that has reverb printed in. The exception to this is recording a classical singer in a good acoustic environment, in which case you would want to capture the acoustics along with the signal. But you would never do it with artificial reverb.
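The asymmetry is easy to demonstrate in code. Here is a minimal sketch in plain Python — a crude multi-tap echo standing in for a real reverb algorithm, with made-up tap times and gains — showing that reverb is simply a process you can apply to a dry signal at any later stage:

```python
import math

def add_reverb(dry, sr=44100, delays_ms=(23, 37, 53), decay=0.5):
    # Sum delayed, attenuated copies of the dry signal -- a crude
    # stand-in for a real reverb. Tap times and gains are arbitrary.
    wet = list(dry)
    for i, delay_ms in enumerate(delays_ms):
        n = int(sr * delay_ms / 1000)       # delay in samples
        gain = decay ** (i + 1)             # each tap quieter than the last
        for j in range(n, len(dry)):
            wet[j] += dry[j - n] * gain
    peak = max(abs(s) for s in wet)
    return [s / peak for s in wet]          # normalise to avoid clipping

# Toy dry signal: 0.1 s of a 440 Hz sine tone.
sr = 44100
dry = [math.sin(2 * math.pi * 440 * k / sr) for k in range(4410)]
wet = add_reverb(dry, sr)                   # reverb applied after the fact
```

Reversing the same operation on a printed track would mean deconvolving with exact knowledge of the reverb that was used, which in practice you never have. Hence: record dry, add reverb at mixdown.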
Head-down in his equipment and software, the presenter communicates little with the singer, although he does mind his manners with 'please' and 'thank you'. It takes more than that to get the best out of a performer though. The performer needs to feel that they have your absolute full attention. If they are not singing, there should be active communication going on through the talkback. Silent periods while the engineer fiddles with the settings can kill a performer's confidence.
The session proceeds with the presenter saying to the performer that, “We'll do the same thing again”, as a command rather than a suggestion or request. She rightly responds, “Huh?”. The way to do it, if it seems like a retake or double track will be beneficial, is to give the singer a reason why you want to do it again. Otherwise they will be left feeling a) that their performance wasn't good enough, and b) that their influence on how the recording will turn out is diminished. Not good. The presenter does comment, “Awesome” to the singer after the take, but without much of a feeling of sincerity.
Now of course, the purpose of the video is to promote the equipment and software, and many of the in-between processes may have been left out. But along the way it shows, in essence, how not to run a recording session. Anyone following this method will end up with results that are not as good as they could have been, both musically and sonically. The moral of this story is that it is always correct to put the performer's comfort and well-being first, then use your knowledge, skill and experience to get the best sound from your microphone(s). As the session progresses, concentrate on getting the best performance you can and don't let the equipment and software get in the way of communication with the singer or player. When the performers have packed up and gone home – that's the time to start having fun with the plug-ins.