Okay, I couldn’t resist. The song has been posted:
It’s official, and I hope not to work on it any more. It’s been a long road since my first post about this project on June 30, 2007.
As promised, though, here are a few thoughts on Convolution Reverb and the piano portion of this recording.
Let’s tackle the piano portion first. This whole project started by scanning in the piano music from the book that my daughter plays from on occasion. With only a few tweaks, the Finale part was an exact representation of the written music. Human Playback added dynamics and slurs automatically. All that was missing was the sustain pedal.
I tried inserting the sustains into the Finale score several ways, including notation and MIDI entry. It all proved too cumbersome, so I decided that I would leave it until transfer to the DAW, where I could easily add sustain pedal data to the MIDI track by just playing it in live.
When I got to that stage, however, I noticed something I didn’t like. While HP does a grand job with dynamics and varying the velocities of individual notes, each and every note lines up perfectly with the beat. There is a significant difference between a chord where all the notes sound at exactly the same instant and one where the notes are a few milliseconds apart.
While I could have simply applied a randomization function to the start times, I thought I would go with something a little more purposeful. So, I took an evening and played the part in live from my digital piano. It took quite a few takes (this piece pushes the limits of my sight-reading ability) and some after-the-fact editing, but I ended up with something far superior to what I had before.
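For anyone curious what the randomization route would have looked like, here’s a minimal sketch in Python. The `humanize` function and its 12 ms window are my own invented illustration, not anything Finale or a DAW actually provides:

```python
import random

def humanize(note_starts_ms, max_offset_ms=12.0, seed=1):
    """Nudge each note-on time by a small random offset (in ms).

    A chord that was perfectly quantized (every note at the same
    millisecond) comes back with its notes a few ms apart.
    """
    rng = random.Random(seed)  # fixed seed so the result is repeatable
    return [t + rng.uniform(-max_offset_ms, max_offset_ms)
            for t in note_starts_ms]

# A three-note chord quantized to exactly 1000 ms:
chord = [1000.0, 1000.0, 1000.0]
print(humanize(chord))
```

It gets rid of the machine-gun precision, but as noted above it scatters the notes blindly, which is why playing the part in live is more musical.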
One very interesting thing I learned is that I tend to play ahead of the beat. I always knew that I rushed a bit, but it’s interesting to see all the marks in the piano roll editor leading the beat by a hair. I also learned that getting too far ahead and naturally pausing a little to let the metronome catch up (which sounds TERRIBLE in playback) is not a deal killer on the recording. I can highlight the few measures leading up to the pause and stretch them to fill the space. The result sounds very natural.
Now, about reverb. For the first several years of computer-based digital reverbs, they simply couldn’t compare to dedicated hardware units in real time. The processing requirements of reverb tails, especially longer ones, ate up too much overhead.
Enter convolution reverbs. The theory is simple. The process starts with an impulse response (IR): an infinitely short click (which contains all frequencies equally) followed by the natural reverb tail of the space. Since infinitely short sounds don’t exist, there are two workarounds. The first is a very, very short sound like a starter pistol, an electric spark, or a balloon pop; the spark is supposed to work best. The second uses a continuous sine wave that sweeps in pitch, and the IR is then reconstructed from the recording. I have no idea how this second one works.
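For the record, the sweep method works by deconvolution: the recording is the sweep convolved with the room’s response, so dividing their spectra recovers the IR. Here’s a toy sketch in Python with NumPy; the sample rate, sweep, regularization constant, and fake two-echo “room” are all made up for illustration:

```python
import numpy as np

def ir_from_sweep(sweep, recorded):
    """Recover an impulse response by deconvolving a sine sweep.

    If recorded = sweep (convolved with) room, then in the frequency
    domain Recorded = Sweep * Room, so Room = Recorded / Sweep.
    The small eps keeps near-silent frequency bins from blowing up.
    """
    n = len(sweep) + len(recorded) - 1
    s = np.fft.rfft(sweep, n)
    r = np.fft.rfft(recorded, n)
    eps = 1e-8 * np.max(np.abs(s)) ** 2
    return np.fft.irfft(r * np.conj(s) / (np.abs(s) ** 2 + eps), n)

# Toy check: convolve a made-up "room" with a sweep, then recover it.
fs = 8000
t = np.arange(fs) / fs                             # one second
sweep = np.sin(2 * np.pi * (100 + 1900 * t) * t)   # ~100 Hz to ~3.9 kHz chirp
room = np.zeros(400)
room[0], room[120] = 1.0, 0.5                      # direct sound plus one echo
recorded = np.convolve(sweep, room)
est = ir_from_sweep(sweep, recorded)               # est should closely match room
```

Real measurement software adds windowing and uses exponential sweeps, but the frequency-domain division above is the core of it.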
Anyway, a convolution reverb can take the IR and process any sound to make it seem to have happened in that space. Hardware reverbs can be sampled the same way, and there are literally hundreds of IRs from various expensive units like Lexicons available free on the web.
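The processing itself is just convolution. Here’s a bare-bones sketch in Python with NumPy; the `wet_mix` blend and the toy six-sample “IR” are my own additions for illustration, and real plugins use fast partitioned FFT convolution, but the math is the same:

```python
import numpy as np

def convolution_reverb(dry, ir, wet_mix=0.4):
    """Make `dry` sound as if it were played in the space captured by `ir`.

    np.convolve does the actual work; wet_mix blends the reverberated
    signal back with the original.
    """
    wet = np.convolve(dry, ir)          # length: len(dry) + len(ir) - 1
    out = wet_mix * wet
    out[: len(dry)] += (1.0 - wet_mix) * dry
    return out

# A single click through a tiny synthetic "room":
dry = np.zeros(100)
dry[0] = 1.0
ir = np.array([1.0, 0.0, 0.0, 0.4, 0.0, 0.2])  # direct sound + two echoes
wet = convolution_reverb(dry, ir)
```

Note that the output is longer than the input by the length of the IR minus one sample, which is exactly the reverb tail ringing out.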
In my experience, the best free convolution reverb is SIR. It’s not the only free one, but it’s the only one I’ve found with more controls than just volume.
Check it out and enjoy.