We’ve been talking about methods for synchronizing image and audio inside After Effects.
At a micro level, if we want to work comfortably, it helps to have a tight match between fps and bpm, so we can use a very simple algorithm to find “right” tempos depending on the fps of the movie:
30 fps = 1800 fpm (frames per minute)
1800/2 = 900 bpm = 2 fpb (frames per beat)
900/2 = 450 bpm = 4 fpb
450/2 = 225 bpm = 8 fpb
225/2 = 112.5 bpm = 16 fpb
112.5/2 = 56.25 bpm = 32 fpb
The basic idea is to divide the movie’s fpm by an integer until we get a number that works as a bpm… in the example we divided by 2 each time, but we could divide by any integer… the advantage of successive halving (powers of 2) is that we maximize the divisibility of the fpb… so at 112.5 bpm we have 16 fpb, which means 8 frames per 1/2 beat, 4 frames per 1/4 beat….
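The halving procedure above can be sketched in a few lines of Python. This is my own sketch, not a tool from After Effects; the function name and the bpm range limits are made-up parameters, chosen just to keep the tempos musically usable:

```python
# Sketch: given a frame rate, repeatedly halve the frames-per-minute
# value and collect the tempos whose frames-per-beat count is a power
# of 2, as in the table above.

def tempo_candidates(fps, max_halvings=6, min_bpm=40, max_bpm=960):
    """Halve fpm repeatedly; keep (bpm, frames-per-beat) pairs in range."""
    fpm = fps * 60            # frames per minute
    candidates = []
    bpm = fpm
    fpb = 1                   # frames per beat
    for _ in range(max_halvings):
        bpm /= 2
        fpb *= 2
        if min_bpm <= bpm <= max_bpm:
            candidates.append((bpm, fpb))
    return candidates

for bpm, fpb in tempo_candidates(30):
    print(f"{bpm:g} bpm -> {fpb} frames per beat")
# 900 bpm -> 2 frames per beat
# ...
# 56.25 bpm -> 32 frames per beat
```

For 30 fps this reproduces the table above (900, 450, 225, 112.5 and 56.25 bpm); dividing by 3 instead of 2 would give the triplet-friendly tempos.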
There’s a nice online app that helps with this calculation: http://www.vjamm.com/support_av_bpm.php
We did a little sound-sync example (available in Dropbox) using a very straightforward technique: every sound in the music has a visual animation assigned to it. We just have to edit in After Effects and put everything in order.
We’ve also been talking about volume / opacity envelopes, algorithmic curves, and the use of black for silence / white for peaks.
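The black-for-silence / white-for-peaks idea can be sketched as a simple amplitude-to-opacity mapping. This is a hypothetical illustration (the function name, the `curve` exponent, and the sample levels are all made up), not an After Effects expression:

```python
# Hypothetical sketch: map per-frame audio levels to layer opacity so
# silence reads as black (0% opacity) and peaks as white (100%).

def opacity_envelope(amplitudes, curve=2.0):
    """Normalize amplitudes to 0-100% opacity; an exponent > 1
    darkens quiet passages and emphasizes the peaks."""
    peak = max(amplitudes) or 1.0          # avoid division by zero
    return [100.0 * (a / peak) ** curve for a in amplitudes]

levels = [0.0, 0.2, 0.8, 1.0, 0.5]         # made-up per-frame levels
print([round(o, 1) for o in opacity_envelope(levels)])
# [0.0, 4.0, 64.0, 100.0, 25.0]
```

In practice the same shaping can be done inside After Effects with keyframed opacity, but the idea is the same: an algorithmic curve between the audio envelope and the visual parameter.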
At a medium structural level, we’ve seen that there’s no need to make EVERY sound in the music visible. It seems that sight accepts less complexity than hearing, so we only need to make animations for the more “important” sounds in the music, and we get a better effect.