Studio audio recording and field recording

 

Posted in Uncategorized | Leave a comment

AUDIO EFFECTS (FX)

DELAY FX
ECHO (a.k.a. DELAY)
http://en.wikipedia.org/wiki/Delay_(audio_effect)
http://en.wikipedia.org/wiki/File:Delay-line_block_diagram.png
http://en.wikipedia.org/wiki/File:Echo_samples.ogg 
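The delay-line block diagram linked above boils down to a single feedforward tap: the output is the input plus a delayed, attenuated copy of itself. A minimal sketch (the function name and the 0.5 gain are just illustrative):

```python
def echo(signal, delay_samples, gain=0.5):
    """Feedforward delay: y[n] = x[n] + gain * x[n - delay_samples]."""
    out = []
    for n, x in enumerate(signal):
        # Before the delay line has filled up there is nothing to mix in.
        delayed = signal[n - delay_samples] if n >= delay_samples else 0.0
        out.append(x + gain * delayed)
    return out

# An impulse followed by silence shows the echo clearly:
print(echo([1.0, 0.0, 0.0, 0.0], delay_samples=2, gain=0.5))
# → [1.0, 0.0, 0.5, 0.0]
```

Feeding the output back into the delay line instead (a feedback tap) gives the repeating echoes heard in the sample file.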

CHORUS  http://en.wikipedia.org/wiki/Chorus_effect
FLANGER http://en.wikipedia.org/wiki/Flanging

REVERB http://en.wikipedia.org/wiki/Reverb

DYNAMIC RANGE
COMPRESSION http://en.wikipedia.org/wiki/Dynamic_range_compression
NOISE GATE http://en.wikipedia.org/wiki/Noise_gate

EQUALIZER and FILTER
http://en.wikipedia.org/wiki/Equalization
http://en.wikipedia.org/wiki/Audio_filter

 

 


Digital Audio

Digital audio refers to technology that records, stores, and reproduces sound by encoding an audio signal in digital form instead of analog form. Sound is passed through an analog-to-digital converter (ADC), and pulse-code modulation (PCM) is typically used to encode it as a digital signal. A digital-to-analog converter (DAC) performs the reverse process, converting the digital signal back into audible sound. Digital audio systems may include compression, storage, processing and transmission components. Conversion to a digital format allows convenient manipulation, storage, transmission and retrieval of an audio signal. http://en.wikipedia.org/wiki/Digital_audio
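As a toy illustration of the PCM idea (the 44.1 kHz / 16-bit figures are just the common CD-quality choice, and `pcm_encode` is a hypothetical helper), sampling a sine wave and quantizing each sample to a signed integer might look like:

```python
import math

SAMPLE_RATE = 44100                  # samples per second (CD quality)
BIT_DEPTH = 16                       # bits per sample
MAX_INT = 2 ** (BIT_DEPTH - 1) - 1   # 32767 for signed 16-bit

def pcm_encode(freq_hz, duration_s):
    """Sample a sine wave and quantize each sample to a signed 16-bit int."""
    n_samples = int(SAMPLE_RATE * duration_s)
    return [round(math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE) * MAX_INT)
            for n in range(n_samples)]

samples = pcm_encode(1000, 0.001)    # 1 kHz tone, 1 ms → 44 samples
print(len(samples))                  # → 44
```

Playback (the DAC side) is the reverse: the integers are turned back into voltages and smoothed into a continuous waveform.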

Sample Rate http://en.wikipedia.org/wiki/Sample_rate

Bit Depth  http://en.wikipedia.org/wiki/Audio_bit_depth


Audio basics

This post is a summary of the content we have covered in class during the weeks devoted to audio in the workshop. Most of it links to articles hosted on Wikipedia, so it will be very easy for you to expand your knowledge, and to read everything in your preferred language.

Sound is a mechanical wave: an oscillation of pressure transmitted through a solid, liquid or gas, composed of frequencies within the range of hearing.
http://en.wikipedia.org/wiki/Sound
http://upload.wikimedia.org/wikipedia/commons/6/6d/Sine_waves_different_frequencies.svg

Hearing range usually describes the range of frequencies that can be heard by an animal or human, though it can also refer to the range of levels. In humans the audible range of frequencies is usually 20 to 20,000 Hz, although there is considerable variation between individuals, especially at high frequencies, where a gradual decline with age is considered normal.
http://en.wikipedia.org/wiki/Audible_range

Audio Frequency http://en.wikipedia.org/wiki/Audio_frequency

The decibel (dB) is a logarithmic unit that indicates the ratio of a physical quantity relative to a specified or implied reference level.
http://www.cyberphysics.co.uk/graphics/diagrams/db_scale.gif
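Because the scale is logarithmic, converting an amplitude ratio to decibels is just 20·log10(ratio) (10·log10 for power ratios). A quick sketch (`amplitude_db` is an illustrative helper, not a standard function):

```python
import math

def amplitude_db(amplitude, reference=1.0):
    """Decibels for an amplitude ratio (e.g. voltage, sound pressure)."""
    return 20 * math.log10(amplitude / reference)

# Doubling an amplitude adds about 6 dB; halving it subtracts about 6 dB:
print(round(amplitude_db(2.0), 2))   # → 6.02
print(round(amplitude_db(0.5), 2))   # → -6.02
print(amplitude_db(1.0))             # → 0.0 (same level as the reference)
```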

Tracks and Channels
Mono http://en.wikipedia.org/wiki/Monaural
Monaural or monophonic sound reproduction (often shortened to mono) is single-channel. Typically there is only one microphone, one loudspeaker, or (in the case of headphones and multiple loudspeakers) channels are fed from a common signal path. In the case of multiple microphones, the paths are mixed into a single signal path at some stage.
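That mixdown stage can be sketched as a simple per-sample average (a hypothetical helper, not how any particular mixer works internally):

```python
def mix_to_mono(*channels):
    """Fold any number of equal-length channels into one mono channel
    by averaging the samples, which keeps the summed level from clipping."""
    return [sum(samples) / len(samples) for samples in zip(*channels)]

left  = [0.25, 0.5, -0.75]
right = [0.25, 0.5,  0.25]
print(mix_to_mono(left, right))  # → [0.25, 0.5, -0.25]
```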

Stereo http://en.wikipedia.org/wiki/Stereo
Stereophonic sound or, more commonly, stereo, is a method of sound reproduction that creates an illusion of directionality and audible perspective. This is usually achieved by using two or more independent audio channels through a configuration of two or more loudspeakers in such a way as to create the impression of sound heard from various directions, as in natural hearing. Thus the term “stereophonic” applies to so-called “quadraphonic” and “surround-sound” systems as well as the more common 2-channel, 2-speaker systems. It is often contrasted with monophonic, or “mono” sound, where audio is in the form of one channel, often centered in the sound field (analogous to a visual field). Stereo sound is now common in entertainment systems such as broadcast radio and TV, recorded music and the cinema.

Surround Sound http://en.wikipedia.org/wiki/Surround_sound


Editing and Rendering

This is the final part of Module 2, and it covers editing tricks and rendering formats.

Let’s refresh our memory about it…

We’ve been talking about transitions and cuts for the final edit. The main idea is to know the last image of the previous sequence and the first image of the current sequence, and to find interesting ways to blend them. In some cases new concepts appear (the dragon inside the water) and we should build new compositions, rather than just doing a fade or a cut.

We’ve also been talking about rendering formats and codecs. For intermediate renders of scenes, we should use these formats and codecs:

  • 1920×1080
  • 30 fps
  • Square pixels
  • Progressive (No fields)
  • RGB(A)
  • 16 bit rendering
  • PNG, TIFF, TGA sequence rendering | or | Quicktime Animation codec

For final render of the whole movie we should use:

  • 1920×1080
  • 30 fps
  • Square pixels
  • Progressive (No fields)
  • RGB
  • 16 bit rendering
  • at least 44.1 kHz stereo audio, no compression
  • DXV codec 90-100% quality | or | QuickTime Photo JPEG 90-100% quality | or | QuickTime Apple ProRes 422 90-100% quality

Some recommendations


Sound Sync

We’ve been talking about methods for synchronizing image and audio inside After Effects.

At a micro level, if we want to work comfortably, it is desirable to have a tight match between fps and bpm, so we can use a very simple algorithm to find “right” tempos depending on the fps of the movie:

30 fps = 1800 fpm (frames per minute)

1800/2 = 900 bpm = 2 fpb (frames per beat)

900/2 = 450 bpm = 4 fpb

450/2 = 225 bpm = 8 fpb

225/2 = 112.5 bpm = 16 fpb

112.5/2 = 56.25 bpm = 32 fpb

The basic idea is to divide the movie’s fpm by an integer until we get a number that works as a bpm. In the example we divided by 2, but we can divide by any integer. The advantage of successively dividing by 2 (powers of 2) is that we maximize the divisibility of the fpb: at 112.5 bpm we have 16 fpb, which means 8 frames per 1/2 beat, 4 frames per 1/4 beat, and so on.
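The halving procedure above can be sketched in a few lines (`tempo_ladder` is a hypothetical helper name):

```python
def tempo_ladder(fps, max_fpb=32):
    """Repeatedly halve frames-per-minute to list tempos that divide the
    frame rate evenly, as (bpm, frames-per-beat) pairs."""
    fpm = fps * 60          # frames per minute
    bpm, fpb = fpm, 1       # seed: fpm taken as a (absurdly fast) bpm = 1 fpb
    ladder = []
    while fpb <= max_fpb:
        ladder.append((bpm, fpb))
        bpm /= 2            # halve the tempo...
        fpb *= 2            # ...which doubles the frames per beat
    return ladder

for bpm, fpb in tempo_ladder(30):
    print(f"{bpm} bpm = {fpb} frames per beat")
```

Running it for 30 fps reproduces the ladder above, ending at 112.5 bpm = 16 fpb and 56.25 bpm = 32 fpb.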

There’s a nice online app that helps us do this calculation: http://www.vjamm.com/support_av_bpm.php

We did a little example of sound sync (available in Dropbox) using a very direct technique: every sound in our music has a visual animation assigned. We just have to edit in After Effects and put everything in order.

We’ve also been talking about volume / opacity envelopes, algorithmic curves, and the use of black for silence / white for peaks.
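As a rough sketch of such an envelope, a normalized amplitude can be mapped linearly to layer opacity, black/invisible on silence and white/fully visible on peaks (the mapping below is purely illustrative, not an After Effects API):

```python
def amplitude_to_opacity(amplitude, floor=0.0):
    """Map a normalized amplitude (0.0 = silence, 1.0 = peak) to a layer
    opacity percentage; `floor` sets the opacity kept during silence."""
    amplitude = max(0.0, min(1.0, amplitude))     # clamp to [0, 1]
    return floor + (100.0 - floor) * amplitude

# A short amplitude envelope keyframed to opacity values:
envelope = [0.0, 0.25, 1.0, 0.5]
print([amplitude_to_opacity(a) for a in envelope])  # → [0.0, 25.0, 100.0, 50.0]
```

Swapping the linear mapping for a curve (e.g. squaring the amplitude) gives the “algorithmic curves” feel, with softer response at low levels.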

At a medium, structural level, we’ve seen that there’s no need to make EVERY sound in the music visible. It seems that sight accepts less complexity than hearing, so we get a better effect by animating only the most “important” sounds in the music.

 


Working with final Frames

So we have arrived at the end of the Video Module, and this last week is devoted to working on the final frames for the script we’ll give to the government. This is an interesting process because we start defining the look of the different scenes in the mappings.

In addition to that, teachers are helping with specific problems and teaching advanced techniques to those who are interested.


Winter in Venice

We attended the mapping show called “Winter in Venice” at the Venetian casino with a group of students. We tried to analyze the number of video projectors and other technical arrangements in the show, as well as the creative results.

After the analysis we figured out that they were using around 20 mid-power (15k) video projectors, usually stacked (2 by 2, or 3 by 3). The huge amount of light pollution in the casino areas makes it impossible to work with “normal” video projection systems, which made the technical side of this mapping very difficult and forced them to use lots of projectors. The sheer size of the “screens” has also led the show’s technical team to use edge-blending techniques and multiscreen setups, so we guess they are using some kind of media server to manage video playback and synchronization.

On the creative side, the general feeling was that, although some parts were very well done (mostly the 3D freezing and collapse sequences), the overall show is too “rainbow” colored. Some parts could have been much better with a more restricted color palette. What we thought was the weakest part of the show was the poor synchronization of audio and video: it looks like the two were worked on separately and not mixed until the end of the creation process.


Shooting Video and StopMotion

We used the recording studio at Icentre to generate some materials for the massage sequence in the TapSeac Mapping.

We wanted to have a couple of massaging hands recorded on video, so we used the chroma foil at the studio to record the hands. We also learned some basics of StopMotion animation.

The final step of this process was getting rid of the green background using After Effects’ chroma keying capabilities, so that we could have a clean background for the hands, in order to mix them with other materials made in 3D.
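After Effects’ keyers are far more sophisticated (soft edges, spill suppression), but the core idea of chroma keying can be sketched as a per-pixel test for green dominance (function name and threshold here are purely illustrative):

```python
def chroma_key_alpha(r, g, b, threshold=0.1):
    """Very rough green-screen key: a pixel is keyed out when its green
    channel dominates both red and blue by more than `threshold`.
    Channels are floats in [0, 1]; returns alpha (0 = removed, 1 = kept)."""
    green_dominance = g - max(r, b)
    return 0.0 if green_dominance > threshold else 1.0

print(chroma_key_alpha(0.1, 0.9, 0.2))  # green-screen pixel → 0.0
print(chroma_key_alpha(0.8, 0.6, 0.5))  # skin-tone pixel   → 1.0
```

A real keyer would return fractional alpha near the threshold instead of a hard 0/1 cut, which is what gives clean edges around the hands.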
