I'll start by saying that I use Adobe Premiere Pro for my visual work, and for audio engineering I use Ableton Live 9, the audio workstation software I'm most familiar with. Whatever software you use, I imagine the most important work comes by way of equalization and mastering. I have in mind with this guide the general newcomer who hasn't worked formally with audio workstation software and/or audio/visual packages such as Adobe Premiere, and who wants to put together something like a music video of a live performance. If you've been recording live sessions on smartphones and having to live with smartphone microphones, whose sound lands far from the actual acoustic experience, this guide should give you some idea of the process of improving the sound of a live session recording alongside potentially editing the video. I won't be getting into heavy detail here.
Equipment: decent workstation software (several hundred dollars, but certainly under $1,000 for budgeting). A USB MIDI multichannel audio/digital interface (mine was around $200). Decent utility microphones (Rode makes some; you can get into fairly inexpensive studio cardioids for a couple hundred dollars or less that sound good). These aren't USB mics, by the way; I haven't personally used or had much experience with USB direct mics. Microphone stands. Digital cameras, optimally 1080p at 60 frames/sec (produces nice imaging; it may also help to look for decent low-light capture in whatever camcorder you use), although at present I work with what I have, which includes some 720p cameras. Camera tripods for most cameras. Computer hardware: while audio engineering doesn't require a solid state drive at all, you may want an SSD for rendering and program operations if you use Adobe products. Mechanical (spinning) hard drives run Adobe products horribly slowly in my opinion. SSDs can be picked up for less than a hundred dollars and are generally well worth the cost.
Firstly, probably the most important component of engineering acoustic material, in my experience, is learning low-frequency roll-offs (the low-cut filtering that some mics build in as a switch). Some mics have filter switches that do this for you, but I generally like to control this on an as-needed basis and so tend to leave my cardioid microphones flat. Ideally your workstation software gives you equalization on each individual channel (optimally with 8 or so bands), though you may be able to get by with a 3-band parametric EQ; lastly, you'll probably also want equalization on the master channel.
Compression/Normalization - what is it? Compression takes a given signal and squeezes its amplitude range into a narrower range, so that the loud parts of the signal end up closer in level to the quiet parts.
Normalization is similar to compression in that the signal is not only compressed but may also be maximized, so that it sits as close as possible to peak volume through the audible duration of a song. My Ableton production suite includes normalization when exporting a master of a given track, though I rarely use it.
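To make the distinction concrete, here is a minimal sketch in Python/NumPy (my own illustration of the math, not anything your workstation exposes) of what peak normalization and a bare-bones downward compressor actually do to sample values:

```python
import numpy as np

def normalize_peak(x, target_peak=1.0):
    """Scale the whole signal so its loudest sample hits target_peak."""
    peak = np.max(np.abs(x))
    return x if peak == 0 else x * (target_peak / peak)

def compress(x, threshold=0.5, ratio=4.0):
    """Reduce level above the threshold by the given ratio (sample-by-sample,
    no attack/release smoothing -- just the amplitude-mapping idea)."""
    mag = np.abs(x)
    over = mag > threshold
    gain = np.ones_like(mag)
    gain[over] = (threshold + (mag[over] - threshold) / ratio) / mag[over]
    return x * gain

# Example: a quiet passage followed by a loud one.
quiet = 0.1 * np.sin(np.linspace(0, 200, 44100))
loud = 0.9 * np.sin(np.linspace(0, 200, 44100))
song = np.concatenate([quiet, loud])
print(np.max(np.abs(compress(song))), np.max(np.abs(normalize_peak(song))))
```

Note that normalizing raises everything, including whatever room noise is in the quiet passage, which is exactly the complaint in the next paragraph.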
For audio, depending on the genre of music you play, I'd say from the outset that I don't really like to compress or normalize at all from an engineering standpoint. Here's why I dislike it for acoustic sounds. First, normalizing a recorded source tends to lift errant acoustic noise to higher levels. While this may be fine for certain genres (heavy metal, rock, and so forth), in my opinion it tends to make things sound noisier, more cluttered and messy, while lifting arguably undesirable ranges of sound you may not be looking for. Especially in the case of more trained voices capable of variable amplitude expression, it constrains the range between quiet and loud.

The advantage of compression is that it is like an automatic remote controlling the volume when it gets overwhelming (e.g., when television cuts to a loud commercial, a compressor would do all the work of making the commercial sound less loud relative to the rest of the broadcast). The disadvantage, however, is that it takes away the artistic freedom of the player or vocalist in controlling velocity (volume) expression. So I may alternately suggest the art of hand compressing your vocals or instrument, which basically amounts to doing exactly what your compressor does, but doing it by hand in the mixdown process.

In Ableton, and I imagine in other higher-end workstation software, you needn't worry about getting things right in a single automation pass. Automation, by the way, is the ability of the workstation to record your mixing console movements, either by capturing your moves on the workstation's controls while recording, or by 'drawing' the control points graphically in the mixdown interface. Automation can control things like a volume assignment at a specific transport time, or anything from chorus, reverb, and song tempo to the tempo of a given sample on a specific channel, to a whole host of things. Automation is probably something you should learn well in any workstation software, as it is the most powerful aspect of such software, period.

In any event, recording volume automation lets you 'hand' compress things like vocals, which I tend to prefer whenever possible, since vocals in my opinion tend to sound most natural this way. The art of hand compressing is simply described: wherever an aberrant volume peak on an instrument's channel is prominent enough to overwhelm the rest of the instrumentation, its volume is graduated back down until the instrument is restored to something like volume equilibrium relative to the volume mean of the recording. Simply put, riding the volume fader down where volume peaks are most prominent on the instrument channel (just like turning down the television during a loud commercial) is the process of hand compressing your channel. A sketch of the idea follows.
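Here's a minimal sketch, again in Python/NumPy as an assumption on my part (in practice you'd draw this envelope in your workstation's volume automation lane), of what a hand-drawn volume ride looks like when applied to a channel:

```python
import numpy as np

fs = 44100                     # sample rate (Hz)
t = np.arange(0, 8 * fs)       # an 8-second vocal channel (placeholder signal)
vocal = 0.4 * np.sin(2 * np.pi * 220 * t / fs)
vocal[3 * fs:5 * fs] *= 2.2    # an over-loud passage from seconds 3 to 5

# Hand compression = a gain envelope drawn by ear at a few breakpoints:
# (time in seconds, gain). Here we ride the fader down over the loud passage.
breakpoints = [(0.0, 1.0), (2.8, 1.0), (3.2, 0.45), (4.8, 0.45), (5.2, 1.0), (8.0, 1.0)]
times = np.array([b[0] for b in breakpoints]) * fs
gains = np.array([b[1] for b in breakpoints])
envelope = np.interp(t, times, gains)   # linear ramps between breakpoints

ridden = vocal * envelope
print("peak before:", np.max(np.abs(vocal)), "peak after:", np.max(np.abs(ridden)))
```

Unlike a compressor plugin, the breakpoints are chosen by listening, so the quieter expressive passages are left completely untouched.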
Second, the workstation should, in my opinion, be able to control effects per channel. You could use effects send and return channels, which allow a single generic effect to be applied in varying amounts to each channel, but I like to control effect amount and type on a per-channel basis.
Third, you should at least be able to have equalization and an effects package per channel. A generic effects package might include a lo/hi/mid pass filter, chorus, reverb, delay, and limiter. At the moment I tend to use mostly reverb and some chorus, but it really depends on the song and the sound that one might be looking for.
Fourth, if you want to provide some acoustically natural separation from instrument to instrument and vocal to vocal, aside from using any number of mics in your studio setup to achieve this (assuming you know how to set all of that up), you may alternately consider something as simple as duplicating any existing acoustically recorded channel and track-delaying the copy. This is basically a stereo delay on a given channel. I usually move vocals no more than 5 ms, although you can experiment with larger delays for a particular sound you might be looking for. Also, you will want to pan the duplicate delayed channel and its original to some degree opposite on the left-right spectrum. Leaving them unpanned tends to produce a chorusing effect, which can have the effect of 'softening' vocals. Panning the delayed and non-delayed channels fully left and right respectively gives the widest stereo spread. A rough sketch follows.
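As a rough illustration (a NumPy sketch of my own, not a plugin you'd actually reach for), the duplicate-delay-pan trick looks like this on raw samples:

```python
import numpy as np

fs = 44100
delay_ms = 5.0                              # my usual upper bound for vocals
delay_samples = int(fs * delay_ms / 1000)   # ~220 samples at 44.1 kHz

mono = 0.5 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # placeholder vocal take

# Duplicate the channel and push the copy back by the delay.
delayed = np.concatenate([np.zeros(delay_samples), mono])[:len(mono)]

# Pan the original hard left and the delayed copy hard right (maximum stereo spread).
left, right = mono, delayed
stereo = np.stack([left, right], axis=1)    # shape (samples, 2), ready for a WAV writer
print(stereo.shape, delay_samples)
```

With no panning, both copies sum in the middle and you hear the comb-filter/chorus softening described above instead of separation.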
Equalization per channel, for any acoustically recorded channel, is in my opinion usually a fairly generic, often redundant process from one acoustic instrument to the next. It is done first and foremost so that the instrument sounds, through speakers or headphones, as close as possible to how it would sound live, heard with your own ears and no microphone in between. A person singing right on top of a microphone will likely produce much more low end. From the microphone's perspective, the effect is like someone putting their mouth next to your ear and speaking: they are going to sound much deeper, since the tones coming from their mouth have no space in which the lower frequencies can diminish. One solution, of course, is to give the vocalist ample space between the microphone and their mouth. Another is to re-equalize, rolling off the low-end frequencies that make the voice sound different from how it would in the space you imagine listening to them sing in. A lot of this, I'll offer, is a bit of a listening art: you gauge the singing voice against the listening distance you have imagined and adjust equalization until the recorded channel sounds like that acoustic version. Once you have equalization set per channel, it also helps to save these settings as templates so the adjustments can be reapplied with ease, especially when repeating the procedure on the next song. Still, there is a bit of alchemy and art in finding the desired balance, especially in relating some imaginary spatial distance between the vocalist and the listener as far as mastering a performance recording is concerned. Lately I also tend to dip the equalization on vocals around the 1 kHz range; the Q setting can vary (optimally not too broad, but not a sharp notch either). A low-end roll-off (up to 100 Hz or more) tends to be feminizing for female vocalists, while skipping the roll-off tends to help promote masculinity in male vocals. A sketch of a simple roll-off follows.
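For what a low-end roll-off amounts to outside the DAW, here is a small sketch using SciPy (my choice of tool for illustration; the exact filter order and corner frequency are assumptions), applying a gentle low cut around 100 Hz to a recorded channel:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100
channel = np.random.randn(fs * 4) * 0.1     # placeholder: 4 seconds of a recorded channel

# A 2nd-order Butterworth high-pass at 100 Hz is a gentle low-end roll-off,
# roughly what a mic's low-cut switch or a low shelf pulled down would do.
sos = butter(2, 100, btype='highpass', fs=fs, output='sos')
rolled_off = sosfiltfilt(sos, channel)
print(rolled_off.shape)
```

In the workstation you'd do the same thing by dragging the lowest EQ band down, by ear, until the proximity boom matches the imagined listening distance.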
Reverb - this is about how one might want to convey a vocal in a given spatial context. If there is natural room reverb worth taking advantage of, you may choose to lean on the room acoustics, perhaps setting microphones a little further away from a given instrumentalist so the sound has room to mature acoustically; in that case it also helps to have in your microphone arsenal a utility microphone that can switch between, say, cardioid, figure-8, or another polar pattern. On the other hand, the reverb packages in decent workstation software can synthetically create nice-sounding reverbs that make it seem as though a person were performing on a stage, or in a room, and so forth. The most important aspects of reverb come down to the wet/dry mix, alongside the filtering controls that may be built natively into the reverb module assigned to a given channel. I tend to low-pass filter the reverb without completely removing its high frequencies; that is, I pass the high end but reduce its volume, starting anywhere from 1 kHz on down, to control the 'tinny' quality of a given reverb. A rough wet/dry sketch follows.
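To illustrate just the wet/dry and filtering ideas (a toy reverb of my own for the sketch, nowhere near the quality of a workstation's reverb package):

```python
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

fs = 44100
dry = 0.5 * np.sin(2 * np.pi * 330 * np.arange(fs) / fs)   # placeholder vocal channel

# Toy 'room': a 1.5-second exponentially decaying noise burst as the impulse response.
ir = np.random.randn(int(1.5 * fs)) * np.exp(-4 * np.linspace(0, 1, int(1.5 * fs)))
wet = fftconvolve(dry, ir)[:len(dry)]
wet /= np.max(np.abs(wet)) + 1e-9           # bring the reverb tail to unity peak

# Reduce (not remove) the reverb's high end to tame the 'tinny' quality:
# blend a low-passed copy of the tail with the unfiltered tail.
sos = butter(2, 3000, btype='lowpass', fs=fs, output='sos')
wet = 0.7 * sosfilt(sos, wet) + 0.3 * wet

mix = 0.8 * dry + 0.2 * wet                 # wet/dry mix: mostly dry, a modest room
print(np.max(np.abs(mix)))
```

The two numbers doing the real work are the 0.8/0.2 wet/dry split and the filter corner; everything else is just a stand-in for a real reverb algorithm.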
Chorus - as stated before, chorusing can be had from stereo-delayed tracks while simultaneously providing stereo separation, but you may choose to add a chorus effect, especially on vocals, depending on the desired production context. I've found that for an acoustically natural sound you can still add chorus without making instruments or vocals sound unnatural or synthetic (or maybe you are looking for exactly that production-style sound). The art of adding chorus, much as with reverb, comes down to the chorusing width and the wet/dry amount on a generic chorus package. For instance, I might run vocals through mine at a minuscule setting, to the extent of not being noticeable, just smoothing out a vocalist's sharper, raspier edges. To borrow a visual analogy, chorusing tends toward a warm yellow filter as opposed to a cool blue one.
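For the curious, the guts of a generic chorus are just a short delay whose length wobbles under an LFO, mixed back against the dry signal. A minimal NumPy sketch of that idea (not Ableton's chorus; the rate, depth, and base delay values here are assumptions):

```python
import numpy as np

fs = 44100
x = 0.5 * np.sin(2 * np.pi * 220 * np.arange(2 * fs) / fs)   # placeholder vocal

rate_hz = 0.8            # LFO speed
depth_ms = 3.0           # modulation depth
base_ms = 12.0           # base delay
n = np.arange(len(x))
delay = (base_ms + depth_ms * np.sin(2 * np.pi * rate_hz * n / fs)) * fs / 1000.0

# Read the signal at a smoothly varying (fractional) delay behind the write position.
read_pos = np.clip(n - delay, 0, len(x) - 1)
i0 = np.floor(read_pos).astype(int)
i1 = np.minimum(i0 + 1, len(x) - 1)
frac = read_pos - i0
wet = (1 - frac) * x[i0] + frac * x[i1]     # linear interpolation between samples

chorused = 0.7 * x + 0.3 * wet              # subtle wet/dry mix
print(chorused.shape)
```

Keeping the wet amount small is what lets the effect smooth edges without reading as an obviously synthetic chorus.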
Important things to consider when doing audio mixing and mastering:
1. How clear or cluttered does the overall recording sound especially at high volumes on playback?
2. How well is everything meshing together as a performance? Does it sound dynamic or flat? By flat, I mean: do the instruments seem to pour out of the speakers monophonically, unseparated, or do they feel more like what one would hear sitting in the audience at the performance?
3. Are there prominent annoyances, tones, or frequencies? If so, you will likely need to isolate the source(s) and make adjustments.
4. Can I replay this song over and over again and not feel like something about the mix is really bothering me? Generally speaking, you should reach a stage where the acoustics in the recording are as desirable as what the instrumentalists and vocalists produced when they worked the song up together. Of course, I've heard of the wish for a magic button that makes everyone sound good, but generally, if the musicians have done their job producing a song together (especially in a live context) that sounds good, the engineering should flow more smoothly; in like kind, it is up to the engineer to replicate that performance so it is almost indistinguishable from the live context (or at least to strive toward that goal).
Mastering effects that I like to use: a simple 3-band parametric EQ with a fairly wide Q controlling the 1 kHz range (wide enough, for instance, that the falloff extends toward 100 Hz). I also use Ableton's enhanced stereo mastering package: stereo widening off, compression generally off, and some mid-range roll-off to taste, generally desirable but not so heavy as to be obvious.
Synchronization of audio and visual materials is actually simpler than you might expect if you have never done it. First, you will want a synchronization cue. This is usually some audible tone recorded while any significant room ambient noise is absent, or at least much diminished relative to the cue. I've done something as simple as three audible hand claps, which can be seen visually once you 'expand' all the audio/video tracks in Adobe Premiere.
In video workstation software like Premiere, you basically want all the audio synchronization cues lined up so that each video feed plays at as near the same time as every other.
If you are using Premiere or something like it, you will be able to load each video/audio feed into its own track. Correspondingly, you will have your mastered, finalized audio recording placed on its own track, although it will have an empty video designation.
Synchronization comes by way of shifting and trimming the start of a given camera feed, usually fairly close to the start of the video/audio feed. You duplicate the process for all the other feeds, moving and trimming the start time of each until the synchronization cue is visually lined up for all audio feeds at the exact same frame time. Otherwise, an unsynchronized feed will at worst look completely off-time relative to the live sound, and at somewhat better look badly dubbed; at best, every camera source feed appears to share the same audio source. A sketch of the offset-finding idea follows.
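The workflow above lines the clap waveforms up by eye, but if you're curious how the same offset could be estimated programmatically, here's a hedged sketch using SciPy cross-correlation (my own illustration, not a Premiere feature):

```python
import numpy as np
from scipy.signal import correlate

def clap_offset_samples(reference_audio, feed_audio):
    """Estimate how many samples feed_audio lags behind reference_audio,
    using the loud clap cue that both recordings share."""
    corr = correlate(feed_audio, reference_audio, mode='full', method='fft')
    return int(np.argmax(corr)) - (len(reference_audio) - 1)

fs = 48000
clap = np.concatenate([np.zeros(fs), np.random.randn(200), np.zeros(fs)])
feed = np.concatenate([np.zeros(fs // 2), clap])   # same clap, starting 0.5 s later

lag = clap_offset_samples(clap, feed)
print("shift this feed earlier by", lag / fs, "seconds")   # prints ~0.5
```

In practice you'd trim each camera feed's start by the reported offset, which is exactly what dragging the clips into alignment does.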
Workflow beyond this...
Mute all but the audio track that you just mastered; in other words, all video feeds will have their audio tracks disabled, and the finalized master audio track will stand in as the audio source for all of those video feeds.
Once you have synchronized all your tracks, I start the process of reviewing the song, being mindful of tempo, and simultaneously laying down markers. It happens that I am a drummer with experience playing big kits alongside natural acoustic drums, which tends to help here; if you have a rhythm person in your group, I'd suggest having them help you lay out the transition markers in the song. These transition markers represent the points at which scene footage will switch from one camera's footage to the next. For instance, the vocalist starts singing, so a transition marker is laid down to represent the cut to that camera angle. I tend to move rhythmically in laying these transition markers (done with 'M' in Premiere), and then, to avoid too much rigid regularity, I also randomize the lengths of the transitions so they aren't always predictably the same, though this may depend on the stylistic transition effect you are looking for in your production. The rapidity and flow of the transitions follows not only the tempo but the pace you are trying to achieve. At the most rapid end it produces the strobe-like visual effects I've seen, while in other cases a slower acoustic performance holds on a given camera angle for the sake of capturing performance drama. The average scene length from camera to camera (whether 3 seconds, less, or more) is up to the production author and the energy you're after. Mostly, at this point, you simply need to lay down transition point markers ('M') throughout the song; the sketch after this paragraph shows one way to think about marker spacing.
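If it helps to see the timing idea written down, here's a small sketch (purely illustrative, not a Premiere script; the beat-range numbers are assumptions) that generates marker times locked to the beat grid but with randomized scene lengths:

```python
import random

def transition_markers(song_len_s, bpm, min_beats=4, max_beats=12, seed=7):
    """Marker times (in seconds) that land on beats but vary scene length."""
    random.seed(seed)
    beat = 60.0 / bpm
    markers, t = [], 0.0
    while t < song_len_s:
        markers.append(round(t, 3))
        t += beat * random.randint(min_beats, max_beats)   # 4-12 beats per scene
    return markers

# A ~3.5-minute song at 120 BPM: scenes run 2-6 seconds each.
print(transition_markers(210, 120)[:10])
```

Tightening min_beats/max_beats pushes toward the strobe-like feel; widening them holds shots longer for drama.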
Once the transition markers are laid down, I recommend deciding what camera effects you'd like to use. This will likely be a production-style template controlling things like filtering, lighting, vignettes, and so forth. Basically, much of what you can do to a still image with a photo editor can be done to footage in Premiere. Mostly you may use things like vignettes to reduce the visual distraction of backgrounds and make the performer more focal. Cineon filters help take what obviously reads as video footage and convert it toward a more cinematic look. Hopefully, if you have had any experience with photo editors, the workflow for adjusting filters isn't so different between still images and footage. If you are working with different camera angles, especially where you haven't gone through the process of formally controlling lighting, you will likely want to balance the tonal differences between them using the lighting controls (exposure, highlights, whites, blacks, and so forth). Optimally the same filtering process is applied to each camera angle, but individual adjustments will likely be needed per camera. I recommend saving your filter settings for each camera (again, for template purposes; this saves setup time the next song/recording session around).
Razor cutting the whole thing. Once effects have been applied (and, to save time, it is important to apply effects before you start chopping your video feeds), you can start using the razor tool. Basically, the razor tool slices a feed at a given transition point so that blocks of a feed can be disabled/enabled or deleted, which is how you establish the transition from one scene sequence to the next.
For all such feeds, excepting the audio track alone, you will razor cut at each designated transition marker point that you established throughout the song.
So to start, you will begin at the song's Marker In point, cutting the visual feeds at that point. You then advance to the next transition marker (Shift + M in Premiere) and make the same cut on all camera feeds (vertically). This process of moving to the next transition marker and cutting is repeated until you reach the song's Marker Out point.
Transitioning the feeds. There are a couple of ways of doing this. One way generally isn't as desirable, since it means a deleted feed has to be restored if a scene needs to be changed; the preferable way is disabling feed blocks and leaving the desired camera feed 'enabled' for a given scene. For the desired camera feed, you don't do anything. For all the other camera feeds, you simply disable their blocks (in Adobe this is done with Shift + E). Otherwise, you would need to manually select each feed that isn't to be shown and delete it (or make sure it is not in the topmost visible layer at that point of the scene footage when rendering).
A rough guide to mixing performance visuals for scene layouts: generally speaking, a visual rule is that the lead in a given performance takes visual predominance. If a vocalist is singing, they should likely have more camera time relative to the performers who aren't taking a prominent role. Exceptions may come at refrains/choruses, or at places with enough redundancy in the performance that the vocalist's role is effectively reduced to that of an instrumentalist (e.g., a looped vocal track). That being said, I am sure there is plenty of room for interpretation as to how much predominance is given to one performer relative to the others. I won't go much beyond this, since there are a lot of different ways one might go about laying out visuals here.
So the process of disabling/enabling feeds means that within each scene block you have already cut across the entire song, one and only one camera feed is enabled.
You are basically done.
You might consider adding additional footage if you have an idea for visuals beyond this. I've actually laid out full-blown videos using Creative Commons archive footage (see the Prelinger Archives), which can make for a lot of fun with source material. Just make sure of the rights assigned to the video, and certainly be sure to provide citation if attribution is required; or you may consider assembling your own environmental source footage. If I had the money to fly drones, I might be doing more environmental stuff, but you don't necessarily need a high budget to compile environmental source footage. Some beautiful stuff has been done with still-camera videos, shooting footage that is generally motionless except for whatever moves independently of the camera.
I usually export 1080p at 30 frames/sec HD-quality video (a bit bigger than YouTube's HD preset), which for some reason seems to come out better than exporting specifically to the YouTube HD presets. Check that your render start/end times correspond with your song's Marker In/Out.
Then upload the exported file to YouTube.
I do recommend that you first set the video to private and send emails (via the recipient box) to all your band mates for review, if they can't be there in person to review your work. After the video is uploaded and processed, send the email notifications out, and then change the video to 'unlisted'. I do this because few people actually bother with the hassle of logging into their Google accounts to gain access to a privately listed video. A private YouTube video, with the emails already sent, keeps the same link when it is changed to 'unlisted', but then everyone with the link can access it, as opposed to needing the link while also being logged into Google.