Keeping a sort of journal here of what I've done and where I've left off until the next round. Figured I'd do this since it might provide some benefit to those unfamiliar with Ableton and its functions.
4/14/13
I started from the premise of working with a dominant Moog-style synth sound, looking for something gritty and square-wave dominant. This time I didn't feel like searching Ableton's synth banks; instead I wanted to explore the Kurzweil V.A.S.T. sound architecture libraries to see if I could model the sound there and then sample it into Ableton. As it turned out, I started with an existing Kurzweil sound and modified it (square oscillators coupled with a couple of keymap filter changes). I sampled it by simply arming the desired track for audio input and collecting a few seconds of audio (really all that was needed). I then loaded a Sampler instrument and dragged the audio sample into its sound bank. From there I did a number of things:
I set the sample to repeat on loop, used a repeat on the filter, and then, for an added oscillating effect, created different start points (one for starting the playback and another for the loop's repetition). I then added yet another loop repeat on the envelope filter itself, in this case timed in milliseconds rather than in BPM-relative measure time. Generally I like a Moog-style action to have just a slight sustain (usually no more than half a second); anything less can feel choppy on key play unless you've trained yourself to hold the notes or felt compelled to ride the sustain pedal rapidly on a given cadence. Also worth noting: if you're using a sound sample gathered outside the key of C (especially one containing more than single notes), make note of its key and set the transposition accordingly on the Sampler instrument.
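The two-start-point looping above can be sketched abstractly. This is a toy model in Python, purely my own illustration of the idea (Ableton's engine works at audio rates on real samples, not on toy lists):

```python
def render_looped(sample, play_start, loop_start, loop_end, n_frames):
    """Play `sample` from play_start once through, then repeat the
    region [loop_start, loop_end) indefinitely, for n_frames frames.
    All positions are frame indices into `sample`."""
    out = []
    pos = play_start
    for _ in range(n_frames):
        out.append(sample[pos])
        pos += 1
        if pos >= loop_end:
            pos = loop_start  # jump back to the loop's own start point
    return out

# A toy 8-frame "sample": playback starts at 0, the loop repeats frames 2..5
sample = [0, 1, 2, 3, 4, 5, 6, 7]
print(render_looped(sample, play_start=0, loop_start=2, loop_end=6, n_frames=10))
# -> [0, 1, 2, 3, 4, 5, 2, 3, 4, 5]
```

Separating the playback start from the loop's own start point is what gives that play-once-then-cycle oscillating character.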
I likewise added an auxiliary oscillator and experimented a little with volume modulation (using a sine wave), but for the sound I was after, that sort of modulation, once it becomes predominant, is nearly like adding an arpeggio. I thought about 'aging' the sound with a slight pitch warble, which could be accomplished with the same LFO filter by setting its amount relatively low (something like 1-10 out of 100 on the scale, maybe) and setting the LFO frequency low enough that the oscillation period lands around 150-250 ms (?). Generally I wanted the sound highly responsive in play, meaning little latency between keypress and sound, really in the attack range of a piano for a classic synth lead, but I also wanted it meaty. For this type of buzzing sound I've found the 2nd to 3rd octaves were generally the hottest range, while the lowest octaves became nearly inaudible for the given sample. The advantage of sampling here is that, at default settings, I believe the sampler plays the sample back at different rates for different pitches, stretching both frequency and duration. What sounds clean and less buzzy in high frequency ranges thus tends to become buzzier as you move into lower ranges, which proved an advantage relative to the cleaner, less gritty sound the Kurzweil produced in the same octave range. I experimented with both low- and high-pass filters, but generally felt the raw sample sounded more dominant and that filtering was subtractive. So after playing with the sound against a given rhythm, I had a few more things to work with.
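The warble estimate can be pinned down numerically. The cents-based depth and the function names here are my own framing of the numbers above (a low LFO amount, period around 150-250 ms), not Ableton or Kurzweil parameters:

```python
import math

def pitch_warble(t_ms, depth_cents=8.0, period_ms=200.0):
    """Pitch offset in cents at time t_ms for a slow sine LFO.
    depth_cents stands in for the low 'amount' setting; period_ms
    is the 150-250 ms oscillation estimated above."""
    return depth_cents * math.sin(2 * math.pi * t_ms / period_ms)

def cents_to_ratio(cents):
    """Convert a cents offset to a playback-rate ratio."""
    return 2 ** (cents / 1200.0)

# Peak of the warble: +8 cents, i.e. a rate ratio of roughly 1.0046,
# subtle enough to read as aging rather than vibrato
print(round(pitch_warble(50.0), 3), round(cents_to_ratio(8.0), 4))
```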
The rhythm left me wanting something fairly mechanical, especially at the given tempo (around 60-80 bpm), or at least I didn't want too much humanized expression in the playing. There was an added consideration here. I wanted an arpeggio-like effect on the synth lead, but I didn't want to work with arpeggiators, which tend to restrict note time lengths; at minimum, switching between multiple timings on a lead (say 1/8 to 1/16, then to 1/4, and so forth) would mean extra work toggling settings. So instead the solution went as follows. The lead itself would be entirely unison (easy enough to toggle in the Sampler's voice allotment). Next, for the mechanical feeling, I wanted every note played at a consistent velocity, so I added a Velocity MIDI effect before the Sampler to flatten the velocity map. Then I added a Note Length MIDI effect. This provides a couple of nice things: it guarantees a fixed sustain on each note without having to adjust envelopes, and you can set it either in milliseconds or in measure time, so every note plays precisely as a 1/8, 1/4, 1/2, or whole note, nothing less and nothing more. Together these tend to produce a very quantized feel without the restriction of being exactly quantized on the 'piano roll' of the given clip.
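The Velocity-plus-Note-Length chain amounts to a simple transform on the note list. A sketch of the idea, using my own toy note representation (not Ableton's API):

```python
def mechanize(notes, velocity=100, length_beats=0.25):
    """Force every note to one velocity and one length, like chaining
    a Velocity and a Note Length MIDI effect before the instrument.
    Each note is (start_beat, length_beats, pitch, velocity)."""
    return [(start, length_beats, pitch, velocity)
            for start, _, pitch, _ in notes]

# Sloppy, human input...
played = [(0.0, 0.13, 62, 87), (0.5, 0.31, 64, 64), (1.0, 0.22, 66, 110)]
# ...comes out with uniform velocity and a uniform 1/16-note length:
print(mechanize(played))
# -> [(0.0, 0.25, 62, 100), (0.5, 0.25, 64, 100), (1.0, 0.25, 66, 100)]
```

Note the onsets stay where they were played: only velocity and duration are flattened, which is exactly why the result feels quantized without being grid-snapped.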
The nice aspect of running the lead in unison is that, despite the fixed MIDI note lengths, anything played closer together than the note length is subject to truncation. Meaning you can produce any note length up to the ascribed MIDI note length, as long as unison is set on the Sampler instrument. (Note: this applies equally with two or more voices whenever a series of notes exceeds the voice count for the set note length, but there it produces a smearing, chord-like sustain that could weaken a lead if you aren't careful in writing or producing.) So in essence I could restrict playing to a form of quantization here. Fixing the note length ensures notes sound as if played with quantized precision, because any note shorter than the full length simply sustains until the moment the next note in the pattern begins. The result is a cleanly unison lead where one note runs precisely into the sound of the next, with no stray empty space between notes. Technically, though, the playing didn't need precise timing: I didn't quantize the lead on the 'piano roll', and didn't want to, since this gap-filling behavior solved the aesthetic problem on its own. I knew the melody pattern I wanted to write, but I also wanted the lead to sound dominant, especially relative to other synth leads I could recall and was attempting to model in some manner. If you think about natural acoustic instruments, you'll find any number of them (even fairly percussive ones with natural acoustic resonances) have natural sustains, and any sustain leaves a resonance between one note and the next, however precisely the player is playing.
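The truncation behavior boils down to a one-voice rule: a new onset steals the voice and cuts the previous note. A sketch, again using my own toy model with notes as (start, length) pairs in beats:

```python
def mono_truncate(notes):
    """One-voice (unison) behaviour: each note's effective length is
    capped at the gap to the next onset, since the new note steals
    the single voice. `notes` is a list of (start, length) sorted
    by start time."""
    out = []
    for i, (start, length) in enumerate(notes):
        if i + 1 < len(notes):
            length = min(length, notes[i + 1][0] - start)
        out.append((start, length))
    return out

# A fixed 0.5-beat note length, but onsets only 0.25 beats apart:
print(mono_truncate([(0.0, 0.5), (0.25, 0.5), (0.5, 0.5)]))
# -> [(0.0, 0.25), (0.25, 0.25), (0.5, 0.5)]
```

Each truncated note ends exactly where the next begins, which is the seamless, gapless lead described above.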
Often, however, electronic instruments have no natural acoustic reverberation built in (outside electric pianos/tines like the Rhodes and similar instruments, I'd guess... and technically the Rhodes is to the piano roughly what the electric guitar is to the acoustic guitar: sustain and resonance captured through electromagnetic pickups and the mass of the instrument's body, less through its acoustic properties??). Without a reverb placed on such instruments, it seems someone playing them might feel a "choppy," time-discordant effect. Some of the steps above may alleviate this, or you could experiment, as I have, with adding a slight sustain on the envelope filters, coupled with a little reverb and a little delay.
Other work involved building similar but varied synth lead sounds for the project. At this point I may let things sit; I'm not sure exactly what I want to do next, since the lead itself is great for part of the song, but I wanted it as a part and not an incessant refrain.
4/20
A running synopsis of the past week. The biggest highlight was creating a custom drum rack. Part of this involved constructing or finding synth sounds that could suit a percussive melody. The inspiration probably comes mostly from drum and bass genres, notably lower-octave tones with gate lengths in the one-second range (or thereabouts), though I've found these can be used versatilely across many octave ranges (here I chose something like the 3rd octave, although in some instances the actual synth sounds were toned more toward the 2nd). I chose to experiment first (having much less experience with it) with the Collision instrument in Ableton Live, but instead of choosing a preset I customized from the generic starting sound, which incidentally opens with a gate length of around 100 ms (?); I extended this and pitched the tone to the octave range indicated above. Procedurally I worked the same way from basic synth sounds using the Analog instrument: in some cases simplifying the harmonics by working with one oscillator instead of two and toggling unison (for added clarity and prominence). For 'softness' I've found sine oscillators less harsh than square and saw waveforms, while the latter provide nice edge and sharpness in other cases. Next, on the drum rack: while the inserted sounds all default to being keyed to C3, simple melodic percussive patterns can be made by varying the played note for each pad (note: the note sent to the instrument, not the trigger note on the controller). In this case, simply alter each synth instrument on the drum rack to trigger on, say, a degree of a minor or major scale; in my case I worked with triggers on something simple like D3, E3, F#3, and G#3.
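For reference, those trigger notes map to MIDI note numbers as follows. This little helper is mine, using the convention Ableton's labels follow, where C3 is MIDI note 60:

```python
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def note_number(name, octave):
    """MIDI note number, using the labeling convention where C3 = 60
    (as in Ableton's editors)."""
    return NOTE_NAMES.index(name) + (octave + 2) * 12

# The triggers used above:
print([note_number(n, 3) for n in ['D', 'E', 'F#', 'G#']])
# -> [62, 64, 66, 68]
```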
One could load the same instrument repeatedly into the drum rack and procedurally extend this across a much larger range of notes, opening up many variations of melody patterns on the rack itself. You might ask what the difference is between this and loading the instrument into individual MIDI tracks and recording each pattern separately, and generally you could do that. But this method provides a different, intuitive, hands-on approach to working across a spectrum of synthesizers for a given melody, interval, or chorded expression, or at least an avenue I wanted to explore here. Secondly, as I've done in the past for a quick hands-on approach to rhythm variations, I've turned to the Mono Sequencer (a Max for Live MIDI device), a nice tool. For the equivalent of triplets, doublets, or any higher-order rapidly timed notes, the repeat lane on the pattern grid is useful. In one particular song/session, I repeated the same rhythm clip from scene to scene but had each scene trigger a specific pattern on the Mono Sequencer, or used the clip envelope to switch the sequencer's pattern at specific time intervals on the quantization grid, extending the variations beyond those on the sequencer's own pattern array. Much of what I produced here came through experimentation, simply finding the rhythms while making adjustments at both the micro and macro level: anything from changing step-sequence values, say from 1/16 to 1/8 or perhaps even lower, to adjusting the envelope filters on the synths themselves or adding Note Length MIDI effects to adjust the gate length. In most cases I was looking for a medium between glitch and blur.
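The repeat-lane idea (ratcheting one step into doublets or triplets) can be sketched like so. The grid representation here is my own, not the Mono Sequencer's actual data model:

```python
def expand_steps(pattern, step_beats=0.25):
    """Expand a step grid into (onset_beat, pitch) events. Each step is
    None (a rest) or (pitch, repeats); repeats > 1 subdivides the step
    evenly, giving doublet/triplet-style ratchets within one step."""
    events = []
    for i, step in enumerate(pattern):
        if step is None:
            continue
        pitch, repeats = step
        for r in range(repeats):
            events.append((i * step_beats + r * step_beats / repeats, pitch))
    return events

# Four 1/16 steps; the third step is a triplet ratchet:
for onset, pitch in expand_steps([(62, 1), None, (64, 3), (66, 1)]):
    print(round(onset, 3), pitch)
```

A single lane toggle thus stands in for switching the whole grid between 1/16 and 1/32 timing, which is what makes it quick in live use.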
Sometimes I've found it intuitive to start with simplified basic instrument sounds (as with the Analog and Collision synths) and expand from there, refining toward the sound I'm after. If you're interested in sampling, I'd offer that there are several ways to record looped samples. For melody patterns that aren't primarily textural, recording the MIDI pattern (when working with an actual Ableton instrument) has worked best for me. In this case, I've found two particularly easy paths to recording the sample. One uses the bar loop recorder audio effect; alternately, toggle View > In/Out on a particular audio channel so it receives audio from, say, the Master rather than directly from the ASIO sound card source, which lets a played synth sound be recorded to an audio track (useful where the specific interval isn't easily quantifiable as 1, 2, 3, 4 bars or some easy quantized subdivision). Another avenue of experimentation with the sounds themselves, especially for texturing, is changing the warp rate of a sample while switching its warp mode among Complex, Beats, Texture, and so forth. On top of that, experimenting with a sample's clip envelopes can give interesting pitch modulation, which on the quantization grid can sometimes add interesting step-sequenced pitch shifting; try this with a vocal sample to hear the effect (as I've heard at times in electronic, dance, and various other genres). Another feature I've liked in the past is auto filtering on individual instruments, producing the effect of softening or breaking the sound, which creates breaks in the song's playback and adds dynamism to the play of the song.
Lately I've mostly relegated myself to writing what I'd consider snapshots of songs (short stories, in terms of length). The nice thing about this approach is that you can potentially expand on a piece at some other time if desired, and it keeps ambition from extending into the frustration of writer's block. In other words, I've found that working out the simplest construction of an idea and then extending it in series can be of aid. This is no different from starting with a simple set of logical premises and branching ideas from there: if you can write the short story, it can lead to the novella, which in turn can lead to the novel (consider, for instance, the writers who have strung together short stories into a tapestry representing a novel), or at least it's a start toward gaining some experience in writing.
5/9
Moving slower on everything right now, mostly working in MIDI, but I also found some other interesting features in Ableton. Namely: if you want generic rhythmic volume modulation, instead of drawing clip envelopes by hand or resorting to templates in generic beat-signature lengths (1 bar, 1/2, 1/4, 1/8, and so forth), you can select 'beat' as a filter type in the audio sample view and choose one of the signatures mentioned above; otherwise custom envelopes have to be drawn by hand. If you want zero crossfading from one beat to the next, set the crossfade to 0, which produces a square/step-function amplitude from note to note on the sample, as opposed to the more sine-like amplitude when crossfading is set to max. I've also been playing more with self-generated samples (using Ableton's own instrument synthesis), then warp-transforming them to produce variations on the sound that would be much harder to replicate through synthesis outside of the Sampler instrument. Another favorite of mine is building sounds texturally, especially on certain types of repetitive keys, by adding the Insta mix (beat looper) package from the audio effects and coupling it with a regular 1-bar looper. I usually like to MIDI-route this to a foot controller so I can use the Insta mix looper like a sustain pedal; to the ear it can seem like a timed delay, but it gives much more versatility and live control than hand-adding something like ping-pong delays or other echo effects without well-defined controller access. The 1-bar or multi-bar looper effect is nice to have handy since you don't need to arm a track and reroute it for audio sample recording, which could disrupt a sound texture you've just found. Once recorded, you can drag samples from the multi-bar loop recorder to any open audio slot in Session View, or into a Sampler/Simpler instrument.
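The crossfade setting can be pictured as blending two gain shapes over each beat. This is only my guess at the two extremes (a hard step versus a raised cosine), meant to illustrate the 0-to-max behavior described above, not a spec of Ableton's 'beat' envelope:

```python
import math

def beat_gain(phase, crossfade):
    """Gain over one beat cycle, phase in [0, 1).
    crossfade = 0 -> hard square/step gating (on for the first half
    of the beat, silent for the second); crossfade = 1 -> a smooth
    raised-cosine fade. Intermediate values blend the two shapes."""
    square = 1.0 if phase < 0.5 else 0.0
    smooth = 0.5 * (1 + math.cos(2 * math.pi * phase))
    return (1 - crossfade) * square + crossfade * smooth

# Hard gating drops instantly at the half-beat; the smooth curve
# is already halfway down a quarter of the way through:
print(beat_gain(0.499, 0.0), beat_gain(0.5, 0.0))
print(round(beat_gain(0.25, 1.0), 3))
```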
It's easy with an instrument to build many different types of sound textures, which is nice especially if you're working along the lines of sound symmetry and congruity.
5/10
Timing
I've set about deliberately writing some things that don't conform much at all to a tempo, and others that do, mindful that most of the audio files I've worked with from sample banks are tempo-conformed in the flat sense of the word.
Thus when recording with instruments, I've found it especially useful to establish the tempo/metronome and time signature beforehand. This prevents trying to warp-squeeze an audio/MIDI file into the conformed quantization space afterward, or creating extra alignment work, if that matters to your engineering. Generally, back when I worked with analog machines (8-track at the time), owing to track limitations I'd find myself recording drums to a maximum of six tracks plus one guitar scratch track, leaving an empty master bounce-down track open for the mixdown. Why did I do this? The guitar scratch was there purely for timing cues. Worth noting: playing drums back then, I hadn't worked much with metronomes, and the feedback between drummer and musicians inevitably led to tempo changes during performance; the song might be played faster and faster as time wore on, or the contrary, depending on cadences. With repeating rhythms and no cues, we the sound engineers found that drum rhythms alone weren't enough to determine the song structure, so the guitar scratch track became an important cue. Of course, you could use a similar method in modern DAW sound engineering. Establishing and maintaining cue control of timing is important, or at least it can make for much less work on your project with decent organization.
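The arithmetic behind recording on tempo is simple but worth keeping handy, since it's what makes clip lengths computable later:

```python
def bar_seconds(bpm, beats_per_bar=4):
    """Length of one bar in seconds at a given tempo (4/4 by default)."""
    return beats_per_bar * 60.0 / bpm

# At 120 bpm a 4/4 bar is 2 s; at the 60-80 bpm range used earlier,
# a bar runs 3-4 s:
print(bar_seconds(120), bar_seconds(60), bar_seconds(80))
# -> 2.0 4.0 3.0
```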
In Ableton, this is equally true once you start working a song's structure in component parts rather than as a recorded entirety. For example, record a song in one single take: assuming you (or your musician) got a good take, this could be an optimal way to record a track. Now try recording the song in component parts, that is, subdividing the recording into sessions.
You might have more difficulty chaining the timing between the component parts here, whether in Session or Arrangement View. Especially when working with these parts as 'clips', you may find that the span from when you hit record to when the clip ended includes silent space that never needed attention in the single-take case. Without adequate timing you might notice another problem popping up: the clip isn't well conformed to the quantization grid. This is a larger-scale problem, since you'd be reduced to selecting 'narrowest' on the quantization grid to approximate a given loop interval as closely as you could. You'd have spared yourself all of this by recording on tempo in Ableton in the first place: recording on tempo keeps the pattern conformed to the quantization grid, you can easily drag warp markers (or start and stop markers) to the grid points corresponding to the clip's start and stop, and you can easily compute the clip's length from the time signature for chain-automating your session clips. If you want to chain-automate an entire scene of clips without doing this per clip, right-click on the scene itself and choose the 'select all clips' option; a small dialog box pops up below the main session grid in the lower-left half of Session View. There you can enter a time value for how long the scene should play, choose 'Next' as the follow action (which cues the next scene to play), and ensure the trigger is assigned at 100% probability rather than randomly. While you may be able to manage triggering a few clips by hand, I've found hand-triggering larger scenes to advance to the next one problematic. Why?
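Computing that scene time from a clip length is a small base conversion, since the follow-action field takes bars, beats, and sixteenths. A sketch of the conversion (assuming 4/4; the helper function is my own):

```python
def follow_action_time(clip_beats, beats_per_bar=4):
    """Express a clip length in beats as the (bars, beats, sixteenths)
    triple a follow-action time field expects (4/4 by default)."""
    total_sixteenths = round(clip_beats * 4)
    bars, rem = divmod(total_sixteenths, beats_per_bar * 4)
    beats, sixteenths = divmod(rem, 4)
    return bars, beats, sixteenths

# An 18.5-beat clip = 4 bars + 2 beats + 2 sixteenths:
print(follow_action_time(18.5))
# -> (4, 2, 2)
```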
Even if the timing differentials between clips are small, as long as there's some variance, the problem is that hand-triggered scene cues are honored on a first-come, first-served basis. So if you intend to cue a scene for play off one clip, another clip might interfere with the cue timing because it's first in line without your knowing, or you simply may not have enough reaction time to cue the desired clip for the whole scene-automation process. Mind, you could alternately use Arrangement View for this same process, but Session View also provides in some ways a nice live, hands-on approach to mixing.
4/14/13
I started on the premise of working with a dominant moog synth sound. Was looking for a gritty/square dominant sound. This time I didn't feel like searching ableton synth banks, but I wanted to explore the Kurzweil V.A.S.T. sound architecture libraries to figure out if I could model the sound then sample import this into Ableton. As it turns out I started with an existing kurzweil sound then modified this (using square oscillators coupled with a couple of keymap filter changes). I sampled this merely arming the desired track for audio input, and collected a few second sample (really all that were needed for sample). In this case, I used sampler instrument, and dragged the desired audio sample in the sample instruments sound bank. I then did a number of things:
Setting the sample for repeat on Loop, and then using a repeat on filter. Then creating for an added oscillating effect different start points (one for starting the loop and then a start point for the loops repetition). I then added yet another loop repeat on the envelope filter itself...in this case setting for time in milliseconds (as opposed to bpm measure time). Generally I like the action of the moog to have just a slight sustain (usually no more then 1/2 a second), but anything less can provide a choppy feeling on key play if you haven't trained yourself on sustain holding the notes or hadn't felt compelled to run with the sustain pedal rapidly on a given cadence. Also may be important to reference is using a sound sample gathered outside key of C to make note of this and accordingly set this key in transposition on the sampler instrument (especially if you are using a sample that contains more then just single notes and outside of say the key of C Major or whatever).
Did add an auxilliary oscillator likewise...experimented a little with volume modulation (using a sine wave), but here for the model that I were desiring to achieve it would seem adding this sort of modulation is nearly like adding an arpeggio when it becomes predominant in effect. Thought about 'aging' the sound by giving a slight pitch warble which could also be accomplished by using the same lfo filter setting its amount relative low in points (something like 1-10 out of 100 in scale maybe) and then setting the lfo frequency low enough so that the oscillation is scaling around 150-250ms (?). Generally I wanted the sound to be highly responsive in play, meaning little latency to sound from the time sound is played, really in the attack range of a piano for a classic synth lead, but I wanted this synth meaty sounding...I've found for this type of buzzing sound 2nd to 3rd octaves were generally the hottest range where the lowest became quite in audible for the given sample. The advantage of sampling here is that the process of running the sample I believe runs the sample at different frequencies at different rates (default setttings) both in terms of frequency and wavelength time signatures. The advantage of this is that what sounds clean and less buzzy in high frequency ranges, tends to become buzzier as one goes into lower frequency ranges for a given sample which provided to be an added advantage relative to the less gritty cleaner sound that the kurzweil produced in the same octave range. Experimented with both low and high pass filters, but generally felt the rawer sample sound more dominant, and that filtering was subtractive to the sound. So after playing with the sound to a given rhythm. I then had a few more things to work with.
The rhythm left with me with the desire for producing something fairly mechanical as they were especially at the given tempo (around 60-80 bpm) or at least the feeling here were that I didn't want to much of a humanized expression on play. I did have an added consideration here. I wanted the effect to be like an arpeggio on the synth lead but I didn't want to work arpeggiators which tend to note time length restrict or at least switching between multiple timings here on a lead could require more work toggling between time signatures (say 1/8 to 1/16 then going to 1/4 and so forth), so instead another solution went as follows. The lead itself would entirely be unison (easy enough to toggle on the voice allotment of the sampler instrument). Next I wanted to make sure (for the mechanical feeling) having notes played at a consistent velocity, so I added a velocity midi effect on the sampler instrument which would ensure each note were played the same in terms of velocity maps. Next I added a midi note length effect. This providing a couple of nice things...one ensuring that a sustain on each note is automatically generated without having to adjust the envelopes (you can do this either in terms of millisecond timing or in measure timing so that you are playing each time precisely say a 1/8, 1/4, 1/2, or whole note nothing less and nothing more). The advantage of all these things tends to produce a very quantized feel but without the restriction of being exactly quantized on the 'piano roll' of the given clip.
The nice aspect of running unison on the synth lead is that despite having fixed midi note lengths, anything quantized less then the note length is subject to time signature truncation. Meaning that you could produce any note length less then the ascribed midi note length as long as you have unison set on the sampler instrument (note: this could apply equally in the case of 2 or more voices where a series of notes exceeded the voice length for the given midi note length set, but this has a smearing interval/chording sustain effect which could produce a less dominant synth lead also if one weren't careful in writing or producing). So in essence I could restrict to a form of quantization here. Fixing the note length would ensure that notes sound like they were played with quantized precision because the sustain on the notes timed less then the sustain of the overall note length were such to end at any time were the beginning of the next note would occur in pattern...this is to say a lead that could be produced that were cleanly unison note patterns where one note would precisely in to the sound of the next note with no varied empty sound between notes. Technically though, one hadn't need lead with precise timing precision here, or in other words, I didn't and didn't want to quantize the lead on the 'piano roll' since the other gap would fill the other aesthetics problem. In this way I knew the the melody pattern that I wanted to write, but I also wanted to ensure that the lead would sound dominant in mind especially relative to other synth leads that I could recall in mind that I were attempting to model in some manner. If you think about natural acoustic instruments, you'd find any number (and even more percussive ones may have natural acoustic resonances) have natural sustains. Any sustain itself leaves a resonance often times between one note to the next, however, precise the person were playing the instrument. 
Often times, however, in electronic instruments, nothing of natural acoustic reverberation is built into (outside of electronic piano/tines like the Rhodes or other instruments like this I'd guess...and technically the rhodes is basically what the electric guitar is to a guitar...sustain and reverberation built captured through the electromagnetic pickups are likely through the mass of the body of the instrument and less relating to its acoustic properties ??), unfortunately without a reverb placed on the instruments, it seems that some one playing these instruments might feel a "choppy" time discordant effect. Using some of the steps above may alleviate this feeling or you could experiment, as I have done adding slight sustain on envelope filters coupling this with a little reverb and a little delay.
Other aspects were working with similar but variations of synth leads sounds here for the project. At this point may let stuff sit, not sure exactly what I want to do next, since the lead itself is great for part of the song itself, but I wanted this as a part and not an incessant refrain.
4/20
Sort of running synopsis of past week. Biggest highlights were working with creating a custom drum rack. Part of this involved constructing/finding synth sounds that I felt could be used in the context of suiting a percussive melody...this bit of inspiration probably comes most from Drum and Bass genres...here notably I could think of highlights here in the context of lower ocatve tones frequenting in the 1 second gate length range (or thereabouts), but I've found that these could be used in versatile fashion in many different octave ranges (here I'd chosen something like the 3rd octave)...although in some instancing the actual synth sounds were toned perhaps in the second octave range. I chose to experiment one (having a lot less experience) with the Collision instrument in ableton live, but instead of choosing a pre set sound customizing from the generic start which incidentally sets at a gate length sound at around 100 ms(?)...I extended this and then pitched the tone to the desired octave range indicated above. Procedurally I worked the same from basic synth sounds using the Analog instrument as well. In some cases simplifying the harmonics of the instrument working with 1 oscillator instead of two, toggling unison (for added sound clarity and prominence)...I've found for 'softness' in the sound Sine oscillators tend to be less harsh relative to square and saw waveforms, while these provide nice edging and sharpness in other cases. Next, on the drum rack for the inserted sound. while the default sound produced are all keyed to C3, creating simple melody type percussive patterns could be done simply varying the played note (note: not the note trigger on the controller)...in this case, simply altering for each the synth instrument on the drum rack to trigger on say a minor or major scale...in my case, I worked with triggers using something simple like D3, E3, F#3, and G#3. 
One could repeat load this into the drum rack and procedurally extend this to much range of added note scaling opening possibilities for many variations of melody patterns thus on the drum rack...you might ask yourself what's the difference between loading the same instrument individually into midi tracks and recording individually the pattern themselves, and generally you could do this albeit. This method provides a different intuitive hands on approach to working across a spectrum of synthesizers for a given melody/interval/chorded expressions at least an avenue that I were wanting to explore here. Secondly. as I've done in the past to give rise to quick hands on approach to rhythm variations here, I've resorted to the monosequencer (max live midi instrument) a nice tool. I'd mention for the equivalent of triplets, doublets, or any higher order rapidly timed notes, the repeat on the pattern grid useful. In one particular song/session instance, repeating the same rhythm session clip from scene to scene but instead scene recording to a specific pattern on the mono sequence, or in the envelope filter adjusting the monosequencers clip pattern at specific time intervals on the quantization grid...thus extending further pattern variations even found on pattern array of the monosequencer itself. Much produced here I'd offer were generally through experimentation simply finding the rhythms while making adjustments in the micro and macroscopic sense which could have included anything from changing step sequence values say from 1/16 to 1/8, or perhaps even lower. Likewise, while adjusting either the envelope filters on the synths themselves or adding midi note length instruments on the synth sound to adjust the note gate length...in most cases I looking for a medium between glitch and blur. 
Sometimes I've found it intuitive to work from simplified basic instrument sounds (as with the Analog and Collision synths) and then expand from there, refining toward the sound I was looking for. If you are interested in sound sampling, I'd offer that there are several ways of recording looped samples. For experimenting with melody patterns that aren't so much textural in orientation, recording the MIDI pattern (if working with an actual Ableton instrument) has worked best for me. Beyond that, I've found two particularly easy paths to recording a sample. One involves a looper device on the audio effect chain; alternately, toggle View > In/Out on a particular audio channel so that it is set to receive audio from, say, the Master rather than directly from the ASIO sound card source, enabling the played synth sound to be recorded to an audio track (useful where specific time intervals aren't easily quantifiable as 1, 2, 3, 4 bars or some easy quantized subdivision). The other side of experimentation with the sounds themselves, especially in sound texturing, involves changing the warp rate of the sound sample and switching the warp mode to any number of things: Complex, Beats, Texture, and so forth. Add to this experimenting with the sample's envelope filters: pitch modulation on the quantization grid can sometimes add interesting step-sequenced pitch shifting...try this with a vocal sample, for instance, to hear the effect (as I've heard at times in electronic, dance, and various other genres). Another feature I've liked in the past is auto filtering on individual instruments themselves, producing the effect of softening or breaking the sound, which leads to breaks in the play of the song itself and adds dynamism.
I've relegated myself mostly of late to writing what I would consider snapshots of songs (or short stories) in terms of length. The nice thing about this approach is that one could potentially expand upon a piece at some later time if desired, while avoiding the kind of ambition that ends in writer's block. In other words, I've found that working from the simplest construction of an idea, and then extending it in series, can be an aid. This is no different from starting from a simple set of logical premises and branching ideas from that point: if you can write the short story, it could lead to the novella, which in turn could lead to the novel (consider, for instance, writers who have strung together short stories into a tapestry representing a novel), or at least a start toward gaining some experience in writing.
5/9
Moving slower on everything right now, mostly working MIDI, but I also found some other interesting features in Ableton. Namely: if you want generic rhythmic volume modulation, instead of drawing in envelopes by hand or resorting to templates in the generic beat-signature sense (1 bar, 1/2, 1/4, 1/8, and so forth), you can select 'beat' as the modulation type in the audio sample's view. Here you can select one of the signatures mentioned above; otherwise, custom envelopes would be drawn by hand. If you want zero crossfading from one beat to the next, set the fade to 0, which produces a square/step-function amplitude from note to note on the sample, as opposed to the more sine-based amplitude you get when crossfading is set to max. I've been playing more with self-generated samples (using Ableton instrument synthesis) and then warp-transforming these to produce variations on the sound that would be much harder to replicate through synthesis outside of the Sampler instrument. Another favorite of mine is to texturally build sounds, especially on certain types of repetitive keys, by adding a beat looper from the audio effects (the Insta mix package in my case), coupled with a regular 1-bar looper. Usually I like to MIDI-route to a foot controller so I can use the beat looper like a sustain pedal; to the ear this can seem like a timed delay, but it gives much more versatility and live control relative to hand-adding something like ping-pong delays or any sort of echo effect outside of well-defined controller access. The 1-bar or multi-bar looper effect is nice to have handy, since you needn't arm a track and reroute for audio sample recording, which could disrupt a sound-texture find. Once recorded, you can drag samples from the multi-bar loop recorder to any open audio slot in Session View, or to a Sampler/Simpler instrument.
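To illustrate the square-versus-crossfaded amplitude idea above, here is a minimal numeric sketch of a per-beat gain curve with an adjustable crossfade fraction. This is an illustrative model only, not Ableton's actual interpolation; `beat_envelope` and its raised-cosine fade are my own assumptions:

```python
import math

def beat_envelope(samples_per_beat: int, crossfade: float):
    """Gain curve for one beat. crossfade=0.0 gives a hard square step
    (instant jump at the beat boundary); crossfade=1.0 ramps over the
    whole beat, approximating the smoother sine-like shape described
    above. Illustrative model only."""
    fade = int(samples_per_beat * crossfade) if crossfade > 0 else 0
    env = []
    for i in range(samples_per_beat):
        if i < samples_per_beat - fade:
            env.append(1.0)  # hold full gain
        else:
            # raised-cosine fade toward the next beat
            t = (i - (samples_per_beat - fade)) / fade
            env.append(0.5 * (1 + math.cos(math.pi * t)))
    return env

print(beat_envelope(8, 0.0))  # flat: pure step from beat to beat
print(beat_envelope(8, 1.0))  # smooth descent across the whole beat
```

The point is only that the crossfade amount morphs the shape between a step function and a smooth curve, which is what you hear as choppy versus blended beat modulation.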
It's easy with an instrument to build many different types of sound textures, which is nice especially if you are working along the lines of sound symmetry and congruity.
5/10
Timing
I've set about deliberately writing some things that weren't conformed to tempo much at all, and other things that were. Worth being mindful that when I've worked within banks of audio files, I've found most to be tempo-conformed in the flat sense of the word.
Thus when recording with instruments, I've found it especially useful to use the tempo/metronome to establish time signatures beforehand. This prevents trying to warp-squeeze an audio/MIDI file into the conformed quantization space, or creating added work in aligning things, if that would be considered a problem for engineering. Generally, in the past when I worked with analog machines (8-track back then), owing to track limitations I'd find myself recording to the max: six tracks for drums and one guitar scratch track, leaving an empty master bounce-down track open for the mixdown. Why did I do this? The guitar scratch was there merely for time-cue purposes. Worth noting: when I played drums back then, I hadn't worked much with metronomes, and the inclinations between drummer and musicians were such that human feedback would inevitably lead to tempo changes during performance; in other words, the song might be played faster and faster as time wore on, or the contrary, depending on cadences. Unfortunately, with repeating rhythms and no cues at such times, we the sound engineers found that determining song structure from drum rhythms alone wasn't enough, so the guitar scratch track became an important cue factor here. Of course, you could use a similar method in modern DAW sound engineering. Establishing and maintaining cue control of timing is important, or at least it can make for much less work on your project with decent enough organization.
In Ableton, this is equally true if you start to work the structure of a song in component parts as opposed to recording it in its entirety. For example, record a song in one single take: assuming you (or your musician) did a good take, this could be an optimal way of recording a track. But now try recording the song in component parts, that is, subdividing the recording into sessions.
You might have more difficulty chaining the timing between component parts here, whether in Session or Arrangement View?! And especially when working with these parts as 'Clips', you might find that the span from when you pushed the record button to when the clip ended includes silent space that never needed attending to in the single-take instance. Also, without adequate timing, you might notice another problem popping up: the clip isn't well conformed to the quantization grid! This seems a larger-scale problem, since you might select 'narrowest' on the quantization grid to refine an approximate search, optimizing as close as you can to a given loop interval. Or you could spare yourself all of this by recording on tempo in Ableton in the first place: recording on tempo keeps the pattern conformed to the quantization grid, you can easily drag warp markers (or start and stop markers) to the grid points corresponding to the clip's start and stop, and you can more easily compute the clip's length from the time signature for chain-automating your session clips! If you want to chain-automate entire scenes of clips without doing this for each clip, right-click on the scene itself and choose the 'select all clips' option; a small dialog box pops up below the main session grid in the lower-left half of Session View. There you can enter a time value for how long the scene is to play, choose 'Next' as the trigger type (which cues the next scene to be played), and ensure that this trigger is not randomly assigned but set to 100% probability. While you may be able to manage triggering a few clips by hand, I've found that triggering more clips in a scene to automate to the next scene can be problematic here...why?!
If the timing differentials between clips vary less and less, but still have some variance, the problem is that hand-triggered scene cues are assigned on a first-come, first-served basis. So if you had in mind cueing a scene for play off one clip, another might interfere with the cue timing if it is first in line and you don't know it, or you simply might not have enough reaction time to cue the desired clip for the entire scene-automation process. Mind you, you could also use Arrangement View for this same process, but Session View provides, in some ways, a nice live hands-on approach to mixing as well.
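The clip-length computation mentioned above is straightforward arithmetic once everything is recorded on tempo: bars times beats-per-bar gives beats, and beats times 60/BPM gives seconds. A small sketch (the function name and defaults are my own, assuming a 4/4, 120 BPM session):

```python
# Sketch of the clip-length arithmetic used when setting a scene's
# play time for chained triggering: duration follows directly from
# bars, time signature, and tempo.
def clip_seconds(bars: int, beats_per_bar: int = 4, bpm: float = 120.0) -> float:
    beats = bars * beats_per_bar
    return beats * 60.0 / bpm

print(clip_seconds(4))           # 4 bars of 4/4 at 120 BPM -> 8.0 seconds
print(clip_seconds(2, 3, 90.0))  # 2 bars of 3/4 at 90 BPM  -> 4.0 seconds
```

Knowing the duration in advance is what makes entering a scene play time for the 'Next' trigger reliable, instead of guessing from the clip's recorded length.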