I have something of a feature request… I need some kind of sample-start parameter within the Event Editor so that I can modulate sounds in relation to each other within an event – and make these available for the programmer to modulate. Just tweaking the playback of the event won't do, as I can't do that while designing the event.
Also, when making these (or other) properties of the event available to the programmer, I would like to be able to put constraints on how much he is allowed to modulate them – so that I decide what sounds good and what doesn't.
Check the modulation section of Native Instruments Battery for an example of a good implementation of constraints on control modulation.
http://www.native-instruments.com/index … odulate_us
Ah, I get it now, that's pretty cool. I've put it on our list but can't make any promises on when it'll be implemented. We will be overhauling the min/max respawn properties of sound definitions at some point and may very well replace/expand them with something like you're describing – more of a generalised automation model, I guess.
Hi, two thoughts on this, both relating to one-shot sounds in sequence:
Sample start could easily be implemented by adding a new effect in the Event Editor – Offset – which you could just control +/- from a parameter strip
Making velocity parameter-controllable as well would make this even more flexible – allowing different playback speeds for further control over the sounds in the event relative to each other
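To make the idea concrete, here is a minimal sketch of how such an Offset effect might map a parameter-strip value to a sample-start offset. This is purely illustrative – the function name, the 0..1 parameter range, and the offset bounds are all assumptions, not part of any existing Event Editor API.

```python
def offset_from_parameter(value, min_offset, max_offset):
    """Map a 0..1 parameter-strip value (hypothetical range) to a
    sample-start offset in samples, clamped to the allowed bounds."""
    value = max(0.0, min(1.0, value))  # keep the parameter inside 0..1
    return int(round(min_offset + value * (max_offset - min_offset)))
```

A velocity control could be expressed the same way, mapping the parameter to a playback-rate multiplier instead of a sample offset.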
Sorry, not following you here, Magnus. You want to specify a sample offset that a sound will start playing at? (rather than playing from the start) Can you give me an example of what you're trying to create? (it's been a long day 😕 )
Sorry about that, I'm not that good at explaining…
but I'll try by giving you an example:
I make three layers in the Event Editor. Across them I place three fragments of a sound – in this case, say, a footstep indoors on concrete.
On layer one I put the sound of the heel
On layer two I put the sound of the foot
On layer three I put the reflecting sound from the room
I position these relative to each other so that I get a replica of how the sound was before I broke it into fragments.
Now, by moving these fragments in relation to each other (modulating the start positions of the samples in the event) and modulating their volumes, I can simulate different walking speeds and positions in the room. This is what I, as the sound designer, would like to be able to set up and expose to the sound programmer, according to a set of rules I decide.
So, the higher the walking speed, the closer the heel will be to the foot in the event – but within constraints I have set on this parameter (the heel sample can only move between these positions…)
Also, the heel will be louder at higher speeds, so volume has to be modulated as well – with the limits it can move between decided by me as I preview the event in the Designer.
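The heel example above could be sketched roughly like this: walking speed drives both the heel's start offset (closer to the foot at higher speed) and its volume, with every value clamped between designer-chosen bounds. All the numbers and names here are made up for illustration; the point is only that the programmer drives one parameter while the designer owns the ranges.

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def heel_modulation(speed,
                    speed_range=(0.5, 3.0),    # m/s, designer-chosen (assumed)
                    offset_range=(2000, 200),  # samples: heel trails less at speed
                    volume_range=(0.6, 1.0)):  # louder at higher speeds
    """Map walking speed to (heel start offset, heel volume), both
    constrained to the ranges the sound designer has set."""
    lo, hi = speed_range
    t = clamp((speed - lo) / (hi - lo), 0.0, 1.0)  # normalise speed to 0..1
    offset = offset_range[0] + t * (offset_range[1] - offset_range[0])
    volume = volume_range[0] + t * (volume_range[1] - volume_range[0])
    return int(round(offset)), volume
```

Because the clamp happens inside the mapping, the programmer can pass any speed value and the result still stays inside what the designer decided sounds good.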
This also calls for some way of previewing the event between min and max values… it can be done by hand, but it would be nice if you could just push a button 😉
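Such a "push a button" preview could be as simple as sweeping the parameter from min to max and retriggering the event at each step. A tiny sketch, with `set_parameter` and `play_event` standing in for whatever the tool actually exposes (both hypothetical):

```python
def preview_sweep(steps, set_parameter, play_event):
    """Audition an event while sweeping its parameter from 0.0 (min)
    to 1.0 (max) in equal steps."""
    for i in range(steps):
        t = i / (steps - 1) if steps > 1 else 0.0
        set_parameter(t)  # e.g. walking speed, normalised
        play_event()      # retrigger the one-shot at this setting
```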
Well, hope I made some sense…