No, this is not possible. Why don’t you use multiple sound definitions in your event and switch between them with a parameter?
Alternatively, you may want to look into using programmer sounds, discussed in this thread: http://220.127.116.11/forum/viewtopic.php … ght=#31235
About one year ago I requested a feature for selecting indexed waves as playback candidates of a sound definition through event user parameters.
http://18.104.22.168/forum/viewtopic.php … ght=#26010
There was an initial positive response, but it never made it in.
I still would use it today if I had it. Please consider adding this support soon.
We have had another look at this, but it’s not really a good fit for the way events and sound definitions interact, so we won’t be adding it, sorry.
You should be able to achieve the things you talk about in that thread using multiple sound definitions with different sets of waves in them, for example:
Sound def 1 goes from 0.5 to 1.5, and has waves 1, 2, 4, 5
Sound def 2 goes from 1.5 to 2.5, and has waves 2, 3, 5
Sound def 3 goes from 2.5 to 3.5, and has waves 4, 5, 6
You should then be able to switch between the groups of waves by setting the parameter to 1, 2 or 3.
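As a rough illustration of that workaround, here is a small sketch (not real middleware API code; the ranges and wave indices simply mirror the example above): each sound definition owns a parameter range and a set of wave indices, and the current parameter value decides which group a wave can be picked from.

```python
import random

# Illustrative only: each "sound def" covers a parameter range and owns
# its own wave indices, matching the three groups listed above.
SOUND_DEFS = [
    {"range": (0.5, 1.5), "waves": [1, 2, 4, 5]},  # sound def 1
    {"range": (1.5, 2.5), "waves": [2, 3, 5]},     # sound def 2
    {"range": (2.5, 3.5), "waves": [4, 5, 6]},     # sound def 3
]

def pick_wave(parameter_value):
    """Pick a random wave from the sound def whose range contains the value."""
    for sound_def in SOUND_DEFS:
        lo, hi = sound_def["range"]
        if lo <= parameter_value < hi:
            return random.choice(sound_def["waves"])
    return None  # parameter outside every range

# Setting the parameter to 1, 2 or 3 selects group 1, 2 or 3.
```

So a parameter value of 2.0, for example, can only ever play waves 2, 3 or 5.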
Yes, however I am already using the layer’s control parameter for different speeds of motion on a material: the typical slow, medium, and fast sound definitions blended with crossfades and autopitch.
Now I need those sound defs to change their waves based on a material, but I can’t do it. Only having one dimension of switching is limiting in cases like this: once you have used up the layer control parameter, that’s it, no more switching for those sounds. The event system is about making complex sounds easy, but this is a big blocker for wave-selection flexibility. Note that my request calls for multiple parameters being mapped, so parameters A, B, and C could change independently, with each combination yielding a different set of wave candidates from a single sound def; plus you’d still have the layer control parameter free to do what it is best at.
So for now in my projects every different material has to be a whole new event, and the additional overhead is really adding up. Writing out a few ints on the items in a sound def would surely be more memory efficient than creating a whole new event for each material.
I have had this kind of tech in past audio systems, and it didn’t seem so hard to implement. When a sound def goes to play, there is a step that picks waves for random or sequential playback, yes? This feature would simply use the mapped ints and the current parameter values to filter the candidates right before that step.
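The filtering step being requested could be sketched like this (a hypothetical illustration, not middleware code: the parameter names "A"/"B" and the wave table are invented). Each wave in a sound def carries one mapped int per user parameter, and at play time the candidate list is filtered against the current parameter values before the normal random/sequential pick happens.

```python
import random

# Hypothetical wave table: each wave carries mapped ints, one per
# user parameter ("A" might mean material, "B" a variant bank, etc.).
WAVES = [
    {"name": "impact_wood_1",  "A": 0, "B": 0},
    {"name": "impact_wood_2",  "A": 0, "B": 1},
    {"name": "impact_metal_1", "A": 1, "B": 0},
    {"name": "impact_metal_2", "A": 1, "B": 1},
]

def pick_wave(params):
    """Keep only waves whose mapped ints match the current parameter
    values, then fall through to the usual random-selection step."""
    candidates = [w for w in WAVES
                  if all(w[name] == value for name, value in params.items())]
    return random.choice(candidates) if candidates else None
```

Because each parameter filters independently, one sound def could serve every material; parameters not supplied simply don’t constrain the pick, leaving the layer control parameter free for its usual job.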