I’m interested in reducing the amount of integration work our audio programmers need to do. We typically give the programmer a list of audio events along with instructions on when and how to trigger them, but I’d like the sound designer to handle most of that instead.

I was thinking I could use the "User Properties" feature to assign an animation event name to each sound event (where possible). For example, for the sound event "MagicSpell1", I would add a User Property named "Anim_MagicSpell1", where the property name corresponds to the appropriate animation event. The code could then look at the User Property to determine when to play the audio event. With such a system in place, the programmer wouldn’t have to do nearly as much integration work.
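To make the idea concrete, here is a minimal sketch of the data-driven hookup I have in mind. The names and functions are purely illustrative (not a real middleware API): at load time the engine would read each sound event's User Property into a lookup table, and a single generic animation-event callback would consult that table instead of per-event code.

```python
# Lookup table built once at load time from the authored User Properties.
# Key: animation event name (the User Property), value: sound event name.
anim_to_sound_event = {}

def register_sound_event(sound_event, anim_event_property):
    """Record the animation-event name authored on this sound event.
    Called once per sound event while loading the event data."""
    anim_to_sound_event[anim_event_property] = sound_event

def on_animation_event(anim_event_name):
    """Generic callback fired by the animation system.
    Returns the sound event to play, or None if no mapping exists."""
    return anim_to_sound_event.get(anim_event_name)

# Example: the sound designer authored User Property "Anim_MagicSpell1"
# on the sound event "MagicSpell1"; no programmer code mentions either name.
register_sound_event("MagicSpell1", "Anim_MagicSpell1")
```

The point is that the programmer writes `on_animation_event` once; every new sound the designer hooks up this way needs no further code changes.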

Assuming I’ve explained this clearly, is there any reason it wouldn’t work? Is there a better way to accomplish the same thing?
