I would like to have an announcer voice sound like it is echoing around a stadium…but with the echoes moving according to the camera position.
What I have done at the moment is trigger the same sound 4 times in 4 different locations, with a 1-second delay between each trigger. This seems really inefficient to me, and I was wondering if there was a way to do this via layers and effects within the event. Also, I have the voice set to streamed at the moment, so I'm guessing it's creating 4 streams to do this.
- nozehed asked 8 years ago
That's a good idea; I'll have a go at doing it that way for now.
Just a suggestion: perhaps there could be a DSP solution for this as well, a sort of surround delay where each echo has an absolute 3D location. Four delay taps would be great too.
Thanks for your help
- nozehed answered 8 years ago
Your method of using multiple event instances at different locations is the easiest way to do it; I don't think you could achieve it using only event layers and the built-in FMOD effects. But you could make it more efficient quite easily. Since the announcer voice is streaming compressed data, having multiple streams all decoding the same data is wasteful.
I think the best way to achieve what you want is to store a buffer of a few seconds of sound data from the main announcer stream and play the other channels from that.
1. Create a custom DSP unit and attach it to the announcer's channel group to capture the raw wave data.
2. Store that wave data in a ring buffer whose length covers the longest delay (e.g. 3 seconds at 48 kHz => 3 * 48000 samples per channel).
3. Create a custom (user-created) sound object for each echo and feed it data from that ring buffer at its own delay offset.
This would make all the echo instances a lot more efficient.
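To make the ring-buffer step concrete, here is a minimal sketch of the data structure the steps above describe. The class and method names are illustrative, not part of the FMOD API: in practice, the capture DSP's read callback would call something like `write()` with each mixed block from the announcer's channel group, and each echo sound's PCM-read callback (e.g. via `FMOD_OPENUSER` and `pcmreadcallback` in `FMOD_CREATESOUNDEXINFO`) would call `readAt()` with its own delay. This ignores thread safety between the mixer and the reader, which a real implementation would need to handle.

```cpp
#include <cstddef>
#include <vector>

// Illustrative ring buffer for delayed echo playback (mono, float samples).
// Not FMOD API; a sketch of the buffer described in the steps above.
class DelayRingBuffer {
public:
    // capacitySamples = longest delay in samples, e.g. 3 s * 48000 Hz.
    explicit DelayRingBuffer(std::size_t capacitySamples)
        : buf(capacitySamples, 0.0f), writePos(0) {}

    // Called from the capture DSP: append one block of samples.
    void write(const float* data, std::size_t count) {
        for (std::size_t i = 0; i < count; ++i) {
            buf[writePos] = data[i];
            writePos = (writePos + 1) % buf.size();
        }
    }

    // Called from an echo voice's PCM-read callback: fetch `count`
    // samples that were written `delaySamples` ago.
    void readAt(std::size_t delaySamples, float* out, std::size_t count) const {
        std::size_t pos = (writePos + buf.size() - delaySamples) % buf.size();
        for (std::size_t i = 0; i < count; ++i) {
            out[i] = buf[pos];
            pos = (pos + 1) % buf.size();
        }
    }

private:
    std::vector<float> buf;
    std::size_t writePos;
};
```

With this, the announcer stream is decoded once; each of the four echo instances is just a cheap read from the shared buffer at a different delay, positioned wherever you like in 3D.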
- Guest answered 8 years ago