send to different position

I’m guessing “no,” but I wanted to see if there’s a way to have a send channel play from a different position in the game than the main signal. I want to set up multiple reverb zones and have the wet signal for any zone the sound source is not inside play from the zone itself, rather than from the position of the sound source. The goal is to maintain the position/direction of these reflections.

For example: in an FPS, if a player is in a field but approaching a cave, obviously we’d expect at some distance to hear echo from the cave when firing a weapon. This I know how to do in FMOD with snapshots. But I’d also like to make it so that if the player turns perpendicular to the cave and fires, the echo would clearly come from the side, rather than from all around.

Any way to achieve this? Thanks.

Hi Jonnie,
I just wanted to ask the same question!

I am also working on an FPS game, and to better simulate spaces I want send signals to come from coordinates different from their sources.

Would be glad to know whether this is possible to do in Studio.

I should note: one thing I have tried is bouncing both wet and dry audio files and then scripting in Unity to play the dry at the sound source and the wet at the reverb zone. This basically works, but of course, it undermines what middleware is really useful for. That is, this approach would require that I create many additional sound files, and would prevent me from using certain dynamic features like randomized pitch/volume alterations.
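In Unity, that workaround looks roughly like this (a simplified sketch of the scripted approach; the clip fields and zone transform are placeholders, not anything from an actual project):

```csharp
using UnityEngine;

// Sketch of the bounced-file workaround: play the dry clip at the weapon's
// position and the pre-rendered wet clip at the reverb zone, so the
// reflections keep their own position/direction relative to the player.
public class SplitWetDryShot : MonoBehaviour
{
    public AudioClip dryClip;   // bounced dry gunshot
    public AudioClip wetClip;   // bounced wet (reverb) version for the cave
    public Transform caveZone;  // placeholder transform marking the reverb zone

    public void Fire()
    {
        // Dry signal at the sound source (this object)...
        AudioSource.PlayClipAtPoint(dryClip, transform.position);

        // ...and the wet signal emitted from the zone itself.
        AudioSource.PlayClipAtPoint(wetClip, caveZone.position);
    }
}
```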

Yeah, it’s not really a solution. And if there are noticeably different file variations, it’ll be a mess, I think. It could only work if we could duplicate the dry signal at different coordinates (some analog of Wwise’s multi-position feature, I guess).

By the way, a crazy idea: what if we recorded the signal from one of the mixer buses (dedicated to gun sounds, for example) and played that recording back simultaneously (in real time) as a 3D emitter at the place where the positional send should be, with a reverb effect added? I am not a programmer and am only theorizing.

Or maybe add a second listener at the positional send location, take the audio perceived by that listener, and apply volume and low-pass changes on its master bus to simulate the distance between the main listener and the send position.
But then you’d have to account for signal duplication when the two listeners are close to each other. I guess that could also get too tangled :slight_smile:

We’re looking at adding a special DSP effect that is like send/return but works between buses and events, so you could send from a bus to an event that you could 3D-position. Does this sound like something you could use?

Hi Brett, yes that sounds very useful. I’ll be very interested to hear more about the feature if it is indeed developed. Thanks.

Another way some of our users have done this is by using a snapshot with automation. The basic setup is as follows:

  1. Create a special reverb bus with a second reverb effect (e.g. “CaveReverb”).
  2. Create a send from your event’s master track to “CaveReverb”.
  3. Create a snapshot (e.g. “CaveReverbPosition”). Inside the snapshot, you need to scope in the bus panner of “CaveReverb”.
  4. Go to the “Tracks” view of the snapshot (the button is in the transport bar) and add the built-in Distance and Direction parameters.
  5. Automate the panner’s Extent and Direction properties along these parameters, as required.
  6. Place the “CaveReverbPosition” snapshot in the game world, near the entrance to the cave, so that the built-in parameters can be driven by its position (a Unity-side sketch follows below).
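On the game side, giving the snapshot a 3D position might look something like this. This is a minimal sketch using the FMODUnity.RuntimeManager API from the current Unity integration (the legacy FMOD_StudioSystem wrapper exposes equivalent calls under different names), and “CaveReverbPosition” is just the example name from above:

```csharp
using UnityEngine;

// Hypothetical component placed on an empty GameObject at the cave entrance.
public class CaveReverbSnapshot : MonoBehaviour
{
    FMOD.Studio.EventInstance snapshot;

    void Start()
    {
        // Snapshots are addressed like events, under the "snapshot:/" prefix.
        snapshot = FMODUnity.RuntimeManager.CreateInstance("snapshot:/CaveReverbPosition");
        snapshot.start();
    }

    void Update()
    {
        // Give the snapshot a 3D position so its built-in Distance and
        // Direction parameters update relative to the listener.
        snapshot.set3DAttributes(FMODUnity.RuntimeUtils.To3DAttributes(transform.position));
    }

    void OnDestroy()
    {
        snapshot.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
        snapshot.release();
    }
}
```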

Follow up in response to comments: The first thing to keep in mind is that the signal going into the reverb, even when panned in surround, is folded down to mono first. This means that the reverb effect will not preserve the spatialization of the event’s audio. What this is actually doing is using the Direction and Distance parameters of the snapshot, not of the event where the sound is coming from. Just like an event, a snapshot can be given 3D coordinates, which it can use to change how it behaves based on position/angle.

When you say you hear no change, is this in the game or in the tool? In the tool, make sure you hit play on the snapshot. Then you can scrub the distance and direction parameters to test how it will sound. In game, make sure you’re giving a 3D position to the snapshot, which would be at the cave’s location in the game world.

In terms of the event setup, I would place the send to “CaveReverb” before the 3D panner, so that it isn’t attenuated by distance. You could send a static value (say 0dB), or automate it if the event may/may not originate within the cave (you’d need a custom parameter for this). Then, you could scope the fader volume of “CaveReverb” to the “CaveReverbPosition” snapshot, and automate an attenuation value for it over distance. The other thing I’d do for the event is turn its master volume down to zero for testing (assuming the send to “CaveReverb” is pre-fader). This way you can hear the reverb in isolation.
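If you do go the custom-parameter route, the game would set that parameter per shot. A rough sketch, assuming a parameter named “InsideCave” and an event path “event:/Weapons/Gunshot” (both hypothetical), and using the current integration’s setParameterByName (older integrations go through getParameter and ParameterInstance.setValue instead):

```csharp
using UnityEngine;

// Hypothetical helper: only feed the "CaveReverb" send when the shot
// originates inside the cave's trigger volume.
public class GunshotCaveSend : MonoBehaviour
{
    public Collider caveVolume; // trigger collider covering the cave interior

    public void PlayGunshot(Vector3 shotPosition)
    {
        var shot = FMODUnity.RuntimeManager.CreateInstance("event:/Weapons/Gunshot");
        shot.set3DAttributes(FMODUnity.RuntimeUtils.To3DAttributes(shotPosition));

        // Drive the custom parameter that automates the send level in Studio.
        bool inside = caveVolume != null && caveVolume.bounds.Contains(shotPosition);
        shot.setParameterByName("InsideCave", inside ? 1f : 0f);

        shot.start();
        shot.release();
    }
}
```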

I’ve attached some new screenshots with more details.

Patrick,

I’m very intrigued by this option. I’ve experimented before with using snapshots to set the amount of reverb based on the proximity of a sound source to the particular reverb zone, but not to also position the wet signal within the space. I’ve messed around with this using your instructions but there are some components I’m not understanding. Currently, I can see the pan position changing when I adjust the Direction and Distance parameters, but I hear no change in the sound. Questions:

  1. The built-in Distance and Direction parameters, according to the manual, depend on the relative positions of an event emitter and the listener. How is it that the position of the wet signal in the setup you described can be something other than the position of the sound source?

  2. In your setup, what determines how much of the original signal goes to the reverb bus? You’ve got a send on the event’s master track, but is it a static value, or are you automating it?

  3. Might you be willing to send screen shots of the other components of the setup? The event, etc.?

Thanks.

Jonnie

Not sure why all the line breaks got stripped from this post.

Jonnie, I’ve added some more details to the answer.

Patrick…AMAZING. Thanks so much for your very detailed response. I’ve been able to get the effect I was looking for both in FMOD and in-game with your help.

This is amazing, Patrick! I also recreated your setup both inside FMOD and Unity.
But I have found one problem.

The reverb positioning only works if the Windows speaker configuration is set to a surround format. If I choose a stereo setup, the positioning doesn’t work; only the distance attenuation of the snapshot does.

I thought it might be because FMOD itself switches the project to a stereo format, so I tried to scope in and automate the stereo pan control of the return bus alongside the surround one, but it’s not working.

Is there any way for this to work if users select a stereo setup in Windows?

Hi Marcel. Put this code at line 268 of FMOD_StudioSystem.cs: `sys.setSoftwareFormat(0, FMOD.SPEAKERMODE._5POINT1, 0);` (change `_5POINT1` to `_7POINT1` if that’s what your project setting is). It will ensure all of the panning takes place in the correct speaker format, and FMOD will downmix at the end if the user has stereo output.
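For context, the important thing is that setSoftwareFormat is called after the low-level system is created but before the Studio system is initialized. A stripped-down sketch of that ordering (not the actual contents of FMOD_StudioSystem.cs, which does a lot more) looks like this:

```csharp
using System;

// Simplified initialization order for the 1.x-era integration; in newer
// versions getLowLevelSystem() is called getCoreSystem().
static class FMODInitSketch
{
    public static FMOD.Studio.System CreateAndInit()
    {
        FMOD.Studio.System studioSystem;
        FMOD.Studio.System.create(out studioSystem);

        FMOD.System lowLevelSystem;
        studioSystem.getLowLevelSystem(out lowLevelSystem);

        // Must come before initialize(): force a surround mixing format so
        // panning happens in 5.1 (or 7.1) and FMOD downmixes to the user's
        // actual output at the very end of the signal chain.
        lowLevelSystem.setSoftwareFormat(0, FMOD.SPEAKERMODE._5POINT1, 0);

        studioSystem.initialize(1024, FMOD.Studio.INITFLAGS.NORMAL, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);
        return studioSystem;
    }
}
```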

Hi Nicholas.
I tried that, but it didn’t help.

I am using Unity 4 at the moment, but I also tried it in 5.2. No change.

You also wrote “change _5POINT1 to _7POINT1 if that’s what your project setting is.”
My current FMOD project format is 5.1, and I tested the snapshot positioning with the Windows speaker setup set to Stereo. Do you mean I should change the FMOD project to 7.1 and work in that format? Just in case, I did that, but nothing changed.