It’s a direct connection from the source to the reverb (i.e. the dry, non-high-passed signal). You can control where a Channel connects to the reverb, though: take a look at Channel::setReverbProperties and the ConnectionPoint member of the FMOD_REVERB_CHANNELPROPERTIES struct. If you want to send the wet signal to the reverb, set ConnectionPoint to the Channel’s DSP head.
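As a rough sketch, the suggestion above might look like the following against the FMOD Ex API. Error checking is omitted, and the cast on ConnectionPoint is an assumption about the header version in use; treat this as an outline, not tested code:

```cpp
// Sketch: tap the channel's wet (post-DSP-chain) signal by pointing the
// reverb's ConnectionPoint at the channel's DSP head.
#include <fmod.hpp>

void routeWetSignalToReverb(FMOD::Channel *channel)
{
    FMOD::DSP *head = 0;
    channel->getDSPHead(&head);               // head unit of the channel's DSP chain

    FMOD_REVERB_CHANNELPROPERTIES props;
    channel->getReverbProperties(&props);     // start from the current settings
    props.ConnectionPoint = reinterpret_cast<FMOD_DSP *>(head);
    channel->setReverbProperties(&props);     // reverb now taps the wet signal
}
```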
So… I tried setting the dspConnection to the channel’s DSP head, and the result was rather unpleasant (aurally speaking). The mix turned completely muddy… it seemed like volume control no longer worked on the individual channels (at least not on the feed being sent to the reverb DSP), so I couldn’t turn off sfx in the game. To clarify, here’s what I’m doing with each channel:
Add highpass and lowpass DSPs to the channel using the Channel::addDSP() function.
Call Channel::setReverbProperties(), using the DSP retrieved via Channel::getDSPHead() as the connection point.
Result: loss of volume control, and a strange, muddy reverb signal.
Any thoughts as to what might be going wrong here? Or am I misinterpreting what setting the dsp connection point to the channel’s dsp head should be doing?
Also, I have to ask: why in the world is setting the DSP connection point part of the setReverbProperties() function? It seems like a really odd API design choice. The typical reverb params are things like wet/dry mix, which can change constantly based on what’s happening in the game; we continuously morph all our reverb settings as the player moves around the world, so we’re calling this function every update tick. The connection point, however, is not something that would typically need to change at all once the connection is made. It really seems like it could benefit from being moved to its own API function. I initially suspected that continuously calling this function was causing the issues, but things fared no better after changing it to a one-time call after channel creation.
Just e-mailed you a capture of a fairly typical DSP network in the app, with some additional labels for clarification. As you can see, the send to the sfx reverb seems to be attached in the correct place (at least, it looks correct to me).
All volume control in our engine (except for explicitly controlled submix volumes) happens at the channel level, using Channel::setVolume(). In this configuration, though, Channel::setVolume() doesn’t seem to correctly attenuate the wet signal to the reverb, although the dry signal is properly attenuated. It’s very confusing, because looking at that graph, I can’t explain why this would be happening. If I leave the connectionPoint at the NULL default, the volume control works as expected.
Any thoughts as to what’s going on here?
Thanks for the Profiler image, it all looks correct to me.
When the connection point is set manually, we don’t make any assumptions about that connection’s owner, so we don’t scale the wet mix by the channel volume. You could have a situation where you are sending the connection point outside the channel entirely (say, to a parent channel group). Therefore, in this situation you will need to scale the wet mix yourself via the ‘Room’ member of the FMOD_REVERB_CHANNELPROPERTIES structure.
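A minimal sketch of doing that scaling yourself, assuming ‘Room’ is a gain in millibels where 0 mB is full level and -10000 mB is silence (the range used by FMOD Ex reverb properties), and that channel volume is a linear 0..1 value. The helper name is hypothetical:

```cpp
// Sketch: fold a channel's linear volume into the reverb's Room level, since
// FMOD won't scale the wet mix for you when ConnectionPoint is set manually.
#include <algorithm>
#include <cmath>

// Combine a base Room level (millibels) with a linear channel volume (0..1).
// Linear-to-millibel conversion: 20*log10(v) dB, and 1 dB = 100 mB.
int scaledRoomLevel(int baseRoomMb, float channelVolume)
{
    if (channelVolume <= 0.0f)
        return -10000;                                     // fully attenuated
    int volumeMb = static_cast<int>(2000.0f * std::log10(channelVolume));
    return std::max(-10000, baseRoomMb + volumeMb);        // clamp at silence
}
```

You would then set `props.Room = scaledRoomLevel(desiredRoomMb, volume);` whenever the channel volume or reverb settings change, before calling setReverbProperties().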