I would like to have the orientation of 3D sounds based on my camera, but the volume attenuation based on the distance to my main character’s avatar. In other words, volume will be calculated based on distance from the player character, while panning (phase shift?) will be calculated based on the angle between the direction the camera is facing and the direction of the sound source.
Is this easily doable with FMOD?
- QuantumAnenome asked 10 years ago
I just realized that these faked positions will mean I am now unable to use 3D cones. Decoupling the microphone position from the position used to figure out the 3D location of the sound in the speakers is necessary to get this behavior and still be able to use other features that are dependent on position.
Brett, any thoughts?
Just to add the weight of one more vote … for all the reasons stated above, I decided early on to forsake FMOD’s 3D system entirely. I do my own conic calculations (which are slightly more complex and more natural than D3D style), my own distance calculations (including portaled deflection path), and my own separation of point-of-orientation and point-of-loudness mic pickups. You might guess I’ve also rejected FMOD’s Geometry system as unsuitable for my particular needs.
Not trying to put down FMOD, here. I love FMOD’s mix chain, and the well-designed separation of parts that makes it easy to pick and choose. Just giving feedback on which bits aren’t useful to me, in their current state. 😀
- sgugler answered 10 years ago
I think it breaks down for 3D positions, though. If I set the listener position at the player’s head, then any sounds that occur between the camera position and the player’s head will come from the rear speakers, even if the orientation of the listener is correct. They’re happening in front of the camera, so you’d expect them to come from in front of the listener.
I was determined to get this in, so I went ahead and implemented it. It turned out decoupling the two was rather easy. I did have to do this within FMOD’s code, though.
All I did was add another function in parallel with set3DListenerAttributes() called set3DListenerMicrophonePos(). I added another FMOD_VECTOR to FMOD::Listener, poorly-named mPositionForPan. In set3DListenerAttributes() instead of setting the listener’s mPosition, I set the mPositionForPan. In set3DListenerMicrophonePos() I set the listener’s mPosition. Then, in ChannelRealManual3D’s set2DFreqVolumePanFor3D() I use the listener’s mPositionForPan rather than the mPosition. Voila.
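The change described above could be sketched outside FMOD’s source like this. This is a minimal, self-contained C++ illustration reusing the member names from the post; the real FMOD internals are of course more involved, and the inverse-rolloff attenuation and planar pan math here are my own assumptions, not FMOD’s actual code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Names mirror the post: mPosition drives attenuation (the "microphone"),
// mPositionForPan drives panning (the camera).
struct Listener {
    Vec3 mPosition;        // attenuation is measured from here (avatar)
    Vec3 mPositionForPan;  // direction/panning is measured from here (camera)
    Vec3 mForward;         // camera facing direction (unit length, xz-plane)
};

static float dist(const Vec3 &a, const Vec3 &b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Assumed inverse rolloff: full volume inside minDistance, then 1/d falloff.
float attenuation(const Listener &l, const Vec3 &src, float minDistance) {
    float d = dist(l.mPosition, src);
    return d <= minDistance ? 1.0f : minDistance / d;
}

// Signed left/right pan in [-1, 1] from the camera's point of view
// (right = forward rotated about the y axis).
float pan(const Listener &l, const Vec3 &src) {
    Vec3 right = { l.mForward.z, 0.0f, -l.mForward.x };
    float dx = src.x - l.mPositionForPan.x;
    float dz = src.z - l.mPositionForPan.z;
    float len = std::sqrt(dx * dx + dz * dz);
    if (len == 0.0f) return 0.0f;              // source at the camera: centered
    return (dx * right.x + dz * right.z) / len;
}
```

With the avatar at the origin and the camera pulled back behind it, a sound near the avatar stays loud even though it is far from the camera, while its pan still reflects where it sits in the camera’s view.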
Of course, this only works for channels that derive from ChannelRealManual3D, so if you’re using DirectSound or something that uses other channels this approach won’t work perfectly. ChannelDSound, for example, does just pass a position to DirectSound to do the attenuation and panning calculations, so you could probably use the approach QuantumAnenome outlined earlier to get it to work right. You wouldn’t have the problem there of breaking other FMOD functionality since it’s already been calculated before handing it off to DirectSound.
Our audio guys requested this feature as well, but I could think of no practical way to give it to them either. It sounds odd at first, but I think it would be a very useful feature for games with a 3rd person perspective – especially those in which you can pull the camera away from your avatar by a considerable amount, like in ours.
BTW, in our game, we deal with this by positioning the listener about two thirds of the distance between the avatar and the camera, and clamping the value if the camera pulls back too far. It’s not a perfect solution, but it helped the problem somewhat.
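That workaround might look roughly like this. The 2/3 ratio and the clamp come from the post; the vector type and the maxPull parameter name are illustrative:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Place the listener 2/3 of the way from the avatar toward the camera,
// but never more than maxPull units from the avatar (maxPull is an
// assumed tuning parameter for "camera pulls back too far").
Vec3 listenerPosition(const Vec3 &avatar, const Vec3 &camera, float maxPull) {
    Vec3 d = { camera.x - avatar.x, camera.y - avatar.y, camera.z - avatar.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    float t = len * (2.0f / 3.0f);   // 2/3 of the avatar-camera distance
    t = std::min(t, maxPull);        // clamp if the camera pulls back too far
    if (len > 0.0f) {
        float s = t / len;
        return { avatar.x + d.x * s, avatar.y + d.y * s, avatar.z + d.z * s };
    }
    return avatar;                   // camera on top of avatar: degenerate case
}
```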
- JamesB answered 10 years ago
Reviving an old topic here…
I’m wondering if anyone else has recently tackled this issue. We’re going to be addressing it soon, and I’m wondering if there are any other options out there with the more recent versions of FMOD.
We’re leaning heavily towards the "get volume from playerPos and orientation from cameraPos" style of implementation…
- Symbiotic answered 9 years ago
I’ve had similar requests, and so far I’ve just made an option to set the listener at the player. This gives you a more ‘involved’ experience, with sounds coming out of all the speakers instead of just the front ones.
But it can get pretty confusing as well. Using the camera’s rotation helps a bit with navigating.
To get what the first poster wanted, you could set the listener at the camera and then fake the position of all objects. Note that I have not tried this yet.
The fake positions should have the same rotation relative to the camera as before, but with the distance equal to the distance from the player to the original position.
This would make the player’s own sounds the loudest, which is probably good. The sounds in front of the camera would still be in front of the camera, even the sounds that are between the player and the camera.
However, any automatic occlusion, doppler and reverb system would be pretty confused by these faked positions, so you would probably need to do these by hand with the original positions.
- jornj answered 10 years ago
I have a related request too.
I asked myself if it would be possible to use FMOD’s audibility computation for Game AI. E.g. it is necessary for an AI-agent’s perception to compute the audibility of, say, the main character’s footsteps to make decisions.
I could imagine something similar to Channel::getAudibility, but computing the audibility (channel volume, distance attenuation, geometry occlusion etc.) between a channel and a ‘virtual’ listener which is independent from the ‘sound’ listener which is set via System::set3DListenerAttributes.
What do you think about that? Is there a possibility to do something in FMODEx? Can I access that audibility computation function? Unfortunately it is not documented or exported..
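Lacking access to FMOD’s internal computation, one could approximate such a query by hand. This is a rough sketch assuming FMOD-style inverse-distance rolloff and treating occlusion as an externally supplied factor; all names here are illustrative, not FMOD API:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Hand-rolled stand-in for the audibility query described above: combine
// the channel's own volume with inverse-distance rolloff, measured against
// an arbitrary "virtual" listener rather than the real one. The rolloff
// formula and the occlusion hook are assumptions, not FMOD's internal code.
float virtualAudibility(float channelVolume,
                        const Vec3 &source, const Vec3 &virtualListener,
                        float minDistance, float maxDistance,
                        float occlusionFactor /* 1 = clear, 0 = fully occluded */) {
    float dx = source.x - virtualListener.x;
    float dy = source.y - virtualListener.y;
    float dz = source.z - virtualListener.z;
    float d = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (d < minDistance) d = minDistance;   // no gain above full volume
    if (d > maxDistance) d = maxDistance;   // rolloff stops at maxDistance
    float attenuation = minDistance / d;    // inverse rolloff: 1/d beyond minDistance
    return channelVolume * attenuation * occlusionFactor;
}
```

An AI agent could then compare the result against a hearing threshold to decide whether it perceives, say, the footsteps.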
- mutex answered 10 years ago
Well, we finished implementing what I had originally come up with, which was also hit upon by jornj. We position the ‘listener’ as normal at the camera position and orientation, and position a ‘microphone’ at the main avatar’s position. When we update a sound emitter’s 3D position, we fake its corresponding sound’s 3D position to be closer to or farther from the camera, at a distance equal to that between the emitter and the microphone. The result is exactly what we expected, and makes for a better experience. I hope FMOD and other 3rd party audio libraries add direct support for this cool feature of separating the ‘listener’ from the ‘microphone’. I can’t upload a picture here, but for anyone who is interested I will gladly email more details.
QuantumAnenome at yahoo.
Just to reply to Brett’s idea, (which was my first quick solution) –
This does not work because the orientation from the real camera to the real sound source is not preserved. If my avatar in the distance is facing the camera, and a cricket is close by to his left, I do NOT want to hear the cricket from the right speaker if I use the camera’s orientation at the avatar’s position. I also do not want to hear the cricket from the left speaker if I use the avatar’s orientation and position. I do want to hear the cricket from center (ok, slightly right), and loud. Now when the avatar has moved on, and we just happen to have the camera position right on top of the cricket, I don’t want to hear it at all! (Cricket is too far from the avatar.) I hope this is a better explanation.
All it took was the following extra code in the set3D() call.
// Fake the position of the sound source to create a more intuitive experience for the player.
float distance = (pos - micPos).length();
Vector3 delta = pos - listenerPos;
delta.normalise();
Vector3 newPos = (delta * distance) + listenerPos;
micPos is the avatar, listenerPos is the camera, pos is the original emitter.
This gets called per 3D sound per frame, and seems to be negligible compared to what’s going on inside FMOD’s set3DAttributes(newPos).
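For reference, here is a self-contained version of the same math, using plain structs instead of engine types; the zero-length guard is my own addition. The returned position is what you would then hand to set3DAttributes():

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3 &o) const { return { x - o.x, y - o.y, z - o.z }; }
    Vec3 operator+(const Vec3 &o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vec3 operator*(float s) const { return { x * s, y * s, z * s }; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
};

// Keep the sound's direction as seen from the camera (listenerPos), but use
// its distance from the avatar (micPos) so attenuation tracks the avatar.
Vec3 fakeSoundPosition(const Vec3 &pos, const Vec3 &micPos, const Vec3 &listenerPos) {
    float distance = (pos - micPos).length();      // loudness distance: avatar -> source
    Vec3 delta = pos - listenerPos;                // direction: camera -> source
    float len = delta.length();
    if (len == 0.0f) return listenerPos;           // source at the camera: direction undefined
    return listenerPos + delta * (distance / len); // same direction, avatar's distance
}
```

Note that a source sitting between camera and avatar gets pushed out in front of the camera, which is exactly the behavior asked for at the top of the thread.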