
I would like to have the orientation of 3D sounds based on my camera, but the volume attenuation based on the distance to my main character’s avatar. In other words, volume would be calculated based on distance from the player character, while panning (phase shift?) would be calculated based on the angle between the direction the camera is facing and the direction of the sound source.

Is this easily doable with FMOD?

Thank you,

QA


I just tried this and it sounds great. Our sound designers love it. It would be nice if this behavior were an option instead of having to hack it in.


We’ve had a similar request for our project, but I don’t know of any way to actually do it with the current version of FMOD. If we could get something like System::set3DListenerAttenuationPosition(), that would be sweet.

  • Guy

I just realized that these faked positions will mean I am now unable to use 3D cones. Decoupling the microphone position from the position used to figure out the 3D location of the sound in the speakers is necessary to get this behavior and still be able to use other features that are dependent on position.

Brett, any thoughts?


The position vector -is- the attenuation location.

Set the orientation of the listener to that of the camera and the position to that of the player; it should work OK in most cases.
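
For example, something along these lines with the FMOD Ex C++ API (just a sketch; the camera vectors and player position come from your engine):

#include <fmod.hpp>

// Sketch: listener orientation taken from the camera, listener position taken
// from the player avatar. playerPos, camForward and camUp are assumed to come
// from the game engine, with forward/up unit length and perpendicular.
void updateListener(FMOD::System *system,
                    const FMOD_VECTOR &playerPos,
                    const FMOD_VECTOR &camForward,
                    const FMOD_VECTOR &camUp)
{
    FMOD_VECTOR vel = { 0.0f, 0.0f, 0.0f };   // or pass the player's velocity for doppler
    system->set3DListenerAttributes(0, &playerPos, &vel, &camForward, &camUp);
}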


Just to add the weight of one more vote … for all the reasons stated above, I decided early on to forsake FMOD’s 3D system entirely. I do my own conic calculations (which are slightly more complex and more natural than D3D style), my own distance calculations (including portaled deflection path), and my own separation of point-of-orientation and point-of-loudness mic pickups. You might guess I’ve also rejected FMOD’s Geometry system as unsuitable for my particular needs.

Not trying to put down FMOD, here. I love FMOD’s mix chain, and the well-designed separation of parts that makes it easy to pick and choose. Just giving feedback on which bits aren’t useful to me, in their current state. 😀


I think it breaks down for 3D positions, though. If I set the listener position at the player’s head, then any sounds that occur between the camera position and the player’s head will come from the rear speakers, even if the orientation of the listener is correct. They’re happening in front of the camera, so you’d expect them to come from in front of the listener.

  • Guy

I was determined to get this in, so I went ahead and implemented it. It turned out decoupling the two was rather easy. I did have to do this within FMOD’s code, though.

All I did was add another function in parallel with set3DListenerAttributes() called set3DListenerMicrophonePos(). I added another FMOD_VECTOR to FMOD::Listener, poorly-named mPositionForPan. In set3DListenerAttributes() instead of setting the listener’s mPosition, I set the mPositionForPan. In set3DListenerMicrophonePos() I set the listener’s mPosition. Then, in ChannelRealManual3D’s set2DFreqVolumePanFor3D() I use the listener’s mPositionForPan rather than the mPosition. Voila.

Of course, this only works for channels that derive from ChannelRealManual3D, so if you’re using DirectSound or something else that uses other channel types, this approach won’t work perfectly. ChannelDSound, for example, just passes a position to DirectSound to do the attenuation and panning calculations, so you could probably use the approach QuantumAnenome outlined earlier to get it to work correctly. You wouldn’t have the problem of breaking other FMOD functionality there, since the faked position has already been calculated before it’s handed off to DirectSound.
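
To show the idea without pasting FMOD source, here is a tiny standalone illustration of the split. The Vec3 and Listener types below are made up for the example and are not FMOD’s actual internals:

#include <cmath>

struct Vec3 { float x, y, z; };   // stand-in type for the example, not FMOD's

static float length(const Vec3 &v)             { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }
static Vec3  sub(const Vec3 &a, const Vec3 &b) { return Vec3{ a.x - b.x, a.y - b.y, a.z - b.z }; }

// Mirrors the split described above: one position feeds attenuation, the other feeds panning.
struct Listener
{
    Vec3 mPosition;        // the "microphone" (avatar), used for distance attenuation
    Vec3 mPositionForPan;  // the camera, used for left/right panning
    Vec3 mRight;           // camera right vector, unit length
};

// Toy versions of the two calculations that are normally derived from a single position.
float attenuation(const Listener &l, const Vec3 &src, float minDist)
{
    float d = length(sub(src, l.mPosition));
    return (d <= minDist) ? 1.0f : minDist / d;    // inverse rolloff, full volume inside minDist
}

float pan(const Listener &l, const Vec3 &src)      // -1 = hard left, +1 = hard right
{
    Vec3 toSrc = sub(src, l.mPositionForPan);
    float d = length(toSrc);
    if (d < 1e-6f) return 0.0f;
    return (toSrc.x*l.mRight.x + toSrc.y*l.mRight.y + toSrc.z*l.mRight.z) / d;
}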


Our audio guys requested this feature as well, but I couldn’t think of a practical way to give it to them either. It sounds odd at first, but I think it would be a very useful feature for games with a third-person perspective, especially those in which you can pull the camera away from your avatar by a considerable distance, like ours.

BTW, in our game we deal with this by positioning the listener about two thirds of the way from the avatar to the camera, clamping that distance if the camera pulls back too far. It’s not a perfect solution, but it helps somewhat.
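
Roughly what our listener placement looks like, paraphrased into a standalone sketch (the maxPullback clamp value here is just a made-up placeholder, not our actual tuning):

#include <fmod.hpp>
#include <cmath>

// Place the listener two thirds of the way from the avatar towards the camera,
// clamping how far it is allowed to move away from the avatar.
void placeListener(FMOD::System *system,
                   const FMOD_VECTOR &avatarPos, const FMOD_VECTOR &cameraPos,
                   const FMOD_VECTOR &camForward, const FMOD_VECTOR &camUp,
                   float maxPullback = 10.0f)
{
    FMOD_VECTOR toCam = { cameraPos.x - avatarPos.x,
                          cameraPos.y - avatarPos.y,
                          cameraPos.z - avatarPos.z };
    float dist = std::sqrt(toCam.x*toCam.x + toCam.y*toCam.y + toCam.z*toCam.z);

    // Move two thirds of the way towards the camera, but never further than maxPullback.
    float t = (dist > 1e-6f) ? std::fmin(dist * (2.0f / 3.0f), maxPullback) / dist : 0.0f;

    FMOD_VECTOR pos = { avatarPos.x + toCam.x * t,
                        avatarPos.y + toCam.y * t,
                        avatarPos.z + toCam.z * t };
    FMOD_VECTOR vel = { 0.0f, 0.0f, 0.0f };
    system->set3DListenerAttributes(0, &pos, &vel, &camForward, &camUp);
}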


Reviving an old topic here…

I’m wondering if anyone else has recently tackled this issue. We’re going to be addressing it soon, and I’m wondering if there are any other options out there with the more recent versions of FMOD.

We’re leaning heavily towards the "get volume from playerPos and orientation from cameraPos" style of implementation…


I’ve had similar requests, and so far I’ve just made an option to put the listener at the player. This gives you a more ‘involved’ experience, with sounds coming out of all the speakers instead of just the front speakers. But it can get pretty confusing as well; using the camera’s rotation helps a bit with navigating.

To get what the first poster wanted, you could set the listener at the camera and then fake the position of all objects. Note that I have not tried this yet.

The fake positions should have the same rotation relative to the camera as before, but with the distance equal to the distance from the player to the original position.

This would make the player’s own sounds the loudest, which is probably good. The sounds in front of the camera would still be in front of the camera, even the sounds that are between the player and the camera.

However, any automatic occlusion, Doppler, or reverb system would be pretty confused by these faked positions, so you would probably need to handle those by hand using the original positions.

Might work..
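
For the occlusion part, something like this is what I have in mind, assuming the fake position is computed as described above and that the geometry calls behave the way I expect (untested):

#include <fmod.hpp>

// Pan/attenuate from the faked position, but query geometry occlusion between
// the real source position and the player ("microphone") so walls still matter.
// fakePos is assumed to be computed as described above (same direction from the
// camera, player-to-source distance).
void updateEmitter(FMOD::System *system, FMOD::Channel *channel,
                   const FMOD_VECTOR &fakePos,
                   const FMOD_VECTOR &realPos, const FMOD_VECTOR &playerPos)
{
    FMOD_VECTOR vel = { 0.0f, 0.0f, 0.0f };
    channel->set3DAttributes(&fakePos, &vel);        // drives panning and distance volume

    float direct = 0.0f, reverb = 0.0f;
    if (system->getGeometryOcclusion(&playerPos, &realPos, &direct, &reverb) == FMOD_OK)
        channel->set3DOcclusion(direct, reverb);     // occlusion based on the true geometry
}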


A year later, we’re still using the approach described in my last post without any problems. The ability to decouple the attenuation position and panning position should really be a part of FMOD.


Hi,

I have a related request too.

I asked myself whether it would be possible to use FMOD’s audibility computation for game AI. For example, an AI agent’s perception needs to compute the audibility of, say, the main character’s footsteps in order to make decisions.

I could imagine something similar to Channel::getAudibility, but computing the audibility (channel volume, distance attenuation, geometry occlusion etc.) between a channel and a ‘virtual’ listener which is independent from the ‘sound’ listener which is set via System::set3DListenerAttributes.

What do you think about that? Is there a way to do something like this in FMOD Ex? Can I access that audibility computation function? Unfortunately, it is not documented or exported.

Best regards,
Philip
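
In the meantime, the best I can imagine is approximating it by hand, roughly like this (a crude, untested sketch that assumes the default inverse rolloff and ignores custom rolloff curves, cones and the reverb path):

#include <fmod.hpp>
#include <cmath>

// Rough approximation of "audibility at a virtual listener":
// channel volume * inverse-distance attenuation * geometry occlusion.
float audibilityAt(FMOD::System *system, FMOD::Channel *channel,
                   const FMOD_VECTOR &virtualListener, const FMOD_VECTOR &sourcePos)
{
    float volume = 1.0f, minDist = 1.0f, maxDist = 10000.0f;   // maxDist not used in this estimate
    channel->getVolume(&volume);
    channel->get3DMinMaxDistance(&minDist, &maxDist);

    FMOD_VECTOR d = { sourcePos.x - virtualListener.x,
                      sourcePos.y - virtualListener.y,
                      sourcePos.z - virtualListener.z };
    float dist = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    float atten = (dist <= minDist) ? 1.0f : minDist / dist;   // default inverse rolloff

    float direct = 0.0f, reverb = 0.0f;                        // 0 = not occluded, 1 = fully occluded
    system->getGeometryOcclusion(&virtualListener, &sourcePos, &direct, &reverb);

    return volume * atten * (1.0f - direct);
}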


Well, we finished implementing what I had originally come up with, which was also hit upon by jornj. We position the ‘listener’ as normal at the camera position and orientation, and position a ‘microphone’ at the main avatar’s position. When we update a sound emitter’s 3D position, we fake its corresponding sound’s 3D position to be closer to or farther away from the camera, at a distance equal to that between the item and the microphone. The result is exactly what we expected, and makes for a better experience. I hope FMOD and other 3rd-party audio libraries add direct support for this cool feature of separating the ‘listener’ from the ‘microphone’. I can’t upload a picture here, but for anyone who is interested I will gladly email more details.
QuantumAnenome at yahoo.


Just to reply to Brett’s idea (which was my first quick solution):
This does not work because the orientation from the real camera to the real sound source is not preserved. If my avatar in the distance is facing the camera, and a cricket is close by to his left, I do NOT want to hear the cricket from the right speaker if I use the camera’s orientation at the avatar’s position. I also do not want to hear the cricket from the left speaker if I use the avatar’s orientation and position. I do want to hear the cricket from center (ok, slightly right), and loud. Now when the avatar has moved on, and we just happen to have the camera position right on top of the cricket, I don’t want to hear it at all! (Cricket is too far from the avatar.) I hope this is a better explanation.


Just chiming in that I think this feature is a good idea as well. I could use QuantumAnenome’s solution, but doing those matrix transforms on the update of every event instance playing seems really wasteful.


All it took was the following extra code in the set3D() call.

// Fake the position of the sound source to create a more intuitive experience for the player.
float distance = (pos - micPos).length();           // distance from the avatar ("microphone") to the sound
Vector3 delta = pos - listenerPos;                  // direction from the camera (listener) to the sound
delta.normalise();
Vector3 newPos = (delta * distance) + listenerPos;  // same direction from the camera, avatar-based distance

micPos is the avatar, listenerPos is the camera, and pos is the original emitter position.

This gets called per 3D sound per frame, and seems to be negligible compared to what’s going on inside FMOD’s set3DAttributes(newPos).
