Hey guys, I’ve been asked a few times by a few of my users how to calculate the audibility of a sound being played at a location in the 3D audio world… And it hit me that either I missed something in the FMOD interface or that a few helpful features are actually missing…
OK, you can get the audibility of a channel.
That returns the volume that WOULD be perceived by the listener (if a sound were actually playing at full peak-to-peak).
And you can get the volume of a group or channel with the getVolume function.
But a few of the people I talked with would like to know the level that is actually being "heard" right now, according to the sound actually playing… So if you hit a silent part in a song (or sound effect), the function would return near 0…
The only way I can think of doing this is to get the wave data and scan the data buffer, but that seems like a waste of CPU to me. If this value could be calculated by the FMOD sound system itself, that would be great.
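For reference, the buffer scan in question is roughly this — a minimal sketch that assumes you have already pulled a block of float samples (e.g. via Channel::getWaveData); the function name and buffer are mine, not part of FMOD:

```cpp
#include <cmath>
#include <cstddef>

// Returns the RMS level (roughly 0.0 .. 1.0) of a block of float samples.
// This is the "virtual volume" of what is actually playing right now:
// a silent passage returns near 0 even if the channel volume is 1.0.
float rmsLevel(const float* samples, std::size_t count)
{
    if (count == 0) return 0.0f;
    double sum = 0.0;
    for (std::size_t i = 0; i < count; ++i)
        sum += static_cast<double>(samples[i]) * samples[i];
    return static_cast<float>(std::sqrt(sum / count));
}
```

The cost is one multiply-add per sample per query, which is exactly the strain being complained about here.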
Also, some people wanted a way for their AIs to listen to the sounds a player makes in the 3D world, so that, for example, if the player dropped something and made a loud sound, the AI would/could hear it… So basically a function to query how much a sound (or even a group) would be heard from an x,y,z position in the world. I suggested placing the player-generated sounds in a player group, getting the wave data of that group, and using a linear rolloff calculation on the data to achieve this result. But if it was in the FMOD engine, it would be cool.
Finally, I think it would be nice to have a few functions for channels and groups that return how much is actually (or would actually be) heard, without me having to analyze the wave or spectrum data for 2D sound too… I made an example media player where I display the volume coming out of the channel/group (in an LED-like manner). But it was a pain to have to analyze it myself (not that hard, but slow in the context of my DLL), and I was never able to make it look just like the volume indicator on Vista (in the tray).
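For an LED-style display like the Vista tray meter, the usual trick is to map the measured level into decibels and give the meter some ballistics (instant rise, slow fall) so it doesn't flicker. A sketch — the 60 dB range and decay rate are my assumptions, nothing FMOD-specific:

```cpp
#include <algorithm>
#include <cmath>

// Map a linear level (0..1) to a 0..1 meter position over a 60 dB range.
// Levels at or below -60 dB show as 0 (meter off); full scale shows as 1.
float meterPosition(float linearLevel)
{
    const float floorDb = -60.0f;                 // assumed display range
    if (linearLevel <= 0.0f) return 0.0f;
    float db = 20.0f * std::log10(linearLevel);   // linear -> decibels
    return std::clamp((db - floorDb) / -floorDb, 0.0f, 1.0f);
}

// Simple meter ballistics: rise instantly, decay a fixed amount per update.
float smoothMeter(float previous, float target, float decayPerUpdate = 0.05f)
{
    if (target >= previous) return target;                 // attack: jump up
    return std::max(target, previous - decayPerUpdate);    // release: fall slowly
}
```

Feeding each new RMS reading through `meterPosition` and then `smoothMeter` gives the familiar "falls gradually, jumps instantly" look.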
Thanks and keep up the good work.
- icuurd12b42 asked 8 years ago
[quote="audiodev"]In my experience, basing AI behavior on the actual sound system starts you down a dangerous road. AI can almost always get away with a much simpler (and more predictable) model of "sound". It’s often handy to be able to run your game with sound disabled, to work around short-term issues caused by data, a bad SDK update, whatever.
Game audio is also full of cheats and tricks to get a mix that sounds good given the limitations of consumer audio. You really don’t want the behavior of your AI to change because the sound team compressed a sample after some producer said it wasn’t "meaty" enough.[/quote]
That is correct, though my request was only geared towards this a little. The principal feature requests were:
1) Getting the audibility of a sound/group according to the data being played, regardless of the master channel volume setting. Such a feature could be used to display a Windows-like volume indicator like you see at the bottom right in the task bar…
2) Getting the audibility of a sound at an x,y,z coordinate in a 3D world. The audibility right now is static and based on the listener position. I think having another function to query whether a sound would be audible at an arbitrary point would be nice.
2b) Having it (another API) based on the data being processed would be better.
Let’s talk about 1).
This is achievable by us getting the wave data. However, if scanning the wave data is already being performed by the core, then doing it again ourselves to get the virtual volume is an extra strain…
Let’s talk about 2).
OK, anyone can figure this one out… Static audibility using linear rolloff is quite easy to compute from any point; using logarithmic rolloff, a bit harder. BUT what if you have blockers in the way? So I think this feature would be useful for AIs (if it’s no strain on the game to do so) to figure out if someone could be heard, that is, the possibility of being heard…
Of course, as you mentioned, if your setup disables the sound system entirely (no more instantiating channels), then this becomes quite useless, and it’s dangerous to base your AI on something like that. On the other hand, sound is now an integral part of real-world simulation and games, so it’s not such a ridiculous concept to have your AIs react to actual sounds in the 3D world…
Let’s talk about 2b).
Basically it’s 1) and 2) merged… Such a system could be used to detect the player from his footsteps (louder if running or after jumping from a height) or gunshots. Yes, it’s "fakable" without looking at the wave data, but I say it’s an interesting idea. Just tell the sound guys to inform you if they change "game-analyzed" sounds like a grenade sound, so you can tweak your thresholds. After all, no one should touch a load-bearing wall without consulting the architect. The same applies to in-game sounds that are heard by your AIs…
Sound-centric games are interesting. If you have a system that can do this on actual data, fast, right in the core, then we would not need to duplicate the function in our game code…
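Tying 1) and 2b) together, the AI side could be as simple as: perceived level = measured source level × distance attenuation, compared against a hearing threshold. A hypothetical sketch (the types, names, and threshold are all made up for illustration, none of this is FMOD API):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float distanceBetween(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Would an AI at 'listener' hear a sound of the given measured level
// (e.g. the RMS of the player's footstep group) emitted at 'source'?
bool aiCanHear(const Vec3& source, const Vec3& listener,
               float sourceLevel, float maxDist, float threshold)
{
    float d = distanceBetween(source, listener);
    if (d >= maxDist) return false;
    float attenuation = 1.0f - d / maxDist;   // linear rolloff, as suggested
    return sourceLevel * attenuation > threshold;
}
```

A louder footstep (higher measured `sourceLevel`) is heard from farther away, which is exactly the running-vs-sneaking behavior described above.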
No argument on #1. I wouldn’t base any AI on it, but I could see it making for nice visuals. Turns out that’s also super easy to do in FMOD by slotting in a custom DSP at the right point. Like Brett, I think that’s appropriate user-level code. As someone who doesn’t need that particular feature, I am glad not to pay the perf cost for it.
I’m on the same page w/ you regarding #2, except that I wouldn’t use the actual low-level sound system for computing occlusion. I’d use a game-level system, like the collision used for physics, or perhaps a custom data structure. The big win is that the AI and the real audio have different needs, and it’s nice to have the flexibility to tune them independently.
Here are some tricky examples:
You want AI to respond when you kill an enemy near them. This is a core mechanic, so it needs to happen pretty much all of the time. However, the sound designers want to use a variety of sounds: sometimes a guy cries out, sometimes he sort of gurgles, sometimes he makes no sound at all.
There’s a really important explosion in your game. The designer knows that it can only happen when the player is in a particular room, so he decides to make it 2D, or gives it a massive radius and no falloff. Except now all of the AI in the level are freaking out because they, too, "heard" the explosion, even though the designer was only thinking of the player when he authored it.
In the end, I totally agree w/ you that sound centric games are interesting. However, just because the gameplay is sound centric, that doesn’t mean the actual implementation needs to be.
[quote="brett"]You would have to call getWaveData. You’ve said that scanning it would be a waste of time, but that’s exactly what FMOD would have to do, slowing down the mixer, so there wouldn’t be any saving anywhere.[/quote]
I was hoping you guys already saw the data as it played… Could it be possible to keep an average over time without my costly wave-chunk analysis?
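For the record, the usual cheap way to keep "an average over time" on the user side is an envelope follower: scan each wave-data block once for its peak and fold it into an exponential moving average, so there's no re-scanning of history. A sketch, with an assumed smoothing factor:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Tracks a smoothed level across successive wave-data blocks.
class EnvelopeFollower {
public:
    explicit EnvelopeFollower(float smoothing = 0.2f)
        : smoothing_(smoothing), level_(0.0f) {}

    // Feed one block of samples; returns the updated smoothed level.
    float update(const float* samples, std::size_t count)
    {
        float peak = 0.0f;
        for (std::size_t i = 0; i < count; ++i)
            peak = std::max(peak, std::fabs(samples[i]));
        // Move a fraction of the way toward this block's peak.
        level_ += smoothing_ * (peak - level_);
        return level_;
    }

    float level() const { return level_; }

private:
    float smoothing_;
    float level_;
};
```

Each update is one pass over the current block only, which is no cheaper per sample than what the mixer would have to do, matching Brett's point about there being no saving.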
Anyway, Happy New Year!
The resampler and mixer are tightly wound loops with SSE etc., so we’d have to insert extra instructions to do this, which would slow it down. We’d just have to do a post-process scan to work it out, which is the same thing the user can do, optionally.
In my experience, basing AI behavior on the actual sound system starts you down a dangerous road. AI can almost always get away with a much simpler (and more predictable) model of "sound". It’s often handy to be able to run your game with sound disabled, to work around short-term issues caused by data, a bad SDK update, whatever.
Game audio is also full of cheats and tricks to get a mix that sounds good given the limitations of consumer audio. You really don’t want the behavior of your AI to change because the sound team compressed a sample after some producer said it wasn’t "meaty" enough.