OK, so I don’t think I can do what I plan to with the existing API, but I’m putting the question out there. I have a single stream set up that, for example, plays back a river gurgling sound. Now in our editor I have a river with a U shape. When the user is standing completely to the left of the river, they will hear the stream in their right channel. If the user stands in the middle of the U formation, they need to hear it from both channels, because audio is coming from both sides of the listener. In theory, sure, I should create two streams and position one on each side, as the two sound sources would have their own independent playback positions. In practice, though, it wouldn’t matter if their playback were synchronised, in order to save memory and processing (on decoding the stream). As far as I can see there is no way exposed in FMOD to ‘share’ the decode buffer between multiple channels. The documentation says:
"Note that a stream only has 1 decode buffer and file handle, and therefore can only be played once. It cannot play multiple times at once because it cannot share a stream buffer if the stream is playing at different positions.
Open multiple streams to have them play concurrently."
Unfortunately, on some consoles it’s prohibitively expensive to maintain additional streams (as opposed to cloning them). I will drop this functionality of splitting the sound source into multiple points on those systems, but I just wanted to see if there is perhaps a way to achieve what I’m after. I can get halfway there by sharing the data read in from the streamer (emulating such behavior in our read callbacks) – but the internal decode buffer and the decoding itself will still be done on a per-channel basis.
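
Roughly, the sharing I have in mind looks like this, modelled outside FMOD (a toy Python sketch; none of these names are real FMOD API): each block is decoded once, and every attached channel just applies its own gain to the same decoded samples.

```python
# Toy model of "decode once, feed many channels". Illustrative only --
# these names are not the FMOD API.

class SharedStream:
    def __init__(self, decode_block):
        self.decode_block = decode_block   # callable: block_index -> list of samples
        self.channels = []                 # per-channel gain values
        self.decodes = 0                   # count decode passes for illustration

    def add_channel(self, gain):
        self.channels.append(gain)
        return len(self.channels) - 1      # channel handle (index)

    def render(self, block_index):
        samples = self.decode_block(block_index)  # decoded ONCE per block
        self.decodes += 1
        # every channel reuses the same decoded samples, scaled by its gain
        return [[s * g for s in samples] for g in self.channels]


stream = SharedStream(lambda i: [0.5, -0.5, 0.25])
stream.add_channel(1.0)   # e.g. one arm of the river
stream.add_channel(0.5)   # e.g. the other arm, further away
out = stream.render(0)
print(stream.decodes)  # 1 -- one decode pass serviced both channels
print(out[1])          # [0.25, -0.25, 0.125]
```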
Any suggestions?

Thanks,
Jade.

I’ve kinda run into the same problem… I hate the restriction that you have to create a separate streaming sound for each instance you want to play.

How long is the river gurgling sound? Is it reasonable to load it into memory? If so, you could create 3 streaming sounds which all stream it from memory instead of from disk. This would also help because each sound could have a different play cursor, so they don’t all sound exactly the same.
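
If it’s in memory, the 3 instances are really just 3 cursors over one shared buffer; something like this toy sketch (not FMOD code, which manages this internally when you stream from memory):

```python
# One copy of the sample data in memory, several independent play cursors.
data = list(range(10))          # stands in for the river loop's sample data

class Cursor:
    def __init__(self, start):
        self.pos = start

    def read(self, n):
        # looping read: wrap around the end of the shared buffer
        out = [data[(self.pos + i) % len(data)] for i in range(n)]
        self.pos = (self.pos + n) % len(data)
        return out

a, b = Cursor(0), Cursor(5)     # two instances offset so they don't sound identical
print(a.read(3))  # [0, 1, 2]
print(b.read(3))  # [5, 6, 7]
```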

Also, what our sound designer likes to do in cases like this is to create 3 or 4 shorter samples that can be stitched together seamlessly in random order… That way your ear doesn’t pick up the pattern of the gurgling water. Of course, the crappy thing about this is that if you use a sound sentence, you’re going to end up creating at least 3 streaming sounds for each "instance" of the gurgling river sound you want. Check out the realtimestitching example to see how to do something like this.
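
The stitching idea in a nutshell (segment names are made up; the realtimestitching example shows the actual API):

```python
import random

# Pick gurgle segments in random order, avoiding immediate repeats so the
# ear doesn't latch onto a pattern. Segment names are placeholders.
def stitch(segments, count, rng):
    order, prev = [], None
    for _ in range(count):
        choice = rng.choice([s for s in segments if s != prev])
        order.append(choice)
        prev = choice
    return order

rng = random.Random(42)
seq = stitch(["gurgle_a", "gurgle_b", "gurgle_c"], 8, rng)
assert all(x != y for x, y in zip(seq, seq[1:]))  # no segment plays twice in a row
print(seq)
```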

-sam

[quote="Jade_Lee":uu36crz4]Unfortunately on some consoles it’s prohibitively expensive to maintain additional streams (as opposed to cloning them). I will drop this functionality of splitting the sound source into multiple points on these systems, but just wanted to see if there is a way to perhaps achieve what I’m after. I can get half way there, by sharing the data read in from the streamer (emulating such behavior in our read callbacks) – but the internal decode buffer and decoding itself will still be done on a per-channel basis.
Any suggestions?

Thanks,
Jade.[/quote:uu36crz4]

Prohibitively expensive? What is the expensive part, may I ask? Opening a stream twice is exactly the same as ‘cloning it’ (by which I suppose you mean the buffers, file handles and header) CPU/memory-wise. Do you mean simply storing the extra handle and updating it? How is that expensive?
In my view all you’d be exchanging is a sound handle for a channel handle.

If we allowed new stream instances to spawn just because you play a sound, we’d be allocating memory and opening file handles [i:uu36crz4]every time you play a sound[/i:uu36crz4] (i.e. with FMOD_CHANNEL_FREE), and then freeing and closing when it stops; that’s terribly inefficient. I’m vehemently against a realtime alloc/performance hit like this; we’d much rather the user decide, with a createSound call, when that sort of resource management happens. People would just abuse it and then ask why it takes so long to play a sound.

You should probably look into using a stereo sound with set3DSpeakerSpread instead, for a cooler volumetric effect like that.
You could have the 3D location at the center of your U, and then simply adjust the spread based on how close you are to that point. When you are inside the U you could have the spread at 180 degrees or even more.
You could also morph it from a 3D to a 2D effect using the set3DPanLevel function, which is quite unique.
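
As a sketch of what I mean, you could drive both values from one distance ramp (the radii and the linear ramp are just example numbers, not anything FMOD prescribes):

```python
def clamp01(x):
    return max(0.0, min(1.0, x))

# Map listener distance from the U's centre to a speaker spread angle and a
# 3D pan level. The inner/outer radii and linear ramp are example choices.
def spread_and_panlevel(dist, inner=5.0, outer=20.0):
    t = clamp01((outer - dist) / (outer - inner))  # 0 far away, 1 inside the U
    spread = t * 180.0       # would feed set3DSpeakerSpread
    pan_level = 1.0 - t      # would feed set3DPanLevel: 1 = fully 3D, 0 = fully 2D
    return spread, pan_level

print(spread_and_panlevel(25.0))  # (0.0, 1.0)   -- far away: normal 3D point source
print(spread_and_panlevel(5.0))   # (180.0, 0.0) -- inside the U: wide spread, 2D
```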

You could even do the 3d calculation yourself and place any input channel to any output speaker using setSpeakerMix/setSpeakerLevels. FMOD Ex is totally flexible in how speaker assignments are managed.


[quote="brett":8f7dpw9s]If we allowed new stream instances to spawn just because you play a sound, we’d be allocating memory and opening file handles [i:8f7dpw9s]every time you play a sound[/i:8f7dpw9s] (i.e. with FMOD_CHANNEL_FREE), and then freeing and closing when it stops; that’s terribly inefficient. I’m vehemently against a realtime alloc/performance hit like this; we’d much rather the user decide, with a createSound call, when that sort of resource management happens. People would just abuse it and then ask why it takes so long to play a sound.
[/quote:8f7dpw9s]

You could always reserve the number of streaming channels you wanted beforehand. We already have to tell FMOD how many channels we want when we initialize it anyhow. That would avoid the alloc when we play a sound.

You could leave the file handle on the sound itself. It would actually be better that way, because then the channels playing that sound would share 1 file handle instead of having 1 file handle open for each instance of the sound playing. Not that it really matters in this case. My guess is that if you are using FMOD on a console, you are almost certainly overriding the file callbacks anyway.

-sam


[quote="brett":10vx0zuw][quote="Jade_Lee":10vx0zuw]Unfortunately on some consoles it’s prohibitively expensive to maintain additional streams (as opposed to cloning them). I will drop this functionality of splitting the sound source into multiple points on these systems, but just wanted to see if there is a way to perhaps achieve what I’m after. I can get half way there, by sharing the data read in from the streamer (emulating such behavior in our read callbacks) – but the internal decode buffer and decoding itself will still be done on a per-channel basis.
Any suggestions?

Thanks,
Jade.[/quote:10vx0zuw]

Prohibitively expensive? What is the expensive part, may I ask? Opening a stream twice is exactly the same as ‘cloning it’ (by which I suppose you mean the buffers, file handles and header) CPU/memory-wise. Do you mean simply storing the extra handle and updating it? How is that expensive?
In my view all you’d be exchanging is a sound handle for a channel handle.

If we allowed new stream instances to spawn just because you play a sound, we’d be allocating memory and opening file handles [i:10vx0zuw]every time you play a sound[/i:10vx0zuw] (i.e. with FMOD_CHANNEL_FREE), and then freeing and closing when it stops; that’s terribly inefficient. I’m vehemently against a realtime alloc/performance hit like this; we’d much rather the user decide, with a createSound call, when that sort of resource management happens. People would just abuse it and then ask why it takes so long to play a sound.

You should probably look into using a stereo sound and using set3DSpeakerSpread instead for a cooler volumetric effect like that.
You could have the 3d location at the center of your U, and then simply adjust the spread based on how close you are to that point. When you are inside the U you could have the spread at 180 degrees or even more.
You could also morph it from 3d to a 2d effect using the set3DPanLevel function which is quite unique.

You could even do the 3d calculation yourself and place any input channel to any output speaker using setSpeakerMix/setSpeakerLevels. FMOD Ex is totally flexible in how speaker assignments are managed.[/quote:10vx0zuw]

I’m not familiar with the internals of stream decoding, but when I mentioned cloning the stream, I was thinking of a lightweight instance that didn’t maintain its own buffer/header/file handle. It would be a bit kludgy to implement (interface-wise), as in any system that supports instancing of this nature, you couldn’t modify certain aspects of the clone without necessarily changing the source (and other clones). My parallel in graphics is animation instancing: if you change the animation characteristics or weights of one instance, all instances change. There is no way to tune the animation for one instance; if you want that, you have to branch/duplicate it (rather than instance it). The benefit is that you only need to update the animation (skeleton) once to potentially animate an army of hundreds of characters (not the best example; obviously you want some variation), but in any case it’s a massive saving in CPU cost. Forgive me if my terminology, clone vs instance, is arse about :). Essentially the stream read/decoding would happen once into a master instance, and the decoded audio data would be fed into as many channels as you like.

Might I add, the set3DPanLevel function is awesome. Right now I’m doing a weighted average position of the sound (rather than multiple streams) to simulate the source, and it’s working pretty well. As the player neared the boundary of the sound (listener position overlapping the sound position) I’d get a nasty transition from one channel to stereo. set3DPanLevel is exactly what is required to smooth this out as the user approaches the sound position.
In a test scenario, with the sound at fixed volume but also fixed to the listener position, I found I had to reduce the volume by 25% when set3DPanLevel(0) was used in order to maintain an even volume. That is, set3DPanLevel(0) with volume 0.75 sounds just as loud as set3DPanLevel(1) at volume 1, when the listener position = sound position. If I leave both volumes at 1, set3DPanLevel(0) is clearly louder. There is probably an obvious reason for this that I’m missing. I was expecting I might have needed to attenuate down by -3dB to account for stereo output, though I’m not sure if this will change again when we shift to a 5.1 setup. I’m also not sure what the equivalent of 3dB is in terms of setVolume.
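
Working it through, if setVolume takes a linear gain, then the conversion would be gain = 10^(dB/20), which puts -3 dB at about 0.708 – close to the 0.75 I landed on by ear:

```python
# Convert a decibel change to the linear gain a setVolume-style API expects.
def db_to_gain(db):
    return 10.0 ** (db / 20.0)

print(round(db_to_gain(-3.0), 3))   # 0.708
print(round(db_to_gain(-6.0), 3))   # 0.501 -- roughly half amplitude
```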
I’ll have a look into set3DSpeakerSpread as well.

Thanks,
Jade.
