I am currently evaluating FMOD and I’m not sure how to handle this situation.
I have over 10,000 sounds I want to stream, with stitching (voice commentary). With our current system we have the streams in 4 banks; the biggest one has about 5000.
Originally I thought I would do the same with FMOD, creating fsb's and using setSubSoundSentence() for stitching. Unfortunately I found that the memory usage for my 5000-stream fsb was about 600K (with small headers and FMOD_LOWMEM). If we want 2 streams at once, that would be 1200K, which is a lot for us to give to streaming.
One idea is to split up the banks even further, say 500 streams per fsb. Then I could stitch using a method similar to the realtimestitching example. Now, as I write this, I'm wondering if I should just stream each file individually instead of from an fsb (or put each in its own fsb).
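For context, the stitching I have in mind looks roughly like this (a sketch based on the realtimestitching example, using the FMOD Ex C++ API; the bank file name and subsound indices are illustrative assumptions, not from a real project):

```cpp
// Sketch: stitch two subsounds of a smaller fsb into one gapless
// "sentence" (FMOD Ex C++ API). File name and indices are hypothetical.
#include <fmod.hpp>

void playSentence(FMOD::System *system, FMOD::Channel **channel)
{
    FMOD::Sound *fsb = 0;
    system->createStream("commentary_bank.fsb",      // hypothetical 500-stream bank
                         FMOD_CREATESTREAM | FMOD_LOWMEM, 0, &fsb);

    // Order the subsounds into a sentence; playback runs from one
    // to the next without gaps.
    int sentence[2] = { 12, 87 };                    // illustrative subsound indices
    fsb->setSubSoundSentence(sentence, 2);

    system->playSound(FMOD_CHANNEL_FREE, fsb, false, channel);
}
```

Error checking is omitted for brevity; every call above returns an FMOD_RESULT that should be tested.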
I would think there are other people with this many sounds; if anyone has some advice about this, I would appreciate it.
- lwellbrock asked 10 years ago
Ok, that's not what I'm seeing; I'll have to look into it more tomorrow, I guess. I just tested it quickly in the playstream example and got the same result. I changed the createStream call to use my fsb and added the FMOD_LOWMEM flag.
Before the call to createStream, Memory_GetStats returns 415288, afterwards it returns 1046626.
I also created another stream and you are correct: it does cache the data. I didn't realize that. It still jumped up to 1376204, though.
I used the sample bank generator to create my fsb, and I’ll mention that I’m testing on Wii right now if that makes a difference.
The memory you are seeing is the stream buffer. That is adjusted with System::setStreamBufferSize, which controls how much is read from disk at a time.
FMOD_CREATESOUNDEXINFO::decodebuffersize controls the size of the buffer that the file data is decoded into from the disk buffer.
Note that they are configurable for a reason: the smaller those values are, the more susceptible they are to disk read times and therefore stuttering/skipping noise.
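For reference, a minimal sketch of setting both values (FMOD Ex C++ API; the 8K sizes and file name are illustrative assumptions, not recommendations):

```cpp
// Sketch: shrinking FMOD's per-stream buffers (FMOD Ex C++ API).
// The 8K values are illustrative only; smaller buffers mean more
// frequent disk reads and a higher risk of stuttering.
#include <fmod.hpp>

void openSmallBufferStream(FMOD::System *system, FMOD::Sound **sound)
{
    // File buffer: how much is read from disk per fetch (raw bytes).
    system->setStreamBufferSize(8 * 1024, FMOD_TIMEUNIT_RAWBYTES);

    // Decode buffer: size of the buffer decoded into, in PCM samples.
    FMOD_CREATESOUNDEXINFO exinfo = {};
    exinfo.cbsize           = sizeof(FMOD_CREATESOUNDEXINFO);
    exinfo.decodebuffersize = 8 * 1024;

    system->createStream("commentary_bank.fsb",   // hypothetical bank
                         FMOD_CREATESTREAM | FMOD_LOWMEM,
                         &exinfo, sound);
}
```

Note that setStreamBufferSize only affects streams opened after the call, not ones already open.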
I'm using the default stream buffer size, which I believe is 16K; that means 32K per stream, if I understand correctly.
I set up my own memory functions in the playstream sample so I could see where the memory goes. I've modified it to open my fsb (4911 subsounds) for streaming twice. The first time, it allocates a 260KB chunk, 78 bytes for each subsound (4911 * 78 ≈ 374KB), and some other misc allocs.
The second time I open the fsb stream, it skips the 260KB (I assume that's the fsb header), but it still allocates the 78 bytes per subsound.
Does that seem correct? It seems a lot bigger than your post suggested.
If you're on Wii then every sound has an ADPCM context that it uses. I can't remember the exact size, but the context is a structure containing coefficient information. You can't get around that on Wii. We don't cache that info though, I think; we could probably do that.
Ok, thank you very much. If I wanted to avoid allocating that much memory, could I leave them as individual files and still stitch them? I know this is how it's done in the realtimestitching example, but I want to make sure it will be fast enough when reading off disc and that I will be able to use FMOD_NONBLOCKING.
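A minimal sketch of that approach (FMOD Ex C++ API; the file path and the polling loop are illustrative assumptions, following the pattern of the realtimestitching example):

```cpp
// Sketch: open one commentary file as a non-blocking stream and poll
// until it is ready (FMOD Ex C++ API). Path is hypothetical.
#include <fmod.hpp>

FMOD::Sound *openCommentaryStream(FMOD::System *system, const char *path)
{
    FMOD::Sound *sound = 0;
    system->createStream(path, FMOD_CREATESTREAM | FMOD_NONBLOCKING, 0, &sound);

    // The open happens on a background thread. In real code, poll
    // getOpenState once per game-loop tick rather than spinning here.
    FMOD_OPENSTATE state = FMOD_OPENSTATE_LOADING;
    while (state != FMOD_OPENSTATE_READY && state != FMOD_OPENSTATE_ERROR)
    {
        sound->getOpenState(&state, 0, 0);
    }
    return sound;
}
```

With individual files, only the header of the file actually being opened is parsed, instead of the whole bank's subsound table.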
FSB headers are cached, so it wouldn’t be 1200kb it would be 600kb.
Small headers are 8 bytes per sound, so I'd be surprised if you were getting 60 bytes per sound as you are claiming. Actually, I'm pretty sure this wouldn't be the case.
Besides the 8 bytes per sound, there is a 4-byte offset stored per sound, so it should be more like 120kb for 10,000 sounds.