I’m about to integrate FMOD with our resource streamer, that is, use one single system for streaming data. Why would we want to do this rather than let FMOD handle the streaming itself and synchronize the two systems? Due to heavy in-game data streaming, it’s just not optimal to have two systems streaming in parallel. We have tried that before, and even though it works we want to push our engine and gain more performance by using one system for all streaming. So, my question is how to accomplish this, that is, what we need to keep in mind. With FMOD’s file system callbacks and our own resource system it feels like it shouldn’t be impossible to make it all work, but I’m pretty sure there are one or two things that might stand in the way of the obvious solutions. For instance, I assume that when the callback asks for data you need to supply it right away, right? This means you always have to be one step ahead, and probably supply some kind of "prepare" functionality that is called just before the streaming is triggered, to make sure the first buffer is ready. How would you recommend implementing such functionality with FMOD?
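The "prepare" step described above could be sketched roughly like this. This is a plain C++ sketch with no real FMOD calls; `LoadBlockingFromResourceStreamer` is a hypothetical stand-in for the game’s own streamer, and the class name is made up:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical stand-in for the game's resource streamer: synchronously
// loads up to 'len' bytes of the named resource into 'dst' and returns the
// byte count. In a real engine this would wrap an async request and a wait.
static size_t LoadBlockingFromResourceStreamer(const char* /*name*/,
                                               void* dst, size_t len) {
    std::memset(dst, 'a', len);  // toy data so the sketch is runnable
    return len;
}

// A stream that stays "one step ahead": Prepare() fills the first buffer
// before playback is triggered, so the first data request from the audio
// side can be served immediately instead of stalling.
class PreparedStream {
public:
    PreparedStream(const char* name, size_t bufferSize)
        : name_(name), buffer_(bufferSize), valid_(0), cursor_(0) {}

    // Call just before the sound is started.
    void Prepare() {
        valid_ = LoadBlockingFromResourceStreamer(name_, buffer_.data(),
                                                  buffer_.size());
        cursor_ = 0;
    }

    // Serve a data request (what a file read callback would call).
    size_t Read(void* dst, size_t len) {
        size_t left = valid_ - cursor_;
        size_t n = left < len ? left : len;
        std::memcpy(dst, buffer_.data() + cursor_, n);
        cursor_ += n;
        return n;  // caller refills via the streamer when this runs dry
    }

private:
    const char* name_;
    std::vector<char> buffer_;
    size_t valid_, cursor_;
};
```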
- Frohagen asked 10 years ago
If I had read the reference manual a bit more carefully I would have found the lock/unlock functionality earlier. Let’s blame my glasses. Anyway, assume I create a "programmer sound" in FMOD Designer, and when I catch the sound through my callbacks I feed the FMOD::Sound object data using FMOD::Sound::lock, from a soundbank I have prepared earlier. In the callback I also make sure to store the FMOD::Sound somewhere so I can keep updating it at some given rate, until I get a callback saying the sound has been released. The data is of course streamed into a buffer by our own resource streamer, as intended. I know this is probably not as efficient from a sound perspective, but hopefully it will improve our overall streaming performance at the cost of adding some latency to the streamed sounds. Have I missed something, or is this the proper way to use an external resource streamer for streaming FMOD sounds?
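The per-update refill described above might look roughly like this. Note that FMOD::Sound::lock hands back up to two regions (ptr1/len1 and ptr2/len2) because the sample buffer is circular, and both must be filled before calling unlock. In this sketch the lock/unlock calls themselves are assumed and the two regions are passed in directly, so the copy logic is visible and testable on its own; `FillLockedRegions` is a made-up helper name:

```cpp
#include <cstddef>
#include <cstring>

// The two writable regions a circular-buffer lock can return; the second
// region is the wraparound part and may be empty.
struct LockedRegions {
    void* ptr1; size_t len1;
    void* ptr2; size_t len2;
};

// Copies the next chunk from the soundbank data our own streamer filled,
// starting at '*srcCursor' and advancing it. Returns total bytes written.
// In real code this would sit between sound->lock(...) and
// sound->unlock(ptr1, ptr2, len1, len2).
size_t FillLockedRegions(const char* src, size_t srcLen, size_t* srcCursor,
                         const LockedRegions& r) {
    void*  dsts[2] = { r.ptr1, r.ptr2 };
    size_t lens[2] = { r.len1, r.len2 };
    size_t written = 0;
    for (int i = 0; i < 2; ++i) {
        size_t left = srcLen - *srcCursor;
        size_t n = lens[i] < left ? lens[i] : left;
        if (n == 0 || dsts[i] == nullptr) continue;
        std::memcpy(dsts[i], src + *srcCursor, n);
        *srcCursor += n;
        written += n;
    }
    return written;
}
```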
This forum is not for instant responses; when we’re busy attending to our guaranteed 24 hour email turnaround we won’t even be looking at the forum. You should write to email@example.com for this.
As far as I can see, you just want a user-created stream; this is already shown in several of the examples.
You’re correct – the FMOD file overrides require data on demand. As such, you’ll probably want to use your own streamer to push data into a buffer, and then FMOD can pull data from it at will. The downside to this system is the increased memory and data transfer overhead. However, with a limited number of streams, it hopefully won’t be too bad – and also, if FMOD is always streaming from memory, you should be able to decrease the stream buffer sizes.
One possible solution is to create a simple StreamingBuffer class with generic SetData() and GetData() functions that manipulate data in either a circular or double buffer. You’d create one of these objects for each audio stream you wish to create, and pass each object’s pointer to FMOD as custom user data. These objects can then be monitored and filled as required by your own streaming system.
- JamesB answered 10 years ago
I’m guessing that is a problem for us, since we are using the event system, where we only have restricted access to the FMOD Ex layer. Still, there is of course the option of using the "programmer sound" technique, which gives us direct access to the FMOD::Sound. Assuming we do this: a) the sound designer has to set all sounds that are to be streamed as "programmer sounds", and b) when we catch a "programmer sound" we start filling a buffer with data for it, and as soon as we have the data we play the sound.
Now, I have a couple of questions regarding this approach. First, from a sound designer’s perspective, doesn’t setting "programmer sound" rule out the possibility of auditioning the sound in FMOD Designer, or have I missed something here? Secondly, since we can’t play the sound right away there will be a slight latency. Will this affect anything beyond the fact that you can’t design anything where a sound needs to be triggered with zero latency? That can be painful when dealing with dialogue, where you have to sync with subtitles and so on.