At the moment it’s possible to randomly play wavs in a sound definition (SD) by defining a minimum and maximum time between plays, but impossible to set up a system where you play sounds sequentially in random order.
I would very much like to be able to set the sound definition to play wavs in concatenation and in random order. That is, when a wav is reaching its last sample, a new wav that has been chosen randomly from the SD pool plays its first sample in perfect succession to the previous one, giving an unbroken line of randomly played wavs, randomly distributed not in time but in content.
One possibility this opens up is building catalogues of rhythmical ambiences to play from, making them non-repetitive.
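To make the request above concrete, here is a toy Python sketch (not FMOD code; the function name and wav names are made up for illustration) of how the random selection for such a concatenated sequence might work — each wav is chosen at random from the pool, avoiding an immediate repeat, and would then be queued to start on the previous wav’s last sample:

```python
import random

def build_random_sequence(pool, count, rng=random.Random(42)):
    """Pick `count` wavs from `pool`, never playing the same wav
    twice in a row, to form one unbroken concatenated line."""
    sequence = []
    for _ in range(count):
        # exclude the previous pick so content never repeats back-to-back
        choices = [w for w in pool if not sequence or w != sequence[-1]]
        sequence.append(rng.choice(choices))
    return sequence

seq = build_random_sequence(["amb_a.wav", "amb_b.wav", "amb_c.wav"], 8)
# Each entry would be scheduled so its first sample lands exactly on the
# previous wav's last sample (the gapless concatenation requested above).
```

The scheduling itself (sample-accurate queuing) is the part that would need engine support; this only shows the "random in content, not in time" selection.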
When you say streaming, do you mean streaming from disc or from memory? Since "sentences" is something we might use a lot, and since the data most likely will be statically loaded into memory, it feels rather bad to exhaust the disc by streaming data from it.
Currently it means stream from disk. It is possible for low-level FMOD to stream from memory, but for the event system you’d have to load it into memory first. You could possibly do this through our new load callback that we are adding. Another option is that we add a new wavebank option, ‘stream from memory’, which loads the FSB into memory first (this can be dangerous if memory usage is a concern, but that’s up to the user).
In theory FMOD’s software mixer could do gapless stitching of static samples, but it requires a bit of a change to make the mixer suddenly jump from one sound to another. Hardware is out of the question because, for example, DirectSound hardware buffers have no way of doing this, but memory streaming is possible.
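The mixer change described above — jumping straight from one sound to the next inside a mix block — can be sketched in a few lines of toy Python (this is a simplified model, not FMOD’s actual mixer): when the current sound’s read pointer hits its last sample mid-block, the same block simply continues from the next sound’s first sample, so no silence ever enters the output.

```python
def render(sounds, block_size):
    """Toy software mixer: pull fixed-size blocks from a queue of sounds;
    when one sound runs out mid-block, continue that same block from the
    next sound, producing a gapless output stream."""
    queue = list(sounds)
    pos = 0                      # read position inside the current sound
    blocks = []
    while queue:
        block = []
        while len(block) < block_size and queue:
            src = queue[0]
            take = min(block_size - len(block), len(src) - pos)
            block.extend(src[pos:pos + take])
            pos += take
            if pos == len(src):  # current sound exhausted: jump to next
                queue.pop(0)
                pos = 0
        blocks.append(block)
    return blocks

print(render([[1, 2, 3], [4, 5]], 4))  # → [[1, 2, 3, 4], [5]]
```

A hardware buffer can’t do this because the engine has no hook at the exact sample where one sound ends, which is why the thread keeps coming back to software mixing or streaming.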
Streaming from disc isn’t an option for us since we are already having problems with disc access, and with the upcoming next-gen games I have a feeling it will be even worse, especially on the 360. As I see it, streaming from memory would be the way to go, or skipping the perfect-stitch thing and instead crossfading the components. In that case it would be valuable to be able to control the crossfades graphically, which as I see it can’t be done the way the designer tool works right now.
Well yes, that’s only for absolutely sample-accurate gapless stitching (how commonly is this needed? We use it mainly for stitching voice commentary, which is usually streamed). There’s always the polling method, which could have a gap of about 16ms at worst (based on the framerate of the game), and if it was overlapped with a crossfade like you say, it wouldn’t really be noticeable. The sound definition properties could probably let you control the crossfade so you could preview/control it.
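The crossfade that would mask such a gap is a standard equal-power overlap. As a rough illustration (a generic DSP sketch, not anything from the designer tool; at 48 kHz a 16 ms fade would be about 768 samples):

```python
import math

def crossfade(tail, head, fade_len):
    """Equal-power crossfade: overlap the last `fade_len` samples of `tail`
    with the first `fade_len` samples of `head`, masking a small timing gap
    between the two sounds."""
    assert fade_len <= len(tail) and fade_len <= len(head)
    out = list(tail[:-fade_len])
    for i in range(fade_len):
        t = (i + 1) / fade_len
        gain_out = math.cos(t * math.pi / 2)   # outgoing sound fades down
        gain_in = math.sin(t * math.pi / 2)    # incoming sound fades up
        out.append(tail[len(tail) - fade_len + i] * gain_out + head[i] * gain_in)
    out.extend(head[fade_len:])
    return out
```

Because cos² + sin² = 1, the perceived power stays roughly constant through the overlap, which is why a few milliseconds of fade hides a polling-sized gap.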
We would probably be using it quite a lot for building up dynamic loops and such, and in a worst-case scenario we might have a couple of these "loops" playing which, if streamed from disc, could be rather disastrous. In the case of these dynamic loops a small crossfade would do just fine. Tweaking the crossfades in the sound def view doesn’t sound like an optimal solution, but I guess it would work. A graphical alternative would of course be better, but I guess that means a lot more work, so it would take longer for the feature to appear in a release.
[quote="brett"]In theory FMOD’s software mixer could do gapless stitching of static samples, but it requires a bit of a change to make the mixer suddenly jump from one sound to another. Hardware is out of the question because, for example, DirectSound hardware buffers have no way of doing this, but memory streaming is possible.[/quote]
Hmmm… I’m not a programmer, but I remember one of our programmers solving this problem by appending the samples together in memory as one sample, then jumping to different offsets within this glued-together sample during playback. This was much easier on the CPU than jumping between different samples. Could this be a possible solution? Just wondering.
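The glued-sample idea above is easy to sketch. A hypothetical Python illustration (the function name is invented; real sample data would be PCM frames, not small integers): concatenate the buffers once, remember each sound’s start offset, and playback becomes a cheap seek within a single buffer.

```python
def glue_samples(samples):
    """Concatenate several sample buffers into one buffer, recording the
    start offset of each so playback can jump between sounds by seeking
    within the single glued buffer."""
    glued, offsets, pos = [], [], 0
    for buf in samples:
        offsets.append(pos)
        glued.extend(buf)
        pos += len(buf)
    return glued, offsets

glued, offsets = glue_samples([[1, 2], [3, 4, 5], [6]])
# offsets == [0, 2, 5]; "playing sound 1" means reading glued[2:5]
```

This is essentially the same thing the marker/region suggestion below achieves, just built by a tool instead of by the sound designer.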
Is it already possible to jump to different offsets within a sample during playback? Then the sound designer could build such samples him/herself, marking the offset points with markers/regions, and the programmer would only need to implement offset jumping according to those regions. Then it would not be necessary to implement sample merging in FMOD.
I don’t think jumping to offsets in the sample domain is possible, at least not through the standard designer API. I guess it could be done by interfacing the underlying FMOD Ex API, but that would be rather messy. Then again, an event doesn’t work in the sample domain but rather along a timeline, and I guess it would be possible to jump in time using the "cursor" (param00) parameter, or at least it should be.
An external solution with markers would probably also solve the problem, but I guess we all, programmers and sound designers alike, would like to have it all in one API and accessible through the designer tool, right?
We do have ‘sequential sentence’ and ‘randomized sentence’ planned, but it is a matter of getting around to it. To get perfect stitching, those sounds have to be streamed, unless you don’t mind some very small gaps. (Most static hardware samples can’t play gaplessly from one to the next, so to overcome this the sound data is streamed.)