Our sound designer has requested that we automatically play our looped sounds back to back, with a new sound from the group selected at the end of every loop.

To that end, I’m currently trying to implement the gapless streaming functionality in our engine. I’ve copied the pertinent parts of the realtime stitching example, but there are two problems with it.

The first is that the example can only deal with one sound at a time. If I want multiple independent sentences based on the same Sound (which has preloaded all of the sounds in this group as subsounds), I can’t see a way to do that without creating copies of all of the Sound objects, something I am very loath to do.

The second is that the realtime stitching works by allocating new streams dynamically rather than by changing the sentence. Since my sounds are all preloaded beforehand, I need to change the sentence to point at different subsounds, rather than swapping the subsounds around so that the sentence stays correct.

My proposed solutions (I don’t know if these are viable, but they make sense from my end-user perspective) would be the following:
1. Store the subsound sentence with the Channel, so that each Channel knows which subsound to look at next, and so that the sentence can be manipulated independently per playback.
2. Add a Channel:: (and Sound::) setSubSoundWord(), which would take an index into the sentence and the new subsound index to place at that position.

Or, is there something I’m missing that would let me do this with the current system?


  • Guy