OK, I know this has been covered already in the forum, but I didn’t really find a conclusive answer.
Background — we’re porting an existing PS2 game, and we’ve replaced the custom sound system with FMOD. So far it’s working, but now I have to implement the audio in the cutscene system.
The cutscenes are currently built from a single file containing mesh, texture, animation, vfx and audio data (5 languages), all interleaved. The audio is stored as 32-bit floating-point stereo 48 kHz wave files.
On PS2, this stuff was read into a circular buffer and redistributed to various subsystems. The audio is a pure stream, with no effects or loops.
My question is this… Should I
1 – just cheat and do a createStream at the beginning of the cutscene, and hope that the read bandwidth of the CD/DVD/UMD drive can sustain two concurrent streams (the target platforms are 360, Wii and PSP)?
2 – do the same as on the PS2? i.e. interleave raw PCM data with each animation/vfx/texture packet and somehow feed this data into FMOD?
If the answer is 2 — what method has the lowest memory footprint, and what is the recommended approach (custom codec, pcmreadcallback, openmemorypoint, etc.)?
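For option 2, the glue between the interleaved loader and the audio system is typically a ring buffer that the stream's PCM read callback drains. The sketch below stands in for FMOD's actual pcmreadcallback (whose real signature receives the sound handle, a destination pointer and a byte count) with a plain function, so it stays self-contained; the capacity is illustrative:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Single-producer/single-consumer ring buffer for decoded PCM bytes.
// The cutscene loader pushes audio packets in; the stream's read
// callback pulls fixed-size chunks out.
class PcmRing {
public:
    explicit PcmRing(size_t capacity) : buf_(capacity) {}

    // Returns bytes accepted; the caller re-queues any remainder.
    size_t write(const uint8_t* src, size_t len) {
        size_t n = std::min(len, buf_.size() - fill_);
        for (size_t i = 0; i < n; ++i)
            buf_[(head_ + fill_ + i) % buf_.size()] = src[i];
        fill_ += n;
        return n;
    }

    size_t read(uint8_t* dst, size_t len) {
        size_t n = std::min(len, fill_);
        for (size_t i = 0; i < n; ++i)
            dst[i] = buf_[(head_ + i) % buf_.size()];
        head_ = (head_ + n) % buf_.size();
        fill_ -= n;
        return n;
    }

    size_t fill() const { return fill_; }

private:
    std::vector<uint8_t> buf_;
    size_t head_ = 0, fill_ = 0;
};

// Stand-in for a PCM read callback: fill 'data' with 'datalen' bytes,
// padding with silence on underrun rather than blocking the mixer.
void pcmReadCallback(PcmRing& ring, void* data, unsigned int datalen) {
    size_t got = ring.read(static_cast<uint8_t*>(data), datalen);
    if (got < datalen)
        std::memset(static_cast<uint8_t*>(data) + got, 0, datalen - got);
}
```

The silence-padding on underrun is a design choice: a disc seek stall then produces a brief dropout instead of stalling the mixer thread.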
- collision asked 10 years ago
I think I’d make a floating-point custom stream (does it have to be floating point? That’s double the bandwidth for no reason) and feed your data into it; as long as you’re doing the deinterleaving yourself, it should work no problem.
When you create the stream it will immediately call your read callback with two requests for data, one to fill the front buffer and one for the back buffer. You would call playSound paused, then unpause when the movie starts.
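On the bandwidth point: since the stream is pure playback with no effects, the 32-bit float source could be converted to 16-bit PCM when the packets are demuxed, halving what the disc has to deliver. A minimal conversion sketch, with simple clamping (dithering omitted for brevity):

```cpp
#include <cstdint>
#include <vector>

// Convert 32-bit float samples (nominal range -1.0..1.0) to signed
// 16-bit PCM, halving the stream's bandwidth. Out-of-range values
// are clamped rather than allowed to wrap.
std::vector<int16_t> floatToPcm16(const std::vector<float>& in) {
    std::vector<int16_t> out;
    out.reserve(in.size());
    for (float s : in) {
        if (s > 1.0f) s = 1.0f;
        if (s < -1.0f) s = -1.0f;
        out.push_back(static_cast<int16_t>(s * 32767.0f));
    }
    return out;
}
```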