We’re hoping to use the exact playback position within an audio channel (i.e. the progress through a loop) to drive animations in our iOS title. It seems like Channel::getPosition is limited to returning values that only increment by the DSP buffer size, which is too coarse for our needs. Can FMOD provide a greater level of position accuracy for a playing sound?
- graham.mcd asked 7 years ago
FMOD polls the output buffer and mixes in blocks, to save your CPU from dying a horrible death. What you’re asking for is ‘time’ rather than samples, so between each 5 or 20ms block (depending on what you’ve set with System::setDSPBufferSize), you’d want an interpolated time value. You could do that yourself with a DSP callback (System::createDSP, System::addDSP) and add the interpolated delta to FMOD’s position if you like. See if that works for you.
Looking a bit more into it, the jitter frames do happen exactly after a 10ms late callback. The times between frames are consistent and fairly accurate now (around 800 samples, as expected), but right after the late callback they are about half (300ish). So in those frames, the velocities of objects look jittery. Looking at the DSP times, this is starting to make much more sense. They are coming in spaced evenly at 1024 every time, but when there is a long 30ms callback, the timer interpolates to 1.5x the buffer length and I render a future frame. In the next cycle, I render a frame only halfway ahead of that previous one (hence the deltas between them dropping to 800/2) and that looks jittery.
Not sure if that helps, but it does confirm that the jitter is correlated with the odd 30ms callbacks.
Thanks a bunch for your reply. I’ve implemented this and it almost works (frustratingly so). Actually the first time it worked perfectly, but I was disappointed to see that it was just a fluke (somehow the render and audio threads must have lined up).
Anyway, the odd behavior I’m getting now is that the DSP callback fires very consistently, but once in a while fires 10ms later than usual. I’m running at 48kHz and get something like the following:
The 30s occur regularly, about every 7-8 callbacks. I’m running a 1024,2 buffer; as I raise it, the behavior becomes more frequent (every 2-3 calls) but remains a 10ms delta. This makes sense (I guess), since the time between callbacks is now much larger. I’m attempting to run my game logic synced to the audio time, since it’s a rhythm game, and this is causing some odd jittering (much better than before, without interpolation, but still noticeable). At a crazy buffer size around 10K, for example, the jittering occurs only rarely, but of course the sound latency is horrendous.
I think I can probably work around this, but it seems odd, so I figured I would ask if this is expected behavior. Do you think something in my app might be starving the callback, for example? I’ve added it to the head node, as you suggested, so it takes no CPU as it is. I’ve also inlined the timers for the test, out of paranoia that there were some wacky function-call delays going on. I even make sure not to printf between the logic in case there is odd behavior there (nothing changes even with the printf commented out).
static double last_read_time = 0;

FMOD_RESULT F_CALLBACK timerDSPCallback(
    FMOD_DSP_STATE *dsp_state,
    float *inbuffer,
    float *outbuffer,
    unsigned int length,
    int inchannels,
    int outchannels)
{
    LARGE_INTEGER freq, time;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&time);

    double ctime = (double)time.QuadPart / (double)freq.QuadPart; // getCurrTime();
    double delta = ctime - last_read_time;
    last_read_time = ctime;

    printf("delta %f\n", delta * 1000.0);
    return FMOD_OK;
}
The game logic works off the DSP’s PCM samples (I should probably switch this to MS, but I wanted to run as low level as possible until everything sort of worked).
uint64 curr_time = sfxGetDSPClock();
double last_dsp_time = sfxGetLastReadTime();//ensure this is called first
double curr_sys_time = getCurrTime(); //and this is called second
double delta_time = curr_sys_time-last_dsp_time;
uint64 dt = delta_time * 48000;
curr_time += dt;
After which curr_time flows into the game logic. The odd part is, I can’t see why the extra 10ms between the DSP calls would actually cause any issues, since I think it should just interpolate over a longer interval. I also don’t hear any music chopping, which is confusing (I first did this through addDSP and memcpy’ed the inbuffer to the outbuffer). My only guess as to why it’s jittering is that somehow the DSP clock updates to the next block while the callback hasn’t fired yet, so there is a mismatch between curr_time and the offset I should be adding.
Anyway, sorry for the code spew, I’ve mostly posted it in case there is something obvious I’m screwing up. If this isn’t typical behavior and you can’t think of something off the top of your head, I’ll try to build a lightweight test app and run FMOD solo and see if I can replicate this.
Thanks again for the help, quite enjoying FMOD so far even with all this trouble.
I should add: if you insert this DSP into the DSP network using System::getDSPHead and then DSP::addInput, you’ll get a callback without interrupting the signal flow (in this case no audio data feeds into it; you are just creating a dangling node).
If you used System::addDSP, you would have to memcpy the inbuffer to the outbuffer to avoid things going silent, because addDSP inserts itself into the current signal flow. I’d use the first method (less CPU).
Sorry, yes: you use System::createDSP and set a read callback.
In that callback, you are now being called from the mixer thread every time it mixes a block of data. This means the position information is sample accurate and exactly whatever your DSP buffer size is set to (i.e. System::getDSPBufferSize, blocksize). If the blocksize is 1024, for example (as it is on Windows by default), then at 44.1kHz each callback is exactly 1024/44100 = 23.2199…ms apart, because it fires every 1024 output samples.
The trick to getting better than 23ms granularity, though, is to interpolate yourself with a clock. Inside your callback, call timeGetTime or QueryPerformanceCounter and note it down as the ‘last tick clock’; in your main thread, when you poll, use the current timeGetTime or QueryPerformanceCounter minus the ‘last tick clock’ as the delta between the mixer callback and your current point in time. That is how you can get millisecond accuracy between DSP updates.
I’ve scoured the docs some more, and if I understand correctly, the way to do this is to register a READCALLBACK with a custom DSP. I’m still a bit confused about how this yields a stable time though. Is the idea to start a system timer when the callback triggers and then use the dsp_clock + timer = interpolated_clock in the rest of the app?
Also, does System::update have to be called more frequently since it fires callbacks, or are DSP callbacks special? If the callback isn’t fired from System::update, which thread will the call be made from?
Any input appreciated…
I’m trying to do a similar thing, but I don’t quite understand your reply. What sort of DSP should I create? I’m also not sure how to assign a callback to a DSP, since it doesn’t have a setCallback method like system or channel. A rough code sample would be greatly appreciated.
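For anyone landing here later, here is a rough sketch of the setup described above, against the FMOD Ex C++ API (error checking omitted; this needs the FMOD headers and library to build, so treat it as a sketch rather than tested code):

#include <fmod.hpp>
#include <cstring>

FMOD_RESULT F_CALLBACK timerRead(FMOD_DSP_STATE *dsp_state,
                                 float *inbuffer, float *outbuffer,
                                 unsigned int length,
                                 int inchannels, int outchannels)
{
    // Runs on the mixer thread, once per mix block. Note the wall clock here
    // ('last tick clock') and interpolate from it on the main thread.
    // As a dangling node it receives silence; output silence so nothing is added.
    memset(outbuffer, 0, length * outchannels * sizeof(float));
    return FMOD_OK;
}

void attachTimerDSP(FMOD::System *system)
{
    FMOD_DSP_DESCRIPTION desc;
    memset(&desc, 0, sizeof(desc));
    strcpy(desc.name, "timer");
    desc.channels = 0;          // 0 = any channel count
    desc.read     = timerRead;

    FMOD::DSP *dsp = 0;
    system->createDSP(&desc, &dsp);

    // Hang it off the head of the DSP network so it is called every mix,
    // without sitting in (or altering) the audible signal path.
    FMOD::DSP *head = 0;
    system->getDSPHead(&head);
    head->addInput(dsp, 0);
    dsp->setActive(true);       // DSPs created this way start inactive
}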