Hi. I am currently using Core Audio on Mac OS X and I want to switch to FMOD instead.

I receive streaming audio in a custom format (proprietary codec) through my own networking code. I decode this audio into PCM and queue it. Core Audio lets me create something known as an AUGraph, which has a render callback that is invoked periodically and asks me to write a certain number of bytes of audio data into a buffer. During the callback I dequeue the PCM and write it into the buffer, and thus the audio is played.
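To make the pull model concrete, here is a minimal sketch (plain C++, not Core Audio or FMOD itself; the function names are mine) of the queue-plus-callback arrangement described above: decoded PCM accumulates in a queue, and the callback drains exactly as many bytes as the audio system asks for, padding with silence on underrun.

```cpp
#include <algorithm>
#include <cstring>
#include <deque>

// Decoded PCM bytes, filled by the networking/decoding thread.
static std::deque<unsigned char> g_queue;

void enqueuePCM(const unsigned char *data, size_t len)
{
    g_queue.insert(g_queue.end(), data, data + len);
}

// Called periodically by the audio system with a buffer to fill.
void renderCallback(unsigned char *out, size_t want)
{
    size_t have = g_queue.size() < want ? g_queue.size() : want;
    std::copy(g_queue.begin(), g_queue.begin() + have, out);
    g_queue.erase(g_queue.begin(), g_queue.begin() + have);
    std::memset(out + have, 0, want - have);   // silence on underrun
}
```

(A real implementation would also need locking or a lock-free ring buffer, since the producer and the audio callback run on different threads.)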

There are two specific features of Core Audio that I am relying on and the purpose of this post is to ask how to implement something similar with FMOD.

First, I am using Core Audio's clock to obtain a precise time offset into the audio I'm playing. Core Audio provides nanosecond precision here, which I don't need; millisecond precision would be good enough, and microsecond precision would be better.
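For what it's worth, millisecond precision doesn't have to come from an API clock at all: any sample-accurate play cursor (for example, a position reported in PCM samples) converts directly by arithmetic against the playback rate. A minimal sketch (the function name is mine):

```cpp
#include <cstdint>

// Convert a play cursor measured in samples to milliseconds,
// given the rate the samples are being consumed at.
std::uint64_t samplesToMilliseconds(std::uint64_t samples, unsigned int rateHz)
{
    return samples * 1000ULL / rateHz;
}
```

For example, 1595 samples consumed at 1595 Hz is exactly 1000 ms.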

Second, I am able to set the audio playback rate precisely. For example, I can specify that playback should occur at a sampling rate of 1595 Hz.

I think that I want to use the technique of "real time stream stitching" as shown in the sample of the same name.

Are the features that I am describing available in FMOD and how does one go about doing this?



If you want a custom audio callback that isn't tied to the output rate, you can use a user-created stream. Channel::getPosition would be used to get the playback position, and Channel::setFrequency could be used to set the rate to 1595 Hz.
I would recommend looking at the usercreatedsound example.
A higher-precision sound callback would be done through createDSP with your own DSP callback. That runs as part of the mixer and is called in very small blocks with the lowest latency, but it is locked to the output rate.
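Putting the user-stream route together, a sketch against the FMOD Ex-era C++ API (the same vintage as the usercreatedsound example) might look like the following. It requires the FMOD SDK to build; `dequeuePCM` is a hypothetical stand-in for the questioner's own PCM queue, and the channel count, format, and buffer sizes are assumptions to adjust.

```cpp
#include <cstring>
#include <fmod.hpp>   // FMOD Ex API; names differ slightly in newer FMOD versions

// Hypothetical: drains up to maxBytes of decoded PCM from your own queue,
// returning the number of bytes actually copied.
extern unsigned int dequeuePCM(void *dest, unsigned int maxBytes);

// FMOD calls this whenever the stream needs more PCM data.
FMOD_RESULT F_CALLBACK pcmRead(FMOD_SOUND * /*sound*/, void *data, unsigned int datalen)
{
    unsigned int got = dequeuePCM(data, datalen);
    if (got < datalen)                                    // underrun: pad with silence
        std::memset((char *)data + got, 0, datalen - got);
    return FMOD_OK;
}

void startUserStream(FMOD::System *system)
{
    FMOD_CREATESOUNDEXINFO exinfo;
    std::memset(&exinfo, 0, sizeof(exinfo));
    exinfo.cbsize           = sizeof(exinfo);
    exinfo.numchannels      = 2;                          // assumption: stereo
    exinfo.format           = FMOD_SOUND_FORMAT_PCM16;    // assumption: 16-bit PCM
    exinfo.defaultfrequency = 44100;
    exinfo.decodebuffersize = 4096;                       // samples per pcmRead call
    exinfo.length           = 44100 * 2 * sizeof(short);  // nominal looped length
    exinfo.pcmreadcallback  = pcmRead;

    FMOD::Sound   *sound   = 0;
    FMOD::Channel *channel = 0;
    system->createSound(0, FMOD_OPENUSER | FMOD_LOOP_NORMAL, &exinfo, &sound);
    system->playSound(FMOD_CHANNEL_FREE, sound, false, &channel);

    channel->setFrequency(1595.0f);                       // custom playback rate

    unsigned int ms = 0;
    channel->getPosition(&ms, FMOD_TIMEUNIT_MS);          // millisecond play cursor
}
```

FMOD_TIMEUNIT_PCM can be used instead of FMOD_TIMEUNIT_MS if a sample-accurate cursor is wanted.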
