
Hi,

I’m coding a realtime spectral analysis viewer, and I’m having trouble writing a DSP that downsamples the input stream (the input buffer in the read callback function) before sending it to the output stream, so that the output buffer is half the size of the input buffer.

I tried to change the channel frequency with SetFrequency (setting it to half the frequency used with System_Init()), but that only makes the sound play half-pitched; it does nothing to reduce the sampling rate.

So how can I set the DSP input sampling frequency to f and the DSP output sampling frequency to f / 2?
And do I need to, and if so how can I, fix the DSP input buffer length to n and the output buffer length to n/2? (If not possible, I could
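For the rate reduction itself, here is a minimal sketch in plain C of a 2:1 decimator you could call from a read callback. It is not tied to any particular library's callback signature; the function name and buffer handling are purely illustrative, and the pair-averaging is only a very crude anti-aliasing filter (a real implementation would low-pass properly before decimating):

```c
#include <stddef.h>

/* Crude 2:1 decimator: averages each pair of input samples into one
 * output sample.  The averaging acts as a rough low-pass filter before
 * the rate reduction.  `inlen` is the number of input samples (assumed
 * even); returns the number of output samples written (inlen / 2), so
 * `out` only needs to be half the size of `in`. */
static size_t decimate_by_2(const float *in, size_t inlen, float *out)
{
    size_t i;
    for (i = 0; i < inlen / 2; i++)
        out[i] = 0.5f * (in[2 * i] + in[2 * i + 1]);
    return inlen / 2;
}
```

Inside a read callback you would run this once per channel (or on interleaved frames), then hand the half-length buffer to the analysis stage.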


I found a way to fix the output sampling rate by calling System_SetOutputFormat(). This seems to work.

Finally, I’m sorry, but I have some more questions:

  • when calling System_SetOutputFormat(), is it guaranteed that this sets the frequency of the decoded stream on the DAC of the sound card? (I thought the stream would be downsampled or upsampled to match the hardware’s limits.)

  • what exactly do Channel_SetFrequency() and DSP_SetDefaults() do? They both have a "Frequency" parameter, but is it purely informational, or does it have some effect?
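For what it may be worth while waiting for an authoritative answer: on most audio engines a channel's "frequency" is the playback (resampling) rate of the sound, not just metadata, which would explain the half-pitched result above, since halving it makes the mixer step through the source at half speed. Here is a sketch in plain C of that idea (this is *not* the actual Channel_SetFrequency implementation, just an illustration of playback-rate resampling via linear interpolation):

```c
#include <stddef.h>

/* Step through `src` at `ratio` source samples per output sample,
 * linearly interpolating between neighbouring samples.
 * ratio = playback_frequency / source_frequency:
 *   ratio = 1.0 copies the sound unchanged,
 *   ratio = 0.5 plays it at half speed, i.e. an octave lower.
 * Returns the number of output samples written. */
static size_t resample_linear(const float *src, size_t srclen,
                              double ratio, float *out, size_t outmax)
{
    size_t n = 0;
    double pos = 0.0;
    while (n < outmax && pos < (double)(srclen - 1)) {
        size_t i = (size_t)pos;            /* integer part: sample index */
        double frac = pos - (double)i;     /* fractional part: blend factor */
        out[n++] = (float)((1.0 - frac) * src[i] + frac * src[i + 1]);
        pos += ratio;
    }
    return n;
}
```

Note that this changes pitch and duration together, which is exactly the symptom described in the original question; it is a different operation from fixing the stream's output format.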
