I’m trying to create a plugin for Designer that will allow me to cut a user-selected amount of time from the end of a sound file. I’m new to coding, so I used the dsp_gain example as a starting point and tried altering it to suit my purposes.

I mostly changed this:
[code]
FMOD_RESULT F_CALLBACK dspread(FMOD_DSP_STATE *dsp, float *inbuffer, float *outbuffer, unsigned int length, int inchannels, int outchannels)
{
    dspgain_state *state = (dspgain_state *)dsp->plugindata;
    unsigned int count;
    int count2;
    int channels = inchannels; // outchannels and inchannels will always be the same because this is a flexible filter.

    for (count = 0; count < length; count++)
    {
        for (count2 = 0; count2 < channels; count2++)
        {
            outbuffer[(count * channels) + count2] = inbuffer[(count * channels) + count2] * state->gain;
        }
    }

    return FMOD_OK;
}
[/code]

The way I understood this piece of code was:
outbuffer[from sample 0 to last sample] = inbuffer[from sample 0 to last sample] * gain
This results in every sample being multiplied by the gain selected by the user.

For my plugin, I altered it to this ("cut" is a time in ms):
[code]
FMOD_RESULT F_CALLBACK dspread(FMOD_DSP_STATE *dsp, float *inbuffer, float *outbuffer, unsigned int length, int inchannels, int outchannels)
{
    dspcut_state *state = (dspcut_state *)dsp->plugindata;
    unsigned int count;
    int count2;
    int channels = inchannels; // outchannels and inchannels will always be the same because this is a flexible filter.

    for (count = 0; count < length - (state->cut * 44.1); count++)
    {
        for (count2 = 0; count2 < channels; count2++)
        {
            outbuffer[(count * channels) + count2] = inbuffer[(count * channels) + count2];
        }
    }

    return FMOD_OK;
}
[/code]

I thought that this would result in:
outbuffer[from sample 0 to "cut" milliseconds before last sample] = inbuffer[from sample 0 to "cut" milliseconds before last sample]

But what I get is a cyclical noise added to the sound files that increases in volume as I increase "cut." Could someone help me out?

Hi phantomimage,

The way DSP callbacks work is that they are called many times. The system reads sound data in fixed-size blocks; on PC these blocks are 1024 samples long. Every time it reads a block, it calls the DSP callback. From looking at your code, it appears you are expecting ‘length’ to be the length of the whole sound, when in fact it is only the length of the current block (1024).

If you want to make a DSP which will cut off a sound after X ms, you will have to accumulate the total number of samples processed so far in your DSP state. Once you reach the cutoff point, set the rest of that block to zero, then memset every block after that to zero. If you want a DSP which will trim off X ms from the end of the sound, you will have to determine the absolute cutoff point by subtracting that amount from the length. You can get the length of a sound using FMOD_Sound_GetLength.
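Here is a minimal sketch of that accumulating approach (not the library’s own example). It assumes two hypothetical fields added to your dspcut_state: ‘samplesread’, initialized to zero in your create callback, and ‘cutoffsample’, the absolute sample index where output should go silent, worked out beforehand from the sound’s length minus the user’s cut converted to samples:

[code]
FMOD_RESULT F_CALLBACK dspread(FMOD_DSP_STATE *dsp, float *inbuffer, float *outbuffer, unsigned int length, int inchannels, int outchannels)
{
    dspcut_state *state = (dspcut_state *)dsp->plugindata;
    unsigned int count;
    int count2;
    int channels = inchannels;

    for (count = 0; count < length; count++)
    {
        /* samplesread persists in the state between callbacks, so this compares
           against the position in the whole sound, not just in this block. */
        int silent = (state->samplesread + count >= state->cutoffsample);

        for (count2 = 0; count2 < channels; count2++)
        {
            outbuffer[(count * channels) + count2] = silent ? 0.0f : inbuffer[(count * channels) + count2];
        }
    }

    state->samplesread += length;    /* accumulate across blocks */

    return FMOD_OK;
}
[/code]

Keeping the running total in the state is what lets the callback see past the single block it is handed on each call.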

If you just want to stop playback of a sound at a specific point, you can use FMOD_Channel_SetDelay with FMOD_DELAYTYPE_DSPCLOCK_END, which sets the exact point at which the channel will stop. This is in absolute mixer samples, so you will have to add the desired length to the sound’s start time. You can use FMOD_System_GetDSPClock to get the current clock value.
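As a rough sketch of that approach using the FMOD Ex C API (the helper name stop_after is made up; FMOD_64BIT_ADD is the hi/lo pair add macro from fmod.h, and if your header version lacks it you can do the carry by hand):

[code]
#include "fmod.h"

/* Hypothetical helper: stop 'channel' after 'lengthms' more milliseconds of playback. */
FMOD_RESULT stop_after(FMOD_SYSTEM *system, FMOD_CHANNEL *channel, unsigned int lengthms)
{
    unsigned int hiclock, loclock, lengthsamples;
    int outputrate;
    FMOD_RESULT result;

    /* Current DSP clock, in absolute mixer samples (64-bit value split into two 32-bit halves). */
    result = FMOD_System_GetDSPClock(system, &hiclock, &loclock);
    if (result != FMOD_OK) { return result; }

    /* Mixer output rate, needed to convert milliseconds to mixer samples. */
    result = FMOD_System_GetSoftwareFormat(system, &outputrate, 0, 0, 0, 0, 0);
    if (result != FMOD_OK) { return result; }

    lengthsamples = (unsigned int)(((double)outputrate * lengthms) / 1000.0);

    /* 64-bit add across the hi/lo pair: clock = clock + lengthsamples. */
    FMOD_64BIT_ADD(hiclock, loclock, 0, lengthsamples);

    /* The channel will stop exactly at this mixer sample. */
    return FMOD_Channel_SetDelay(channel, FMOD_DELAYTYPE_DSPCLOCK_END, hiclock, loclock);
}
[/code]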
