
Hi,

I'm using the codec callback example that ships with the Mac FMOD Ex examples (for some reason the codec example was not included in the iPhone distribution) to do AAC decoding using the AudioToolbox API.

The FMOD docs suggest that I use the default filesystem callbacks, but if I use AudioToolbox I need to handle the file I/O myself, is that correct?
I.e. I can't just pass AudioToolbox the filehandle and tell it to decode x bytes of data; it requires the file to be opened through its own API.

Also, I am seeing odd behavior in my FMOD_CODEC_SETPOSITIONCALLBACK. The callback function is called repeatedly with no change in the position parameter (always 0). The following call returns FMOD_OK, and the loop continues indefinitely.

[code:xipr2wbs]
result = codec->fileseek(codec->filehandle, position, 0);
[/code:xipr2wbs]

Thanks for your help.


Well, that's part of writing a codec; it is your job to work that out :p

FMOD asks for sizebytes worth of sample data. Based on the number of channels and the format of the output data you can convert that sizebytes figure into samples, which might be more compatible with what the AAC decoder requires.

You will need to look at the functionality of the decoder and how it works to determine how much raw data it needs to get the number of samples FMOD wants.
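To make the conversion concrete, here is a minimal sketch of turning FMOD's requested byte count into sample frames and then into AAC packets to decode. The helper names are hypothetical, and the 1024 frames-per-packet figure is the usual AAC-LC packet size, assumed here for illustration:

```c
#include <assert.h>

/* Hypothetical helper: convert FMOD's requested byte count (sizebytes)
 * into PCM sample frames, given the output channel count and the size
 * of one sample (2 bytes for PCM16). */
static unsigned int bytes_to_frames(unsigned int sizebytes,
                                    unsigned int channels,
                                    unsigned int bytes_per_sample)
{
    return sizebytes / (channels * bytes_per_sample);
}

/* Hypothetical helper: convert a frame count into the number of AAC
 * packets the decoder must produce, rounding up because a partial
 * packet still requires a full decode. */
static unsigned int frames_to_aac_packets(unsigned int frames,
                                          unsigned int frames_per_packet)
{
    return (frames + frames_per_packet - 1) / frames_per_packet;
}
```

For example, a 16384-byte request for stereo PCM16 works out to 4096 frames, i.e. four 1024-frame AAC packets.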


OK, I think I'm calculating the compressed size correctly now.
I ended up scanning the entire file to build a packet description table using Audio Queues, because kAudioFilePropertyPacketToByte was not working for me.

Just to make sure, does this look right to you? It’s where I set the format for the LPCM capture buffer:

[code:2dfbqqdb]_streamInfo.mCaptureFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;[/code:2dfbqqdb]

I’m still working on the codec.. to be continued..


I just had a quick look at the offline decoding example provided by Apple. It looks to me like you specify input buffers for the Audio Queue and then do the offline decode, which fills the output buffer. I didn't see anything about having to create the file with their API, so you should be OK.

Without seeing more of your code I can't answer your setposition question; perhaps you haven't filled out the length property properly and it is constantly looping?


If those flags represent what you get back from the encoder then that sounds correct to me. You can also use kAudioFormatFlagsCanonical, which means the same thing as those three flags ORed together. You will also need to set mBitsPerChannel to 16 to fully describe that you want PCM16 output.
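For reference, the equivalence can be checked numerically. The flag values below are copied from the CoreAudioTypes.h of that era as an assumption worth verifying against your SDK; on little-endian iOS hardware kAudioFormatFlagsNativeEndian is 0, and the iOS definition of kAudioFormatFlagsCanonical was exactly the three flags ORed together:

```c
/* Flag values as they appeared in CoreAudioTypes.h at the time
 * (assumed here; check your SDK headers). Stand-in names are used so
 * this sketch compiles without CoreAudio. */
enum {
    kFlagIsFloat         = 1 << 0,
    kFlagIsBigEndian     = 1 << 1,
    kFlagIsSignedInteger = 1 << 2,
    kFlagIsPacked        = 1 << 3,
    kFlagsNativeEndian   = 0,             /* little-endian build */
    /* iOS-era definition: signed-int | native-endian | packed */
    kFlagsCanonical      = kFlagIsSignedInteger
                         | kFlagsNativeEndian
                         | kFlagIsPacked  /* == 0xC */
};
```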


You’re right! I was looking at the wrong API (Audio Files). I will proceed with Audio Queues.

I believe the length and other relevant parameters are set properly. Below is my code:

[code:1bnp74vf]

FMOD_RESULT F_CALLBACK aacOpen(FMOD_CODEC_STATE *codec, FMOD_MODE usermode, FMOD_CREATESOUNDEXINFO *userexinfo);
FMOD_RESULT F_CALLBACK aacClose(FMOD_CODEC_STATE *codec);
FMOD_RESULT F_CALLBACK aacRead(FMOD_CODEC_STATE *codec, void *buffer, unsigned int size, unsigned int *read);
FMOD_RESULT F_CALLBACK aacSetPosition(FMOD_CODEC_STATE *codec, int subsound, unsigned int position, FMOD_TIMEUNIT postype);

// Create custom AAC codec
FMOD_CODEC_DESCRIPTION aacCodec =
{
"AAC (m4a)", // Name.
0x00010000, // Version 0xAAAABBBB A = major, B = minor.
0, // Don’t force everything using this codec to be a stream
FMOD_TIMEUNIT_PCMBYTES, // The time format we would like to accept into setposition/getposition.
&aacOpen, // Open callback.
&aacClose, // Close callback.
&aacRead, // Read callback.
0, // Getlength callback. (If not specified FMOD return the length in FMOD_TIMEUNIT_PCM, FMOD_TIMEUNIT_MS or FMOD_TIMEUNIT_PCMBYTES units based on the lengthpcm member of the FMOD_CODEC structure).
&aacSetPosition, // Setposition callback.
0, // Getposition callback. (only used for timeunit types that are not FMOD_TIMEUNIT_PCM, FMOD_TIMEUNIT_MS and FMOD_TIMEUNIT_PCMBYTES).
0, // Sound create callback (don’t need it)
0
};

FMOD_RESULT F_CALLBACK aacOpen(FMOD_CODEC_STATE *codec, FMOD_MODE usermode, FMOD_CREATESOUNDEXINFO *userexinfo)
{
// Quick check file extension
NSString *filePath = [NSString stringWithCString:(char *)userexinfo->userdata];

if (!(NSOrderedSame == [[filePath pathExtension] caseInsensitiveCompare:@"m4a"]))
{
    NSLog(@"This is not a m4a file");
    return FMOD_ERR_FORMAT;
}

FMOD_CODEC_WAVEFORMAT outputFormat;
memset(&outputFormat, 0, sizeof(FMOD_CODEC_WAVEFORMAT));

outputFormat.channels     = 2;
outputFormat.format       = FMOD_SOUND_FORMAT_PCM16;
outputFormat.frequency    = 44100;
outputFormat.lengthbytes  = codec->filesize;
outputFormat.blockalign   = outputFormat.channels * 2;                   /* 2 = 16bit pcm */
outputFormat.lengthpcm    = codec->filesize / outputFormat.blockalign;   /* bytes converted to PCM samples */
outputFormat.mode         = usermode;
outputFormat.loopstart    = 0;
outputFormat.loopend      = outputFormat.lengthpcm;

codec->numsubsounds = 0;
codec->waveformat   = &outputFormat;
codec->plugindata = [[AACDecoder alloc] init];


return FMOD_OK;

}

FMOD_RESULT F_CALLBACK aacClose(FMOD_CODEC_STATE *codec)
{
return FMOD_OK;
}

FMOD_RESULT F_CALLBACK aacRead(FMOD_CODEC_STATE *codec, void *buffer, unsigned int size, unsigned int *read)
{
return codec->fileread(codec->filehandle, buffer, size, read, 0);
}

FMOD_RESULT F_CALLBACK aacSetPosition(FMOD_CODEC_STATE *codec, int subsound, unsigned int position, FMOD_TIMEUNIT postype)
{
return codec->fileseek(codec->filehandle, position, 0);
}

// part of Init code

    FMOD_IPHONE_EXTRADRIVERDATA extraData;
    memset(&extraData, 0, sizeof(FMOD_IPHONE_EXTRADRIVERDATA));
    extraData.sessionCategory = FMOD_IPHONE_SESSIONCATEGORY_MEDIAPLAYBACK;

    result = _system->init(32, FMOD_INIT_NORMAL | FMOD_INIT_ENABLE_PROFILE, &extraData);
    [self checkError:result];

    result = _system->createCodec(&aacCodec, 0);
    [self checkError:result];

[/code:1bnp74vf]

What’s happening: aacSetPosition is repeatedly called with position = 0, and continues indefinitely.

Contents of structs at the end of aacOpen callback:

[code:1bnp74vf]

(gdb) p codec
$1 = {
numsubsounds = 0,
waveformat = 0x2fffa748,
plugindata = 0x2706b0,
filehandle = 0x2703b0,
filesize = 34314128,
fileread = 0xaf9e4 <FMOD::Codec::defaultFileRead(void*, void*, unsigned int, unsigned int*, void*)>,
fileseek = 0xaf9d0 <FMOD::Codec::defaultFileSeek(void*, unsigned int, void*)>,
metadata = 0xaf98c <FMOD::Codec::defaultMetaData(FMOD_CODEC_STATE*, FMOD_TAGTYPE, char*, void*, unsigned int, FMOD_TAGDATATYPE, int)>
}
(gdb) p outputFormat
$2 = {
name = '\000' <repeats 255 times>,
format = FMOD_SOUND_FORMAT_PCM16,
channels = 2,
frequency = 44100,
lengthbytes = 34314128,
lengthpcm = 8578532,
blockalign = 4,
loopstart = 0,
loopend = 8578532,
mode = 16578,
channelmask = 0
}

[/code:1bnp74vf]


[b:tvkhnazo]EDIT[/b:tvkhnazo]: Never mind, there is no reason AudioQueueOfflineRender shouldn't return valid decompressed data; the QA tech article example shows this. There must be something wrong with my AQ code.

I'm using a rather large blockalign size of 4096, so it's always in multiples of AAC packets. I hope this isn't a problem.


I have just tried the codec_raw example; basically I merged in some simple init and create-sound code, and it works as expected here: an initial set position followed by several reads.

I noticed you are setting lengthpcm based on filesize / blockalign. This will need to be updated to reflect the number of samples in the m4a file to work properly, but unless the length was zero I don't see how that would cause infinite calling of setposition.
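The sample count referred to above can be derived from the file's packet table rather than from the compressed file size. A hedged sketch (hypothetical helper; assumes the standard 1024 frames per AAC-LC packet, with priming and remainder frames as exposed by kAudioFilePropertyPacketTableInfo):

```c
/* Hypothetical helper: derive lengthpcm for an AAC file from its packet
 * table instead of filesize / blockalign. frames_per_packet is 1024 for
 * AAC-LC; priming_frames and remainder_frames come from the file's
 * packet table info and are subtracted because they are not audible
 * output samples. */
static unsigned long long aac_length_pcm(unsigned long long packet_count,
                                         unsigned int frames_per_packet,
                                         unsigned int priming_frames,
                                         unsigned int remainder_frames)
{
    return packet_count * frames_per_packet
         - priming_frames - remainder_frames;
}
```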

All I can suggest is you send your project to us at support@fmod.org if you can’t find anything obviously wrong.


I wouldn't worry about a block size of 4096; our MP3 codec uses a block size of 1152 samples, which can be up to 4608 bytes if stereo.


Thanks Matthew.

I found that if I use FMOD_LOOP_OFF instead of FMOD_LOOP_NORMAL, the seek is only called once. But I need the loop feature.

[code:7yiv8qrv]result = _system->createStream(cString, FMOD_SOFTWARE | FMOD_LOOP_OFF | FMOD_ACCURATETIME, &info, &_soundA);[/code:7yiv8qrv]

And now I have another problem: the read callback is never called and I get an error message:

ERROR – Contact support. A codec has specified a block alignment of 16577 which is bigger than the maximum codec read size of 16384

I have no idea what the message indicates.


I've been at this for a while, but I'm having difficulties getting anything useful out of the offline render function. The output buffer is always filled with zeros.

My problem is well described in this forum post: <https://devforums.apple.com/thread/7647?tstart=0>.

I tried all the solutions but they didn’t work for me.

Also, it's difficult to return the decoded data to FMOD in the read callback in a timely manner, because (according to what I read) offline render refuses to render all frames in the input buffer. I "cheated" and read 24-32 packets into my input buffers using AudioFileReadPacketData, but even this is not working. For some reason the callback is called 3 times during offline render, even if I request 0 frames (this is AFTER the required initial 0-frame call). Same result with 1024 or 4096 frames. I tried all kinds of buffer sizes too. 😕

This is probably beyond FMOD support, so just for completeness' sake here's my decoder init code: basically everything before the offline render call.

[code:hfbh7e8i]
- (id)initWithFile:(NSString *)filePath
{

if ((self = [super init]))
{
    _filePath  = [filePath copy];
    _fileSize = 0;
    _duration = 0;
    _isFirstTime = true;
    _isSecondTime = false;
    _streamInfo.mCurrentPacket = 0;
    _streamInfo.mIsVBR = true;
    _streamInfo.mPacketIndex = 0;
    _streamInfo.mPacketsToRead = 0;
    _streamInfo.mBytesToRead = 0;
    _streamInfo.mNumPacketsPrimed = 0;

    OSStatus err = noErr;
    UInt32 propertySize = sizeof(_streamInfo);
    //AudioFileID afid = 0;

    // Open file
    if (noErr != AudioFileOpenURL((CFURLRef)[NSURL fileURLWithPath:self.filePath], kAudioFileReadPermission, 0, &_streamInfo.mAudioFile))
    {
        NSLog(@"could not open audio file. Path given was: %@", filePath);
        return nil;
    }


    propertySize = sizeof(_streamInfo.mOriginalFormat);
    // Get the audio data format
    err = AudioFileGetProperty(_streamInfo.mAudioFile, kAudioFilePropertyDataFormat, &propertySize, &_streamInfo.mOriginalFormat);
    if (err)
    {
        NSLog(@"get property fail, Error = %ld", err);
        return nil;
    }


    if (_streamInfo.mOriginalFormat.mChannelsPerFrame > 2)
    {
        NSLog(@"Unsupported Format, channel count is greater than stereo");
        return nil;
    }

    if (_streamInfo.mOriginalFormat.mFormatID != kAudioFormatMPEG4AAC) // native endian? probably only a concern for linear pcm which we don't support anyway
    {
        NSLog(@"Unsupported Format, must be MPEG-4 AAC");
        return nil;
    }

    // Get file size in bytes
    propertySize = sizeof(_fileSize);
    err = AudioFileGetProperty(_streamInfo.mAudioFile, kAudioFilePropertyAudioDataByteCount, &propertySize, &_fileSize);
    if (err)
    {
        NSLog(@"AudioFileGetProperty(kAudioFilePropertyAudioDataByteCount) FAILED, Error = %ld", err);
        return nil;
    }


    // Get duration
    UInt64 totalFrames;
    totalFrames = 0;
    propertySize = sizeof(_streamInfo.mAudioDataPacketCount);
    err = AudioFileGetProperty(_streamInfo.mAudioFile, kAudioFilePropertyAudioDataPacketCount, &propertySize, &_streamInfo.mAudioDataPacketCount);
    if (err)
        fprintf(stderr, "AudioFileGetProperty kAudioFilePropertyAudioDataPacketCount failed");


    propertySize = sizeof(_streamInfo.mPacketTableInfo);
    err = AudioFileGetProperty(_streamInfo.mAudioFile, kAudioFilePropertyPacketTableInfo, &propertySize, &_streamInfo.mPacketTableInfo);
    if (err == noErr)
        totalFrames = _streamInfo.mPacketTableInfo.mNumberValidFrames;


    if (totalFrames != 0)
        _duration = (NSTimeInterval)totalFrames / (NSTimeInterval)_streamInfo.mOriginalFormat.mSampleRate;



    // Create an output queue object with a callback
    XThrowIfError(AudioQueueNewOutput(&_streamInfo.mOriginalFormat,     // The data format of the audio to play.
                                      FinishedReadingFromBuffer,        // A callback function to use with the playback audio queue.
                                      &_streamInfo,                     // A custom data structure for use with the callback function.
                                      CFRunLoopGetCurrent(),            // The event loop on which the callback function is to be called.
                                                                        // If you specify NULL, the callback is invoked on one of the audio queue's internal threads.
                                      kCFRunLoopCommonModes,            // The run loop mode in which to invoke the callback function.
                                      0,                                // Reserved for future use. Must be 0.
                                      &_streamInfo.mQueue),             // On output, the newly created playback audio queue object.
                  "AudioQueueNew failed");



    // VBR or CBR?
    _streamInfo.mIsVBR = (_streamInfo.mOriginalFormat.mBytesPerPacket == 0 || _streamInfo.mOriginalFormat.mFramesPerPacket == 0);


    // Find max theoretical packet size
    UInt32 maxPacketSize;
    propertySize = sizeof(maxPacketSize); //kAudioFilePropertyMaximumPacketSize kAudioFilePropertyPacketSizeUpperBound
    XThrowIfError(AudioFileGetProperty(_streamInfo.mAudioFile, kAudioFilePropertyMaximumPacketSize, &propertySize, &maxPacketSize), "couldn't get file's max packet size");

    // Determine size of inbuffer / outbuffer
    _streamInfo.mInBufferByteSize = maxPacketSize * 8; // space for 8 packets max. FMOD requests up to 4 packets (4096 pcm frames) max
    _streamInfo.mInBufferMaxPacketCapacity = 8;
    //_streamInfo.mInBufferByteSize = 32768;
    _streamInfo.mOutBufferByteSize = 4 * 4096; // inbuffer holds twice as many frames as outbuffer, which is what AQ wants

    // if the file has a magic cookie, we should get it and set it on the AQ
    propertySize = sizeof(UInt32);
    OSStatus result = AudioFileGetPropertyInfo (_streamInfo.mAudioFile, kAudioFilePropertyMagicCookieData, &propertySize, NULL);

    if (!result && propertySize)
    {
        char* cookie = new char [propertySize];
        XThrowIfError (AudioFileGetProperty(_streamInfo.mAudioFile, kAudioFilePropertyMagicCookieData, &propertySize, cookie), "get cookie from file");
        XThrowIfError (AudioQueueSetProperty(_streamInfo.mQueue, kAudioQueueProperty_MagicCookie, cookie, propertySize), "set cookie on queue");
        delete [] cookie;
    }


    // channel layout?
    err = AudioFileGetPropertyInfo(_streamInfo.mAudioFile, kAudioFilePropertyChannelLayout, &propertySize, NULL);
    AudioChannelLayout *acl = NULL;
    if (err == noErr && propertySize > 0)
    {
        acl = (AudioChannelLayout *)malloc(propertySize);
        XThrowIfError(AudioFileGetProperty(_streamInfo.mAudioFile, kAudioFilePropertyChannelLayout, &propertySize, acl), "get audio file's channel layout");
        XThrowIfError(AudioQueueSetProperty(_streamInfo.mQueue, kAudioQueueProperty_ChannelLayout, acl, propertySize), "set channel layout on queue");
    }


    // Allocate playback buffers - VBR or CBR
    if (_streamInfo.mIsVBR)
    {
        for (NSInteger i = 0; i < kNumInputBuffers; i++)
        {
            // Allocate x buffers for this AQ. The queue manages these buffers.
            XThrowIfError(AudioQueueAllocateBufferWithPacketDescriptions(_streamInfo.mQueue, _streamInfo.mInBufferByteSize, _streamInfo.mInBufferMaxPacketCapacity, &_streamInfo.mInBuffers[i]),
                          "AudioQueueAllocateBufferWithPacketDescriptions failed");
        }


        // Create a table of packet descriptions, large enough to hold the whole thing
        _streamInfo.mNumAllPacketDescs = (_streamInfo.mPacketTableInfo.mNumberValidFrames + _streamInfo.mPacketTableInfo.mPrimingFrames + _streamInfo.mPacketTableInfo.mRemainderFrames) / _streamInfo.mOriginalFormat.mFramesPerPacket;
        _streamInfo.mAllPacketDescs = new AudioStreamPacketDescription[_streamInfo.mNumAllPacketDescs];

        // Fill the packet table
        UInt32 ioNumBytes = maxPacketSize;
        UInt32 ioNumPackets = 1;
        void *temp = malloc(maxPacketSize);
        SInt64 currentPacket = 0;
        UInt32 buffersFilled = 0;
        UInt32 bufferPacketsRead = 0;
        size_t bufferOffset = 0;
        for (NSInteger i = 0; i < _streamInfo.mNumAllPacketDescs; i++)
        {

            err = AudioFileReadPacketData(_streamInfo.mAudioFile, false, &ioNumBytes, &_streamInfo.mAllPacketDescs[i], currentPacket, &ioNumPackets, temp);
            if (err)
            {
                NSLog(@"exiting 102");
                exit(102);
            }

            if (ioNumPackets < 1)
                NSLog(@"packet was not read!!");


            // Prime all input buffers until they are FULL.
            // The buffers can hold kNumInputBuffers * 8 packets.

            if (buffersFilled < kNumInputBuffers)
            {
                _streamInfo.mInBuffers[buffersFilled]->mPacketDescriptions[bufferPacketsRead] = _streamInfo.mAllPacketDescs[i];

                memcpy((char *)_streamInfo.mInBuffers[buffersFilled]->mAudioData + bufferOffset, (const void *)temp, ioNumBytes);

                _streamInfo.mNumPacketsPrimed++;

                bufferPacketsRead++;
                bufferOffset += _streamInfo.mAllPacketDescs[i].mDataByteSize;

                if (bufferPacketsRead == 8)
                {
                    _streamInfo.mInBuffers[buffersFilled]->mAudioDataByteSize = bufferOffset;
                    _streamInfo.mInBuffers[buffersFilled]->mPacketDescriptionCount = 8;
                    buffersFilled++;
                    bufferPacketsRead = 0;
                    bufferOffset = 0;
                }
            }

            ioNumPackets = 1;
            ioNumBytes = maxPacketSize;
            currentPacket++;
        }
    }
    else
    {
        // TO DO if CBR is needed
        /*
        XThrowIfError(AudioQueueAllocateBuffer(_streamInfo.mQueue, bufferByteSize, &_streamInfo.mInBuffer),
                      "AudioQueueAllocateBuffer failed");

        _streamInfo.mNumAllPacketDescs = 0;
        _streamInfo.mAllPacketDescs = NULL;
         */
    }


    // Capture audio data format
    /*
    _streamInfo.mCaptureFormat.mSampleRate = _streamInfo.mOriginalFormat.mSampleRate;
    _streamInfo.mCaptureFormat.mFormatID = kAudioFormatLinearPCM;
    _streamInfo.mCaptureFormat.mFormatFlags = kAudioFormatFlagsCanonical;
    _streamInfo.mCaptureFormat.mBytesPerPacket = 4;
    _streamInfo.mCaptureFormat.mFramesPerPacket = 1;
    _streamInfo.mCaptureFormat.mBytesPerFrame = 4;
    _streamInfo.mCaptureFormat.mChannelsPerFrame = 2;
    _streamInfo.mCaptureFormat.mBitsPerChannel = 16;
    _streamInfo.mCaptureFormat.mReserved = 0;
     */

    FillOutASBDForLPCM(_streamInfo.mCaptureFormat, _streamInfo.mOriginalFormat.mSampleRate, 2, 16, 16, false, false);

    // Set offline render format
    XThrowIfError(AudioQueueSetOfflineRenderFormat(_streamInfo.mQueue, &_streamInfo.mCaptureFormat, acl), "AudioQueueSetOfflineRenderFormat");


    // allocate the capture buffer, just keep it at half the size of the enqueue buffer
    // we don't ever want to pull any faster than we can push data in for render
    // this 2:1 ratio keeps the AQ Offline Render happy
    XThrowIfError(AudioQueueAllocateBuffer(_streamInfo.mQueue, _streamInfo.mOutBufferByteSize, &_streamInfo.mOutBuffer), "AudioQueueAllocateBuffer");


    // start playing immediately
    XThrowIfError(AudioQueueStart(_streamInfo.mQueue, NULL), "AudioQueueStart failed");

    _streamInfo.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    _streamInfo.mTimeStamp.mSampleTime = 0;

    // we need to call this once asking for 0 frames (requirement)
    XThrowIfError(AudioQueueOfflineRender(_streamInfo.mQueue, &_streamInfo.mTimeStamp, _streamInfo.mOutBuffer, 0), "AudioQueueOfflineRender");



    // Enqueue all buffers, fully primed.


    // Enqueue first buffer.
    result = AudioQueueEnqueueBufferWithParameters(_streamInfo.mQueue,           // The audio queue that owns the audio queue buffer.
                                                   _streamInfo.mInBuffers[0],    // The audio queue buffer to add to the buffer queue.
                                                   0,                            // The number of packet desc of audio data in the inBuffer parameter. See Docs.
                                                   NULL,                         // An array of packet descriptions. Or NULL. See Docs.
                                                   _streamInfo.mPacketTableInfo.mPrimingFrames,
                                                   0,
                                                   0, NULL, NULL, NULL);

    if (result)
    {
        DebugMessageN1 ("Error enqueuing buffer: %d\n", (int)result);
        exit(1);
    }

    // Enqueue second buffer.
    result = AudioQueueEnqueueBufferWithParameters(_streamInfo.mQueue,           // The audio queue that owns the audio queue buffer.
                                                   _streamInfo.mInBuffers[1],    // The audio queue buffer to add to the buffer queue.
                                                   0,                            // The number of packet desc of audio data in the inBuffer parameter. See Docs.
                                                   NULL,                         // An array of packet descriptions. Or NULL. See Docs.
                                                   0,
                                                   0,
                                                   0, NULL, NULL, NULL);

    if (result)
    {
        DebugMessageN1 ("Error enqueuing buffer: %d\n", (int)result);
        exit(1);
    }

    // Enqueue third buffer.
    result = AudioQueueEnqueueBufferWithParameters(_streamInfo.mQueue,           // The audio queue that owns the audio queue buffer.
                                                   _streamInfo.mInBuffers[2],    // The audio queue buffer to add to the buffer queue.
                                                   0,                            // The number of packet desc of audio data in the inBuffer parameter. See Docs.
                                                   NULL,                         // An array of packet descriptions. Or NULL. See Docs.
                                                   0,
                                                   0,
                                                   0, NULL, NULL, NULL);

    if (result)
    {
        DebugMessageN1 ("Error enqueuing buffer: %d\n", (int)result);
        exit(1);
    }

    // Enqueue fourth buffer.
    result = AudioQueueEnqueueBufferWithParameters(_streamInfo.mQueue,           // The audio queue that owns the audio queue buffer.
                                                   _streamInfo.mInBuffers[3],    // The audio queue buffer to add to the buffer queue.
                                                   0,                            // The number of packet desc of audio data in the inBuffer parameter. See Docs.
                                                   NULL,                         // An array of packet descriptions. Or NULL. See Docs.
                                                   0,
                                                   0,
                                                   0, NULL, NULL, NULL);

    if (result)
    {
        DebugMessageN1 ("Error enqueuing buffer: %d\n", (int)result);
        exit(1);
    }




    // Clean up
    XThrowIfError(AudioFileClose(_streamInfo.mAudioFile), "AudioFileClose failed");
    free(acl);

}

return self;

}
[/code:hfbh7e8i]


I see what your problem is: you are setting codec->waveformat to the address of a stack variable, so when the open function returns the pointer is no longer valid.

As a result the lengthpcm member will contain garbage, possibly zero, causing setposition to be called constantly if you use FMOD_LOOP_NORMAL.

Also, the read function is falling over because the blockalign value (which tells FMOD to only request multiples of this number of samples from the codec read function) is too large.

If you look at the codec_raw example, the waveformat structure is actually a static.
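A minimal sketch of the lifetime fix, using a stand-in struct in place of FMOD_CODEC_WAVEFORMAT (field names here are illustrative assumptions): the codec state keeps the pointer after the open callback returns, so the storage must outlive the call, e.g. a static as in codec_raw:

```c
/* Stand-in for FMOD_CODEC_WAVEFORMAT; fields are illustrative only. */
struct waveformat {
    int channels;
    int frequency;
    unsigned int lengthpcm;
};

/* File-scope static: the storage survives after codec_open() returns,
 * so a pointer stored in the codec state stays valid. A stack local
 * here would dangle, leaving lengthpcm as garbage once the function
 * returns. */
static struct waveformat s_waveformat;

static struct waveformat *codec_open(void)
{
    s_waveformat.channels  = 2;
    s_waveformat.frequency = 44100;
    s_waveformat.lengthpcm = 8578532;
    return &s_waveformat;   /* valid after return, unlike &local */
}
```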

  • You must to post comments
0
0

Hi Hyn,

I've not taken a very deep look at your code yet, as I'm still just getting started with the offline codec APIs and FMOD, but it seems that your codec references:

codec->plugindata = [[AACDecoder alloc] init];

Should it actually be:

codec->plugindata = [[AACDecoder alloc] initWithFile:filePath];

Or something of the sort? Other than that, good luck. I hope you get it working!

  • George

Knew it had to be a stupid mistake on my part :\

Made the waveformat static and now it's working perfectly.
Many thanks!


Hi G,

You’re right, that line should be initWithFile:. I just forgot to update the code I posted.


Hi,

I have a few more codec questions; I read the docs but I don’t think I understand the system completely.

1) The setposition callback is called when FMOD wants to seek to a new position. This position is in units of [b:3ogxpxbv]postype[/b:3ogxpxbv], which for me is [b:3ogxpxbv]FMOD_TIMEUNIT_PCMBYTES[/b:3ogxpxbv]. Does this make sense if the file is in a compressed format? How does FMOD know where exactly to seek to in the file? Right now my assumption is that the FMOD [b:3ogxpxbv]codec->fileseek[/b:3ogxpxbv] function will move the filehandle to wherever it needs to be, and I don't need to do anything myself.

2) The read callback is called when FMOD needs [b:3ogxpxbv]sizebytes[/b:3ogxpxbv] of file data read from [b:3ogxpxbv]codec->filehandle[/b:3ogxpxbv], decoded, and written to [b:3ogxpxbv]buffer[/b:3ogxpxbv].
Is [b:3ogxpxbv]sizebytes[/b:3ogxpxbv] the number of compressed bytes requested to be read? And is [b:3ogxpxbv]bytesread[/b:3ogxpxbv] the number of decoded LPCM bytes actually written to [b:3ogxpxbv]buffer[/b:3ogxpxbv]? Otherwise how would FMOD know the size of the buffer?

Why should I use [b:3ogxpxbv]codec->fileread[/b:3ogxpxbv] in this case? Can’t I just read [b:3ogxpxbv]sizebytes[/b:3ogxpxbv] of data at [b:3ogxpxbv]codec->filehandle[/b:3ogxpxbv]?

I thought that writing this out would help my understanding, but I'm still very much confused…
😕


Hey hyn,

Looks like someone has written a codec for the APE format (lossless compression). Not sure if you have seen it before, but you can find it here:

http://www.ishiboo.com/~nirva/Projects/fmod_ape/mac.cpp

I've just started playing around with the code you've posted. I'm assuming there is more to the AACDecoder class than just initWithFile:? If you could provide me with some more info (you can message me if you don't want to post the code publicly), I'd like to help. It's quite an intriguing problem, and I think having HW support for FMOD on iPhone would be incredible.

Thanks.

  • George

1) The time unit of the setposition callback can be whatever you need for your codec. You tell FMOD which you prefer via the FMOD_CODEC_DESCRIPTION timeunits member; you can choose whichever is most convenient for your code. codec->fileseek will move the file handle to the position you set, in bytes. It's up to your codec to translate to the exact correct file position based on the position and type passed in.

2) The read callback is asking for sizebytes of decoded data (this is the size of the buffer that is passed in). So FMOD may ask for 1000 bytes (sizebytes); you decode some chunks and end up with, say, 800 bytes of PCM data, so you set readbytes to 800 and fill the buffer with that data.

3) You should read data from the file with codec->fileread because the "file" may be in memory, it may be a netstream, it may be a disk file, etc. We abstract the "file" interface so codecs get all that behavior for free. It’s not a good idea to assume what the "file" handle actually is (hence the void*).


Thanks for the clarification.

So if sizebytes in the read callback is the size in decoded bytes, how do I know how much compressed data to read using codec->fileread?
