
Hi,

I’m working on a 3D audio app that streams live audio from the net and spatializes it in a 3D space.

My current implementation reads the audio data from the sockets and writes it to an AudioTrack buffer, then sends it to FMOD for spatialization (I know it’s a bit inefficient to jump back and forth between the native and Java layers). I want to convert the AudioTrack data to whatever native buffer format FMOD can read, have it spatialized by the FMOD engine, and then convert it back into an AudioTrack for use by the clients on the Java layer.

Could anyone guide me on this?

Thanks!
Mustafa


Hi folks,

A bit surprised that no one has responded to this yet 😕.

Perhaps I should rephrase my question:

I receive a PCM-encoded buffer of voice data from the UDP socket I’m listening on. I then want to spatialize this audio in 3D space using the FMOD libraries. The trouble is that I need the output of FMOD, i.e. after it has done all the required convolutions, in a PCM-encoded buffer, because I then "write" this buffer into an AudioTrack object natively.

So, any ideas about which APIs I can use to get the buffer back after FMOD has done its 3D processing on the input?


Hi Mustafa, welcome to the FMOD Forums.

Sorry it’s taken so long to reply, it’s been pretty hectic around the office lately.

  1. To get PCM data into FMOD and spatialize it, you would want to make a custom Sound; check out the ‘usercreatedsound’ example.
  2. To extract data out of FMOD, you can create a custom DSP; check out the ‘dsp_custom’ example. (There is a rough sketch combining both below this list.)
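
For reference, here is a rough sketch of how those two pieces could fit together, assuming the current FMOD Core C++ API (names and signatures differ slightly in older FMOD Ex, so treat this as a sketch rather than a drop-in implementation). The ring-buffer helpers popIncoming / pushProcessed and the JNI glue that connects them to your UDP socket and your AudioTrack are hypothetical placeholders for your own plumbing, not part of FMOD.

```cpp
// Rough sketch only (FMOD Core C++ API assumed; error checking omitted).
#include <fmod.hpp>
#include <cstring>

// Hypothetical plumbing you provide, fed/drained over JNI:
bool popIncoming(short* dst, unsigned int bytes);           // PCM16 from the UDP socket
void pushProcessed(const float* src, unsigned int samples); // spatialized audio back to AudioTrack

// (1) Source callback: FMOD calls this whenever the user-created sound needs more PCM.
FMOD_RESULT F_CALLBACK pcmRead(FMOD_SOUND* /*sound*/, void* data, unsigned int datalen)
{
    if (!popIncoming(static_cast<short*>(data), datalen))
        std::memset(data, 0, datalen);   // pad with silence if the network is behind
    return FMOD_OK;
}

// (2) Capture callback: a pass-through DSP that copies the mixed, spatialized signal out.
FMOD_RESULT F_CALLBACK captureRead(FMOD_DSP_STATE* /*state*/, float* inbuffer, float* outbuffer,
                                   unsigned int length, int inchannels, int* outchannels)
{
    pushProcessed(inbuffer, length * inchannels);             // hand the floats to your own buffer
    std::memcpy(outbuffer, inbuffer, length * inchannels * sizeof(float));
    *outchannels = inchannels;                                // pass the audio through unchanged
    return FMOD_OK;
}

void setup(FMOD::System* system)
{
    // User-created, streaming, 3D sound fed by pcmRead (as in the usercreatedsound example).
    FMOD_CREATESOUNDEXINFO exinfo = {};
    exinfo.cbsize           = sizeof(exinfo);
    exinfo.numchannels      = 1;                              // mono voice stream
    exinfo.defaultfrequency = 16000;                          // match your stream's sample rate
    exinfo.format           = FMOD_SOUND_FORMAT_PCM16;
    exinfo.decodebuffersize = 1024;                           // samples per pcmRead call
    exinfo.length           = exinfo.defaultfrequency * sizeof(short) * 5;
    exinfo.pcmreadcallback  = pcmRead;

    FMOD::Sound* sound = nullptr;
    system->createSound(nullptr, FMOD_OPENUSER | FMOD_CREATESTREAM | FMOD_LOOP_NORMAL | FMOD_3D,
                        &exinfo, &sound);

    FMOD::Channel* channel = nullptr;
    system->playSound(sound, nullptr, false, &channel);

    FMOD_VECTOR pos = { 2.0f, 0.0f, 1.0f };
    FMOD_VECTOR vel = { 0.0f, 0.0f, 0.0f };
    channel->set3DAttributes(&pos, &vel);                     // place the voice in 3D space

    // Custom capture DSP (as in the dsp_custom example), added to the master channel
    // group so it sees the final mix after 3D processing has been applied.
    FMOD_DSP_DESCRIPTION dspdesc = {};
    std::strncpy(dspdesc.name, "pcm capture", sizeof(dspdesc.name));
    dspdesc.numinputbuffers  = 1;
    dspdesc.numoutputbuffers = 1;
    dspdesc.read             = captureRead;

    FMOD::DSP* dsp = nullptr;
    system->createDSP(&dspdesc, &dsp);

    FMOD::ChannelGroup* master = nullptr;
    system->getMasterChannelGroup(&master);
    master->addDSP(0, dsp);
}
```

The remaining glue would be converting the captured floats back to 16-bit PCM and writing them to your AudioTrack over JNI, plus updating the listener with set3DListenerAttributes while calling System::update() regularly.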