I have two soundbanks: one uses stereo WAVs only, the other mono WAVs only. Exported for Xbox 360 with XMA quality 50, the stereo FSB is 5MB and the mono one is 8MB.
I then exported for PS3 using MP3 quality 50. The stereo FSB is 6MB (120%) and the mono one is 30MB (375%)!
I tried converting the mono source WAVs to stereo and re-exporting, and the PS3 FSB file is exactly the same size as the mono one. I've tried the same thing with MP2 and PC MP3, and get the same FSB sizes with both mono and stereo files.
The fact that there is such a vast difference in FSB size when using mono WAVs instead of stereo makes me think the MP3 encoder is converting the mono files to stereo before encoding.
Does anyone know what's really going on?
- Jogo asked 8 years ago
It's not that it is converting the data to stereo; it's that your quality setting is converted to an MP3 bitrate when you set it.
128kbps (FMOD quality 40; the formula is kbps = quality * 3.2) is 128kbps whether the file is mono or stereo, it doesn't matter.
You should be using a smaller bitrate for mono data than stereo data.
You can do this on a platform by platform basis.
You can't compare XMA to MP3 either. They are not supposed to come out about the same size; you have to use different quality settings for different formats.
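To make the point concrete, here is a minimal sketch of the quality-to-bitrate relationship described above (kbps = quality * 3.2). The size estimate is only illustrative back-of-the-envelope arithmetic, not FMOD's actual packing logic, and the function names are made up for this example:

```python
def mp3_bitrate_kbps(quality):
    """FMOD quality (0-100) -> MP3 bitrate in kbps, per the formula above."""
    return quality * 3.2

def estimated_size_mb(quality, duration_seconds):
    """Rough encoded size in MB at a fixed bitrate.

    Note that channel count appears nowhere in this calculation:
    a fixed-bitrate MP3 stream is the same size whether the source
    is mono or stereo, which is why the mono FSB balloons.
    """
    kbps = mp3_bitrate_kbps(quality)
    return kbps * 1000 / 8 * duration_seconds / (1024 * 1024)

# 10 minutes of audio at quality 50 (160 kbps), mono OR stereo:
print(round(estimated_size_mb(50, 600), 2))  # prints 11.44
```

This is why halving the bitrate for mono banks roughly halves their size while losing little quality, since only one channel's worth of audio has to fit in the stream.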
I wonder why it works this way. If the code knows the actual kbps value when encoding MP3, why doesn't it reveal this to the user, seeing as it is useful, time-saving information? I understand that with other compression types (e.g. XMA) the code may not know the kbps, but as you say there is no link between quality values across different compression systems, so when it does know the information, why not display it?
I was also wondering: if the quality value is capped at 100, does that mean maximum quality on mono is better than maximum quality on stereo?
Thanks for your help