I am starting to look into integrating FMOD into my engine. We are working on an X360/PS3 retail game. We have a system for loading data in chunks (which I will call load units).
We want to take the whole FMOD toolchain (Designer and the event system).
I am unsure of how I will integrate everything; I would appreciate some feedback on how other users did it.
Here is a list of things I would like to do:
-Put sound events in a load unit
(I am thinking this could translate in generating a sound bank/.fev for that load unit)
-Have sounds attached to game objects, which means if I put an object in a load unit, its sounds will be added to the load unit.
From my understanding, all the data about the sound events is stored in the project file, which means I would need a project file per load unit if I want to generate a different sound bank/.fev for each load unit. But from my second requirement, it would seem I would want a project file per object.
We have an elaborate asset build system and it is unclear how I would put everything together.
Is there a way to build sound events separately, or can we only build the whole project file?
Another option would be to generate a project file per load unit (by aggregating the project files from game objects). Is this a common practice?
- luffy911 asked 8 years ago
[b]Edit: audiodev beat me to the punch, but there is still some valid stuff here.[/b]
Hi luffy911, welcome to the forum.
This is an interesting problem, we recently had to tackle this when integrating into UE3. It basically boils down to: FEV and FSB are pack files and embedding those pack files into a game engine’s custom pack file format is non-trivial.
I don’t think you will need to have a project file per object, but that is an option and it has been done before. This method is the most straightforward because it gives you a fine granularity, which is easier to work into an existing pack file format. The downside is that it’s a pain for sound designers to manipulate and maintain many projects at the same time in FMOD Designer.
On the API side EventGroups are used for fine-grain loading and unloading. It would generally be preferable to associate each object with an EventGroup. The obvious implication is that multiple objects will share a single FEV file.
[quote]Is there a way to build sound events separately, or can we only build the whole project file?[/quote]
You can only build the whole project.
[quote]-Put sound events in a load unit[/quote]
If you have a dependency system, I think the best way to achieve this is to have an event-stub type object which pulls in a dependency on the FEV object.
The FSB files are a different matter and have to be treated differently. The FSB contains the sound data and can be read in multiple ways. Each FSB can (and usually does) contain multiple sounds.
There are three loading modes for FSBs:
- ‘Load into memory’ – the FMOD System loads a whole sound at a time into memory.
- ‘Decompress into memory’ – the same, but the sound data is decompressed as it is loaded.
- ‘Stream from disk’ – the FMOD System keeps a file handle per playing sound, each pointing to a different part of the FSB file and reading small chunks at a time.
It is generally easier to leave the FSB files outside your custom package files.
Hope this helps,
Banking into sectors is done in our editor, and we’d like the sound designer not to have to handle banking on FMOD Designer’s side as well. Otherwise the sound designer would have to know how sectors are laid out and maintain a matching FSB banking.
Also, coupling FSBs to our sectors allows us to have control over our memory footprint.
1. For the music system, is it recommended to have one FSB per stream or one FSB for all streams in a cue?
2. Is the mix of event categories handled somewhere in FMOD Designer or should it be done in code?
Ok, thanks for the info.
So it would be possible to have only one project for the entire game, which will generate multiple sound banks.
I’ll look into using an EventGroup for my game objects.
I saw that the project files are pure XML; is it common practice to parse or generate them? I guess usually people will work with the programmer report or the text files.
Artificial Mind and Movement
- luffy911 answered 8 years ago
Unless you’re auto-creating the FMOD projects, the designers will need to do at least rudimentary banking in FMOD Designer. Most other features require you to import wavs into banks just to hear the audio in the tool.
I definitely agree that if you need your audio data bundled into your sectors, it’d be ideal for it to be done automatically. I’m just wondering how necessary that really is. If you can break that constraint, it’d make many things much easier.
Regarding music streams, people generally put many streams per FSB. You could have one giant one for all music, or break it down into a small number of banks (e.g. UI, missions, exploration). People usually go with a small number of music banks just because it saves on P4/server traffic (i.e. you don’t have to download a huge 1 GB FSB every time a single song changes).
Mixing is generally done in FMOD Designer by setting volumes on Event Categories. There are some workflow issues w/ that if you do multiple projects, but those are covered in other threads.
[quote]So it would be possible to have only one project for the entire game, which will generate multiple sound banks.[/quote]
That is definitely possible, but I think most devs will generally have multiple projects in a game, just probably not quite as fine-grained as one per object.
[quote]I saw that the project files are pure XML; is it common practice to parse or generate them? I guess usually people will work with the programmer report or the text files.[/quote]
Yes, it is quite common for people to parse the project xml.
It would be advantageous to have all the streams which could play at the same time, or transition from one to another, in the same FSB so they can share the same pool of streaming buffers.
(Although there is some header information loaded into memory for each sound in the FSB, this usually costs less than allocating additional streaming buffers that might not be used.)
For example: 4 FSBs × 1 stream each = 4 streaming buffers, yet each FSB can only play 1 stream at a time, so none of those buffers can be shared.
I found it more efficient memory wise to include everything that would possibly stream in a level in a single streaming FSB for that level.
That said, streaming buffers are only allocated when the Event system loads a group/event that uses an audio file in that FSB. So if your sectors are loading and unloading events/groups that correspond to the FSBs, you should be ok.
The main reason to have multiple sound projects is if you have multiple sound designers who need to work on the game at the same time or if you have so many sounds that FMOD takes a long time to open the file (this would be in the tens of thousands). If it’s just you, then it’s actually easier all in one project.
FWIW, our approach was not to parse the XML but to just have our build machine run the command-line version of Designer to build our projects.
We’re evaluating integrating FMOD Designer in our engine.
If I understand correctly, the responsibility of banking audio resources and scenaric data is up to the sound designer, through the way they choose to organize their FSB files, and whether they create multiple projects to get one or more FEV files.
If possible we’d like to transfer this responsibility to the programmer so that banking of both resources and scenaric data depends on our sector hierarchy.
Because the FSB file format is known, it seems possible to reorganize FSB files the way we want.
But if we do this, how do we link scenaric data to our newly organized FSB files?
The "-s 2" switch of fmod_designercl.exe allows you to build a FEV file with no matching FSB file. Is it available for people who want to do just that?
Is it possible to do the same thing for scenaric data? Is there a description of the FEV file format available?
Peter, you say it is common to parse the project XML. What do people use that for?
Could it be used to generate multiple FEV files out of one Designer project?
[quote]Peter, you say it is common to parse the project XML. What do people use that for?[/quote]
Generally just extracting information for their tool chain.
[quote]Could it be used to generate multiple FEV files out of one Designer project?[/quote]
Theoretically yes, but I think that would be a lot of work.
You can initialize the EventSystem with the FMOD_EVENT_INIT_USER_ASSETMANAGER flag, which will give you a callback every time a sound definition (sounddef) is created or released. This way you could use the Designer-built FEVs with custom-made FSBs.
- Guest answered 7 years ago
We use multiple projects. It’s a benefit for us on 2 counts:
- We have multiple audio designers working on separate projects simultaneously.
- Having separate projects for each level of the game decreases memory overhead for events that aren’t used in that level. (The FEV file takes up memory for event definitions even if you never load those events.)
We also generally use an EventGroup for each game object/actor – except for common sounds.
And our FEV and FSB files are not packed into the game’s pack files.
Here is a diagram of how we hope it could work:
http://img132.imageshack.us/img132/5755 … gnerwo.png
- Sound designers work on one or more FMOD Projects.
- These projects are built and loaded in our editor so that Events can be enumerated and linked to game objects.
- In our engine, sectors are the streaming unit.
- Based on our sector hierarchy, we parse the FMOD projects and create new ones, so that we can build one FEV file per sector.
- FSB files are not built at the same time as the FEV files above. They are built separately, from our editor, either with the command-line tool or the FSBank API.
- With the preloadFSB method, would FMOD System be able to link the SoundDefs in the loaded FEVs to the wavedata in the loaded FSBs?
- If not, is this why you suggested the SoundDef creation/release callback, Peter?
If I understand the callback correctly, we have a SoundDef index available, and we need to provide a matching FMOD::Sound.
- How are SoundDefs indexed when multiple FEV files are loaded into FMOD::EventSystem?
- Does this callback behavior imply that we build a list of all SoundDef indices and their matching FSB files and have it available for this callback?
Once an FMOD::Sound is created from the needed FSB file, a SoundDef is able to find its wavedata on its own, right?
UnrealEngine has a sectorization system similar to ours. If this is open for discussion, could you describe how you integrated FMOD’s asset system with theirs?
Another unrelated question: in FMOD Designer, effects can be applied to Event layers. Can they be applied to a whole Event Category as well?
Thanks a lot.
Guillaume, I think that system is totally workable, but it’ll be a fair amount of work to do and will make your audio pipeline fairly…bespoke. I’m curious what advantages you expect to realize by tightly coupling your fsbs to your sector system.
I ask because we took a simpler approach on Brutal Legend and it worked out ok. Our solution (which I talk about earlier in the thread) was basically to keep FMOD data separate from our sector assets, but grouped using EventGroups. Our file system is an elaboration on an elevator pattern + priorities. We basically throw stuff in the queue, sort it by where it is on disk (and priority) and then load it when we get there. I assume you have something similar if you’re looking into an open world game.
If you do, you can take good advantage of FMOD’s new async file APIs. It lets you do nice things like treat preloads (loadEventData) as low-priority requests, service streams at lower priority (at the cost of some extra memory), etc. This didn’t exist during BL and we had to put some gross hacks in a couple of places, but even then this system was totally shippable for us.
Anyway, I guess all I’m suggesting is that you consider a more "stock" FMOD deployment combined with some additional smarts at the FMOD file system level. I think it’d be a lot less work for your tech team, result in a simple audio pipeline, and perhaps work just as well as the more elaborate approach.
There’s no one answer, depends a lot on your game. In general, if you are a regular level based game, then you want to basically load all your data at the start of the level in one big gulp and unload it all at the end. If you’re a more complex level based game, or an open world game, you probably want to load and unload on a per object/per zone basis. Regardless, you can only build at the project level, but you have a lot of room to decide how granular each project should be.
The simplest approach is to use loadEventData on EventGroups (or whole projects). Basically, you’d associate a set of EventGroups with each "load chunk" and then call loadEventData on all of them when the chunk is loaded. You’ll probably want to reference count those groups, too, and call freeEventData when a group’s refcount drops to 0.
That scheme is orthogonal to the way the data is actually laid out on disk. You could leave the raw FMOD data (fev, fsb, etc) on your disk and be fine or you could override FMOD’s file system and package them up into your own data files. YMMV depending on your constraints.
The other thing you can look at is preloadFSB, which basically is useful if you want to slurp whole FSBs into memory at once. If you have one or more FSBs per "load chunk" then that can work fine.
The biggest and trickiest question is probably if you are ok with basically letting FMOD be in charge of data loading (possibly routed through your filesystem) or if you want/need to slurp up all of the data for, say, one level in one ginormous read. If it’s the latter, you’ll need to look at preloadFSB and structure your banks very carefully.
One more thought: in your diagram you wonder if streaming sounds need to be handled differently. I think the answer is definitely yes (though they still need to flow through your file system using FMOD’s file system hooks).
Really, though, you can think about all audio as streaming. Even "resident" sounds will be loaded asynchronously. It’s just that they stay in memory after the load. Unlike streams which may starve if given too little bandwidth, resident samples will simply play late.
There’s another interesting property of audio, which is that much of it is very latency tolerant. For any given area of the world, or any one character, it’s unlikely that they will play all of their sounds at once. In fact, many of their sounds won’t actually need to play until many seconds after you encounter them. This is a nice property that is worth trying to take advantage of.
Now, here’s an even crazier extension of that idea. What if you started thinking about resident audio memory not as a list of big discrete chunks of wav files but as an LRU cache of wav files? You stream samples into this LRU and evict old ones as the cache fills. With a sufficiently sized buffer you’ll get really nice hit rates. Cache misses basically amount to additional latency, which you can mitigate with some smart preloading. Best of all, this matches well to the granularity of the actual game audio, and allows you to "trickle" audio in off disk rather than trying to do everything you can to load it in huge chunks.
We took this approach for our voice lines in BL, and it worked great. If I had the time and the need, I’d convert all of our resident audio over to this pattern.