
Hi!

We are switching from XACT to FMOD, so I’m busy getting my hands on the tools and learning the fundamentals of FMOD. It looks great so far.

Here is a question I have about workflow in the Designer tool.

I have a few hundred lines of dialogue that I want to import into events, pretty much just naming each event after the filename (which is autogenerated from the dialogue scripts in a sortable format). I put them into a wavebank and created the sound definitions without any problems. But how do I create an event for each sound definition? In XACT I would drag the wavebank entries into the soundbank list, and then drag the sounds to create cues, in just a few steps. With FMOD I must be missing something, because it seems like I would have to create an event by hand for each sound definition. There will eventually be thousands of lines of dialogue for this project, so the time involved in creating each of those by hand will add up quickly.

Is there a more efficient way to do this? I’m thinking that I don’t really understand templates well, so maybe I can create a template to auto-generate the events for each sound definition, but as far as I can tell that just streamlines the by-hand creation process slightly by filling out a few parameters. In my case I want all the parameters to be identical other than the specific sound definition for each event. Is this even the philosophically ‘correct’ approach for doing many, many lines of streamed one shots?

Anyway thanks for any information or tips on this topic.

-andy crosby


okay, I’m a space cadet; I didn’t even think to look in the preferences.

thanks jcobb!! I owe you a beer.


sure, you’re welcome. :)

I can certainly take you up on a beer for the quick reply,
but it is not as if I write the code that makes it work or anything. 😆

A round of beers for the firelight guys!

cheers
-jason


Drag the files into the event pane.

Events and sound definitions are created for you.

Also, you can use a template event if you have custom parameters that you’d like to be set the same in all of the events upon their creation.

-jason


One major win for programmer sounds is that you don’t have the memory overhead of an event for every line of dialog. Say you have a couple of thousand dialog lines: that adds up. When all these events would be identical, it starts to look like a complete waste of memory; hence, programmer sounds.

No sound designer is seriously going to hand-edit every line in FMOD Designer. You can still put effects on your dialog event if you like, put lines in different categories, even have completely different dialog events for different purposes. The loss of freedom is balanced by the ease of managing only one event (or a handful) and by the potentially huge savings in memory.

@ Frohagen: Presumably your game knows the dialog line at runtime, so you can decide which sound to pass to FMOD. As in acrosby’s case, most games will need other information about the specific dialog line anyway, so this won’t be a problem.

@ jcobb: It’s better to mix the dialog lines offline rather than at runtime. This method does break the data-driven model a bit, but it frees the designer from hand-editing 1000+ dialog lines, which saves time you can spend on polish where it’s more needed.

A number of large scale AAA games are using programmer sounds in this way and loving life.


Aha! I was dragging onto the categories pane instead of the hierarchy pane and assumed I was doing something wrong; trying it now, it worked fine. Thank you for the prompt reply! 😀

-andy crosby


I’d started using programmer sounds in this way for our speech system, and have it up and running, but haven’t really pushed it too hard yet. It doesn’t fit ideally with the data-driven method of the event system, as mentioned, but the memory savings are likely to be crucial, so it doesn’t seem like there’s really an alternative. There were a few FMOD issues at the start, but these have been resolved. The main concern is that you’ll most likely need to handle non-blocking loading and streaming with it. The basics of the way I have it working are:

  1. Package related speech items in the same soundbank
  2. Have metadata detailing the sounds and which subsound it uses
  3. Load the soundbank to prepare the speech. I did have this using the nonblock callback to inform me when it had finished loading, but that seems to get triggered every time it loads a subsound (maybe just the way I was testing it), so it wasn’t useful for that purpose for me.
  4. When a request to play a speech item is made and the soundbank is ready (check getOpenState), the FMOD_EVENT_CALLBACKTYPE_SOUNDDEF_CREATE callback is fired, and inside that I use the metadata of the current speech item to call soundbank->getSubSound.
  5. You then need to wait until that’s ready (polling the sound’s getOpenState) to check for errors, and it will play when ready.
  6. The release of the sound works in the same polling way.
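As a sketch of the metadata in steps 2–4, a lookup table keyed by the autogenerated line ID works well. Everything below is illustrative: the struct and function names are mine, not part of the FMOD API.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical metadata record (step 2): which soundbank holds a line of
// speech, and which subsound index it occupies inside that FSB.
struct SpeechEntry {
    std::string soundbank;  // e.g. "speech_level1.fsb"
    int subsound;           // index later passed to soundbank->getSubSound()
};

// Table keyed by the dialogue line's ID (autogenerated from the scripts).
using SpeechTable = std::unordered_map<std::string, SpeechEntry>;

// Resolve a speech ID to its entry; returns nullptr if unknown so the
// caller can fail gracefully instead of throwing.
inline const SpeechEntry *findSpeech(const SpeechTable &table,
                                     const std::string &id) {
    auto it = table.find(id);
    return it == table.end() ? nullptr : &it->second;
}
```

At request time (step 4), the SOUNDDEF_CREATE callback would look up the current line here and hand the resulting subsound index to getSubSound.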

The only issues I have that could do with being explained or fixed are:

  1. There’s no way to organise the order of the sounds in the soundbank from within Designer, it seems (other than building the FSB manually). That’s essential for the subsounds to be in a known order and to allow sounds to be inserted or moved about.
  2. From Designer, if you just have the programmer sounds, it won’t actually build the soundbank, since nothing uses the sounds. I had to create a dummy sound that contained all the sound definitions so I could build an FSB.
  3. Unfortunately, with async loading you can’t know the names of the subsounds until after they’re seeked to, so you’re stuck with using the subsound index, which of course throws your metadata out if you want to insert a new subsound somewhere. A way of knowing the names of the subsounds when you load the FSB, and/or being able to call getSubSound with a name, would be handy to avoid this (for speech, where it’s not a pre-defined sequence).
  4. There were comments about not really being able to release an FSB properly (it wasn’t designed to do that, for runtime reasons, I think), but it doesn’t make sense to have lots of unused streams open, and logically it doesn’t seem to make sense to have one streaming soundbank for all speech (especially with the above problems of manipulating the order of the subsounds, and having to deal with indices that will change if you change the order).

[quote="acrosby"]How do I create multiple events for each sound definition? … Is this even the philosophically ‘correct’ approach for doing many many lines of streamed one shots?[/quote]

I think it is worth taking a look at the "programmer’s sound" sound definition type. Adding a programmer’s sound to an event layer allows you to set all the volumes, effects, etc. of the event; however, the filename of the audio is supplied by the programmer via a callback at run time.

Therefore you could create a single ‘dialog’ event, and have the programmer specify the appropriate source file at run time (perhaps algorithmically).

Adding a programmer’s sound to a layer is covered in Chapter 3 (I think?) of the Designer manual.

If most of the dialog is treated the same, this will be a much more efficient use of your time 😉
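To illustrate the shape of that callback, here is a sketch using stand-in types: `Sound`, `CallbackType`, and `eventCallback` are invented for illustration only; the real types and the exact FMOD_EVENT_CALLBACKTYPE_SOUNDDEF_CREATE signature are in the FMOD Ex headers and documentation.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Stand-in types sketching the programmer-sound callback pattern.
// None of this is the real FMOD API; it only shows the control flow.
struct Sound { std::string file; };  // stands in for FMOD::Sound

enum CallbackType { SOUNDDEF_CREATE, SOUNDDEF_RELEASE };

// Game-side registry deciding which file backs which dialog line.
static std::unordered_map<std::string, Sound> g_dialog;

// Invoked when the event's programmer sound is about to play: the
// callback receives the sound definition name and fills in the sound.
int eventCallback(CallbackType type, const std::string &name, Sound **out) {
    if (type == SOUNDDEF_CREATE) {
        auto it = g_dialog.find(name);
        if (it == g_dialog.end()) return -1;  // unknown line
        *out = &it->second;                   // hand back the sound to play
    }
    return 0;  // success
}
```

The point is that the single dialog event stays generic; all per-line decisions live in game code, keyed by whatever name or ID the game already tracks.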


So by using programmer sounds, the workflow is as follows (correct me if I’m wrong):

  • In FMOD Designer, the sound designer creates one event for each dialogue line, using the proper cue name for the dialogue according to the dialogue sheet (or whatever is used for maintaining dialogue).

  • In code a callback fetches the programmer sound and uses the passed FMOD::Sound pointer for playing the sound.

Now, I might be missing something but as I have understood it…

a) The programmer sound can’t be auditioned, making it hard to tweak whatever properties are applied.

b) Somewhere there must be a link between the programmer sound and a sound def, but where is this specified? And if there is no link, how can the FMOD::Sound fetched in the callback be valid? And if it isn’t valid, and the programmer is meant to create it from the name of the event, how is that meant to work? I haven’t found any documentation about this.

c) Has anyone used this technique on a bigger scale, and is it really better than the brute-force version of implementing dialogue just like any other event? It seems to me that using programmer sounds does have its costs, both in extra maintenance in code and in performance if strings are to be parsed.

Please give me some feedback on this…


I will investigate further about programmer events, although I’m not sure it would really be easier to have the dialogue system iterate through a list of filenames as opposed to a list of events. We also have to iterate through metadata for each line, such as subtitle text, localization text, art/animation stuff, etc., so there is already iteration through a bunch of higher-level structures that would include an event or filename anyway. Now that I understand I can drag and drop to create the events, it’s really not much work at all; at least, generating the list of events is about the same work as generating a list of filenames!


My concern wasn’t really iteration but rather parsing strings: provided that I get a cue with some name, I have to figure out which filename to link to it. By not using programmer sounds there will be direct access to the sound, which of course is a big win for performance. I guess a couple of days or even weeks of extra work for the designer is well motivated by gaining extra performance in most games. The sound designer’s pain can also be relieved in other ways, such as a script that generates the events through some kind of automation tool that hacks into the FMOD project’s XML. I don’t see this as an impossible task, though I would prefer it to be supported in FMOD Designer already, just like the support for localization, which has to be "hacked" as well if I’m not mistaken.
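If string cost is the worry, one common pattern is to resolve each cue name to a dense integer handle once, up front, so the per-play path never touches strings at all. A minimal sketch (the class and method names here are illustrative, not anything from FMOD):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// One-time resolution: map cue names to dense handles when the dialogue
// sheet is loaded, so playback needs only an array index, never parsing.
class CueRegistry {
public:
    // Register a cue once at load time; returns its stable handle.
    int intern(const std::string &cue, const std::string &filename) {
        int handle = static_cast<int>(files_.size());
        handles_[cue] = handle;
        files_.push_back(filename);
        return handle;
    }
    // Called once when a line is first referenced; -1 if unknown.
    int resolve(const std::string &cue) const {
        auto it = handles_.find(cue);
        return it == handles_.end() ? -1 : it->second;
    }
    // Hot path: plain array access, no string work.
    const std::string &filename(int handle) const { return files_[handle]; }

private:
    std::unordered_map<std::string, int> handles_;
    std::vector<std::string> files_;
};
```

With this, the string lookup happens once per line at load, and everything afterwards works with integers, so the runtime cost difference versus direct events largely disappears.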


I think the programmer sound pipeline for dialog only makes sense if you are using FMOD to port an existing title and you just need a place to dump wave data for playback.

If you are authoring content originally in FMOD Designer, having all the dialog as separate events gives audio designers the option to fine-tune each and every line, which will result in a more polished game mix than the broad strokes of templates and programmer sounds will achieve.

An example is different lines of dialog needing entirely different distance attenuation rolloffs and levels: for instance, shouted dialog vs. whispered dialog, and even the slight loudness variations from line to line within a character.

Also, with FMOD Designer’s network audition, being able to adjust the mix per line in context is a lot more rewarding work than just trudging through the events in Designer "blind".

In regard to localization, often the foreign dialog comes back so heavily compressed (dynamics) that you can’t just use the native language levels and expect dialog to balance the same in the mix.

-jason


OK, a quick follow-up question to this: when I drag and drop to create multiple events, I can use a template to set the event type to "one shot" automatically, but that only applies to the high-level event, not to the way the sound definition is added to the layer. So I still have to edit each event, go to the layer, open the sound instance properties, and manually change it from "loop and cutoff" to "oneshot" in order for the event to truly act as a one shot.

Is there a way to configure the default for how the sound definition is added? I tried setting the "layers and effects" tickbox as part of the template, but it seems that the sound definition is completely replaced. Maybe I’m missing something about how layers/effects are stored in templates?

thanks!
-andy crosby


The default setting for sound definition playback is in Preferences > General > Default sound type.
