Now that Studio is no longer in the developer preview stage (I assume this means it's at or near feature parity with Designer), I ran an importer test on a current fdp of mine to see how well the import process works. I'd like to switch a couple of projects over to Studio ASAP, but some enhancements to the importer would help streamline that process for me. Here's some feedback on the importer, based on my novice-level knowledge of Studio:
Two reverb sends are being added to my events (one pre-fader and one post-fader). I'm guessing that these correspond to the event properties (i.e. 'Reverb Wet Level' and 'Reverb Dry Level')? Will setting these to -60 eliminate the sends during the import?
The 'Reverb' portion of the 'Occlusion' effect is getting discarded. I had originally thought that one of the above-mentioned reverb sends was for the Occlusion reverb, but I don't believe it is, because I also get those two sends on events without Occlusion. I'm also not sure how Reverb presets/emitters will work in Studio.
The ‘Direct’ portion of the ‘Occlusion’ effect is getting reduced to just volume automation. I’m not sure what the plan is for the LowPass filter part of the Occlusion effect or the ‘3D Auto Distance Filtering’. Is this being replaced by adding an EQ to each layer of a positional event or, if that’s too CPU heavy, will there be something analogous to Occlusion added? It would be great to have the filtering part of the Occlusion/3D Auto Distance Filtering effects brought over during the import process. Perhaps a prompt could be added to ask the user if they want Occlusion to include LP filtering (and HP filtering, if that’s going to be added), since I assume some games may not want to use the LP/HP filtering.
I occasionally have events that basically contain two timeline parameters. E.g. one timeline to handle modulation/automation of a looping sound, and the other to handle a sustain point and keyoff automation for that same looping sound. I'm not sure how to go about doing this in Studio without setting a velocity on a 2nd parameter (i.e. turning it into a 2nd timeline). Is it going to be possible to set a velocity on non-timeline parameters, or is there a different way to accomplish what I'm describing? I'm thinking that maybe I can put the above-mentioned looping sound in an Event module (nested within the primary event) and use that module's timeline to modulate/automate the sustained loop, and then set up the sustain point & keyoff automation in the primary event's timeline?
As a side note, when our game code tries to keyoff an event, it does so on a parameter that I arbitrarily call 'Key'. I suppose we can just have the code do this on 'Timeline' instead of 'Key', but I'm not sure how this will work when it needs to keyoff in an Event module.
For surround events, I often have one or more layers that cover just the front channels (and sometimes I add some centre channel to the front layer) and one or more layers that cover just the rear channels. I use Designer's 'Speaker Level' effect to assign each layer to the front, back, centre or LFE. Studio is importing those layers as stereo and discarding the 'Speaker Level' data/automation. Is a similar Speaker Level effect in the pipe? If not, is it possible to have Studio import this data in some way? E.g. if the automation is just a flat line (one point), set the layer to use the surround panner, and deactivate any surround channels that are set to a Speaker Level value of 0. Beyond that I'll have to figure something out for layers that have Speaker Level settings that are somewhere in between 0 and 1 (or that use multiple points for the automation), unless you guys know a good way to deal with that automatically?
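To make the suggestion concrete, here's a minimal sketch of that import heuristic in Python. The channel names and the data shape (a map of channel name to automation points) are assumptions for illustration, not the actual importer's data model:

```python
def import_speaker_levels(levels):
    """Illustrative heuristic for importing Designer 'Speaker Level' data.

    `levels` maps a channel name (e.g. 'front', 'rear', 'centre', 'lfe')
    to its list of automation points (floats in 0..1).

    If every channel's automation is a flat line (a single point), the layer
    can be switched to the surround panner, with any channel sitting at
    level 0 deactivated.  Anything else (in-between levels with multiple
    automation points) is flagged for manual fix-up.
    """
    if all(len(points) == 1 for points in levels.values()):
        active = {ch for ch, points in levels.items() if points[0] > 0.0}
        return {"mode": "surround_panner", "active_channels": active}
    return {"mode": "manual_fixup"}
```

So a layer with front at 1.0 and rear at 0.0 (both flat) would become a surround-panner layer with only the front channels active, while a layer with a multi-point Speaker Level curve would be left for the user to sort out.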
I use the ‘Surround Pan’ in Designer to automate front/back and left/right panning sweeps. This data is discarded during the import. Can this data be translated to use the automation features of Studio’s surround panner?
Can ‘trigger delays’ in sound definitions be brought over to their corresponding modules?
Can sound definitions that have spawn times above a certain value be imported as sound scatterers? I figure that very low spawn times, such as 1ms, are likely to be indicative of granular synthesis, and those sound definitions should be imported as multi sound modules and set up for granular synthesis. If automating this isn't practical, maybe implement a prompt to let the user decide what sort of module to assign each sound definition to? It may also be handy to be able to switch a Multi sound module to a Scatterer (and vice versa).
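A sketch of the spawn-time heuristic described above, in Python. The 10 ms threshold is a made-up value for illustration; the point is only that very low spawn times suggest granular synthesis while higher ones suggest a scatterer:

```python
# Illustrative threshold: spawn times at or below this are treated as
# granular synthesis.  The 10 ms value is an assumption, not an FMOD rule.
GRANULAR_SPAWN_THRESHOLD_MS = 10.0

def classify_sound_definition(spawn_time_ms):
    """Suggest a Studio module type for a Designer sound definition,
    based on its spawn time (sketch of the heuristic described above)."""
    if spawn_time_ms <= GRANULAR_SPAWN_THRESHOLD_MS:
        return "multi_sound_granular"
    return "scatterer"
```

With this, a 1 ms spawn time would be imported as a multi sound set up for granular synthesis, and a 500 ms spawn time as a scatterer; a prompt could still let the user override the guess.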
The import of the 'Priority' setting for an event seems to be inverted. In Designer, lower values represent higher priority, but the importer seems to have this backwards (e.g. a Designer event with a priority of 100 gets a Studio priority of 'Low' instead of 'High').
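For clarity, here's a sketch of the mapping as it should work. It assumes Designer priorities run 0–255 with lower values meaning higher priority; the bucket boundaries and names are illustrative, and only the direction of the mapping is the actual point:

```python
def designer_to_studio_priority(designer_priority):
    """Map a Designer priority (0-255, LOWER value = HIGHER priority)
    to an illustrative Studio priority bucket.

    The bucket boundaries and names below are assumptions for
    illustration; the point is that the mapping must be inverted.
    """
    if not 0 <= designer_priority <= 255:
        raise ValueError("Designer priority must be in 0..255")
    if designer_priority <= 51:
        return "Highest"
    if designer_priority <= 102:
        return "High"
    if designer_priority <= 153:
        return "Medium"
    if designer_priority <= 204:
        return "Low"
    return "Lowest"
```

Under this mapping a Designer priority of 100 lands in 'High' rather than 'Low', which is what the current importer gets backwards.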
Additional volume effects/automation on a layer get discarded. Are there still plans to add a volume/gain effect, so that you can stack multiple volumes/gains on the same layer? I often use one volume effect for modulation and the other to set the overall volume of the layer. Studio is only retaining one of the two volume effects. It would be great if Studio could import this. If adding a volume/gain effect isn’t practical, could the importer take secondary volume effects that are just one point of automation data (flat lines) and place them on a new parameter (that the importer auto generates)?
FMOD Lowpass effect automation is discarded. I assume this can be mapped to Studio’s ‘FMOD Lowpass’? Highpass is being imported fine.
My Tremolo effect automation was discarded from the Distance parameter. If the Tremolo effect is the same in Studio, I assume this can be carried over?
That’s all for now. Let me know if you need clarification on any of this or would like my fdp.
- capybara asked 4 years ago
Thanks for the feedback. I will have a look at the inverted priority bug that you mentioned. I believe there should be a corresponding Lowpass that was added in 1.0; I will update the migration to handle that case. Sorry for the lack of updates to the Designer-Studio migration aspect of the tool. Now that GDC is out of the way, I should be able to dedicate more time to it. As for the questions related to surround and reverb, I will have to double-check with Gino, who has a better understanding of the effects, to answer some of your questions. He's away at GDC at the moment and should be back next week.
I'd just like to give you another update on the progress of things.
The two reverb sends were actually a bug with the importer that has been fixed in Studio 1.00.03. The idea was to translate an event's wet and dry reverb settings to something equivalent in Studio. The corrected behaviour in 1.00.03 is to add a pre-fader reverb send whose level is a combination of the event's volume and the wet level, while the fader level is a combination of the event's volume and the dry level.
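Assuming those levels are all expressed in decibels, "a combination of" presumably means summing the dB values (equivalent to multiplying the linear gains). A quick sketch of that arithmetic, with hypothetical example values:

```python
def combine_db(*levels_db):
    """Combine gain stages expressed in dB by summing them
    (equivalent to multiplying the underlying linear gains)."""
    return sum(levels_db)

# Hypothetical example values: event volume -6 dB, wet level -3 dB,
# dry level 0 dB (these numbers are for illustration only).
event_volume_db = -6.0
reverb_wet_db = -3.0
reverb_dry_db = 0.0

# Pre-fader reverb send level: event volume combined with wet level.
reverb_send_db = combine_db(event_volume_db, reverb_wet_db)
# Fader level: event volume combined with dry level.
fader_db = combine_db(event_volume_db, reverb_dry_db)
```

On these example numbers the imported event would get a -9 dB pre-fader reverb send and a -6 dB fader, though whether the importer combines the values exactly this way is my assumption.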
The Occlusion effect is not supported at the moment, so no migration is done for it. I'm wondering whether the volume automation you are seeing is from a volume effect on the layer?
I have put in a fix for the "Priority" settings; it should be available in the next release.
I have also added support for importing of FMOD Lowpass, FMOD Highpass Simple, FMOD Pitch Shift and FMOD Tremolo effects. It should be available in the next release as well.
I will leave some of the questions to Gino and Raymond, since they have better knowledge of those subsystems.
Thanks for the update, Thuan.
Regarding the Volume effect that I suggested was coming from Occlusion, I do also have a Volume effect on my 3D events, so that’s probably what that’s mapping to. Sorry for any confusion that may have caused.
- capybara answered 4 years ago