We’re working on a project where Designer’s ‘Time Offset’ effect would be quite useful. Specifically, we sometimes start an animation part way through and need the audio to do the same.
In Studio, is this sort of thing handled significantly differently than it is in Designer (i.e. using ‘Time Offset’)? If it is, then I’ll ask our audio programmer to hold off on implementing a system that uses ‘Time Offset’. I’d like us to switch over to Studio as soon as it has all the features we need, but I’d rather not have our programmer implement a time-offsetting system twice. However, if the changes required to switch from ‘Time Offset’ to however Studio handles this are trivial, then we’ll go ahead and get this system working.
- capybara asked 4 years ago
[quote="capybara"]I’m looking around in version 1.1.15 and I can’t seem to locate the Time/Start Offset feature.[/quote]
In the Deck, right-click on a sound’s waveform display to open the context-sensitive menu, then select the ‘Start Offset’ menu item. As you can see, this feature is currently a little arcane, but we are planning on making it more flexible and obvious when we can.
Thanks, Joseph! For now, is there a config file or something that I can edit to increase the max value of this parameter? The current max is 10 seconds, but I know there are one or two scenarios where I’d like to start the sound much later (e.g. 60 seconds in).
This is something we’ll have in the 1.1 release (May 24). It’ll work a little differently from Designer, though: it’ll appear as an automation on a sound module rather than as a track automation, as it does in Designer. So it’ll get a little messy if you want to offset multiple sounds on the same track (you’ll need an automation for each sound).
Hopefully this will work for you. Let us know.
Thanks, Raymond. That sounds similar enough to how it would work in Designer, i.e. the programmer would set a parameter that, in turn, controls the time offset value. Correct me if I’m wrong on this. If that’s the case, then I don’t foresee any issues with this.
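For what it’s worth, the game-side logic described here (a parameter whose value becomes the start offset) amounts to very little code. Here is a minimal sketch in plain Python, not FMOD API code; the function name, the constant, and the assumption that the 10-second editor cap mentioned earlier is also enforced game-side are all hypothetical:

```python
# Hypothetical helper: convert an animation's elapsed time into the value
# the audio programmer would feed to the exposed start-offset parameter.
# The 10-second cap mirrors the editor maximum mentioned in this thread;
# enforcing it game-side is an assumption for illustration.

MAX_START_OFFSET_SECONDS = 10.0

def start_offset_for_animation(elapsed_seconds: float) -> float:
    """Clamp the animation's elapsed time into the parameter's legal range."""
    return max(0.0, min(elapsed_seconds, MAX_START_OFFSET_SECONDS))
```

So starting an animation 3.5 seconds in would set the parameter to 3.5, while a 60-second request would be clamped to the current 10-second maximum.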
Could Start Offset also be represented as a percentage (/float) of the sample’s length, as in 0.0 = sample start, 1.0 = sample end? Could a Randomizer be applied to it?
Then I could start a continuous looping sound at a random offset, regardless of its length, to avoid phasing if multiple instances of the same loop get started simultaneously.
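The phase-avoidance idea above boils down to picking a random, length-relative offset per instance. A minimal illustrative sketch in plain Python (not FMOD API code; the function name is hypothetical):

```python
import random

def random_start_offset_ms(loop_length_ms: int, rng: random.Random) -> int:
    """Pick a start offset as a random fraction (0.0-1.0) of the loop's length.

    Because the fraction is relative to the length, the same logic works for
    any loop, and simultaneous instances of the same loop begin at different
    points instead of phasing against each other.
    """
    return int(rng.random() * loop_length_ms)
```

Each instance of a 5-second loop would then start somewhere in the 0-4999 ms range, so two instances triggered on the same frame are very unlikely to line up.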
I could scatter random, ADSR faded slices of a loop one after another to create a continuous looping sound with "infinite variation" (rain, river flow, torch, what not). Increase polyphony slightly = more density.
A sample-length-relative percentage would also be useful for one-shot sounds. I’ve used Start Offset (previously known as Time Offset) to make collision/impact sounds softer by starting them further down the tail (controlled by the impact-strength parameter piped in from Havok). Sometimes the samples are of different lengths, so a length-relative percentage value would work better than an absolute time value. Especially as it could be applied to all sounds within a multisound with a single envelope (currently I’d have to do this by adding a Start Offset for each wav separately, so if you have 20 random variations it would get rather unwieldy).
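The mapping being described here, where weaker impacts start further into the sample’s tail, can be sketched as simple length-relative arithmetic. This is illustrative Python only, not FMOD API code; the linear strength-to-fraction mapping and the names are assumptions:

```python
def impact_start_offset_ms(sample_length_ms: int, impact_strength: float) -> int:
    """Map impact strength to a length-relative start offset.

    impact_strength is expected in 0.0 (weakest) .. 1.0 (strongest).
    A weak hit yields a large offset fraction, so playback begins deep in
    the sample's tail and the hit sounds softer. Because the offset is a
    fraction of the length, the same envelope works for samples of any
    length within a multisound.
    """
    strength = max(0.0, min(impact_strength, 1.0))
    fraction = 1.0 - strength  # weak hit -> start further into the tail
    return int(fraction * sample_length_ms)
```

For example, a full-strength hit plays a 2-second sample from the start, while a half-strength hit on a 1-second sample starts 500 ms in: the same 0.0-1.0 fraction applies regardless of each wav’s absolute length.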
[quote="Skaven"]Could Start Offset also be represented as a percentage (/float) of the sample’s length, as in 0.0 = sample start, 1.0 = sample end? Could a Randomizer be applied to it?
Especially as it could be applied to all sounds within a multisound with a single envelope (currently I’d have to do this by adding a Start Offset for each wav separately, so if you have 20 random variations it would get rather unwieldy).[/quote]
Thanks for the suggestions! The Start Offset property is still quite limited in 1.1; we are planning to improve it in future releases, and situations such as those you’ve described will definitely be considered during our planning process.
I found a workaround for randomization: add automation to Time Offset, then nest this inside another event and apply a Random modulation to the exposed input.
First day of using Studio. Looks like a lot of things can be achieved, and should be done via nesting events within events within events… eventception! Some kind of a tree/schematic view may become necessary at some point.
[quote="Skaven"]Looks like a lot of things can be achieved, and should be done via nesting events within events within events… eventception! Some kind of a tree/schematic view may become necessary at some point.[/quote] We do have a few plans for an event nesting inspector, but it’s still a fair way into the future.
That said, you might find that event reference sound modules are more flexible than event sound modules, since they’re visible in the Folders Browser and can be organised and edited independently of the events that contain their trigger regions.