So, I want to switch from one piece of streamed music to the next, but to make the transition sound right I need to wait until the downbeat of the current piece before kicking it off. The usual scenario is that the switch request comes in mid-measure, so I just wait until the right time. The problem I’m having is that if I call Event::start() on the downbeat, stream latency makes the new piece of music sound late. My assumption is that I need to do some sort of preloading on the new stream well in advance of when I need it to be heard.
What I was thinking of doing instead is, when the change request comes in, calling Event::start() immediately followed by Event::setPaused(true), and then calling Event::setPaused(false) on the actual downbeat. Will this effectively "preload" the stream? Is there a better way to do what I’m trying to do?
- audiodev asked 12 years ago
I’m very curious to know whether calling Event::start(), then Event::setPaused(true), and finally Event::setPaused(false) will make the play request work at all for music stitching. Since I’m working in Designer right now I don’t have access to these features. If it’s possible, please let me know!
We’ve just added a ‘don’t play until previous sound has finished’ mode for sound definition instances, so that you can do short loops; in conjunction with the ‘loop and play to end’ loop mode, you can make beat-synced loops jump from one bit of music to another.
We’re still working on reducing the gap at the jump (and even making it seamless), and also on allowing the loop to jump from wav to wav inside a sound definition, so you don’t get the same wav playing over and over when the ‘mood’ parameter stays the same.
[quote="brett"]We’ve just added ‘dont play until previous sound has finished’ mode for sound definition instances, so that you can do short loops and in conjunction with ‘loop and play to end’ loop mode, you can make beat synced loops jump from one bit of music to another.
We’re still working on this to reduce the gap between the jump (and even make it seamless), and also to allow the loop to jump from wav to wav inside a sound definition so you dont get the same wav playing over and over if the ‘mood’ parameter is the same.[/quote]
I just had to register to say that those features would be highly appreciated, at least by me.
I’ve got a situation with three wav files that need to play in a pattern like 1-2-1-2-1-2-1-3, looping back to the beginning. And if the game launches an event (level up), it stitches an intermediate transition sample after whichever sample happened to be playing, and then starts the next track, built from three different samples, after that. So the playback needs to be gapless to sound good. (…and I have an additional layer on top of that that adds some intensity to the music (drums etc.), so it needs to be sample accurate with that too.) What would allow me to do this in Designer is, I think, those features plus one that would allow me to order the samples in a sound definition and add the same sample multiple times (so I could create the pattern in the sound definition, and do the transitions in the event editor with a seeking parameter).
For now I’ve implemented my own simple tool to build music tracks, using the sentencing engine in the runtime, but it would be nice not to have to. And if I’m just stupid for not figuring out how to do these things already, that would be nice to know too… So +1 for these features from me.
- gimblll answered 11 years ago
I see that you’re working a little more on the code side than I am but perhaps I can help. I was able to achieve synchronized streams on a console for a reactive beat matched musical score.
For my experiment, I took a funk song that I really liked and cut out measures (I’ll call these edits) based on levels of intensity. I had four levels of intensity to work with. Each edit started on the 1.
The next step was to unify the length of each edit/measure as accurately as possible using time stretching in Sound Forge. I forced each edit/measure to have the same number of samples, which gave every edit a sample-accurate tempo lock. Synchronizing by sample count was more accurate than synchronizing by time.
I’ll digress for a moment here: some of my edits were two or three measures long, but I made sure their sample counts were exactly two or three times the number of samples in a single measure, respectively.
After that, I created looping sound definitions out of each edit and placed them into an event with one sound definition on each layer. At this point, I played the event, and all four of my edits started playing simultaneously, staying beat matched for as long as I let them run.
I then created a parameter which I called intensity. I set up volume envelopes in the event so that when I started the event, only the low intensity edit played. As I increased the parameter value, the next level of intensity would fade in and the last one would fade out.
The cool thing was that the edits stayed beat matched since they all started playing at the same time and kept playing in the background even though the volume was turned down.
So now I had my beat matching music escalation system but I wanted to take it a step further and introduce tempo changes. At this point I began experimenting with the pitch shifters built into Designer.
I created a parameter for pitch shifting (ParamPitch) with a value range of 0-1, then added a pitch effect to each layer driven by that parameter. Each pitch effect is at unity at ParamPitch value 0 and pitches down to its max at ParamPitch value 1. Because the pitch envelope was the same on each layer, I could adjust the ParamPitch value while keeping all the edits beat matched.
I also tested this with the granular pitch shifter: I changed pitch/tempo together using the standard pitch effect, then re-pitched the music, independent of tempo, back up to my nominal pitch. What I ended up with was a beat-matching music system with the ability to tweak tempo and pitch independently. I tested all of these features streaming on the console and they worked as intended.
To summarize: if you keep the assets at the same tempo and start all of the streams at the same time from a single event, you can have sample-accurate synchronization between your streams and fade between them to play back what you want. It is a little expensive on the streaming side, but if you combine this system with multichannel wav streaming it could open up a world of possibilities!
Thanks for the reply, but my situation is somewhat different. I don’t have the bandwidth to be playing anything more than the active piece for more than a short amount of time. Also, these pieces are more distinct, they transition from one to another, but they have different tempos, keys, etc. So, for me, I think that my system is right for my needs, I just need to be able to prime a stream so that I can start it the instant I need to.