I have tried to implement a "mix level" for 3d positioning, that is, manipulating 3D pan level at runtime, and the only way I saw to do that was to access the channels of an event. Now, after some debugging and going through the forum, I have come to the conclusion that it is only possible to access the channelgroups used in an event, not the channels themselves, since the channel count for a ChannelGroup fetched from an event is always zero. Since this obviously isn't the intended way to do it, would it be possible to add set3DPanLevel to the event interface? Or would that violate the entire idea of how events are supposed to be used? Is there some specific reason why you can't access the channels used by an event, or do anything with its channelgroup other than setting a DSP on it?
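For reference, the approach described above looks roughly like this. This is only a sketch against the FMOD Ex event API (exact signatures may differ by SDK version), and the helper function is hypothetical, not something from the SDK:

```cpp
#include "fmod_event.hpp"  // FMOD Ex Designer API headers (ships with the SDK)

// Hypothetical helper: try to reach the channels playing under an event.
// As described above, getNumChannels() on an event's channelgroup reports 0,
// so the loop body is never reached -- which is the problem being raised.
void probeEventChannels(FMOD::Event *event)
{
    FMOD::ChannelGroup *group = 0;
    if (event->getChannelGroup(&group) != FMOD_OK || !group)
        return;

    int numChannels = 0;
    group->getNumChannels(&numChannels);   // observed: always 0 for event groups

    for (int i = 0; i < numChannels; ++i)  // never entered in practice
    {
        FMOD::Channel *channel = 0;
        if (group->getChannel(i, &channel) == FMOD_OK && channel)
            channel->set3DPanLevel(0.5f);  // the desired runtime 2d/3d mix control
    }
}
```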
- Frohagen asked 11 years ago
But let's say that I want to be able to set the amount of 3d pan level on any 3d sound regardless of its parameters, for instance if I want to make geometric emitters or interpolate between hearing a sound as 2d and 3d. According to what you are saying this isn't possible, and I guess that might be fair enough. I really like the idea of a data-driven system, but I guess that in some cases it would be nice to use another interface than parameters, since parameters easily grow in number if you want to make complex events. For example, it might be nice to let the sound designer set whether an event is 2d, 3d, or both, that is, with an option to pan between both modes. I have always had issues with the idea that a sound has to be either 2d or 3d and can't be a blend.
A sound can be a blend between 2d and 3d in FMOD Ex; no other engine has this feature of ours, which is 3d pan level. You could pretty much just set all sounds to '3d' and use 3d pan level to control the 2d/3d-ness.
As for control, you can just make a parameter in your event to scale 3d pan level from 0 to 1, but you will have to add that parameter to every event you want to adjust. Event templates make that job easier, though.
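Driving such a parameter from game code would look roughly like this. This is a sketch against the FMOD Ex event API; the parameter name "pan_level" is hypothetical and would have to be created by the sound designer in FMOD Designer (for example via an event template) and mapped to 3D Pan Level there:

```cpp
#include "fmod_event.hpp"  // FMOD Ex Designer API headers (ships with the SDK)

// Set a Designer-authored parameter on an event at runtime.
// "pan_level" is an assumed name -- whatever the designer called it.
void setEventPanBlend(FMOD::Event *event, float blend /* 0 = 2d, 1 = 3d */)
{
    FMOD::EventParameter *param = 0;
    if (event->getParameter("pan_level", &param) == FMOD_OK && param)
        param->setValue(blend);
}
```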
Having worked with a number of sound middleware packages, I'm well aware that you are the only one with this feature where a sound can blend between 2d and 3d, and I really do appreciate it and think it's a very strong feature. Now, on to controlling it. My fear is that this isn't the only parameter that would need to go into each event, at least in the sfx ones, and that in the end a number of parameters will take up space that is needed for basic functionality. I could live with that if it weren't a performance issue, but I fear it will be, in combination with lots of layers and other parameters per event. Correct me if I'm wrong.
Yes, it is mainly because it violates the whole 'data driven' idea of the event system.
The programmer is not supposed to be doing this sort of thing; the sound designer is. So if something is missing in Designer, you can request that it be added there, not in the programmer interface.
You said '"mix-level" for 3d positioning, that is manipulating 3DPanLevel runtime' — that sounds exactly like what we already have with our network audition mode.
FMOD Designer can connect to your game through fmod_event_net.dll, and the sound designer can play with the 3d mix level at run time.