Well, since you’re working on effects right now I figured I’d just bring this back up with a few questions I have. I’ve been thinking about this on and off for the past month or two. If you haven’t really given this stuff much consideration yet, feel free to just say ‘uhh… dunno’.
Are you planning on developing a custom compiler and interpreter/translator for your ‘shaders’, or are you going to try to build it on top of existing technology (for example, Lua)?
Are you planning on making shaders more of a design-time thing (where basically the sound engineer would design his shaders in some sort of editing environment, ‘compile’ them, and plug them into the engine), or compiled and even generated at runtime, à la pixel/vertex shaders? Or something in-between? If DSP plugins are going to be able to interact with shaders, I’m assuming it’ll be compiled at runtime, to some extent at least.
If you’re thinking about having a script form for shaders, what level of complexity are you aiming for? C-style, like the newer pixel shaders, or a simpler assembly-style form? Or, once again, something in-between?
Sorry to bother you with all these questions. I’m interested in maybe trying to develop a SoundShader IDE or something so I can play around with the technology more easily.
[quote="brett":2qiad83i]We’ll probably post something here when it comes time for that.
We’re just churning through all the basic features; at the moment i’m doing spectrum analysis and mod support, then output plugins so stuff like winmm/asio/wavwriter can work. I don’t think this will take too long.
After that will be the first run of the polygon based geometry engine, and then the sound shader work will start.
We can throw around some ideas I guess. At the moment we’re not sure if it’s going to just be an ‘in code’ command list, or some sort of program that involves a compiler; I would probably prefer the first option.[/quote:2qiad83i]
Well, once you have a code-based command list it’s not too hard to write a compiler around that, at least it wouldn’t be hard using a managed language like C# or even VB…
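Just to illustrate what I mean by an ‘in code’ command list — all the names below are made up for the sake of the example, nothing here is from the actual engine:

```python
# Hypothetical sketch of an "in code" sound-shader command list.
# Every name here is invented for illustration only.

class SoundShader:
    def __init__(self):
        self.commands = []  # ordered list of (sample_offset, op, value)

    def set_volume(self, offset, volume):
        self.commands.append((offset, "volume", volume))
        return self  # return self so calls can be chained fluently

    def set_pan(self, offset, pan):
        self.commands.append((offset, "pan", pan))
        return self

    def compile(self):
        # "Compiling" is just sorting by sample offset so the mixer
        # can walk the list linearly while it renders.
        return sorted(self.commands)

# Fade from full volume at sample 0 to silence at sample 44100:
shader = SoundShader().set_volume(0, 1.0).set_volume(44100, 0.0)
compiled = shader.compile()
```

A compiler for a script form would then just be a front-end that emits this same list.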
As far as flexibility goes – are you thinking of allowing soundshaders to be chained together? One other thing I would wonder about is how shaders would interact with streams – if a shader requests data from the end of a stream, and that data isn’t decoded yet, what happens? I can’t really think of any trivial way to solve that problem… in my experiments with sound shaders I had to cache the whole sound in RAM.
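For what it’s worth, my cache-the-whole-sound workaround was basically this (a sketch with invented names — `read_frame` stands in for whatever your decoder exposes):

```python
# Brute-force workaround: decode the whole stream up front so a shader
# can read any sample index it likes. All names here are hypothetical.

def decode_all(stream):
    # Pull PCM frames until the decoder runs dry.
    samples = []
    while True:
        frame = stream.read_frame()
        if frame is None:
            break
        samples.extend(frame)
    return samples

class CachedSound:
    def __init__(self, stream):
        self.samples = decode_all(stream)  # whole sound in RAM

    def sample(self, index):
        # Random access anywhere in the sound -- the thing a streaming
        # decoder can't give you without a cache like this.
        return self.samples[index]
```

Obviously that doesn’t scale to long streams, which is why I’m curious how you plan to handle it.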
[quote="brett":7i07pov8]Simple: you can’t request data that far ahead or behind. There is no room for FIR here, just IIR, which will allow plus and minus a certain number of samples.
Anyway, we’re not looking at ‘pixel’ level shaders yet, we’re more interested in ‘vertex’ level shaders to start with.
These allow the user to insert commands into a sound: for example, making it fade out by setting 2 commands (from and to), each with its respective volume level. You could do this with filter parameters (ie for built-in FMOD filters, or probably even DSP plugin parameters) or standard attributes (vol/pan/freq).
It also allows for stuff like complex loop points and envelopes.
More people are going to be interested in this level of control than low-level sample access, as most people don’t even know anything about DSP.[/quote:7i07pov8]
Ahh, excellent. That’s a really good idea… combine that with a nice IDE and you give the sound engineer/designer a great toy to play with so they don’t have to touch engine code.
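The way I’m picturing the ‘from and to’ fade you described, the mixer would just linearly interpolate between the two commands. A rough sketch, with invented names:

```python
# Sketch of the "2 commands (from and to)" fade idea: each command is
# (sample_position, volume), and the mixer interpolates linearly between
# the pair that brackets the current position. Names are invented.

def volume_at(commands, pos):
    """commands: list of (sample_pos, volume), sorted by sample_pos."""
    # Clamp outside the commanded range.
    if pos <= commands[0][0]:
        return commands[0][1]
    if pos >= commands[-1][0]:
        return commands[-1][1]
    # Find the bracketing pair and interpolate linearly between them.
    for (p0, v0), (p1, v1) in zip(commands, commands[1:]):
        if p0 <= pos <= p1:
            t = (pos - p0) / (p1 - p0)
            return v0 + t * (v1 - v0)

# Fade out over one second at 44.1 kHz:
fade = [(0, 1.0), (44100, 0.0)]
```

More command pairs would give you the complex envelopes and loop points you mentioned, with the same evaluation loop.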