Old 14th February 2012, 07:56   #14
fragmer
Hello; Yathosho got hold of me. "avsx" was a little proof-of-concept project, which used OpenGL and revolved around EXT_framebuffer_object.

Presets were made up of "nodes", which could be connected into a directed graph. There were three types of nodes: producers, filters, and consumers.
  • Producer nodes only had outputs, and included: the previous frame's buffer, saved user-defined buffers (similar to Misc->Buffer Save), bitmaps, and render modules.
  • Filter nodes had both inputs and outputs, and included transforms (done by mapping the texture onto a polygon and rendering it back to itself), convolutions (implemented as shaders), blenders (which merged two inputs into one output using a blend function), and color filters.
  • Consumer nodes only had inputs, and included "Display" and buffer-saving.
Each input accepted exactly one link/connection to an output. Each output could be connected to zero or more inputs. The original plan was to have a WYSIWYG editor, like this: http://fragmer.net/temp/uiconcept.png
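The node/connection rules described above (one source per input, any number of sinks per output) could be sketched roughly like this — all names here are my own illustration, not actual avsx code:

```python
from dataclasses import dataclass, field

@dataclass
class Output:
    node: "Node"
    sinks: list = field(default_factory=list)  # zero or more connected inputs

@dataclass
class Input:
    node: "Node"
    source: Output = None  # exactly one connection allowed

    def connect(self, output: Output):
        # Enforce the "exactly one link per input" rule.
        if self.source is not None:
            raise ValueError("input already has a connection")
        self.source = output
        output.sinks.append(self)

@dataclass
class Node:
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

def make_node(name, n_in, n_out):
    node = Node(name)
    node.inputs = [Input(node) for _ in range(n_in)]
    node.outputs = [Output(node) for _ in range(n_out)]
    return node

# Producer -> Filter -> Consumer, mirroring the three node types:
prev_frame = make_node("PreviousFrame", 0, 1)  # producer: outputs only
blur       = make_node("Convolution", 1, 1)    # filter: inputs and outputs
display    = make_node("Display", 1, 0)        # consumer: inputs only

blur.inputs[0].connect(prev_frame.outputs[0])
display.inputs[0].connect(blur.outputs[0])
```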

The call stack was constructed at runtime by iterating over the graph, figuring out a call order that satisfied all input/output dependencies, then allocating and assigning buffers. Circular node connections were not allowed. From that order, a stack of function pointers was created. I wasn't well versed in shaders at the time, so it was mostly immediate-mode OpenGL, and not very efficient.
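That ordering step amounts to a topological sort of the node graph. A minimal sketch of the idea (Kahn's algorithm; names and structure are my own, not from avsx):

```python
from collections import deque

def call_order(deps):
    """deps: node -> list of nodes it depends on (its input sources).
    Returns an order where every node runs after its dependencies;
    raises ValueError on a circular node connection."""
    remaining = {n: len(srcs) for n, srcs in deps.items()}
    consumers = {n: [] for n in deps}
    for n, srcs in deps.items():
        for s in srcs:
            consumers[s].append(n)
    # Start with nodes that have no dependencies (the producers).
    ready = deque(n for n, count in remaining.items() if count == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for c in consumers[n]:
            remaining[c] -= 1
            if remaining[c] == 0:
                ready.append(c)
    if len(order) != len(deps):
        raise ValueError("circular node connection")
    return order

# Previous-frame buffer feeds a blur filter, which feeds Display:
graph = {"prev_frame": [], "blur": ["prev_frame"], "display": ["blur"]}
print(call_order(graph))  # → ['prev_frame', 'blur', 'display']
```

Once the order is known, each node's render function can be pushed onto a stack (or list) of function pointers and the buffers assigned along the way.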

EDIT: If I were to redo avsx today, I'd go with a combined OpenCL/OpenGL setup. That would allow keeping all buffer data on the GPU while still doing general-purpose computations. I'm familiar with the basics of this setup, but doing advanced things (like recreating Winamp's scripting language on top of OpenCL) is way beyond me.