Thread: WebGL Shader
Old 12th September 2017, 18:49   #7
Flexi
wellspring of milk
Major Dude
Join Date: Apr 2007
Location: 54.089866,12.11168,18.75
Posts: 2,058
There's always a trade-off. Perhaps the compilers could be cleverer about optimizing masks, but that's not what they were built for in the first place, and it would also bloat the compile time. Specifying certain areas by geometry is of course also more descriptive. But that's really not the point of code-golfing challenges: wasting performance is tolerable in size coding, while optimizing for performance is a whole other story. What counts is the performance at a demoscene event, where there's usually a powerful PC with fast processors and a Titan-class graphics card, and how you engage with the crowd. There are some standards that I find overused, but the audience tends to cheer for effects such as RGB channel splits and TV scan lines, so every high-ranked entry has them. Those are my two cents.
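To illustrate the mask trade-off: in a fragment shader you typically restrict an effect to an area with branch-free arithmetic rather than geometry, so the expensive work still runs for every pixel and the mask just discards most of it. Here's a minimal sketch of that idea in plain JavaScript, mirroring GLSL's step() and mix() built-ins (the function names and constants below are my own assumptions for the sketch, not from any shader discussed here):

```javascript
// JS stand-ins for the GLSL built-ins step() and mix().
const step = (edge, x) => (x < edge ? 0.0 : 1.0);
const mix = (a, b, t) => a * (1.0 - t) + b * t;

// Shade only a circular area around (0.5, 0.5). The "expensive" term is
// evaluated for every pixel regardless; the mask throws most of it away,
// which is exactly the wasted work a compiler won't optimize out.
function shadePixel(x, y) {
  const d = Math.hypot(x - 0.5, y - 0.5);  // distance from center
  const mask = 1.0 - step(0.25, d);        // 1 inside the circle, 0 outside
  const expensive = Math.sin(40.0 * d);    // stand-in for costly shading
  return mix(0.0, expensive, mask);        // masked result
}
```

Cutting the same circle out with geometry (a disc mesh) would skip those pixels entirely, which is the more descriptive, and cheaper, alternative mentioned above.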

But apparently you can pull off some nasty long-running loops for every pixel on graphics cards nowadays. That WebGL demo doesn't even run at full native performance, since its GLSL shader code is indirectly executed as a DirectX shader under Windows (via the ANGLE translation layer), but I'm getting good frame rates anyway. I'd expect a direct manual translation to run slightly faster. I actually went the other way around and started porting some of my effects from Milkdrop to WebGL and GLSL.
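A fragment shader of that long-loop kind might look like the hypothetical sketch below, held as a GLSL source string the way a WebGL page would pass it to gl.shaderSource before compiling (the uniform names and iteration count are made up for illustration):

```javascript
// Minimal WebGL fragment shader with a fixed-length per-pixel loop.
// On Windows this GLSL would typically be translated to an HLSL/DirectX
// shader by the browser before it ever reaches the GPU.
const fragSrc = `
precision highp float;
uniform vec2 resolution;
uniform float time;

void main() {
  vec2 uv = gl_FragCoord.xy / resolution;
  float a = 0.0;
  // 100 iterations for every pixel -- heavy by size-coding standards,
  // but still real-time on a desktop GPU even after translation.
  for (int i = 0; i < 100; i++) {
    a += sin(float(i) * 0.1 + uv.x * 10.0 + time) * 0.01;
  }
  gl_FragColor = vec4(vec3(0.5 + a), 1.0);
}`;
```

In a real page you'd compile it with gl.createShader(gl.FRAGMENT_SHADER), gl.shaderSource(shader, fragSrc), and gl.compileShader(shader).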

Then there's Jordan Berg, who wrote a ClojureScript tool to trans-compile Milkdrop presets from their original source directly in the browser. I met him when I visited San Francisco last year, but he doesn't seem to be working on it anymore.