Zoom exponent

  • Zoom exponent

    Does anyone know the mathematics that drive the 'zoom exponent' setting? It seems that the value either alters the vertices passed by the vertex shader or is passed by each vertex to the fragment shader, and is used to alter the zoom based on the length of the vector from the center of the screen to the vertex in question. I haven't been able to determine anything conclusive, but I also haven't delved into the source code (nor would I know where to look if I did).

    Note that I've made the assumption that the mesh specified in MilkDrop's settings determines a wireframe for the rendering - a 3x3 mesh, for example, would result in four quads joined into a square.

  • #2
    You got that right: the mesh rendered by the vertex shader can be distorted in the xy-plane by the variables zoom, rot, warp, dx and dy. The amount of zoom can be made dependent on the distance from the center of the screen using the "zoom exponent".

    However, you can alternatively code this directly into the per vertex section, which is much more flexible as you can make zoom depend on x and y as you like, not only on the distance from the center.
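
    For instance, something like this in the per vertex section (just an illustration, not MilkDrop's internal formula) makes the zoom vary with the mesh point's x and y position rather than only with the radius:

    // per_vertex - zoom depends on x and y (both 0...1)
    zoom = 1 + 0.02*sin(x*6.28)*sin(y*6.28);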

    That said, why would you bother with the vertex shaders in the first place? I have made quite a few presets and never bothered with the vertex shaders, since you can enter your code directly in the pixel shader, which works per pixel rather than interpolated over a mesh.

    • #3
      So all the motion parameters are handled within MilkDrop's vertex shader? I was kind of expecting that a couple were handled in the fragment shader to make their quality independent of the mesh size. Since you seem to know a fair amount about this, I don't suppose you know what warp, warp size, and warp speed ACTUALLY do?

      Truth be told, I am attempting a MASSIVE project - coding a MilkDrop-like package built in Flash. I've always wished MilkDrop was more versatile at integrating with other applications, and wanted to have active control over it with buttons, sliders, and the like. Since the semi-recent implementation of the Stage3D API (graphics acceleration in Flash), the project has become, at the very least, a possibility. A few weeks ago I decided I was sick of waiting for someone else to make one - especially since it would likely be proprietary - and started coding one myself. I wish to stay true to the original where possible.

      Presently 'Stillicidium' is in a very infantile, pre-alpha stage. I have a couple basic content generators working properly (custom wave, border), some useful utilities (color conversion, mesh generator), and am working on the first incarnation of the rendering engine, which is mostly complete. It can't take audio yet, but I figured I should probably get the basic visual stuff set up first.

      • #4
        Uhhhh....that sounds like a lot of work.

        Basically, the motion parameters, as you call them, are a rather ancient way to distort the screen while keeping GPU load low. The CPU calculates the mesh distortion first and then passes it on to the GPU, which does a simple linear interpolation. The mesh is just a flat m x n grid.

        The waves, dots and shapes are generated first, by the CPU, and drawn to the screen.
        Then the CPU applies the motion variables zoom, rot, dx, dy and warp to the mesh.
        The GPU applies the distorted mesh to the actual screen content, generating a distorted copy.
        This copy is then added to the original screen content by, I think, the echo variable. Only then does it become visible.

        That means all motion variables apply to the past frame, not the newest one, i.e. you will always get to see the undistorted content of waves and shapes, plus the distorted echo.

        I don't know exactly what warp does, but it is certainly a composite of the basic motions dx and dy, and possibly zoom or rot. You can try to emulate it in the per vertex section. Start with something simple like

        dx = 0.01 * sin(6*x+time);
        dy = 0.01 * sin(6*y+time);

        As the name implies, the per frame section is run once per frame, and the per vertex section is run once per mesh point, where x and y (0...1) are the position of the mesh point on the screen. You may also use rad for the distance from the center. For a mesh size of 64 x 48, the CPU runs the per vertex code 3072 times per frame, which is why the mesh size should not be set too high.
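
        To make the split concrete, a minimal sketch (the values here are arbitrary):

        // per_frame - executed once per frame
        rot = 0.01*sin(time*0.3);

        // per_vertex - executed once for each of the 64 x 48 = 3072 mesh points
        zoom = 1 + 0.05*rad;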

        In the warp shader section (= a pixel shader; note that the term "warp" here is arbitrary and has nothing to do with warp as a motion parameter for the mesh), the plain screen position is available as uv_orig.xy, while the distorted version (as said above: distorted by the CPU and interpolated by the GPU) is uv.xy.
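
        For example, a quick warp shader sketch that blends the two lookups (the 0.8 weight is arbitrary):

        float3 warped = tex2D(sampler_main, uv).xyz;      // mesh-distorted coordinates
        float3 plain  = tex2D(sampler_main, uv_orig).xyz; // undistorted screen position
        ret = lerp(plain, warped, 0.8);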

        • #5
          It has been rather grueling, but things are progressing pretty well. Thanks for the synopsis! It sounds like I've gotten the order of operations correct, so to speak - I just have to finish piecing the actual operations together.

          I've gotten the main rendering engine together, and am working on integrating it - mainly I just need to finish the class that handles interop between the textures passed to the GPU and the raster images composited from the generative components (waves, shapes, etc.). The post-processing code is also complete and tested (sustain, echo, invert, solarize (now 0-1 instead of boolean), gamma, brightness). Adding the motion settings will probably be the next step.

          I'll likely scrap the per-vertex stuff for now in favor of using a transform matrix to do most of the vertex manipulation... It sounds like a matrix could probably handle everything except the warp settings. I'll also be ignoring the programmable shaders for the moment - if I can get a pseudo-MilkDrop 1.x running smoothly, THEN we'll see about adding extras!

          • #6
            Honestly I'm a little confused regarding the choice to use the vertex shader to handle ANY of that stuff. You think it was mostly just to provide backwards compatibility for the older presets? I wish I had contact info for Geiss so I could get his two cents.

            • #7
              Well, this goes back to MD1, and I reckon it was originally done this way to save GPU power. With MD2, HLSL programming was introduced, and now all of that motion stuff can be programmed into pixel shaders directly, and with much higher precision, as no mesh interpolation is required.
              Certainly it was reasonable to leave the vertex section in, to keep compatibility with older presets. Plus, most preset authors never touch the shaders... it is much easier to use

              zoom = 1.1;

              than

              ret = tex2D(sampler_main, (uv-0.5)/1.1+0.5);

              ... let alone more complex operations such as rot.
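
              For comparison, a per-pixel rot would look roughly like this (sketch only, the angle value is arbitrary):

              float a = 0.1;                       // rotation angle in radians
              float2 p = uv - 0.5;                 // put the screen center at the origin
              ret = tex2D(sampler_main,
                          float2(p.x*cos(a) - p.y*sin(a),
                                 p.x*sin(a) + p.y*cos(a)) + 0.5);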

              Oh, and on the warp settings - there is nothing mysterious about warp; just make your transformation matrix time dependent.

              • #8
                Kinda what I figured, then - I suppose it would have been a real pain to redo everything to work off the fragment shader! Thankfully, since mine is OOP, it's easy to slip code into several different parts and obscure it so it appears to be going to one location - I'll just use the fragment shader. I won't be able to add in a lot of dynamic pixel shader functionality anyway, since Adobe has kindly chosen (cue facepalm) assembly as the main means of addressing your video card from Flash.

                As for the warp, I'm not sure I'll be able to pass it into the shaders using a matrix since the calculation is dependent on the uv coordinate being altered... I was thinking I'd pass in the warp settings on one of the constant registers and then alter the zoom/rot/rot center/dx/dy/sx/sy matrix once the uv coordinates become available (fragment shader).
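
                In MilkDrop's HLSL terms (which I'd still have to translate into AGAL), the idea I have in mind is roughly this - the variable names are just placeholders for whatever ends up in the constant registers:

                // sketch only: zoom/rot/dx/dy applied per pixel, plus a time-dependent warp wobble
                float2 p = uv - float2(cx, cy);          // move the rotation/zoom center to the origin
                p /= zoom;                               // zoom, like the (uv-0.5)/1.1+0.5 example above
                p = float2(p.x*cos(rot) - p.y*sin(rot),  // rotate
                           p.x*sin(rot) + p.y*cos(rot));
                p += 0.01*warp*sin(time + p.yx*6.0);     // crude warp: small time-varying offset
                ret = tex2D(sampler_main, p + float2(cx, cy) + float2(dx, dy));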
                Last edited by Psyne; 23 March 2014, 22:29.
