Originally posted by kostyap
I would recommend you to refrain from assumptions like that.
It's not an assumption. As a trivial example: if you have a fixed view point, you can simply never submit the back halves and occluded areas of models, without doing any back-face culling or occlusion tests at all. If you allow the user to choose the view point you can no longer do that without resorting to expensive tests, which often can't be performed in real-time without expensive pre-calculations (and are hence generally useless in this situation).
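A rough sketch of the difference (all names hypothetical): with a fixed view point the loop below runs once at load time and the back-facing triangles are simply thrown away; with a free camera it, or something cleverer, has to run every frame.

#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { Vec3 normal; Vec3 centre; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Keep only the triangles facing the eye point; with a fixed camera
// this can be done once, offline, and the rest never stored at all.
std::vector<Tri> visibleTris(const std::vector<Tri>& tris, const Vec3& eye)
{
    std::vector<Tri> out;
    for (size_t i = 0; i < tris.size(); ++i)
    {
        Vec3 toEye;
        toEye.x = eye.x - tris[i].centre.x;
        toEye.y = eye.y - tris[i].centre.y;
        toEye.z = eye.z - tris[i].centre.z;
        if (dot(tris[i].normal, toEye) > 0.0f) // front-facing, keep it
            out.push_back(tris[i]);
    }
    return out;
}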
Also, I'm not doubting your scripting language; evallib already exists and compiles to reasonably fast code (no MMX or SSEn instructions though) in memory. All I'm saying is that there is a world of difference between rendering something to the screen without allowing it to be programmable and allowing it to be programmable. As mentioned above, optimisation becomes much more difficult since your program knows much less 'ahead of time'. More importantly, though, rather than creating single instances of some object and using them, you have to write code that manages multiple instances: creating them, destroying them, rendering from them (inheritance is nice here) and so on. None of that is difficult, but it is time-consuming. You need to scan directories to look for files, create and maintain lists/trees in memory, etc. None of this is necessary, or even desirable, for a stand-alone plugin with fixed options.
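To illustrate the kind of bookkeeping I mean (a hypothetical sketch, not how any particular vis does it): a common base class plus create/destroy/render management for however many instances the user's preset asks for.

#include <list>

class Component                  // base for triangle render, spectrum, etc.
{
public:
    virtual ~Component() {}
    virtual void render() = 0;
};

class ComponentList              // owns and renders all live instances
{
    std::list<Component*> comps;
public:
    void add(Component* c) { comps.push_back(c); }
    void renderAll()
    {
        for (std::list<Component*>::iterator i = comps.begin();
             i != comps.end(); ++i)
            (*i)->render();
    }
    ~ComponentList()
    {
        for (std::list<Component*>::iterator i = comps.begin();
             i != comps.end(); ++i)
            delete *i;
    }
};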
A simple example of this is texturing. With hard-coded scenes there is no reason to provide any general mechanism for linking together the various components of textures; you can just assign whichever one you like to whichever texture unit you like, and then use them as you please within shader code. It's faster and easier to write this way; at least that's how I would do it for a stand-alone demo. It's also very little work to add specific optimisations for a specific texture.
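The hard-coded version is just a couple of bind calls. This sketch assumes an OpenGL 1.3 header (glActiveTextureARB via the multitexture extension otherwise); the texture ids and drawScene are placeholders:

#include <GL/gl.h>

extern GLuint baseTexture, envMapTexture; // ids created at load time
extern void drawScene();                  // placeholder for the demo's draw code

void renderFrame()
{
    glActiveTexture(GL_TEXTURE0);         // glActiveTextureARB pre-GL 1.3
    glBindTexture(GL_TEXTURE_2D, baseTexture);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, envMapTexture);
    drawScene();                          // shader just assumes units 0 and 1
}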
In a more general model, though, it is preferable to have some kind of script which defines the texture components, blend modes, which shader it needs to be rendered with, etc. This means having to find all of these files in a directory, parse them and store the data in some structure for fast and easy access, and you also have to write more generic rendering code just so that a texture appears on the screen. For all of this to be useful there needs to be some kind of UI for selecting a texture file and some mechanism for applying it to an object. Then there is allowing custom shaders for textures, working out which texture will end up in which unit, and so on.

This all boils down to needing some base class (or, without OO, function pointers or whatever) for any component (triangle render, line render, spectrum, moving particle, etc.) that uses textures. The bottom line is that you usually end up with a much bigger 'mess' of dynamic allocation, inheritance and data structures than you would need in a non-programmable vis.
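Roughly, every parsed texture script has to land in something like this (purely a hypothetical sketch) before the generic renderer can touch it:

#include <string>
#include <vector>

struct TextureLayer
{
    std::string imageFile;   // e.g. "base.tga", found by the directory scan
    int         textureUnit; // which unit it should land in
    int         blendMode;   // parsed from the script
};

struct TextureScript
{
    std::string               name;       // what the UI shows
    std::string               shaderFile; // custom shader, if any
    std::vector<TextureLayer> layers;
};

// The renderer then walks a list of these, instead of binding
// texture ids it knew about at compile time.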
If we decide to make it scalable as well it becomes even messier, since many routines will need to be written multiple times for different targets so that the program can exploit the better features of newer hardware.
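In practice that usually means something like picking a code path at startup from a capability check (again, just a sketch with made-up names):

void renderFixedFunction() {} // lowest common denominator path
void renderARBPrograms()   {} // ARB vertex/fragment program path
void renderGLSL()          {} // newest hardware path

typedef void (*RenderPath)();

RenderPath pickRenderPath(bool hasGLSL, bool hasARBPrograms)
{
    if (hasGLSL)        return renderGLSL;
    if (hasARBPrograms) return renderARBPrograms;
    return renderFixedFunction;   // always works
}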
Ideally you would end up with something like the Quake 3 .shader or Doom 3 engine .mtr files.
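From memory, a Quake 3 .shader entry looks roughly like this: a named material, with each stage giving its map, blend mode and so on, and the engine's generic renderer works out the rest:

textures/gothic_floor/largerblock3b
{
    {
        map $lightmap
        rgbGen identity
    }
    {
        map textures/gothic_floor/largerblock3b.tga
        blendFunc GL_DST_COLOR GL_ZERO
        rgbGen identity
    }
}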
You can of course do all of the above with a static renderer too, and get the benefits; since I haven't seen yours yet, though, I can't comment on it specifically.
Originally posted by kostyap
My "hello world" application does not need any graphics and compiles and runs on just about everything
I guess I should have been more detailed about what the program does. "Hello World" doesn't really compare to even an empty window (assuming Win32). Here, this one runs on *nix and Win32 consoles:
#include <stdio.h>

int main(int argc, char** argv)
{
    printf("Hello World\n");
    return 0;
}
Sorry if I have offended you, btw; I seem to do that easily without realising it when I get overzealous about stuff.