GP2X Trenki's Software Renderer Tutorial


efegea said:
I'm on 64-bit Gentoo Linux, AMD Athlon64 3400 at 2.4GHz, GCC 4.1.2 with -O3 -funroll-loops -DNDEBUG; 125fps on your cow demo
Weird. There should not be such a big fps difference between gcc 4.1.2 and 4.2.1. Is the demo compiled as a 64-bit or a 32-bit application? I don't know what the default for gcc on a 64-bit system is, but I think you can toggle this with -m32 or -m64. The SDL_BlitSurface and SDL_FillRect calls should normally not make such a big difference on different systems.
 
Trenki said:
Weird. There should not be such a big fps difference between gcc 4.1.2 and 4.2.1. Is the demo compiled as a 64-bit or a 32-bit application? I don't know what the default for gcc on a 64-bit system is, but I think you can toggle this with -m32 or -m64. The SDL_BlitSurface and SDL_FillRect calls should normally not make such a big difference on different systems.
The default is 64-bit. I tried 32-bit using -m32 and the framerate dropped to 98fps (max; the minimum fps was 92).
 
I wonder if the performance of your renderer will drop too much if I use Python as the scripting engine for my game. What do you think? I don't have a GP2X for testing..

Will it be faster if the renderer is ported to the 940 CPU? If so, please port it :D

EDIT: oops, double post, sorry :unsure:
 
efegea said:
I wonder if the performance of your renderer will drop too much if I use Python as the scripting engine for my game. What do you think? I don't have a GP2X for testing..

Will it be faster if the renderer is ported to the 940 CPU? If so, please port it :D

EDIT: oops, double post, sorry :unsure:
Python on the GP2X? I doubt that will be fast. Porting to the 940 would free the 920 but potentially give a performance penalty when texturing. Porting it to the 940 is also very non-trivial (even with the cmd940 framework). You would not be able to specify shaders in the same way; a small layer on top of the renderer would be necessary.

I will release a small maintenance release of my software renderer in the next few days, and I am working on another project right now. I already have some results, but I will only announce it when it is mature enough (in a couple of weeks).
 
Hi!

It's been a while since I posted my last tutorial and no one had a good idea for a follow-up tutorial, but I thought I could let you know about some advanced techniques that can be implemented in my software renderer.

Controlling perspective correction and interlacing

The rasterizer class has some functions to control perspective correction. You can toggle whether perspective correction is being used with the perspective_correction function.
With the perspective_threshold function one can define the threshold width and height for triangles. Triangles below the threshold size are considered small and will not get perspective correction. I tried values as large as (24, 24) and it worked well without too noticeable artifacts.
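A minimal usage sketch, assuming the functions take a bool and a width/height pair (r stands for a rasterizer instance; check the headers for the exact signatures):

CODE

r.perspective_correction(true);    // toggle perspective correct interpolation
r.perspective_threshold(24, 24);   // triangles smaller than 24x24 pixels are
                                   // treated as small and rasterized affinely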

A feature that one could actually implement oneself within the affine or perspective_span function is already integrated into the rasterizer (legacy): interlacing. Basically this allows one to render only half of the scanlines in each frame and alternate between odd and even scanlines. This generally increases the fps by a great deal but also introduces noticeable artifacts (especially if the framerate is low already).
Left/right rotation produces the most noticeable artifacts, while they are not so troublesome with forward movement. In the source there is a comment block that explains how to use the function. You should try it out.
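A minimal standalone sketch of the idea (this is not the renderer's actual interface; draw_span just stands in for whatever produces one scanline):

CODE

#include <cstdint>
#include <vector>

struct Framebuffer {
    int width, height;
    std::vector<uint16_t> pixels;   // 16-bit color, row major
};

// Placeholder for the per-scanline work the renderer would normally do.
void draw_span(Framebuffer &fb, int y)
{
    for (int x = 0; x < fb.width; ++x)
        fb.pixels[y * fb.width + x] = 0xFFFF;
}

// Each frame only the scanlines whose parity matches the frame counter are
// redrawn; the other half keeps the previous frame's contents.
void render_interlaced(Framebuffer &fb, unsigned frame_counter)
{
    const int parity = static_cast<int>(frame_counter & 1);
    for (int y = 0; y < fb.height; ++y)
        if ((y & 1) == parity)
            draw_span(fb, y);
}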

Perspective correction only for texture coordinates

Currently my software renderer interpolates all varying parameters perspectively correctly when perspective correction is enabled. For texture coordinates this is appropriate, but for color values it is not absolutely necessary.

A solution that reduces this overhead a bit can be implemented by overriding the perspective_span function and implementing it appropriately. How it's done can be seen in the existing perspective_span function. It can speed things up, but it is hard to say by how much.
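A minimal sketch of what such an overridden span loop could do, assuming per-pixel steps for 1/w, u/w, v/w and the color are already set up (all names here are illustrative, not the real perspective_span signature):

CODE

// Hypothetical pixel output function.
void shade_pixel(int x, float u, float v, float color);

// Texture coordinates get the per-pixel divide, while the color is just
// stepped affinely across the span.
void sketch_span(int x_start, int x_end,
                 float one_over_w, float d_one_over_w,
                 float u_over_w,   float d_u_over_w,
                 float v_over_w,   float d_v_over_w,
                 float color,      float d_color)
{
    for (int x = x_start; x < x_end; ++x) {
        const float w = 1.0f / one_over_w;   // one divide per pixel
        const float u = u_over_w * w;        // perspective correct texcoords
        const float v = v_over_w * w;
        shade_pixel(x, u, v, color);         // color was only interpolated affinely

        one_over_w += d_one_over_w;
        u_over_w   += d_u_over_w;
        v_over_w   += d_v_over_w;
        color      += d_color;               // no divide for the color
    }
}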

Move stuff from pixel to span level if possible

It is easy to write a single fragment shader by simply implementing the function single_fragment, but when you have conditionals inside your fragment shader that are actually constant per span or even per triangle, it is a good idea to override the affine_span function, do the test there and invoke a specialized fragment shader. Sure, this is a lot of work, but it most likely gives better performance. And instead of writing X different fragment shader combinations one can also use template parameters (which are treated as constants) to reduce the amount of coding required.
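A minimal sketch of the template parameter idea (the shade() signature below is made up; the point is only the compile-time branch):

CODE

// The per-triangle/per-span decision (here: alpha test on or off) becomes a
// template parameter, so the dead branch disappears from the per-pixel code.
template <bool alpha_test>
struct SketchFragmentShader
{
    static bool shade(unsigned short texel, unsigned short *dest)
    {
        if (alpha_test && !(texel & 0x20))
            return false;          // transparent texel -> pixel skipped
        *dest = texel;             // placeholder "shading": plain texture copy
        return true;
    }
};

// The span function would then pick SketchFragmentShader<true> or <false>
// once per span instead of testing the condition for every pixel.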

A hierarchical z-buffer approach

For scenes with a high depth complexity one should render front to back and make the depth test the first thing in the fragment shader. This still requires interpolation of all varyings for the whole span, though, which consumes time.
For high depth complexity scenes it may be beneficial to have something like a hierarchical z-buffer (per span in our case). One hierarchy level would probably suffice.

Basically you group 8 to 16 adjacent pixels together and store the maximum depth value found in these pixels in a separate buffer. Whenever you write to the depth buffer you also update this maximum. Before rendering a span segment (assuming you have a less or less-or-equal depth test) you compute the minimum possible z value for that segment and compare it with the maximum z value of the corresponding pixel group in the hierarchical buffer. If the segment's minimum z is greater you can be absolutely sure no pixels will have to be drawn and skip interpolating those 8 or 16 pixels. This can potentially save you a lot of time, although it comes with a performance hit for scenes with low depth complexity.
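A minimal per-scanline sketch of such a one-level hierarchy, assuming 16-bit depth values (names and layout are illustrative, not tied to the renderer's buffers):

CODE

#include <algorithm>
#include <cstdint>
#include <vector>

static const int GROUP = 8;             // pixels per group (8 to 16)

struct ScanlineDepth {
    std::vector<uint16_t> depth;        // one depth value per pixel
    std::vector<uint16_t> group_max;    // maximum depth of each GROUP-pixel block
};

// Conservative skip test for a segment covering one group: if even the nearest
// depth the segment can produce lies strictly behind the farthest stored depth
// of the group, no pixel in the group can pass a less/less-or-equal test, so
// interpolation for the whole group can be skipped.
inline bool can_skip_group(const ScanlineDepth &s, int group, uint16_t segment_min_z)
{
    return segment_min_z > s.group_max[group];
}

// Whenever pixels of a group were written, refresh that group's maximum.
inline void refresh_group_max(ScanlineDepth &s, int group)
{
    const uint16_t *d = &s.depth[group * GROUP];
    s.group_max[group] = *std::max_element(d, d + GROUP);
}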

I still have to try this out, but I think it might speed up my Need For Speed 3 level renderer on the GP2X, since the depth complexity is in the range of 1.3 to 2.3 (1.0 being optimal). I still don't render a background layer, which would add +1 to the overdraw factor and reduce performance again.
 
Thanks again Trenki! I have been using a custom span for a bit and it made a huge speed difference. Hopefully I'll have something to show in the near future.
 
dockthepod said:
Thanks again Trenki! I have been using a custom span for a bit and it made a huge speed difference. Hopefully I'll have something to show in the near future.
You claim that putting the fragment shader directly into the affine_span function makes a huge speed difference. I find this weird as when the fragment shader function is declared inline the compiler will take care of this. Nevertheless I tried it with a simple shader which does texture mapping and manually put all of the fragment shader into the span function. For me performance was exactly the same. So I am wondering how you managed to get better performance? Maybe you did something wrong and the span function does not do the same as it used to do when the code was in the fragment shader?
Which gcc compiler were you testing this with? I tested with gcc 4.0.2 from devkitGP2X. The only thing I noticed was that gcc 4.1.1 from open2x makes a huge difference in my filltest.

If you are still sure that putting the code inside the affine_span function instead of the single_fragment function improves performance, would it be possible for you to provide the smallest possible test case that shows this behaviour? Maybe with a define to toggle between the span function and single_fragment.
 
To use different shaders for different types of meshes, how do I do that? By running this function..

CODE
g.vertex_shader<VertexShader>();


before drawing the mesh with g.draw_triangles(), or is it better to do it another way?
 
efegea said:
To use different shaders for different types of meshes, how do I do that? By running this function..

CODE
g.vertex_shader<VertexShader>();
before drawing the mesh with g.draw_triangles(), or is it better to do it another way?


Vertex and fragment shaders have to be set before drawing. To use different shaders you either need different shader classes and set the right one or use a templated shader class and set the appropriate template shader instance.

Assuming the fragment shader will always be the same and does not need to be set:

CODE

g.vertex_shader<MyFirstShader>();
g.draw_triangles(...);
g.vertex_shader<MySecondShader>();
g.draw_triangles(...);


You can also have a larger shader with different functionality depending on template arguments like this:

CODE

template <bool enable_something>
struct VertexShader {
    ...
    void shade(...)
    {
        if (enable_something) {...}
        else {...}
    }
    ...
};

g.vertex_shader<VertexShader<true> >();
g.draw_triangles(...);
g.vertex_shader<VertexShader<false> >();
g.draw_triangles(...);



With a little C++ skill you can envision a way to have a shader library with all the specialized template shaders and an easy way to select from them based on render states. This is a technique I will be using for my current (yet to be announced) project.
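A minimal sketch of such a selection, reusing the g object and the templated VertexShader from the examples above (the render state is just a single bool here):

CODE

// Map a runtime render state to the matching compile-time shader instantiation.
void select_vertex_shader(bool enable_something)
{
    if (enable_something)
        g.vertex_shader<VertexShader<true> >();
    else
        g.vertex_shader<VertexShader<false> >();
}

// With more states one would typically switch over a small enum or bit mask
// and pick from a set of such template instantiations.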
 
Trenki said:
dockthepod said:
Thanks again Trenki! I have been using a custom span for a bit and it made a huge speed difference. Hopefully I'll have something to show in the near future.
You claim that putting the fragment shader directly into the affine_span function makes a huge speed difference. I find this weird as when the fragment shader function is declared inline the compiler will take care of this. Nevertheless I tried it with a simple shader which does texture mapping and manually put all of the fragment shader into the span function. For me performance was exactly the same. So I am wondering how you managed to get better performance? Maybe you did something wrong and the span function does not do the same as it used to do when the code was in the fragment shader?
Which gcc compiler were you testing this with? I tested with gcc 4.0.2 from devkitGP2X. The only thing I noticed was that gcc 4.1.1 from open2x makes a huge difference in my filltest.

If you are still sure that putting the code inside the affine_span function instead of the single_fragment function improves performance, would it be possible for you to provide the smallest possible test case that shows this behaviour? Maybe with a define to toggle between the span function and single_fragment.


I swear at one point I was seeing a big difference, but I'm not anymore :) I've been compiling with the open2x toolchain and maybe that helped out. Anyhow, sorry to alarm you :)
 
Hi!

I have updated my software renderer to version 1.6.2. I made some minor changes and you may have to adapt your code a bit because I made it more const-correct. Specifically, in the vertex shader the input structure is now const, as you are not supposed to write to it.

I also released Fusion2X. This is an OpenGL ES-CL 1.0 layer on top of my software renderer. It implements most of the useful functionality to get something on the screen, but it is still in an alpha state. Although I have a version with some optimizations, the raw software renderer is still at least 1.25 times faster.

Unfortunately I don't have any more time to spend on improving it right now because of uni and a shift in interests.
 
Great news. And it's good that you tell us you are unlikely to keep working on it, because many libraries that are thought to be alive turn out to be dead when you really use them.
As far as I can tell, you've written a robust and efficient piece of code here that's very valuable to us. Thanks again.
 
Trenki said:
Hi!

I have updated my software renderer to version 1.6.2. I made some minor changes and you may have to adapt your code a bit because I made it more const-correct. Specifically, in the vertex shader the input structure is now const, as you are not supposed to write to it.

I also released Fusion2X. This is an OpenGL ES-CL 1.0 layer on top of my software renderer. It implements most of the useful functionality to get something on the screen, but it is still in an alpha state. Although I have a version with some optimizations, the raw software renderer is still at least 1.25 times faster.

Unfortunately I don't have any more time to spend on improving it right now because of uni and a shift in interests.
Thanks Trenki, my associates and I have several projects on the go, but eventually we will use your excellent renderer. The OpenGL ES layer will help greatly with this once the time comes.
 
PokeParadox said:
Thanks Trenki, my associates and I have several projects on the go, but eventually we will use your excellent renderer. The OpenGL ES layer will help greatly with this once the time comes.
This is nice to hear. But unfortunately the OpenGL ES layer is just an alpha version and thus more like a proof of concept. In my Need For Speed 3 level renderer I get 3fps with the unoptimized version and 7fps with the optimized version, while I get 13fps using my software renderer directly.

There are also some known and most likely also some unknown bugs in the code. There is no trivial way to optimize this to get performance comparable to the underlying software renderer, but I still have one idea up my sleeve that I will try out in the near future. If it works out it may improve performance dramatically without me having to write a dynamic code generator, but it will put some workload on the developer using the OpenGL ES layer.
 
Hi all!

I've again made a small update to the software renderer. Now it is no longer necessary to tell the SpanDrawer16BitColorAndDepth what the fragment shader function wants to do. I also updated the code shown in the tutorials by removing the respective definitions. Now it is easier to use and has the same functionality and speed.

The most up to date version can always be retrieved from my homepage (http://www.trenki.net/).
 
Hello Trenki,

I tried to run Fusion2X but I can't get anything on the screen; it seems I don't understand how the context creation works... but it's relatively hard without documentation or examples. I don't know how to correctly create the context with SDL. Here is what I'm doing at initialization; I think I should use SetParam but I don't know how it works:

CODE
SDL_Init(SDL_INIT_VIDEO|SDL_INIT_JOYSTICK);
SDL_Surface *screen = SDL_SetVideoMode(320, 240, 16, /*SDL_FULLSCREEN|*/SDL_SWSURFACE);
SDL_JoystickOpen(0);
SDL_ShowCursor(SDL_DISABLE);

F2X_ContextCreateParams par;
par.size = 10;

F2X_Context* f2xc = F2X_CreateContext(&par);

glClearColorx(0, 0, 0, 0);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthox(0.0f, 1.0f, 0.0f, 1.0f, -1.0f, 1.0f);

glVertexPointer(3, GL_FLOAT, 0, square);
glEnableClientState(GL_VERTEX_ARRAY);


can someone help me please :) ?
 
Grz- said:
Hello Trenki,

I tried to run Fusion2X but I can't get anything on the screen; it seems I don't understand how the context creation works... but it's relatively hard without documentation or examples. I don't know how to correctly create the context with SDL. Here is what I'm doing at initialization; I think I should use SetParam but I don't know how it works:

I know, there is no documentation because Fusion2X was a proof-of-concept project and is still in an alpha state, and I don't have any plans to work on it anytime soon. While you can use it for some small test applications, its performance really sucks. The "optimized" version runs a lot faster, but with the pure software renderer you can still achieve at least twice the rendering speed when you put enough work into it.

Nevertheless, if you want to try it you can use the following code. color_buffer and depth_buffer are two 16-bit SDL_Surfaces with the same dimensions that will be rendered to.

CODE

F2X_Context *ctx = F2X_CreateContext(0);
F2X_MakeCurrent(ctx);

// Describe the two SDL surfaces (color_buffer, depth_buffer) to Fusion2X.
F2X_RenderSurface color_surface;
memset(&color_surface, 0, sizeof(color_surface));
color_surface.format = F2X_FORMAT_UINT16_R5_G5_A1_B5;
color_surface.data = color_buffer->pixels;
color_surface.width = color_buffer->w;
color_surface.height = color_buffer->h;
color_surface.pitch = color_buffer->pitch;

F2X_RenderSurface depth_surface;
memset(&depth_surface, 0, sizeof(depth_surface));
depth_surface.format = F2X_FORMAT_UINT16_R5_G5_A1_B5;
depth_surface.data = depth_buffer->pixels;
depth_surface.width = depth_buffer->w;
depth_surface.height = depth_buffer->h;
depth_surface.pitch = depth_buffer->pitch;

F2X_SetParam(0, F2X_COLOR_BUFFER, &color_surface);
F2X_SetParam(0, F2X_DEPTH_BUFFER, &depth_surface);



After this you should be able to use the OpenGL ES commands and render to the specified surfaces.
 
I tried to use your texture code with images that have an alpha channel: a 1-bit alpha GIF and an 8-bit alpha PNG. It doesn't work; how can I render triangles with 1-bit alpha textures? I need them to be transparent.
 
efegea said:
I tried to use your texture code with images that have an alpha channel: a 1-bit alpha GIF and an 8-bit alpha PNG. It doesn't work; how can I render triangles with 1-bit alpha textures? I need them to be transparent.
You obviously have to implement this yourself with an alpha test in the fragment shader. Preferably you store the texture information in R5_G5_A1_B5 format (as the load_surface_r5g5a1b5 function from this thread does); this way you can use the same texture with and without the alpha test. When you fetch a texel from the texture you get an unsigned short, and you then have to test for the alpha bit with a bitwise AND operation.

CODE

if (!(color & 0x20)) skip_pixel;  // alpha bit (0x20) not set -> the texel is transparent



So all you need to do is adapt the fragment shader so that it accounts for the alpha bit in textures.
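A minimal sketch of such a fragment shader body, assuming an R5_G5_A1_B5 texture stored as unsigned shorts (function name and parameters are illustrative):

CODE

// Fetch a texel and apply the alpha test before writing to the color buffer.
inline void shade_fragment(const unsigned short *texture, int texel_index,
                           unsigned short *dest)
{
    const unsigned short color = texture[texel_index]; // R5_G5_A1_B5 texel
    if (!(color & 0x20))
        return;        // alpha bit clear -> fully transparent, skip the pixel
    *dest = color;     // otherwise write the texel color
}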
 
Thanks, your tip worked great, I can now render polys with the alpha bit. Useful for things like billboards or OSDs..

But now I'm trying to adapt GLSL shaders to your shader system, e.g. a cel-shading shader, but I can't get it working because of the differences between them.

In GLSL you can use the individual components of a vector (x, y, z), or you can combine them like xy, xz, or xyz and use the result as a single value. Is it possible to do that with your vector math library? I know it's not possible to just write myvector.xyz, but is there another way to do it? I don't know how GLSL unifies the values..


EDIT: I got the toon shader working, although I tried a dynamic lighting shader (just for fun) and it didn't work..

I've used the SDL_MapRGBA function to get a single color value from the 4 components of a "color" vector. Although I don't know if the fact that it returns a Uint32 while the color pointer is an unsigned short could be causing the problems I had..
 