GP2X Trenki's Software Renderer Tutorial


Trenki

Hi guys!

I have been developing my software renderer for quite some time now and also had two threads about it on this board. In the last couple of months I optimized and refactored the renderer quite a lot, but now the interface should be mostly stable.

I thought about the possibility of making a 3D game/demo contest for it to boost its popularity, but I don't know if that will ever happen. In any case I thought it would be good to provide some tutorial material, so that it would be easier for new users to actually use my renderer for something good.

So I will present a first and very simple tutorial which shows how to use my software renderer here on this forum.

But first I will have to brag about the features my software renderer supports:
  • Written in pure C++, so it should compile without problems on many platforms. For me it works on the GP2X and on the PC without any code changes. I also tried to compile it for the 940 once and the compiler didn't complain.
  • The renderer is very modular and provides many places where you can customize its behavior by simply using different classes and/or overriding functions. Where possible, compile-time polymorphism is used for this, so there is no virtual function call overhead in these cases.
  • The source code is very compact (only ~2200 lines of C++ code including comments)
  • It only uses fixed point arithmetic internally for maximum speed.
  • You can implement nearly any imaginable effect, since it supports vertex and pixel shaders written in C++ and the engine allows you to interpolate an arbitrary number of varying variables per triangle (color, texcoords, etc.). You could easily do per-pixel lighting or use multiple render targets if you wanted. The shaders are actually the most powerful feature of the renderer, since they allow you to implement your own specialized rasterizers.
  • There are some optimization functions which allow you to tune the perspective correction quality or turn it off, and it is also possible to do progressive interlacing, which can cut the per-pixel cost per frame in half.
Ok, now that you know about the features, how do you exploit them? The following tutorial will give you an introduction.

I will be using SDL together with my renderer, but this is not a requirement; the renderer is completely independent of it. We also have to include some headers for the software renderer. The whole API of the renderer is located in the swr namespace.

CODE

#include "renderer/geometry_processor.h"
#include "renderer/rasterizer_subdivaffine.h"
#include "renderer/span.h"

// the software renderer stuff is located in the namespace "swr" so include
// that here
using namespace swr;


Next I will show you the application's main function, which shows how to draw a single colored triangle. After that I will fill in the blanks and show you how the vertex and fragment shaders have to be written for the final application to actually compile.

First we simply initialize SDL with a 16 bit color buffer.
CODE

int main(int ac, char *av[]) {
// Initialize SDL (without any error handling)
SDL_Init(SDL_INIT_VIDEO);
SDL_Surface *screen = SDL_SetVideoMode(640, 480, 16, 0);


Next we define the three vertices of our triangle together with their colors. I still use floats in this minimal example, but in a real application you would use fixed point values to specify the vertex coordinates. You will see later how the Vertex structure actually looks.
CODE

// The three vertices of the triangle and the colors
Vertex vertices[] = {
{0.0f, 0.5f, 255, 0, 0},
{-.5f, -.5f, 0, 255, 0},
{0.5f, -.5f, 0, 0, 255},
};


We also need indices for our triangle.
CODE

// The indices we need for rendering
unsigned indices[] = {0, 1, 2};


After this we initialize the renderer, so that we can actually use it. This involves creating a Rasterizer subclass and a GeometryProcessor class which will be configured with the rasterizer. You always have to set the viewport and the clipping rectangle to valid regions.
CODE

// Create a rasterizer class that will be used to rasterize primitives
RasterizerSubdivAffine r;

// Create a geometry processor class used to feed vertex data.
GeometryProcessor g(&r);

// It is necessary to set the viewport
g.viewport(0, 0, screen->w, screen->h);

// Set the cull mode (CW is already the default mode)
g.cull_mode(GeometryProcessor::CULL_CW);

// It is also necessary to set the clipping rectangle
r.clip_rect(0, 0, screen->w, screen->h);


Before actually being able to render anything, the vertex and fragment shaders have to be set. The following code does this. As you can see, you set the vertex shader on the GeometryProcessor while you set the fragment shader on the Rasterizer subclass.
CODE

// Set the vertex and fragment shaders
g.vertex_shader<VertexShader>();
r.fragment_shader<FragmentShader>();


Next we have to tell the GeometryProcessor where our vertex data lies in memory and how it is laid out (just like in OpenGL with glVertexPointer). You can have more than just one vertex attribute pointer, but in this example a single one is enough.
CODE

// Specify where our data lies in memory
g.vertex_attrib_pointer(0, sizeof(Vertex), vertices);
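
For illustration only (this is not part of the tutorial code): if you kept, say, positions and colors in two separate arrays, a second attribute stream could be bound like this; the vertex shader would then set attribute_count to 2 and read the color data through in[1]. The Position/Color structs and the positions/colors arrays are hypothetical names.
CODE

// Hypothetical second attribute stream: positions in stream 0, colors in
// stream 1 (Position and Color are illustrative structs, not from the tutorial)
g.vertex_attrib_pointer(0, sizeof(Position), positions);
g.vertex_attrib_pointer(1, sizeof(Color), colors);
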


The rest of the main function simply calls the draw_triangles function passing the indices and then displays the result on the screen.
CODE

// draw the triangle
g.draw_triangles(3, indices);

// Show everything on screen
SDL_Flip(SDL_GetVideoSurface());

// Wait for the user closing the application
SDL_Event e;
while (SDL_WaitEvent(&e) && e.type != SDL_QUIT);

// Quit SDL
SDL_Quit();
return 0;
}


This was all for the main function. Most of the work was the initialization, but that is normally only required once in an application. Next we will have a look at the Vertex structure, the VertexShader and the FragmentShader classes.

The Vertex structure is a POD and just holds the data we associate with a single vertex.
CODE

// Our vertex structure which will be used to store our triangle data.
struct Vertex {
float x, y;
int r, g, b;
};


The vertex shader is a bit more complex. You can see the source for the whole vertex shader below. There are two static const fields which tell the renderer how many attribute streams the vertex shader will use and how many varyings (OpenGL term) it will output into the pipeline. If you only output a 2D texture coordinate, that value would have to be set to 2. This value is used by the clipping stage to do interpolation if necessary; if you set the wrong value you might get strange artifacts when clipping is being done. The only thing left is the static shade function of the vertex shader. It takes the vertex input (which is an array of attribute stream pointers) and has to write to the out variable. The stream pointers have void* as their type, so they need to be cast to the correct type before they can be used.

The shader always has to write the x, y, z and w variables in the out structure. These are integers and interpreted as 16.16 fixed point numbers. Therefore the float values from the input structure are converted into fixed point before their value is assigned to the out variables.

In this example we interpolate three color values (r, g, b). The values in the vertex structure are in the [0, 255] range. These values are shifted left by 16 bits before being written to the output structure, since this improves the accuracy with which they are interpolated (the lower bits are not interpolated very precisely).
CODE

// This is the vertex shader which is executed for each individual vertex that
// needs to be processed.
struct VertexShader {

// This specifies that this shader is only going to use 1 vertex attribute
// array. Up to Renderer::MAX_ATTRIBUTES arrays can be used.
static const unsigned attribute_count = 1;

// This specifies the number of varyings the shader will output. This is
// for instance used when clipping.
static const unsigned varying_count = 3;

// This static function is called for each vertex to be processed.
// "in" is an array of void* pointers with the location of the individial
// vertex attributes. The "out" structure has to be written to.
static void shade(const GeometryProcessor::VertexInput in, GeometryProcessor::VertexOutput &out)
{
// cast the first attribute array to the input vertex type
Vertex &v = *static_cast<Vertex*>(in[0]);

// x, y, z and w are the components that must be written by the vertex
// shader. They all have to be specified in 16.16 fixed point format.
out.x = static_cast<int>((v.x * (1 << 16)));
out.y = static_cast<int>((v.y * (1 << 16)));
out.z = 0;
out.w = 1 << 16;

// The vertexoutput can have up to Rasterizer::MAX_VARYING varying
// parameters. These are just integer values which will be interpolated
// across the primitives. The higher bits of these integers will be
// interpolated more precisely, so the values [0, 255] are shifted left.
out.varyings[0] = v.r << 16;
out.varyings[1] = v.g << 16;
out.varyings[2] = v.b << 16;
}
};


The last thing to show to make this tutorial complete is the FragmentShader class. Below is the whole fragment shader. The fragment shader has to derive from a span drawer class (defined in renderer/span.h). The following example only shows a specialized span drawer which works on a 16 bit color and depth buffer. There is also a more generic span drawer which you can use with a slightly different interface (look at the full example on my homepage).

The static consts at the beginning tell the renderer how many varying variables the fragment shader will use and whether z should be interpolated or not. In our example we do not need a depth buffer, therefore we also do not need depth to be interpolated.

The begin_triangle function is a callback which will be called for each triangle to be rasterized. The example does not use it, but one could e.g. compute the mipmap factor on a per-triangle basis in this function. The function still has to be defined and needs to be static.

The next four static const bools define what the single_fragment function will be able to do. The names should be self explanatory.

The single_fragment function is the core of the FragmentShader class and will be called for each pixel. In this example it simply computes a color value to be output and writes it to the color variable. If you had a depth buffer and wanted to do depth testing you would have to implement it in this function.

In the fragment shader the interpolated color values are clamped to make sure the range is correct, since interpolation can push them out of range even when they were in the proper range to begin with (it was that way the last time I checked; I would have to check again).

The last two *_pointer functions are the functions which the specialized SpanDrawer16BitColorAndDepth requires. Their task is simply to return a pointer to the pixel at coordinates x, y in the color or depth buffer respectively.

CODE

// This is the fragment shader
struct FragmentShader : public SpanDrawer16BitColorAndDepth<FragmentShader> {
// varying_count = 3 tells the rasterizer that it only needs to interpolate
// three varying values (the r, g and b in this context).
static const unsigned varying_count = 3;

// We don't need to interpolate z in this example
static const bool interpolate_z = false;

// Per triangle callback. This could for instance be used to select the
// mipmap level of detail. We don't need it but it still needs to be defined
// for everything to work.
static void begin_triangle(
const IRasterizer::Vertex& v1,
const IRasterizer::Vertex& v2,
const IRasterizer::Vertex& v3,
int area2)
{}

static void single_fragment(const IRasterizer::FragmentData &fd, unsigned short &color, unsigned short &depth)
{
SDL_Surface *screen = SDL_GetVideoSurface();

// Convert from 16.16 color format to [0,255]
// Here the colors are clamped to the range[0,255]. If this is not done
// here we can get very small artifacts at the edges.
int r = std::min(std::max(fd.varyings[0] >> 16, 0), 255);
int g = std::min(std::max(fd.varyings[1] >> 16, 0), 255);
int b = std::min(std::max(fd.varyings[2] >> 16, 0), 255);
color = SDL_MapRGB(screen->format, r, g, b);
}

// this is called by the span drawing function to get the location of the color buffer
static void* color_pointer(int x, int y)
{
SDL_Surface *screen = SDL_GetVideoSurface();
return static_cast<unsigned short*>(screen->pixels) + x + y * screen->w;
}

// this is called by the span drawing function to get the location of the depth buffer
static void* depth_pointer(int x, int y)
{
// We don't use a depth buffer
return 0;
}
};


Ok, this was the whole tutorial and should have shown you the very basics of my software renderer.
I would be happy to get some feedback on this, and I also encourage you to go to my website and download the renderer and the example pack. It contains the whole source of the above example and also shows the GenericSpanDrawer. There is also a demo which shows how to render a large (5800 triangles) model, and it runs at 19-20 fps. Finally, I have also put the improved source and executables for my GBAX 2007 demo on my website for you to check out.

Depending on the feedback I may do another tutorial, maybe showing how to do texturing or something else if someone has any ideas.

Trenki's Programming Page.
 
Very nice! Thanks! I liked it a lot! I greatly appreciate all your work.

I was working on a 3d engine, perhaps I should code a driver for your software renderer.

Your renderer doesn't have immediate mode, right? I have not found a way to render MD2 (Quake 2) models using vertex arrays, because the format lacks indices...

I have to work on another format loader then :D
 
efegea said:
Very nice! Thanks! I liked it a lot! I greatly appreciate all your work.

I was working on a 3d engine, perhaps I should code a driver for your software renderer.

Your renderer doesn't have immediate mode, right? I have not found a way to render MD2 (Quake 2) models using vertex arrays, because the format lacks indices...

I have to work on another format loader then :D

No, mine does not have an "immediate mode", but you can simply build a list of the vertices you want to send and a dummy index buffer with (0, 1, 2, 3, ...) in it.
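
As a rough sketch of that idea (a minimal sketch; md2_vertices and vertex_count are just placeholder names for your loader's flat triangle list):
CODE

// Hypothetical sketch: draw an unindexed triangle list (e.g. from an MD2
// loader) by generating a trivial 0, 1, 2, ... index buffer once.
unsigned *dummy_indices = new unsigned[vertex_count];
for (unsigned i = 0; i < vertex_count; ++i)
    dummy_indices[i] = i;

g.vertex_attrib_pointer(0, sizeof(Vertex), md2_vertices); // md2_vertices: your flat vertex array
g.draw_triangles(vertex_count, dummy_indices);            // vertex_count must be a multiple of 3

delete [] dummy_indices;
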
 
Anyway I find the whole vertex and pixel shader thing too difficult. I tried to implement it in my 3D engine and got it working; well, when I say working, I can see something moving on the screen, but that something is a lot of weird coloured polygons that I still don't know are the correct ones. I know it's because I have to modify the shaders, but I have no clue how to do that, i.e. I have to add support for model matrix transformations (is that what it's called? I mean: glTranslate, glRotate...) and texture support.

At the moment what I see is a mess of blue, green and many other coloured polygons (I'm trying to render a CAL3D model using vertex arrays).
 
Yes, you would need model and projection matrices to transform the incoming vertices into clip space. If you want to see how this is done, take a look at the cow demo from my homepage.
The vertex and pixel shaders may be a bit hard to grasp in the beginning, but after you get how they work it makes things a lot easier.
 
Yes, the inclusion of shader processing is a very powerful feature.
I really liked the tutorial, Trenki! I'm sure to be bookmarking this thread.
I think texturing, possibly combined with SDL_image, may be a useful follow-up tutorial?
 
This is really cool! I think I'm going to play around with this. Thanks for the tutorial!
 
Very nice that you've written a tutorial. Hopefully I'll be making the 3D plunge pretty soon, maybe this will inspire me.
 
efegea said:
Anyway I find the whole vertex and pixel shader thing too difficult.
At first glance I thought so, too. After really working with it, I found that you can change the shaders incredibly quickly or recreate them using a template, so you can optimize for non-textured or non-lighting render modes. I implemented this library in GLBasic for POLYVECTOR as well as 3D, and I'm impressed by how fast and bug-free it works.

Also, with custom shaders you have the full debug tools at hand. With (e.g.) TinyGL you have a huge file with thousands of #defines, which is terribly hard to debug.

Awesome work!
 
I played around with this yesterday on a 5-hour plane flight (made the time pass quickly :) ). I got it built for OSX but had to change a couple of lines due to some already defined typedefs. Good stuff. I got some polys rendering and got my head around the shaders. Thanks again for this renderer and tutorial!
 
Hello guys!

This post will be a short intermission before I do the texturing tutorial, and it will show you some nice features of my renderer which you may be able to exploit if you have some creative ideas.

The last tutorial showed how to render a simple triangle where the rgb colors were interpolated to produce a nice color gradient. The actual color computation was done in the fragment shader, which you can see below.

CODE

static void single_fragment(const IRasterizer::FragmentData &fd, unsigned short &color, unsigned short &depth)
{
SDL_Surface *screen = SDL_GetVideoSurface();
int r = std::min(std::max(fd.varyings[0] >> 16, 0), 255);
int g = std::min(std::max(fd.varyings[1] >> 16, 0), 255);
int b = std::min(std::max(fd.varyings[2] >> 16, 0), 255);
color = SDL_MapRGB(screen->format, r, g, b);
}



Apart from the SDL_Surface stuff (which should not be there in a real app) we had to clamp the three rgb values to the appropriate range. If we didn't do this, some pixel colors at the edges would be messed up.

Having to clamp the colors like this in the fragment shader is quite expensive though, so we would like to avoid it. It turns out there is a way to do this with my renderer.

Instead of clamping the colors at the per-pixel level we can do it at the per-span level, if it actually turns out to be necessary. The default span handling function is implemented by the span drawer from which you derive your fragment shader and simply calls the fragment shader when appropriate. You can however override this function and do your own per-span computation in it.

The following shows the overridden span function.

CODE

static void affine_span(int x, int y, IRasterizer::FragmentData fd, const IRasterizer::FragmentData &step, unsigned n)
{
if (!n) return;

IRasterizer::FragmentData s = step;
clamp_varying(fd, s, n, 0, 0, 255 << 16);
clamp_varying(fd, s, n, 1, 0, 255 << 16);
clamp_varying(fd, s, n, 2, 0, 255 << 16);
SpanDrawer16BitColorAndDepth::affine_span(x, y, fd, s, n);
}



The above function can be inserted directly into the fragment shader class. The function simply clamps the color values to the correct range and then calls the base span handling function, which in turn calls the fragment shader, which now does not have to clamp the color values any more. The test "if (!n) return" has to be done because it can happen that the span length is 0 but the span function is called nevertheless.

Next I will show you the actual clamp_varying function.

CODE

static inline void clamp_varying(
IRasterizer::FragmentData &fd,
IRasterizer::FragmentData &step,
unsigned n,
int var,
int min,
int max)
{
bool fixit = false;

if (fd.varyings[var] < min || fd.varyings[var] > max) {
fixit = true;
fd.varyings[var] = std::min(std::max(fd.varyings[var], min), max);
}

int right = fd.varyings[var] + n * step.varyings[var];
if (right < min || right > max) {
fixit = true;
right = std::min(std::max(right, min) , max);
}

// note the "signed(n)" here. We want to do a signed division and therefore all the operands need to
// be signed. Since n is unsigned it needs to be casted explicitly. The compiler seems to silently
// ignore this issuse even when all warnings are enabled.
if (fixit) step.varyings[var] = (right - fd.varyings[var]) / signed(n);

// The above does a normal signed integer division. Since n will always be smaller than some maximum
// value (the screen width presumably) this division can be sped up.
}



The above function simply checks if clamping the values on either side of the span is necessary and does it if required. If any clamping was done the new step values have to be derived. This is done with a division by n.

With the above code we have now reduced the per-pixel cost but introduced the possibility of up to three costly divisions per span. It will probably still be faster than clamping per pixel though.

There is also a way to reduce the cost of the division by defining a special division function. We know that n will not be larger than the screen width in the normal case. We can exploit this knowledge with the following function.

CODE

int divide(int num, int den)
{
switch (den) {
case 1: return num;
case 2: return num / 2;
case 3: return num / 3;
case 4: return num / 4;
...
case 320: return num / 320;
case 321: return num / 321;
...
}

return num / den;
}



You may now think "WTF! Why would this buy us anything? The divisions are still in there."

Well, the divisions are still in there, yes. But the compiler (GCC at least) will transform divisions by constant values into multiplications plus some bit trickery, and the big switch statement will be transformed into a jump table. In the end it turns out to be a really good solution, and it even works for numbers which are not in the switch.

I hope you liked this tutorial as well and will come back when the texturing tutorial is done.
 
Very nice, but I still find your renderer very difficult to use, at least for beginners. Anyway, I like it a lot.

In your renderer, you have to know how the whole pipeline of a renderer works and implement it in the shaders. It feels like writing the renderer myself.

I know the shaders are a very powerful feature and I like them, but they are not optional, you have to write them. In OpenGL you can write a game without using shaders, and without having to learn how the renderer works internally (although it is nice to know that).

I see this as a problem (the difficulty), because the GP2X is not a device with a lot of 3D games, so if we want more of them, we have to make it easy to develop one. If not, people (developers) will run scared :ph34r:

Anyway, I want to learn to use them and will try to code something. Waiting impatiently for your texturing tutorial :) (and searching for the way to do a glTranslate and glRotate in your renderer. I looked at your cow example, but... :()
 
efegea said:
Very nice, but I still find your renderer very difficult to use, at least for beginners. Anyway, I like it a lot.

In your renderer, you have to know how the whole pipeline of a renderer works and implement it in the shaders. It feels like writing the renderer myself.

I know the shaders are a very powerful feature and I like them, but they are not optional, you have to write them. In OpenGL you can write a game without using shaders, and without having to learn how the renderer works internally (although it is nice to know that).

I see this as a problem (the difficulty), because the GP2X is not a device with a lot of 3D games, so if we want more of them, we have to make it easy to develop one. If not, people (developers) will run scared :ph34r:

Anyway, I want to learn to use them and will try to code something. Waiting impatiently for your texturing tutorial :) (and searching for the way to do a glTranslate and glRotate in your renderer. I looked at your cow example, but... :()
Well, yes, you have to know more details when you are going to use shaders. This is no different from DX or OpenGL 2 with shaders, but shaders give you all the power and control you can get. DX10 for instance is shaders-only, and new engines on the PC are also shaders-only. GL3 will also be shaders-only. Even HL2 (which is old now) only included a fixed-function pipeline renderer for compatibility with crappy GFX cards.

I can see that the shader approach might be a bit intimidating for beginners, since it leaves a lot of work to be done. On the other hand it allows me to keep the software renderer as small and optimized as possible, and the library user can tune the shaders for his particular needs, giving potentially better performance.

glTranslate and glRotate are actually quite simple to implement once you know what they actually do. OpenGL has a matrix stack, and both functions construct a new matrix and multiply it with the current matrix on the stack. Vertices are transformed by multiplying them with the matrix on top of the stack (this means you would have to do a matrix * vector multiplication in the vertex shader).

In the Cow demo the rotation is achieved even more simply by doing the equivalent of a gluLookAt. This matrix is concatenated with the perspective projection matrix, since the result is needed for the vertex transformation in the vertex shader.
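
To give a rough idea of what that looks like in practice, here is a minimal sketch (not taken from the Cow demo or the renderer itself): it assumes the Vertex struct also carries a z coordinate and that the application fills a static mvp array with the combined modelview-projection matrix, e.g. built up from a small matrix stack.
CODE

// Minimal sketch: transform the incoming vertex by a 4x4 modelview-projection
// matrix (row-major float[16], filled in by the application) before converting
// to the 16.16 fixed point format the rasterizer expects.
struct VertexShader {
    static const unsigned attribute_count = 1;
    static const unsigned varying_count = 0;

    static float mvp[16]; // combined modelview-projection matrix

    static void shade(const GeometryProcessor::VertexInput in, GeometryProcessor::VertexOutput &out)
    {
        Vertex &v = *static_cast<Vertex*>(in[0]);

        // matrix * vector with w = 1 (the equivalent of the fixed function
        // modelview/projection transform in OpenGL)
        float x = mvp[0]  * v.x + mvp[1]  * v.y + mvp[2]  * v.z + mvp[3];
        float y = mvp[4]  * v.x + mvp[5]  * v.y + mvp[6]  * v.z + mvp[7];
        float z = mvp[8]  * v.x + mvp[9]  * v.y + mvp[10] * v.z + mvp[11];
        float w = mvp[12] * v.x + mvp[13] * v.y + mvp[14] * v.z + mvp[15];

        out.x = static_cast<int>(x * (1 << 16));
        out.y = static_cast<int>(y * (1 << 16));
        out.z = static_cast<int>(z * (1 << 16));
        out.w = static_cast<int>(w * (1 << 16));
    }
};
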

QUOTE

I see this as a problem (the difficulty), because the GP2X is not a device with a lot of 3D games, so if we want more of them, we have to make it easy to develop one. If not, people (developers) will run scared :ph34r:


Wait for my next project ... B)
 
Yes, programmable pipelines are the way to go from here on out. So much more flexible! efegea, just spend a little time figuring out how they work. Basically all the gl stuff like glTranslate does is push a matrix onto the stack. You can do the same thing pretty easily and then transform your verts in the vertex shader using the transform matrix you've built up.
 
I don't understand why default shaders that provide the same functionality as the fixed-function pipeline can't be provided. If this were done, moving to a "shaders only" environment would be nearly transparent for users who don't want to write their own...
 
Trenki said:
Wait for my next project ... B)
I want to know what it is! Now!! :D


You have convinced me, I'll try to learn how it works and then write my own shaders. Any good resources for learning? Web pages, articles, tutorials, books?
 
Hi!

As I promised, here comes the texture mapping tutorial. It will show you how to texture map a single fullscreen quad. Naturally this can be extended to texture map arbitrary geometry.

The first tutorial already showed the basics of how to use my software renderer, so I will only explain the relevant changes I made for this tutorial and explain the details of the changes.

As always the full source code can be found on my homepage.

Now let's start.
I changed the Vertex structure from the first tutorial to suit this tutorial's needs. Since we do not need the color values any more, I removed them and gave the structure two float variables which will be used to hold the texture coordinates:

CODE

struct Vertex {
float x, y;
float tx, ty;
};


The main function has been adapted accordingly to store the four vertices of our quad together with their texture coordinates:

CODE

// the four vertices of the textured quad together with the texture coordinates
Vertex vertices[] = {
{-1.0f, 1.0f, 0.0f, 0.0f},
{-1.0f, -1.0f, 0.0f, 1.0f},
{ 1.0f, -1.0f, 1.0f, 1.0f},
{ 1.0f, 1.0f, 1.0f, 0.0f}
};

// the indices we need for rendering
unsigned indices[] = {0, 1, 2, 0, 2, 3};


The vertex and fragment shaders obviously had to be changed. The vertex shader is shown below:

CODE

// this is the vertex shader which is executed for each individual vertex that
// needs to be processed.
struct VertexShader {

// this specifies that this shader is only going to use 1 vertex attribute
// array. Up to Renderer::MAX_ATTRIBUTES arrays can be used.
static const unsigned attribute_count = 1;

// this specifies the number of varyings the shader will output. This is
// for instance used when clipping.
static const unsigned varying_count = 2;

// this static function is called for each vertex to be processed.
// "in" is an array of void* pointers with the location of the individial
// vertex attributes. The "out" structure has to be written to.
static void shade(const GeometryProcessor::VertexInput in, GeometryProcessor::VertexOutput &out)
{
// cast the first attribute array to the input vertex type
Vertex &v = *static_cast<Vertex*>(in[0]);

// x, y, z and w are the components that must be written by the vertex
// shader. They all have to be specified in 16.16 fixed point format.
out.x = static_cast<int>((v.x * (1 << 16)));
out.y = static_cast<int>((v.y * (1 << 16)));
out.z = 0;
out.w = 1 << 16;

// bring the texture coordinates into the appropriate range for the rasterizer.
// this means they have to be converted to fixed point and premultiplied with the width/height of
// the texture minus 1. Doing this in the vertex shader saves us from doing it in the fragment
// shader, which makes things faster.
out.varyings[0] = static_cast<int>(v.tx * texture->w_minus1 * (1 << 16));
out.varyings[1] = static_cast<int>(v.ty * texture->h_minus1 * (1 << 16));
}

static Texture *texture;
};


The vertex shader now only outputs two instead of three varyings, which is indicated by the static varying_count member. The vertex coordinates (which are stored as floats) are taken and converted to fixed point (no other transformations are applied).

The texture coordinates are also converted to fixed point, but additionally they are multiplied with the width or height of the texture minus one. As a result the integer texel coordinate to be used for texturing is stored in the upper 16 bits of the integer varying. Doing this multiply in the vertex shader saves us from doing the multiplication in the fragment shader, which gives us better performance.

In the fragment shader only the single_fragment function changed (apart from the varying_count variable, which is set to two this time):

CODE

static void single_fragment(const IRasterizer::FragmentData &fd, unsigned short &color, unsigned short &depth)
{
// sample the texture and write the color information
color = texture->sample_nearest(fd.varyings[0], fd.varyings[1]);
}


All the fragment shader does is call the sample_nearest function of the texture member, passing the interpolated texture coordinates stored in fd.varyings[0] and fd.varyings[1]. This function looks up the texel in the texture map and returns its value as an unsigned short.

Next we will take a look at the Texture class which contains the sample_nearest function:

CODE

// our texture class
struct Texture {
SDL_Surface *surface;

unsigned w_log2, h_log2;
unsigned w_minus1, h_minus1;

// create a texture from an SDL_Surface
Texture(SDL_Surface *s)
{
surface = s;

w_log2 = log2_of_pot(s->w);
h_log2 = log2_of_pot(s->h);

w_minus1 = s->w - 1;
h_minus1 = s->h - 1;
}

~Texture()
{
SDL_FreeSurface(surface);
}

// returns log2 of a number which is a power of two
unsigned log2_of_pot(unsigned v) const
{
unsigned r = 0;
while (!(v & 1)) {
v >>= 1;
++r;
}
return r;
}

// samples the texture using the given texture coordinate.
// the integer texture coordinate is given in the upper 16 bits of the x and y variables.
// it is NOT in the range [0.0, 1.0] but rather in the range of [0, width - 1] or [0, height - 1].
// texture coordinates outside this range are wrapped.
unsigned short sample_nearest(int x, int y) const
{
x >>= 16;
y >>= 16;
x &= w_minus1;
y &= h_minus1;
return *(static_cast<unsigned short*>(surface->pixels) + ((y << w_log2) + x));
}
};


The constructor of the class takes an SDL_Surface which is assumed to contain a 16-bit color image. Some values are computed from the surface dimensions (which must be a power of two in this example) and stored in the class's members. The class also assumes that surface->pitch == surface->w * 2.

The most important function is sample_nearest. It takes two ints specifying the x and y texture coordinates and returns the texel for this texture coordinate.
First the lower 16 bits of the texture coordinate are discarded (as these are only needed for texture coordinate interpolation). The two bitwise AND instructions clamp the texture coordinates to the allowed range and make them wrap if they are outside the [0.0, 1.0] range. This is like specifying GL_REPEAT in OpenGL.
The last line computes the address of the texel in the SDL_Surface and returns this texel to the caller.
NOTE: We can compute the address of the pixel with a bit shift instead of a multiply since we know the texture dimensions are a power of two. This is the reason why POT textures are faster than NPOT textures. It is also the reason why OpenGL and DirectX were restricted to POT textures until recently. And even today POT textures are supposed to be faster.
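
To make that concrete, here is a small illustration (not part of the tutorial code) of the two addressing schemes; it assumes width == (1 << w_log2) and w_minus1 == width - 1, just like in the Texture class above.
CODE

// For power-of-two textures a modulo becomes a bitwise AND and a multiply
// becomes a shift. Both functions return the same offset for valid inputs.
unsigned texel_offset_generic(unsigned x, unsigned y, unsigned width)
{
    return y * width + (x % width);        // multiply + modulo (works for NPOT too)
}

unsigned texel_offset_pot(unsigned x, unsigned y, unsigned w_log2, unsigned w_minus1)
{
    return (y << w_log2) + (x & w_minus1); // shift + AND, what sample_nearest does
}
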

The only other change in the main function is the loading of the texture which is done like this:
CODE

// load the texture file
Texture *texture = new Texture(load_surface_r5g5a1b5("texture.png"));

// make the shaders know which texture to use
VertexShader::texture = texture;
FragmentShader::texture = texture;


The texture is loaded and the vertex and fragment shaders are told that this texture is going to be used.

load_surface_r5g5a1b5 is a function which loads an image file and converts it to a surface format with nice properties. As the name of the function suggests, it stores rgb data with 5 bits per component and also has a one-bit alpha channel. The normal framebuffer is stored in an R5G6B5 format, but the R5G5A1B5 format is directly compatible with the framebuffer and can even be used for textures with an alpha channel to do an alpha test without having to convert between formats.

The function implementation is as follows:
CODE

// loads an image file using SDL_image and converts it to a r5g5a1b5 format.
// this format can be directly copied to the screen and one can also use the
// embedded alpha bit for an alpha test.
SDL_Surface* load_surface_r5g5a1b5(const char *filename)
{
const Uint32 rmask = 0xF800;
const Uint32 gmask = 0x7C0;
const Uint32 bmask = 0x1F;
const Uint32 amask = 0x20;

SDL_Surface *result = 0;
SDL_Surface *img = 0;
SDL_Surface *dummy = 0;

img = IMG_Load(filename); if (!img) goto end;
dummy = SDL_CreateRGBSurface(0, 0, 0, 16, rmask, gmask, bmask, amask); if (!dummy) goto end;
result = SDL_ConvertSurface(img, dummy->format, 0);

end:
if (img) SDL_FreeSurface(img);
if (dummy) SDL_FreeSurface(dummy);

return result;
}



This would be the end of the texturing tutorial. I will just discuss some performance issues one should know about.
  • The use of any SDL functions inside the shaders is discouraged. It can potentially slow things down.
  • The texture's sample_nearest function, even though it is already optimized a bit (slower ways to do it exist), is not optimal either. It is independent of the texture size and thus has to load three variables into registers. If one hard-codes the texture dimensions these variables become constants and a screen fill gets ~30% faster. So it would be wise to generate a shader for each different texture size and select the appropriate shader at runtime.
  • The sample_nearest function always does the wrapping by doing two bitwise AND instructions. If one knows that the texture coordinates do not need to be wrapped because they are already in the correct interval, one could omit these ANDs and gain additional speed. One could determine the need for wrapping at the span level.
  • Performance heavily relies on the compiler's ability to inline code. So make sure that if you ever call functions from the shader they are declared inline (functions implemented in class/struct bodies are inline by default). Normal function calls or function calls via function pointers could hurt performance.
  • The tutorial uses floats for the vertex and texture coordinates. This requires the vertex shader to convert them to fixed point before continuing. In a real application the vertex data should be supplied in fixed point only, since the GP2X does not have hardware support for floats (see the sketch after this list).
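
As a small sketch of that last point (FixedVertex is just an illustrative name, not part of the tutorial code), a fixed point vertex layout could look like this; the vertex shader then simply copies the values through instead of multiplying floats by (1 << 16):
CODE

// Sketch of a vertex that stores its data in 16.16 fixed point directly.
struct FixedVertex {
    int x, y;   // position in 16.16 fixed point
    int tx, ty; // texture coordinates, already premultiplied as in the tutorial
};

// Inside the vertex shader the float conversion then disappears:
//   FixedVertex &v = *static_cast<FixedVertex*>(in[0]);
//   out.x = v.x;
//   out.y = v.y;
//   out.varyings[0] = v.tx;
//   out.varyings[1] = v.ty;
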
You can find the whole source code for the tutorial on my homepage. I hope you enjoyed it and can give me some feedback. I would also like to hear some ideas for a next tutorial.

Bye!
 