Depth sorting problems on SGX


TheGoodDoktor

Hi

I've almost got my project working on OMAP3 (using a Beagleboard). The Win32 version that uses the ImgTec GLES 1.1 emulator works fine, but when I try to run it on the Beagleboard I get loads of depth sorting issues.
Has anyone else experienced any problems like this?
I was thinking it was something to do with the way I set up the projection matrix, but I lifted a function out of the PVR SDK to set up the projection matrix and still had the same problems.
Can anyone help me?

Cheers,
TheDoktor.
 
Have you tried the sample code yet? Does that work fine? Can you post some parts of your source?
I had some weird behaviour when setting uniforms for a shader that wasn't part of the currently bound shader program, but you mentioned OGL ES 1.1, so that should be out of the question.
 
by 'depth sorting' you mean occlusion artifacts, right? i mean, you're not trying to sort anything yourself and then draw it by painter's algorithm?

one thing that could differ between emu and actual hw is depth buffer precision, and we really don't know what sgx does when in es1 mode - it could just as well try to get away with a reduced-precision depth buffer, or one that matches the bitness of the color buffer (say, falling back to 16-bit depth when in 16-bit color). is your scene depth-precision-sensitive in the first place? what is your proj matrix like?
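
a quick way to check what you actually got at runtime, once a context is current - a minimal sketch, nothing sgx-specific (GL_DEPTH_BITS is a standard es1.1 glGetIntegerv query):

#include <GLES/gl.h>
#include <stdio.h>

/* print the depth buffer precision of the current context */
static void printDepthBits(void)
{
    GLint depthBits = 0;
    glGetIntegerv(GL_DEPTH_BITS, &depthBits);
    printf("depth buffer bits: %d\n", depthBits);
}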
 
Cheers for the replies.

Here is my projection matrix setting code. It's based on some of the PVR sample code.

// Set the projection matrix, based on the PVR sample code
#include <math.h>
#include <GLES/gl.h>

struct PVRTMATRIXf
{
    float f[16];
};

void PVRTMatrixPerspectiveFovRHF(
    const float fFOV,
    const float fAspect,
    const float fNear,
    const float fFar)
{
    float f, n;
    float fovRad = fFOV * ((M_PI * 2) / 360.0f); // degrees to radians
    struct PVRTMATRIXf mOut;

    // cotangent(a) == 1.0f / tan(a)
    f = 1.0f / (float)tan(fovRad * 0.5f);
    n = 1.0f / (fNear - fFar);

    mOut.f[ 0] = f / fAspect;
    mOut.f[ 1] = 0;
    mOut.f[ 2] = 0;
    mOut.f[ 3] = 0;

    mOut.f[ 4] = 0;
    mOut.f[ 5] = f;
    mOut.f[ 6] = 0;
    mOut.f[ 7] = 0;

    mOut.f[ 8] = 0;
    mOut.f[ 9] = 0;
    mOut.f[10] = (fFar + fNear) * n;
    mOut.f[11] = -1;

    mOut.f[12] = 0;
    mOut.f[13] = 0;
    mOut.f[14] = (2 * fFar * fNear) * n;
    mOut.f[15] = 0;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMultMatrixf(mOut.f);
}

I call it using:
PVRTMatrixPerspectiveFovRHF(40.0f,(F32)screen.width/(F32)screen.height,gl.znear,gl.zfar);
glScalef(1,-1,-1); // Model view matrix is LHF

where:
screen.width = 1280
screen.height = 768
gl.znear = 0.2f;
gl.zfar = 2500.0f;

Other things of note:
glDepthRangef(gl.zfar,gl.znear);
glClearDepthf(0.0f);
glDepthFunc(GL_GEQUAL);

As far as depth buffer precision is concerned, as far as I know SGX uses 32-bit floating point in all cases: it doesn't actually have a depth buffer as such, but its tile sorting algorithms use 32-bit floats.
I don't do any sorting myself, I just throw everything in the scenegraph that hasn't been frustum culled at the SGX.

Let me know if you spot anything suspicious in the above code. As I've said before, it all looks fine when run on the GLES 1.1 emulator on my XP machine with an Intel integrated gfx part.
Cheers,
TheDoktor.
 
Stupid question, but are you requesting a z-buffer when you set up EGL?

You can set the minimum depth buffer size in your config attributes (the default is 0). You can query the actual depth buffer precision used via eglGetConfigAttrib(Display, Config, EGL_DEPTH_SIZE, &Value).
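
Something along these lines - just a minimal sketch, assuming Display is an already-initialized EGLDisplay; the colour depths are only an example:

#include <EGL/egl.h>

/* choose a config with at least a 16-bit depth buffer and report what we got;
   leaving EGL_DEPTH_SIZE at its default of 0 can give you a config with no z-buffer at all */
static EGLint chooseConfigWithDepth(EGLDisplay Display, EGLConfig *Config)
{
    static const EGLint attribs[] = {
        EGL_RED_SIZE,     5,
        EGL_GREEN_SIZE,   6,
        EGL_BLUE_SIZE,    5,
        EGL_DEPTH_SIZE,   16,             // minimum depth bits requested
        EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
        EGL_NONE
    };
    EGLint NumConfigs = 0, Value = 0;

    eglChooseConfig(Display, attribs, Config, 1, &NumConfigs);

    // check what the chosen config actually provides
    eglGetConfigAttrib(Display, *Config, EGL_DEPTH_SIZE, &Value);
    return Value;
}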
 
Adventus said:
Stupid question, but are you requesting a z-buffer when you set up EGL?

You can set the minimum depth buffer size in your config attributes (the default is 0). You can query the actual depth buffer precision used via eglGetConfigAttrib(Display, Config, EGL_DEPTH_SIZE, &Value).

That's not a stupid question at all.
I set up EGL differently for the Beagleboard version and the display looks similar to what I'd expect it to if there was no depth buffer.
I'll try your suggestion next time I have the Beagleboard set up.
 
TheDoktor said:
Here is my projection matrix setting code. It's based on some of the PVR sample code.
<snip>
that's the canonical gl projection when it comes to depth, nothing extraordinary there.

I call it using:
PVRTMatrixPerspectiveFovRHF(40.0f,(F32)screen.width/(F32)screen.height,gl.znear,gl.zfar);
glScalef(1,-1,-1); // Model view matrix is LHF
ok, the canonical gl proj matrix expects a right-handed model-view, whose handedness it then flips by negating the z axis - native gl clip space is left-handed and +z points away from the viewer. by making your model-view's z opposite to the norm, while preserving the original z-axis negation in the projection matrix, you effectively invert the order of your depth range, ergo, depth buffer (read further below).

where:
screen.width = 1280
screen.height = 768
gl.znear = 0.2f;
gl.zfar = 2500.0f;

btw, that znear is generally too small. if your scene is depth-precision-sensitive, it could cause issues. try bumping it up to at least a unit.
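
back-of-the-envelope, assuming a classic 16-bit z-buffer (the formula is the standard far-plane approximation for f >> n; the constants are your values, just to illustrate the znear sensitivity):

#include <stdio.h>

int main(void)
{
    // approximate resolvable depth step at eye distance z, for f >> n:
    //   dz ~= z*z / (n * 2^bits)
    const float steps = 65536.0f; // 2^16 distinct values in a 16-bit z-buffer

    printf("%f\n", 2500.0f * 2500.0f / (0.2f * steps)); // ~477 units at the far plane
    printf("%f\n", 2500.0f * 2500.0f / (1.0f * steps)); // ~95 units with znear = 1, 5x finer
    return 0;
}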

Other things of note:
glDepthRangef(gl.zfar,gl.znear);
glClearDepthf(0.0f);
glDepthFunc(GL_GEQUAL);
so here you'd need to either invert the depth compare function, or flip the znear/zfar mapping to the depth range. of course, the clear depth should be fixed accordingly.

apropos, the depthRange function provides the mapping of NDC coords to the depth buffer. that means it takes the -1..1 NDC depth and maps it somewhere in the 0..1 range of the depth buffer, which is what the function params signify; those are automatically clamped to 0..1. so your gl.zfar and gl.znear end up as 1.f and .2f, respectively - that's another issue you have there.
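
for illustration, the conventional self-consistent way to set those three up (just a sketch assuming the default depth conventions; a reversed-z setup is also possible, but then all three calls must agree with each other):

#include <GLES/gl.h>

/* conventional depth setup: NDC -1..1 maps to window depth 0..1 */
static void setupConventionalDepth(void)
{
    glDepthRangef(0.0f, 1.0f); // near -> 0, far -> 1 (params get clamped to 0..1)
    glClearDepthf(1.0f);       // clear to the far plane
    glDepthFunc(GL_LEQUAL);    // closer fragments (smaller depth) pass
}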

As far as depth buffer precision is concerned as far as I know SGX uses 32 bit floating point in all cases as it doesn't actually have a depth buffer but it's tile sorting algorithms use 32 bit floats.

I don't do any sorting myself, I just throw everything in the scenegraph that hasn't been frustum culled at the SGX.
that high-precision tile depth should still get exported to an external depth buffer if needed, which could later participate in occlusion tests. or at least that's what previous pvr generations have allowed. but given you don't do anything fancy with your scene, you should indeed be on the vanilla path.

ed: post-morning-coffee edit. hope it's more readable now.
 
TheDoktor said:
Adventus said:
Stupid question, but are you requesting a z-buffer when you set up EGL?

You can set the minimum depth buffer size in your config attributes (the default is 0). You can query the actual depth buffer precision used via eglGetConfigAttrib(Display, Config, EGL_DEPTH_SIZE, &Value).

That's not a stupid question at all.
I set up EGL differently for the Beagleboard version and the display looks similar to what I'd expect it to if there was no depth buffer.
I'll try your suggestion next time I have the Beagleboard set up.

Actually, this was the problem! EGL was using a config with no depth buffer. Thanks, Adventus, for suggesting this!
I also implemented blu's suggestions - thanks for making the depth setup clear to me.
Now I've got to get it more stable and running at 30fps!

Cheers,
TheDoktor
 
This is indeed a really silly emulation bug. I made the same mistake a few days ago :wink:
 