GP2X Problem With Gpu940


efegea

Is alpha blending supported on gpu940?

I'm developing a multi-platform 3D engine; currently the same code works on Linux with SDL and OpenGL (not the gpu940 PC version) and on the GP2X with gpu940. The current status: it loads MD2 (Quake 2) models and displays them with animations :)

BUT.. I'm working on implementing ortho mode to display 2D images (PNGs) with an alpha layer. On the PC side it works fine, but on gpu940 the images with an alpha layer are shown green, with transparency only on the pixels that are 100% transparent, and the green zone is semi-transparent but without intermediate values.

Another difference from the PC version is that the gpu940 version draws 3D polygons over the 2D images, while on the PC the images are always on top. I don't know if it's my fault, but why is it different on each platform?
 
efegea posted on Feb 21 2007 at 01:19 AM said:
Is alpha blending supported on gpu940?

I'm developing a multi-platform 3D engine; currently the same code works on Linux with SDL and OpenGL (not the gpu940 PC version) and on the GP2X with gpu940. The current status: it loads MD2 (Quake 2) models and displays them with animations :)

BUT.. I'm working on implementing ortho mode to display 2D images (PNGs) with an alpha layer. On the PC side it works fine, but on gpu940 the images with an alpha layer are shown green, with transparency only on the pixels that are 100% transparent, and the green zone is semi-transparent but without intermediate values.

Another difference from the PC version is that the gpu940 version draws 3D polygons over the 2D images, while on the PC the images are always on top. I don't know if it's my fault, but why is it different on each platform?


I don't know much about that, but I think gpu940 uses YUV colors. Check it out ;)

Good luck with your project, we're all looking forward to seeing 3D games (as well as 2D games, as long as they are fun to play!)
 
slygamer posted on Feb 21 2007 at 02:04 PM said:
I believe alpha blending is not yet supported in gpu940.

There is no Z-buffer in gpu940, so your 2D polygons must be drawn after all your 3D polygons.

My 2D polygons are drawn after the 3D polygons :(


I'm also getting random floating point exceptions when an alpha-blended image is shown (though a green image is still displayed).

Here's the code I use:

Code:
	if (bitmap->channels == 4)
	{
		glEnable(GL_BLEND);
		glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
		glDepthMask(GL_FALSE);
	}

	glBegin(GL_QUADS);
	(...)
	glEnd();

	if (bitmap->channels == 4)
	{
		glDisable(GL_BLEND);
		glDepthMask(GL_TRUE);
	}
 
efegea posted on Feb 21 2007 at 01:19 AM said:
Is alpha blending supported on gpu940?

Not really.
The renderer can do keyed textures and constant blending.
The libGL tries to map the OpenGL alpha layer requirements to these features (quite badly, it seems).

So, you can have transparency and blending, but not a complete alpha layer.

Also, there are probably some bugs in there, since you are the second person to report the greenish issue.
This bug ranks second on the priority list, so it will hopefully be fixed in a week or two.

Anyway, there is not, and will never be, a full alpha layer. So the GL blending possibilities will stay only partly implemented.

Another difference from the PC version is that the gpu940 version draws 3D polygons over the 2D images, while on the PC the images are always on top. I don't know if it's my fault, but why is it different on each platform?

I don't know, but if you can produce a (simple) sample that shows the bug I will work on it.


slygamer posted on Feb 21 2007 at 02:04 PM said:
There is no Z-buffer in gpu940

Of course there is !
 
rixed posted on Feb 22 2007 at 01:31 AM said:
Anyway, there is not, and will never be, a full alpha layer. So the GL blending possibilities will stay only partly implemented.

Why not? :(

Is it too hard to implement? Too CPU expensive?
 
efegea posted on Feb 22 2007 at 05:02 PM said:
rixed posted on Feb 22 2007 at 01:31 AM said:
Anyway, there is not, and will never be, a full alpha layer. So the GL blending possibilities will stay only partly implemented.

Why not? :(

Is it too hard to implement? Too CPU expensive?

Keying is simple: if the incoming fragment color matches the key, the write is skipped.

Constant blending is simple: each incoming fragment is combined with the already stored color; this combination is constant, involves only shifts, and the code for it can be hardcoded.
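
A rough C sketch of what such hardcoded paths typically look like (this is an illustration only, not gpu940 code; it assumes 0x00RRGGBB pixel packing):

Code:
#include <stdint.h>

/* Keying: one compare per pixel; the write is skipped when the source matches the key. */
static inline void put_keyed(uint32_t *dst, uint32_t src, uint32_t key)
{
	if (src != key)
		*dst = src;
}

/* Constant 50/50 blend: a mask and shifts only, no multiplies.
 * The 0xFEFEFE mask clears each component's low bit so the halves
 * don't bleed into the neighbouring component. */
static inline void put_blend_half(uint32_t *dst, uint32_t src)
{
	*dst = ((src & 0xFEFEFEu) >> 1) + ((*dst & 0xFEFEFEu) >> 1);
}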

An alpha layer, on the other hand, implies that for every incoming fragment you have an arbitrary alpha value that's used to scale both the already stored colors and the incoming colors. I don't see how to do this without multiplying each color component. That would be at least 4 MULs per pixel, that is 12 cycles, plus all the padding and shifting required. In some cases (texturing in GL_REPLACE mode), you can store another version of the texture with the incoming colors already scaled, so that the MUL count per pixel drops to 2, at the expense of doubling the memory required for storing textures (for the additional pre-scaled version).
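
By contrast, a minimal sketch of a full per-fragment alpha blend (again hypothetical, same 0x00RRGGBB packing, 8-bit alpha) needs a multiply per colour component plus unpack/repack shifts:

Code:
/* out = dst + (src - dst) * a / 256 per component; the multiplies are what
 * make this expensive in a software renderer.  Assumes arithmetic right
 * shift of negative ints and approximates /255 by /256. */
static inline uint8_t blend_chan(uint8_t s, uint8_t d, uint8_t a)
{
	return (uint8_t)(d + ((((int)s - (int)d) * a) >> 8));
}

static inline void put_blend_alpha(uint32_t *dst, uint32_t src, uint8_t a)
{
	uint32_t d = *dst;
	uint32_t r = blend_chan((uint8_t)(src >> 16), (uint8_t)(d >> 16), a);
	uint32_t g = blend_chan((uint8_t)(src >> 8),  (uint8_t)(d >> 8),  a);
	uint32_t b = blend_chan((uint8_t)src,         (uint8_t)d,         a);
	*dst = (r << 16) | (g << 8) | b;
}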

I don't think this fits a software renderer. Keying and constant blending are much easier, and are enough to implement transparency and translucency most of the time.
 
I'm trying to implement simple blending, making the pixels that are black transparent. On the PC side it works using glBlendFunc(GL_ONE, GL_ONE), but on gpu940 the black is still black. What parameters do I have to pass to glBlendFunc to make them transparent?

EDIT: About fog... I've read in the documentation that it's not implemented, but the headers have definitions for the glFog(i,x,xv) functions, so is it implemented or not? If not, do you know how I can implement fog without it being supported? It's important because of the low polygon count, to hide the most distant polygons with fog. Oh, well, and because I'm trying to develop a Silent Hill-like game :lol:

EDIT2: And what about getting the projection/modelview matrices? I need them for frustum culling, but there are no GL_MODELVIEW_MATRIX/GL_PROJECTION_MATRIX parameters for glGetFixedv :(

EDIT3: Forget that matrix thing, I've found another way to do the frustum culling. Arrrr... that way didn't work.. I still need to get the projection/modelview matrices :(
 
efegea posted on Feb 25 2007 at 02:37 AM said:
I'm trying to implement simple blending, making the pixels that are black transparent.

The glBlendFunc parameters are unused (IIRC).

The whole blending thing is under redevelopment, so it's safer to wait one more week before entering this area.

EDIT: About fog... I've read in the documentation that it's not implemented, but the headers have definitions for the glFog(i,x,xv) functions, so is it implemented or not? If not, do you know how I can implement fog without it being supported? It's important because of the low polygon count, to hide the most distant polygons with fog. Oh, well, and because I'm trying to develop a Silent Hill-like game :lol:

Fog is not implemented. This is in the TODO list, but with low priority.

EDIT2: And what about getting the projection/modelview matrices? I need them for frustum culling, but there are no GL_MODELVIEW_MATRIX/GL_PROJECTION_MATRIX parameters for glGetFixedv :(

Probably because they are not required by the GL-ES specs. I will add them.
Meanwhile, you can change modelview_ms and projection_ms in GL/transfo.c so that they are no longer static, and use them directly.
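
A rough sketch of that workaround (the GLfixed[16] element type and layout below are assumptions for illustration only; check the real declarations in GL/transfo.c):

Code:
#include <string.h>
/* also include the gpu940 libGL header that defines GLfixed */

/* After removing 'static' from the definitions in GL/transfo.c, the matrices
 * can be reached from your own code.  The types here are assumed, not verified. */
extern GLfixed modelview_ms[16];
extern GLfixed projection_ms[16];

static void copy_current_matrices(GLfixed mv[16], GLfixed proj[16])
{
	memcpy(mv,   modelview_ms,  16 * sizeof(GLfixed));
	memcpy(proj, projection_ms, 16 * sizeof(GLfixed));
	/* build the frustum planes from these for culling */
}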
 
I am sorry to ask so many things about gpu940, I hope you don't get angry with me :unsure:

But.. ehm.. what about vertex arrays? In the docs I can't find any mention of them; are they implemented yet? I tried to draw a Quake 3 BSP map using vertex arrays; on the PC side it works, but not on the GP2X.

Many thanks, your project is great GREAT!! :)
 
efegea posted on Feb 26 2007 at 03:03 PM said:
But.. ehm.. what about vertex arrays? In the docs I can't find any mention of them; are they implemented yet? I tried to draw a Quake 3 BSP map using vertex arrays; on the PC side it works, but not on the GP2X.

Vertex arrays should be supported. I already used them and it worked, but perhaps it's bugged now?
The only thing that's not implemented is glInterleavedArrays(). Is that what you are using?
If not, did you enable the arrays? And what primitives do you use? The only ones supported are:
GL_POINTS, GL_LINES, GL_LINE_STRIP, GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS, GL_QUAD_STRIP. *No* GL_POLYGON (I can add it if someone needs it).

Note: you are not planning to try to port Quake 3 to the GP2X, are you? :-O
 
rixed posted on Feb 26 2007 at 11:08 PM said:
Vertex arrays should be supported. I already used them and it worked, but perhaps it's bugged now?
The only thing that's not implemented is glInterleavedArrays(). Is that what you are using?
If not, did you enable the arrays? And what primitives do you use? The only ones supported are:
GL_POINTS, GL_LINES, GL_LINE_STRIP, GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS, GL_QUAD_STRIP. *No* GL_POLYGON (I can add it if someone needs it).

It's weird, it should work :huh: It must be a bug in my code, but it works on the PC... I'm using the binaries you released for egoboo2x, so if you used vertex arrays there, it should work. But then, why doesn't it work? Who knows.

Anyway, here is the code I use:

Code:
glVertexPointer(3, GL_FLOAT, sizeof(tBSPVertex), &(scene->bsp->m_pVerts[pFace->startVertIndex].vPosition));
glTexCoordPointer(2, GL_FLOAT, sizeof(tBSPVertex), &(scene->bsp->m_pVerts[pFace->startVertIndex].vTextureCoord));
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D,  scene->bsp->m_textures[pFace->textureID]);
glDrawElements(GL_TRIANGLES, pFace->numOfIndices, GL_UNSIGNED_INT, &(scene->bsp->m_pIndices[pFace->startIndex]) );
This code is in a loop that goes through all the faces of the map.

rixed posted on Feb 26 2007 at 11:08 PM said:
Note: you are not planning to try to port Quake 3 to the GP2X, are you? :-O

hehe :lol: No, I'm using the Quake 3 map format in my own project. Well, I've decided not to use Quake 3 maps and to develop my own format instead, using Blender and Python; it's easy. I still need the vertex arrays, though...

My project is a multi-platform 3D engine "for educational purposes", 3DEEP :D I'm just coding it to learn; I've never done OpenGL before. This is what I've got after about a week of coding, following a month of reading:



That's the PC Linux version; on the GP2X it lacks the fog and the map... and the fps are a bit lower..... 100 times lower :p

It is developed with portability in mind; right now it runs on Linux and the GP2X, and when it's finished I'll port it to the DS, Dreamcast and Windows.

I have future plans for the engine; at the moment I have two games in mind, but first: finish the engine, then: develop games.
 
efegea posted on Feb 27 2007 at 12:02 AM said:
Code:
glVertexPointer(3, GL_FLOAT, sizeof(tBSPVertex), &(scene->bsp->m_pVerts[pFace->startVertIndex].vPosition));
glTexCoordPointer(2, GL_FLOAT, sizeof(tBSPVertex), &(scene->bsp->m_pVerts[pFace->startVertIndex].vTextureCoord));
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D,  scene->bsp->m_textures[pFace->textureID]);
glDrawElements(GL_TRIANGLES, pFace->numOfIndices, GL_UNSIGNED_INT, &(scene->bsp->m_pIndices[pFace->startIndex]) );
This code is in a loop that goes through all the faces of the map.

You should get a GL_INVALID_ENUM after your glVertexPointer().
GL_FLOAT is not supported here, and no support for GL_FLOAT inside the libGL is planned (some floating point functions are "supported" via inlined functions that convert to fixed point, but obviously arrays cannot be converted on the fly).
Also, if you are considering porting to the GP2X, you really should consider using fixed point only (between 10 and 20% of the 920's time in egoboo2x is spent converting from float to fixed). Of course, that means you have to make a choice for your engine: either support both floats and fixed point, or reduce the number of targets for which it will be useful.
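
A rough sketch of what the fixed-point route might look like for the snippet above (the 16.16 GLfixed conversion, the GL_FIXED array type and the tBSPVertexFixed/fixedVerts names are illustrative assumptions; do the conversion once at load time, not per frame):

Code:
/* Hypothetical fixed-point copy of the BSP vertex data, built once when the
 * map is loaded.  GLfixed here is the usual 16.16 type of GL-ES-style headers;
 * adjust to whatever the gpu940 libGL header actually provides. */
typedef struct {
	GLfixed vPosition[3];
	GLfixed vTextureCoord[2];
} tBSPVertexFixed;

#define FLOAT_TO_FIXED(f) ((GLfixed)((f) * 65536.0f))

/* ... convert m_pVerts into an array of tBSPVertexFixed (fixedVerts) at load time ... */

glVertexPointer(3, GL_FIXED, sizeof(tBSPVertexFixed), fixedVerts[pFace->startVertIndex].vPosition);
glTexCoordPointer(2, GL_FIXED, sizeof(tBSPVertexFixed), fixedVerts[pFace->startVertIndex].vTextureCoord);
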
Anyway, I'm not a fan of engines in general: OpenGL is already a 3D engine. We coders like to build engines on top of engines, but after a few such stages of stacking engines up, the possibilities shrink and the performance goes down.

Apart from that, if you only want to draw a triangle, it will be faster to simply use glBegin(GL_TRIANGLES) and then give your 3 vertices and texture coordinates. Why not use indexed arrays to draw all your triangles in one go?

Also, GL_TRIANGLES is not the fastest primitive. Quake 2 models are stored with the best GL primitive for each piece of the mesh, and perhaps it's the same for Quake 3 models? Anyway, GL_TRIANGLE_STRIP and GL_TRIANGLE_FAN are faster on every implementation of OpenGL that I know of (they can reuse positions).

Good luck !
 