SGX Shader Performance


dflemstr

We all know that the SGX supports OGLES 2.0 and therefore shaders.

Now, we have talked previously about polygons per second and other performance statistics, but never really about the raw number crunching capabilities of the chip.
I intend to create some graphics applications for the Pandora myself that use shaders extensively (what I have in mind right now is, for instance, a 6-pass depth-of-field fullscreen shader using 6 textures/FBOs), so this is why I'm asking.
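For reference, here is roughly the kind of render-to-texture target each pass in that chain would use (a minimal GLES 2.0 sketch; error handling is omitted, and a real scene pass would also want a depth renderbuffer attached):

CODE
#include <GLES2/gl2.h>

/* One intermediate target of the DOF chain: render the scene (or the
 * previous pass) into an 800x480 texture, then sample that texture in
 * the next pass. */
GLenum make_target(GLuint *tex, GLuint *fbo)
{
    glGenTextures(1, tex);
    glBindTexture(GL_TEXTURE_2D, *tex);
    /* non-power-of-two size, so: no mipmaps, clamp-to-edge wrapping */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 800, 480, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glGenFramebuffers(1, fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, *fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, *tex, 0);
    /* expect GL_FRAMEBUFFER_COMPLETE */
    return glCheckFramebufferStatus(GL_FRAMEBUFFER);
}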

So, to anyone who has had any experience with this chip:
- What can we expect that the chip would manage when it comes to shaders in general?
- Are there any limitations on how many can be loaded at once?
- Are there any issues with having too long fragment programs (with loops or too many if-clauses etc, you all know what I mean)?
- How many textures (uniform sampler2D) can you bind to a shader at once?
- Does anyone have an idea how many "fullscreen" shaders (i.e. shaders applied to an 800x480 scene rendered to texture) of reasonable complexity (say, around 20 instructions) the chip would manage, chained in order, while still not having too huge an impact on FPS?
 
Here's some other questions:

Will the SGX be able to use any sort of deferred shading techniques? I've read about these recently and they seem good, but they require one pass to render to 4 screen-sized buffers, and later passes to read back from those. Will the SGX have the memory bandwidth to do this?

Will there be a problem allocating an 800x480 texture, since it's nowhere near a power of two?

Is there a known limit to how many uniform variables you can pass to a given shader?
 
everything known so far has pretty much been discussed on these and openpandora.org boards.

nobody, at least among the interested parties here who are at liberty to talk, has had the chance to put the SGX through extensive tests, so it has all been speculation and best guesses based on paper specs and expected clock rates.

what we know is that the SGX will not excel in brute fill-rate, as it has just two shader units and a conservative core clock. OTOH, it will be very efficient at opaque occlusion by virtue of being a scene-capturing pipeline, and its ISA is so advanced that it easily covers the GLSL 1.2 requirements.

we don't know how well it balances vertex and fragment work, and we don't know how well it fetches texels, or from how many textures at once.

in other words, we have very little concrete data, so patience ; )
 
dflemstr: I haven't tried it on my Beagleboard yet, but my vertex shader for Pandora-PSP is more than 500 lines at the moment. Works fine (or at least it seems to).
I doubt anyone will hit the shader count limit, because you can unload those you don't need.
Again, I think you can load enough textures for most things. Not sure about specific numbers, but I doubt it will be lower than 4, which is quite a lot already.
The guy who is reverse-engineering the SGX gave some numbers for that based on the bandwidth, from what I know; you should look around. It was 2 or 3 passes, from what I remember. That's also probably the most limiting factor.
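If you want hard numbers instead of guesses, the driver reports its own limits at runtime; something like this should do it (a rough GLES 2.0 sketch, and the "second_tex" uniform name is just made up):

CODE
#include <GLES2/gl2.h>
#include <stdio.h>

/* Ask the driver how many sampler units a fragment shader can use, then
 * bind a texture to unit 1 and point a sampler2D uniform at that unit. */
void report_and_bind(GLuint program, GLuint texture)
{
    GLint frag_units = 0, combined_units = 0;
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &frag_units);
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &combined_units);
    printf("fragment texture units: %d, combined: %d\n",
           frag_units, combined_units);

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, texture);
    glUseProgram(program);
    /* matches "uniform sampler2D second_tex;" in the fragment shader */
    glUniform1i(glGetUniformLocation(program, "second_tex"), 1);
}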
 
I don't understand why you ask here and not on the IMGtec forums. I don't think many people here really know the SGX, except for Xmas, who works for IMGtec anyway :)
 
Well, how about a demo/benchmark along the lines of the Breakpoint 2009 4k winner? It runs half decently on the Radeon HD 2600 XT here at school...

Though the Pandora probably isn't as fast, it has fewer pixels to render as well... maybe it could be run at half res. Nevertheless, it's worth watching on YouTube, or downloading if your graphics card is up to it.

It uses procedural shaders and there are no textures... gah, now I wanna go learn GLSL XD... FINAL EXAMS, they kill me!!

CODE
http://vimeo.com/4138285
or
http://www.pouet.net/prod.php?which=52938
 
Personally, I think the soundtrack of that demo was nice, but the graphics seemed to suck on the big screen, to be honest. I thought it was too low-poly. But yeah, the fragment shader does all the work, so it would be nice for a benchmark; I doubt this is possible in any way on the Pandora, though, because of the SGX's limited bandwidth.
And then again, I doubt that iq (or the other person (or group?) who worked with him) will release the source of this (I have only seen very few bits at BP and didn't bother to ask, as he was talking to a "friend" of mine - I expect some articles though).
PS: my P4 at 2.8GHz with an nVidia 6800 doesn't even run this ;)
 
Pixel shaders are awesome, I'm sad that we won't have much power for shading.
But if overdraw is compensated for, at least I won't have to do any of that "occlusion" crap I was too lazy to learn. You just throw the whole scene into the SGX and it shades the visible parts, right?
 
Hm, thanks for the responses everyone; the main reason why I asked all this is that I need to know how much I need to optimize certain things.
How I wish now that the Pandora would have dedicated video memory; then you could just keep the textures in VRAM and stop caring about bandwidth!

Oh well, I guess I will have to compress my 2-pass adaptive Gaussian filter (with depth testing! :p ) into one pass somehow...

EDIT: That 'occlusion crap' is great for preventing blur bleeding! Just mentioning that for anyone interested in constructing a DOF shader :p So stop complaining about it, and use it to your advantage instead.
 
'lulzfish' said:
You just throw the whole scene into the SGX and it shades the visible parts, right?
right. just be mindful of some shader ops you really should not do, namely anything that ends up as a fragment kill/discard.

'dflemstr' said:
How I wish now that the Pandora would have dedicated video memory; then you could just keep the textures in VRAM and stop caring about bandwidth!
muahaha. ha.

in a decade of doing rendering work, i'm yet to come across a platform where you don't care about bandwidth. i hear one sony console got somewhat near that (i.e. "unlimited" local bandwidth), but i never got to work on it.

seriously though, you really have to be shader-limited these days to not be affected by bandwidth. but it is a dynamic game of rope-balancing - you get free of one bottleneck just to get caught in another. essentially, there are no free lunches. but occasionally you may get the price of the appetizer included in that of the main course.
 
dflemstr said:
Hm, thanks for the responses everyone; the main reason why I asked all this is that I need to know how much I need to optimize certain things.
How I wish now that the Pandora would have dedicated video memory; then you could just keep the textures in VRAM and stop caring about bandwidth!


Is the SGX's bandwidth even an issue that can be solved with dedicated VRAM? The thing only runs at 110MHz, whereas the bus should provide 32 bits at 333MHz. Maybe if we were talking 128-bit or 256-bit VRAM, but a dedicated slab of 32-bit memory probably wouldn't help you. All it'd mean is less contention from other devices, and the only sorta big one there should be the display processor, unless you're streaming heavy data off of the CPU or DSP.

It could possibly help with latency, but a number of factors are present that are supposed to hide latency.
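(For rough scale, taking those numbers at face value: 32 bits at 333MHz works out to about 1.3GB/s of peak bandwidth, and one 800x480 32-bit buffer is roughly 1.5MB, so a single full-screen write plus read-back at 60fps already costs on the order of 185MB/s before any texturing or geometry traffic.)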
 
QUOTE
Is there a known limit to how many uniform variables you can pass to a given shader?
You have a maximum of 128 vertex and 64 fragment active uniform vectors. Basically you have a bank of 128 x 4 registers, and it's up to the driver implementation to pack your uniforms into these banks based on the packing rules in the ES GLSL spec. If you used vec4s for everything you would achieve the maximum 1024 uniform scalars.
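(Those limits are also queryable at runtime if you'd rather not hard-code them; a quick sketch, assuming a current GLES 2.0 context:)

CODE
#include <GLES2/gl2.h>
#include <stdio.h>

/* Print the uniform/varying vector limits the driver actually reports. */
void print_shader_limits(void)
{
    GLint vert_vecs = 0, frag_vecs = 0, varying_vecs = 0;
    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_VECTORS, &vert_vecs);
    glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_VECTORS, &frag_vecs);
    glGetIntegerv(GL_MAX_VARYING_VECTORS, &varying_vecs);
    printf("vertex uniform vec4s: %d, fragment uniform vec4s: %d, varyings: %d\n",
           vert_vecs, frag_vecs, varying_vecs);
}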

QUOTE
- Are there any issues with having too long fragment programs (with loops or too many if-clauses etc, you all know what I mean)?
According to the IMG tech guys, it's generally better (performance-wise), upon a state change, to upload a different fragment program rather than keep branches in one shader. This seems to be true on modern PC GPUs as well.
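Roughly what that looks like in practice (a sketch; compile_program() is a hypothetical helper that compiles and links a vertex/fragment source pair):

CODE
#include <GLES2/gl2.h>

/* Two pre-built program variants instead of one shader with a runtime branch. */
extern GLuint compile_program(const char *vs_src, const char *fs_src);

static GLuint prog_lit, prog_unlit;  /* built once at init with compile_program() */

void draw_mesh(int lit)
{
    /* one cheap program switch per state change, no per-fragment "if (lit)" */
    glUseProgram(lit ? prog_lit : prog_unlit);
    /* ... set uniforms and attributes, then issue the draw call ... */
}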
 
'Adventus' said:
You have a maximum of 128 vertex and 64 fragment active uniform vectors. Basically you have a bank of 128 x 4 registers, its up to the driver implementation to pack your uniforms into these banks based on the packing rules in the ES GLSL spec. If you used vec4's for everything you would achieve the maximum 1024 uniform scalars.

512 uniform scalars, perhaps?
 
So, 128 vec4s for the vertex shader and 64 vec4s for the fragment shader, and the driver may or may not pack floats into vec4s?

That's awesome. ES seems to be a bit more demanding since the matrix stack is all done in the vertex shader, but that sounds like a lot of room for matrix blending, fancy lighting tricks, etc.

I should learn how to do matrix blending.
 
QUOTE
512 uniform scalars, perhaps?
Haha, whoops. Yeah, 512.

QUOTE
So, 128 vec4s for the vertex shader and 64 vec4s for the fragment shader, and the driver may or may not pack floats into vec4s?
Hmmm, kind of. No splitting of vectors is allowed, though. Consider this scenario:

vec3 A;
vec3 B;
float C[2];

As per the packing rules you must store each new component of a vector in a different column, like so:

| A.x | A.y | A.z | |
| B.x | B.y | B.z | |

Each element of an array, on the other hand, must occupy a different row, so ideally you could achieve this:

| A.x | A.y | A.z |C[0]|
| B.x | B.y | B.z |C[1]|

Whether the driver recognises this optimisation depends on its packing algorithm (the minimal algorithm is pretty good though). It gets more complicated when you introduce matrices, which span rows and columns.

QUOTE
ES seems to be a bit more demanding since the matrix stack is all done in the vertex shader
I don't quite understand what you mean by this... I haven't seen a case yet where ES is more demanding by design (I might not be looking hard enough).
 
Well, not hardware-demanding.
OpenGL ES 2 doesn't specify a matrix stack, you have to either get a separate matrix library or do the CPU math yourself, and then set it as a uniform variable in the vertex shader.

.. :ninja:
Don't you?
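Something like this is what I mean, by the way (a rough sketch; build_mvp() stands in for whatever matrix library or hand-rolled math ends up computing the matrix):

CODE
#include <GLES2/gl2.h>

/* GLES 2.0 has no matrix stack, so the model-view-projection matrix is
 * computed on the CPU and handed to the vertex shader as a plain uniform. */
extern void build_mvp(float out[16]);   /* hypothetical: your own matrix code */

void upload_mvp(GLuint program)
{
    float mvp[16];
    build_mvp(mvp);                      /* column-major, as GL expects */
    glUseProgram(program);
    /* matches "uniform mat4 u_mvp;" + "gl_Position = u_mvp * a_position;" */
    glUniformMatrix4fv(glGetUniformLocation(program, "u_mvp"),
                       1, GL_FALSE, mvp);
}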
 
QUOTE
OpenGL ES 2 doesn't specify a matrix stack, you have to either get a separate matrix library or do the CPU math yourself, and then set it as a uniform variable in the vertex shader.
Yep, that's correct. I guess it is more demanding on the programmer... that's generally the price of more flexibility.

I think this is getting a bit off-topic...
 
'lulzfish' said:
Well, not hardware-demanding.
OpenGL ES 2 doesn't specify a matrix stack, you have to either get a separate matrix library or do the CPU math yourself, and then set it as a uniform variable in the vertex shader.

.. :ninja:
Don't you?
You wouldn't want to rely on OpenGL matrix code anyway... generally, you are much better off using your own matrix code and avoiding DLL calls for things like matrix operations.
 