Pandora: How Much Performance Can the Pandora GPU Give?


PSP is a fixed-function graphics platform, around OpenGL ES 1.1 in feature set.

As to the topic question, Pandora's GPU has approximately GeForce 3 performance with GeForce 7 graphics features.
 
QUOTE
And where does the GeForce 3 performance idea come from?
Yeah, I would say a Kyro I/II would be a better example. It's at least PowerVR, TBDR, runs at a similar clock, and has 2 pipelines... although it didn't have hardware T&L.
 
dmdm said:
Clearly the third option would yield a ridiculously high fillrate, but that wouldn't be realistic and wouldn't represent real-world performance unless you had a vast amount of overdraw. Similarly, option 1 would be far too pessimistic and also wouldn't represent real-world performance. So I'm afraid option 2 is the only sensible option. I definitely agree it is not nice to have to resort to a magic "factor" in the calculation, but this is the only way to get accurate and representative figures, I'm afraid.
I still don't really think this is fair. First, just because an implementation isn't tile-based doesn't mean that there's no hidden surface removal whatsoever, and the other companies aren't putting in any kind of factor to account for it. Second, PowerVR didn't actually say what this factor is, nor how they derived it. I hear that Quake 3 has an average overdraw factor of 3.8, but it seems to me that a first-person shooter is going to have a lot more overdraw than games with very different camera angles, like an overhead perspective. Third, I don't think TBDR can remove overdraw 100% (there are still the remaining pieces inside the tiles), so a factor of 3.0 would have to mean a full 3.0x reduction, and I just don't think overdraw is that high in most normal scenes.

Now granted, if overdraw could indeed be removed 100% then intuitively it'd seem like you only need enough fill rate for every pixel on the framebuffer (plus whatever other render surfaces you might be targeting) * N texture passes, and any more would never do you any good. With supersampling AA you could always use more fillrate, but here just doing a straight 800x480 should be more than enough. Still, if we're going to be talking about how much overdraw things tend to have then it's fair to assess how much remains after TBDR is used. I don't expect what's there to be a problem though.
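To put a rough number on that intuition, here's a quick back-of-envelope sketch in Python. The figures are purely illustrative assumptions (2 texture passes per pixel, 2 pipelines at 110MHz), not confirmed Pandora specs:

CODE
# Back-of-envelope sketch: how much fill rate a TBDR would need if overdraw
# really were eliminated completely (zero redundant pixel writes).

def required_fillrate(width, height, fps, texture_passes):
    """Pixels that must be filled per second with zero overdraw."""
    return width * height * fps * texture_passes

# Pandora's 800x480 screen at 60 fps with an assumed 2 texture passes per pixel:
needed = required_fillrate(800, 480, 60, 2)        # ~46.1 Mpixels/s
print(f"needed: {needed / 1e6:.1f} Mpixels/s")

# Even a modest assumed raw fill rate (2 pipelines * 110 MHz, one pixel per
# pipeline per clock = 220 Mpixels/s) leaves a fair amount of headroom.
raw = 2 * 110e6
print(f"headroom: {raw / needed:.1f}x")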
 
QUOTE
Second, PowerVR didn't actually say what this factor is, nor how they derived it.
I think it's a historical value that dates from the Kyro I/II vs GeForce 2/3 era. The Kyro gave similar performance to GeForces at the same clock, even though they had 3x the theoretical fillrate. Compared to a modern GeForce, I agree it's unlikely to hold due to the fancy Z testing, but compared to other embedded cards it's probably in the right ballpark (given the constraints).

Feel free to disagree. At least until we actually get our hands on one. :)
 
Adventus said:
QUOTE
Second, PowerVR didn't actually say what this factor is, nor how they derived it.
I think it's a historical value that dates from the Kyro I/II vs GeForce 2/3 era. The Kyro gave similar performance to GeForces at the same clock, even though they had 3x the theoretical fillrate. Compared to a modern GeForce, I agree it's unlikely to hold due to the fancy Z testing, but compared to other embedded cards it's probably in the right ballpark (given the constraints).

Feel free to disagree. At least until we actually get our hands on one. :)
I guess the reason I'm bothered by this is that PowerVR is, and always has been, heavily promoting the benefits of TBDR; they shouldn't be granted a grossly artificially adjusted fillrate number on top of that. That, and it really just isn't honest (and I don't see anyone else doing it). They would probably have done just as well to give the real fillrate and explain why it can go a lot further.

More on target... I hear that SGX530 has 8 unified shader units. Unfortunately I don't have any very good sources for this, so we can just hope that it's true. This is quite a bit better than the 1 vertex / 4 pixel shaders that NV20 (GeForce 3) has. Bear in mind, though, that the SGX530 in the Pandora will run at a much lower clock speed than any GeForce 3 did (apparently 110MHz, vs 175MHz for the GF3 Ti200 and 233MHz for the NV2A in the Xbox, which also had a second vertex shader).

Bottom line: memory bandwidth and raw fillrate are well below what a GF3 could handle, but the inherently low resolution (which shouldn't necessarily need anti-aliasing, or could even be AA'd at 400x240, since that'll still look good on a small screen), TBDR, and compressed texture formats go a long way toward negating this. We can also hope that the hierarchical cache for the SGX is sufficiently large. So these things should be able to keep up fine in the real world.

On the other hand, the number of available shaders, flexibility of usage, and quality of the shader language should be well above the GF3. Exactly how this will be utilized remains to be seen.
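For a sense of scale, here's a rough sketch of how the small screen offsets the low raw fill rate. The GF3's four pixel pipelines are from its published specs; the SGX530 figures (2 pipelines at 110MHz) and the 1024x768 "typical GF3-era resolution" are assumptions made for the sake of the comparison:

CODE
# Rough sketch: average number of times each on-screen pixel can be filled per
# frame, i.e. how much overdraw/multi-pass budget the raw fill rate buys you.
# GF3: 4 pixel pipelines at 175 MHz; SGX530 assumed as 2 pipelines at 110 MHz.

def fills_per_pixel_per_frame(pipelines, clock_hz, width, height, fps):
    raw_fill = pipelines * clock_hz              # pixels written per second
    return raw_fill / (width * height * fps)     # average depth you can afford

gf3 = fills_per_pixel_per_frame(4, 175e6, 1024, 768, 60)   # ~14.8 at an assumed PC res
sgx = fills_per_pixel_per_frame(2, 110e6, 800, 480, 60)    # ~9.5 on Pandora's screen
print(f"GF3 Ti200: {gf3:.1f}  SGX530: {sgx:.1f}")

# ...and a TBDR spends almost none of that budget on overdraw, while an IMR does.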
 
Exophase said:
More on target... I hear that SGX530 has 8 unified shader units. Unfortunately I don't have any very good sources for this, so we can just hope that it's true.

Wouldn't 8 USSE @ 200 MHz * 3x overdraw translate to > 4Gpixels/s (while you were previously quoting 1.2Gpixels/s)?

Anyway, I think we can't conclude anything from any number; we will just have to wait and see how it performs... or how it doesn't perform :p
 
Laurent said:
Exophase said:
More on target... I hear that SGX530 has 8 unified shader units. Unfortunately I don't have any very good sources for this, so we can just hope that it's true.

Wouldn't 8 USSE @ 200 MHz * 3x overdraw translate to > 4Gpixels/s (while you were previously quoting 1.2Gpixels/s)?

Anyway, I think we can't conclude anything from any number; we will just have to wait and see how it performs... or how it doesn't perform :p
Answers to a couple of questions: Exophase, yes, a TBDR is completely efficient and will remove 100% of the overdraw, even within tiles. You are correct in your assertion that the fillrate is therefore shared between the surfaces we are rendering to, etc., which is why you can easily get away with quite a low theoretical peak fillrate and still match performance.

SGX530 only contains 2 USSE pipelines; however, they are SIMD in nature, so comparisons with a GF3 will probably be difficult given its less flexible 1:4 vertex-to-pixel shader ratio, etc. As I say, I still believe my original statement will hold true, and in most games I would expect you to be shader limited... vertex/polygon throughput and memory bandwidth are highly unlikely to be the limiting factor.
 
Exophase said:
With supersampling AA you could always use more fillrate, but here just doing a straight 800x480 should be more than enough. Still, if we're going to be talking about how much overdraw things tend to have then it's fair to assess how much remains after TBDR is used. I don't expect what's there to be a problem though.
If I'm not mistaken, fillrate doesn't take that much of a hit when using AA on TBDRs, and it's definitely a reasonable trade-off to have it enabled. Bilinear filtering is also absolutely free, disregarding memory usage.

Lazy8s said:
PSP is a fixed-function graphics platform, around OpenGL ES 1.1 in feature set.

As to the topic question, Pandora's GPU has approximately GeForce 3 performance with GeForce 7 graphics features.
Is it really?? Excuse my ignorance, but is DOT3 per-pixel lighting possible in hardware for the PSP?
 
By "around OpenGL ES 1.1", I meant that PSP's feature set was roughly similar to the spec (more similar to 1.1 than a programmable spec like 2.0, at least), not based around it. As assumed, PSP doesn't support DOT3 shading but does have a few of its own advanced features like bezier acceleration.

Supersampling AA still incurs the fill rate hit on a TBDR, but it doesn't incur the super-sized memory footprint or bandwidth expense of an IMR's back/sample buffer, since larger resolutions only mean extra tiles for a TBDR.

The rumor that SGX530 had eight shader units might've been a misunderstanding about its Z comparator units. Pandora's SGX530 probably has eight of those, giving it a maximum effective opaque and stencil fill rate of 1200 megapixels per second (at the 150 MHz clock that corresponds to the 900 MHz Cortex of the reportedly developer-clocked OMAP3530s).
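That 1200 megapixel figure is just comparator count times clock; a trivial check, assuming the eight Z/stencil units speculated above:

CODE
# Quick check of the figure above: effective HSR/stencil fill rate is simply
# the number of Z comparator units times the core clock (assumed values).

z_units  = 8        # speculated Z/stencil comparators in SGX530
clock_hz = 150e6    # 150 MHz, the clock paired here with a 900 MHz Cortex
print(f"{z_units * clock_hz / 1e6:.0f} Mpixels/s")   # 1200

# At the 110 MHz figure mentioned earlier in the thread it would be 880 Mpixels/s.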

The point of PowerVR's HSR/visible surface determination, basically, is to completely eliminate overdraw, as highlighted in actual benchmarks.
QUOTE
Take the GeForce2 Ultra, for example. Theoretically, this card has a 1000 megapixel per second fill rate given its clock speed and rendering pipe. What we see in actuality, however, is that the GeForce2 Ultra is only able to fill 375 megapixels per second. This means that given the synthetic Serious Sam fill rate tests, the GeForce2 Ultra is only 37.5% effective. One can attribute this to overdraw as well as memory bandwidth limitations.

The Kyro II, on the other hand, features what many would consider a lowly 350 megapixel per second fill rate. However, when the tests are run, the Kyro II scores a fill rate that is only 22 megapixels per second less than the GeForce2 Ultra. Coming out at 352.89 megapixels per second, the Kyro II's effective fill rate matches its theoretical fill rate, something we cannot say about any other card on the market. According to the Serious Sam benchmarks, the Kyro II is actually 100% efficient.

http://www.anandtech.com/showdoc.aspx?i=1435&p=13

The amount of benefit provided by early Z techniques to IMRs can still be rather limited by comparison: the depth complexity they rendered in Quake III only fell to around 3.0 when early Z was employed.
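The efficiency argument in that Anandtech test boils down to measured fill rate divided by theoretical fill rate. Using only the numbers quoted above:

CODE
# Effective fill rate efficiency from the Serious Sam numbers quoted above:
# measured fill rate divided by the theoretical peak.

cards = {
    "GeForce2 Ultra": (375.0, 1000.0),   # (measured, theoretical) in Mpixels/s
    "Kyro II":        (352.89, 350.0),
}

for name, (measured, theoretical) in cards.items():
    print(f"{name}: {measured / theoretical:.1%} effective")

# GeForce2 Ultra: 37.5% effective
# Kyro II: ~100.8% effective (it essentially hits its theoretical peak)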
 
Laurent said:
Exophase said:
More on target... I hear that SGX530 has 8 unified shader units. Unfortunately I don't have any very good sources for this, so we can just hope that it's true.

Wouldn't 8 USSE @ 200 MHz * 3x overdraw translate to > 4Gpixels/s (while you were previously quoting 1.2Gpixels/s)?

Anyway, I think we can't conclude anything from any number; we will just have to wait and see how it performs... or how it doesn't perform :p
Okay, going to try this again. Texture mapping units are not shaders. The TMU count is, and always has been, the number from which fill rate is derived. A shader can't texture map in a single cycle.
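For what it's worth, here is one possible reading (an assumption, not an official PowerVR breakdown) of where the previously quoted 1.2 Gpixels/s figure could come from, and why an 8-shader count never entered into it:

CODE
# Fill rate comes from pipelines/TMUs * clock, not from shader count.
# Assumed reading of the numbers floating around this thread:

pipelines       = 2      # USSE pipelines, each with a TMU (per dmdm)
clock_hz        = 200e6  # PowerVR's advertised clock
overdraw_factor = 3      # the disputed "effective fill rate" multiplier

raw_fill       = pipelines * clock_hz              # 400 Mpixels/s
effective_fill = raw_fill * overdraw_factor        # 1.2 Gpixels/s
print(f"{raw_fill / 1e6:.0f} Mpixels/s raw, {effective_fill / 1e9:.1f} Gpixels/s 'effective'")

# 8 shader units * 200 MHz * 3 would give 4.8 Gpixels/s, which is why the
# 8-unit idea never squared with the quoted 1.2 Gpixels/s.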

dmdm said:
Answers to a couple of questions: Exophase, yes, a TBDR is completely efficient and will remove 100% of the overdraw, even within tiles...
I guess there must be something I'm missing about TBDR then, since that's not really the intuitive explanation. I would figure something would have to scale time-wise with the number of polygons in a tile.

dmdm said:
SGX530 only contains 2 USSE pipelines; however, they are SIMD in nature, so comparisons with a GF3 will probably be difficult given its less flexible 1:4 vertex-to-pixel shader ratio, etc. As I say, I still believe my original statement will hold true, and in most games I would expect you to be shader limited... vertex/polygon throughput and memory bandwidth are highly unlikely to be the limiting factor.
Only two shaders? Source please. I was under the impression that all GPU shaders ever made were vector coprocessors and hence were SIMD by default. All of the assembly language I've seen for them seems to agree with this.

Lazy8s said:
Pandora's SGX530 probably has eight of those, giving it a maximum effective opaque and stencil fill rate of 1200 megapixels per second (at the 150 MHz clock that corresponds to the 900 MHz Cortex of the reportedly developer-clocked OMAP3530s).
Would also like a source on the SGX clock for Pandora. If it's 110MHz at 600MHz for the Cortex then that'd mean 165MHz at 900MHz if they're divided off of the same clock. But there's no guarantee that they would be. The Cortex-A8, C64x+, and SGX530 clocks we've been seeing all seem pretty weird in relation to each other.
 
QUOTE
I guess there must be something I'm missing about TBDR then, since that's not really the intuitive explanation. I would figure something would have to scale time-wise with the number of polygons in a tile.
The Kyro II tile co-processor could do 512 polygons per tile per cycle. Since each of these tiles was only 16x32 pixels, it would be highly unusual to reach this limit. Some factor like this probably exists in the SGX.

QUOTE
Only two shaders? Source please.
There is probably a more direct quote somewhere; PVRInsider: http://www.iii.co.uk/investment/detail?cod...p;action=detail

Paraphrasing: "With double the pipelines of the already blisteringly-powerful SGX530" ... "POWERVR SGX540 is the first 4 pipeline version of the SGX family."
 
Adventus said:
Paraphrasing: "With double the pipelines of the already blisteringly-powerful SGX530" ... "POWERVR SGX540 is the first 4 pipeline version of the SGX family."
Yes, pipelines referring to TMUs, which is where I got that figure (Wikipedia says the two are often analogous). Shader count doesn't have to be the same as the TMU count.

Speculation here says 8 shaders, but there's a lot of debate over things in general:

http://forum.beyond3d.com/showthread.php?t=32809

But other sources are suggesting that the two are USSE pipelines (probably coupled with the TMUs, not disjoint like in modern cards).

PowerVR doesn't seem that willing to give real information.

Honestly 2 shaders doesn't sound that good.
 
QUOTE
Yes, pipelines referring to TMUs, where I got that figure (Wikipedia says that the two are often analogous). Shader count doesn't have to be the same as TMUs.
Yeah, I'm not quite up on all the terminology. However, how did you extrapolate only 2 shaders from dmdm saying there are only 2 USSE pipelines?

PS: I would guess dmdm is a source himself, since he said he's one of the SGX designers.

EDIT: Funny how that topic is from ~2 years ago and we still don't have any more information.
 
If I remember correctly, the SGX core has two ways to set its clock: it can be set to a fixed frequency of 96MHz, or it can run at a divided-down frequency of the CPU clock. The legal dividers are 3, 4, 5 and 6. It's been a while since I looked that up, but I think it's right.

The demos we saw in Texas were running at 500MHz with the SGX core at 166MHz, which means they were using a divider of 3. Based on that, you can speculate which values would work for 600, 800 and 900 MHz. :)
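If that description is right, the candidate SGX clocks are easy to enumerate. A small sketch based purely on the fixed-96MHz / divide-by-3-to-6 description above:

CODE
# Candidate SGX clocks per the description above: either a fixed 96 MHz, or
# the CPU clock divided by 3, 4, 5 or 6.

FIXED_MHZ = 96
DIVIDERS  = (3, 4, 5, 6)

def sgx_clock_options(cpu_mhz):
    return [FIXED_MHZ] + [round(cpu_mhz / d, 1) for d in DIVIDERS]

for cpu in (500, 600, 800, 900):
    print(cpu, sgx_clock_options(cpu))

# 500 -> [96, 166.7, 125.0, 100.0, 83.3]   (the Texas demo: 500/3)
# 600 -> [96, 200.0, 150.0, 120.0, 100.0]
# 800 -> [96, 266.7, 200.0, 160.0, 133.3]
# 900 -> [96, 300.0, 225.0, 180.0, 150.0]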
 
O.O

600MHz Cortex + 200MHz SGX = Erection
900MHz Cortex + 300MHz SGX = Orgasm

That's all I have to say...

...OMG wait... when this thing is fully clocked out to 900MHz... I think it's about as powerful as an Xbox, if I remember the specs... actually I think it's a little bit more :-||

Holy ****. Am I comparing this wrong, or can someone else look at the Xbox specs and agree with this?

[edit] Scratch that... I think I was being a little overzealous too quickly. I just checked the specs again; in my mind I was only comparing processor clock speeds with a little +/- arch difference. It's actually quite different... but then again... idk... It definitely looks like it could compete with Xbox hardware though... this is very intriguing... I would love for someone who knows the architecture differences to take a look at these spec sheets and see how they compare.
 
Soulkiller said:
...OMG wait... when this thing is fully clocked out to 900MHz... I think it's about as powerful as an Xbox, if I remember the specs... actually I think it's a little bit more :-||

Holy ****. Am I comparing this wrong, or can someone else look at the Xbox specs and agree with this?
It's a bit difficult to compare specs - the Xbox had an x86 CPU, for starters...

Having said that, it's interesting what all one could conceivably do with the OMAP3 in the 900MHz mode - certainly much, much more than any handheld unit to date.
 
Adventus said:
However, how did you extrapolate only 2 shaders from dmdm saying there are only 2 USSE pipelines?
USSE is the name for PowerVR's unified shader architecture.

Adventus said:
PS: I would guess dmdm is a source himself, since he said he's one of the SGX designers.
Guess that settles that then. All we really needed to hear.

So... yeah, that's too bad. :/
 
MWeston said:
If I remember correctly, the SGX core has two ways to set its clock: it can be set to a fixed frequency of 96MHz, or it can run at a divided-down frequency of the CPU clock. The legal dividers are 3, 4, 5 and 6. It's been a while since I looked that up, but I think it's right.

The demos we saw in Texas were running at 500MHz with the SGX core at 166MHz, which means they were using a divider of 3. Based on that, you can speculate which values would work for 600, 800 and 900 MHz. :)
Great, this was the same OMAP3 you guys are using, right? I bet that means it can at least handle 600 / 4, if not 600 / 3, and 900 / 5 or 6. At any rate, something better than a divider of 6 (or that slow fixed clock).

That much is good to know anyway.

Still, with only 2 USSEs I think it might be better to do vertex shading on the CPU in a lot of cases, to free up both on-chip units for pixel shading. NEON at 600MHz can probably push more vertex work than both USSEs a lot of the time anyway.

A little bit of math:

200MHz (okay, this is optimistic, but PowerVR advertises it, so it's not too unrealistic) * 2 USSEs = 400,000,000 cycles per second. We'll assume common 4-way ALU operations can be done in 1 cycle. 800 * 480 = 384,000 pixels on the screen. dmdm says there's no overdraw, and this applies to shaders as well, so as long as there aren't render targets beyond the framebuffer (as is typical), that gives about 1041.67 cycles per pixel per second. If you want 60fps you can shade up to 17.36 cycles per pixel; at 30fps, 34.72 cycles per pixel. If around 320x240 is rendered (say, for PS1 emulation) at 60fps, then 86.8 cycles per pixel.
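The same budget written out as a tiny script, under the same assumptions (200MHz, 2 USSEs, 4-way ops at one per cycle, zero overdraw, framebuffer as the only render target):

CODE
# Shader cycle budget per pixel under the assumptions stated above.

def shader_cycles_per_pixel(clock_hz, usse_count, width, height, fps):
    cycles_per_second = clock_hz * usse_count
    pixels_per_second = width * height * fps
    return cycles_per_second / pixels_per_second

print(shader_cycles_per_pixel(200e6, 2, 800, 480, 60))   # ~17.4
print(shader_cycles_per_pixel(200e6, 2, 800, 480, 30))   # ~34.7
print(shader_cycles_per_pixel(200e6, 2, 320, 240, 60))   # ~86.8 (PS1-ish resolution)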

Unfortunately I have no idea what you can really do in a cycle, but like dmdm said, I'm sure that heavy optimization is going to be number one for getting the most out of this.
 