I Want To Compare The Pandora Video Card With A PC Video Card


Calmatory said:
A reminder: the Pandora uses an 800*480 resolution, which means much more raw power per pixel than on typical PCs (1024*768 was pretty much the standard back in the GeForce 2 days).

384,000 pixels versus 786,432 pixels; 1024x768 has more than twice as many. I really do think GF6200-level graphics might be possible if a talented developer made an optimized engine.
 
Given the performance we've seen in Quake 3, I'd peg it at around the speed of a GeForce 2 Ti or a GeForce 4 MX.

The GF2/4 of course had no shader units, so I'm going purely by triangles/sec and texture fill rate. A GeForce 5200LE might be what it can manage at a push (the 5xxx series had SM2 support).
 
Kramy said:
Calmatory said:
A reminder: the Pandora uses an 800*480 resolution, which means much more raw power per pixel than on typical PCs (1024*768 was pretty much the standard back in the GeForce 2 days).

384,000 pixels versus 786,432 pixels; 1024x768 has more than twice as many. I really do think GF6200-level graphics might be possible if a talented developer made an optimized engine.

It doesn't really work that way; upping the screen res generally doesn't cut or increase your framerate in a linear fashion...
 
Enverex said:
Kramy said:
Calmatory said:
A reminder: the Pandora uses an 800*480 resolution, which means much more raw power per pixel than on typical PCs (1024*768 was pretty much the standard back in the GeForce 2 days).

384,000 pixels versus 786,432 pixels; 1024x768 has more than twice as many. I really do think GF6200-level graphics might be possible if a talented developer made an optimized engine.

It doesn't really work that way; upping the screen res generally doesn't cut or increase your framerate in a linear fashion...

Depends on whether you are fragment bound or vertex bound (or CPU bound). Most old-school games didn't have much per-pixel cost, so framebuffer size doesn't make a big difference. Many games these days are fragment bound.
 
Calmatory, you're focusing on the wrong metrics. A tile based deferred renderer (TBDR) like SGX scales very differently with respect to both memory bandwidth and fillrate (texture accesses, per-fragment operations) than an immediate mode renderer (IMR) does. Compared to an old DX6 or DX7 level card like TNT1, TNT2, or GeForce 1, you could say that SGX 530 might not work as hard, but it works much more smartly. TBDR makes it so only the top-most opaque pixel and whatever translucent pixels are on top of it need to be rendered, which includes all of the fragment shading pipeline. This also means that you don't normally need depth or stencil buffers stored alongside the framebuffer, so they can be kept in very fast on-chip SRAM, and the framebuffer is cached per-tile, allowing for quick bursting and aggressive prefetching. Even with alpha blending, every framebuffer pixel only has to be written once (fire and forget).
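
To make that flow concrete, here is a rough C sketch of the idea (purely illustrative: the function names, tile size, and data layout are made up, and this is not how the actual SGX hardware or driver works). Visibility is resolved per tile in on-chip memory first, and only the surviving fragment per pixel gets shaded and written out.

Code:
#include <stdio.h>
#include <float.h>

#define TILE_W 16
#define TILE_H 16

typedef struct { float depth; int tri_id; } TileSample;

/* Trivial stand-ins so the sketch compiles; a real renderer would
 * rasterize each triangle and run a shader program here. */
static float rasterize_depth(int tri_id, int x, int y)
{ (void)x; (void)y; return (float)tri_id; }
static unsigned shade_fragment(int tri_id, int x, int y)
{ (void)x; (void)y; return (unsigned)tri_id * 0x010101u; }

static void render_tile(int tile_x, int tile_y,
                        const int *tris, int tri_count,
                        unsigned *framebuffer, int fb_pitch)
{
    /* "On-chip" tile buffer: depth and triangle ID live in fast local
     * memory, so external RAM is never touched while binning. */
    TileSample tile[TILE_H][TILE_W];

    for (int y = 0; y < TILE_H; y++)
        for (int x = 0; x < TILE_W; x++)
            tile[y][x] = (TileSample){ FLT_MAX, -1 };

    /* Pass 1: visibility only. Every triangle touching the tile gets
     * depth-tested, but nothing is shaded yet. */
    for (int i = 0; i < tri_count; i++)
        for (int y = 0; y < TILE_H; y++)
            for (int x = 0; x < TILE_W; x++) {
                float d = rasterize_depth(tris[i],
                                          tile_x * TILE_W + x,
                                          tile_y * TILE_H + y);
                if (d < tile[y][x].depth)
                    tile[y][x] = (TileSample){ d, tris[i] };
            }

    /* Pass 2: shade only the single winning (top-most opaque) fragment
     * per pixel, then burst the finished tile out to external memory. */
    for (int y = 0; y < TILE_H; y++)
        for (int x = 0; x < TILE_W; x++)
            if (tile[y][x].tri_id >= 0)
                framebuffer[(tile_y * TILE_H + y) * fb_pitch +
                            (tile_x * TILE_W + x)] =
                    shade_fragment(tile[y][x].tri_id,
                                   tile_x * TILE_W + x,
                                   tile_y * TILE_H + y);
}

int main(void)
{
    unsigned fb[TILE_W * TILE_H] = { 0 };
    int tris[3] = { 3, 1, 2 };                 /* three fake "triangles" */
    render_tile(0, 0, tris, 3, fb, TILE_W);
    printf("pixel (0,0) shaded once, from the closest triangle: %u\n", fb[0]);
    return 0;
}

An immediate mode renderer would instead shade every covered pixel of every submitted triangle and keep reading and writing the external depth and color buffers as it goes, which is where the bandwidth difference comes from.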

So you should be able to see how a platform like this gets away with needing much lower bandwidth and fewer pixel operations per second. About the only thing that needs to be fast is the set of depth comparators performing the early-Z removal, which is why there are 8 or so of them. As an added bonus, you also get very high depth precision (32bit floating point, I think using 1/w), which helps prevent z-fighting.

The shader pipelines also look weak on paper since there are only two of them and they only support single issue 32bit (or maybe 40bit, I'm not altogether sure about this) operations. But this is mitigated by supporting 3/4-way SIMD over 10bit fixed point color formats, allowing color operations to still be vectorized. Texel blending/fogging/per-pixel lighting/etc operations will tend to dominate over per-vertex operations that need higher precision, and the reduced color range is perfectly acceptable for a device such as this, especially when you're comparing it to old non-HDR graphics cards to begin with. The efficiency of the USSEs is also boosted by supporting vertical multithreading (single-issue SMT), allowing very fast thread switching automatically to hide latency. Old fixed function archs were probably already designed to hide latency but this allows for much more effective throughput of shaders than would be possible otherwise.

There are of course some gotchas: you have to avoid alpha testing and multi-pass rendering to really use the renderer effectively. These things still work but they take a big toll on the efficiency. But if a game plays nice then it can make the rather low clocked/narrow platform go surprisingly far.

If you want a realistic comparison then look at the graphical quality of Sega Dreamcast games, and imagine something at least 2x stronger and with much more modern features/flexibility. Also imagine something that can support at least a few times as many polygons on screen.
 
Exophase said:
Radeon 9000 is the only tough one. If you limited the SGX to doing things exactly as the Radeon does and no more then I think the Radeon could probably win, but it'd be so crippled that it's really far from a fair comparison. Given the lower resolutions expected and the TBDR I think that even if the SGX has less brute force power it can still probably deliver better looking games without much problem.
I think there's no need for the probabilistic part in bold. SGX is a unified shader design; the R2xx is not. While the SGX may individually reach the peak vertex and fragment performance of the R2xx, it has little chance of maintaining both simultaneously. I'm saying that as an RV280 home user: that chip can happily crunch some ~2M verts/s worth of Blinn-Phong with a single light source while simultaneously painting a 1024^2 viewport with a fragment shader.
 
Exophase said:
If you want a realistic comparison then look at the graphical quality of Sega Dreamcast games, and imagine something at least 2x stronger and with much more modern features/flexibility. Also imagine something that can support at least a few times as many polygons on screen.

So you're saying that we can get around Dreamcast-quality games, graphics-wise, all things considered.

I thought the Dreamcast rivaled the PS2 in graphics, and beat it in places!
(or maybe I'm just being nostalgic)
 
(naw)mcx said:
Exophase said:
If you want a realistic comparison then look at the graphical quality of Sega Dreamcast games, and imagine something at least 2x stronger and with much more modern features/flexibility. Also imagine something that can support at least a few times as many polygons on screen.

So you're saying that we can get around Dreamcast-quality games, graphics-wise, all things considered.

I thought the Dreamcast rivaled the PS2 in graphics, and beat it in places!
(or maybe I'm just being nostalgic)

If the thing can run Doom 3 (minus some graphical features, at low resolution), then its graphical abilities would easily surpass Dreamcast level. I think it could rival a Radeon 7000 or maybe an 8500. I'm also assuming it could rival those Intel GMA chipsets.
 
Well the Intel GMA 500 is an SGX 535, which is a faster version of the Pandora's SGX, the 530.

And here is the benchmark for the GMA 500 when paired with an x86 Atom.

The process seems very large: the tested chip was built on a .13 micron (130 nm) process, whereas the SGX 530 in the Pandora is 65 nm. However, that probably only gives us lower power consumption.

On the other hand, we might have faster RAM than what was tested in these benches.

That said, I don't think looking at these benches is entirely fair. We've already seen Quake 3 running quite nicely at 800x480 with medium settings, so comparatively we should be able to get at least 6000-7000 points in 3DMark 2001.

The test didn't run on this page. Honestly, the hardware is nice for how much power it uses, and it has a very deep feature set, so unless a team of people gets coordinated to create something specifically for this chipset, I don't think we will see exactly what she can do.
 
Phawx said:
Well the Intel GMA 500 is an SGX 535, which is a faster version of the Pandora's SGX, the 530.
Hmm... If this is true, would a driver for the GMA 500 be useful in getting full OpenGL 2.0 on the Pandora? Does such an open-source driver for the GMA 500 exist?
 
wermy said:
Phawx said:
Well the Intel GMA 500 is an SGX 535, which is a faster version of the Pandora's SGX, the 530.
Hmm... If this is true, would a driver for the GMA 500 be useful in getting full OpenGL 2.0 on the Pandora? Does such an open-source driver for the GMA 500 exist?

I've heard the GMA 500 is a real pain in Linux currently. Many people I know who want good compatibility with Linux and a certain BSD-based system steer away from it when looking for a netbook to buy.
 
Exophase said:
...
The shader pipelines also look weak on paper since there are only two of them and they only support single issue 32bit (or maybe 40bit, I'm not altogether sure about this) operations. But this is mitigated by supporting 3/4-way SIMD over 10bit fixed point color formats, allowing color operations to still be vectorized. Texel blending/fogging/per-pixel lighting/etc operations will tend to dominate over per-vertex operations that need higher precision, and the reduced color range is perfectly acceptable for a device such as this, especially when you're comparing it to old non-HDR graphics cards to begin with.
...

I've read this a few times now. Is it possible that someone could take some pictures and compare them side by side with the 10-bit colors? Maybe some emulator screenshots as well as movie and HDR shots.
 
Exophase, thanks for clearing things up.

I'm quite sure that the hardware is capable of running pre-2002 games quite decently, at least when it comes to graphics throughput. The real problem is the CPU.
 
Calmatory said:
Exophase, thanks for clearing things up.

I'm quite sure that the hardware is capable of running pre-2002 games quite decently, at least when it comes to graphics throughput. The real problem is the CPU.

Going back to my Dreamcast comparison, the CPU on the Pandora is generally a lot faster than the Dreamcast's. In integer performance I wouldn't be surprised if it was often 3-4x faster, especially considering the Dreamcast's pretty weak caching. The exception is in how fast it can crunch dot products: Dreamcast has specialized hardware that excels at this and can do 1 (vec4) per cycle, while NEON on Cortex-A8 needs two cycles to do the same thing, assuming that structures are arranged optimally. But the 3x clock speed advantage and 0-cycle load use from L2 that NEON has should still give a 1.5-2x advantage in this area. The SGX can also be used to perform vertex transforms, although I'd personally stick to NEON if the game needs the pixel shading.
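
For anyone curious what that looks like in practice, here's a minimal sketch of a single vec4 dot product with NEON intrinsics (assuming arm_neon.h and a GCC/Clang toolchain targeting a Cortex-A8 class core; the function name is just for illustration):

Code:
#include <stdio.h>
#include <arm_neon.h>

/* One 4-component dot product with NEON intrinsics. The multiply is a
 * single quad-word op; the horizontal reduction needs a couple of extra
 * pairwise adds, which is why batching vertices in SoA layout (four dot
 * products at once via vmlaq_f32, no reduction needed) is the kind of
 * "optimal arrangement" mentioned above. */
static inline float dot4_neon(const float *a, const float *b)
{
    float32x4_t va = vld1q_f32(a);        /* load 4 floats from a */
    float32x4_t vb = vld1q_f32(b);        /* load 4 floats from b */
    float32x4_t p  = vmulq_f32(va, vb);   /* element-wise products */

    /* (p0+p2, p1+p3), then pairwise add again to get the full sum. */
    float32x2_t s = vadd_f32(vget_low_f32(p), vget_high_f32(p));
    s = vpadd_f32(s, s);
    return vget_lane_f32(s, 0);
}

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 5.0f, 6.0f, 7.0f, 8.0f };
    printf("%f\n", dot4_neon(a, b));      /* 70.000000 */
    return 0;
}

The cycle counts quoted above are about the hardware issue rate rather than anything visible at this source level; the practical takeaway is that arranging vertex data so several dot products happen per instruction is what gets close to peak.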
 
greendots said:
I've read this a few times now. Is it possible that someone could take some pictures and compare them side by side with the 10-bit colors? Maybe some emulator screenshots as well as movie and HDR shots.

I think you have some misconceptions about this. 10bit means 1.1.8 fixed point colors, so they can range from -2 (inclusive) to 2 (non-inclusive) and have 256 fractional positions. They're basically just slightly scaled 8bit components that have extra range so they can be clamped after color operations. The framebuffer will still be 8 bits per component, and I don't think that this format will offer any visual advantage over a traditional fixed function pixel pipeline that's fully 8bit internally.

SGX CAN do color operations using 16bit or 32bit floating point for HDR, and the on-tile scanline buffer is probably 32bit per component to accommodate this, and can still output to an external 8bit per component framebuffer. You'd just end up using a lot more shader cycles doing this, as many as 4x more.
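
To put some numbers on that 1.1.8 layout (1 sign bit, 1 integer bit, 8 fraction bits), here's a small C sketch of the encoding as described. This is my own illustrative code, not anything from Imagination's documentation, and the real hardware's rounding behavior may differ.

Code:
#include <stdio.h>
#include <stdint.h>

/* 10-bit 1.1.8 signed fixed point as described above: value = raw / 256,
 * raw in [-512, 511], so representable values run from -2.0 (inclusive)
 * up to just under +2.0, in steps of 1/256. */
static int16_t to_fx10(float x)
{
    if (x < -2.0f)            x = -2.0f;
    if (x >  511.0f / 256.0f) x =  511.0f / 256.0f;  /* max 1.99609375 */
    return (int16_t)(x * 256.0f);       /* truncates toward zero; a sketch */
}

static float from_fx10(int16_t raw)
{
    return (float)raw / 256.0f;
}

int main(void)
{
    /* The format has the same 256-step granularity as an 8-bit color
     * channel, plus headroom: intermediate results can swing out to
     * nearly +/-2 before being clamped back, which is what the extra
     * sign/integer bits buy you. */
    printf("0.5   -> %f\n", from_fx10(to_fx10(0.5f)));    /* 0.500000  */
    printf("1.75  -> %f\n", from_fx10(to_fx10(1.75f)));   /* 1.750000  */
    printf("3.0   -> %f\n", from_fx10(to_fx10(3.0f)));    /* 1.996094  */
    printf("-2.5  -> %f\n", from_fx10(to_fx10(-2.5f)));   /* -2.000000 */
    return 0;
}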
 
I usually avoid commenting on topics that are over my head, and 3D acceleration is one of those cases. But somewhere between 3 and 4 times faster than the Dreamcast is HUGE in my opinion. The DC is still gorgeous, even by today's standards. I'm biased because I've seen one hooked up to a PC monitor instead of a TV, and the difference is astonishing. I always thought the DC looked better than the PS2, period. The only PS2 games that look anywhere near as good as a game on the DC were Ico and Shadow of the Colossus. If we see anything even close to Soul Calibur (graphically) on the Pandora, I'm going to need a straitjacket and a padded room.
 
PhonicUK said:
Given the performance we've seen in Quake 3, I'd peg it at around the speed of a GeForce 2 Ti or a GeForce 4 MX.

The GF2/4 of course had no shader units, so I'm going purely by triangles/sec and texture fill rate. A GeForce 5200LE might be what it can manage at a push (the 5xxx series had SM2 support).
I remember the devs originally estimating that it was about as powerful as a GeForce 3 but with most of the features and programmability of a GeForce 7.
I don't know if this still holds true, but I would be happy with that. Games like Battlefield 2 can run OK on a GeForce 3 with all the effects turned down, so that would be powerful enough for most types of games IMO :)
 
This color space thing is confusing me.
I thought that 3D accelerated stuff with dynamic lighting usually occurred in 24 / 32 bit color space?

How does 10 bit fit into this? Is it 10 bits per color channel, so it's 10.10.10.2 with 2 junk/alpha bits instead of 8.8.8.8 with 8 junk/alpha bits?
 
lulzfish said:
This color space thing is confusing me.
I thought that 3D accelerated stuff with dynamic lighting usually occurred in 24 / 32 bit color space?

How does 10 bit fit into this? Is it 10 bits per color channel, so it's 10.10.10.2 with 2 junk/alpha bits instead of 8.8.8.8 with 8 junk/alpha bits?

I said components. Per channel.

The USSE shaders on SGX have three datatypes: high precision, which is IEEE-754 compliant 32-bit floats; medium precision, which is 16-bit floats (1 sign bit, 5 exponent bits, 10 significand bits); and low precision, which is the 10bit fixed point format I described. In a cycle it can do one high-precision operation, two medium-precision operations, or three/four low-precision operations. What documentation/comments are available actually do say three or four, which is unclear but suggests 40bit registers.

Textures and the framebuffer can each be in one of several formats. When you perform pixel shading, each color component is converted to one of the three internal formats, depending on which sample type you used. If your sample type is not high enough precision then you risk getting the range truncated in addition to losing significant bits. When the shading is done the color gets converted to the framebuffer's format. Most people are going to be using 8888 RGBA textures or one of the 16bit formats, and similarly 32 or 16bit framebuffers. But you can use texture formats whose color components are half floats or full floats, and you can use higher precision formats than the 10bit-per-component one in your shaders too.
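
And for the medium-precision type, here's a decoder for the standard IEEE 754 half-float layout described above (1 sign, 5 exponent, 10 significand bits). Again, this is generic illustrative C, not SGX-specific code.

Code:
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Decode a 16-bit float laid out as 1 sign / 5 exponent / 10 significand
 * bits (standard IEEE 754 binary16). */
static float half_to_float(uint16_t h)
{
    int sign = (h >> 15) & 0x1;
    int exp  = (h >> 10) & 0x1F;
    int frac =  h        & 0x3FF;
    float s  = sign ? -1.0f : 1.0f;

    if (exp == 0)                      /* zero / subnormal */
        return s * ldexpf((float)frac, -24);          /* frac * 2^-24 */
    if (exp == 31)                     /* infinity / NaN */
        return frac ? NAN : s * INFINITY;
    /* normal numbers: implicit leading 1, exponent bias of 15 */
    return s * ldexpf(1.0f + frac / 1024.0f, exp - 15);
}

int main(void)
{
    printf("%f\n", half_to_float(0x3C00));  /* 1.0    */
    printf("%f\n", half_to_float(0xC000));  /* -2.0   */
    printf("%f\n", half_to_float(0x3555));  /* ~0.333 */
    return 0;
}

The 10-bit significand is why half precision is fine for color work but starts to run out of accuracy for things like vertex positions, which is the high-precision vs. low-precision split described earlier in the thread.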
 