Pandora's 3D Graphics


'Exophase' said:
'darkblu' said:
a guy was at this party many a year ago, and was having a good time, enjoying the drinks and the girls, until this unfamiliar guy pops up, persistently trying to strike up a conversation with everybody around, assuming the geeky angle (as it was somewhat of a hackers party). after boasting about pc specs and other geeks-must-be-impressed-by-such-stuff things, he approaches our guy with the question 'hey, dude, what is your VGA?' (as at that time the VGA and derivatives were the top of the food chain). our guy, being the rather eloquent type, answers:

'suuuuper.'
<darkblu> this one time we were gathered at the town square and isaac brought a bale of cotton
<darkblu> he was showing it off to all of the townspeople
<darkblu> he asked luther, "hey sir, how do you process your cotton?"
<darkblu> luther, being the smug asshole that he is, responded
<darkblu> "ginned"
<darkblu> i'm old :(


I don't get either of them?
 
Never heard of the cotton gin? What are they teaching you in school these days...
 
lulzfish said:
Was that the joke?
drunk on gin / cotton gin?

I was expecting something more subtle, apparently.


No, the joke was that darkblu is old. At least he got it ;p
 
'Mr Poletski' said:
.. the real saviour is the fact that overdraw can be totally eliminated on a per-pixel level..
i'm afraid there are no such architectures. at least none that can draw translucent pixels.
 
darkblu said:
'Mr Poletski' said:
.. the real saviour is the fact that overdraw can be totally eliminated on a per-pixel level..
i'm afraid there are no such architectures. at least none that can draw translucent pixels.



The IMGTec employee who posted here (whose handle escapes me at the moment, I'm afraid) told me that the overdraw elimination is 100% efficient not just on a per-tile level but on a per-pixel level as well. I take it this only applies to opaque pixels, but I don't think having the ability to render translucent pixels is hindering the rendering of opaque ones. I think many scenes will have little or no translucency in textures, so this isn't totally irrelevant.
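
To make the 'per-pixel overdraw elimination' claim a bit more tangible, here's a little CPU-side toy model of the idea (my own illustration; the structures and numbers are made up, and this is not PowerVR's actual pipeline or API): the renderer first resolves depth for every pixel in a tile across all the opaque primitives, and only then shades each pixel once, using whichever primitive won. The expensive texturing/shading work stops scaling with opaque overdraw.

Code:
/* Toy model of per-pixel hidden-surface removal in a tile-based deferred
 * renderer (purely illustrative -- not PowerVR's real pipeline or API). */
#include <stdio.h>
#include <float.h>

#define TILE_W 16
#define TILE_H 16
#define NPRIMS 8

static int shade_calls;                 /* counts "expensive" shading work */

static unsigned shade(int prim, int x, int y)
{
    shade_calls++;                      /* texturing/shading happens here */
    return (unsigned)(prim * 31 + x + y);
}

int main(void)
{
    /* Eight full-tile opaque quads at decreasing depth: an IMR drawn in this
     * order would shade every pixel eight times (8x overdraw). */
    float prim_depth[NPRIMS] = { 8, 7, 6, 5, 4, 3, 2, 1 };

    float depth[TILE_H][TILE_W];
    int   winner[TILE_H][TILE_W];

    /* Pass 1: depth-only visibility resolve across all opaque primitives. */
    for (int y = 0; y < TILE_H; y++)
        for (int x = 0; x < TILE_W; x++) { depth[y][x] = FLT_MAX; winner[y][x] = -1; }

    for (int p = 0; p < NPRIMS; p++)
        for (int y = 0; y < TILE_H; y++)
            for (int x = 0; x < TILE_W; x++)
                if (prim_depth[p] < depth[y][x]) { depth[y][x] = prim_depth[p]; winner[y][x] = p; }

    /* Pass 2: shade each pixel exactly once, only for the visible primitive. */
    for (int y = 0; y < TILE_H; y++)
        for (int x = 0; x < TILE_W; x++)
            (void)shade(winner[y][x], x, y);   /* result would go to the framebuffer */

    printf("pixels: %d, shade calls: %d (an IMR drawing in submit order would do %d)\n",
           TILE_W * TILE_H, shade_calls, TILE_W * TILE_H * NPRIMS);
    return 0;
}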
 
Whatever you guys are saying, it sounds cool :D
 
I know, right? I almost want to look up some of this stuff on Wikipedia so I can pull out shit from my ass but make it sound like I know what I'm talking about. ;)
 
'Exophase' said:
The IMGTec employee who posted here (whose handle escapes me at the moment, I'm afraid [ed: xmas]) told me that the overdraw elimination is 100% efficient not just on a per-tile level but on a per-pixel level as well. I take it this only applies to opaque pixels, but I don't think having the ability to render translucent pixels is hindering the rendering of opaque ones. I think many scenes will have little or no translucency in textures, so this isn't totally irrelevant.
indeed, handling of translucent pixels does not hinder the handling of opaque ones (and TBDRs have many good things to offer to translucent pixels too*), but you underestimate the relevance of translucent pixels:

1) every multi-pass primitive (less relevant these days, but still not extinct, particularly on architectures where the shader op count per pass is limited) is essentially a 'blending' one, whether it is translucent in its first pass or not.
2) statistically, entirely-opaque scenes almost don't exist in nature, at least not in the context of the average modern video game. even if only for all the fancy full-scene post-processing effects popular today. then, factor in 'momentary peaks in translucencies' like explosion fx and other particles, and you can see how an 'opaques-only optimised' architecture could be challenged. in contrast, a raw-fillrate-optimised architecture could present a better balance.

* like gargantuan bandwidth to the on-chip tile buffer, removing the onus from the IMR's read-modify-write blending ops (some IMRs like xenos do another trick there). also, older implementations of the PVR architecture had free per-pixel translucency depth-sorting, which allowed for perfectly-correct, arbitrarily-complex translucent meshes - something which has traditionally been a PITA for IMRs.
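
To put the translucency point in concrete terms, here's a tiny per-pixel sketch (my own illustration, not any particular GPU's behaviour): a standard 'over' blend reads whatever is already at the pixel, so every translucent fragment has to be shaded and blended, none of them can be 'eliminated' the way an occluded opaque fragment can, and the result depends on back-to-front order. On a TBDR those read-modify-write ops at least hit the on-chip tile buffer instead of external memory.

Code:
/* Minimal per-pixel sketch: opaque fragments can be rejected by the depth
 * test, but every translucent fragment must be blended (read-modify-write),
 * and the order matters. Illustrative only. */
#include <stdio.h>

typedef struct { float r, g, b; } Color;

/* Classic "over" blend: src drawn on top of dst with src alpha. */
static Color blend_over(Color src, float src_alpha, Color dst)
{
    Color out = {
        src.r * src_alpha + dst.r * (1.0f - src_alpha),
        src.g * src_alpha + dst.g * (1.0f - src_alpha),
        src.b * src_alpha + dst.b * (1.0f - src_alpha),
    };
    return out;
}

int main(void)
{
    Color fb    = { 0.0f, 0.0f, 0.0f };   /* pixel already holds the opaque background */
    Color red   = { 1.0f, 0.0f, 0.0f };
    Color green = { 0.0f, 1.0f, 0.0f };

    /* Two translucent layers over the same pixel: both must be processed,
     * neither can be skipped, and swapping the order changes the result. */
    Color back_to_front = blend_over(green, 0.5f, blend_over(red, 0.5f, fb));
    Color wrong_order   = blend_over(red, 0.5f, blend_over(green, 0.5f, fb));

    printf("back-to-front: %.2f %.2f %.2f\n", back_to_front.r, back_to_front.g, back_to_front.b);
    printf("wrong order:   %.2f %.2f %.2f\n", wrong_order.r, wrong_order.g, wrong_order.b);
    return 0;
}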
 
(naw)mcx posted on Apr 15 2009 at 01:45 PM said:
'Vorporeal' said:
I know, right? I almost want to look up some of this stuff on Wikipedia so I can pull out shit from my ass but make it sound like I know what I'm talking about. ;)
Hey, I did get the VGA joke :(


So did I. But all this multi-pass, pipeline, overdraw, etc. It would be pretty cool to be able to spit that out and sound legit (or actually be legit).
 
Vorporeal said:
(naw)mcx posted on Apr 15 2009 at 01:45 PM said:
'Vorporeal' said:
I know, right? I almost want to look up some of this stuff on Wikipedia so I can pull out shit from my ass but make it sound like I know what I'm talking about. ;)
Hey, I did get the VGA joke :(
So did I. But all this multi-pass, pipeline, overdraw, etc. It would be pretty cool to be able to spit that out and sound legit (or actually be legit).
Ah, I do get a lot of that.
But yes, that is pretty cool to be able to use it in context!
 
'darkblu' said:
'Exophase' said:
The IMGTec employee who posted here (whose handle escapes me at the moment, I'm afraid [ed: xmas]) told me that the overdraw elimination is 100% efficient not just on a per-tile level but on a per-pixel level as well. I take it this only applies to opaque pixels, but I don't think having the ability to render translucent pixels is hindering the rendering of opaque ones. I think many scenes will have little or no translucency in textures, so this isn't totally irrelevant.
indeed, handling of translucent pixels does not hinder the handling of opaque ones (and TBDRs have many good things to offer to translucent pixels too*), but you underestimate the relevance of translucent pixels:

1) every multi-pass primitive (less relevant these days, but still not extinct, particularly on architectures where the shader op count per pass is limited) is essentially a 'blending' one, whether it is translucent in its first pass or not.
2) statistically, entirely-opaque scenes almost don't exist in nature, at least not in the context of the average modern video game. even if only for all the fancy full-scene post-processing effects popular today. then, factor in 'momentary peaks in translucencies' like explosion fx and other particles, and you can see how an 'opaques-only optimised' architecture could be challenged. in contrast, a raw-fillrate-optimised architecture could present a better balance.

* like gargantuan bandwidth to the on-chip tile buffer, removing the onus from the IMR's read-modify-write blending ops (some IMRs like xenos do another trick there). also, older implementations of the PVR architecture had free per-pixel translucency depth-sorting, which allowed for perfectly-correct, arbitrarily-complex translucent meshes - something which has traditionally been a PITA for IMRs.


The bottom line is that there is no point attempting to duplicate code running on an 8800 GTX on the Pandora.

Beyond demos, I don't think you can realistically expect normal-mapped worlds with full-screen bloom and other effects on the Pandora.

Basic multitexturing with static shadow maps will most likely end up being the most efficient and visually pleasing way to render relatively complex worlds.
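
For anyone wondering what 'basic multitexturing with static shadow maps' boils down to: the lighting and shadowing are baked offline into a second, usually low-resolution texture (a lightmap), and at run time each pixel is just the diffuse texel multiplied by the lightmap texel - one cheap multiply per pixel, whether it's done by a fixed-function modulate stage or a one-line fragment shader. A quick CPU-side sketch of that arithmetic, with made-up texel values:

Code:
/* The per-pixel combine behind "diffuse texture x baked lightmap":
 * out = diffuse * lightmap. Everything expensive (shadowing, lighting)
 * was precomputed into the lightmap offline. Values are made up. */
#include <stdio.h>

typedef struct { float r, g, b; } Color;

static Color modulate(Color diffuse, Color lightmap)
{
    Color out = { diffuse.r * lightmap.r,
                  diffuse.g * lightmap.g,
                  diffuse.b * lightmap.b };
    return out;
}

int main(void)
{
    Color brick     = { 0.8f, 0.4f, 0.3f };    /* texel from the diffuse texture */
    Color lit_texel = { 1.0f, 0.95f, 0.9f };   /* lightmap texel in direct light */
    Color shadowed  = { 0.25f, 0.25f, 0.3f };  /* lightmap texel inside a shadow */

    Color a = modulate(brick, lit_texel);
    Color b = modulate(brick, shadowed);

    printf("lit:      %.2f %.2f %.2f\n", a.r, a.g, a.b);
    printf("shadowed: %.2f %.2f %.2f\n", b.r, b.g, b.b);
    return 0;
}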
 
(naw)mcx posted on Apr 15 2009 at 09:57 AM said:
Aha, I get the VGA one.

(http://www.pcguide.com/ref/video/stdSVGA-c.html might help)


Heh, I got that, but I'm still trying to figure out what the expected response to VGA was. Super seems a legitimate answer, unless of course he was really using an XGA adapter, in which case it's possibly funny.
 
'hch' said:
first, the memory is DDR-333 (shared) with a total bandwidth of 2.6 GB/s. for many applications, this is the limiting factor anyway.

You're too optimistic: the Pandora will come with LP DDR 166 on a 32-bit data bus, so the bandwidth will be 1.3 GB/s at the very best.
The best BW that could be reached using the A8 L2 preload engine was about 750 MB/s. OMAP3 has some serious bandwidth issues...

EDIT: Forgot to say that memory bandwidth matters less on a tiled architecture and that the SGX supports compressed textures that will help, as Xmas previously explained.
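
For what it's worth, the peak figures in this exchange follow straight from the bus parameters: peak bytes/s = clock x 2 transfers per clock (DDR) x bus width in bytes. A quick sanity check using only the numbers already quoted in the thread:

Code:
/* Peak theoretical DRAM bandwidth = clock * 2 (DDR) * bus width in bytes.
 * Numbers are the ones quoted in the thread; real sustained bandwidth is
 * lower (the ~750 MB/s figure measured with the A8 preload engine). */
#include <stdio.h>

static double peak_gb_per_s(double clock_mhz, int bus_bits)
{
    return clock_mhz * 1e6 * 2.0 * (bus_bits / 8.0) / 1e9;
}

int main(void)
{
    /* LPDDR at 166 MHz on a 32-bit bus (the Pandora figure Laurent gives). */
    printf("166 MHz, 32-bit: %.2f GB/s\n", peak_gb_per_s(166.0, 32));
    /* The same DDR-333 clock on a 64-bit bus would give hch's 2.6 GB/s figure. */
    printf("166 MHz, 64-bit: %.2f GB/s\n", peak_gb_per_s(166.0, 64));
    return 0;
}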
 
Laurent said:
You're too optimistic: the Pandora will come with LP DDR 166 on a 32-bit data bus, so the bandwidth will be 1.3 GB/s at the very best.
The best BW that could be reached using the A8 L2 preload engine was about 750 MB/s. OMAP3 has some serious bandwidth issues...
"DDR-333" is the correct term for it. I wonder if DMA can sustain full bus throughput...
 
'fischju2000' said:
I think there was a comparison a while back to the Nvidia 6600; the SGX is faster in some ways and slower in others.
But I guess this is in terms of OpenGL rendering, since Linux isn't using DirectX, where the 6600 GT has the bigger performance edge.
 