What's More Powerful? Open Pandora or iPhone 3GS?


WizardStan said:
The iPhone OS supposedly has more overhead than the Pandora, so running the same program at the same clock speed, the Pandora version will complete slightly faster. Supposedly. I forget where I heard that, so may be misremembering.

As on most modern platforms, there isn't much overhead just from the kernel running. If your task is, say, purely numeric, then you won't gain efficiency on the Pandora over the iPhone.

The main bottleneck is in I/O, especially graphics. From what I understand, everything 3D on the iPhone is rendered to a texture buffer, then composited to the framebuffer as 3D operations. On the Pandora you would have the option to render directly to the framebuffer when running in fullscreen; I don't really know how X11 and OpenGL interact when windowed.

But since a lot of things people run on the Pandora will be fully software rendered (i.e. emulators), what will matter more is how fast you can get things to application surfaces. On the Pandora you can probably get direct framebuffer access when fullscreen, but making it play nice with task switching would be hard. On both platforms the 3D drivers should support pixel buffers and possibly some extensions to improve this, but since the same vendor is providing drivers for both (or at least the core of the drivers), I wouldn't expect one to have a substantial advantage over the other. This can of course be further hurt by whatever abstractions Apple added, i.e. Core Surface and whatever.
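
For the fullscreen case, this is roughly what direct framebuffer access looks like through the standard Linux fbdev interface - a minimal sketch only, assuming a /dev/fb0 device and nothing else drawing to the screen; the pixel format should really be checked from the queried screen info rather than assumed:

[code]
/* Minimal fbdev sketch: map the visible framebuffer and write to it
 * directly. Assumes Linux with a /dev/fb0 device; error handling is
 * mostly trimmed for brevity. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    ioctl(fd, FBIOGET_VSCREENINFO, &var);
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);

    /* Map the whole visible screen into the process's address space */
    size_t size = (size_t)fix.line_length * var.yres;
    uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* A software renderer would copy its scanlines here; this just
     * clears the screen as a placeholder. */
    memset(fb, 0, size);

    munmap(fb, size);
    close(fd);
    return 0;
}
[/code]

The catch is exactly the task switching mentioned above: nothing stops another process (or X) from drawing over the same memory, which is why making this play nice with a windowing system is the hard part.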

I'm hoping that pixel buffers will prove to be fast and asynchronous like they are on my archaic GeForce 6200. There it's much faster than SDL/X11, especially when going between pixel formats.
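
For reference, this is the kind of PBO streaming path I mean - a rough sketch against desktop OpenGL (GL 2.1 with pixel buffer objects, a context already current); the function names and frame size are illustrative only, not from any real emulator:

[code]
/* Stream a software-rendered frame to a texture through a pixel
 * buffer object so the upload can proceed asynchronously. */
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

#define FRAME_W 320
#define FRAME_H 240
#define FRAME_BYTES (FRAME_W * FRAME_H * 4)

static GLuint pbo, tex;

void stream_init(void)
{
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    /* STREAM_DRAW: contents are rewritten every frame and used once */
    glBufferData(GL_PIXEL_UNPACK_BUFFER, FRAME_BYTES, NULL, GL_STREAM_DRAW);

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, FRAME_W, FRAME_H, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}

void stream_frame(const void *pixels)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    /* Orphan the old storage so we never stall on an in-flight transfer */
    glBufferData(GL_PIXEL_UNPACK_BUFFER, FRAME_BYTES, NULL, GL_STREAM_DRAW);
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (dst) {
        memcpy(dst, pixels, FRAME_BYTES);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }
    /* With a PBO bound, the data argument is an offset into the buffer,
     * not a client pointer, so the driver can DMA it without blocking. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, FRAME_W, FRAME_H,
                    GL_BGRA, GL_UNSIGNED_BYTE, (const void *)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
[/code]

Whether the SGX drivers make a path like this genuinely asynchronous is the open question.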
 
Like it or not, the OS plays a huge role here. Being open gives the Pandora a significant edge over any other device with similar hardware specs, like the iPhone. Programmers have much tighter control over the environment their game will run in, even to the point of shutting off all non-essential processes. This simply cannot be done on an iPhone. In the end, in an overall comparison of power between these two devices, the Pandora is going to win in spades.
 
Exophase said:
I'm hoping that pixel buffers will prove to be fast and asynchronous like they are on my archaic GeForce 6200. There it's much faster than SDL/X11, especially when going between pixel formats.
What about FBOs? How will those compare to pixel buffers, you think, provided that you only ever need to do stuff with them on the graphics card? E.g. is there some advantage to using pixel buffers instead of frame buffer objects since you'll avoid a copy into the (non-separate) graphics memory of the device, or, as you say, be able to use asynchronous writes/reads? Or does the driver just avoid that copy, making FBOs the faster alternative (since pbuffers will need a CPU-controlled image copy, too)?

And are you using "texture buffer" as a synonym for FBO (which only seldom contains actual texture data)?
 
mali said:
It depends. On some emulators CPU might be the bottleneck, on others it might be the GPU. Experts like Exophase could answer it better.

The GPU will almost never be a bottleneck. Most emulators won't use it at all except possibly to scale/convert the final screen output. PS1 shouldn't come close to taxing it, and by all means N64 shouldn't either, although we've been hearing the contrary (it could be too much burden in the pixel shaders in the current implementation). drkIIRaziel claims it's not a limiting factor for Dreamcast, and I suppose that remains to be seen.

mali said:
Edit:
Btw, does the 3GS have L2 cache?

Yes, of course. Without it it'd be much slower.

Sites (especially Anandtech, unfortunately) have touted the core improvements of Cortex-A8 over ARM11 fairly dramatically - the reality of the situation is that the L2 cache benefit is far more important than the core benefits. This has at least one major implication - Tegra 1 got slammed for being ARM11, but since it was an ARM11 with 256KB of L2 cache it would have performed much better than the ARM11 in the iPhone 3G. This is something I haven't seen anyone even mention.

If anyone remembers the very first Celeron processors, they'll know what I mean. They ran at 266 and 300MHz, but had no L2 cache - and ran much more slowly than their Pentium II counterparts. Back then the L2 cache on Pentium IIs was clocked at only half the CPU speed and wasn't on the same die, so it would have had much higher latency than on-die cache. In fact, when the next round of Celerons came out with on-die, full-speed L2 cache, they actually beat the Pentium IIs in several applications despite having only a quarter as much L2 cache.

For reference, Cortex-A8's L2 cache is on die/full speed and very low latency. Early on some benchmarks were mistakenly done with L2 cache off, and the performance was dramatically worse.
 
dflemstr said:
What about FBOs? How will those compare to pixel buffers, you think, provided that you only ever need to do stuff with them on the graphics card? E.g. is there some advantage to using pixel buffers instead of frame buffer objects since you'll avoid a copy into the (non-separate) graphics memory of the device, or, as you say, be able to use asynchronous writes/reads? Or does the driver just avoid that copy, making FBOs the faster alternative (since pbuffers will need a CPU-controlled image copy, too)?

FBOs go in sort of the opposite direction from PBOs. You use PBOs if the data is being generated client-side, as it would be with an emulator rendering a frame, and FBOs if the data is being generated by the 3D accelerator.

dflemstr said:
And are you using "texture buffer" as a synonym for FBO (which only seldom contains actual texture data)?

FBOs can be "render buffers" or "texture buffers." You use render buffers if you want to read the data back to the client application and texture buffers if you want to use the data as a texture (render to texture). At least, that's how I understand it.
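
Roughly, in OpenGL ES 2.0 terms the two kinds of attachment look like this - a sketch only, with illustrative names and no particular size, assuming a GLES context is already current:

[code]
/* Build an FBO that renders colour into a texture (so it can be
 * sampled later, i.e. render-to-texture) and depth into a
 * renderbuffer (needed while rendering, never read afterwards). */
#include <GLES2/gl2.h>

GLuint make_render_target(GLsizei w, GLsizei h, GLuint *out_tex)
{
    GLuint fbo, tex, depth_rb;

    /* Colour attachment: a texture we can bind in later draw calls */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* Depth attachment: a renderbuffer, since nothing ever samples it */
    glGenRenderbuffers(1, &depth_rb);
    glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depth_rb);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return 0;  /* caller falls back to the default framebuffer */
    }

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    *out_tex = tex;
    return fbo;
}
[/code]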
 
Exophase said:
For reference, Cortex-A8's L2 cache is on die/full speed and very low latency. Early on some benchmarks were mistakenly done with L2 cache off, and the performance was dramatically worse.
Well, now you've got me curious: in what situations would having no L2 cache be desirable enough to even warrant putting an on/off switch on it? It seems like you could cut that switch out and save a few transistors.
 
WizardStan said:
Well, now you've got me curious: in what situations would having no L2 cache be desirable enough to even warrant putting an on/off switch on it? It seems like you could cut that switch out and save a few transistors.

L2 cache uses a comparatively large amount of power and generates a comparatively large amount of heat. If you don't mind those things, then you wouldn't want to turn off L2 cache wholesale; you would instead designate regions of memory where L2 is bypassed via MMU permissions.

Sometimes you would want to avoid the L2 cache. On the Pandora, L2 is configured as write-allocating by default. What this means is that when you're streaming a lot of data out, you'll be going through the L2 cache, which adds a big penalty before the write can be placed on the write buffer and requires fetching the cache line so that when the partially dirty line is written back, the clean parts can be merged. This is probably the reason why measured memory bandwidth has been reported as much lower than the maximum; they need to do something to fiddle with MMU permissions in Linux (this is very much like the mmuhack on GP2X and Wiz, except the intent is kind of the reverse).

There are of course also cases where you don't want a memory region to be cached because of coherency, like if you're writing to a hardware register.
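
From userspace on Linux, the usual way to get an uncached view of something like a register block is /dev/mem opened with O_SYNC, which asks the kernel for a non-cached mapping on ARM - a sketch only, with a placeholder physical address (substitute the peripheral's documented base), and it needs root:

[code]
/* Map a hardware register block uncached via /dev/mem so reads and
 * writes bypass L1/L2 entirely (no coherency problems with the
 * peripheral). REG_BASE below is a placeholder, not a real address. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define REG_BASE 0x12340000UL  /* placeholder physical base address */
#define REG_SIZE 4096UL

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);  /* O_SYNC => uncached mapping */
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, REG_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, REG_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    /* Every access here goes straight to the device, never the caches */
    printf("register 0 = 0x%08x\n", (unsigned)regs[0]);

    munmap((void *)regs, REG_SIZE);
    close(fd);
    return 0;
}
[/code]

Changing the cache attributes of ordinary RAM allocations, as in the write-allocate case above, needs help from the kernel instead, which is what the mmuhack-style approach does.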
 
WizardStan said:
The iPhone OS supposedly has more overhead than the Pandora, so running the same program at the same clock speed, the Pandora version will complete slightly faster. Supposedly. I forget where I heard that, so may be misremembering.

Plus, won't the Pandora have built-in overclocking anyway?

Also, I think the Pandora has more RAM than the iPhone...
 
I guess you could argue over which is more powerful until you were blue in the face; it really is irrelevant at the end of the day.

The basic facts are: even if you bought the iControlPad, you'd still have to jailbreak your phone to use it, and after you'd gone through that process and started to enjoy a nice game of Quake, your battery would be flat after about 2 hours and you'd be getting a tan from the heat kicking out of it.

Personally, graphical power and processor speed are useless to me if I have to void the unit's warranty to make proper use of them, and then can't enjoy that use for longer than 12 minutes.
 
We've got a DSP :)

As for the rendering, this is what I believe I know about it (correct me if I'm wrong): with DRI1, XVideo and OpenGL would always render straight into the framebuffer, even with compositing on. DRI1 pretty much completely bypassed the X server. With DRI2, the X server has more control over the output and can also give the app an offscreen buffer to render into. Under X, apps pretty much don't have a clue what happens to their output. The X server can move the output buffer without the app noticing, so the output can be moved onto and off the framebuffer, or just not rendered at all. So compositing can be enabled and disabled on the fly, even with OpenGL apps running, and the window manager can choose to let fullscreen apps render directly to the framebuffer.
 
Sphinxter said:
Which makes a better hammer?

I think the Pandora has more heft.
 