What's More Powerful? Open Pandora or iPhone 3GS?


I wish people wouldn't keep making arguments about how the Pandora wins because it has better controls or doesn't need to be jailbroken. This thread isn't about which one is better, just which one has more powerful hardware. There's no need to be defensive.

On the other hand, I'm sure I've seen this thread several times now, but the same can be said for a great many threads posted on this forum -_-

600MHz is what we were hearing for the iPhone 3GS. The thing with that 533 seems tied to the previous generation iPod Touch. Bear in mind that they're all running the same OS, so questions of "can we overclock it" at least mildly apply across the board. I've been trying to find a more official confirmation but haven't found anything, but I think the popular 600MHz figure hasn't been coming from nowhere. So I'm going to continue going with 600MHz, but feel free to disregard what I say if you think it's 533MHz.

Baseline, iPhone 3GS and Pandora have the same clock speed. Although the Pandora can be overclocked, that can't be counted as a variable, because it's only rated for 600MHz and we don't know if any given unit will perform above that - we have to go with the worst case when comparing the two. At the same clock speed iPhone 3GS has the advantage because it has double the L1 cache, which is not insignificant. Other factors like memory latency and effective bandwidth could play to its advantage or disadvantage - we don't really know how it performs. On the GPU side of things the iPhone is not "about the same", it's clearly superior, since PowerVR rates it as being able to handle twice as many polygons per second. The actual differences in internal metrics that allow for this are unclear - it could mean more shaders or it could just mean a faster tiling engine. But it's still undeniably a better chip.
 
In that thread I linked to, IvanRaide's post:

It's always great when someone says something like 'ignoring battery concerns' and then like 90% of the comments are about battery

Pelaez: The command you probably saw was this (from the terminal):

sysctl -w hw.cpufrequency=533000000

And you are right, it returns a read-only error. You would have to traverse the firmware and cook a new one with this variable exposed, but you would REALLY need to know where to look (like having the source). Given that no one has done it, I'm assuming it's either hard to find or those who could find it don't care enough (probably too worried about battery) ... so I don't think you will see an OCing app for a long while

I don't know for sure, but I don't think that's all made up.
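For what it's worth, that lock-down is visible from C too. Here's a minimal sketch of mine against the standard Darwin sysctl API (an illustration, not from that thread): reading hw.cpufrequency works, and the write path is exactly where the read-only error comes from.

Code:
/* Minimal sketch using the Darwin sysctl API: reading hw.cpufrequency
 * works, but attempting to write it is what returns the read-only error. */
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void)
{
    uint64_t freq = 0;
    size_t len = sizeof(freq);

    if (sysctlbyname("hw.cpufrequency", &freq, &len, NULL, 0) == 0)
        printf("hw.cpufrequency = %llu Hz\n", (unsigned long long)freq);

    uint64_t want = 533000000ULL;  /* the value from the command above */
    if (sysctlbyname("hw.cpufrequency", NULL, NULL, &want, sizeof(want)) != 0)
        perror("write hw.cpufrequency");  /* expected: read-only / not permitted */
    return 0;
}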

But yeah, the difference between 533 and 600 ain't much, but the fact that it seems to be unchangeable makes it something worth talking about :p

About the GPU:

I have a 3GS, and I tried out Quake 3 to compare it to the videos we've seen of Quake 3 on the Pandora (ported from the iPhone source for its nanogl library, which is on Google Code). They both seem to run at about the same FPS, a little worse on the 3GS. Which is why I came to the conclusion the GPU isn't all that much more powerful than the Pandora's 530 variety. And consider the Pandora version runs at 800 x 480 too! ;)
 
MDave said:
In that thread I linked to, IvanRaide's post:

I saw the thread and the post and referred to it. It's definitely not made up. The problem is that we don't know if it actually applies to the iPhone 3GS, because people were getting this same thing on the previous generation iPod Touch. Even though the thread is about the iPhone 3GS, the post doesn't specifically say so.

MDave said:
About the GPU:

I have a 3GS, and I tried out Quake 3 to compare it to the videos we've seen of Quake 3 on the Pandora (ported from the iPhone source for its nanogl library, which is on Google Code). They both seem to run at about the same FPS, a little worse on the 3GS. Which is why I came to the conclusion the GPU isn't all that much more powerful than the Pandora's 530 variety. And consider the Pandora version runs at 800 x 480 too! ;)

Two different ports, who knows what variables there are. I doubt the Pandora is fill limited at 800x480 for Quake 3, resolution probably doesn't matter.
 
What I failed to explain properly was that the port the Pandora version is based on comes from the same source base as the iPhone q3 port. And resolution definitely has an impact; check out the Beagleboard videos of Quake 3 running at 720p ;) (yes, I checked this one too, it's from the same source base).

Beagleboard @ 720p: http://www.youtube.com/watch?v=tSZ6I-A01wk

Pandora @ 600mhz: http://www.youtube.com/watch?v=mW6CAD2UVek

Unfortunately I can't find a video for the 3GS. But with the same demo file loaded up on my 3GS, it loads the map slower and the FPS is lower than in the Pandora video linked.
 
He said resolution didn't matter when comparing the 3Gs' resolution to that of the Pandora.

Real selective reading you have there...
 
Exophase said:
MDave said:
In that thread I linked to, IvanRaide's post:

I saw the thread and the post and referred to it. It's definitely not made up. The problem is that we don't know if it actually applies to the iPhone 3GS, because people were getting this same thing on the previous generation iPod Touch. Even though the thread is about the iPhone 3GS, the post doesn't specifically say so.

MDave said:
About the GPU:

I have a 3GS, and I tried out Quake 3 to compare it to the videos we've seen of Quake 3 on the Pandora (ported from the iPhone source for its nanogl library, which is on Google Code). They both seem to run at about the same FPS, a little worse on the 3GS. Which is why I came to the conclusion the GPU isn't all that much more powerful than the Pandora's 530 variety. And consider the Pandora version runs at 800 x 480 too! ;)

Two different ports, who knows what variables there are. I doubt the Pandora is fill limited at 800x480 for Quake 3, resolution probably doesn't matter.

Plus, wasn't that a dev board Pandora we saw in those videos? I seem to remember they only have half the RAM of the release versions... therefore, Quake 3 would probably run much faster, depending on whether it is RAM hungry or not!
 
Not all dev boards had 128MB. Some had 256MB.
Not sure on the details of how many.
 
Willrandship said:
Plus, wasn't that a dev board Pandora we saw in those videos? I seem to remember they only have half the RAM of the release versions... therefore, Quake 3 would probably run much faster, depending on whether it is RAM hungry or not!

This is not how most programs work. Either you have enough RAM or you don't. If you're exhausting your RAM then it's possible for a desktop to start crawling since it's swapping, but on a Pandora the program would just crash.

There are times when space can be heavily traded for speed, but usually not on the order where going from 128MB to 256MB will make a difference. Furthermore, programs are unlikely to be set up such that they can scale the performance of key algorithms based on the amount of memory.
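To illustrate with a trivial sketch (my own, not from any real port): on a swapless system like the Pandora an allocation either succeeds or it doesn't, so headroom beyond what the program actually asks for buys nothing.

Code:
/* Trivial illustration: with no swap, a failed allocation ends the run
 * rather than slowing it down, so unused RAM headroom is just unused. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t need = (size_t)96 * 1024 * 1024;  /* hypothetical fixed working set */
    char *p = malloc(need);
    if (p == NULL) {
        fprintf(stderr, "out of memory\n");  /* no swap to crawl along on */
        return 1;
    }
    /* ...runs identically whether the board has 128MB or 256MB fitted... */
    free(p);
    return 0;
}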
 
Butterman said:
He said resolution didn't matter when comparing the 3Gs' resolution to that of the Pandora.

Real selective reading you have there...

I was saying the Pandora version was running it FASTER than the 3GS, despite the bigger resolution on the Pandora. And you completely missed my point, backed with evidence, that resolution DOES make a difference?

Real selective reading you have there...
 
Well durr, that's because the GPU's fill rate isn't a bottleneck at either the Pandora's or the 3GS' resolution, so there's no noticeable performance hit from the jump in resolution.
 
MDave said:
I was saying the Pandora version was running it FASTER than the 3GS, despite the bigger resolution on the Pandora. And you completely missed my point, backed with evidence, that resolution DOES make a difference?

Real selective reading you have there...

I didn't say that resolution didn't matter, I said that Pandora was probably not fillrate limited at 800x480. 1280x720 could be a different story. But your comparison is invalid because the Pandora was running the CPU at 600MHz and the SGX at 110MHz, while the Beagleboard was running the CPU at 500MHz and most likely keeping the SGX at the default of 55MHz.
 
Ah ok, I apologise.

The point I'm trying to get across, though, is that the 3GS isn't necessarily more powerful than the Pandora, whether because of the possible lock-down on the clock speed or for software reasons like the overhead of the iPhone OS running background processes (phone/iPod player functionality or what have you). At least this explains why Quake 3 performs worse compared to the Pandora port. And yes, both ports use pretty much the same code. It was me who pointed out to Pickle that there is source available for the iPhone port of Quake 3, which he could use because of its OpenGL ES support ;)
 
GPU will almost never be a bottleneck. Most emulators won't use it at all except possibly to scale/convert the final screen output. PS1 shouldn't come close to taxing it and by all means N64 shouldn't either, although we've been hearing the contrary (could be too much burden in the pixel shaders in the current implementation). drkIIRaziel claims it's not a limiting factor for Dreamcast and I suppose that remains to be seen.
I think drkIIRaziel is right. The Dreamcast actually has significantly simpler per-fragment operations than the N64. Obviously the DC has much higher polygon throughput, but the SGX is much better at handling that. On the Dreamcast there are only a handful of RGB colour operations used to mix a per-vertex colour with a single texture (assign, mix and mul), while the N64 supports, separately for RGB and alpha, two cycles of (A - B) * C + D, where A, B, C, D can be a variety of different RGBA sources (i.e. 2 textures, a per-vertex colour, 2 constant colours, etc). Both have the option of an alpha test.

The most complex shader nullDCe produces is:

Code:
uniform sampler2D tex;
varying vec2 in_uv;
varying vec4 in_color, in_offcol;
void main() {
    vec4 color  = in_color;
    vec4 texcol = texture2D(tex, in_uv);
    color.rgb   = mix(color.rgb, texcol.rgb, texcol.a);  // blend texture over vertex colour
    color.rgb  += in_offcol.rgb;                         // DC offset colour
    if (color.a < 0.0078125) discard;                    // alpha test (2/256)
    gl_FragColor = color;
}
Whereas with fog and alpha tests enabled the N64 can produce something like this:

Code:
vec4 lTex0 = texture2D(uTex0, uTexCoord0);
vec4 lTex1 = texture2D(uTex1, uTexCoord1);
// combiner cycle 1: (A - B) * C + D
gl_FragColor.rgb = (lTex0.rgb - lTex1.rgb) * vec3(lTex0.a) + vShadeColor.rgb;
gl_FragColor.a   = (lTex0.a - vShadeColor.a) * lTex1.a + uPrimColor.a;
// combiner cycle 2 feeds on cycle 1's result
gl_FragColor.rgb = (lTex0.rgb - gl_FragColor.rgb) * vec3(lTex1.a) + vShadeColor.rgb;
gl_FragColor.a   = (lTex0.a - gl_FragColor.a) * lTex1.a + uPrimColor.a;
gl_FragColor = mix(gl_FragColor, uFogColor, vFactor);  // fog
if (gl_FragColor.a <= uAlphaRef) discard;              // alpha test
The Dreamcast is also much better suited to PowerVR HLE, so assuming we can improve the CPU emulation, fast Dreamcast is a possibility.

Two different ports, who knows what variables there are. I doubt the Pandora is fill limited at 800x480 for Quake 3, resolution probably doesn't matter.
Someone from Nokia profiled ioQuake3 for Maemo and reported only 20% SGX utilisation. Changing from 800x480 down to 320x240 only took it from 25 to 27 fps. We appear to be heavily CPU bottlenecked. I was recently working with someone to incorporate math-neon into q3 for Android; I'll do the same with the Pandora port when I get a chance.
 
I think we should get used to the idea that soon, the Pandora will not have the edge in hardware. Instead, let's focus our efforts on making the Pandora an amazing experience. That's what will set it apart.


Xian Long said:
pandora.


is not.


just for.


emulation.
BING!
 
Adventus: All of that is true, but Dreamcast games run on average at a higher framerate and resolution. I understand that the resolution is moot when rendering both at 800x480, but I see rendering over 320x240 (for all but a few games) as being a bonus when it comes to N64, and rendering at the typical 640x480 or higher as more of a requirement with Dreamcast emulation. I also don't see what PowerVR HLE you're referring to - I don't know of any DC emulation performing HLE, being that it's a much different setup than N64 and much less amenable to it, and it all has to go through OpenGL ES anyway. The structure of CLX2 does mean that you can probably batch a lot more. But you have to worry about order-independent translucency... actually, good luck handling that at all; you probably just have to write it off as an incompatibility. But that does mean Dreamcast games will be less reluctant to use alpha blending, which adds up to higher fill requirements.

Nonetheless, I never expected DC emulation to be fill limited. It is, after all, an older-series PowerVR chip.

The N64 shader you gave also looks like a worst case, which is probably the point, but it's worth noting. I would be shocked if N64 games used multitexturing at all more than very infrequently, and the ones that did were probably shooting for a lower framerate since it was hardly free on N64.

Of course CPU emulation in Dreamcast is the hardest part, but that includes geometry and lighting. drkIIRaziel says that games push upwards of 1 million vertices per second. That's way less than the DC is capable of, but still a pretty hefty number when you consider all of them being transformed and lit using CPU-issued vector commands. Emulating that in NEON will be a certain handful, although at least it takes away from emulating CPU cycles that could have been used on something else.
 
I understand that the resolution is moot when rendering both at 800x480, but I see rendering over 320x240 (for all but a few games) as being a bonus when it comes to N64, and rendering at the typical 640x480 or higher as more of a requirement with Dreamcast emulation.
True.

I don't know of any DC emulation performing HLE, being that it's a much different setup than N64 and much less amenable to it, and it all has to go through OpenGL ES anyway
Yeah that's what I meant, the Dreamcast's GPU maps more easily to OpenGL (with the much higher batching).

I would be shocked if N64 games used multitexturing at all more than very infrequently, and the ones that did were probably shooting for a lower framerate since it was hardly free on N64.
Yep. 15/20fps seems to be used a fair bit in the complex games. It's amazing that it looks so smooth; I guess the fps doesn't vary much.

Of course CPU emulation in Dreamcast is the hardest part, but that includes geometry and lighting
Yeah, forgot about that. There is a projection matrix being applied to the vertices on the GPU, but nothing more.

Emulating that in NEON will be a certain handful, although at least it takes away from emulating CPU cycles that could have been used on something else.
A fair bit has already been NEON-ized (FADD, FMUL, FSUB, FIPR, FTRV) but there are some others that might benefit (FMAC, FSRRA, FDIV, FSQRT, FABS, FNEG); I guess we'll see what kind of a difference it makes.
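For FSRRA in particular (taking it as just a reciprocal square root), the obvious NEON route is the hardware estimate plus one Newton-Raphson step; a rough sketch of mine, not the actual nullDC code:

Code:
#include <arm_neon.h>

/* Sketch: SH4 FSRRA (1/sqrt(x)) via NEON's reciprocal square root
 * estimate, refined by one Newton-Raphson step. */
static inline float fsrra_neon(float x)
{
    float32x2_t v = vdup_n_f32(x);
    float32x2_t e = vrsqrte_f32(v);                   /* ~12-bit estimate */
    e = vmul_f32(e, vrsqrts_f32(vmul_f32(v, e), e));  /* e *= (3 - x*e*e)/2 */
    return vget_lane_f32(e, 0);
}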
 
Adventus said:
A fair bit has already been NEON-ized (FADD, FMUL, FSUB, FIPR, FTRV) but there are some others that might benefit (FMAC, FSRRA, FDIV, FSQRT, FABS, FNEG); I guess we'll see what kind of a difference it makes.

You seem to know way more about this than the rest of us. I take it you've been working with drk and ZeZu directly and have access to the source? Or have things been talked about more openly than I'm currently aware?

It's my expectation that FTRV (geometry transforms) and FIPR (dot products for lighting) are what dominates the floating point usage in typical games. Emulating FIPR with NEON would probably yield the worse emulated cycles to executed cycles ratio of the two, but FTRV would have to be at least twice as bad as well.

Do you know about how it's performing right now? With all this done I wonder how much there is left to squeeze out of it.
 
Ouch, that means you effectively have to do geometry transforms as two matrix multiplies instead of one... but I guess since you can handle the projection on the SGX it's not a huge deal.
At the moment they're doing a full matrix mul on the SGX, but it's not really necessary... It's just:

x' = a*x + b*z;
y' = c*y + d*z;
z' = e*z + f*w;
w' = z;

where a,b,c,d,e,f = const.
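In plain C that whole per-vertex transform is just this (a sketch with illustrative names, the six constants packed into an array):

Code:
/* Sketch of the reduced projection above; names are illustrative. */
typedef struct { float x, y, z, w; } vec4f;

static vec4f project(vec4f v, const float k[6])  /* k = {a,b,c,d,e,f} */
{
    vec4f o;
    o.x = k[0] * v.x + k[1] * v.z;  /* x' = a*x + b*z */
    o.y = k[2] * v.y + k[3] * v.z;  /* y' = c*y + d*z */
    o.z = k[4] * v.z + k[5] * v.w;  /* z' = e*z + f*w */
    o.w = v.z;                      /* w' = z */
    return o;
}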

You seem to know way more about this than the rest of us. I take it you've been working with drk and ZeZu directly and have access to the source? Or have things been talked about more openly than I'm currently aware?
Yeah, I have access to the source. I'm meant to be looking into NEON-izing stuff, but I got sidetracked with work. I've spent the spare moments of the last few weeks just going through the source trying to understand how the hell it works... They use a lot of fancy macro tricks that I was barely aware of. :)

It's my expectation that FTRV (geometry transforms) and FIPR (dot products for lighting) are what dominates the floating point usage in typical games. Emulating FIPR with NEON would probably yield the worse emulated cycles to executed cycles ratio of the two, but FTRV would have to be at least twice as bad as well.

Do you know about how it's performing right now? With all this done I wonder how much there is left to squeeze out of it.
I have no idea how well it performs at the moment; I haven't got around to compiling it. I'll put some time into getting the build system working, but I've still got to actually "acquire" a DC game.

Hmmm yeah, it's a bit of an unknown how much more performance can be gained. Just looking now I can see that the ipr is really not ideal... it's using 8 FLDR + 1 MUL + 3 MACs and it's not being inlined, so you get the 20 cycle argument return stall... so I can potentially make that ~4x faster. I can shave maybe 8 cycles off ftrv, otherwise it's pretty good. They aren't using any alignment specifiers so I may be able to gain something across the board. Then there's NEON-izing the other operations, particularly FSRRA (which is just 1.0 / sqrtf). I can also possibly gain some more by only sending the vertex data when it's actually needed (i.e. not sending UVs when there's no texturing)... but maybe the OGL driver is already smart enough.
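For concreteness, an inlined NEON version of the two heavy ops might look something like this (an intrinsics sketch of mine, not the actual nullDCe code; it assumes the matrix is stored column-major):

Code:
#include <arm_neon.h>

/* Sketch: SH4 FIPR (4-element dot product) as one multiply plus folds. */
static inline float fipr_neon(const float *a, const float *b)
{
    float32x4_t p = vmulq_f32(vld1q_f32(a), vld1q_f32(b));
    float32x2_t s = vadd_f32(vget_low_f32(p), vget_high_f32(p));
    s = vpadd_f32(s, s);          /* horizontal sum of all four lanes */
    return vget_lane_f32(s, 0);
}

/* Sketch: SH4 FTRV (4x4 matrix * vector), matrix column-major, v in/out. */
static inline void ftrv_neon(const float *m, float *v)
{
    float32x4_t r = vmulq_n_f32(vld1q_f32(m + 0), v[0]);
    r = vmlaq_n_f32(r, vld1q_f32(m + 4),  v[1]);
    r = vmlaq_n_f32(r, vld1q_f32(m + 8),  v[2]);
    r = vmlaq_n_f32(r, vld1q_f32(m + 12), v[3]);
    vst1q_f32(v, r);
}

Being inlined, these would also dodge the call/return stall mentioned above.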
 
Adventus said:
Hmmm yeah, it's a bit of an unknown how much more performance can be gained. Just looking now I can see that the ipr is really not ideal... it's using 8 FLDR + 1 MUL + 3 MACs and it's not being inlined, so you get the 20 cycle argument return stall... so I can potentially make that ~4x faster. I can shave maybe 8 cycles off ftrv, otherwise it's pretty good. They aren't using any alignment specifiers so I may be able to gain something across the board. Then there's NEON-izing the other operations, particularly FSRRA (which is just 1.0 / sqrtf). I can also possibly gain some more by only sending the vertex data when it's actually needed (i.e. not sending UVs when there's no texturing)... but maybe the OGL driver is already smart enough.

Register caching for floating point to NEON registers should be a win. At the very least they should be arranged transposed in memory, I would think. On the other hand, having x/y and z/w pairs adjacent allows the VMUL/VMAC/VPADD that I know you've used before for dot products.

Inlining has the added advantage of possibly being able to schedule things up a bit to avoid some dependency stalls. Since the real SH4 had latency periods for these operations as well there should be some opportunities. Having dependency stalls on VMACs has got to be killing you.

I do think all of this still sounds pretty brutal, and that NEON or not these things are going to be taking far more cycles on Cortex-A8 than they did on SH4 :/
 