Pandora Not Powerful Enough?


What do you experts say about Marvell's "modified Cortex-A8" architecture (the Armada 610)? I don't have links and haven't done any research, but I've read that they shortened the A8's pipeline.
 
As Laurent told us before, you can't make a modified Cortex-A8. This is a totally different processor that might resemble the A8 but is built from the ground up.

Marvell has actually had superscalar ARM cores available for a while now, but I don't know of anything that has used them. Earlier I was confused into thinking that the SheevaPlug had one, but it actually used a different model from the same processor line.

The importance of pipeline length can be quite overstated. Long pipelines have gotten a bad rep because of the Pentium 4, but the Pentium 4 was slower than other x86 chips at similar clock speeds for a slew of other reasons as well. Shorter pipelines usually have lower branch-misprediction and other serializing-event delays (but not always), but shortening can also come at the expense of trading pipeline stages for multi-cycle operations, basically undoing some of what pipelining gained when it was originally introduced. Sometimes a longer pipeline can even cause fewer stalls instead of more (such as in the transition from the traditional 3-stage ARM pipeline to the 5-stage one in the ARM9).
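To put rough numbers on that, here's a toy model (all figures made up for illustration, not measured from any of these cores): effective CPI = base CPI + branch frequency x misprediction rate x penalty in cycles.

Code:
#include <stdio.h>

/* Toy model of branch cost: effective CPI = base CPI +
   branch frequency * misprediction rate * penalty (cycles).
   All inputs are illustrative, not vendor numbers. */
int main(void)
{
    double base_cpi    = 1.0;   /* a perfect dual-issue core would be 0.5 */
    double branch_freq = 0.15;  /* roughly 1 in 7 instructions is a branch */
    double mispredict  = 0.08;  /* i.e. 92% prediction accuracy */

    for (int penalty = 5; penalty <= 20; penalty += 5)
        printf("penalty %2d cycles -> effective CPI %.3f\n",
               penalty, base_cpi + branch_freq * mispredict * penalty);
    return 0;
}

Even at a 20-cycle penalty this model only adds about 0.24 to CPI, which is why a longer pipeline isn't automatically a disaster as long as prediction is good; crank the misprediction rate up and the picture changes fast.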

What makes a bigger difference than straight-up length is how the workload is distributed through the pipeline. Cortex-A8 suffers a little from having address generation, shifts, and ALU execution all take place in different stages; if any of these happened in the same stage there would be fewer dependency stalls. Generally speaking, a long front end of fetching, decoding, and scheduling can be okay so long as the instruction flow is well predicted. But some branches will never be well predicted, so shortening the mispredict penalty always helps. Some architectures can even shorten the mispredict penalty if the software can determine the branch direction well in advance: ARM11 could save a couple of cycles that way, and PowerPC was designed especially with this in mind.
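To make the address-generation point concrete, here's a hypothetical micro-benchmark (my illustration, not anything from the A8 docs): in a pointer-chasing loop each load's address is the previous load's result, so an in-order core pays the full load-to-address-generation latency on every iteration, no matter what the rest of the pipeline looks like.

Code:
#include <stddef.h>

struct node { struct node *next; };

/* Serial dependency chain: the address of each load is the value
   the previous load returned, so iterations can't overlap. */
size_t chase(const struct node *n)
{
    size_t hops = 0;
    while (n) {
        n = n->next;   /* load-use dependency on every trip */
        hops++;
    }
    return hops;
}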

Apparently Marvell claims a much higher typical instructions-per-cycle ratio, but I haven't seen the numbers, much less an explanation of how they're achieved. So this is a pretty big unknown to me right now, much like Snapdragon. If someone had some timing information that would be nice, but some CPU manufacturers seem to be loath to release such things. It's hard to imagine that a different dual-issue in-order implementation of the same instruction set (except WMMX2, meh) can do dramatically better than Cortex-A8. Maybe it has a dual-ported icache and dual LSUs or something, not that even that would help tremendously.
 
Ah yes, you are right, it can't be called a Cortex; it's just an ARMv7-compatible chip. Thanks for the explanation :)
 
I don't know what they're trying to pull with that 47M number, but it's clear something is preventing the real number from being more than 24M:

47M Triangles/sec (peak)  <--
24M Drawn Triangles/sec (peak)
600M Pixels/sec (peak)
240M Textured Pixels/sec (peak)

Maybe they do some occlusion on-chip, so a scene and its models can have 47 million triangles, and of these only the visible triangles (about half, i.e. 23.5 million, rounded up to 24 million cos the sales dept said so ;)) are drawn?

Disclaimer: this is just based on knowledge of rendering from DarkBASIC usage; when it comes to C I'm a 2D coder.
 
hobbyman II said:
Maybe they do some occlusion on-chip, so a scene and its models can have 47 million triangles, and of these only the visible triangles (about half, i.e. 23.5 million, rounded up to 24 million cos the sales dept said so ;)) are drawn?

I'd want to defer to someone who knows better, like darkblu, but I personally doubt it. You couldn't tell whether a triangle is occluded without fully deferred rendering. If you only needed its geometry and didn't have to render it, though, you wouldn't have to do a lot of the per-vertex work. But who knows if that was included in their numbers to begin with.

It could refer to clipping, though.. you don't need deferred rendering to determine whether a triangle is within the view volume or not.
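For reference, a per-triangle clip test is cheap and needs no knowledge of any other geometry. A minimal sketch of trivial rejection against the view frustum in clip space (types and names are hypothetical, not from any of these GPUs):

Code:
/* Trivial reject in homogeneous clip space (post-projection, pre-divide).
   A triangle can be culled if all three vertices lie outside the same
   frustum plane; no depth or occlusion information is needed. */
typedef struct { float x, y, z, w; } vec4;

static unsigned outcode(vec4 v)
{
    unsigned c = 0;
    if (v.x < -v.w) c |= 1;
    if (v.x >  v.w) c |= 2;
    if (v.y < -v.w) c |= 4;
    if (v.y >  v.w) c |= 8;
    if (v.z < -v.w) c |= 16;
    if (v.z >  v.w) c |= 32;
    return c;
}

/* Nonzero means all three vertices share an outside plane: cull. */
int trivially_rejected(vec4 a, vec4 b, vec4 c)
{
    return (outcode(a) & outcode(b) & outcode(c)) != 0;
}

A triangle rejected this way is thrown away before any per-pixel work, which is the kind of thing that could plausibly separate a "processed" triangle rate from a "drawn" one.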
 
Exophase said:
It could refer to clipping, though.. you don't need deferred rendering to determine whether a triangle is within the view volume or not.

Could it refer to front-to-back rendering as well?

I.e., perform the Z-check but not the texture sample?
 
andys said:
Could it refer to front-to-back rendering as well?

I.e., perform the Z-check but not the texture sample?

I'm thinking not, because that's pixel-level and thus a matter of fillrate, not vertex rate.
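To make the distinction concrete, here's a sketch of a depth-tested pixel write (hypothetical types and buffers, not any real GPU's pipeline): the Z-check is paid per covered pixel either way, so front-to-back ordering saves texture fetches (fillrate/bandwidth), it doesn't raise the triangle rate.

Code:
#include <stddef.h>

typedef unsigned short depth_t;
typedef unsigned int   color_t;

/* Depth-tested write with a "less is nearer" convention. Every covered
   pixel pays the Z-check; only pixels that pass pay the texture fetch. */
void shade_pixel(depth_t *zbuf, color_t *cbuf, size_t idx,
                 depth_t z, color_t (*sample_texture)(size_t))
{
    if (z >= zbuf[idx])
        return;                       /* occluded: no texture sample */
    zbuf[idx] = z;
    cbuf[idx] = sample_texture(idx);  /* visible: fetch and write */
}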
 
iprice said:
The most worrying aspect here for me is that most users seem to want the Pandora primarily for emulation. That seems such a waste and a shame to me; emulation should be an added bonus. I really hope we get some decent homebrew and commercial apps too. Don't get me wrong, I also want to play emulated games, but I want so much more than the ability to play old games on a new machine when I can already play them on the original machines at full speed.

Everyone needs to raise their expectations, methinks :p

Heh, not all of us. Emulation is a nice sideline for me, but I'm primarily getting it for the keyboard-plus-clamshell combination. I had a Nokia N800, but the screen cracked on it. I've also got a Sony e-reader with a cracked screen. I'm going to be reading fanfiction on it, and writing my fanfiction on it (I really look forward to waking up in the middle of the night with an idea for one of my stories and being able to reach over, snap it open, alt-f-whatever to get to the console, and type it in).

I like to swap into a game when I'm writing my stories, or coding, and then pop back, so having gaming controls just makes that an extra plus.

Howard, the Grum
 
Last edited by a moderator:
My main reason for getting a Pandora is as a super-portable mini-laptop for doing some on-the-road work. I'm also looking forward to emulation of 8/16-bit systems for downtime fun, rather than of current devices. The Pandora looks like it will emulate these older consoles/computers beautifully. :)

As for newer/current platforms, I own those and I'm happy to play their games on the 'real thing', as those are usually longer, chair-bound gaming sessions. ;)

I guess if money is tight and you really want a "do everything" portable emulation machine, you'd be better off with a good-spec web-book (or small laptop) and a good USB joystick/controller?
 
Here's an interesting read from the XDA-Developers forums: Tegra vs. Snapdragon.
The conclusion was that, theoretically, the Tegra has better graphics and battery life, while the Snapdragon has a lot more CPU power. The respondent said it was a trade-off, but he'd rather have the extra CPU power, as everything in general would speed up.

http://forum.xda-developers.com/showthread.php?t=567401

Benjiro wrote:

Originally Posted by joplayer:
Tegra seems to have 8 cores of execution for great graphics but not a big frequency (600-800 MHz). Snapdragon has the GHz and is supposed to reach 1.3 GHz in 2010.
Tegra, just like the Snapdragon, is an SoC. If we use the same logic that Nvidia used, then the Snapdragon is also a multi-core SoC (CPU, GPU, DSP, ...). It's just marketing to make people think they're getting an 8-CPU system.

Like Wishmaster89 pointed out, there is a major difference between the CPUs used in the two systems.

The 600 MHz ARM11 (ARMv6) in the Tegra is capable of executing about a third of what the Snapdragon's 1 GHz ARMv7 CPU can do.

The GPU, on the other hand, is more powerful in the Tegra. There's a little list being used to compare the overall (theoretical) strengths of each platform's GPU:

Nintendo DS: 120,000 triangles/s, 30 M pixels/s
PowerVR MBX-Lite (iPhone 3G): 1 M triangles/s, 100 M pixels/s
Samsung S3C6410 (Omnia II): 4 M triangles/s, 125.6 M pixels/s
ATI Imageon (Qualcomm MSM72xx): 4 M triangles/s, 133 M pixels/s
PowerVR SGX 530 (Palm Pre): 14 M triangles/s, ___ M pixels/s
ATI Imageon Z430 (Toshiba TG01): 22 M triangles/s, 133 M pixels/s
PowerVR SGX 535 (iPhone 3GS): 28 M triangles/s, 400 M pixels/s
Sony PSP: 33 M triangles/s, 664 M pixels/s

PowerVR SGX 540 (TI OMAP4): 35 M triangles/s, 1000 M pixels/s
Nvidia Tegra APX2500 (Zune HD): 40 M triangles/s, 600 M pixels/s
ATI Imageon _ (Qualcomm QSD8672): 80 M triangles/s, >500 M pixels/s

So the Tegra's GPU is about twice as powerful as the Snapdragon's ATI Z430 (looking at triangles). The reason why I use the term "theoretically" is that a lot of factors can make or break a GPU (many more than on a CPU): bad drivers, bandwidth limitations, too little memory, a bad mix of texture units and vertex units, etc.

The problem with Nvidia is that they have always had the habit of exaggerating things (a lesson learned more than a few times in the past).

Another problem is: are the GPUs actually being used on PDAs/smartphones? That's a lesson I learned in the past from the x50v, with its own dedicated and (at the time) powerful 2700G (800,000 triangles/s back then). The reality is that most applications rely mostly on the CPU.

At best, if you have dedicated games written for the PDA/smartphone market, very few will tap into all the power that the Tegra has to offer.

Even the PSX emulators run great on the Snapdragon (full-speed 50/60 fps PAL/NTSC games). Forget about running a lot of PSX games on an ARM11 without tweaking (and frame skipping), because emulation relies mostly on brute-force CPU power (and this is where the Snapdragon shines).

So, what is there besides games? Video playback? Sure... the Tegra can supposedly do 1080p, while the TI OMAP and Snapdragon only do 720p. But from what I have read, it's mostly the DSP that does the work. The Snapdragon's DSP runs at 600 MHz; I can't find any information about the Tegra's DSP. Does it even have one? Anybody with more info on how they handle things?

When it comes down to PDAs/smartphones, take it from me: the most important thing is first the CPU, then the amount of memory (and memory speed), then the GPU.

Let's just say I'd like to see a fair comparison between both systems, to see their real power (and not some fake Nvidia PR that a lot of people still fall for).

Like I said, I don't exactly trust Nvidia's numbers when their PR posts crap like this:

[image missing: Nvidia PR chart of claimed framerates, footnoted "* NVIDIA estimates"]
Those numbers are what you can call a pure lie. The OpenPandora project (which uses a TI OMAP3630 @ 600 MHz, with a slower GPU) is able to run Quake 3 at 35+ fps, yet Nvidia claims 5 fps for the Snapdragon, which is actually more powerful than the OMAP3630... I love that little [*] next to the text, with the small print below: "* NVIDIA estimates". In other words, how much trust can somebody place in the specs of a company that pulls stunts like that?

Also, the Snapdragon is used in the following smartphones that I know of: Toshiba TG01, Asus F1 (S200), HTC HD2 (Leo), and a few more that are on the way. Where is the Tegra? The MS Zune... that's it.

You'd think that HTC, Toshiba, and Asus would all have looked at the different available SoC providers (TI, Qualcomm, Samsung, Nvidia, etc.). Yet who do they pick for their new top-of-the-line products...

I hope this helps...
 
Laurent said:
This talks about the old Tegra...

Yes, he thinks we care about the Tegra 1 and/or doesn't really understand the difference between it and Tegra 2. He's the only one who has been mentioning it all thread.
 