Pandora Not Powerful Enough?


Powerful enough for what? Powerful enough to use up the battery in three hours? Powerful enough to get too hot to hold? It's more than powerful enough to run some swell 3D and 2D games, along with X and a few desktop applications, for a long, long time, so yeah, it's more than powerful enough. If it's not open (and by open I don't mean broken into), it doesn't matter if it can rip a hole in the fabric of time; it's not competition.
 
I am not going to argue any longer.
I just want to ask which is the more powerful chip (graphics- and processing-wise):
this Tegra or Tegra 2 I've just read about, or the SGX543?
I've heard the specs they provide have not been confirmed by any other person/company.
So if you are an expert or just going by intuition, which do you think is better?


Love, Peace and Chicken Grease :D
 
Adventus said:
When you consider the handheld homebrew platforms that went before (GP32, GP2X, PSP), there's nothing as interesting as a Cortex-A8 + NEON + SGX + DSP. Luckily it looks like this combination (Cortex + NEON and SGX, anyway) will remain pretty common for a while, so anything programmed for the Pandora will probably scale up to newer platforms rather easily.
I'm sorry, but it won't be that easy:

1. Tegra2 doesn't have an SGX (but it should support OpenGL ES 2.0)
2. The OMAP4 DSP is rumoured not to be the same one as in the OMAP3530 (I've heard people in the know say it's half of a C64x+)
3. NEON is optional on Cortex-A9, so don't expect every SoC to have it.

For the rest of your post, I agree :)
 
All the research I've seen on TBDR suggests the theoretical numbers it can push are very close to accurate. To back this up: if you look at the Kyro II and compare its theoretical numbers against the actual performance it gets, they are very close.

@Laurent

Do you know of any OMAP 4xxx-series part that will use a C67x DSP? Being able to use floating point would be the bee's knees.

@Kangal

I can't find any numbers for Tegra, and the SGX543 is hard to pin down. Discounting all of the features and taking into account only the efficiencies of TBDR, the SGX543 in raw power is probably somewhere around the high-end GeForce 4 or 5 series, but of course with a feature set supporting DX10 (11?) / OpenGL equivalents.
 
"The OMAP 4 processor balances processing across four main engines: a programmable multimedia engine based on TI's C64x DSP and power-efficient, multi-format hardware accelerators; general-purpose processing based on the dual-core ARM(r) CortexTM-A9 MPCoreTM supporting symmetric multiprocessing (SMP) and capable of speeds of more than 1GHz per core; a high-performance programmable graphics engine; and an Image Signal Processor (ISP) for unparalleled video and imaging performance."

http://focus.ti.com/pr/docs/preldetail.tsp?sectionId=594&prelId=sc09021

I don't know what "half of a C64x+" means exactly. The key issue is going to be whether it supports the C64x+-only instructions or not. Otherwise the compiler/programmer will have to fall back to C64x or C6x instructions.
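To make that fallback concrete, here's a minimal sketch (my example, nothing from TI): a dot product that only uses a family-specific intrinsic when the target supports it. The _TMS320C6400 macro, the _dotp2() intrinsic and the c6x.h header follow TI's cl6x conventions as I remember them, so treat all three as assumptions to check against the compiler manual.

```c
/* Hedged sketch of keeping a DSP kernel portable across the C6x family.
   _TMS320C6400, _dotp2() and c6x.h are TI cl6x conventions as recalled
   here -- verify against your toolchain before relying on them. */
#if defined(_TMS320C6400)       /* defined for C64x (and C64x+) targets */
#include <c6x.h>                /* declares the intrinsics              */
#endif

int dot16(const short *a, const short *b, int n)
{
    int i, acc = 0;
#if defined(_TMS320C6400)
    /* Fast path: DOTP2 does two 16x16 multiply-accumulates at once.
       Assumes the arrays are 4-byte aligned. */
    const int *pa = (const int *)a;
    const int *pb = (const int *)b;
    for (i = 0; i < n / 2; i++)
        acc += _dotp2(pa[i], pb[i]);
    for (i = n & ~1; i < n; i++)        /* odd trailing element, if any */
        acc += a[i] * b[i];
#else
    /* Portable C fallback: any C6x compiler can software-pipeline this. */
    for (i = 0; i < n; i++)
        acc += a[i] * b[i];
#endif
    return acc;
}
```

If the OMAP4 part really did drop instructions, every such fast path would have to be audited, which is exactly the portability problem being described.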
 
As I said, it was told to me by someone who is supposed to know, given where he works, but he might be wrong.
Half a C64x+ would be easy from a design point of view, given how the C6x is architected. But from a portability point of view it would be hell.
 
Laurent said:
As I said, it was told to me by someone who is supposed to know, given where he works, but he might be wrong.
Half a C64x+ would be easy from a design point of view, given how the C6x is architected. But from a portability point of view it would be hell.

Explain what you mean here, exactly. You need to have the 8 functional units to be a C6x DSP, and I don't think TI would deviate from this, especially not while calling it a C64x or really C6x anything. I know the C64x does have more orthogonality and additional instructions, not to mention twice the registers, over, say, the C62x, but the basic core structure is more or less the same. If they wanted to do something C62x-like they'd probably just use a C62x and specify that. Changing the basic C6x structure would necessitate a heavily modified compiler, among other things.

Maybe it just means that it's a C64x and not a C64x+? That's what TI's statement suggests to me. It could also mean it has half the cache, half the clock speed, and so on.
 
First of all, I didn't mention C62x vs C64x/C64x+.

If you look at the block diagrams of all the C6x variants, you'll see that many things are split in two, so, as I said, from a design point of view it's no big deal. The advantages would be less silicon area, less power, and potentially a higher clock speed.

From another point of view, IVA3 contains IP blocks that are more powerful than the variants found on OMAP3, so DSP performance is perhaps less needed.

Now, my personal point of view is that it would be a crazy decision, for the software reasons you explained: even though the tools would probably be the easiest part to change, all assembly code would have to be rewritten and re-tuned, and that represents a much larger part of the software given the chip's nature (even though the C6x C compiler is good, hand-written assembly is still best).

Again this was a rumour, nothing more. And I hope it's not true.

OTOH, I can guarantee you'll see some A9s without NEON :)
 
Laurent said:
First of all, I didn't mention C62x vs C64x/C64x+.

If you look at the block diagrams of all the C6x variants, you'll see that many things are split in two, so, as I said, from a design point of view it's no big deal. The advantages would be less silicon area, less power, and potentially a higher clock speed.

From another point of view, IVA3 contains IP blocks that are more powerful than the variants found on OMAP3, so DSP performance is perhaps less needed.

Now, my personal point of view is that it would be a crazy decision, for the software reasons you explained: even though the tools would probably be the easiest part to change, all assembly code would have to be rewritten and re-tuned, and that represents a much larger part of the software given the chip's nature (even though the C6x C compiler is good, hand-written assembly is still best).

I just doubt that, after making dozens if not hundreds of C6x implementations, TI is going to make a completely incompatible DSP and still call it C6x. Are you sure that this is what your contact meant when he said half a C64x+?
 
http://www.beagleboard.org/irclogs/index.php?date=2009-08-21#T12:43:13
In my logs I also found this:
Oct 03 13:58:18 <guy1> the 4440 dsp is also slower as the 3530 one, since it has only half of the execution units
Oct 03 14:45:53 <guy2> I've had conflicting information on the dsp in omap4
Oct 03 14:46:03 <guy2> some say it's a cut-down version, some say it's the real thing
Oct 03 14:46:21 <guy2> or maybe they were talking about a different chip and weren't too specific
One of these 2 guys works for TI, and the other one knows a lot. Both could be wrong, of course (having worked at TI, I can tell you that getting accurate information is not always easy, as is the case in most big companies).
 
1. Tegra2 doesn't have an SGX (but it should support OpenGL ES 2.0)
From where I sit, the SGX appears to be dominating, with its proven track record and TI/Samsung/Apple apparently behind it (as well as the rumoured PSP2). If the Tegra2 is in the new DS, then it's unlikely to be an open platform... at least initially.

2. The OMAP4 DSP is rumoured not to be the same one as in the OMAP3530 (I've heard people in the know say it's half of a C64x+)
Yeah, that's why I didn't include the DSP in my statement.

3. NEON is optional on Cortex-A9, so don't expect every SoC to have it.
That's true... but they will have a decent FPU, so at least floating-point C implementations will perform near-optimally.
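Since NEON is optional on A9-based SoCs, portable code ends up probing for it at runtime and falling back to plain C/VFP. A minimal sketch of how that probe can look on ARM Linux, assuming glibc's getauxval() (added in glibc 2.16; older systems would parse /proc/self/auxv or /proc/cpuinfo instead):

```c
/* Minimal NEON runtime probe for ARM Linux -- a sketch, not a complete
   dispatcher.  Assumes glibc 2.16+ for getauxval(). */
#include <stdio.h>
#include <sys/auxv.h>   /* getauxval, AT_HWCAP */
#include <asm/hwcap.h>  /* HWCAP_NEON (ARM-specific) */

static int have_neon(void)
{
    return (getauxval(AT_HWCAP) & HWCAP_NEON) != 0;
}

int main(void)
{
    if (have_neon())
        puts("NEON present: use the NEON code path");
    else
        puts("no NEON: fall back to plain C / VFP");
    return 0;
}
```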

Can someone explain to me why the Cortex-A9 is such an improvement over the A8? How much is really gained by being out-of-order? Multi-core doesn't really sell it for me; making efficient independent threads can be hard.
 
At CES the SGX545 was announced. http://www.engadget.com/2010/01/08/imagination-technologies-announces-new-mobile-gpu-casually-glan/ - Check out the source from AppleInsider for the specs.
 
Adventus said:
From where I sit, the SGX appears to be dominating, with its proven track record and TI/Samsung/Apple apparently behind it (as well as the rumoured PSP2). If the Tegra2 is in the new DS, then it's unlikely to be an open platform... at least initially.
Only time will tell which platforms Tegra2 ends up in. You can already buy a T2 dev kit for $400.

Can someone explain to me why the Cortex-A9 is such an improvement over the A8? How much is really gained by being out-of-order?
The people who know don't have the right to quote numbers :)
Let's just say the claimed DMIPS gain over Cortex-A8 is not unusual across benchmarks (except for FP-intensive code, where A9 is of course much faster).

Multi-core doesn't really sell it for me; making efficient independent threads can be hard.
The problem is that there's no choice: on the desktop, single-thread speed seems to have reached a plateau, and for embedded targets, chasing single-thread speed comes at the price of higher power consumption.
Programmers will have to learn to program multicore efficiently (and that frightens me too).
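As a toy illustration of what that means in practice (my sketch, nothing Pandora-specific), the basic multicore pattern is: cut the data into independent slices, run them on separate threads, join, combine.

```c
/* pthreads sketch: summing an array on two cores.  The workload is
   deliberately trivial; the point is the split/join structure.
   Build with: cc -std=c99 -pthread sum2.c */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static float data[N];

struct slice { const float *p; int n; float sum; };

static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    float acc = 0.0f;
    for (int i = 0; i < s->n; i++)
        acc += s->p[i];
    s->sum = acc;
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1.0f;

    struct slice lo = { data,         N / 2,     0.0f };
    struct slice hi = { data + N / 2, N - N / 2, 0.0f };
    pthread_t t;

    pthread_create(&t, NULL, sum_slice, &hi); /* second half, other core */
    sum_slice(&lo);                           /* first half, this thread */
    pthread_join(t, NULL);

    printf("total = %.0f\n", lo.sum + hi.sum);
    return 0;
}
```

The hard part Laurent alludes to is that real workloads rarely split this cleanly; shared state and synchronisation are where the pain lives.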
 
Cortex-A9 also has a shorter pipeline, so branch mispredicts cost less. Unfortunately, it makes some sacrifices to accomplish this. For instance, performing a shift with an ALU operation always takes 2+ cycles now, as do bit extractions. There are also some cases where load-use latency can be two cycles instead of one: basically anything that isn't an arithmetic (not logical, just arithmetic) instruction, or where the register is used in a shift. On the flip side, "AGU cycles" is listed as 1, so I take it that means one less potential cycle of penalty than on A8.

ARM possibly determined that the second cycle of these shifts was getting paired with otherwise-free slots often enough to be worth the loss, rather than spending a pipeline stage on it. That pairing happens more readily with out-of-order execution.
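To make the shift point concrete, here is the classic folded-shift idiom (my illustration, not something from the TRM):

```c
/* On ARM, expressions like these typically compile to a single
   instruction with the shift folded into the ALU operand, e.g.
       ADD r0, r1, r2, LSL #2
   On earlier cores the folded shift was effectively free; on Cortex-A9
   the combined form costs 2+ cycles, while a plain ADD stays
   single-cycle -- the trade-off described above. */
unsigned scaled_add(unsigned a, unsigned b)
{
    return a + (b << 2);        /* candidate for ADD ..., LSL #2 */
}

unsigned index_word(const unsigned *base, unsigned i)
{
    return base[i];             /* address = base + (i << 2)     */
}
```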

Cortex-A9 also brings back strengths that ARM11 had but Cortex-A8 lacked, a lot of which can be attributed to the out-of-order design (the ARM11 itself was sort of out of order). I'm referring to folded (zero-cost) branches, 64-bit loads/stores that can be scheduled to take 1 cycle if they're aligned, non-blocking memory misses, and hit-under-miss. The latter hopefully means L1 data preloads are supported and don't cause contention with the load/store unit. The BTB is also twice as big (and is called the BTAC now, for some reason o_o)

Unfortunately, I can't find any information on the actual branch mispredict penalty or L2 cache latency for A9 at this point. It's not in the TRM (yet).

For those who don't know, the DMIPS figure Laurent is talking about is 2.5/MHz instead of 2/MHz. Remember, DMIPS aren't MIPS. In Cortex-A8's case, ARM did give a figure of 0.9 MIPS/MHz over several real-world programs. The theoretical maximum is 2 for both processors, but there will inevitably be a lot of stalls and a few instructions that naturally take more than 1 cycle. Cortex-A9 just does a better job of finding things to fill those stalls with.
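To put those per-MHz figures in absolute terms (straightforward arithmetic on the numbers above, using 600 MHz only because it's a familiar clock): 2.0 DMIPS/MHz works out to 1200 DMIPS for A8 and 2.5 DMIPS/MHz to 1500 for A9, a 25% gain at the same clock, while A8's 0.9 real-world MIPS/MHz would be roughly 540 million instructions actually retired per second.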
 
I didn't know the TRM was available...

L2 cache latency will vary depending on the SoC, given that the L2 is external to the core.

PLDs don't block slots in the LSU (section 6.5.1).
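For anyone following along: PLD is ARM's cache-preload hint, and from C it's typically emitted via GCC's __builtin_prefetch (a real GCC builtin; whether it lowers to an actual PLD depends on target and flags). A minimal sketch, with an illustrative rather than tuned prefetch distance:

```c
/* Software-prefetch sketch.  __builtin_prefetch() is a GCC builtin that
   generally lowers to PLD on ARM targets.  Prefetching 16 elements
   ahead is an illustrative guess, not a tuned value for any core. */
float sum_with_prefetch(const float *p, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; i++) {
        __builtin_prefetch(&p[i + 16]); /* hint only: no fault if past end */
        acc += p[i];
    }
    return acc;
}
```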
 
Kangal said:
Hi all,

Just to begin, I am not a dev, just a newbie Pandora fan, so don't BURN me, but inform me nicely ;)
That being said, I love the concept of the Pandora!!
But after a little research on forums and whatnot from actual devs, I was really disappointed.
It seems the Pandora is not powerful enough (check my wording). It is the most powerful handheld (actually it isn't; it's a very powerful pocket computer/PDA).
BUT the Pandora was designed to emulate other consoles, and as most will know, an emulator reproduces the processes of the original console on new hardware. This is never efficient, so more powerful hardware is necessary to compensate.


WHY do I say the Pandora is not fast enough? Have a look at the graphics power of other consoles.
I was told polygons/sec (PPS) is the important measure of what to expect from a console.
E.g. polygons/sec is like the horsepower of a car, whereas processor speed (MHz) is like its engine capacity
(e.g. a 3.4L Pontiac can produce 190 hp where a 2.0L VW can make 240 hp) --> not intuitive, but reality.

Here are the max polygons/sec under gameplay conditions (textures, curvature, lighting etc.) for each console:

PS3 - irrelevant
X-360 - irrelevant
Wii - not published

Xbox - 100M (million) reported --> but this is exaggerated and believed to be about 30M (!!!!)
Gamecube - 12M reported --> tested to be roughly 20M (was understated: 12M is during normal play, 20M is the highest possible under game conditions)
PS2 - 66M reported --> tested to be roughly 18M (but first-gen games are below 8M due to developers not taking full advantage)
PSP - 33M reported --> tested to be roughly 10M
Dreamcast - 7M reported --> tested to be roughly 9M (was understated: 7M is during normal play, 9M is the highest possible under game conditions)
PS1 - 360K
Nintendo 64 - 150K


I believe the Pandora will be sort of in direct competition with the PSP (the 2000 series, because of hacks/homebrew).
And the only way I believe the wider community will fork out money to buy this powerful system is if it can live up to its expectations.
I believe it would become more popular and better supported if it managed to emulate the PSP at a very decent rate, showing the public: "look, you don't need to risk bricking your PSP; buy a Pandora, it plays powerful games like the PSP, emulates earlier consoles, and is a complete PC".

And for this to be possible the Pandora must have a pretty high PPS.
I was thinking a graphics potential (max PPS under gameplay conditions) of 19M.
How I deduced this:
Handheld performance needed = emulated console performance / emulator efficiency
= console performance (PPS) x emulation speed (%) / emulator efficiency (%)
Example = 10M (PSP's power) x 0.85 (games running at 85% of normal rate) / 0.45 (a very efficient emulator)
= 10 x 0.85 / 0.45 ≈ 18.9M polygons per sec

19M PPS!!! That is more powerful than the PS2 and comparable to the Gamecube. So a PS2/Gamecube emulator would never be playable even on such a powerful device, but a PSP one would.


Note that the stats of the Pandora's graphics capabilities are never fully given ("a few million polygons per sec").
That statement implies something between 5-10M. This is half of what's needed.

Also note that making an emulator is very difficult, especially for something with the graphical features of the PSP or above.
If perfect code is written it will increase efficiency and directly raise the gameplay threshold of the device's power.
This is near impossible.

If the Pandora's hardware reaches about 13.5M PPS, and a good emulator is written, Dreamcast could be played.
But I reckon Dreamcast is dead, and the number of people who would buy a Pandora for Dreamcast is minuscule compared to the number who expect a PSP emulator.
And I've seen a demo of Dreamcast on Pandora (I don't have the link anymore) and it's too slow/unplayable.
So I think this supports my prediction that the Pandora's PPS is 7M - 11M.
Don't be dissuaded: 11M for a handheld is very high, on par with the PSP, but that's because Sony has a head start (they have pros who can push the little processors to their limits, whereas the Pandora is makeshift and could easily surpass it if given guidance/pro dev teams).

By all this I mean an N64 emulator on the PSP will be poor/mediocre but way better on the Pandora. But the PSP can play PSP games, an obvious advantage as more titles continue to be released.
Sorry for the comparison to the PSP; I don't want to start a PSP vs Pandora thread, just sharing what I researched.

EDIT:
I just wish the Pandora devs had actually made the emulators for Linux first, improved on them, written the drivers... and then selected hardware that could run them.
That way they would have finished the hard stuff, gotten CE approval, gotten units made, and advertised their accomplishments, then released polished units with beautiful themed cases and emulators ready. ---> many more supporters, as people would wait less time for units; they could even charge more for the emulator work.

If I made a mistake anywhere, please TELL me, don't scream at me. I am genuinely interested in the Pandora; I just wanted to know its limits clearly.
Again, I'm not a dev, just a graduate. Thanks to all who reply with technical info or development expertise. Happy o'ten!

For some reason you think this was intended to run PS2 games. It has been stated that it may even have trouble emulating the N64, but PSX and PSP are going smoothly (as is N64 now, I believe). Running a system that powerful would cost about twice as much as this one will, and no one would buy it. Is there even a reasonably priced mini CPU and graphics chip (for handhelds) capable of emulating the PS2 yet? I think not. Heck, my PC has trouble emulating the PS2, and it's about 4 times the Pandora's power.

Also, this is a portable gaming console. Its main competition, which really can't be called competition because of the gap in capability, is going to be the Wiz, the PSP and the DS.

One other thing: what makes you think PPS is determined at all by CPU power? That's not true at all. It's all in the GPU, and the figures can be found very easily, as manufacturers gladly share that information.

Plus, since the PSP has a very similar architecture (like saying the GBA has a similar processor to the DS; both ARM), emulation will reach full speed fairly easily. Gamecube and PS2 emulators were never planned, and they are unrealistic for any handheld with a dissimilar architecture.
 
Laurent said:
I didn't know the TRM was available...

Me neither, until one day I saw that it is. ;D About time, given that you can buy implementations now.

Laurent said:
L2 cache latency will vary depending on the SoC, given that the L2 is external to the core.

Yeah, that makes sense, since it's shared/multi-core now. But I'm still interested in what the best case is, similar to how Cortex-A8 gave a best case for L2 misses. Since it's external, are we looking at a guaranteed higher miss time compared to A8?

Laurent said:
PLDs don't block slots in the LSU (section 6.5.1).

Yes, I read that part. Sorry, what I originally wrote didn't have the "hopefully"; I added it because I realized that the TRM didn't actually say that PLD goes through the L1 cache. Since the L2 cache is external it really has to apply to L1, although it seemed highly likely that something with a non-blocking L1 would allow it anyway.
 
Kangal said:
For some reason you think this was intended to run PS2 games. It has been stated that it may even have trouble emulating the N64, but PSX and PSP are going smoothly (as is N64 now, I believe). Running a system that powerful would cost about twice as much as this one will, and no one would buy it. Is there even a reasonably priced mini CPU and graphics chip (for handhelds) capable of emulating the PS2 yet? I think not. Heck, my PC has trouble emulating the PS2, and it's about 4 times the Pandora's power.

Also, this is a portable gaming console. Its main competition, which really can't be called competition because of the gap in capability, is going to be the Wiz, the PSP and the DS.

One other thing: what makes you think PPS is determined at all by CPU power? That's not true at all. It's all in the GPU, and the figures can be found very easily, as manufacturers gladly share that information.

Plus, since the PSP has a very similar architecture (like saying the GBA has a similar processor to the DS; both ARM), emulation will reach full speed fairly easily. Gamecube and PS2 emulators were never planned, and they are unrealistic for any handheld with a dissimilar architecture.

What are you getting at? I never based the PPS figures on the CPU, but on actual performed tests (which test the combined power of CPU+GPU). You can tell this by looking at the Gamecube info I put up: 20M PPS instead of Nintendo's claimed 12M PPS. Why? 12M PPS is during gameplay, but the max PPS under "gaming conditions" (lighting, texture, curvature etc.) is 20M PPS, as tested by various sources. That's what I compared.

In that post I was just saying: I was an OP newbie and the Pandora looked really awesome, beating the PSP in graphics power, but I realised that while it can beat the PSP, it's not powerful enough to emulate PSP games at a decent rate. So, like MWeston (from Burn Notice?) said, the Pandora wasn't designed to emulate the PSP, but the spec sheet makes it seem possible to the average Joe.

It would be awesome if it could emulate the PS2, but we don't have that tech yet (that's nearly next-gen). But we do have high-end devices with the muscle to emulate the PSP, and in time, at the rate hardware evolves, even the PS2. For example, this Tegra or the SGX543 seem promising. Just the fact that the OP states "the most powerful handheld ever" gave me this illusion, and after more detailed specs it seems blurred. I mean, what is the use of power if it has no use?

By this I mean it's very likely that games utilizing the Pandora's graphics capabilities will not be made in sufficient succession for the Pandora community.
For example, even if the Pandora could play UT2003 at a decent rate, the port would take too long to make, so long that people may even stop playing with their Pandoras by the time it is completed. However, if Nintendo or Sony tried, it would take much less time. And I think people are now used to that rate of growth in their device's library. There's a new "best game" every 6 months; look out the window and you'll see Assassin's Creed 2, and what happened to the COD:MW2 poster?
 
Tegra 2 won't be able to sufficiently emulate the PS2. Nothing with an SGX543 will be able to sufficiently emulate the PS2. The ability to render a lot of polygons per second is not enough to emulate the PS2. On modern platforms, the ability to render polygons is almost entirely a function of the GPU, not the CPU.

Tegra 1 can't render 40 million polygons per second under any circumstances. This is what nVidia claims:

"47M Triangles /sec (peak)
24M Drawn Triangles /sec (peak)
600M Pixels /sec (peak)
240M Textured Pixels /sec (peak)"

I don't know what they're trying to pull with that 47M number, but it's clear something is preventing the real number from being more than 24M. What's more important here is that, for an immediate-mode renderer, the textured fillrate is too low. Untextured fillrate is pretty much useless. It's probably going to be fillrate-limited under a lot of circumstances, meaning games will never see that 24M polygon rate.
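A quick back-of-envelope on nVidia's own peak figures shows why (assuming a 60 fps target and an 800x480 screen purely for scale): 240M textured pixels/sec is 4M textured pixels per frame, only about 10x the 384K pixels on screen, so roughly a 10x overdraw budget; and 24M triangles/sec is 400K triangles per frame, a rate you'd only ever see if triangles averaged about 10 textured pixels each within that budget.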

The 40M number you've been mentioning probably refers to Tegra 2. But you also keep saying that real-world benchmark results are what matter, not vendor marketing numbers. So far no one has benchmarked Tegra 2 (or, quite likely, Tegra 1 for that matter), so the numbers mean nothing.

I know you think it's natural to look at polygons-per-second numbers for one device and another, then use them to conclude whether one can emulate the other, and that if it can't, it's for subtle reasons that wouldn't be apparent to anyone. But the reality is that this is totally off base. Only someone with a very limited understanding of what emulation involves and how computer hardware works would think this. Having a limited understanding is not a problem, but thinking you have a much better understanding than you do can be. I also think you should stop instantly believing a lot of what you read on other forums.
 