Next Generation Pandora


Pandora IS the next generation.
Too bad the delays have really hamstrung it.

The "next gen" Pandora will probably add ergonomic refinements to the already excellent control selection.
I could easily see the whole "keys" portion of the lower part getting the biggest overhaul.
Maybe go "DS" and turn the lower part into a screen that defaults to a software-configurable keyboard (I know, tactile feedback sucks on touchscreens, and I'm sure this has been hashed over plenty of times).

Internals-wise, the biggest overhaul (besides an ARM core update) would be to dump the PowerVR part for something more open-source-friendly (which should be available in a few years). If the screen res doesn't increase, real-time raytracing could become more mainstream. Multicore would really help here.
 
bnolsen said:
If the screen res doesn't increase, real-time raytracing could become more mainstream. Multicore would really help here.
impatient, aren't we? : )
while being a nicer 'rasterization abstraction level', ray-tracing will never be cheaper than scan-conversion, and, in this regard, the latter will be preferred in the power-sensitive spectrum for a healthy number of years to come. just IMHO. of course.

as for open-source friendliness, i think our best bet is GPGPU-sort-of designs with as much direct programmability exposed as possible. as a matter of fact, even as i type this, there's already a soon-to-be-released handheld that takes steps in that direction ; )
 
Esn said:
but Blu-ray is currently a cheaper storage mechanism than others.

Blu-ray just doesn't make a sensible data storage mechanism. Trouble is that for the cost of a Blu-ray burner and 10 disks, which would give you 250GB of storage, you could buy a 1TB USB hard disk.

And of course, USB hard disks are recognised by almost EVERY computer around now, while only a very small percentage can read Blu-ray disks.

When CD burners arrived, the average hard drive was around 60MB, meaning you could back up your entire data collection ten times over on a single disk.

When DVD burners arrived, the average hard drive was around 12GB, meaning you could back up all your data on about 3 disks, which was still pretty good.

When Blu-ray burners arrived, the average hard disk was, what, 150-200GB? Meaning you'd need 5-10 disks to back up all your data (and at that point the burners were several hundred pounds and the disks around fifteen pounds each)! To back up all my data, just over a TB, I'd need over 40 disks! USB hard disks are becoming cheaper and higher-capacity all the time. This, even more than the move to digital downloads, is why Blu-ray will never become as commonplace as DVDs were.
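
(The disk count falls straight out of the capacities, assuming single-layer 25GB BD-Rs:)

```latex
\left\lceil \frac{1000\ \text{GB}}{25\ \text{GB/disk}} \right\rceil = 40\ \text{disks}
```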

Unless people come up with a new type of optical storage that can store significantly more data than the average hard drive, then optical storage is dead in the water. And good riddance, I say!
 
Not to mention, optical disks just aren't that fast at all. It would take days of burning and disk-swapping to back up a modern hard drive onto Blu-ray.

But anyway, that was part of a totally different discussion.
 
That's exactly what I was thinking...

(*- edit: this was supposed to be in reply to exophase, who asked if this was a joke topic -*)
 
Kramy said:
christo930 said:
With such a small screen and good design, I would think it could easily reach 5 hours battery time

No. The most power-efficient Atom + chipset combos still consume about 8x what the OMAP 3530 does. Also keep in mind that ARM vs. x86 emulation of DOS won't make a huge difference in performance, because to get decent battery life, your Atom CPU won't be running at full speed. ;)

If you want proof, look at an energy-efficient Atom notebook. The EEE PC 1000HE, which easily gets 7-8 hours of battery life, has a battery many times the capacity of the Pandora's. Not only is the mAh higher, but also the voltage, which gives it something ludicrous like 7x the capacity. That's also why the thing is as heavy as a brick.
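
(For the arithmetic-minded: pack energy is roughly mAh times voltage. Using ballpark specs — an assumed 8700 mAh / 7.4 V pack for the 1000HE versus an assumed 4000 mAh / 3.7 V cell for the Pandora:)

```latex
E = \frac{\text{mAh}}{1000} \times V
\qquad
E_{\text{1000HE}} \approx 8.7 \times 7.4 \approx 64\ \text{Wh}
\qquad
E_{\text{Pandora}} \approx 4.0 \times 3.7 \approx 15\ \text{Wh}
```

However you slice the exact numbers, the netbook pack holds several times the energy.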

To get "At least 5 hours battery life" out of an Atom Handheld, that sucker is going to have to weigh more than an old Gameboy + power brick! :) And if you're going for a handheld that heavy, why not just put the same massive battery on the Pandora for 50 hours battery life? :p
christo930 said:
And there is always running dos games under windows 98 if need be.
There is a 0% chance of getting something like Windows 98 running on the Pandora, or a Pandora-like device. There is a far greater chance that some miraculous developers will come from nowhere and release a cut-down 100% assembly 1MB OS for gaming. (perhaps with DOS compatibility, perhaps not)

0% chance of windows 98? how come? surely if you *REALLY* wanted to do it you could install it under dosbox... *shudder*
 
There's indeed a 100% better chance of booting Windows 98 when the Pandora arrives than of getting a 1MB OS for gaming :)
Just for fun: http://www.youtube.com/watch?v=G-Ecr8tWetI
 
Enverex said:
WizardStan said:
Enverex said:
That makes -no- sense. How on earth do you expect the laptop to use two processors as the main CPU when they are completely different architectures?
The PS2 used the PS1 processor as one of its subsystem processors. That's how it achieved backwards compatibility: by simply starting up the second processor as a primary when it was needed.
Most modern graphics cards have their own processors on the board which have absolutely nothing in common with the x86, just waiting for a program to be fed to them. Desktops don't have any problems with these "dual" setups.
The NDS has both an ARM7- and an ARM9-based CPU which, while very close, are not the same architecture, and both CPUs operate mostly independently within the system.
Bringing it closer to home, the Pandora has a primary processor, but it also carries a DSP, which is a processor in its own right and yet another different architecture.

So there's no reason that two CPUs of differing architecture can't exist within the same system, with one acting as primary and the other just waiting to be turned on and given a task.

You're completely missing the point here. Those were consoles, and the software was designed to use the processors in this way. PCs expect one (or more) processors of the same architecture; there is no software that can use processors of differing architectures. Also, GPUs have nothing to do with this; I was referring specifically to CPUs.

The "can boot with one or the other" idea is ludicrous as well, none of the software that works on one would work on the other, you'd basically have a dual-boot setup so different that it would be like having 2 different machines.

I'm not saying it's a good idea, but it's not so ludicrous; this has been done before:

http://en.wikipedia.org/wiki/Commodore_128

Two completely different CPUs (65xx, Z80) sharing the same hardware.
They didn't run at the same time, but the machine could "task switch" back and forth between the CPUs.

And some Amiga expansion boards had both 68k and PowerPC CPUs, which could run in parallel.
 
I think we can all agree that the Z80 in the Commodore 128 was a pretty silly design move.

Nonetheless, tons of computers have CPUs in them of varying architectures. Chances aren't bad that you already have an ARM in your laptop or desktop that you didn't even know was there.
 
Even the Apple II had CPU cards (Z80, 68k, and so on).
And as far as ARMs being all around go, many WiFi chipsets have an ARM7 inside :)
 
I think a hardware-based dynamic recompiler from x86 to ARM would be an interesting chip to have. That's all I had to say; please, carry on with whatever you were doing :p
 
dflemstr said:
I think a hardware-based dynamic recompiler from x86 to ARM would be an interesting chip to have. That's all I had to say; please, carry on with whatever you were doing :p
That'll happen when ARM has killed Intel :lol:

Even though that was meant to be a joke, that'd be very hard to achieve due to the memory systems being very different. For instance, x86 magically keeps the I-cache and D-cache coherent, which basically means JITs on x86 don't need to flush caches, while on ARM you have to flush...
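
To make that concrete, here's roughly what a JIT targeting ARM/Linux has to do every time it finishes emitting a block (a minimal sketch using the GCC/Clang builtin; `finish_block` is just an illustrative name):

```c
#include <stddef.h>

/* On ARM, freshly generated code sits in the D-cache while stale
   bytes may still be in the I-cache. The compiler builtin below
   performs the needed cache maintenance (clean D-cache, invalidate
   I-cache) over the given range; on x86 it compiles to nothing. */
void finish_block(void *code, size_t len)
{
    __builtin___clear_cache((char *)code, (char *)code + len);
}
```

On x86 the hardware does that bookkeeping for you, which is exactly why x86 JITs get away without it.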
 
He said hardware-based dynamic recompiler, not hardware x86 support - this could mean any amount of hardware functionality meant to help the process, from a small amount to a very large amount. The Loongson 3 CPU has added several instructions meant to make emulating x86 less burdensome, and ARM could adopt strategies like this, although a lot of the extensions aren't needed since ARM already facilitates something like what they do. That, and the Loongson 3 paper compares recompilation with the extensions against recompilation done by a laughably bad recompiler, so I guess part of the purpose of the extensions is to work around a lack of experts doing the software.

But there are definitely some features that could be added to a CPU to make emulation and virtualization faster in general: hardware hash lookup tables, fast same-process page faults with features that make it quick to emulate the faulting instruction, the ability to quickly swap out flags, perhaps some flagless test-and-branch instructions like those on MIPS, hardware-assisted cycle counters that use special registers and fault quickly (calling a callback) when they overflow, a popcount instruction to help with that damn parity flag... I'm sure I could think of some more.
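
To illustrate the popcount point: x86's PF is set when the low byte of a result has an even number of 1 bits, so with a popcount instruction it's a mask and one op (a sketch using the GCC builtin; `x86_parity_flag` is a made-up helper name):

```c
#include <stdint.h>

/* x86 PF covers only the low 8 bits of the result and is set for
   EVEN parity. With hardware popcount this is one instruction plus
   a mask; without it you're stuck with a lookup table or shift/XOR
   folding on every flag-setting op you can't prove dead. */
static inline int x86_parity_flag(uint32_t result)
{
    return (__builtin_popcount(result & 0xFF) & 1) == 0;
}
```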

By the way, what you said about cache coherency can be worked around in a dynamic recompiler pretty easily w/o adding any additional hardware support. Code is not translated until it is encountered or modified, and in both cases the recompiler knows to push the generated code out of dcache when it's done, and also invalidate the edge line in icache. Code pages are marked read-only and writes are detected, flushing the translation cache. So long as the code doesn't keep modifying code sections in between executions of the same sections, this works fine. That isn't the access pattern recompilers themselves follow; the only thing that might is aggressively self-modifying code, which barely exists in the x86 code people would be interested in running this way as opposed to using a slower emulator for. If that DOES happen, there are strategies to deal with it more effectively.
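
For the curious, the write-detection half of that looks something like this on Linux (a rough sketch; `invalidate_translations_for_page` is a hypothetical hook into the recompiler core, and real code would first check that the fault actually landed in a guest code page):

```c
#include <signal.h>
#include <stdint.h>
#include <sys/mman.h>

#define PAGE_SIZE 4096u

/* Hypothetical recompiler hook: discard every translation derived
   from this guest page so it gets retranslated on next execution. */
extern void invalidate_translations_for_page(uintptr_t page);

/* Translated guest code pages are kept read-only, so a guest write
   to one raises SIGSEGV. We invalidate the stale translations,
   re-enable writes, and return so the faulting store retries. */
static void write_fault_handler(int sig, siginfo_t *info, void *ctx)
{
    uintptr_t page = (uintptr_t)info->si_addr & ~(uintptr_t)(PAGE_SIZE - 1);
    invalidate_translations_for_page(page);
    mprotect((void *)page, PAGE_SIZE, PROT_READ | PROT_WRITE);
}

void install_write_detection(void)
{
    struct sigaction sa = {0};
    sa.sa_sigaction = write_fault_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);
}
```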

All of the things I said can apply to a dynamic recompiler that generates code blocks in hardware (not a hardware interpreter, an actual recompiler). See Transmeta Crusoe if you don't believe me (although admittedly its strategy is not really the same).
 
Laurent said:
Even the Apple II had CPU cards (Z80, 68k, and so on).
*raises hand*

z80 in apple2e represent.

cp/m, turbo pascal 2.0 (or was it 3.0, can't recall anymore), 80-column text mode support in editors.

take that, commodore!

edit: now that i think of it, there was an apple2 model (or was it a clone?) that had the z80 board built-in on the motherboard.
 
Exophase said:
But there are definitely some features that could be added to a CPU to make emulation and virtualization faster in general: hardware hash lookup tables, fast same-process page faults with features that make it quick to emulate the faulting instruction, the ability to quickly swap out flags, perhaps some flagless test-and-branch instructions like those on MIPS, hardware-assisted cycle counters that use special registers and fault quickly (calling a callback) when they overflow, a popcount instruction to help with that damn parity flag... I'm sure I could think of some more.
Yeah, an integer divide instruction would help too. That's one of the main things missing in ARM. The x86 parity flag is rarely used; most of the time you would not have to calculate it.
 
Have any of you checked out the UMID mbook M1? Nice candidate for a framework and proof of concept that an x86 version of the Pandora could work if done properly. Also, you could probably knock off about $70-100 for the copy of XP ;) I'd like to hear opinions here.

cost - $599 USD
os - Windows XP Home
speed - 1.33GHz x86 Intel Atom
screen - 4.8in touchscreen
disk - 16GB solid state
ram - 512MB DDR
weight - 315g
battery - 2400mAh
expansion - microSD card slot

some pros and cons
+3.5 hour battery life running full blast
+6+ hour battery life running minimal
+fanless with slight heat production

-no gaming controls
-no internal 3.5mm headphone jack
-only uses mini-USB ports

size comparison
http://www.sizeasy.com/page/size_comparison/25060-ASUS-EEE-PC-90x-vs-UMID-mbook-M1-vs-OpenPandora-vs-Nintendo-DS-vs-Pack-Of-Playing-Cards

reseller
http://www.dynamism.com/#Product=umid

EDIT: removed quote and added EEE PC for better size comparison
 
Neko said:
Yeah, an integer divide instruction would help too. That's one of the main things missing in ARM. The x86 parity flag is rarely used; most of the time you would not have to calculate it.

ARMv7-M has an integer divide; maybe it'll get carried over. It's not a very big deal, since integer divide isn't especially fast on most x86 either (dozens of cycles), nor is it especially common. A good compromise would be division-step instructions like the ones available on SH4.
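
For reference, a division-step approach retires one quotient bit per step, so a full 32-bit divide is 32 cheap, pipelinable steps. A software sketch of the idea (plain restoring division; SH4's DIV1 is actually non-restoring, but the principle is the same):

```c
#include <stdint.h>

/* Each loop iteration is one "division step": shift the next
   dividend bit into the running remainder, then subtract the
   divisor if it fits, producing one quotient bit. A hardware
   division-step instruction does exactly one of these per issue. */
uint32_t div_by_steps(uint32_t dividend, uint32_t divisor)
{
    uint32_t rem = 0, quot = 0;
    for (int bit = 31; bit >= 0; bit--) {
        rem = (rem << 1) | ((dividend >> bit) & 1);
        quot <<= 1;
        if (rem >= divisor) {    /* step subtract */
            rem -= divisor;
            quot |= 1;
        }
    }
    return quot;                 /* remainder is left in 'rem' */
}
```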

I largely included the parity bit for Laurent's sake, since he's mentioned it before, but the raw truth is that for a statistically significant number of instructions you can't guarantee the parity flag will never be looked at. That number of course goes down the more complex the global analysis in your recompiler is, but it becomes close to impossible to determine across indirect branches and flag loads/pushes. But yes, if you limit parity flag emulation to cases where there's a jump or set on parity even/odd on the live result of an operation in the same basic block, you'll probably get most uses of it out of the way. Certainly there could be a "do not emulate" option, but there could be for a lot of things - probably not for something implementing this in hardware, though.

jb0yx said:
+3.5 hour battery life running full blast

That's a pro? And here people complain about PSP giving 5-6 hours.
 
jb0yx said:
some pros and cons
+3.5 hour battery life running full blast
+6+ hour battery life running minimal
+fanless with slight heat production
hmm, i'm not exactly sure why, but somehow the above reads to me as:

+3.5 hour battery life at normal usage
+6+ hour battery life when idling

don't mind me though, i'm like that - overly suspicious.

ps: i almost physically killed an atom board today. poor bugger flew through the air for about 1.5m and landed flat on its back on the ground. that while still powered and running. i didn't expect it to work after that, but it did.

Exophase said:
It's not a very big deal, since integer divide isn't especially fast on most x86 either (dozens of cycles), nor is it especially common.
i wouldn't be surprised if the geode we discussed yesterday had the fastest (clocks-to-retire) idiv among current x86's, by virtue of its cyrix heritage ; )
 