Next Generation Pandora


Exophase said:
jb0yx said:
+3.5 hour battery life running full blast

That's a pro? And here people complain about PSP giving 5-6 hours.

I'm not a battery expert, but 4000 mAh > 2600 mAh. It did say 2-cell, though, which might mean some weight/size difference versus the Pandora battery, I dunno, but briefly looking at it, the numbers seemed like they'd match up if they're on the same scale?? maybe??

4000 / (10 * 60) ≈ 6.67 mAh/min (Pandora battery, 10 hours, ARM proc)
2600 / (3.5 * 60) ≈ 12.38 mAh/min (MBook battery, 3.5 hours, Intel proc)
4000 / 12.38 ≈ 323.1 min (Intel-level consumption on the Pandora battery)
323.1 / 60 ≈ 5.38, so ~5.5 hours on the 4000 mAh battery?
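
The same back-of-the-envelope math as a quick C sketch (assuming both batteries run at the same voltage, so mAh alone is comparable):

Code:
#include <stdio.h>

int main(void)
{
    /* assumption: both batteries at the same voltage, so mAh is comparable */
    double pandora_mah = 4000.0, pandora_hours = 10.0;  /* ARM proc  */
    double mbook_mah   = 2600.0, mbook_hours   = 3.5;   /* Intel proc */

    double pandora_draw = pandora_mah / (pandora_hours * 60.0); /* ~6.67 mAh/min  */
    double mbook_draw   = mbook_mah   / (mbook_hours   * 60.0); /* ~12.38 mAh/min */

    /* Intel-level draw running from the Pandora battery */
    double minutes = pandora_mah / mbook_draw;                  /* ~323 min */

    printf("%.2f vs %.2f mAh/min -> %.1f hours\n",
           pandora_draw, mbook_draw, minutes / 60.0);           /* ~5.4 hours */
    return 0;
}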

not as good as the Pandora, but I'd say acceptable. Plus you could probably underclock to around 900 MHz or so and get better life, right?

@darkblu - I was reading the review, and "full blast" was described as Wi-Fi + Bluetooth on, plus a macro program loading Internet Explorer, loading a page, closing it, then repeating; not 100% CPU use, but it seemed fair for real-world use or above. "Minimal" was described as idle, without screen saver or power saving, and with Wi-Fi and BT disabled. Hopefully that clears stuff up.
 
Exophase said:
By the way, what you said about cache coherency can be worked around in a dynamic recompiler pretty easily w/o adding any additional hardware support. Code is not translated until it is encountered or modified, and in both cases the recompiler knows to push the generated code out of dcache when it's done, and also to invalidate the edge line in icache. Code pages are marked read-only and writes are detected, flushing the translation cache. So long as the code doesn't keep modifying code sections in between executions of the same sections, this works fine. This is not the access pattern that recompilers themselves follow; the only thing that might be is aggressively self-modifying code, which barely exists in the x86 code people would be interested in running, as opposed to using a slower emulator for. If that DOES happen, there are strategies to deal with it more effectively.
You know perfectly well the devil is in the details; emulating a given instruction in HW or by means of translation is not a big deal. If you have a very fast and accurate way to detect SMC on x86, you will easily get a very well-paid job at VMware :)

We don't agree on the accuracy thing, I'm afraid; I place it above speed, and for any commercial solution that should be the case, or you'll be dead when Photoshop (which seems to use SMC a lot, according to Derek Bruening) keeps crashing for your customers :)

I agree with everything else you wrote.
 
darkblu said:
i wouldn't be surprised if the geode we discussed yesterday had the fastest (clocks-to-retire) idiv among current x86's, by virtue of its cyrix heritage ; )

Hm. http://7-cpu.com/cpu/GeodeLX-lat.txt

39-42 cycles for 32-bit, 100% unpipelined. Having a full-blown hardware implementation on ARM isn't going to save you enough to be worth it, even if it's as fast per-cycle as the one on Cortex-M3 (which is of course a silly comparison, since the clock speed is an order of magnitude lower). Better yet, a software or bit-stepped hardware option would be guaranteed to scale down in execution time for the lower data sizes, like the x86 timings do. A hardware instruction may not do any such range detection. You could of course use software for the 8- or 16-bit ones only.
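
To illustrate, a software divide that scales with operand magnitude might look like this (a sketch; the function name is mine, and __builtin_clz is the GCC/Clang builtin):

Code:
#include <stdint.h>

/* restoring shift-subtract divide; the iteration count tracks the
   dividend's magnitude, so small operands retire faster, much like
   the data-dependent x86 timings being discussed */
uint32_t udiv32(uint32_t n, uint32_t d, uint32_t *rem)
{
    uint32_t q = 0, r = 0;
    int i = 31 - __builtin_clz(n | 1);  /* skip the leading zero bits */
    for (; i >= 0; i--) {
        r = (r << 1) | ((n >> i) & 1);  /* bring down the next bit */
        if (r >= d) {
            r -= d;
            q |= 1u << i;
        }
    }
    *rem = r;
    return q;
}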

Laurent said:
You know perfectly well the devil is in the details; emulating a given instruction in HW or by means of translation is not a big deal. If you have a very fast and accurate way to detect SMC on x86, you will easily get a very well-paid job at VMware :)

We don't agree on the accuracy thing, I'm afraid; I place it above speed, and for any commercial solution that should be the case, or you'll be dead when Photoshop (which seems to use SMC a lot, according to Derek Bruening) keeps crashing for your customers :)

I agree with everything else you wrote.

Please don't get the wrong idea; nothing in my suggestions compromised accuracy, only performance - so long as we're not talking about timing, which of course no commercial solution for virtualization makes any attempt whatsoever to model. How much performance gets hit is going to depend on the nature of the SMC patterns; even if something performs SMC a lot, it might not interleave execution in a pattern that incurs a large penalty to handle. Of course "a lot" is subjective - I've seen all kinds of horrible SMC patterns on GBA, but despite x86 being cache coherent, it's not as if self-modifying code is a free ride like it was in the past. It won't have the outrageous expense that the simplest recompiler solutions would, but it'll still likely cost you at least dozens of cycles when you do it.

I suggest you read about Transmeta's recompiler if you're curious about some strategies. Some of them do use hardware assistance - for instance, having a small number of pages capable of fine-grained protection. But a lot of the techniques can be applied in software.

From my own experience alone, here are some things that you can do:

- If only a small number of single instructions are modified, flag them and patch them out of the generated block so that they can be interpreted (or have their immediates loaded indirectly, if only the immediates are modified - a popular form of self-modifying code)
- If a large portion of a block is changed, mark the block as dirty by patching its entry, so that the next time it's executed it compiles a new block for itself. And keep a separate translation cache for dynamic blocks, with checksums generated so common blocks can be reused. If there's little reuse, this will at least section off the translation caches so that you don't end up flushing the non-changing portions. (There's a rough sketch of this below.)

(the downside to these techniques is that you can't have blocks overlap, which is actually kind of a bitch)
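
A rough sketch of the second technique (all names are hypothetical, and a real recompiler would patch the block's entry stub rather than test a flag at dispatch):

Code:
#include <stddef.h>
#include <stdint.h>

typedef struct Block Block;
struct Block {
    const uint8_t *src;    /* guest code this block was translated from */
    size_t len;
    void (*entry)(void);   /* generated host code */
    int dirty;             /* set by whatever detected the write */
};

/* hypothetical: a separate, checksum-keyed cache of "dynamic" blocks,
   so frequently rewritten code gets reused instead of recompiled */
extern Block *find_dynamic(const uint8_t *src, size_t len);
extern Block *translate(const uint8_t *src, size_t len);

void run_block(Block **slot)
{
    Block *b = *slot;
    if (b->dirty) {
        /* look for an identical dynamic block before recompiling */
        Block *d = find_dynamic(b->src, b->len);
        *slot = b = (d != NULL) ? d : translate(b->src, b->len);
    }
    b->entry();
}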

Above all, you'll get the best performance if you have an adaptive/cascading strategy which monitors behavior because no program is going to constantly trample all over its entire code base unless it's just designed to defeat your recompiler.

By the way, this one guy I vaguely knew, who was a total jerk to me once (goes by ChipX86), got a job at VMware, so why can't I? Not that I'd necessarily want to work there.
 
The problem with approaches that rely on write-protecting pages is the number of false alarms you'll get when you have data close to instructions (which seems to happen a lot, depending on the compiler used). But I guess you already know that :) I wonder if accurate and efficient SMC detection isn't harder than properly handling the SMC itself.
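
For concreteness, the kind of scheme we're talking about looks roughly like this in software (a minimal POSIX sketch; flush_translations() is a hypothetical helper, and the page granularity is exactly where those false alarms come from):

Code:
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdint.h>
#include <sys/mman.h>

#define PAGE 4096UL

extern void flush_translations(void *page);  /* hypothetical */

/* fault handler: a write hit a protected code page */
static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    void *page = (void *)((uintptr_t)si->si_addr & ~(PAGE - 1));
    flush_translations(page);                      /* drop that page's blocks */
    mprotect(page, PAGE, PROT_READ | PROT_WRITE);  /* let the write proceed */
}

void init_smc_detection(void)
{
    struct sigaction sa = {0};
    sa.sa_sigaction = on_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);
}

void protect_code_page(void *page)
{
    mprotect(page, PAGE, PROT_READ);  /* subsequent writes fault */
}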

BTW, I read somewhere that at some point Intel had removed coherency inside the instruction pipeline, but had to reintroduce it later due to portability issues. If I remember that correctly, and if it's true, then it says a lot about how bad the situation is on x86. (After some digging, it looks like in fact Intel, starting with the Pentium, handles modifying the next instruction, so perhaps half of my story is wrong...)

Do you have some pointers to Transmeta docs (that aren't patents - I don't have the right to read patents, due to potential pollution of my work :( )? I wonder how good or bad their strategy of relying on small pages was; that's known to give poor performance due to TLB thrashing (that's why hugetlbfs exists, for instance). Or did you really mean that only the protection is fine-grained, on a standard-sized page? I'm not sure that would be easy to implement efficiently without costing too much power-wise (you'd basically have to do multiple parallel checks for every write access).
 
With GBA emulation I definitely found that checking for self-modifying code (a byte lookup per write - I'd guess about 4 extra cycles) was not nearly as bad as actually handling it. I don't expect false positives to be an issue most of the time. Where they happen a lot, you can patch the access to do the check in software instead, which would probably be a three-level page table down to bitmaps or something. I wouldn't expect compiler-generated code to interact with this, because usually it'll be in the .code section (page aligned), which is often read-only to begin with, and because self-modifying code usually operates on either assembly or program-generated code, not something a compiler generated. It's hard to modify something that's that big of a moving target.
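
Roughly, the check looks like this (a sketch with illustrative names and sizes, not the actual emulator code):

Code:
#include <stdint.h>

#define PAGE_SHIFT 12   /* granularity of the lookup, illustrative */

extern uint8_t *guest_mem;
extern uint8_t  code_page[];                 /* one byte per guest page,
                                                nonzero if translated code
                                                lives there */
extern void invalidate_page(uint32_t addr);  /* hypothetical: flush blocks */

/* every recompiled store funnels through something like this:
   one extra load, compare and (usually untaken) branch per write */
static inline void write8(uint32_t addr, uint8_t val)
{
    guest_mem[addr] = val;
    if (code_page[addr >> PAGE_SHIFT])       /* the byte lookup per write */
        invalidate_page(addr);               /* only pay the real cost on a hit */
}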

Intel really wasn't in a position to afford any compatibility problems, since you can't just switch to a different software solution if they come up. My guess would be that self-modifying code just flushes the pipeline unconditionally, making it all the slower. Trying to track the PCs of all the speculatively executing code in the pipeline sounds pretty awful, especially for a corner case like this. But this assumes that anything in the pipeline is guaranteed to be in icache - if you have enough ways to cover the pipeline, then this should always be the case, but I could imagine some kind of pathological case where it isn't. Horrible.

I don't know if the fine-grained access was protection only or paging only - the impression I got from what I read is that it was an L3 page table that is only allocated when deemed utterly necessary. The same way L2 page tables should really be allocated, and probably would be if OSes cared more about this. So it'd only happen once they started getting those false positives on write accesses. I don't know the granularity of the pages. Another thing Transmeta did was code checksumming prior to execution - I've never been very enthusiastic about this approach, especially considering that code is usually executed far more times than it's modified (or else what's the point? Unless you're working around your instruction set not having indirection on something..). But maybe real-time code checksums can be done in hardware instead? That's probably not that much overhead if it's just keeping a running checksum that you can then retire and verify at checkpoints. A lot of CPUs already have fast checksum or encryption engines. I imagine it could be built into the icache.
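
In software, the checksum-before-execution idea would look something like this (hypothetical names; Transmeta did the real thing below the ISA, probably with hardware help):

Code:
#include <stddef.h>
#include <stdint.h>

typedef struct {
    const uint8_t *src;   /* guest bytes the block was built from */
    size_t len;
    uint32_t sum;         /* captured at translation time */
    void (*entry)(void);  /* generated host code */
} TBlock;

extern void retranslate(TBlock *b);  /* hypothetical */

static uint32_t csum(const uint8_t *p, size_t n)
{
    uint32_t h = 0;
    while (n--)
        h = h * 31 + *p++;  /* stand-in for a real CRC/hash */
    return h;
}

void dispatch(TBlock *b)
{
    if (csum(b->src, b->len) != b->sum)
        retranslate(b);     /* source changed since translation */
    b->entry();
}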

Here are some Transmeta docs:

http://www.cc.gatech.edu/~ntclark/8803f08/notes/transmeta.pdf
http://www.complang.tuwien.ac.at/scopes03/slides/dehnert.pdf

The latter in particular covers some SMC strategies, although it's harder to read since it's just slides.

From the way these things read, you'd basically think the CPUs were made for no other purpose than to emulate x86, which is a crying shame, because really they should have been good for emulating other platforms too. I can only imagine how much more popular it might have been if it could have dual-booted Windows and Mac OS X when it first came out.

It was a pretty power efficient chip though.
 
Exophase said:
Hm. http://7-cpu.com/cpu/GeodeLX-lat.txt

39-42 cycles for 32-bit, 100% unpipelined. Having a full-blown hardware implementation on ARM isn't going to save you enough to be worth it, even if it's as fast per-cycle as the one on Cortex-M3 (which is of course a silly comparison, since the clock speed is an order of magnitude lower). Better yet, a software or bit-stepped hardware option would be guaranteed to scale down in execution time for the lower data sizes, like the x86 timings do. A hardware instruction may not do any such range detection. You could of course use software for the 8- or 16-bit ones only.
yeah. they clearly botched it on the mediagx line (the predecessor to the geodes).

on the M1, things stood a little differently:
http://datasheets.chipdb.org/Cyrix/M1/6x86/M1-6.pdf

Code:
DIV Unsigned Divide
Accumulator by Register/Memory
Divisor:
Byte        13-17
Word        13-25
Doubleword  13-41

Code:
IDIV Integer (Signed) Divide
Accumulator by Register/Memory
Divisor:
Byte        16-20
Word        16-28
Doubleword  17-45

note the data-dependent hw early-out that intel were so happy about in their relatively recent pM architecture ; )
 
Mr Poletski said:
0% chance of windows 98? how come? surely if you *REALLY* wanted to do it you could install it under dosbox.. *shudder*

Laurent said:
There's indeed a 100% greater chance of booting Windows 98 when the Pandora arrives than of getting a 1 MB OS for gaming :)
For the fun: http://www.youtube.c...h?v=G-Ecr8tWetI

I was talking about running it natively. :p

jb0yx said:
4000 / (10 * 60) ≈ 6.67 mAh/min (Pandora battery, 10 hours, ARM proc)
2600 / (3.5 * 60) ≈ 12.38 mAh/min (MBook battery, 3.5 hours, Intel proc)
4000 / 12.38 ≈ 323.1 min (Intel-level consumption on the Pandora battery)
323.1 / 60 ≈ 5.38, so ~5.5 hours on the 4000 mAh battery?

In my experience, most Atom-powered devices use higher-voltage batteries, which hold more energy even when the mAh figure is lower.

You want to measure VA, or w/e. :p Not just mAh.

I suspect the run time would be cut on a Pandora battery, but since I haven't looked up the specs of both batteries, I can't say for certain.
 
Gruso said:
"We can't compete in the smartphone sector. What should we do?"

"How about we make a netbook?"

"Brilliant! Give that man another line of coke."

To be fair to Nokia, they make nice h/w, and given the battery life claims it's rumoured they're using the next Atom revision (IIRC the northbridge in the Atom we've seen to date consumed more power than the CPU itself, which the next version was due to fix), so maybe those claims aren't totally off base.

Still don't want an x86 Pandora though, ARM makes much more sense IMO.
 
Kramy said:
You want to measure VA, or w/e. :p Not just mAh.

I suspect the run time would be cut on a Pandora battery, but since I haven't looked up the specs of both batteries, I can't say for certain.

The usual measure is Watt-hours for a battery, and watts for the device.
That way you get a ballpark estimate of runtime regardless of voltage step-ups or step-downs.
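
For example (made-up numbers, just to show the units working out):

Code:
#include <stdio.h>

int main(void)
{
    /* hypothetical figures, purely for illustration */
    double volts      = 3.7;               /* nominal cell voltage    */
    double amp_hours  = 4.0;               /* i.e. a 4000 mAh battery */
    double watt_hours = volts * amp_hours; /* 14.8 Wh of stored energy */
    double draw_watts = 2.5;               /* average device draw     */

    printf("~%.1f hours\n", watt_hours / draw_watts);  /* ~5.9 hours */
    return 0;
}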
 
darkblu said:
bnolsen said:
If the screen res doesn't increase, real-time raytracing could become more mainstream. Multicore would really help here.
impatient, aren't we? : )
while being a nicer 'rasterization abstraction level', ray-tracing will never be cheaper than scan-conversion, and, in this regard, the latter will be preferred in the power-sensitive spectrum for a healthy number of years to come. just IMHO. of course.

as for open-source friendliness, i think our best bet is GPGPU-sort-of designs with as much direct programability exposed as possible. as a matter of fact, even as i type this, there's already a soon-to-be-released handheld that takes steps in that direction ; )

Well, consider that at work we're doing very rigorous full ray casting with dual 1.6 GHz Clovertowns at 3.6 MP/s (this includes 300+ camera reorientations per second). And I have some ideas I don't have time to implement which would cut the needed computations down by at least an order of magnitude for non-rigorous use (i.e. gaming). The problem I see right now would be the amount of CPU power (watts) needed to pull this off, even in the next generation.
 
darkblu said:
Laurent said:
Even Apple2 had CPU cards (Z80, 68k and so on).
*raises hand*

z80 in apple2e represent.

cp/m, turbo pascal 2.0 (or was it 3.0, can't recall anymore), 80-column text mode support in editors.

take that, commodore!

edit: now that i think of it, there was an apple2 model (or was it a clone?) that had the z80 board built-in on the motherboard.
Well, there was at least one clone that had a built-in Z80: the Basis-108, a German Apple II clone that looked nothing like one on the outside. It was made of steel; you could literally jump up and down on it with no harm done. It ran either as an Apple IIe (6502 mode) or under CP/M (Z80 mode), CP/M 2.2 or 3.0.
 
bnolsen said:
Well, consider that at work we're doing very rigorous full ray casting with dual 1.6 GHz Clovertowns at 3.6 MP/s (this includes 300+ camera reorientations per second). And I have some ideas I don't have time to implement which would cut the needed computations down by at least an order of magnitude for non-rigorous use (i.e. gaming). The problem I see right now would be the amount of CPU power (watts) needed to pull this off, even in the next generation.
exactly. we're talking of some 100-160W (depending on the clovertown variation) in just TDP, for those 3.6MP/s.

in comparison, the handheld hosting a 'GPGPU' i was referring to in my last post does some 'measly' 8GFLOPS, and does 'software' scan-conversion at ~50MP/s. at ~1W draw.

and that device is already considered too much off the ideal features-performance/powerdraw curve by some industry men.. : ) so draw your own conclusions.
 
darkblu said:
bnolsen said:
Well, consider that at work we're doing very rigorous full ray casting with dual 1.6 GHz Clovertowns at 3.6 MP/s (this includes 300+ camera reorientations per second). And I have some ideas I don't have time to implement which would cut the needed computations down by at least an order of magnitude for non-rigorous use (i.e. gaming). The problem I see right now would be the amount of CPU power (watts) needed to pull this off, even in the next generation.
exactly. we're talking of some 100-160W (depending on the clovertown variation) in just TDP, for those 3.6MP/s.

in comparison, the handheld hosting a 'GPGPU' i was referring to in my last post does some 'measly' 8GFLOPS, and does 'software' scan-conversion at ~50MP/s. at ~1W draw.

and that device is already considered too much off the ideal features-performance/powerdraw curve by some industry men.. : ) so draw your own conclusions.

The point I was making is that the problem we're solving at work using ray casting is at least an order of magnitude more complex than what gaming would require. And it's being done with no special optimization, with good results, on hardware more than 3 years old (3 CPU generations back).

I think it's not unreasonable to expect very good results with real-time ray tracing on mobile hardware in 3 or so years.
 
I think the Zii has been talked about before, but now they say they're selling their boards in volume for $75

http://www.zii.com/Technology/ViewArticle.aspx?Article=4

Pricing and Availability

Volume shipments of the ZMS-05 System Modules are expected in September 2009 with volume pricing starting at US$75. Customers who wish to manufacture or customize the modules themselves can obtain all the hardware design data required for US$5,000. Specifics of the “Shanzhai” OEM Marketing Program will be detailed later. For more information, please visit the Contact Us section at: www.ziilabs.com

Pandora 2 with zii power? lol

or how about a more expensive Pandora version with the board? I'd buy a Zii if it had the same form factor as a Pandora. I need a keyboard, damn it.
 
^ Do you have a source for that? Lots of people here are interested in exactly what's inside, but no one has found any solid information yet. The page you linked says this:

ZiiLABS ZMS-05 Media-Rich Applications Processor
- Flexible StemCell Media Processing Array
- Dual ARM926-EJS Cores
...which doesn't really mean much.
 