Porting Various Operating Systems To Pandora?


Chip said:
The .65W TDP rating for the Atom Z500 is for the CPU only, but the .25W TDP for the Cortex-A9 is also only for the CPU. The numbers for an A9-based SoC with all the trimmings would obviously be higher, just as a complete Atom-based system would be higher.
TDP is supposed to be a maximum allowed dissipation - with that in mind, you can see that the only thing that could vary this is clock speed. Indeed, Intel claims that it's the 500MHz version that can get 0.6W.

Now, the thing is, Intel is being very careful with their wording because there's no question that this CPU will perform worse, clock for clock, than any x86 CPU Intel has released in several years. They say that the single threaded performance is equivalent to an original Pentium-M, but they don't mention clock speeds. I wouldn't be at all surprised if they were comparing a 900MHz one to a 1.8GHz Atom. Or, if compared with the A100, it'd be a 600MHz CPU against, again, something as high as 1.8GHz. If either is the case then a 500MHz Atom doesn't sound that great at all.

Sure, these will be viable for handhelds, if that's what people want to use. The question is if they'll be competitive with ARM in power/performance ratios, and at this point I'm strongly doubting it. If Intel can halve the power consumption without compromising performance in any way whatsoever then I'd love to know what they're doing, and why they're even bothering rolling out Atom as it is now at all.

EDIT: Finally found the relevant comparison: "The 2 GHz variant of the Silverthorne processor will operate at 1 volt and it will have performance equivalent to a first generation “Banias” Pentium M notebook processors circa 2003. " (http://blogs.zdnet.com/Ou/?p=987) - unfortunately this helps little, because 2003 saw the release of 900MHz to 1.7GHz Pentium-M's. Even if you restrict it to the very first batch they still went all the way up to 1.6GHz. But I think giving MHz for the Atom and not the Pentium-M is pretty telling.
 
Exophase said:
The question is if they'll be competitive with ARM in power/performance ratios, and at this point I'm strongly doubting it.

I am ready to bet they won't be able to achieve that within two years.

QUOTE
If Intel can halve the power consumption without compromising performance in any way whatsoever then I'd love to know what they're doing, and why they're even bothering rolling out Atom as it is now at all.

Indeed. Atom looks like a complete architectural mistake. Going in-order was utterly stupid. I can only guess they did that because they didn't know how to reduce power. However, I am sure they will eventually be able to beat ARM, but it certainly won't happen within two years.

Exophase said:
EDIT: Finally found the relevant comparison: "The 2 GHz variant of the Silverthorne processor will operate at 1 volt and it will have performance equivalent to a first generation “Banias” Pentium M notebook processors circa 2003. " (http://blogs.zdnet.com/Ou/?p=987) - unfortunately this helps little, because 2003 saw the release of 900MHz to 1.7GHz Pentium-M's. Even if you restrict it to the very first batch they still went all the way up to 1.6GHz. But I think giving MHz for the Atom and not the Pentium-M is pretty telling.
You want real data? Atom @1.6 GHz is about the same speed as a P3-M @1.13 GHz and 20% slower than a current Celeron-M @900 MHz.
Granted, it's only some Super PI stuff, but anyway that proves my point regarding in-order shit :p

Ref: http://xtreview.com/addcomment-id-4447-vie...-benchmark.html
 
Laurent; All the negative in-order sentiments make me worried about some things on A8, especially generated code (recompilers). However, I do think that having significantly more registers for most things that will be run (i.e., I don't think that people will be running 64bit apps on Atom all the time) helps lessen the pain of in-order, allowing the compiler to rename more registers in software.

I really hope GCC can schedule for Cortex-A8 okay.
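To make that concrete, here's a rough C sketch of the kind of "software renaming" I mean (a toy example I just made up, not from any real recompiler or compiler output): with one spare register you can start the next load while the previous value is still being consumed, so an in-order core isn't left stalling on the load-use dependency.

CODE
#include <stddef.h>

/* Naive version: each iteration loads a[i] and uses it immediately,
 * so an in-order core can stall on the load-use dependency. */
int sum_naive(const int *a, size_t n)
{
    int s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Scheduled version: the next element is loaded into its own variable
 * (register) while the previous one is being added, so the load latency
 * overlaps with useful work. It costs one extra live register, which is
 * exactly where having more architectural registers helps. */
int sum_scheduled(const int *a, size_t n)
{
    if (n == 0)
        return 0;
    int s = 0;
    int cur = a[0];
    for (size_t i = 1; i < n; i++) {
        int next = a[i];   /* issue the load early */
        s += cur;          /* work that doesn't depend on that load */
        cur = next;
    }
    return s + cur;
}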

It's kind of disillusioning that ARM defended their choice of in-order for the A8 so heavily (albeit purely from a power consumption point of view), then rolled out the A9, which went out-of-order with a shorter pipeline.

EDIT: Important question that I think is relevant to determining Atom's performance - why is a 1.13GHz Tualatin performing so much worse than a 900MHz Celeron-M in Super-PI? The architectural differences shouldn't be that groundbreaking, and the only major differences I can think of that would contribute this much are memory bandwidth and SSE2. If it's using an SSE2 build on SSE2-capable CPUs vs a scalar one on the others, then we can throw the Tualatin score out altogether (and any tests involving Cortex-A8/A9, since it doesn't have double-precision SIMD either... which is all but irrelevant for just about anything anyone would want to use it for on a handheld).

EDIT2: Looks like Super-PI does allocate a lot of memory (way more than the L2 cache), I guess that could be making a huge difference here. I didn't think computing pi would need a lot of memory.
 
Exophase said:
Laurent; All the negative in-order sentiments make me worried about some things on A8, especially generated code (recompilers). However, I do think that having significantly more registers for most things that will be run (i.e., I don't think that people will be running 64bit apps on Atom all the time) helps lessen the pain of in-order, allowing the compiler to rename more registers in software.

I really hope GCC can schedule for Cortex-A8 okay.

As far as recompilers are concerned, you will have to schedule by hand. But did you have to do that for the GP2X ARM9?
For gcc, the back-end has been available for months and it's open-source, so it's up to you to make it better if you feel the need :p

QUOTE
It's kind of disillusioning that ARM defended their choice of in-order for the A8 so heavily (albeit purely from a power consumption point of view), then rolled out the A9, which went out-of-order with a shorter pipeline.

Though I can't comment too much on that, as you can guess, keep one thing in mind: a processor takes several years from the early design decisions before it comes into your hands. Now project yourself back, say 4 or 5 years. Do you think there were many out-of-order chips in the low-power embedded market? Did the UMPC concept even exist at that time? What CPU power did PDAs and smartphones need? The market has evolved, ARM has moved forward, but Intel has stepped back.

QUOTE
EDIT: Important question that I think is relevant to determining Atom's performance - why is a 1.13GHz Tualatin performing so much worse than a 900MHz Celeron-M in Super-PI? The architectural differences shouldn't be that groundbreaking, and the only major differences I can think of that would contribute this much are memory bandwidth and SSE2. If it's using an SSE2 build on SSE2-capable CPUs vs a scalar one on the others, then we can throw the Tualatin score out altogether (and any tests involving Cortex-A8/A9, since it doesn't have double-precision SIMD either... which is all but irrelevant for just about anything anyone would want to use it for on a handheld).

SuperPI doesn't use SSE2. I was told it uses x87, so it naturally stinks :) My point was basically that just because a chip is an x86 from Intel doesn't mean it shines.

The only comparison Intel did was Atom vs OMAP2420 browsing performance. And you know what? They did it with no network connection; pages were locally stored. And it was a comparison against a 2- or 3-year-old chip. Pure marketing bullshit.
 
Exophase said:
Chip said:
The .65W TDP rating for the Atom Z500 is for the CPU only, but the .25W TDP for the Cortex-A9 is also only for the CPU. The numbers for an A9-based SoC with all the trimmings would obviously be higher, just as a complete Atom-based system would be higher.
TDP is supposed to be a maximum allowed dissipation - with that in mind, you can see that the only thing that could vary this is clock speed. Indeed, Intel claims that it's the 500MHz version that can get 0.6W.

Now, the thing is, Intel is being very careful with their wording because there's no question that this CPU will perform worse, clock for clock, than any x86 CPU Intel has released in several years. They say that the single threaded performance is equivalent to an original Pentium-M, but they don't mention clock speeds. I wouldn't be at all surprised if they were comparing a 900MHz one to a 1.8GHz Atom. Or, if compared with the A100, it'd be a 600MHz CPU against, again, something as high as 1.8GHz. If either is the case then a 500MHz Atom doesn't sound that great at all.

Sure, these will be viable for handhelds, if that's what people want to use. The question is if they'll be competitive with ARM in power/performance ratios, and at this point I'm strongly doubting it. If Intel can halve the power consumption without compromising performance in any way whatsoever then I'd love to know what they're doing, and why they're even bothering rolling out Atom as it is now at all.

EDIT: Finally found the relevant comparison: "The 2 GHz variant of the Silverthorne processor will operate at 1 volt and it will have performance equivalent to a first generation “Banias” Pentium M notebook processors circa 2003. " (http://blogs.zdnet.com/Ou/?p=987) - unfortunately this helps little, because 2003 saw the release of 900MHz to 1.7GHz Pentium-M's. Even if you restrict it to the very first batch they still went all the way up to 1.6GHz. But I think giving MHz for the Atom and not the Pentium-M is pretty telling.



Well, see this YouTube video: http://youtube.com/watch?v=rWFkRUOFVmM
An Intel guy was asked about the performance of Atom and he said "half of Celeron", which I assume means "half of a Celeron at the same clock". But I'm not sure which Celeron he was referring to, since there have been Celerons of so many generations :)

In any case most of this discussion is theoretical at this point. In this industry, never believe what the hardware vendors tell you until some independent performance tests come out. Wait a couple of months till the Atom-based systems come out: the new Eee and the much-hyped MIDs. Once we get some independent reviews of performance and battery life, and an idea of device prices, then we will know better. Right now all that we have is some info and PR videos fed to us by Intel.

The same goes for the Cortex A8 and A9. ARM claims good numbers but we don't have a device in our hands. At most, what people have so far are dev boards. In the comments on the article on the Beyond3D site, you will also notice strong displeasure with the Cortex A8. And the A9 hasn't appeared in any real devices either. In most cases we only have numbers like "web page rendering" and Dhrystone MIPS, neither of which maps easily to a real-world workload for a device like the Pandora, for example.

Finally, technology alone has never been the deciding factor in this industry either. We cannot say what the big software vendors will do in the future or what kind of devices people will want. Maybe people simply don't want those MIDs or whatever and only want a smartphone, in which case there simply isn't a big enough market for Intel. Or take the Eee. Sure, the Eee has sold about half a million (correct?) units, but how many more will people buy? Maybe 1 million? 10 million? 50 million? No one knows that either. If these categories don't take off then I doubt Intel will put a lot of money behind the Atom.
 
Laurent said:
From my hacker point of view, what I find great about the Pandora is its chip. If it went the x86 route, I would not consider it anymore.
I had to ponder this comment for a bit... A question or two, if I may...

Will you be coding in ARM assembly, either inline or straight up assembly to an assembler?

If not, why does it matter what CPU you're running on so long as it gives sufficient performance to you with your C/C++, Pascal, C#, Java, etc. code?

Keep in mind, I'm not telling you that you have to shift your position. It's just that I find it an odd one to hold unless you're intrigued by trying to hand-optimize code or produce finely tuned code by hand.
 
Laurent said:
The only comparison Intel did was Atom vs OMAP2420 browsing performance. And you know what? They did it with no network connection; pages were locally stored. And it was a comparison against a 2- or 3-year-old chip. Pure marketing bullshit.
I'd hesitate to call it pure marketing bullshit. Peak performance for rendering HTML would be from static content on the local disk. All you're doing with a network connection is testing networking performance and storage performance (because it's typically dumped to RAM or "disk" locally FIRST before rendering it..).

As for the comparisons, considering that they're comparing against what's in the field (the OMAP2420, while 2-3 years old, is shipping and in use in some applications), it's something of a valid comparison. Marketing, yes. Spin present, yes. Is it total bullshit? No.

Is x86 great? No. Is ARM great? Not really, either. There's, by far, better architectures in the RISC camp. And... Before you get on about me not knowing much about it- I've programmed professionally on the following:

x86
x86_64 (Yes, it IS different... :D)
ARM
MIPS
Sparc
Power4

In either desktop, server, or embedded contexts.
 
Svartalf said:
Laurent said:
The only comparison Intel did was Atom vs OMAP2420 browsing performance. And you know what? They did it with no network connection; pages were locally stored. And it was a comparison against a 2- or 3-year-old chip. Pure marketing bullshit.
I'd hesitate to call it pure marketing bullshit. Peak performance for rendering HTML would be from static content on the local disk. All you're doing with a network connection is testing networking performance and storage performance (because it's typically dumped to RAM or "disk" locally FIRST before rendering it..).


A serious question: did they mention the specifications of the ARM and Atom based devices being used? And, importantly, the browsers? And the operating systems? I cannot find that information on any of the usual news sites. Does anybody have a link? If it turns out that they were using Pocket IE on ARM vs WebKit on x86, or 128MB RAM vs 1GB RAM on the ARM and Atom devices respectively, they have some explaining to do.
 
randomhack said:
A serious question: did they mention the specifications of the ARM and Atom based devices being used? And, importantly, the browsers? And the operating systems? I cannot find that information on any of the usual news sites. Does anybody have a link? If it turns out that they were using Pocket IE on ARM vs WebKit on x86, or 128MB RAM vs 1GB RAM on the ARM and Atom devices respectively, they have some explaining to do.
I don't know the answer to that question - however, the amount of memory really becomes relevant when you start considering the size of the content. Pocket IE on ARM is actually going to perform similarly to Minimo on a Nokia N8x0 machine or on an Eee PC (if built for it) under similar load situations.
 
Svartalf said:
randomhack said:
A serious question: did they mention the specifications of the ARM and Atom based devices being used? And, importantly, the browsers? And the operating systems? I cannot find that information on any of the usual news sites. Does anybody have a link? If it turns out that they were using Pocket IE on ARM vs WebKit on x86, or 128MB RAM vs 1GB RAM on the ARM and Atom devices respectively, they have some explaining to do.
I don't know the answer to that question - however, the amount of memory really becomes relevant when you start considering the size of the content. Pocket IE on ARM is actually going to perform similarly to Minimo on a Nokia N8x0 machine or on an Eee PC (if built for it) under similar load situations.


a) The N8x0 devices do not use Minimo. They use MicroB, a port of Firefox done by Nokia.

b) There is a considerable speed difference between browsers built for the same device. The Firefox people are doing some testing of a mobile FF3 on the N8x0 and they find it to be WAY faster than MicroB, both tested on the N810.

Some benchmarks: http://www.0xdeadbeef.com/weblog/?p=349
As you can see, there is almost a 3x-4x difference between browsers running on the same platform but derived from different versions of the Gecko engine.

For the desktop: an informal benchmark here says that the daily builds of WebKit are almost 3x as fast as the current Safari: http://blogs.computerworld.com/safari_is_a..._get_crazy_fast
Again, as you can see, there is a considerable speed difference between different builds of the SAME engine.

So, well... I would say "which browser" is a very important question. By the way, as a side note, it's interesting that Intel did not include any heavy AJAX-based sites.

edit: I am simply not believing the 4.5-6.1x number till somebody confirms it in third-party tests. I definitely expect the Atom numbers to be much better than the OMAP2420's, but how much better remains to be seen.

edit: I also think the discussion of Atom is reaching somewhat ridiculous proportions, so I am not going to post anything more about this topic till we get real devices out.
 
Laurent said:
As far as recompilers are concerned, you will have to schedule by hand. But did you have to do that for the GP2X ARM9?
For gcc, the back-end has been available for months and it's open-source, so it's up to you to make it better if you feel the need :p
There's no way I'm diving into gcc whether it needs it or not, I'd much sooner write my own ASM. I don't schedule for ARM9 in my recompiler (it's really not that sophisticated, and you'll be very hard pressed to find a recompiler that is), but there's much less to actually schedule for than on Cortex. Loads would usually be naturally surrounded by other instructions anyway.

Laurent said:
Though I can't comment too much on that, as you can guess, keep one thing in mind: a processor takes several years from the early design decisions before it comes into your hands. Now project yourself back, say 4 or 5 years. Do you think there were many out-of-order chips in the low-power embedded market? Did the UMPC concept even exist at that time? What CPU power did PDAs and smartphones need? The market has evolved, ARM has moved forward, but Intel has stepped back.
Maybe I noticed Cortex-A8 a little late (after all, it hasn't even been a commercial option until what, last year? Maybe 2006?) but it hardly seems to be 4 or 5 years old, and A9 does seem to be coming on its heels. The minor number change follows this, despite it being a major architectural revision. I think that A8 was probably a transitional chip and will never really be used nearly as much as ARM11 or Cortex-A9 will be unless OMAP3xxx catches on heavily.

randomhack said:
The same goes for the Cortex A8 and A9. ARM claims good numbers but we don't have a device in our hands. At most, what people have so far are dev boards. In the comments on the article on the Beyond3D site, you will also notice strong displeasure with the Cortex A8. And the A9 hasn't appeared in any real devices either. In most cases we only have numbers like "web page rendering" and Dhrystone MIPS, neither of which maps easily to a real-world workload for a device like the Pandora, for example.
I didn't see any "strong displeasure" towards Cortex-A8 at all, merely indications that A9 is a certain percentage faster than it. That is completely to be expected for an entirely new generation of CPU core. I can't see why anyone would be critical of A8 when it's a major move forward over ARM11 while still being quite low power, as opposed to Silverthorne, which looks more like something VIA would roll out (ironically, VIA's Isaiah actually looks like a better chip). There are more benchmarks than that, Dhrystone DOES mean something (remember, we're testing CPUs here, not the memory subsystem), and we've even seen numbers for Pandora (although probably nowhere near what it'll be like when we have final revisions of hardware and more optimized code).

Svartalf said:
Is x86 great? No. Is ARM great? Not really, either. There's, by far, better architectures in the RISC camp. And... Before you get on about me not knowing much about it- I've programmed professionally on the following:
x86 and ARM are not architectures, they are instruction sets. If indeed you do mean to talk about instruction sets, then please explain why you think there are other RISC ISAs that are far superior to ARM. And no offense, but please do it with actual reasons and not just by saying how many years you've programmed in such and such for so and so. I'm getting kind of tired of that general line of argument from people.

For the record, pretty much everyone I've ever talked to who has programmed in multiple ISAs thinks that ARM is the most elegant they have used. I'm familiar with all of the instruction sets you've listed and I can't think of any of them that offers conditional execution, folded barrel shifting, optional flag setting, PC as a register, and the flexible addressing modes ARM does (I don't mean none of them offer any of this, but none to the degree ARM does). There are benefits, but they're mainly limited to more registers or lack of flags. MIPS and SPARC in particular are pretty bottom of the barrel when it comes to power per opcode.
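As a small concrete example of what conditional execution and optional flag setting buy you, here's the classic subtraction-based GCD written in C, with a rough sketch of the ARM loop in a comment (exact output obviously depends on the compiler; this is just the shape of it):

CODE
/* Greatest common divisor by repeated subtraction (assumes a, b > 0). */
unsigned gcd(unsigned a, unsigned b)
{
    while (a != b) {
        if (a > b)
            a -= b;
        else
            b -= a;
    }
    return a;
    /*
     * The whole loop can map onto something like:
     *     gcd: CMP   r0, r1        ; compare a and b, set the flags
     *          SUBGT r0, r0, r1    ; a -= b, executed only if a > b
     *          SUBLT r1, r1, r0    ; b -= a, executed only if a < b
     *          BNE   gcd           ; the SUBs don't set flags, so the
     *                              ; CMP result still decides the branch
     * No branches inside the loop body, and one flag-setting
     * instruction per iteration.
     */
}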

EDIT: Since I figure someone's gonna say it, yes I know ISA stands for "instruction set architecture." But usually when people say architecture they mean a particular CPU design.
 
That Beyond3D editorial is full of inaccuracies, but it doesn't really matter. In the end we're all just speculating about the statistics of hardware that is at least a year away from actual production. Intel and ARM themselves don't know how Moorestown and A9 will compare because neither of them actually exist yet.

The facts are that ARM processors will probably continue to increase in computing power while Intel processors will probably continue to decrease in power consumption. In a year or two they'll meet in the middle and we can have this discussion all over again.

I'm just happy the whole video driver thing is sorted out. Remember the video driver thing?
 
Chip said:
That Beyond3D editorial is full of inaccuracies, but it doesn't really matter. In the end we're all just speculating about the statistics of hardware that is at least a year away from actual production. Intel and ARM themselves don't know how Moorestown and A9 will compare because neither of them actually exist yet.
What inaccuracies? If you're going to say that, please clarify. This is coming from someone who himself admitted to not understanding the technology involved, right?

Chip said:
In a year or two they'll meet in the middle and we can have this discussion all over again.
Now THAT'S speculation.
 
Exophase said:
What inaccuracies? If you're going to say that, please clarify. This is coming from someone who himself admitted to not understanding the technology involved, right?
If it will make you happy...

QUOTE
If Intel's management and marketing personnel had bothered to properly analyze the handheld market rather than trying to figure out how to impress the press with empty rhetoric, they'd have concluded they shouldn't even bother. Silverthorne is a good design for UMPCs/MIDs/Ultraportables, but that's about it really. Any mobile phone manufacturer or carrier which is seriously considering making designs based on this architecture should seriously reconsider its strategic planning process.

Silverthorne was designed for, and is only being marketed for, the UMPC/MID/ultraportable market. Even Intel isn't claiming you should stick it in a smartphone. He's basically arguing that it is only good at what it is supposed to be good at.

QUOTE
Intel claims that the 1.6GHz Silverthorne is 4.1-6.5x faster than an ARM11 400MHz core at Internet Browsing. Great - too bad that doesn't seem to be much faster, if it's even faster at all, than 40nm Cortex-A9 implementations coming out in the same timeframe as Moorestown

I'd be very interested to know how he got web rendering performance data for a processor that doesn't exist yet.

QUOTE
This kind of problem won't stop the Intel marketing department from hyping this to infinity and back though by using the most ridiculous of metrics; their 'average' power consumption is measured under the maximum sleep mode (C6) on 80-90% of the time. Errr, yeah, sure, why not - but that's not very comparable to anything else, now is it?

According to the chart he links to in the previous sentence, 'average' power consumption is measured while running the BAPCo MobileMark’05 Office Productivity suite under WinXP. The idle ratings are measured in C6 sleep mode.


Exophase said:
Chip said:
In a year or two they'll meet in the middle and we can have this discussion all over again.
Now THAT'S speculation.


Then it's Intel's engineers doing the speculating.
QUOTE
Chandrasekher provided a sneak peek at Moorestown that consists of a system on chip (SOC) design combining the CPU, graphics, video and memory controller onto a single chip. A Moorestown-based MID will have idle power that will be 10x lower than the 2008 Menlow design, enabling longer battery life in smaller form factors.
 
Chip said:
Silverthorne was designed for, and is only being marketed for, the UMPC/MID/ultraportable market. Even Intel isn't claiming you should stick it in a smartphone. He's basically arguing that it is only good at what it is supposed to be good at.
Read the bottom of the article; it's been amended to clarify that he's talking about all of Intel's slated low-power chips announced thus far.

Chip said:
I'd be very interested to know how he got web rendering performance data for a processor that doesn't exist yet.
Do you know anyone who has a Silverthorne for testing? He's comparing Intel's claims with ARM's (you don't think ARM doesn't have working reference silicon, I hope). More importantly, he's illustrating the flaw in Intel comparing Silverthorne with a CPU announced 6 years ago. Just one of Intel's many marketing tricks which they'll use to try to overtake the mobile sector.

Chip said:
According to the chart he links to in the previous sentence, 'average' power consumption is measured while running the BAPCo MobileMark’05 Office Productivity suite under WinXP. The idle ratings are measured in C6 sleep mode.


No, he said that the benchmark was in C6 80-90% of the time, not 100% of the time. This test didn't say what the CPU load was like. Don't be fooled by the fact that it's running a "benchmark"; for the numbers to be this far below TDP it's unlikely that it was using very much CPU time. It could very well be I/O bound. The idle measurement, on the other hand, is clearly for 100% time in C6. There's no reason why it'd be anything else.

Nothing in that document talked about the numbers, so I'm assuming they were taken from somewhere else.

Chip said:
Then it's Intel's engineers doing the speculating.
QUOTE
Chandrasekher provided a sneak peek at Moorestown that consists of a system on chip (SOC) design combining the CPU, graphics, video and memory controller onto a single chip. A Moorestown-based MID will have idle power that will be 10x lower than the 2008 Menlow design, enabling longer battery life in smaller form factors.



All this says is that they're lowering idle power consumption. Although that is important for devices that stay in standby all the time, it means little with respect to the big picture of performance, and once it hits a certain point further reductions will mean less and less (and I'm sure ARM will still beat them in this area because they've been working on it for so much longer - of course Intel can make big improvements on their own design, because they're just starting out).

Looking at things right now, Intel is taking a major dive in performance and indications are that they'll take even more dives to try to get closer to ARM's power consumption. Meanwhile, ARM is getting faster while not using more power. In fact, all indications suggest that Silverthorne and Cortex-A9 will be pretty close in terms of performance, with A9 offering much better power consumption.

What I'm seeing from the big websites is that the more people know about CPU architecture, the less thrilled they seem about the direction Intel is moving in. This doesn't just go for Beyond3D, but ArsTechnica as well. There seems to be an underlying assumption from everyone that ISA is irrelevant when it comes to how a CPU performs, and that whoever puts in the most hard work and motivation will get the best performance-per-watt. In reality, running x86 code comes with overhead. That overhead doesn't necessarily scale linearly with the complexity of the chip, which is why it was eventually swallowed in desktop CPUs, where brute-force architectural optimizations made x86's deficiencies irrelevant. But on handhelds that overhead hasn't been shrinking, because people expect the same power consumption out of their handheld CPUs (yes, dies get smaller, but transistor counts go up to match). And batteries haven't been getting better fast enough to compensate.
 
Chip said:
Intel and ARM themselves don't know how Moorestown and A9 will compare because neither of them actually exist yet.

I can state that at least half of this statement is wrong :p
 
Svartalf said:
Will you be coding in ARM assembly, either inline or straight up assembly to an assembler?

If not, why does it matter what CPU you're running on so long as it gives sufficient performance to you with your C/C++, Pascal, C#, Java, etc. code?

Keep in mind, I'm not telling you that you have to shift your position. It's just that I find it an odd one to hold unless you're intrigued by trying to hand-optimize code or produce finely tuned code by hand.
I don't know yet if I will get a Pandora or not. What I am sure of is that if I did get one, it would be to play at the assembly level, both on the ARM core and the C64x. I have various projects in mind. Too bad craigix found no time to answer my e-mail :(

Svartalf said:
Laurent said:
The only comparison Intel did was Atom vs OMAP2420 browsing performance. And you know what? They did it with no network connection; pages were locally stored. And it was a comparison against a 2- or 3-year-old chip. Pure marketing bullshit.
I'd hesitate to call it pure marketing bullshit. Peak performance for rendering HTML would be from static content on the local disk. All you're doing with a network connection is testing networking performance and storage performance (because it's typically dumped to RAM or "disk" locally FIRST before rendering it..).
I disagree (surprise :p): most of the time rendering performance is hidden by network speed, or lack of it. It has almost nothing to do with end user experience.

I have been playing with standard benchmarks (from Dhrystone up to SPEC2K) and I can tell you it's easy to double or halve the speed, even for SPEC2K, depending on OS conditions (and no, I am not talking about a loaded OS, I am just talking about funny TLB effects).

It's like the PC world: do you really look at synthetic benchmarks when you build your PC? Or do you look at the speed of the type of games or applications you will use?

So when I am presented with the Intel slide about browser performance, I call it pure bullshit (also note that others in this thread have already pointed out how bad this slide is due to the lack of real information about the test conditions).

QUOTE
Is x86 great? No. Is ARM great? Not really, either. There's, by far, better architectures in the RISC camp. And... Before you get on about me not knowing much about it- I've programmed professionally on the following:

x86
x86_64 (Yes, it IS different... :D)
ARM
MIPS
Sparc
Power4

In either desktop, server, or embedded contexts.



So I have more experience than you :)

I seriously hate the x86(_64) ISA; any RISC is so much more fun to program :D

As far as RISC ISAs are concerned, MIPS is the cleanest one, though I don't know what it has become; I have not used it in 10 years.
 
Exophase said:
There's no way I'm diving into gcc whether it needs it or not, I'd much sooner write my own ASM. I don't schedule for ARM9 in my recompiler (it's really not that sophisticated, and you'll be very hard pressed to find a recompiler that is), but there's much less to actually schedule for than on Cortex. Loads would usually be naturally surrounded by other instructions anyway.

Scheduling wouldn't only affect loads and stores. And C-A8 support in gcc is located in a few files. The scheduler description is not that hard to read :)

QUOTE
Maybe I noticed Cortex-A8 a little late (after all, it hasn't even been a commercial option until what, last year? Maybe 2006?) but it hardly seems to be 4 or 5 years old, and A9 does seem to be coming on its heels.

The 4 or 5 years was from design start to market.

QUOTE
There are more benchmarks than that, Dhrystone DOES mean something (remember, we're testing CPUs here, not the memory subsystem), and we've even seen numbers for Pandora (although probably nowhere near what it'll be like when we have final revisions of hardware and more optimized code)

Dhrystone is the worst of the benchmarks currently quoted by the industry. It certainly doesn't mean anything, trust me. For instance, its rules forbid inlining, because if you inline stuff, you can propagate so many constants that it runs incredibly fast. You also can't compile the two files in a single compilation, because again information propagation would kill it. Now, is that representative of how developers work? And is your code spending most of its time doing memcpy and strcmp? :)
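Here's a toy illustration of why those rules exist (this has nothing to do with the actual Dhrystone source; it's just the general effect): once the compiler can see through the call, constant propagation makes the measured "work" evaporate.

CODE
/* file1.c -- the "kernel" being timed. Kept in its own translation unit,
 * the compiler has to actually execute the loop at run time. */
int proc(int x)
{
    int r = 0;
    for (int i = 0; i < 100; i++)
        r += x * i;
    return r;
}

/* file2.c -- the driver. Compile both files together (or inline across
 * them): x is known to be 3, the loop can be constant-folded, and the
 * whole "benchmark" can collapse to "return 14850;". */
int proc(int x);

int main(void)
{
    return proc(3);
}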

QUOTE
x86 and ARM are not architectures, they are instruction sets. If indeed you do mean to talk about instruction sets, then please explain why you think there are other RISC ISAs that are far superior to ARM. And no offense, but please do it with actual reasons and not just by saying how many years you've programmed in such and such for so and so. I'm getting kind of tired of that general line of argument from people.

For the record, pretty much everyone I've ever talked to who has programmed in multiple ISAs thinks that ARM is the most elegant they have used. I'm familiar with all of the instruction sets you've listed and I can't think of any of them that offers conditional execution, folded barrel shifting, optional flag setting, PC as a register, and the flexible addressing modes ARM does (I don't mean none of them offer any of this, but none to the degree ARM does). There are benefits, but they're mainly limited to more registers or lack of flags. MIPS and SPARC in particular are pretty bottom of the barrel when it comes to power per opcode.

EDIT: Since I figure someone's gonna say it, yes I know ISA stands for "instruction set architecture." But usually when people say architecture they mean a particular CPU design.
The ARM architecture has nothing to do with any implementation of it. The ARM architecture describes: an ISA, a system coprocessor, a memory model. At least that's the way ARM talks. For instance, the Cortex-A8 is an implementation of the ARMv7-A architecture.

MIPS is much more elegant than ARM. The existence of flags is a pain for designers. The complex addressing modes are also a pain. The existence of multiple ISAs in the same architecture is a pain. The non-orthogonal opcode encoding is a pain.
 
Laurent said:
Scheduling wouldn't only affect loads and stores.
What would scheduling affect besides loads? I don't know of any other important case where the ordering of the instructions can add stalls on ARM9.

Laurent said:
And C-A8 support in gcc is located in a few files. The scheduler description is not that hard to read :)
Maybe for someone who is experienced hacking GCC. Again, there's no way I'd be diving into it for something like this. Obviously you know what you're doing with it.

Laurent said:
The 4 or 5 years was from design start to market.
If that's the case then either A9 was being developed concurrently with A8, or it took much, much less time to develop. It's not as if out-of-order execution was invented 5 years ago, and for that matter Silverthorne is being released hardly any later than A8 is being adopted, so the same argument could apply to both (except there was no superscalar ARM before this, and x86 has been superscalar for over a decade).

Laurent said:
Dhrystone is the worst of the currently industry quoted benchmarks. It certainly doesn't mean anything, trust me. For instance, its rules forbid inlining, because if you inline stuff, you can propagate so many constants that it runs incredibly fast. You also can't compile the two files in a single compilation, because again information propagation would kill it. Now is that representative of how developers work? And is your code spending most of its time doing memcpy and strcmp? :)
I'm not really getting how any of this means it means nothing; it just means that it has to be compiled a certain way, although I suppose the lack of certain optimizations will force the compiled code down to something unpleasant. I still think the numbers have shown something in terms of relative performance, even if the test is flawed.

Laurent said:
The ARM architecture has nothing to do with any implementation of it. The ARM architecture describes: an ISA, a system coprocessor, a memory model. At least that's the way ARM talks. For instance, the Cortex-A8 is an implementation of the ARMv7-A architecture.
I don't know how ARM defines architecture, but this isn't just about ARM. I've never heard of anyone refer to x86, the ISA, as an architecture. I'm going to assume ISAs are what were being referred to anyway. I do believe that "architecture" has connotations of physical design (like a building being made) rather than a more general language and model. I take it you call ARM9, ARM11, etc "implementations" then? Or does ARM have a more suitable term?

Laurent said:
MIPS is much more elegant than ARM. The existence of flags is a pain for designers. The complex addressing modes are also a pain. The existence of multiple ISAs in the same architecture is a pain. The non-orthogonal opcode encoding is a pain.
When I say elegant I mean for the PROGRAMMER, not someone designing a CPU that uses it. For someone citing years of experience programming it and not designing CPUs that utilize it, I can only think that's what he meant, and for the many more people who program it than design CPUs around it, I can only guess that's what's going to be the relevant factor (even if far fewer people use ASM now).

Of course MIPS is simple to implement; that's a major part of its premise. It's also a major compromise in what you can do with it. The simple encoding and lack of addressing modes really require multiple instructions where ARM would not (a great example is picking a random element off of an array, as in the sketch below). Lack of flags I'm okay with, but that's more something you can get away with when you have more registers, which ARM deliberately avoided in favor of more flexibility. For as simple as it is, simple by itself does not mean elegant. It only succeeds in being elegant if it can accomplish the same things with its simpler set of tools - in some ways it can, but in other ways it really can't.
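Just as a sketch of what I mean (actual compiler output will differ in the details), loading an arbitrary element of a word array:

CODE
#include <stdint.h>

/* Fetch the i-th 32-bit element of a table. */
int32_t pick(const int32_t *table, int i)
{
    return table[i];
    /*
     * On ARM the scaled index can fold into the load's addressing mode,
     * so this can be a single instruction, roughly:
     *     LDR r0, [r0, r1, LSL #2]
     * A typical MIPS sequence has to spell the address arithmetic out:
     *     sll  $t0, $a1, 2
     *     addu $t0, $a0, $t0
     *     lw   $v0, 0($t0)
     */
}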

And don't even bring Thumb into this while ignoring MIPS16, which is a concurrent option for MIPS designs.

I once knew a professor (he worked with HP on the Dynamo project) who taught a CPU design class that implemented ARM. He told me that despite how it would appear to someone emulating it in software, ARM was in fact not at all complex to decode. If undergraduate students can do it then I suspect he can't be too far off the mark on that one, even if MIPS is easier (at an obvious price). At any rate, wouldn't you think that by now other architectural (and no, I'm not using the word the way you say ARM does) complications have long since eclipsed the difficulty of decoding fixed-length instructions?
 