Is x86/x64 slowly dying?


The core market for x86-64 was desktops, which let it broaden into servers on the strength of code compatibility across desktops, workstations, and servers. ARM market penetration into these spaces has been minimal.

ARM's core market is smartphones and related handsets, which x86-64 never managed to really penetrate. Architecturally, Intel has no answer to ARM's big.LITTLE strategy, and ARM implementations fall somewhere between Intel's out-of-order x86-64 Atoms (Silvermont forward) and the Core M series, depending on model and benchmark. Apple's much-ballyhooed claim, via AnandTech, that its processors beat Core M is likely overstated, and not of particular importance when an iPad Pro can't exactly run desktop-grade application software.

Intel's attempt to push Atom into smartphones failed, and they admitted as much in the past couple of years when they shut that branch of Atom architecture development down. Mind that since the Atom architecture lives on in their current Pentium and Celeron lines, it's not really dead.

At this point there are no indications of any serious market shifts in the works. ARM advocates are hoping for the ARMv8 ISA to gain some server share in the niche that the zillion-Atom microserver market was supposed to serve, but said zillion-Atom server market didn't exactly materialize for Intel...

As for the decoder, the problem is that it must run constantly yet doesn't directly contribute to actually processing anything. Intel's damage-control claims that there wasn't a meaningful difference date from when the Cortex-A9 and A15 were relevant, and Intel was trying to tell everyone how they'd be sweeping the smartphone market. Whether that was true then is highly in doubt, but at this point the discussion is the validity of Apple's antics with ARM versus Core M.
http://www.anandtech.com/show/9766/the-apple-ipad-pro-review/4
 
http://linuxgizmos.com/nvidias-new-jetson-tx2-module-runs-linux-on-tegra-parker-soc/

"The Nvidia blog post offers far more details on the two modes. It also describes benchmarks showing the Jetson TX2 in Max-P mode running at under 15 Watts beating a 200W system running an Intel Xeon E5-2690 v4 SoC. The test measures deep learning inference throughput (images per second) using the GoogLeNet deep image recognition network."

That mostly shows that a GPU is much more efficient than a CPU at that specific workload. It says nothing about the CPU architecture itself, so it's a completely pointless and uninteresting comparison.
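
For context on what "images per second" means in that benchmark, here's a minimal sketch of how inference throughput is typically measured. The "model" is just a placeholder matrix multiply standing in for a real GoogLeNet forward pass, and the dimensions and batch sizes are made-up assumptions, not the benchmark's actual setup:

```python
import time

import numpy as np

# Placeholder "model": one matrix multiply standing in for a real
# GoogLeNet forward pass, so the sketch runs without a DL framework.
# All dimensions here are invented, not GoogLeNet's real ones.
INPUT_DIM = 64 * 64 * 3
WEIGHTS = np.random.rand(INPUT_DIM, 1000).astype(np.float32)

def dummy_infer(batch):
    return batch @ WEIGHTS

def measure_throughput(infer_fn, batch_size=64, n_batches=100):
    """Return inference throughput in images per second."""
    batch = np.random.rand(batch_size, INPUT_DIM).astype(np.float32)
    infer_fn(batch)  # warm-up run so one-time setup isn't counted
    start = time.perf_counter()
    for _ in range(n_batches):
        infer_fn(batch)
    elapsed = time.perf_counter() - start
    return batch_size * n_batches / elapsed

print(f"~{measure_throughput(dummy_infer):.0f} images/sec")
```

Run that same loop against a GPU-backed model on one side and a CPU-only model on the other and you have exactly the comparison Nvidia made, which is why the result mostly reflects the accelerator, not the CPU ISA.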
 
Of course it is biased, but if you compare the size and the performance per watt, that should raise some questions.
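
As a back-of-envelope illustration using only the wattages quoted above, and the (hypothetical) assumption of merely equal throughput, since the article only says the TX2 "beat" the Xeon system:

```python
# Back-of-envelope perf/W from the figures quoted above. The article
# only says the TX2 "beat" the 200 W Xeon system, so equal throughput
# is taken as a conservative, hypothetical lower bound.
tx2_watts = 15.0    # "under 15 Watts", per the quote
xeon_watts = 200.0  # "a 200W system", per the quote
advantage = xeon_watts / tx2_watts
print(f"TX2 perf/W advantage: at least {advantage:.1f}x")  # ~13.3x
```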
 
Of course it is biased...

Nvidia SOC vs. Intel CPU: "The test measures deep learning inference throughput (images per second) using the GoogLeNet deep image recognition network."

Biased is an understatement. I'm sure my 3-year-old son could beat me in a weightlifting contest given a crane... in that case, too, the only task left for my son would be pressing the "up" button... which was essentially the role of the actual ARM CPU here.

But for many customers, of course, this is exactly what they are looking for.
 
Of course it is biased, but if you compare the size and the performance per watt, that should raise some questions.

But the comparison here is so far off it *doesn't* make sense to compare anything. So far off that I consider it well on the side of "lies" rather than "misleading marketing". It's not news that general-purpose hardware will be beaten by specialized hardware in its own domain; that's why the specialized hardware exists.

A better comparison would be an Intel chip paired with a GPU. I have no doubt Nvidia would still win, but it would be a lot closer, and it would be something a little nearer to a meaningful comparison.
 
Yes, I also hadn't immediately realized it was a comparison between a largely GPU-based solution and an Intel CPU-only solution, though to be fair I hadn't given it much thought. While it's a valid, comparable use case for a few users, for many more it's a misleading comparison.
 