Is X86/X64 slowly dying?


Kippykip

BFG 9000
Joined
Sep 6, 2016
Messages
517
Age
25
Location
'STRAYA
Website
kippykip.com
It seems to me that X86/X64 chips are kinda bad when it comes to power efficiency and heat compared to other CPU architectures. Even Intel is dropping their Atom brand to focus on ARM, apparently.
Feels like the only thing saving it is legacy support and compatibility. What do you all think? Or am I just being stupid?
 
It seems to me that X86/X64 chips are kinda bad when it comes to power efficiency and heat compared to other CPU architectures.
Citation required? I'm sure it was Exophase (sorry if it wasn't) who said that the later (last?) Atoms were very competitive with ARM chips.

Even Intel is dropping their Atom brand to focus on ARM, apparently.
Doesn't mean they're worse, probably just means they're not selling...
 
Citation required? I'm sure it was Exophase (sorry if it wasn't) who said that the later (last?) Atoms were very competitive with ARM chips.


Doesn't mean they're worse, probably just means they're not selling...
X86/X64 still seems to be the king of raw speed and calculations, but ARM processors, for example, seem to be catching up with budget X86 processors while producing less heat, and so using less energy for the same processing work.
http://www.hectronic.se/embedded/arm-or-x86/arm-or-x86.php
Both seem to have their good and bad points, but I was thinking back to when Apple dropped PowerPC and moved over to Intel X86/X64, since the PowerPC architecture had become slow and wasted a lot of power for similar processing work compared to the X86 architecture evolving at the time.
I'm just wondering: is it possible for other architectures to evolve faster than the current X86/X64 in the future?
 
I'm just wondering: is it possible for other architectures to evolve faster than the current X86/X64 in the future?
Maybe this is the question I should have asked the first time: do you think we'll be using x86/x64 for the rest of human existence? That will give you your answer.
 
I simply want a hybrid computer with all architectures inside.
PowerPC, X86, ARM, 68000 and others :)

No more CPU emulation would be needed then, and every piece of software would just use the right CPU.
ARM-based chips and all the others would simply be coprocessors.

All in One :)
 
Plus Z80, 6502, MIPS, the SH series, plus all the different variants of those - are you sure you don't just want a beefy FPGA that can emulate all of those, many at the gate-exact level? You'd still probably need an FPGA just to simulate all of the ASICs that old machines used to talk to storage devices and the like, plus some way to cope with different video hardware.

Myself, I'll stick with emulation on something small and efficient - something that's already been designed to get past all of those obstacles using impressive software skills. Emulating the CPU is rarely what causes incompatibility in the latest emulators; it's normally the ASICs or GPUs that are only covered by hacky HLE emulation, and so on.
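For anyone wondering what "emulating the CPU" actually boils down to, here's a minimal fetch-decode-execute sketch for a made-up three-opcode toy ISA (nothing here corresponds to a real chip); the hard parts in real emulators are everything around this loop - timing, the ASICs, the video hardware:

```python
# Minimal fetch-decode-execute loop for a hypothetical 3-opcode toy ISA.
# Everything here (opcodes, register count, encoding) is invented for illustration.

def run(program, steps=100):
    regs = [0] * 4          # four general-purpose registers
    pc = 0                  # program counter
    for _ in range(steps):
        if pc >= len(program):
            break
        op, a, b = program[pc]              # fetch
        if op == "LOADI":                   # decode + execute
            regs[a] = b                     # load immediate into register a
        elif op == "ADD":
            regs[a] = (regs[a] + regs[b]) & 0xFFFFFFFF
        elif op == "JNZ":
            if regs[a] != 0:                # jump to b if register a is non-zero
                pc = b
                continue
        pc += 1
    return regs

# Count down from 3 to 0 by repeatedly adding a register holding -1 (as 0xFFFFFFFF).
prog = [("LOADI", 0, 3), ("LOADI", 1, 0xFFFFFFFF), ("ADD", 0, 1), ("JNZ", 0, 2)]
print(run(prog))   # regs[0] ends up at 0
```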
 
@ingoreis & @levi: Well actually, replacing the decoding part of the CPU with an FPGA would get you a similar effect, but the microcode would be sooo heavy it wouldn't be worth it performance-wise. As Intel is looking into adding FPGAs to our CPUs, I think it would be great if it replaced the HD Graphics: use the FPGA as a video chip if you need one, or for specialised computation when you already have a GPU, but keep your processor x86-only.

Back on topic and joking aside: from what I've learned, what makes x86 inefficient is its CISC nature, while ARM is RISC. That means the decoder needed to translate the instruction set into the actual operations the cores execute takes up a non-negligible share of the total transistor count, which raises heat, production costs and die size. I think that was useful when you needed specialised instructions for specific jobs but weren't the perfect assembly programmer it took to make them run the fastest way. But now, in the age of assisting software and online communities, I think it's slowly becoming a moot point compared to the need to keep things simple, light, cheap and fast for the embedded market. I believe in ARM's ability to slowly replace our "IBM PC compatibles": Intel is getting more and more interested, Windows is starting to support it, the tech is quickly becoming on par performance-wise and real-time emulation is getting better. By the time we have to choose between x86 and ARM, most software needing computing power that emulation can't provide will probably be available on both. The x86 standard is good, but far from perfect: it's destined to either evolve or disappear.
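To make the decoder point a bit more concrete, here's a toy sketch - the encodings are invented for illustration and look nothing like real x86 or ARM - showing why fixed-width instructions are easy to slice up in parallel, while variable-length ones force you to decode each instruction before you even know where the next one starts:

```python
# Toy comparison of fixed-width vs variable-length instruction decoding.
# The encodings are invented for illustration; real x86/ARM are far messier.

def decode_fixed(stream, width=4):
    """RISC-style: every instruction is `width` bytes, so boundaries are trivial."""
    return [stream[i:i + width] for i in range(0, len(stream), width)]

def decode_variable(stream):
    """CISC-style: the first byte tells us how long this instruction is,
    so we can't find instruction N+1 without looking at instruction N first."""
    insns, i = [], 0
    while i < len(stream):
        length = (stream[i] & 0x03) + 1   # pretend the low 2 bits encode a length of 1..4
        insns.append(stream[i:i + length])
        i += length
    return insns

stream = bytes([0x00, 0x03, 1, 2, 3, 0x01, 9, 0x02, 7, 8])
print(decode_fixed(stream))     # boundaries known up front, easy to decode in parallel
print(decode_variable(stream))  # boundaries discovered one instruction at a time
```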
 
@ingoreis & @levi: Well actually, replacing the decoding part of the CPU with an FPGA would get you a similar effect, but the microcode would be sooo heavy it wouldn't be worth it performance-wise. As Intel is looking into adding FPGAs to our CPUs, I think it would be great if it replaced the HD Graphics: use the FPGA as a video chip if you need one, or for specialised computation when you already have a GPU, but keep your processor x86-only.

Back on topic and joking aside: from what I've learned, what makes x86 inefficient is its CISC nature, while ARM is RISC. That means the decoder needed to translate the instruction set into the actual operations the cores execute takes up a non-negligible share of the total transistor count, which raises heat, production costs and die size. I think that was useful when you needed specialised instructions for specific jobs but weren't the perfect assembly programmer it took to make them run the fastest way. But now, in the age of assisting software and online communities, I think it's slowly becoming a moot point compared to the need to keep things simple, light, cheap and fast for the embedded market. I believe in ARM's ability to slowly replace our "IBM PC compatibles": Intel is getting more and more interested, Windows is starting to support it, the tech is quickly becoming on par performance-wise and real-time emulation is getting better. By the time we have to choose between x86 and ARM, most software needing computing power that emulation can't provide will probably be available on both. The x86 standard is good, but far from perfect: it's destined to either evolve or disappear.

Yeah, that's why I said you needed a 'beefy' FPGA - basically shorthand for something more powerful than exists today in all respects.

Re graphics subsystems, they're basically all big SIMD machines these days - parallel chips that can take lots of inputs and do the same algebraic manipulation to them all in much less time than it would take a CPU to iterate over them all and do the same operations. In 3D mode they do this to apply matrices to 3D coordinates, rotating and translating objects in 3D space and then projecting them all to a 2D viewpoint. The actual chip space needed to rasterise that and drive some kind of video output (DVI/VGA etc.) is tiny in comparison. Nvidia showed us that with some small tweaks and an SDK you can use these chunks of silicon to do more general-purpose things than just manipulate 3D coordinates. It'd be super nice if Intel did the same with their on-board graphics bits.
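As a rough illustration of that "same operation over lots of inputs" pattern, here's a plain-Python sketch (no real GPU API involved) of the kind of kernel a SIMD unit or GPU spreads across many lanes at once:

```python
import math

# Apply one rotation matrix to a whole batch of 3D points: the same small
# arithmetic kernel repeated for every input point.

def rotation_z(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def transform(points, m):
    # A CPU walks this loop one point at a time; a GPU runs many
    # iterations of the same arithmetic in parallel.
    return [(m[0][0]*x + m[0][1]*y + m[0][2]*z,
             m[1][0]*x + m[1][1]*y + m[1][2]*z,
             m[2][0]*x + m[2][1]*y + m[2][2]*z) for (x, y, z) in points]

points = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0)]
print(transform(points, rotation_z(math.pi / 2)))
```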

Re ARM being RISC, I'm less convinced about that every time I read about a new development of theirs, since I first learned about Jazelle back in the day. The Pandora CPU contains whole video decoding subsystems that have never actually been used by anyone to the best of my knowledge. Partly that doesn't matter because these days you can make such large chips with fewer failures than you used to be able to, and thanks to scale reduction you can fit all of those transistors into a space not much bigger than the original chips, so they don't end up costing any more. But it does make me wonder if ARM will end up being edged out by some future instruction set or chip layout that goes even more RISC than ARM.
 
I don't know if it's dying but I'm buying an AMD Ryzen in 2017.
So it's not dead yet.
 
@levi: Yeah, ARM isn't very RISC anymore, but it's still far more so than x86-64. I still think reducing complexity is as effective as shrinking the process. As for FPGAs, I'm sure they'll still lag behind dedicated processors. That seems logical to me, but I can be proved wrong.
@benoitb: I planned to as well, but looking at the Ryzen 7 reviews, it seems the Zen architecture isn't for me. I had high hopes of overclocking a Ryzen 5 higher than Intel Core levels, but that isn't the case, and the per-core, per-clock performance is still too low - AMD's memory bandwidth can't keep up either. But if you're more interested in heavily multi-threaded tasks like video rendering than in games and sequential programs, then it is indeed pretty good for its price. I'll wait for the Ryzen 5 reviews anyway (in case of a sudden change, or more likely lower i5 prices), but I think I'll stay with Intel.
 
Each generation, the "legacy" decode portion of the CPU gets smaller relative to the size of all the other bits. The GPU bits (hmm... not literal bits... portions :) ) already make the decoder look like a "meh" part of the design these days. And both power consumption and heat are pretty much directly related to the relative size of the implementation...

No, IA architectures simply evolve; they won't disappear as long as Intel is around, and even then the IPR would certainly be sold on to someone who would start licensing it to vendors (and I'm sure they would literally queue up to get at it).

That's not to say the current state of "daily use" IA has much to do with what we used for MS-DOS during the late nineties, for example... "evolution" is just a pretty word for the process that actually means adding new stuff so cool it makes the old stuff practically irrelevant for most uses.

But incidentally, this is something that never ceases to amaze me: the backwards compatibility of both Intel CPUs and Windows! The only thing coming even close in this regard is VMS. This really is something built into the DNA of both MS and Intel, and a feedback loop of sorts.

I suspect at some point they will start sawing off the redundant old rotten limbs from their CPU designs, but probably no one will even notice. In fact, I think I read somewhere that such changes have already been made, but they don't really touch the average consumer, who doesn't expect the latest Core i5 to boot DOS 6.22 (given a suitable BIOS to begin with).

Had the architecture gone "stale" it would have died already; that happens to all computing-related things. But we have to remember, when we download a *-x86.tgz package, that it is using a naming convention Intel itself abandoned years ago... so... did x86 actually die already?

Oh, and I don't believe the ages-old RISC/CISC thing is a thing anymore... I was the one venting about that back in the nineties, when we were cluttered with Alphas, MIPSes, SPARCs and all the coolness... Then progress just happened: Intel went RISC-ish under the hood, the others died away, and that was all, folks :-(. No revolution where none was needed to begin with...

Edit addition:
Oh, should biocomputers ever become relevant, the likely scenario is: Intel buys a relevant startup, integrates the tech into whatever the current iteration of their architecture is, and then the bacteria start talking familiar opcodes :). This might not happen if the "base" in biocomputing stops being 2 (which is sort of the case with anything quantum at the moment)... but even then my bet would be on a compatibility layer (which is sort of the target case with anything quantum at the moment?-).
 
Well, I've always wondered about getting away from binary in the future. It's pretty limited in some ways, yet well suited to its use in others. If we could compute using more states (maybe switch to base 10?) we might get a speed boost, but how much?
 
Well, I've always wondered about getting away from binary in the future. It's pretty limited in some ways, yet well suited to its use in others. If we could compute using more states (maybe switch to base 10?) we might get a speed boost, but how much?
I'd assume it would affect the speed quite a bit, but I'm not sure how you would make something that doesn't use an ON/OFF circuit thingy.
 
Well, they've made MLC and TLC flash drives for a few years now, which sense multiple voltage levels per flash cell to store more than one bit per cell. For similar reasons this could make sense for the caches and even the register banks. And thinking it through, there could even be benefits to doing multi-level logic in the gates inside the ALU - certainly for addition and subtraction, you wouldn't need any more carry lines if they all supported the same number of levels. Each cell would probably need more than one transistor to check the levels, whereas in today's circuits each transistor effectively checks whether its inputs are in the 0 or 1 range, but if you can get more bandwidth through the same interconnects, that might outweigh the increase in transistors, I dunno.
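For a rough sense of the gain, the relationship is just bits per cell = log2(number of levels). A quick back-of-the-envelope sketch (the 10-level entry is hypothetical, the rest follow the usual SLC/MLC/TLC naming):

```python
import math

# How many bits fit in one cell if it can distinguish `levels` states,
# and how many cells you'd need for 1 KiB (8192 bits) of storage.
for name, levels in [("SLC", 2), ("MLC", 4), ("TLC", 8), ("hypothetical 10-level", 10)]:
    bits_per_cell = math.log2(levels)
    cells_for_1kib = math.ceil(8192 / bits_per_cell)
    print(f"{name}: {levels} levels -> {bits_per_cell:.2f} bits/cell, "
          f"{cells_for_1kib} cells per KiB")
```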
 
Maybe with better precision you could have at least an off/mid/full switch. Representing large numbers with fewer "bits" might make a difference in calculation times.
On one hand, the public might not see much of an advantage for their use cases by the time it gets as fast as the binary of its day.
On the other hand, Microsoft thought 640KB would be enough for 10 years (it lasted 6) and had the Internet "as a fifth or sixth priority" (in the early nineties).
 
Really, the balance has been moving away from "simple" instruction decoders towards compressed instruction sets, as cache use and memory bandwidth are getting relatively more expensive than the complexity of the frontend on the CPU itself (e.g. ARM Thumb) - and x86 could be considered a "compressed" instruction set: many common instructions are extremely short, so the instruction cache gets well used.
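A back-of-the-envelope way to see why density matters for the I-cache - the average instruction sizes below are rough assumptions for illustration, not measured figures:

```python
# Rough code-density comparison: how much a hypothetical hot code path weighs
# against a 32 KiB L1 instruction cache, for assumed average instruction sizes.
avg_bytes = {
    "Fixed 32-bit (classic ARM)": 4.0,
    "Mixed 16/32-bit (Thumb-2, assumed average)": 2.6,
    "Variable length (x86-64, assumed average)": 3.5,
}

instructions = 100_000          # size of a hypothetical hot code path
icache_bytes = 32 * 1024        # a 32 KiB L1 instruction cache

for isa, size in avg_bytes.items():
    footprint = instructions * size
    ratio = footprint / icache_bytes
    print(f"{isa}: ~{footprint / 1024:.0f} KiB of code, ~{ratio:.1f}x the L1 I-cache")
```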
 