Potential x86 (Non-ARM) SoC Alternatives


To be honest, I'd be rather careful when it comes to comparing x86 to ARM - as usual, everything's filled with FUD and misleading information. For example, concerning power consumption, many people simply forget that Intel mostly provides values for the CPU alone, while you usually get values for whole SoCs with ARM - the additional chipset of an x86 as well as all the other stuff that's usually already included in an ARM SoC can burn a lot of energy.

The whole x86 architecture is quite a mess; lots of people learn to really hate it when trying to code in assembler for it. AMD64 was an attempt to clean things up a bit, but a 64-bit architecture is still not really adequate for a mobile device (Linux's hybrid x32 ABI might be a good idea, though). With 32-bit x86 you'd still have the classic problem of finding a common base: you'd still need to compile specifically for the CPU in use to actually get the most out of it. Many programs are still being optimized for a 486 or 686 because there hasn't been any common base since then - and those old CPUs had, e.g., a very limited number of general-purpose registers. That is actually one of the reasons why AMD64 is often faster - for example, all AMD64 CPUs are guaranteed to have SSE2; the Athlon 64 defined a whole new common base to target with its new architecture.
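To make the common-base point concrete, here's a minimal sketch (my own illustration, not from any real project) of the run-time dispatch a 32-bit baseline build is forced into. It relies on GCC's __builtin_cpu_supports/__builtin_cpu_init builtins (GCC 4.8 and later, x86 targets only); the scale_* functions are hypothetical stand-ins for real optimized kernels:

/* The "common base" problem in practice: a binary built with e.g.
 * `gcc -m32 -march=i686` may not use SSE2 unconditionally, so it has
 * to detect the feature and dispatch at run time. An AMD64 build
 * (`gcc -m64`) can assume SSE2 outright, since it's part of the
 * architecture spec. (-mtune only changes instruction scheduling and
 * leaves the instruction baseline intact.) */
#include <stdio.h>

static void scale_baseline(double *v, int n, double s)
{
    for (int i = 0; i < n; i++)  /* plain 486/686-safe code path */
        v[i] *= s;
}

static void scale_sse2(double *v, int n, double s)
{
    /* a real version would be compiled with -msse2 or use intrinsics;
     * kept scalar here so the sketch stays self-contained */
    for (int i = 0; i < n; i++)
        v[i] *= s;
}

int main(void)
{
    double v[4] = { 1.0, 2.0, 3.0, 4.0 };

    __builtin_cpu_init();  /* explicit init; harmless when redundant */
    if (__builtin_cpu_supports("sse2"))
        scale_sse2(v, 4, 2.0);      /* guaranteed on any AMD64 CPU */
    else
        scale_baseline(v, 4, 2.0);  /* needed for the old baseline */

    printf("%.1f %.1f %.1f %.1f\n", v[0], v[1], v[2], v[3]);
    return 0;
}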

Staying with ARM, I would keep an eye on AMD: they are currently working on several high-performance ARM SoCs, some of them combined with Radeon GPUs. The focus is mainly on servers, but mobile variants are not out of the question. A strong and efficient GPU with proper open drivers would be quite awesome...
 
To be honest, I'd be rather careful when it comes to comparing x86 to ARM - as usual, everything's filled with FUD and misleading information. For example, concerning power consumption, many people simply forget that Intel mostly provides values for the CPU alone, while you usually get values for whole SoCs with ARM - the additional chipset of an x86 as well as all the other stuff that's usually already included in an ARM SoC can burn a lot of energy.
Speaking of FUD... lol. You really haven't bothered to actually read anything about the Z3770 SoC, but you're clearly quite sure that it all has to be FUD.

The Z3770, the whole SoC, has an SDP (Scenario Design Power) of 2.5W. It has seen some independent reviews that agree with those specs.

http://www.legitreviews.com/intel-atom-processor-z3770-bay-trail-first-look-and-performance-testing_123335/3

The whole x86 architecture is quite a mess; lots of people learn to really hate it when trying to code in assembler for it. AMD64 was an attempt to clean things up a bit, but a 64-bit architecture is still not really adequate for a mobile device (Linux's hybrid x32 ABI might be a good idea, though). With 32-bit x86 you'd still have the classic problem of finding a common base: you'd still need to compile specifically for the CPU in use to actually get the most out of it,
You're overstating that complexity AND rewriting history. Most people don't program in assembler; there simply isn't the need with modern hardware and compilers.
There's nothing wrong with compiling directly on the machine - many are doing that now with the Pandora.

Why would you prefer 32-bit x86 over 64-bit x86? The OS is going to be a Linux variant, and the Z3770 is a 64-bit CPU/SoC. There isn't really a good reason not to make the OS 64-bit if you have an SoC and OS that support it, right?

Correcting history...

Intel attempted to ditch the complexity of the x86 instruction set when they made the IA-64 Itanium line in 2001.

http://en.wikipedia.org/wiki/Ia64#Architecture

AMD came out with their AMD64 architecture in 2003, expanding the x86 instruction set and making it even more complex.

http://en.wikipedia.org/wiki/Amd64

Intel followed suit with its 64-bit x86 chips in 2004.

http://en.wikipedia.org/wiki/Amd64

Many programs are still being optimized for a 486 or 686 because there hasn't been any common base since then - and those old CPUs had, e.g., a very limited number of general-purpose registers. That is actually one of the reasons why AMD64 is often faster - for example, all AMD64 CPUs are guaranteed to have SSE2; the Athlon 64 defined a whole new common base to target with its new architecture.
Yes, AMD's 64-bit CPUs were faster than Intel hardware - roughly 2003-2005. From there on out, though, Intel has had perf/watt and max perf pretty well in its pocket.
Staying with ARM, I would keep an eye on AMD: they are currently working on several high-performance ARM SoCs, some of them combined with Radeon GPUs. The focus is mainly on servers, but mobile variants are not out of the question. A strong and efficient GPU with proper open drivers would be quite awesome...
The Intel part is out and available now on their 22nm process, and Intel has a 14nm process in the wings for next year. I could be wrong, but I don't see AMD catching up in perf/watt on mobile SoCs in the next couple of years.
Your opinion is welcome to vary.
 
To be honest, I'd be rather careful when it comes to comparing x86 to ARM - as usual, everything's filled with FUD and misleading information. For example, concerning power consumption, many people simply forget that Intel mostly provides values for the CPU alone, while you usually get values for whole SoCs with ARM - the additional chipset of an x86 as well as all the other stuff that's usually already included in an ARM SoC can burn a lot of energy.
Intel is the only one providing data sheets with actual TDP values, and yes, this applies to the entire SoC. I haven't seen an ARM-based SoC vendor provide a formal value like that.

Current-generation (and previous-generation) Atom mobile parts deployed in phones and tablets don't have a separate chipset; their SoCs are as integrated as their competitors'.

The whole x86 architecture is quite a mess; lots of people learn to really hate it when trying to code in assembler for it.
That's true, but it doesn't matter very much, except for the few people who have to write that assembly.

Frankly, if you've ever heavily optimized NEON for a Cortex-A8 or A9, you'll come to hate that too :p Not so much because of the ISA (although there's some garbage there) but because of the headache of decently hand-scheduling it.

With 32-bit x86 you'd still have the classic problem of finding a common base: you'd still need to compile specifically for the CPU in use to actually get the most out of it. Many programs are still being optimized for a 486 or 686 because there hasn't been any common base since then - and those old CPUs had, e.g., a very limited number of general-purpose registers. That is actually one of the reasons why AMD64 is often faster - for example, all AMD64 CPUs are guaranteed to have SSE2; the Athlon 64 defined a whole new common base to target with its new architecture.
Are you talking about compiling for arch or uarch here? Maybe what you said applies to conservative PC distros, but it doesn't to the Android NDK, for example, which assumes SSE2 as its baseline despite being 32-bit. The same would surely be true for software on something like the Pandora.

Staying with ARM, I would keep an eye on AMD: they are currently working on several high-performance ARM SoCs, some of them combined with Radeon GPUs. The focus is mainly on servers, but mobile variants are not out of the question. A strong and efficient GPU with proper open drivers would be quite awesome...
Right now the only ARM-based CPUs they've announced are those server ones. I'm not sure this is totally clear: ED wants an SoC now, not something that might be announced in many months or even years.

But I think the odds of Pandora 2 actually using Atom anything are pretty close to zero; it's probably not even worth entertaining arguments for it anymore.
 
But I think the odds of Pandora 2 actually using Atom anything are pretty close to zero; it's probably not even worth entertaining arguments for it anymore.
You're probably right. But it would have been so freekin' cool...
But yeah, ED's recent posts still seem to be leaning heavily towards another ARM SoC.

Apparently there are now 65 million Steam users:

http://www.engadget.com/2013/10/30/valve-steam-65-million-users/

I'm still betting that someone/some company is going to use a Z3000 series SoC to create a mobile SteamOS device - and muck it up horridly.
 
As you know, I figured the Z3770 would be a very good choice too, opening up some things and offering very solid perf/W. But if ED and notaz manage to get something out that runs most PNDs and otherwise really does a good job building on the original Pandora, only strengthening it in obvious ways, I think they'll have a really nice product. To that end something like the OMAP5 does make sense, and while I am a little nervous about the power consumption, I think it'll work out okay if people are conservative. Right now the Pandora's performance levels feel a lot better than you'd expect for such old hardware, and you just need a "little" more, so I think people will be happy keeping it in check.

It'd be great if they do go for a daughterboard design and release the interface for it. Then someone else could start a Kickstarter for making a Z3770 variant that you can swap out, maybe.
 
It'd be great if they do go for a daughterboard design and release the interface for it. Then someone else could start a Kickstarter for making a Z3770 variant that you can swap out, maybe.
How would that work within the given form factor? The current main board has all the ports, including USB and SD card slots, and the board is well populated. Where would a daughterboard go?

And given that ARM and Atom SoCs aren't exactly drop-in socket-compatible with each other, how would the bus(es) be handled to make that all work?

And why do you think a Kickstarter for such a thing would:

A ) Draw enough interest to succeed

B ) Draw enough funds and interest for Intel to actually give them the time of day and vend to them

C ) Intrigue Intel enough to give them their top-binned SKU, instead of an average or meh bin - Intel would rather provide top-bin SKUs to customers that matter and not take a loss on the SKUs that don't meet the mark

D ) Be able to fund and implement what is almost assuredly going to involve complex board fabrication for a niche of a niche? Especially when you're talking about putting it in a form factor that's smaller than what the manufacturer, aka Intel, is targeting for that SKU, which adds complications.
 
I am tired of defending the idea, so this is the last time I'm going to mention it:

There are a number of industry-standard (more or less) buses used to interface any non-SoC-specific hardware: for example, MIPI DSI or LVDS for the LCD, I²S for audio, and SDIO for storage and as a general-purpose high-speed bus for interfacing other hardware like WiFi. And USB, ofc. Since we're dealing with SoCs, these interfaces can be expected to be built in. Then there are interfaces that vary between SoCs and might have to be left unused, like PCIe, SATA, etc.

The daughterboard would either have to go somewhere around where the CPU currently is, or the board would have to be made slightly bigger by widening the unit somewhat.

For comfort reasons, the base of the unit can't be made slimmer anyway, so it might be possible to do some creative adjustment of the PCB resting points to accommodate the extra 3-4mm that the board would take.

An example of an SoC on a small board would be Gumstix.

And, as a last mention: there are ultra-low-profile board-to-board connectors readily available.
 
If I understand you correctly, you (and Exophase) are proposing a daughterboard that would take BGA processors that likely share nothing in their pinouts or power-path expectations, and that are not designed to be swapped out for each other - with significant potential complications from differing firmware and chipset requirements between these chips. Making all that work so it maps onto universal buses and pluggable power rails isn't a minor problem, and you're not inspiring my confidence that this idea has been thought through rather than amounting to "just cram it in, it'll work somehow."

I'm dubious of the idea that a small company with a single board designer, even one as good as that board designer is, can handle that kind of problem effectively. Let alone that board runs that may not crest a thousand units, at the sophistication I expect would be involved, would be practical or a financially sound concept. The no-space issue is really just the cherry on top.

It doesn't help that Intel lists the Z3770's package size at 17x17mm, which is already 2-3mm larger than the OMAP3 (measuring off my Beaglebone), and thus already stretches your estimates just on the basis of the board space requirements of the newer SoCs being discussed.
 
Ofc, it wouldn't just be the size of the BGA. It would have to accommodate the PMIC, the DRAM if it's not PoP, and any glue logic. It would more likely be the size of a Gumstix board, if anything.
 
Putting the SoC, RAM, PMIC, and possibly storage on a separate thin-profile board (like HardKernel used to do with the ODROID) is a good idea regardless of the ability to swap that board in and out. Had OPT done something like this the first time around, it would have saved a lot of money, and both ED and MWeston agree it would have been the preferable approach.
 
We've checked various modules, but they are either too thick or too big (at least the standard ones).
It would make the unit up to 4mm thicker, which just wouldn't work.
 
We've checked various modules, but they are either too thick or too big (at least the standard ones). It would make the unit up to 4mm thicker, which just wouldn't work.
We're talking about making your own module, not using someone else's.
 
But making your own module only adds costs and potential issues (connector problems after a while of use).

If you already have a PCB design, changing it over to a new SoC is basically as complicated as creating a module for another SoC.
 
Wow, it's like all the discussion we had about this years ago never even happened, and you don't remember saying that this would have been the preferable approach. From what I remember of talking to MWeston about this, he advocated it too.

It doesn't just add cost and issues; it improves effective yield, or at least the average cost per unit. If you have a defect in one board and not the other, you don't have to replace both. And if you have some flaw and need to update the design, you don't need to update both. It can also have manufacturing cost advantages despite needing more overall PCB surface area, because the big, complex, many-layer boards that the SoC and supporting chips need are much smaller. The big main board that the keyboard and stuff goes on can be made with fewer layers. There are also testing, repair, and upgrade benefits (as in, it can be easier for an end user to swap that board than the entire PCB)... as well as making it a lot easier for a third party to make a different SoC available without you providing all of the PCB drawings.

There are disadvantages, sure, but you can't just say there's no benefit.
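To put rough numbers on the yield point, here's a back-of-the-envelope sketch in C. The yields and board costs are entirely made up for illustration; none of these figures come from OPT or any real quote:

/* Back-of-the-envelope yield arithmetic for one big board vs. a main
 * board plus SoC module. All numbers are invented for illustration.
 * Single board: a defect anywhere scraps the whole board. Split
 * boards: each is tested and scrapped independently. */
#include <stdio.h>

int main(void)
{
    double yield_soc  = 0.90;  /* assumed yield of the complex SoC board */
    double yield_main = 0.97;  /* assumed yield of the simpler main board */
    double cost_soc   = 60.0;  /* assumed fabrication cost per board */
    double cost_main  = 40.0;

    /* combined board: effective yield is the product of both */
    double single = (cost_soc + cost_main) / (yield_soc * yield_main);

    /* split boards: scrap cost is paid per board, not per unit */
    double split = cost_soc / yield_soc + cost_main / yield_main;

    printf("cost per good unit, single board: %.2f\n", single);
    printf("cost per good unit, split boards: %.2f\n", split);
    return 0;
}

With these made-up numbers the split design comes out around 6% cheaper per good unit (about 107.90 vs. 114.55), and the gap widens as the complex board's yield drops.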
 
But making your own module only adds costs and potential issues (connector problems after a while of use).

If you already have a PCB design, changing it over to a new SoC is basically as complicated as creating a module for another SoC.
Did you ever have the opportunity to check the availability and feasibility of using the Z3770 in a Pandora successor?
 
Speaking of FUD... lol. You really haven't bothered to actually read anything about the Z3770 SoC, but you're clearly quite sure that it all has to be FUD.
Did I mention any specific CPU/SoC/whatever? No, I didn't; I was making a general statement. And yes, I haven't taken a look at it at all (or I probably already have and just don't remember because it didn't seem very special - I read a lot of hardware-related news articles).

Correcting history...

Intel attempted to ditch the complexity of the x86 instruction set when they made the IA-64 Itanium line in 2001.

http://en.wikipedia.org/wiki/Ia64#Architecture
...Which still ended up with horrible x86 compatibility that remained in hardware until 2006, when it was replaced by a software-driven emulation layer. I'm aware of the "Itanic". They initially spent a lot of time trying to improve the incredibly bad native x86 execution speed, which pretty much failed. Why did they even do all that if they originally wanted to 'ditch x86' with its introduction? It seems more like they originally tried more or less what AMD did later on.
They didn't even target an audience that would've cared about a new common base; Itaniums have always been intended to be high-performance server processors - the usual target customer would already take care of using software optimized for the CPU, and backwards compatibility isn't really wanted there (making the whole x86 compatibility fuss even more absurd).

The application and game developers are the ones who need such a common base.

Nowadays it's just a dead horse that is slowly being ridden towards disappearing completely.

Yes, AMD's 64-bit CPUs were faster than Intel hardware - roughly 2003-2005. From there on out, though, Intel has had perf/watt and max perf pretty well in its pocket.
I don't really see what that has to do with my statement... I was talking about extensions (almost all of which were introduced by Intel) that are guaranteed to be present because they are actually part of the architecture's specification, which in the end can speed things up on any AMD64-compatible CPU, no matter who created it.
AMD64 is just the original release name of the extended architecture (and its first implementation), Intel 64 is the name of a certain other implementation, x86_64/x86-64 were used for the instruction set before it was released, and everything else was made up by others. Talking about AMD64 doesn't mean one is specifically referring to AMD CPUs; it's rather a question of taste - just as Debian prefers AMD64 while Red Hat uses x86_64 and Microsoft uses x64.

The Intel part is out and available now on their 22nm process, and Intel has a 14nm process in the wings for next year. I could be wrong, but I don't see AMD catching up in perf/watt on mobile SoCs in the next couple of years.
The reason AMD is lagging behind can easily be found in their rather adventurous APU architecture. However, with ARM a lot is open once again, and they've shown in the past that a few good ideas can make huge differences.
Intel has always been the first to move to smaller manufacturing processes for CPUs, and I think that is one of the biggest reasons why Intel is getting so close to ARM. However, we're slowly heading towards a physical limit that will allow no further shrinking. The big question is: if everyone ends up on the same processes, will Intel still hold up?

That's true, but it doesn't matter very much, except for the few people who have to write that assembly.
I've seen some people complain that the decoding units of x86 CPUs are extremely large and complex because of that, eating up a lot of die space and therefore burning quite a lot of energy - which gains significance with such ultra-mobile targets, as you don't have many possibilities to simplify the decoder the way you can the rest of the CPU.
Are you talking about compiling for arch or uarch here? Maybe what you said applies to conservative PC distros, but it doesn't to the Android NDK, for example, which assumes SSE2 as its baseline despite being 32-bit. The same would surely be true for software on something like the Pandora.
The obvious result of having an x86 CPU would be that many users would just bring over binaries from other distros and play proprietary games that are available for x86 Linux, or even via Wine - and those binaries will suffer from that, as they are usually optimized for ancient x86 CPUs to gain maximum compatibility. Maybe an alcohol-induced fuzzy thought got the better of me with that statement, though.
 
Putting the SoC, RAM, PMIC, and possibly storage on a separate thin-profile board (like HardKernel used to do with the ODROID)
Based on Hardkernel's website, and other references, the ODROID is a highly integrated single-board computer like the Raspberry Pi or Beaglebone Black. I've seen no reference to them using a daughterboard, and that is inconsistent with Hardkernel's own detail on their product.

Evidence, please, for your assertion that the Raspberry Pi Foundation and all the groups that showed up with single-board computers following the Raspberry Pi's success are wrong and you're right.

is a good idea regardless of the ability to swap that board in and out.
A ) You're moving the goalposts from vendor-independent module swapping to simply using a module - why? Never mind why you're out to contradict ED on this.

B ) Evidence, rather than pestering, please, that these advantages offer practical cost savings and/or other worthwhile benefits over the single-board-computer approach everyone is using in the low-cost computer space that has developed since the Raspberry Pi germinated demand in that area?
 
Based on Hardkernel's website, and other references, the ODROID is a highly integrated single-board computer like the Raspberry Pi or Beaglebone Black. I've seen no reference to them using a daughterboard, and that is inconsistent with Hardkernel's own detail on their product.

http://www.hardkernel.com/renewal_2011/products/prdt_info.php?g_code=G135270682824

Here is the CPU module, which you can purchase on its own. I am not sure they used this module on the ODROID, but it is the same CPU and the same amount of RAM, so it would be possible.
 
Okay. To be fair, it looks like they use it for the ODROID-X2, and, based on their FAQ, as a means to sell such things to second parties.

However, they haven't elected to do that with their new product line based on the Exynos 5, and the ODROID-U2 SBC is less expensive than said module is by itself. In terms of the SBC industry, that module doesn't make a compelling argument, and the fact that they didn't update that part of their product line speaks to how successful their business as a middleman selling integrated SoC modules has been.

The argument being made was over cost - not that a decent-sized company can't make a socketable daughterboard, and main boards designed to handle a single daughterboard with fixed specifications. That's not a revelation. If they'd made it swappable with other SKUs and across generations, then there might be something to be said related to the earlier goalposts, but that's not what Hardkernel is doing. Nor is this a general strategy across their product line and a revolution for SBCs; it's basically a one-off experiment.

Proposing that you can do the equivalent of dropping an i7 into a POWER7 blade, and that all you'll need is a little daughterboard with no issues with chipsets and power rails, is a rather different proposal than being able to drop that i7 into a board that's designed for it.
 