[poll; updated] eMMC vs. uSD; modular eMMC

What would you prefer?

  • non-replaceable eMMC - willing to pay a premium for it

    Votes: 22 24.7%
  • non-replaceable eMMC - assuming roughly equal costs for both solutions

    Votes: 21 23.6%
  • uSD - assuming roughly equal costs for both solutions

    Votes: 23 25.8%
  • uSD - willing to pay a premium for it

    Votes: 23 25.8%

  • Total voters: 89

There's one way to settle this: show me the power draw of the 90 MB/s uSD card you linked last time.
You show me its power draw :)

I've tried to find that information, but it's rather hard to come by -- it's not something that is frequently mentioned or benchmarked, since overall power consumption tends to be dominated by other things.

Why would an SD card use more power though? There really is not that much difference between eMMC and SD. The memory itself is the same. The only difference is that eMMC tends to have a more advanced controller (which means it also tends to consume more power) so it can achieve somewhat higher speeds. What makes you think that there would be an order of magnitude difference in power consumption between the two?
 
An advanced controller could well be why it uses less power.

Unless we have some real stats from fast SD card products, we won't have anything to compare against the real example product figures I obtained for eMMC. But I agree with you, finding product info like this is surprisingly difficult.

On a lighter note: good luck ED and team, it looks like the community is split. "Ask a friend" has drawn no decisive answer - you're on your own lol :D
 
That is of academic interest, if any. Battery failure and NAND/flash wear are real, because they have limited cycles, which you can spend in less than a lifetime, and they can often be faulty.

However much work you want to pile into slightly better speeds, soldered eMMC is a 100% total failure right out of the gate if you are unlucky.
It's the same for other components on the PCB; why is this one so special?

Why would an SD card use more power though?
Don't want to take any side, just an idea: due to material transitions maybe (but I'm not thinking about [mW] vs. [W] scales, more in the mW range)?
 
Battery failure and NAND/flash wear are real, because they have limited cycles, which you can spend in less than a lifetime, and they can often be faulty
Everything on the board has a limited time frame of use, and can also be faulty. Everything wears out eventually. The focus is on the internal storage because of a belief that its lifespan is substantially shorter than anything else's, but what if we do the math and it turns out we're looking at an estimated 100 years before the NAND wears out but only 5 years before the RAM gives out? Would we shift focus to replaceable RAM then? That is, of course, an exaggeration to emphasize the point that knowing these numbers is not purely academic but can actually be put to use.
 
@Wizardstan,

I found the stat you are after: it's MTBF, Mean Time Between Failures. For the eMMC product I listed previously, if you look on page 9 it says the MTBF is 2,100,000 hours. Rounding down, that equates to 239 years. http://www.psism.com/MR-%20M1560%20eMMC.pdf
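For anyone who wants to check the arithmetic, the conversion from the datasheet's hours to years is straightforward (a trivial sketch; only the 2,100,000 h figure comes from the linked datasheet):

```python
# Convert the datasheet's MTBF figure from hours to years.
MTBF_HOURS = 2_100_000
HOURS_PER_YEAR = 24 * 365.25  # ~8766 hours

print(MTBF_HOURS / HOURS_PER_YEAR)  # ~239.6, i.e. 239 years rounded down
```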
 
Once more: failure is not the only reason why you want things to be replaceable. Not being limited in size, allowing ED to have different models with a range of internal storage sizes (which is unlikely to be possible with eMMC because it would complicate PCB population) or even no internal storage at all (for people on a tight budget who already have a microSD card), and needing less board space: those are much more compelling arguments to me.
 
:eek:

I thought I was kidding when I said "100 years". I expected, like, 10 years, 25 tops.

But how do they calculate that number? Is it based on some expected number of IO operations, or is it just the expected time to failure if it were sitting there doing nothing? Like, I write something to a bunch of chips today, and on average they become unusable after sitting in storage for 239 years? It also depends on what they consider a failure. Does all the memory need to be unavailable before it's a failure? Or just one block? Or somewhere in between, like 50% bad blocks is a failure?

Another thing to consider is the guarantees on these eMMC modules. If one fails, is it covered by any warranty? If it is, is it just the cost of the part, or are they so confident in their product that they cover the full replacement cost as well? Because if some company says their chip is guaranteed for 5 years and they'll pay fully (up to some reasonable amount) to have it replaced if it goes bad, then suddenly any misgivings I might have had about eMMC vs uSD vanish.
 
vcoleiro: I wonder how they got that number :p. Averages should always come with variances :(

EDIT: Yay, ninja'd by the Wizard again!
 
MTBF is a pretty much arbitrary number and does not relate to the real lifespan of such devices. Even if it did, you cannot specify the lifetime of flash memory in hours. Its lifetime depends highly on the amount of data written to the device: if you just let the device sit there and do not write to it, it will live much longer than if you write to it all day long. A more useful measure for flash memory lifespan would be the amount of data that you can write to the flash memory. This is what SSD manufacturers do. Usually you can even check this value and the expected remaining lifetime, in gigabytes, through the SMART interface.
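As a back-of-the-envelope illustration of that kind of metric (a sketch with entirely made-up numbers, not values from any datasheet or SMART readout):

```python
# Rough remaining-lifetime estimate from a total-bytes-written rating.
# All figures below are hypothetical examples for illustration only.
TBW_RATING_TB = 50      # assumed endurance rating: 50 TB of total writes
WRITTEN_TB = 4          # assumed amount already written (e.g. per SMART)
DAILY_WRITES_GB = 2     # assumed average daily write volume

remaining_days = (TBW_RATING_TB - WRITTEN_TB) * 1024 / DAILY_WRITES_GB
print(f"~{remaining_days / 365:.0f} years of writes left")  # ~65 years
```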

The same is not true for RAM. Although DRAM may change its timing a little bit over very long time periods (> 20 years), it does not die like flash memory does. CPUs do not die in such a way either and usually have a pretty much unlimited life expectancy, unless you run them outside their specified operating conditions.
 
But how do they calculate that number?
It does not seem like a very "usable" number; according to the linked document it's calculated based on "Telcordia SR-332 Issue 1 Method 1, Case 1", but I can only find general information about it:
Method I is similar to the MIL-HDBK-217F parts count and part stress methods. The standard provides the generic failure rates and three part stress factors: device quality factor (πQ), electrical stress factor (πS) and temperature stress factor (πT).
Source

Maybe more interesting could be this quote from page 2 of the linked doc:

Endurance: In normal operation condition, guarantees for 3 years product lifetime for half the e-MMC capacity sequential write per day
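Taking that endurance line at face value, it can be turned into a total-write budget (a sketch; the 16 GB capacity is an assumption for illustration, the part comes in several sizes):

```python
# Turn "half the e-MMC capacity sequential write per day for 3 years"
# into a total amount of data written. Capacity is an assumed example.
CAPACITY_GB = 16
DAILY_WRITE_GB = CAPACITY_GB / 2             # 8 GB per day
TOTAL_TB = DAILY_WRITE_GB * 365 * 3 / 1024   # over 3 years

print(f"~{TOTAL_TB:.1f} TB of total writes")  # ~8.6 TB
```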
 
CPUs do not die in such a way either and usually have a pretty much unlimited life expectancy, unless you run them outside their specified operating conditions.
The OMAP3 in the Pandora has an estimated 20 year life when operating entirely under recommended specification. That's 100% CPU usage over 20 years of course, but still very finite.
Even if it did, you cannot specify the lifetime of flash memory in hours
Of course not directly, but people like time measurements; we like to know how long something will last even if it doesn't make sense to measure it in time directly. That's why companies take averages and estimate usage models. The question is: what usage model is this 239-year estimate under?
A more useful measure for flash memory lifespan would be the amount of data that you can write to the flash memory
Something like 3 years, writing half the capacity every day? Would that then be 6 years writing a quarter of the capacity, I wonder? And 30 years writing 10% of the capacity every day? Writing an average of 1.6 GB every day for 30 years before it fails does seem like a lot, I dunno. It still doesn't explain what a failure is; or it probably does in that document somewhere and I just didn't read it properly. I'll have another look later.

And how does that actually relate to the MTBF that was given before? Is that actually 239 years writing half the capacity every day? Or some other metric?

I can't believe you're not fascinated by this stuff.

Although DRAM may change its timing a little bit over very long time periods ( > 20 years), it does not die like flash memory does.
For the record, how do you think flash memory "dies"? Because from your arguments it sounds like you're worried about the whole thing just one day failing, but I'm pretty sure you don't think that.
 
The OMAP3 in the Pandora has an estimated 20 year life when operating entirely under recommended specification. That's 100% CPU usage over 20 years of course, but still very finite.
That's what the manufacturer specifies. This does not necessarily have to be the same as the real lifespan. I have yet to see a CPU that has failed due to normal wear-out. Sure, it is possible that TI manufactures such chips without any safety margin, but I guess their 20-year claim is rather something they can safely assume, not so much a real limit.

Something like 3 years, writing half the capacity every day?
Yes, that's a much more useful number.

Would that then be 6 years writing a quarter of the capacity, I wonder?
Yes, it should work that way. But keep in mind that those numbers are specified for sequential writes from beginning to end, not random-access writes. This matters because if you write data to the chip randomly, much more data will actually be written to the flash memory, because of the flash's typically fairly large erase block size.

And 30 years writing 10% of the capacity every day?
Probably not. For flash memory, usually not only the total amount of data written is specified, but also the time for which the memory is guaranteed to retain its contents, which is usually on the order of 10 years. Older flash memory usually lasts longer than more modern memory: the smaller the process the memory is produced in, the smaller the amount of data that can be written and the shorter the time the data is guaranteed to remain readable.
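To put numbers on that scaling (a sketch; the 16 GB capacity is inferred from the 1.6 GB/day figure above, and the 10-year retention figure is just the order of magnitude mentioned):

```python
# Scale the "half the capacity per day for 3 years" write budget to
# other daily volumes, then cap by data retention. Capacity assumed.
CAPACITY_GB = 16
BUDGET_GB = (CAPACITY_GB / 2) * 365 * 3    # total write budget
RETENTION_YEARS = 10                       # typical order of magnitude

for fraction in (0.5, 0.25, 0.1):
    years = BUDGET_GB / (CAPACITY_GB * fraction) / 365
    print(f"{fraction:.0%}/day: {years:.0f} years by write budget, "
          f"{min(years, RETENTION_YEARS):.0f} with retention cap")
# 50%: 3 / 3, 25%: 6 / 6, 10%: 15 by budget, ~10 with retention
```

Note that by the write budget alone, 10% of the capacity per day comes out at 15 years rather than 30, and the ~10-year retention limit would likely be the binding one anyway.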

Writing an average of 1.6 GB every day for 30 years before it fails does seem like a lot, I dunno. It still doesn't explain what a failure is; or it probably does in that document somewhere and I just didn't read it properly. I'll have another look later.
1.6 GB of writes per day to the internal flash of a device like the Pyra does seem like a lot indeed, but as those writes are typically rather random, the amount of data you intend to write can be a lot less than what ends up being written to the flash internally. Let's say you write lots of files of 4 kB each and they are randomly distributed over the disk: it could happen that for each of those files you first have to read 512 kB and then rewrite those 512 kB back to disk, assuming the erase block size is 512 kB (which is a typical value; it could be even larger). In that case the actual amount of data written would be more than 100 times what you intended to write. Both the flash controller and the OS try to prevent that, but it cannot be prevented entirely.
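The worst case of that example is easy to quantify (a sketch of the arithmetic only; real controllers cache and coalesce writes, so the effective factor is lower):

```python
# Worst-case write amplification for small random writes, using the
# 4 kB file / 512 kB erase block figures from the example above.
FILE_KB = 4
ERASE_BLOCK_KB = 512

# Each 4 kB write may force a full erase block to be read and rewritten.
print(f"worst case: {ERASE_BLOCK_KB // FILE_KB}x amplification")  # 128x
```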

And how does that actually relate to the MTBF that was given before? Is that actually 239 years writing half the capacity every day? Or some other metric?
The metric for MTBF is usually pretty arbitrary and (obviously) a very theoretical value. MTBF is used in lots of scenarios; for example, it is also used for hard disks, where it specifies the theoretical amount of time between two read errors (errors which cannot be hidden from the OS by the hard disk's electronics). The MTBF figures for hard disks are often something like 1,000,000 hours, which, as most of you would probably agree, is not what one can expect as the hard disk's lifetime.

For the record, how do you think flash memory "dies"? Because from your arguments it sounds like you're worried about the whole thing just one day failing, but I'm pretty sure you don't think that.
No, it doesn't die that way. What usually happens is that at some point flash memory cells begin to fail. Depending on whether the flash memory has some spare capacity reserved or not, you will see those errors sooner or later. What then happens is that you can no longer write to those damaged cells, but you can usually still read them. The flash controller tries to hide that from you as long as possible, but will eventually fail to do so. So the usable capacity shrinks over time. At some point all the memory will be read-only (in the best case) or no longer accessible at all (in the worst case). I've seen both.
 
Why would an SD card use more power though? There really is not that much difference between eMMC and SD. The memory itself is the same. The only difference is that eMMC tends to have a more advanced controller (which means it also tends to consume more power) so it can achieve somewhat higher speeds. What makes you think that there would be an order of magnitude difference in power consumption between the two?
A more advanced controller may include more cache which, if at all effective, could reduce average power consumption instead of increasing it. Exploiting more parallelism and reducing access latency can also reduce overall power consumption, even if the controller itself spends more time processing.

Ultimately, raw power numbers aren't that useful or interesting. It doesn't matter if a transfer takes twice as much power if it completes in half the time. Actually, in this case I'd consider "hurry up and sleep" preferable, since the rest of the device will tend to use more power the longer the transfer takes. I doubt that the power consumption of any given SD/eMMC part scales super-linearly with clock speed, since their supply voltages are fixed and they probably do nothing to further regulate them. You want a storage device that's just all-around power efficient, not necessarily a slower one.
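To illustrate the "hurry up and sleep" point with concrete numbers (a sketch with made-up power figures, purely to show the shape of the trade-off):

```python
# Race-to-idle: a faster transfer at twice the storage power can still
# cost less total energy, because the rest of the device stays awake
# for the whole transfer. All wattages are made-up example values.
SYSTEM_POWER_W = 2.0  # assumed draw of the rest of the device

def transfer_energy_j(storage_power_w: float, seconds: float) -> float:
    """Total energy for a transfer: storage plus the waiting system."""
    return (storage_power_w + SYSTEM_POWER_W) * seconds

slow = transfer_energy_j(0.2, 20)  # slower card, 20 s transfer
fast = transfer_energy_j(0.4, 10)  # twice the power, half the time
print(f"slow: {slow:.0f} J, fast: {fast:.0f} J")  # 44 J vs 24 J
```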
 
I've seen eMMC quoted where they gave 100,000 hours of continuous use as the fail point - if continuous use is the better marker for some. Of course no one will use it continuously for 100,000 hours. In real time, that could be a very long time.

In any event, all it shows is that eMMC is, as suspected, very robust. That's all that's really being said.
 
I've seen eMMC quoted where they gave 100,000 hours of continuous use as the fail point - if continuous use is the better marker for some.
It isn't. Continuous use could very well mean simply being powered on, or reading stuff from it. What wears flash memory out is writing (or, more precisely, erasing cells), so the only useful measure is the total amount of data that can be written. So this number does not say anything about the robustness of eMMC.
 
^ Continuous write/read was the quote, not just powered on.

As I said, all the figures point to eMMC being long-lived; that's no different from what people expected. Take those figures with some degree of exaggeration, sure, maybe, but all in all the indicators are that it is very robust. That's all that can be said.

Look, I spend my time fixing electronic consoles; I know there are lots of failure points, and eMMC looks to be in the upper longevity bracket, that's all that can be said. Quite a lot of systems fail over time because of the electrolytic capacitors; things that you wouldn't even put on the list of things that go wrong do, and in some cases regularly. Look at the Xbox 360 and PS3: good luck with the dodgy lead-free solder joints they used, which caused so many failures within the first year. Even the solder is a failure point.
 
I am well aware that there are other failure points, but even eMMC memory has been known to fail in mobile devices, so saying it is generally robust is simply wrong. Sure, there are types of eMMC (and other flash memory) that are quite robust. The actual flash memory cells on modern memory chips are not that robust at all; you can count yourself lucky if you get a write endurance of 1000 writes per cell. What makes the flash memory somewhat robust is the flash controller, which tries to distribute the writes evenly over all cells. Even if specifying something like 100,000 hours of read/write operation made sense (to be at least somewhat useful, you would have to say how many of those hours are spent writing), you cannot specify it without saying for which capacity the number is valid. It makes a huge difference whether those 100,000 hours are distributed over a capacity of, say, 8 GB or a capacity of 128 GB.
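To make the capacity point concrete (a sketch assuming ideal wear leveling and no write amplification; the 1000 P/E cycles figure is the rough per-cell endurance from the post above):

```python
# The same per-cell endurance yields very different total write
# budgets depending on capacity, because wear leveling spreads
# writes over all cells. Assumes ideal leveling, no amplification.
PE_CYCLES_PER_CELL = 1000

for capacity_gb in (8, 128):
    total_tb = capacity_gb * PE_CYCLES_PER_CELL / 1024
    print(f"{capacity_gb:>3} GB: ~{total_tb:.0f} TB total writes")
# 8 GB: ~8 TB; 128 GB: ~125 TB - a 16x difference for the same cells
```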
 
^ Huh, you say it's not generally robust, then say it is quite robust. This is getting ridiculous; let's not split hairs, shall we?

Look, if you want to sling mud, everyone can play that game. Have you even considered that the SD card reader is a failure point, and has been known to fail? Swapping the SD card is not going to help you then, is it?
 
^ Huh, you say it's not generally robust, then say it is quite robust. This is getting ridiculous; let's not split hairs, shall we?
There is nothing ridiculous about what I said, and that is simply not what I said. I said that not all flash memory is robust. Are you denying that there is flash memory that fails?

Look, if you want to sling mud, everyone can play that game. Have you even considered that the SD card reader is a failure point, and has been known to fail? Swapping the SD card is not going to help you then, is it?
I am not interested in playing games; if you want to do that, I suggest you do it elsewhere. Every component can somehow fail, but some components are more likely to fail than others. When deciding which memory option to use, it is important to at least understand how such memory works. Quoting huge numbers from manufacturers' advertisements and/or datasheets without understanding what they mean does not help at all. I have been trying to explain why that is problematic and what those numbers actually do or do not say. If you are interpreting that as playing games, that's kind of strange.
 