Will It Be Able To Run SDXC When It Comes Out?


Pleng said:
zevdawg said:
Slightly unrelated to the topic, but relevant to what you're saying about space:
I have a DS, with my games backed up for my own use. I did a little searching around out of curiosity, and the largest DS ROM I could find is just under 300MB (uncompressed); I forget which game, but according to reviews it had a large amount of cutscenes as well. The DS carts themselves are said to be able to hold 2GB of data.

Wow impressive figures! Every DS Rom I had was either 32 or 64 Meg, but then again I only had a few of the bog standard Nintendo games.
I thought the devs who remade Final Fantasy IV had trouble fitting all the video and voice onto a 1GB cart?
Wait, we're so far off topic! I... I can't find the path to get back! WE'RE LOST! DOOMED TO WANDER IN THIS DARK FOREST OF ROMTALK AND SHADOWS UNTIL OUR CELLS START EATING THEMSELVES FOR WANT OF FOOD! I'm too svelte and socially well-adjusted to die like this!
 
Last edited by a moderator:
The current options for retail DS carts are 64 megabits to 4 gigabits (8–512 megabytes). The full range wasn't available at launch; higher-capacity carts have been made available to developers incrementally over the life of the system, with 256MB carts being introduced mid-2008. No game I know of uses a 512MB cart.

Final Fantasy IV used a 1024Mbit (128MB) cart.
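
As a quick sanity check on those figures, the megabit sizes convert to megabytes by dividing by eight; a minimal sketch (plain arithmetic, nothing beyond what the posts above state) reproduces the 8–512MB range and the 128MB Final Fantasy IV cart:

```python
# Quick check of the cart-size conversions quoted above.
cart_sizes_mbit = [64, 128, 256, 512, 1024, 2048, 4096]  # 64 Mbit up to 4 Gbit

for mbit in cart_sizes_mbit:
    print(f"{mbit:>4} Mbit cart = {mbit // 8:>3} MB")  # 8 bits per byte

# Prints 8 MB (64 Mbit) through 512 MB (4 Gbit); the 1024 Mbit
# Final Fantasy IV cart comes out at 128 MB, matching the post above.
```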
 
Comrade Tanto said:
My understanding of SDXC is that it's basically SDHC with the 32 GiB cap removed (As SDHC theoretically allows for 2 TiB as well, but they capped it to 32 GiB), and a speed increase. I don't see any reason the firmware can't be updated to officially support reading of SDXC cards, but you wouldn't get any speed increase.

That would be quite possibly the stupidest thing a corporation would ever do. Why would anyone ever want to limit their product's capabilities?

Thankfully, from what I have read, it looks like the most likely possibility is a software patch that might take some work, or possibly a hardware mod. Helpfully, the developers intentionally left solder spaces on the board for such a modification.

Side note: maybe USB adapters could work? They would probably be 3.0 though, and hence not work with the Pandora. A better use for the hardware mod-spots would be USB 3.0 support, but that's probably lots more complicated.

Edit: oh goody! looks like I get to put a topic back on track!
 
Last edited by a moderator:
Willrandship said:
That would be quite possibly the stupidest thing a corporation would ever do. Why would anyone ever want to limit their product's capabilities?
So that they can make more money by selling an all new licensed specification without that limit.
 
Last edited by a moderator:
Short answer:

Until we have an SDXC card to test, the answer is "probably". It may not work out of the box, but if it can be made to work, it will probably only require a firmware update on the Pandora.
 
Willrandship said:
That would be quite possibly the stupidest thing a corporation would ever do. Why would anyone ever want to limit their product's capabilities?
All companies do this, for a variety of reasons.

Ex: Microsoft limits the amount of RAM 32bit Windows can use. That's to prevent people using it on heavy duty servers.

Intel and AMD limit the speed and cache size of their lower end CPUs. Every CPU (with the same die size) costs roughly the same amount to make, but prices scale from $50 to $1000+.

Car manufacturers often put artificial limits on their cars. A well engineered modern car can often go 160-180mph, but most won't actually go anywhere near that speed.

IBM did the same thing as Microsoft. If you licensed beefier hardware, they could give you a code that you enter to double the RAM or speed of their mainframes.

Even the Pandora is doing it to a lesser degree - we're being sold 600mhz Pandoras, but we know they can all do 720mhz (probably 850mhz). But unlike my other examples, this is a soft limit that the user can easily bypass. ;)

All companies do it. And none have the exact same reasons. :D
 
Last edited by a moderator:
Kramy said:
All companies do this, for a variety of reasons.

Ex: Microsoft limits the amount of RAM 32bit Windows can use. That's to prevent people using it on heavy duty servers.

Intel and AMD limit the speed and cache size of their lower end CPUs. Every CPU (with the same die size) costs roughly the same amount to make, but prices scale from $50 to $1000+.

Car manufacturers often put artificial limits on their cars. A well engineered modern car can often go 160-180mph, but most won't actually go anywhere near that speed.

IBM did the same thing as Microsoft. If you licensed beefier hardware, they could give you a code that you enter to double the RAM or speed of their mainframes.

Even the Pandora is doing it to a lesser degree - we're being sold 600mhz Pandoras, but we know they can all do 720mhz (probably 850mhz). But unlike my other examples, this is a soft limit that the user can easily bypass. ;)

All companies do it. And none have the exact same reasons. :D
Whilst this is all very true, I think it's good to differentiate between situations where a company limits something in their tech in order to manage their profitability at the expense of the user, versus the safety of other road users (in the car scenario) or the clock rate of a chip where the listed rate is also the stable, tested rate at which it is guaranteed to work; people mess with those things at their own risk. The IBM situation sounds interesting, as I would have thought that, unless you were drawing resources directly from a company, buying something you are unable to use to its full capacity without some extra paid-for codes is weird. It almost seems like buying a hob but only being able to use 2 of the 4 rings or something.
 
Last edited by a moderator:
Willrandship said:
Comrade Tanto said:
My understanding of SDXC is that it's basically SDHC with the 32 GiB cap removed (As SDHC theoretically allows for 2 TiB as well, but they capped it to 32 GiB), and a speed increase. I don't see any reason the firmware can't be updated to officially support reading of SDXC cards, but you wouldn't get any speed increase.

That would be quite possibly the stupidest thing a corporation would ever do. Why would anyone ever want to limit their product's capabilities?

SDHC cards are already technically capable of up to 2TB sizes. The 32GB limit was imposed by the SD Association and is a (semi) artificial cap. I say semi because the specification states that the filesystem used on SDHC cards must be FAT32, which maxes out at 32GB (though there's nothing to stop you from formatting with another filesystem once you have actually received your SDHC card...). So I'd imagine that'd be the 'justification' for the restriction from the SDA.

SDXC cards use exFAT (FAT64) as the default file system.

Side note: maybe USB adapters could work? They would probably be 3.0 though, and hence not work with the Pandora. A better use for the hardware mod-spots would be USB 3.0 support, but that's probably lots more complicated.

USB 3.0 can't be hardware-modded onto the Pandora. USB SDXC card readers would work for using SDXC cards, though they would drain more power than just the SD card itself.

It is likely that a USB 3.0 card reader would still work on a USB 2.0 device such as the Pandora, with a plug adapter, so long as it didn't require more power than the USB 2 port could offer.
 
Last edited by a moderator:
Pleng said:
The 32GB limit was imposed by the SD Association and is a (semi) artificial cap. I say semi because the specification states that the filesystem used on SDHC cards must be FAT32, which maxes out at 32GB
As I mentioned in the other SDXC thread, FAT32 has nothing to do with it. FAT32 is capable of addressing up to 2TB of data. The issue is addressing bits. SDHC cards have 22 addressing bits, but only 16 are used in the official spec. Using the six unused bits would allow addressing up to 2TB. As others have pointed out, it is unlikely that the SD assoc. will expand the spec to take advantage of these extra bits officially. Unofficially however, it's a near-certainty that someone will come out with larger-than-32GB SDHC cards.
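
A short sketch of that arithmetic shows where the 32GB and 2TB figures come from. It assumes each address counts a 512 KiB unit, which is my assumption for illustration rather than something stated in the thread:

```python
# Sketch of the addressing arithmetic described above.
# Assumes each address counts a 512 KiB unit -- an assumption, not from the thread.
UNIT_BYTES = 512 * 1024

def max_capacity_bytes(address_bits):
    """Largest capacity reachable with the given number of address bits."""
    return (2 ** address_bits) * UNIT_BYTES

print(max_capacity_bytes(16) // 2**30, "GiB")  # 16 bits used by the spec -> 32 GiB
print(max_capacity_bytes(22) // 2**40, "TiB")  # all 22 bits              -> 2 TiB
```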
 
Last edited by a moderator:
I don't see the advantage of SDXC. SDHC has the 32GB cap for really no reason, from what I read, and XC isn't any faster; 22 MB/s? That isn't faster than SDHC; I get that speed on my class 6 SDHC right now. If they want to charge $600 for a card then they need it to be fast, really fast, before they can justify charging that much. Right now it just seems that they can say 'class 10', and even when they tell you the speed you're supposed to think it's faster.

I don't know; every time I hear about SDXC I feel like it's a giant scam. The acronym SDHC is too old now, so they need something new to sell us. I would think bigger cards would be enough, but apparently not.
 
Well, they needed a new revision of the SD standard to get away from FAT32 anyway. While FAT32 can support sufficiently large file systems, it can't support sufficiently large files, having a hard cap at 4 GiB. Camcorders recording at HD quality can easily fill up more than four gigs with one recording and that means either ugly hacks to split the recording over multiple files or a more modern file system.
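
To put that 4 GiB cap in perspective, here's a minimal sketch; the 24 Mbit/s bitrate is just an illustrative figure for HD camcorder footage, not something from this thread:

```python
# How long a single HD recording can run before hitting FAT32's file-size cap.
# The 24 Mbit/s bitrate is an illustrative assumption, not a figure from the thread.
FAT32_MAX_FILE_BYTES = 2**32 - 1    # file sizes are stored in 32 bits -> just under 4 GiB
BITRATE_BPS = 24_000_000            # assumed HD camcorder bitrate, bits per second

seconds = FAT32_MAX_FILE_BYTES * 8 / BITRATE_BPS
print(f"~{seconds / 60:.0f} minutes per file")  # roughly 24 minutes before a forced split
```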

Of course, that more modern file system is heavily tied up by Microsoft because they had deeper pockets than the OSTA.
 
j6cubic said:
Well, they needed a new revision of the SD standard to get away from FAT32 anyway. While FAT32 can support sufficiently large file systems, it can't support sufficiently large files, having a hard cap at 4 GiB. Camcorders recording at HD quality can easily fill up more than four gigs with one recording and that means either ugly hacks to split the recording over multiple files or a more modern file system.

Of course, that more modern file system is heavily tied up by Microsoft because they had deeper pockets than the OSTA.
I don't get the format thing either. Why not just have the device pick what to format the card to? An HD camcorder could format to ext4, and the manual could explain how to put the install CD in and install the ext4 driver on your OS. Cards usually go from the device you bought them for to the SD slot on your computer, rarely from the device to another device.

I realize that this isn't the case; too many people would complain about data being lost after clicking OK on a pop-up window saying it can't read the card and needs to format it. Whatever file system is chosen, it has to work on Mac as well, since Macs are the standard when it comes to photo/video editing, so NTFS is out, as is every open FS on Linux.

Still makes me mad when I think that Apple decided not to go with ZFS; that would have fixed everything.
 
Last edited by a moderator:
Because that would create a compatibility nightmare, for instance. Your ext4 device would only work with computers running Linux, as ext4 is supported by no other OS and isn't downward-compatible without nonstandard formatting options. Plus, most FSes are horrible choices for flash media as they multiply the amount of writes needed; everything with a journal qualifies. That brings our choices down to FAT32, NTFS with journaling disabled, and UDF. If you can tolerate potentially unstable and/or commercial drivers, HFS+ without journaling and ext2 also qualify. JFFS2 et al. are usually Linux-only.

Given that no manufacturer wants to be held responsible when "the drivers for the camcorder broke my Windows" because the ext2 IFS isn't stable on certain Windows builds, the realistic choices are FAT32, NTFS and UDF. NTFS is expensive in every regard and thus out. UDF can be had without paying exorbitant fees, but it's more complex than the (until recently) good-enough FAT32. Thus we'd have the majority of all vendors using FAT32 already, with the rest probably following suit for cost reasons. Standardization just makes things easier for everyone at this point.


I wish they would've gone to UDF 2.50 Plain (or even 1.50 Plain if compatibility is that important) as the FS for SDXC, though. exFAT requires relatively small changes from how things were before but non-Microsoft OSes are only slowly gaining support and Microsoft can always wave the hammer of "reasonable and non-discriminatory licensing" – read: They can introduce licensing costs just slightly too high to allow the Linux kernel to retain exFAT support. Hey, as an added bonus they could then sell their own, patent lawsuit-free driver...
 
I just used ext4 as an example; it couldn't be used because it has no official support on OS X. There are no modern file systems that work on both OS X and Windows, so I think they'll just stick to FAT32. ZFS would be perfect: an open FS officially supported by Microsoft, but Apple decided not to add official support. It's not in the Linux kernel because of the license it uses, but every distro either comes with it or makes it easy to add.

Whatever they pick will force all 3 OSs to add support, though, so even if they decided on ext4, both Apple and Microsoft would add support in updates to their OSs. I think they won't go with a completely open FS, though, and they won't go with an FS that is owned by one company either, so hopefully they'll pick ZFS or one like it.
 
They already did go with a file system owned by one company. exFAT is part of the SDXC standard and nothing will change that. If Microsoft decides to start charging people for it then everyone will have to pay up or successfully challenge Microsoft's IP rights to exFAT (or their licensing terms) in court. The chance of such a lawsuit succeeding would be fairly low, I'd wager.

Again, UDF would have been the obvious choice (it's free and already supported by all operating systems).


By the way, ZFS would not necessarily be a good choice for a flash disk. It carries lots of overhead and is far more complex than existing file systems. It also lacks robustness against scenarios you can encounter when working with flash media, such as users removing the disk before all writes are flushed (which can irreparably damage pools). Actually, given that people are encountering problems with ZFS and external hard drives (which can't be moved between hosts without the original host releasing the pools), I certainly wouldn't expect Joe Sixpack to be able to handle it safely.

(Also note that ZFS is patent-encumbered, which is one of the things keeping the Linux devs from just making a GPL'd rewrite they can put in the kernel. It does qualify for "owned by one company", that company being Sun.)
 
Oh, I misunderstood you; I thought they stuck to FAT32 and you were saying they could go with exFAT. Microsoft's patents on FAT shouldn't be valid because they are too broad and amount to patenting pre-existing technology, so even if it does go to court, Microsoft would probably lose its patents on FAT.

I used ZFS, moved it between computers on Linux, and had no problems; the default setup probably doesn't hit any of the issues you describe. I also thought ZFS was patented in such a way that if you want to use it, no one can collect licensing fees; Sun controls it but can't collect licensing fees. Anyway, that is what I understood. It is a little too bloated to use on flash, though; I agree with you there. I stopped using it when ext4 came out; it was just easier because it's built into the kernel.
 
The problem with exFAT is that, to my knowledge, Microsoft has patented some of the additions they have made. It's more encumbered than FAT32 (where only the VFAT extensions are patented).

Had Microsoft released exFAT to the general public with an irrevocable grant to any patents that may cover it I would be much happier with it. Well, and if Linux and OS X had proper support already.
 
second exodous said:
I don't see the advantage of SDXC. SDHC has the 32GB cap for really no reason, from what I read, and XC isn't any faster; 22 MB/s? That isn't faster than SDHC; I get that speed on my class 6 SDHC right now.
SDXC is double the speed of SDHC. SDHC has a maximum clock speed of 50MHz, which gives us 24MB/sec, ignoring overheads. SDXC's maximum clock speed is 100MHz, which would give us 48MB/sec. Of course, that's only transfer speed; if the flash inside isn't capable of 48MB/sec, then you're not going to get it.
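
As a rough check of those numbers (assuming the standard 4-bit SD data bus, which the post doesn't state), the raw bus bandwidth works out slightly above the quoted figures, with the difference presumably being protocol overhead:

```python
# Rough check of the bus-speed figures above, assuming a 4-bit SD data bus.
BUS_WIDTH_BITS = 4  # assumption: standard 4-bit parallel data bus

def raw_throughput_mb_per_s(clock_mhz):
    """Raw bus bandwidth in MB/s, ignoring command and protocol overhead."""
    return clock_mhz * 1e6 * BUS_WIDTH_BITS / 8 / 1e6

print(raw_throughput_mb_per_s(50))   # SDHC at 50 MHz  -> 25.0 MB/s raw (~24 MB/s after overhead)
print(raw_throughput_mb_per_s(100))  # SDXC at 100 MHz -> 50.0 MB/s raw (~48 MB/s after overhead)
```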

I'd say the 32GB cap on SDHC is there simply for marketing reasons. If you bring out a card and say it supports "up to 1000 yottabytes", or you bring out a card that supports 1TB and then bring out another card in a year's time that supports 2TB once the technology exists to support it, then you will get a race between the big manufacturers over who can get their product to market first, and then they'll proudly explain that fact on their website: "First to release an SDXC card... blah blah..."
 
Last edited by a moderator:
RenegadeChic said:
Kramy said:
All companies do this, for a variety of reasons.

Ex: Microsoft limits the amount of RAM 32bit Windows can use. That's to prevent people using it on heavy duty servers.

Intel and AMD limit the speed and cache size of their lower end CPUs. Every CPU (with the same die size) costs roughly the same amount to make, but prices scale from $50 to $1000+.

Car manufacturers often put artificial limits on their cars. A well engineered modern car can often go 160-180mph, but most won't actually go anywhere near that speed.

IBM did the same thing as Microsoft. If you licensed beefier hardware, they could give you a code that you enter to double the RAM or speed of their mainframes.

Even the Pandora is doing it to a lesser degree - we're being sold 600mhz Pandoras, but we know they can all do 720mhz (probably 850mhz). But unlike my other examples, this is a soft limit that the user can easily bypass. ;)

All companies do it. And none have the exact same reasons. :D
Whilst this is all very true, I think it's good to differentiate between situations where a company limits something in their tech in order to manage their profitability at the expense of the user, versus the safety of other road users (in the car scenario) or the clock rate of a chip where the listed rate is also the stable, tested rate at which it is guaranteed to work; people mess with those things at their own risk. The IBM situation sounds interesting, as I would have thought that, unless you were drawing resources directly from a company, buying something you are unable to use to its full capacity without some extra paid-for codes is weird. It almost seems like buying a hob but only being able to use 2 of the 4 rings or something.

All chip companies engage in product binning; it's really the only way they can make a profit. Intel actually does burn out parts of their chips to limit features on the lower-end products, though. AMD was playing up the fact that their low-end chips have all the same features as their high-end chips, as a jab at Intel, the last time they both announced their new revisions.

The artificial limit on cars is to prevent driveline failure. Look at the Crown Vic: the civilian model is limited to 110 mph, but the police model is limited to 120 mph. The police model actually uses an aluminum driveshaft that fails at higher speeds than the civilian model's. The 1993-2005 version actually went up to 135 mph, but its driveshaft was even more expensive than the regular aluminum ones (it was rated for over 150 mph). This isn't a plot to limit capabilities; it's just good engineering. You don't say an elevator has a maximum capacity of 4000 lb. if that's the fail point; you say the elevator has a maximum capacity of 2000-3000 lb. so there's less risk of failure.
 
Last edited by a moderator: