Double or Float?


pmprog

I know floating point numbers are more expensive than integers, but is there a difference between double and float on the Pandora? I know there's a precision difference, but what about processing time/speed?

Not that it's a massive issue in my case, since I hardly push the boundaries of the device, but I thought it'd be worth finding out.
 
NEON handles floats just fine, while doubles _have_ to be done on the GPP. There is a definite speed advantage to using float over double.
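To make that concrete, here's a hedged sketch (my own loop and flag list, not from the thread): on the Pandora's Cortex-A8, GCC will only auto-vectorise single-precision loops with NEON when you allow relaxed IEEE math, while the equivalent double loop has to stay on the scalar unit.

/* Sketch only: a loop GCC can push through NEON when built with something like
 *   gcc -O3 -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=softfp -ffast-math
 * (-ffast-math is needed because NEON arithmetic is not fully IEEE-compliant).
 * The same loop over doubles cannot use NEON, which has no double support. */
void scale_floats(float *dst, const float *src, float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}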
 
The general rule is always use float for everything, always! If you have a real use case for double (some physics simulator perhaps), then fair enough, but it is generally quite rare.

Double performance is typically pretty bad; on some platforms it is actually done in software (not sure about the Pandora).

Sorry I don't have an exact Pandora answer, but if you follow the general rule of avoiding doubles you don't need to worry too much about specific platforms :)
 
Use float as much as you can.

For example, Bloboat was using double for all its math. I changed double to float, and the result was at least a 2x increase in fps on the Pandora (with no modifications to rendering).
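If you want to keep the option of switching back, one common approach (purely a sketch, not what Bloboat actually does) is to funnel all the math through a single typedef so the precision becomes a one-line change:

/* Sketch: alias the scalar type once so the whole codebase can be flipped
 * between float and double when benchmarking or chasing precision bugs. */
typedef float real;   /* change to double here to compare speed and accuracy */

real lerp(real a, real b, real t)
{
    return a + (b - a) * t;
}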
 
Thanks for the replies, I'll replace doubles in my code then :)
 
Always use float wherever possible - this goes for most platforms I know.

I see code examples written with doubles and the first thing I do is change them to float... :p

So, rule of thumb, as everyone is already pointing out: only use double if you really need the precision.
 
Just a n00b question: is there anything to worry about when switching doubles out for floats?
 
Reduced precision: instead of 64 bits you have 32 bits to store all the floating point data.

Most of the time it doesn't matter at all: 32 bits are easily enough for most cases.

I'm forcing float-only in many of my ports (#define double float ftw :D ). But sometimes a piece of software really depends on the 64-bit precision of double. Building Octave this way, for example, would result in a deeply broken package.
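For anyone wondering what that breakage looks like, here's a minimal sketch (my own example, nothing from Octave): a float only has a 24-bit significand, so above 2^24 it can no longer represent every integer, and sums or counters silently drift.

#include <stdio.h>

int main(void)
{
    double d = 16777217.0;     /* 2^24 + 1: representable as a double */
    float  f = 16777217.0f;    /* rounds to 16777216.0f in a float    */

    printf("double: %.1f\n", d);
    printf("float:  %.1f\n", (double)f);
    printf("float can't tell 16777216 from 16777217: %d\n", f == 16777216.0f);
    return 0;
}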
 
The general rule is always use float for everything, always! If you have a real use case for double (some physics simulator perhaps), then fair enough, but it is generally quite rare.

Double performance is typically pretty bad; on some platforms it is actually done in software (not sure about the Pandora).

Sorry I don't have an exact Pandora answer, but if you follow the general rule of avoiding doubles you don't need to worry too much about specific platforms :)
Doesn't that mostly apply to 32bit platforms? If you've got 64bits to work with, then normally you're going to want to work with them as the processor handles that more efficiently. It's mostly times where you're working on a 32bit platform or are concerned about memory use that you'd want to stick with floats.

Then again, I tend to avoid floats and doubles whenever possible as they don't work as well as other data types.
 
Doesn't that mostly apply to 32bit platforms? If you've got 64bits to work with, then normally you're going to want to work with them as the processor handles that more efficiently. It's mostly times where you're working on a 32bit platform or are concerned about memory use that you'd want to stick with floats.

Then again, I tend to avoid floats and doubles whenever possible as they don't work as well as other data types.
The size of integer operations isn't relevant at all. 64-bit processors aren't more efficient at dealing with doubles than 32-bit ones. From a performance perspective, saving memory on variables is preferable regardless of what your address space is because it reduces cache pressure.

The general rule of thumb for performance with integer data types is this: use a type that's natural for the CPU for local variables, and use a type that's only as large as it needs to be for arrays and struct fields. The reason for this is that if your data size is smaller than the CPU handles naturally the compiler will try to enforce zero/sign extension to fill the natural size whenever this can be visible to the code. If the data comes from an array or struct or similar it'll generally go through loads and stores anyway, which will take care of the conversion for you.

But 64-bit CPUs tend to have at least partial native 32-bit integer support.
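A small sketch of that rule of thumb (the types and struct here are hypothetical): natural-width int for locals and loop counters, narrow fixed-width types only where the data is actually stored.

#include <stdint.h>

/* Narrow types where the data lives, to keep arrays small and cache-friendly. */
struct particle {
    int16_t x, y;     /* positions known to fit in 16 bits */
    uint8_t life;     /* 0..255 */
};

/* Natural-width locals, so the compiler doesn't have to keep masking or
 * sign-extending intermediate results. */
int count_alive(const struct particle *p, int n)
{
    int alive = 0;
    for (int i = 0; i < n; i++)
        if (p[i].life > 0)
            alive++;
    return alive;
}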
 
The FPUs often work with entirely different sizes anyway. x86 FPUs usually work with 80-bit wide floats; you can even use them directly as such in C, as the long double datatype.
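The sizes are easy to check on any given target (a trivial sketch; the answers differ by platform, and on 32-bit ARM like the Pandora long double is usually just double):

#include <stdio.h>

int main(void)
{
    /* Implementation-defined: long double is typically 12 or 16 bytes on
     * x86/x86-64 (80-bit extended precision plus padding), but usually
     * 8 bytes, the same as double, on 32-bit ARM. */
    printf("float:       %zu bytes\n", sizeof(float));
    printf("double:      %zu bytes\n", sizeof(double));
    printf("long double: %zu bytes\n", sizeof(long double));
    return 0;
}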
 
I was wondering about this one too. I used floats (10 bytes) in Delphi, but the conversion to ARM dropped those in FPC to 8 bytes, which screwed up some of my code that made stupid assumptions about the size of a float.

Now, AFAIK, FPC doesn't do NEON, so would there be any benefit in changing all my doubles to floats?

D.
 
I was wondering about this one too. I used floats (10 bytes) in Delphi, but the conversion to ARM dropped those in FPC to 8 bytes, which screwed up some of my code that made stupid assumptions about the size of a float.


Now, AFAIK, FPC doesn't do NEON, so would there be any benefit in changing all my doubles to floats?


D.
For VFP (non-NEON), doubles can be slower than floats for variations of FMUL/FMAC, FDIV, FSQRT, and some of the integer conversion instructions (how much slower often depends on the operands). There's also "RunFast" mode, which can make several float operations a little faster but doesn't affect doubles. Then there's the secondary advantage of decreasing cache pressure.
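For reference, the flush-to-zero and default-NaN parts of RunFast can be switched on from C with something like the sketch below (the function name is mine; this assumes an ARMv7/VFP target with GCC-style inline assembly, and that the FPSCR exception-trap enables are left at their default of disabled):

/* Sketch: set FZ (bit 24, flush-to-zero) and DN (bit 25, default NaN) in the
 * VFP FPSCR, the non-IEEE behaviours that RunFast mode relies on. */
static void enable_runfast(void)
{
    unsigned int fpscr;
    __asm__ volatile ("vmrs %0, fpscr" : "=r" (fpscr));
    fpscr |= (1u << 24) | (1u << 25);
    __asm__ volatile ("vmsr fpscr, %0" : : "r" (fpscr));
}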
 
Excellent. So what is the difference between declaring Float and Single precision? AIUI, both are 4 bytes and as such should be the same?

D.
 
Excellent. So what is the difference between declaring Float and Single precision? AIUI, both are 4 bytes and as such should be the same?


D.
So long as we're talking about C/C++ and several similar languages here, float means single precision and double means double precision. I don't know what it means for other languages, but using single precision for "double" would be pretty dumb. Single and double precision were defined by IEEE 754-1985: http://en.wikipedia.org/wiki/Single_precision_floating-point_format and http://en.wikipedia.org/wiki/Double_precision_floating-point_format

If you don't feel like reading, in a nutshell it's:

Single precision: 32 bits, 1 sign bit / 8-bit exponent / 23-bit significand

Double precision: 64 bits, 1 sign bit / 11-bit exponent / 52-bit significand

The normalized format means that you get an extra, implicit leading bit of significand for free, so float gives 24 bits of precision and double 53 bits, so long as the numbers aren't denormal.
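You can see those figures straight from <float.h> without reading the spec (a trivial sketch, nothing platform-specific):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* MANT_DIG counts the significand bits including the implicit leading 1;
     * epsilon is the gap between 1.0 and the next representable value. */
    printf("float:  %d bits of precision, epsilon = %g\n", FLT_MANT_DIG, FLT_EPSILON);
    printf("double: %d bits of precision, epsilon = %g\n", DBL_MANT_DIG, DBL_EPSILON);
    return 0;
}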
 
The message between the lines is: use float, but always be aware of what your compiler/target platform has in mind for float users, which may or may not be in the best interests of plain ol' uint32 (as much software is, after all, just such a word). So on ARM you can have hard or soft float, but you can also have borked compilers. Some flags being what they are (ignorable if 'virtualized'), it's not always the right code path that gets used to give float a performance win. One thing lifts you above all this: testing, testing, testing. That's not always easy for distant platforms, but then again, if one is writing floating point for performance reasons, it's always going to be the same: test, test, test.
 
Doesn't that mostly apply to 32bit platforms? If you've got 64bits to work with, then normally you're going to want to work with them as the processor handles that more efficiently. It's mostly times where you're working on a 32bit platform or are concerned about memory use that you'd want to stick with floats.


Then again, I tend to avoid floats and doubles whenever possible as they don't work as well as other data types.
The size of integer operations isn't relevant at all. 64-bit processors aren't more efficient at dealing with doubles than 32-bit ones. From a performance perspective, saving memory on variables is preferable regardless of what your address space is because it reduces cache pressure.


The general rule of thumb for performance with integer data types is this: use a type that's natural for the CPU for local variables, and use a type that's only as large as it needs to be for arrays and struct fields. The reason for this is that if your data size is smaller than the CPU handles naturally the compiler will try to enforce zero/sign extension to fill the natural size whenever this can be visible to the code. If the data comes from an array or struct or similar it'll generally go through loads and stores anyway, which will take care of the conversion for you.


But 64-bit CPUs tend to have at least partial native 32-bit integer support.
This really makes me wonder why there isn't more standardization over whether an int should be 32 bits or 64 bits, if it doesn't make a performance difference which one you're using in terms of processing time. I tend to avoid plain old int when I can, just because I dislike allocating more bits than are really necessary for the variable, especially in cases where I know for a fact that I'm never going to have a number larger than that, save some sort of programming error.

I wonder if this is one of those things that was somewhat true previously, but has been passed on as common knowledge since. I've seen it cropping up from time to time, and now that I think of it, I can't recall there ever being any explanation given as to why it would be the case.
 
This really makes me wonder why there isn't more standardization over whether an int should be 32 bits or 64 bits, if it doesn't make a performance difference which one you're using in terms of processing time. I tend to avoid plain old int when I can, just because I dislike allocating more bits than are really necessary for the variable, especially in cases where I know for a fact that I'm never going to have a number larger than that, save some sort of programming error.

I wonder if this is one of those things that was somewhat true previously, but has been passed on as common knowledge since. I've seen it cropping up from time to time, and now that I think of it, I can't recall there ever being any explanation given as to why it would be the case.
What are you referring to as a thing that was somewhat true previously but isn't anymore? You mean why you should use int instead of using smaller datatypes for local variables? If that's the case I already explained why..

I can see the mentality behind "int" - they wanted something that was guaranteed to be at least a certain length but that you could still use larger lengths for. Consider when x86 switched from 16-bit to 32-bit: you could still use 16-bit, but it was more expensive because it needed an operand-size prefix. It'd be good to know that code didn't rely on the calculations being 16-bit (as most code wouldn't), which is what the int type should have meant. That wouldn't have stopped people from relying on it anyway, though.
 
With a few exceptions (e.g. SPARC64), int is always 32bit by default on modern systems. Windows as well as most unixoid systems have 32bit ints on 64bit systems - only long differs between those.
 
With a few exceptions (e.g. SPARC64), int is always 32bit by default on modern systems. Windows as well as most unixoid systems have 32bit ints on 64bit systems - only long differs between those.
Probably the result of too many people doing things like assuming sizeof(int) is 4 when allocating arrays, rather than assuming limited precision.
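The safer habit is to size allocations off the object rather than an assumed type width, e.g. (hypothetical helper):

#include <stdlib.h>

/* Sketch: let the compiler supply the element size instead of hard-coding 4,
 * so the allocation stays correct if the element type ever changes. */
int *make_table(size_t count)
{
    int *table = malloc(count * sizeof *table);   /* not: count * 4 */
    return table;
}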
 