Double or Float?


I think it's a good thing that in C, "int" is something the size of a machine word. It makes the most sense to use the word size by default, and to use the specific intN_t types when you need to be sure about the range of the int.

I think floating-point arithmetic in general should be avoided. Sure, there are good uses of it, but too often people use floats just to represent rationals or fixed-point numbers. Floats are only useful if you need to be able to represent a large dynamic range, where it suffices to have good relative precision, where you don't need to do a lot of arithmetic manipulation on the numbers (because that all too easily leads to cumulative errors), and where you don't rely on the rounding being the same on all hardware.

Most of the time, it's better to just use integers (e.g. store money values in integer cents, not in floating-point dollars) if possible, or else to use a library like GMP if you need arbitrary-precision stuff.
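A minimal sketch of the integer-cents idea (the variable names are made up for illustration); the point is that the arithmetic stays exact and rounding only happens where you decide it should:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Store money as an integer number of cents, not as floating-point dollars. */
    int64_t price_cents = 1999;                 /* $19.99 */
    int64_t total_cents = price_cents * 3;      /* exact: 5997 cents */

    /* Convert to dollars and cents only when printing. */
    printf("total: %lld.%02lld\n",
           (long long)(total_cents / 100),
           (long long)(total_cents % 100));
    return 0;
}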
 
I think floating-point arithmetic in general should be avoided. Sure, there are good uses of it, but too often people use floats just to represent rationals or fixed-point numbers. Floats are only useful if you need to be able to represent a large dynamic range, where it suffices to have good relative precision, where you don't need to do a lot of arithmetic manipulation on the numbers (because that all too easily leads to cumulative errors), and where you don't rely on the rounding being the same on all hardware.
Depending on the hardware and circumstances, floats can be faster than ints while offering enough precision to avoid errors. It really depends on your needs.
 
I think it's a good thing that in C, "int" is something the size of a machine word. It makes the most sense to use the word size by default, and to use the specific intN_t types when you need to be sure about the range of the int.
What particular definition of 'word' are you referring to? Setting a very few exceptions aside, int is almost always 16 or 32 bits wide by default, even on most 64-bit and several 8-bit (e.g. AVR) platforms. On x86, 'word' is even used as a fixed term for 16-bit integers.
 
What particular definition of 'word' are you referring to? Setting a very few exceptions aside, int is almost always 16 or 32 bits wide by default, even on most 64-bit and several 8-bit (e.g. AVR) platforms. On x86, 'word' is even used as a fixed term for 16-bit integers.
I don't think he means any particular definition at all, just the size of the general-purpose registers on the target CPU.

Back when C was still young, there were C compilers written for CPUs with int sizes that weren't powers of 2 at all.

But there's no question that C has followed historical cruft in an effort to maintain backwards compatibility instead of following the original spirit of the int, short, and long keywords. By any sane reasoning, on 32-bit x86 long would already have been 64-bit (and the long long keyword never invented, unless there was a legitimate need for 64-bit in 16-bit code). And on x86-64, short would be 32-bit, int 64-bit, long 128-bit. But portability concerns trump all.
 
C99 introduced fixed-width types like int32_t that you can use.
Yes, this is what we use: stdint.h for everything non-MS, and some typedefs for MS. The only other type we use a fair bit of is size_t, but that is really more to work with other libraries (the STL being the biggest and most common one).
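A small sketch of what that looks like in practice (the variable names here are made up):

#include <stdint.h>
#include <stddef.h>

/* Fixed-width types when the exact range matters, size_t for sizes and indices. */
int32_t  file_offset  = -1;    /* exactly 32 bits, signed   */
uint8_t  pixel        = 255;   /* exactly 8 bits, unsigned  */
uint64_t byte_count   = 0;
size_t   buffer_index = 0;     /* what the STL and most C libraries expect for sizes */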
 
I think floating-point arithmetic in general should be avoided.
I am obviously taking this out of context to your post, and the examples in your post certainly show where int can (and probably should) be used over float. But the idea that floating-point arithmetic should be avoided isn't really true. Maybe it is more true in certain types of applications (Visual Basic form-based applications, maybe), but in lots of cases float is the best format to use. We are currently doing some work with Leap Motion (it tracks hand/finger positions with a cheap little box), which involves a lot of working with floating-point numbers, using trig, giving objects positions/heights, etc. Sure, it is possible to write a fully fixed-point engine, but that isn't what happens in games on PC/consoles any more (and hasn't been for a long time); it's all floating-point vectors/matrices/quaternions/etc. So I believe the much better advice is: use the right data type for the right purpose! Use type-safe enums over ints[*], use ints for indexing or counting discrete quantities, use float for anything that requires decimals, avoid double unless you have a real use case for it (really needing high accuracy), use quats over matrices if you need to do blending, etc.

* the C (over C++) type of people may disagree and say to use ints all the time over enums, but in my opinion that is just all-round bad  :)  
 
Maybe it is more true in certain types of applications (Visual Basic form-based applications, maybe)
Why would a form-based VB app be any different? If you're talking about WinForm-based applications (regardless of language), then the use case is different to that of a game, and the performance of floating-point work is nowhere near as relevant because the processing is nowhere near as calculation-intensive.
 
* the C (over C++) type of people may disagree and say to use ints all the time over enums, but in my opinion that is just all-round bad  :)  
I generally use proper enum typing in C++ but it's a pain to bother with in C. Mind you, most of the coding I do on my own time is C/ASM. I see it as just one of those other things you have to be careful with and know what you're doing with, like macros :p

Just so we're clear here, what I mean is declaring variables that hold enum values as some explicit integral type rather than as the enum type itself, not avoiding enums altogether in favor of defines (or no named values at all). I do still typedef my enums. I think I used to generally declare variables with the enum type, but it was occasionally painful making sure that the type was known when it was used, and I didn't like the sizeof varying on me. I think the lack of actual type checking in C makes it borderline pointless anyway.
 
This really makes me wonder why there isn't more standardization over whether an int should be 32 bits or 64 bits, if it doesn't make a difference in processing time which one you're using. I tend to avoid plain old int when I can, just because I dislike allocating more bits than are really necessary for the variable, especially in cases where I know for a fact that I'm never going to have a number larger than that, save some sort of programming error.

I wonder if this is one of those things that was somewhat true previously, but has been passed on as common knowledge since. I've seen it cropping up from time to time, and now that I think of it, I can't recall there ever being any explanation given as to why it would be the case.
What are you referring to as a thing that was somewhat true previously but isn't anymore? You mean why you should use int instead of smaller data types for local variables? If that's the case, I already explained why.

I can see the mentality behind "int" - they wanted something that was guaranteed to be at least a certain length but that could still use larger lengths. Consider when x86 switched from 16-bit to 32-bit. You could still use 16-bit, but it was more expensive because it needed an operand-size prefix. It'd be good to know that code didn't rely on the calculations being 16-bit (as most code wouldn't), which is what the int type should have meant. That wouldn't have stopped people from relying on it anyway, though.
What I'm saying is that you should work with the word size of the architecture you're working on for reasons of efficiency.

And it may very well no longer be the case, as the move from 32 bits to 64 bits was much less problematic than the move from 8 bits to 16 bits or the move from 16 bits to 32 bits.

Not that this is terribly important at this stage if it's primarily a matter of memory usage.
 
What I'm saying is that you should work with the word size of the architecture you're working on for reasons of efficiency.

And it may very well no longer be the case, as the move from 32 bits to 64 bits was much less problematic than the move from 8 bits to 16 bits or the move from 16 bits to 32 bits.

Not that this is terribly important at this stage if it's primarily a matter of memory usage.
It is important though. If you have a really hot array of structs that takes up 64KB because you sized everything as int, but would have taken 16KB if you had used 8-bit variables, then that can make a huge difference for performance in the right circumstances. If you're going for high efficiency you should try to be at least somewhat conscious of your data types and how they impact locality of reference, which means trying to minimize size, trying to keep things grouped together, and trying to work on relatively small batches.

If the variable is going to be resident in registers, or is a non-aggregate type resident on the stack, then you should go with the sizes most friendly for the CPU arch. Otherwise, if you need a load/store to get to it, you should consider sizing aggressively where appropriate (that includes not just using small data types but also packing things and outright getting rid of fields where possible).
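To illustrate the sizing point with a hypothetical record type (the sizes in the comments assume typical 32/64-bit targets):

#include <stdint.h>
#include <stdio.h>

/* The same hypothetical record, sized lazily with int vs. sized to fit. */
struct ParticleBig   { int     type, health, x, y; };   /* typically 16 bytes */
struct ParticleSmall { uint8_t type, health, x, y; };   /* 4 bytes            */

int main(void)
{
    printf("big: %zu bytes, small: %zu bytes\n",
           sizeof(struct ParticleBig), sizeof(struct ParticleSmall));
    /* For an array of 4096 elements that is roughly 64 KB vs. 16 KB -
       the kind of difference that decides whether the hot data fits in cache. */
    return 0;
}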
 
float speed = 10.0f;

If you don't write the f, the 10.0 is seen as a double value and it will be converted (cast) to a float at runtime, which costs a few cycles (at least if the compiler doesn't optimize it away).
I never write 10.0f.  If the compiler isn't "optimizing it away", I should not be using such a piece of shit compiler.

What's the point of doing manual optimization so that your code runs faster on a non-optimizing compiler?  First use an optimizing compiler!

As for float vs double, don't forget to use suitable optimization options for the Pandora, as featured in my sig! This can make a huge difference.
 
use float for anything that requires decimals

I don't agree: if you need a fixed number of decimals (e.g. 2 in the case of money) and you want to work with exact numbers, it's not a good idea to use floats because when the amount becomes large enough, you'll lose the precision to store the decimals.

Floats are good when you need floating point decimals, that is, when the range of possible values is huge and you need a fixed amount of mantissa precision. This is sometimes the case, e.g. when dealing with high dynamic range images or when doing sound processing, since light and sound waves are better modeled on a logarithmic scale (which is what you're doing when using exponent-mantissa representations like floats) than on a linear scale.

When you use floats for numbers that are actually distributed on a linear scale (like uniformly distributed random values in some interval, e.g. [0,1]), you are wasting bits and/or needlessly sacrificing accuracy.

People tend to think of things as int = integer number, float/double = real number, but that's not a very useful way to think about it. Both are limited-precision numbers and you can use both for integer or fractional values if you want. The main difference is that ints are useful for linear scales (since they have exactly the same resolution over their entire range), while floats are useful for logarithmic scales (since they have a non-uniform resolution because of the exponent-mantissa representation; there are many more floats in the neighborhood of zero than in the neighborhood of one billion).
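A quick sketch that makes the non-uniform resolution visible (the values in the comments assume ordinary 32-bit IEEE-754 floats):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Gap to the next representable float, near 1.0 and near one billion. */
    float step_near_one     = nextafterf(1.0f, 2.0f) - 1.0f;
    float step_near_billion = nextafterf(1.0e9f, 2.0e9f) - 1.0e9f;

    printf("step near 1.0: %g\n", step_near_one);      /* about 1.2e-07 */
    printf("step near 1e9: %g\n", step_near_billion);  /* 64            */
    return 0;
}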
 
I don't agree: if you need a fixed number of decimals (e.g. 2 in the case of money) and you want to work with exact numbers, it's not a good idea to use floats because when the amount becomes large enough, you'll lose the precision to store the decimals.
I think you meant to say that it's not a good idea to use binary floating point numbers. Using decimal floating point numbers is perfectly acceptable in these cases.
 
I don't agree: if you need a fixed number of decimals (e.g. 2 in the case of money) and you want to work with exact numbers, it's not a good idea to use floats because when the amount becomes large enough, you'll lose the precision to store the decimals.
I think you meant to say that it's not a good idea to use binary floating point numbers. Using decimal floating point numbers is perfectly acceptable in these cases.
No, using fixed-point numbers is what you need in those cases. The base does not matter, and decimal is very rare anyway, although binary-coded decimal (4 bits per digit) has historically been used.

Try to understand the difference between fixed point and floating point. Then understand that fixed point can be represented as integers with an implicit divider (it could be a power of 2 so you can do the division/modulo with bitshifts, but that doesn't have to be the case).
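A minimal fixed-point sketch along those lines (the scale factors are just example choices):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Fixed point = an integer with an implicit divider. */

    /* Decimal divider (money with 2 decimals): stored value = amount * 100 */
    int64_t price = 1099;                       /* represents 10.99 */
    int64_t half  = (price * 50 + 50) / 100;    /* 50%, rounded: 550, i.e. 5.50 */
    printf("half price: %lld.%02lld\n",
           (long long)(half / 100), (long long)(half % 100));

    /* Power-of-2 divider (16.16 fixed point): stored value = amount * 65536 */
    int32_t a = 3 << 16;                        /* 3.0                           */
    int32_t b = a >> 1;                         /* 1.5, division by 2 as a shift */
    printf("b = %d + %d/65536\n", b >> 16, b & 0xFFFF);   /* modulo via a mask */
    return 0;
}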
 
The base does matter. Especially with money you don't want to use binary fractions, but decimal fractions.
 
float speed = 10.0f;

If you don't write the f, the 10.0 is seen as a double value and it will be converted (cast) to a float at runtime, which costs a few cycles (at least if the compiler doesn't optimize it away).
I never write 10.0f.  If the compiler isn't "optimizing it away", I should not be using such a piece of shit compiler.

What's the point of doing manual optimization so that your code runs faster on a non-optimizing compiler?  First use an optimizing compiler!

As for float vs double, don't forget to use suitable optimization options for the Pandora, as featured in my sig! This can make a huge difference.
I see your point, but...


If you are writing portable code then it makes sense to get into the habit of giving any potential compiler a helping hand...

Yes, compilers are getting better at optimising, but if you know exactly what you want the compiler to do, then why not tell it?
 
The base does matter. Especially with money you don't want to use binary fractions, but decimal fractions.
The base does not matter if you're using a representation based on integers with an implicit divider (e.g. store 100 times the money amount instead of the amount itself).

Of course if you use a representation where the decimals are stored explicitly, then the base matters: in binary (whether you use fixed-point or floating-point is irrelevant), you cannot represent a number like 1/10 exactly with a finite number of bits, just like you cannot represent the number 1/3 exactly in decimal notation. In general you need some representation of rationals (e.g. GMP supports that), but if you know the largest denominator in advance, you can avoid that overhead by using ints with an implicit denominator.
 
if you need a fixed number of decimals (e.g. 2 in the case of money) and you want to work with exact numbers, it's not a good idea to use floats because when the amount becomes large enough
Well, you might start with exact units of currency, but then at some point need half a unit (50% off a 0.99 unit price). Depending on your use case, you might want 50% off 0.99 to leave you with 0.49 (you can't charge your customer a fraction), but in other cases you might want to keep the full accuracy (for example, when the 50% off is part of a larger calculation, as in "I give a third party 30% of the revenue my product makes after 11% of advertising overheads are subtracted"). So I'd really just stick with my advice of using the right data type for the right job. On the whole, I certainly agree there can be cases where int is good for working with money, no argument there... my argument that if you need decimals then use float isn't right on these grounds, so I guess this rule needs tweaking a little!

I never write 10.0f.  If the compiler isn't "optimizing it away", I should not be using such a piece of shit compiler.
Well, you do need to write the f suffix to specify whether you want a double or a float literal. Consider: num / 12.0. Do you want double-precision or single-precision division here? The compiler can't simply assume you always want single precision (unless you tell it to, and it supports flags to allow this). It is like float result = num / 12: in this case, if num is integral you'll get an integer division of num by 12, which is very different from a floating-point division. On top of this, function overloading means you can have two different functions that are invoked based on whether you pass a float or a double, e.g. MyFunction( 1.0f ) and MyFunction( 1.0 ) can call different pieces of code. Sure, you might not ever do this yourself, but what about the libraries you are calling?
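A small sketch of those two points (MyFunction is just the placeholder name from the post above):

#include <stdio.h>

/* Which overload runs is decided by the literal's type. */
void MyFunction(float x)  { printf("float overload:  %f\n", x); }
void MyFunction(double x) { printf("double overload: %f\n", x); }

int main()
{
    MyFunction(1.0f);          /* calls the float overload  */
    MyFunction(1.0);           /* calls the double overload */

    int num = 100;
    float a = num / 12;        /* integer division first (8), then converted to 8.0f */
    float b = num / 12.0f;     /* single-precision division, ~8.3333                 */
    double c = num / 12.0;     /* double-precision division                          */
    printf("%f %f %f\n", a, b, c);
    return 0;
}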

Why would a form-based VB app be any different? If you're talking about WinForm-based applications (regardless of language), then the use case is different to that of a game, and the performance of floating-point work is nowhere near as relevant because the processing is nowhere near as calculation-intensive.
I selected form-based VB apps because my guess is that if I Google the first form-based VB apps I can find and look at the data types being used, there is a sporting chance they will have fewer floating-point numbers than the code I typically work on. But this is not substantiated at all, and your point is valid. Really, all I was saying is that there may be some types of coding which use fewer floats than others, which is fine. There was certainly no intent to 'diss' VB or form applications.

I generally use proper enum typing in C++ but it's a pain to bother with in C. [...] I think the lack of actual type checking in C makes it borderline pointless anyway.
Sounds like you fight the good fight :) The pointlessness in C is true, and some 'hardcore C' coders take the same practices to C++, where typedef'd enums can make coding easier (in terms of knowing what to pass to a function, and also being told when you have made an error). I can't tell you how many times I have seen (sometimes written myself, sometimes written by others) OpenGL C function calls where an 'enum' parameter is passed something completely wrong, with no compile-time error thrown, which can often be hard to spot when looking through the source (especially if the value being passed looks, on the face of it, to be sensible).
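For example, a sketch of the kind of check a typed enum buys you (the names are made up; a C++11 scoped enum is used here, though a plain typed enum parameter gives a similar compile-time check - real GL entry points take a plain GLenum, which is exactly why they can't catch this):

#include <stdio.h>

/* Taking a plain int (or a GLenum-style integer): any value compiles. */
void SetBlendModeRaw(int mode) { printf("raw mode %d\n", mode); }

/* Taking a real enum type: passing the wrong kind of value is a compile error. */
enum class BlendMode { Opaque, Alpha, Additive };
void SetBlendMode(BlendMode mode) { printf("mode %d\n", static_cast<int>(mode)); }

int main()
{
    SetBlendModeRaw(42);                    /* compiles, silently wrong          */
    SetBlendMode(BlendMode::Additive);      /* fine                              */
    /* SetBlendMode(42); */                 /* would be rejected at compile time */
    return 0;
}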
 
Sounds like you fight the good fight :) The pointlessness in C is true, and some 'hardcore C' coders take the same practices to C++, where typedef'd enums can make coding easier (in terms of knowing what to pass to a function, and also being told when you have made an error). I can't tell you how many times I have seen (sometimes written myself, sometimes written by others) OpenGL C function calls where an 'enum' parameter is passed something completely wrong, with no compile-time error thrown, which can often be hard to spot when looking through the source (especially if the value being passed looks, on the face of it, to be sensible).
If it helps any, I generally name my enum values very.. descriptively. To the point where it becomes really obvious if you use them out of context.
 