Double or Float?


I never write 10.0f.  If the compiler isn't "optimizing it away", I should not be using such a piece of shit compiler.
Well, you need to write the f suffix to say whether you want a double or a float literal. Consider num / 12.0: do you want double-precision or single-precision division here?
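A minimal sketch of the difference (num and the variable names are just illustrative):

#include <cstdio>

int main()
{
    int num = 100;

    // 12.0 is a double literal: num is converted to double and the
    // division happens in double precision.
    double d = num / 12.0;

    // 12.0f is a float literal: num is converted to float and the
    // division happens in single precision.
    float f = num / 12.0f;

    printf( "%.17g\n%.9g\n", d, (double)f );
    return 0;
}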


MyFunction( 1.0f ) and MyFunction( 1.0 ) can call different pieces of code
You're probably right, and it's not worth arguing about this, but ... I have nothing better to do ... ;)

If I'm dividing by an integer, I would normally write it like this: x / 12

Sometimes, rarely, I would write x / 12.0 to promote x from int to floating point; it might be better to do that differently.

I very rarely if ever divide by a non-integer literal value.

It's generally better to give those values a symbolic name, like: float x = 10.0

As for C++ overloading, pfft to that!   :)
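To illustrate the quoted point about MyFunction( 1.0f ) and MyFunction( 1.0 ), a minimal sketch (MyFunction is just the hypothetical name from the quote):

#include <cstdio>

void MyFunction( float x )  { printf( "float overload: %g\n", x ); }
void MyFunction( double x ) { printf( "double overload: %g\n", x ); }

int main()
{
    MyFunction( 1.0f );  // the f suffix selects the float overload
    MyFunction( 1.0 );   // no suffix selects the double overload
    return 0;
}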
 
The base does matter. Especially with money you don't want to use binary fractions, but decimal fractions.
 The base does not matter if you're using a representation based on integers with an implicit divider (e.g. store 100 times the money amount instead of the amount itself).

Of course if you use a representation where the decimals are stored explicitly, then the base matters: in binary (whether you use fixed-point or floating-point is irrelevant), you cannot represent a number like 1/10 exactly with a finite number of bits, just like you cannot represent the number 1/3 exactly in decimal notation. In general you need some representation of rationals (e.g. GMP supports that), but if you know the largest denominator in advance, you can avoid that overhead by using ints with an implicit denominator.
When you choose an implicit divider, you are in fact choosing your base. If you store 100 times the money amount instead of the amount itself, then you're using decimal fractions.

If your divider is a power of 2 so you can do the division/modulo with bit shifts, then you're using binary fractions.
 
You're not choosing the base, you're choosing a denominator. You can still represent 100 times the money amount in a standard 32 bit binary int.

But of course if the implicit divider is 100, that corresponds to having the precision of 2 fixed-point decimals in decimal notation.
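A minimal sketch of the implicit-denominator idea (Cents and FromUnits are illustrative names, with 100 as the assumed divider):

#include <cstdio>

// Store money as an integer count of cents: the denominator (100) is
// implicit, so 0.10 is represented exactly as the integer 10.
typedef long long Cents;

Cents FromUnits( long long units, long long hundredths )
{
    return units * 100 + hundredths;
}

int main()
{
    Cents price = FromUnits( 0, 10 );  // 0.10, exact
    Cents total = price * 3;           // 0.30, still exact
    printf( "total: %lld.%02lld\n", total / 100, total % 100 );

    // A binary double cannot represent 1/10 exactly, so the same sum drifts:
    double d = 0.1 + 0.1 + 0.1;
    printf( "double: %.17g\n", d );    // prints 0.30000000000000004
    return 0;
}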
 
If it helps any, I generally name my enum values very.. descriptively. To the point where it becomes really obvious if you use them out of context.
Personally I've always used enums enclosed in structs, e.g.:


// Wrapping the enum in a struct scopes the names, so callers have to
// qualify them: BlendMode::Additive rather than a bare Additive.
struct BlendMode
{
    enum T
    {
        None = 0,
        Additive,
        Modulative,
        Num   // count of modes, handy for iteration and validation
    };
};

As well as compile time checking, the IDE I use gives me an auto-complete selection of arguments which I personally like.


m_Sprite.SetBlendMode( BlendMode::Modulative );

When I was first introduced to enums always being enclosed in structs I didn't like it (felt a bit bloated, maybe even hacky), but I bit the bullet and started using it (when working with someone else's code I typically try to adopt their style) and became a convert. I feel like I am going a bit off topic at this point, but hey, this C/C++ corner of the forum doesn't get much traffic, why not let this deviate a little. It is more interesting than discussing software licenses for a change :)
 
It's not available in C though, and for using C++ you have to pay a lot ("a lot" for me, that is; most people don't care about things like dragging in the STL, but I do).
 
I don't know C++, so probably dumb question: is that (enum within struct) different from enclosing the enum in any other kind of namespace?
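For comparison, a sketch of the namespace variant; as far as I know the qualified names come out the same, though a struct can additionally be passed as a template parameter, which a namespace cannot:

namespace BlendModeNS
{
    enum T
    {
        None = 0,
        Additive,
        Modulative,
        Num
    };
}

// The qualified name reads the same: BlendModeNS::Modulative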
 
It's not available in C though, and for using C++ you have to pay a lot ("a lot" for me, that is; most people don't care about things like dragging in the STL, but I do).
Well, in C we can of course use BlendMode__Modulative instead of BlendMode::Modulative.

It's a bit more verbose, but no big deal unless you want some magic type checking against yourself being an idiot.

( I should get an award for blindingly obvious dumb post of the day. )
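A tiny sketch of that C spelling (names carried over from the earlier C++ example, and Sprite_SetBlendMode is a hypothetical C-style setter):

/* C has no scope resolution operator, so the prefix goes into the
   enumerator names by hand. */
enum BlendMode
{
    BlendMode__None = 0,
    BlendMode__Additive,
    BlendMode__Modulative,
    BlendMode__Num
};

/* Sprite_SetBlendMode( sprite, BlendMode__Modulative ); */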
 
When you write your own stuff you can use whatever you like to make argument passing make sense; the trouble is when you use other libraries. I have seen this multiple times:

glEnable( GL_TEXTURE );

Compiles fine, doesn't do what you want, as you probably wanted:

glEnable( GL_TEXTURE_2D );

If it had been:

glEnable( GL_ENABLE_TEXTURE );

It would have been a little clearer. But the way it is leaves potential for problems, and this is made worse by the fact that a lot of the GL defines can be used in multiple places: GL_TEXTURE_2D is both a 'TextureTarget' and an 'Enable'. I don't want to pick on GL too much here; it is just a common library that is used in a lot of projects, and people seem to make the same mistakes with it over and over.

Edit: For what it is worth, GL_TEXTURE is used for setting the matrix mode (glMatrixMode).
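Tying this back to the enum-in-struct trick above, a hypothetical wrapper (Enable and Enable_Set are my own names, not part of GL) that would turn the GL_TEXTURE mistake into a compile error:

#include <GL/gl.h>

struct Enable
{
    enum T
    {
        Texture2D = GL_TEXTURE_2D,
        Blend     = GL_BLEND
    };
};

inline void Enable_Set( Enable::T cap )
{
    glEnable( cap );   // the enum converts implicitly to GLenum
}

// Enable_Set( Enable::Texture2D );  // fine
// Enable_Set( GL_TEXTURE );         // compile error: no int -> Enable::T conversion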
 