GP2X Fixed Vs. Float

Which is faster?

  • Fixed-point
  • Floating-point

If we had a divide and divided a 24.8 by a 24.8, the result would have zero places to the right of the radix point. You have to subtract the number of fractional places in the divisor from the number in the dividend to get the number of fractional places in the quotient. Of course, your subtraction loop would just subtract the two numbers until the remainder is less than the divisor; the loop count is the integer part of the quotient and the remainder is the numerator of the fraction remainder/divisor (this was stupid of me, sorry). Generally, what you would see on systems with a hardware divide is that the dividend gets fractional places added to it (as long as no overflow occurs), or some of the fractional bits of the divisor get dropped.
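To make the bookkeeping concrete, here is a minimal sketch (my own illustration, not code from this thread) of a signed 24.8 / 24.8 divide that keeps 8 fractional bits by pre-shifting the dividend into a 64-bit temporary:

Code:
/* Sketch: 24.8 / 24.8 -> 24.8 by pre-shifting the dividend.
   On the GP2X the 64/32 divide is a library call, but it keeps
   the example simple.  Names are hypothetical. */
#include <stdint.h>

static int32_t qdiv_24_8(int32_t a, int32_t b)
{
    /* (a << 8) / b has 8 + 8 - 8 = 8 fractional bits */
    return (int32_t)(((int64_t)a << 8) / b);
}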
 
A_SN posted on Feb 1 2007 at 02:31 PM said:
Gary Miller posted on Feb 1 2007 at 02:21 PM said:
Depending on how accurate you need the result to be, you could drop some fractional bits to reduce the chance of overflow. That might meet your needs, but as always, fixed-point math will have some issues with accuracy. If the integer part of the 24.8 times the integer part of the 8.24 will not exceed 32767 or be less than -32768, then you could right-shift the 8.24 by 16 to make it an 8.8 and then multiply by the 24.8 to produce a number that will fit in a 16.16. Based on the number ranges, you could play with both or just one of the numbers to reduce the chance of overflow.

Actually my fixed-point maths gives results much more precise than with floats; this is because floats only have about 7 significant digits. I think you're right, I need to determine how much precision I really need. That being said, my final algorithm will be completely different from this one anyways.

rixed: yeah, I guess I could try to make it a 16.16 * 16.16 multiplication instead. I don't really understand the mix of ASM and C (usually I deal with functions in separate .s files); I think I'll have to compile your function to figure it out.

As for your qdiv function, yours is unsigned, mine is signed. Btw, what are your input and output fixed formats? Did you try both of my qdiv functions? As for the strange results you got, did you make sure that both your inputs were in 24.8 and that your result was in 8.24?
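For what it's worth, here is a minimal sketch (my own illustration) of the shift-then-multiply trick from the quote at the top of this post: drop the low 16 fractional bits of the 8.24 so it becomes an 8.8, then a plain 32x32 multiply gives a 16.16, assuming the integer product fits.

Code:
/* Sketch: 24.8 * 8.24 -> 16.16 with only a 32-bit multiply.
   Assumes an arithmetic right shift for negative values and that
   the integer product stays within +/-32767. */
#include <stdint.h>

static int32_t mul_24_8_by_8_24(int32_t a_24_8, int32_t b_8_24)
{
    int32_t b_8_8 = b_8_24 >> 16;   /* deliberately discard 16 fractional bits */
    return a_24_8 * b_8_8;          /* 8 + 8 = 16 fractional bits: a 16.16 */
}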

Single precision has around 23 bits of precision, with a sticky bit or two for rounding. Double and extended of course have more (and use an assumed 1.0 bit which gives you one more bit of precision; single for some reason wastes that bit). You can certainly do a hybrid where you use, say, a full 32-bit word for significant bits and another for the exponent. Perform the math the same way as the soft FPU does and end up with more precision and less overhead (none of that IEEE garbage that is tacked on). The benefit of an FPU is that it does the normalization fast; in software you have to do it manually (and FPUs consume lots of logic to do the multiplies and divides in fewer clocks). Although I assume you could make a neat table-driven thing to save some steps in software, I have not looked at the Sun code that everyone uses in a long while. Also, an FPU, soft or hard, keeps the significant bits left-justified, where fixed point generally thinks in terms of right-justified. Left-justified is much easier if you are willing to sacrifice the least significant digits.

I have been thinking about this while watching this discussion and wondering what games could be played at the cost of precision. For example, to keep your 20-32 bits of precision you do need to do 40-64 bit multiplies and divides (you only need 33 bits for an add). What if you tossed the precision first, shifted both operands right by 16, and only did a 32-bit operation? Basically you have only 16 bits of mantissa. It would be like living in a 16-bit world, but you don't have to worry about things like multiplying 65536 times 7. A nice thing about float or pseudo-float is that if your divides are by constants then you can turn them into multiplies. Dividing by 7 is the same as multiplying by 1/7, which you can do with a float system and take advantage of the hardware multiply (sure, you can synthesize this in a fixed system too).
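As a rough sketch of that "toss the precision first" idea (my own illustration with hypothetical names, not dwelch's code), a pseudo-float kept as a separate 32-bit mantissa and exponent can be multiplied with a single 32-bit multiply by keeping only the top 16 bits of each mantissa:

Code:
/* Sketch: pseudo-float multiply, value = mant * 2^exp with mant
   left-justified (top bit set for nonzero values). */
#include <stdint.h>

typedef struct {
    uint32_t mant;   /* left-justified significand */
    int32_t  exp;    /* power-of-two exponent */
} pfloat;

static pfloat pf_mul(pfloat a, pfloat b)
{
    pfloat r;
    uint32_t ha = a.mant >> 16;        /* keep only 16 bits of each mantissa */
    uint32_t hb = b.mant >> 16;
    uint32_t m  = ha * hb;             /* at most 32 significant bits */
    int32_t  e  = a.exp + b.exp + 32;  /* account for the two >>16 shifts */

    /* renormalize so the mantissa stays left-justified */
    while (m && !(m & 0x80000000u)) {
        m <<= 1;
        e--;
    }
    r.mant = m;
    r.exp  = e;
    return r;
}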

It would be a very interesting project to keep the mantissa and exponent separate. Another would be to simply trim the IEEE garbage out of the stock soft FPU. Very few actually understand the whole IEEE spec and its ramifications, much less use it... Alternate formats are faster; I don't know why someone (gcc) doesn't offer an alternative for soft-float situations... Because of IEEE, few if any hard FPUs are bug-free; to claim IEEE compliance they therefore have to monitor everything through a wrapper or trigger and handle certain exceptions in software, thus defeating performance for compliance (or, like Intel, they just let the bugs flow right to the application; few Intel FPUs pass testfloat)... It took our team about 3-4 years to get a fully IEEE-compliant FPU in hardware (passed testfloat level 2). Using the TI floating-point format, it took less than three DAYS to design the hardware and test software and pass a test that exceeded testfloat level 2 (around half a billion test vectors). Granted, we only did multiply and add on that one; give us a month and we could have had div, sqrt, etc. You get an idea of what I am talking about.

TI DSPs take the speed approach; most if not all applications would never know about an overflow, underflow, quiet or signaling NaN, infinity, etc. Get rid of all of it: have overflows give the properly signed max value, and instead of the properly signed infinity on a divide by zero you get the properly signed max value. Quiet and signaling NaNs will drive you crazy, as will the status bits. Most developers don't check for anything: they run their programs, don't get crashes or exceptions, and are happy; when they get a crash or exception they add code to avoid that problem. No different with a non-IEEE format: you hit a max value or zero along the way and you will know about it, go back and change the code to prevent it from happening. I think the only question is do you round or truncate; once you say round, then you have to ask do you go to the extremes of IEEE and offer round up, round down, and round to zero, or do you just pick one?

Anyway, the point was that even single precision has more than 20 bits of mantissa; yes, there are around 7 bits of exponent, that is true, but many, many more than 7 bits of precision in any floating-point number. Hopefully I did not go on this rant because I misunderstood the 7 bits of precision thing; if so, sorry... I have been pretty good about keeping my mouth shut on this one thus far.


Note: if/when testing fixed vs. float it is a good idea to use primes; avoid one and zero, as the FPU will take shortcuts. Also be very careful with your precision:

float a,b,c;

a=b+1.0; is NOT the same as a=b+1.0F;

The second one is faster. In the first, b has to be converted to double, the operation is performed as a double, then the result is converted back to single to store in a. In the latter, both operands are single, the result is single, and it is immediately stored in a.

Even better:

a=b+1;

Depending on your compiler, integers are exact in C; floats are not. Things like this often happen with C compilers:


a=3.0;
b=2.0;
c=1.0;
a=a-b-c; /* mathematically a should now be exactly 0.0 */

if(a!=0.0) printf("something is wrong\n");

Probably not with those specific constants, but very often you will not get the right number. The comparison in the if is doing a bit-level check, not a value check. IEEE allows for plus and minus zero; if the result is minus zero and your hand-coded constant above is turned into a plus zero by the compiler, you won't get the result you expect and may never figure out why. (The TI format has a similar problem with zero.) This doesn't always happen just with zero.

So what you should do is develop a habit of:


int i;

i=(int)a; if(i!=0) printf("something is wrong\n");

or

if(((int)a)!=0) printf("something is wrong\n");

Sorry for the tangent there; years of floating-point work and frustration with the compilers. I am surprised that anyone ever gets floating point to work in C. The odds of the compiler doing what you asked are low when it comes to floating point. This is relevant, I guess: the early posts all said look at the assembler, and the more you look at the assembler for float code, the more you realize what each compiler does and doesn't do. Also remember that according to the testfloat guy (Hauser, I think it was), most of the bugs we see in the hard FPUs today (Intel for example) are in the precision conversion. So make sure that F is on the end of all single-precision constants and not there for any double constants. Speed improvements in soft FPUs and bug reduction in hard FPUs.

Have fun...
 
In IEEE floating point all numbers are stored using "hidden bit" notation. The mantissa is left-shifted until the high-order bit is 1, so all mantissas are of the form 1.xxxxxx; this allows the high-order bit to be discarded and gives the extra precision without taking storage. The sign bit and the exponent bits take up the rest of the space in the floating-point encoding. As dwelch said, the sizes of the fields vary (except for the sign field, 1 bit) between single and double precision. The exceptions in storage are zero (zero exponent and zero mantissa) and the case where all the bits are on in the exponent (if the mantissa = 0, either negative or positive infinity; if the mantissa != 0, NaN, 'not a number').

Floating-point format has the advantage of compact storage and a good window of precision (leading zeros tossed and so on) but requires more complex manipulation (hence the justification for FPUs). Fixed point has the advantage of using standard integer instructions (so no FPU needed), so it is generally faster than FPU operations (x86 FPUs are a stack machine, so loading and extracting values along with the time to do the operations can be slower than the integer equivalents). The limited range and the extra work to keep track of the number formats are the disadvantages of fixed point. There is no "perfect" solution, so you need to pick your poison.

Floating point also has errors that can creep into your operation when combining numbers of different magnitude. Most of the literature uses the term epsilon for the smallest number you can represent in floating point, but when adding or subtracting numbers of different magnitude this does not apply because of the way the numbers are stored (IEEE format). If you take any IEEE floating-point number, zero the mantissa, and (provided the exponent is greater than the width of the mantissa) subtract the mantissa width from the exponent, the resulting number is the value of the last bit in the mantissa, and any number less than that could be added to or subtracted from the original number forever without changing the original value. Most FPUs do the math internally in higher precision than the native format, so some effect would be there internally, but the storage in the native format would leave the original number unchanged. The workaround is to do the floating-point operations on numbers of like magnitude first and then, at the end, do the operations on numbers of unlike magnitude, so the error hits once rather than at each step.
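To see the layout described above, here is a minimal sketch (my own illustration, not from the post) that pulls the sign, exponent, and mantissa fields out of an IEEE single-precision float:

Code:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 1.5f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);        /* reinterpret the float's bits */

    uint32_t sign = bits >> 31;            /* 1 bit */
    uint32_t exp  = (bits >> 23) & 0xFF;   /* 8 bits, biased by 127 */
    uint32_t mant = bits & 0x7FFFFF;       /* 23 bits, hidden leading 1 */

    printf("sign=%u exp=%u (unbiased %d) mant=0x%06X\n",
           sign, exp, (int)exp - 127, mant);
    return 0;
}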

Since we don't have an FPU, conventional wisdom says use fixed point, but it depends on the libraries available and the shortcuts the tools (compiler and such) can take for us. Any constant expression will be evaluated at compile time, so those operations are not good runtime tests (since the value is already calculated). Any values calculated at initialization time will generally not have an impact on the performance of the later parts of our code. So we need to concentrate on the numbers we use in the performance-critical parts of our code and know what precision we need, so we can make good guesstimates of which approach to use.
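A minimal sketch of that constant-folding pitfall (my own illustration; the function names are made up): the first multiply is computed by the compiler and measures nothing, while marking the inputs volatile forces a genuine runtime operation.

Code:
float folded(void)
{
    return 3.0f * 7.0f;                /* folded to 21.0f at compile time */
}

volatile float va = 3.0f, vb = 7.0f;   /* hypothetical test inputs */

float measured(void)
{
    return va * vb;                    /* real multiply at run time */
}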

Long winded and you probably don't care but that's my 2 cents worth.
 
dwelch posted on Feb 3 2007 at 06:58 PM said:
Anyway, the point was that even single precision has more than 20 bits of mantissa; yes, there are around 7 bits of exponent, that is true, but many, many more than 7 bits of precision in any floating-point number. Hopefully I did not go on this rant because I misunderstood the 7 bits of precision thing; if so, sorry... I have been pretty good about keeping my mouth shut on this one thus far.

I said 7 digits of precision, not bits :)

Anyways, your post is very interesting. Indeed, since we can't have a hard FPU, why stick to IEEE 754, and why not make your own soft floats? I don't need it currently because I'm fine with fixed point, but I might need it for another game of mine which uses float coordinates, and possibly for future DSP programs. By the way, do you have any good documentation on doing that?

Funny, I didn't know that decimal constants were double by default, but it kind of makes sense. I'll have to look through my code for this because I know I've been doing that.

And Gary Miller is right about the first hidden bit. It's there for both IEEE floats and doubles, and it keeps you from having multiple binary representations for the same number. And yeah, I've already dealt with adding one float to another where one was smaller than the smallest value that could change the other, so the sum never changed and I had to move on to doubles.
 
To keep from switching to doubles you can do things like this.

Potentially large error:
Code:
float other_large_value;

// Assume table[i] is a small value

// Error accumulates each loop iteration because of the magnitude difference
for (i = 0; i < max; i++)
  other_large_value += table[i];

Smaller error:
Code:
float total = 0.0f;
float other_large_value;

// Lower potential error because the values are of like magnitude
for (i = 0; i < max; i++)
  total += table[i];

// Now add the values of different magnitude once
other_large_value += total;
 
A_SN posted on Feb 3 2007 at 02:35 PM said:
dwelch posted on Feb 3 2007 at 06:58 PM said:
Anyway, the point was that even single precision has more than 20 bits of mantissa; yes, there are around 7 bits of exponent, that is true, but many, many more than 7 bits of precision in any floating-point number. Hopefully I did not go on this rant because I misunderstood the 7 bits of precision thing; if so, sorry... I have been pretty good about keeping my mouth shut on this one thus far.

I said 7 digits of precision, not bits :)

Anyways, your post is very interesting. Indeed, since we can't have a hard FPU, why stick to IEEE 754, and why not make your own soft floats? I don't need it currently because I'm fine with fixed point, but I might need it for another game of mine which uses float coordinates, and possibly for future DSP programs. By the way, do you have any good documentation on doing that?

Funny, I didn't know that decimal constants were double by default, but it kind of makes sense. I'll have to look through my code for this because I know I've been doing that.

And Gary Miller is right about the first hidden bit. It's there for both IEEE floats and doubles, and it keeps you from having multiple binary representations for the same number. And yeah, I've already dealt with adding one float to another where one was smaller than the smallest value that could change the other, so the sum never changed and I had to move on to doubles.

So I did misunderstand, sorry. I think in bits and hex, not decimal (which makes it very hard to balance my checkbook and my timecard).

One of the TI DSP documents talked about the float format they use, and even had detailed instructions on how the add and multiply worked: shift this operand or that operand right until the exponents match, do the addition, normalize, etc. I assume the IEEE 754 spec has to be purchased; not sure if it is published online.

I started off on a long paragraph, but basically I see that Wikipedia has some good-looking information on IEEE floating-point numbers. They specifically walk you through going from an ASCII or decimal-notation number to the bits in a double, and you can see that the most significant 1 in the mantissa, the one to the left of the decimal point, is stripped off, making room for one more bit of precision. I think that double and extended have this implied one and singles waste the bit; I will have to find my spec, or dig into the Wikipedia references, or just convert numbers and print out the bits on some computer...

Somewhere it was determined that the standard floating-point operation in C is double precision, so if you use a floating-point constant (one with a decimal point in it), that constant is assumed to be double unless you put an F on the end of it. Fixed or float, C uses the highest precision present for any operation, so if the constant is double and the variable is single, the operation is performed as a double; if the result is a single then it has to be converted back down. You can see how this will kill you with a soft FPU, and/or you can take someone's slow code and maybe make it run much faster with this knowledge. It turns out the Whetstone benchmark, FWIW, is excellent for testing a compiler/FPU: it is HORRIBLY written, a near-constant stream of mixed-precision operations. The tests themselves are done nicely, but the code that wraps around them keeping stats on how the test is going is quite mixed.

The FPU we designed supported single, double, and extended (double) precision. If you start thinking about how many tests you might want to run on each operation, multiply that by three rounding modes, then multiply by two if it is a dyadic (two operands, one result; sqrt is monadic, one in, one out). Already you are swamped; even with single precision, the fastest computer today would take many centuries to execute every combination. Now multiply that by six to handle multiple precisions. Some FPUs let you put a single in one register and a double in another and do a single multiply, which implies a precision conversion. We refused to support that because it multiplies the number of tests by six (single = single + single, single = single + double, single = single + extended, single = double + double, single = double + extended, single = extended + extended). Anyway, I had to hack the gcc backend to cause explicit precision conversions (instead of single = single + double, I would replace that with single = double (explicit precision conversion), then single = single + single), and the short end to a long story is I am extremely sensitive to mixed precision in C code. And I like to give out uninvited free advice about it when I can...
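A tiny sketch of that point about explicit conversions (my own illustration, hypothetical function names):

Code:
/* With a soft FPU the first form forces a single->double conversion,
   a double add, and a double->single conversion; the second converts
   once and does the add entirely in single precision. */
float mixed(float s, double d)
{
    return s + d;          /* s promoted, operation done in double */
}

float single_only(float s, double d)
{
    return s + (float)d;   /* explicit conversion, operation done in single */
}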
 
This is a C version of add from an older gcc; it looks like the newer gccs use hand-coded assembler for each platform.

It looks like you can get a little improvement by removing rounding. Overflow and underflow would still require an if-then-else (not sure I see overflow and underflow handled in the C code here). The hand-written assembler in gcc 4.1.1 (gcc-4.1.1/gcc/config/arm/ieee754-sf.S) does appear to handle such things. Anyway, removing the IEEE compliance stuff is not going to give a huge performance gain; something small but measurable.

Would love to hear more about fixed shortcuts from anyone willing to pitch in...

/* add two floats */
float
__addsf3 (float a1, float a2)
{
  long mant1, mant2;
  union float_long fl1, fl2;
  int exp1, exp2;
  int sign = 0;

  fl1.f = a1;
  fl2.f = a2;

  /* check for zero args */
  if (!fl1.l) {
    fl1.f = fl2.f;
    goto test_done;
  }
  if (!fl2.l)
    goto test_done;

  exp1 = EXP (fl1.l);
  exp2 = EXP (fl2.l);

  if (exp1 > exp2 + 25)
    goto test_done;
  if (exp2 > exp1 + 25) {
    fl1.f = fl2.f;
    goto test_done;
  }

  /* do everything in excess precision so's we can round later */
  mant1 = MANT (fl1.l) << 6;
  mant2 = MANT (fl2.l) << 6;

  if (SIGN (fl1.l))
    mant1 = -mant1;
  if (SIGN (fl2.l))
    mant2 = -mant2;

  if (exp1 > exp2)
    {
      mant2 >>= exp1 - exp2;
    }
  else
    {
      mant1 >>= exp2 - exp1;
      exp1 = exp2;
    }
  mant1 += mant2;

  if (mant1 < 0)
    {
      mant1 = -mant1;
      sign = SIGNBIT;
    }
  else if (!mant1) {
    fl1.f = 0;
    goto test_done;
  }

  /* normalize up */
  while (!(mant1 & 0xE0000000))
    {
      mant1 <<= 1;
      exp1--;
    }

  /* normalize down? */
  if (mant1 & (1 << 30))
    {
      mant1 >>= 1;
      exp1++;
    }

  /* round to even */
  mant1 += (mant1 & 0x40) ? 0x20 : 0x1F;

  /* normalize down? */
  if (mant1 & (1 << 30))
    {
      mant1 >>= 1;
      exp1++;
    }

  /* lose extra precision */
  mant1 >>= 6;

  /* turn off hidden bit */
  mant1 &= ~HIDDEN;

  /* pack up and go home */
  fl1.l = PACK (sign, exp1, mant1);
 test_done:
  return (fl1.f);
}
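For anyone trying to read that: the snippet relies on a few definitions that were not included. Roughly, they look like this (a sketch reconstructed from memory of the old gcc float.c, so treat the details as assumptions rather than the verified source):

Code:
/* Reconstruction of the helpers the snippet assumes (unverified sketch). */
union float_long {
    float f;
    long  l;
};

#define SIGNBIT     0x80000000L
#define HIDDEN      (1L << 23)                      /* implied leading 1 */
#define SIGN(fp)    ((fp) & SIGNBIT)
#define EXP(fp)     (((fp) >> 23) & 0xFF)           /* biased exponent */
#define MANT(fp)    (((fp) & 0x7FFFFFL) | HIDDEN)   /* mantissa with hidden bit */
#define PACK(s,e,m) ((s) | ((long)(e) << 23) | (m))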
 
dwelch posted on Feb 6 2007 at 04:57 AM said:
This is a C version of add from an older gcc; it looks like the newer gccs use hand-coded assembler for each platform.

It looks like you can get a little improvement by removing rounding. Overflow and underflow would still require an if-then-else (not sure I see overflow and underflow handled in the C code here). The hand-written assembler in gcc 4.1.1 (gcc-4.1.1/gcc/config/arm/ieee754-sf.S) does appear to handle such things. Anyway, removing the IEEE compliance stuff is not going to give a huge performance gain; something small but measurable.

Would love to hear more about fixed shortcuts from anyone willing to pitch in...

(quoted __addsf3 code snipped; see the post above)

Interesting; unfortunately there are too many things missing to figure out the code, but it seems to be pretty straightforward. I should think about looking into the gcc code more often.
 