GP2X Fixed Vs. Float

Which is faster?

  • Fixed-point
  • Floating-point

I can't see how the float math can even be in the same universe as the fixed point math. The fixed point should be WAY faster but I won't belabor the point.

Simulating the FPU in software versus using integer operations should be no contest, with the integer operations winning.
 
So, which one is slower than its floating-point counterpart?
Everything is. Well, as a whole, as used by the function that calls them; I haven't tested the performance of each individually.

I can't believe it.

You're claiming that:

Code:
int32_t a = 12<<8;
int32_t b = 5<<8;
int32_t c = qmul(a,b);

is slower than :

Code:
float a = 12;
float b = 5;
float c = a*b;

??

Please have a look at the assembly for both versions; that should give us a good hint.
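(For the record, I'm assuming qmul here is the usual naive 64-bit 24.8 multiply, something like:)

Code:
/* assumed qmul -- a and b in 24.8, result in 24.8; needs <stdint.h> */
static inline int32_t qmul(int32_t a, int32_t b)
{
	return (int32_t)(((int64_t)a * b) >> 8);
}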

As you've quoted me, I haven't tested the performance of each individually, and I can't claim that qmul is slower than a*b in float. However, as a whole, using qmul, qmul2 and qdiv is slower than using only floats. Maybe the problem is with qdiv, I suppose.

It would be cool if some people knowledgeable about fixed-point arithmetic wrote wiki pages on this topic with example algorithms, so that nobody who tries to use it goes wrong.
 
That qdiv function is a heap of crap, you should be ashamed for writing something so terrible.
 
Ok. I see two reasons why floating point could be faster than fixed point:
1) Your test function is optimised very cleverly by the compiler. If your test is something like:
Code:
a = 3.141592
b = 6.666
c = a * b
the compiler is able to precompute 3.141592 * 6.666 for the floating-point version of this code, so the compiled code will be
Code:
c = 20.941852272
It's the fastest code in the West or anywhere else you want :lol:
In contrast, with a fixed-point version of this code, the compiler is no longer able to precompute the result, so the compiled code will actually do the computation, which is slower than copying a constant.
Conclusion for case 1) --> You MUST check the generated assembler code.
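By the way, a simple way to rule out case 1) is to make the operands volatile, so the compiler cannot precompute anything at all (just a sketch):

Code:
float volatile a = 3.141592f;	/* volatile forces real run-time loads... */
float volatile b = 6.666f;
float c = a * b;	/* ...so this compiles to an actual (soft-float) multiply */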

2) Your algorithm is badly designed. A naive use of fixed point leads to slowness and uncontrolled numerical precision. Show it to us, don't be that shy :p
 
Heh, my guess is it's the optimization. gcc seems to do some very small 'safe' optimizations even when you don't ask for them, as I've found out by going through generated assembler. To make it a valid comparison, you'll have to run the float operations in a separate function, I think, so make a
Code:
float floatmult(float a,float b){return (a*b);}
to do it, and I think that will keep the compiler from optimizing it :)
 
Oh, I'm sorry, my fault for sticking to that 'dynamic vertical reading' :p

I just looked at your comments (I always find reading the comments easier than trying to understand the code, don't you?) and thought that you had fallen for that (very common) mistake.

I don't recall exactly, but AFAIK (that's definitely not for sure) floating-point numbers (dunno if all of them either) have the shift location written somewhere in the value (that's the exact "floating point offset"): a value of 1.5, for example, can have 1 bit of precision (so it may increase only in .5 steps) or 20 bits of precision; it's all governed by the leftmost value and/or simplification (like 1/2 will always have 1-bit precision to get more performance).

Anyway, I'm not quite sure about this; these were conclusions I came to when I was 13 or 14 years old trying to understand floating point on my 386, and that's 13 years ago..

Why don't you produce the assembly code for that sample and try to look at it?
 
As you've quoted me, I haven't tested the performance of each individually, and I can't claim that qmul is slower than a*b in float. However, as a whole, using qmul, qmul2 and qdiv is slower than using only floats. Maybe the problem is with qdiv, I suppose.

It would be cool if some people knowledgeable about fixed-point arithmetic wrote wiki pages on this topic with example algorithms, so that nobody who tries to use it goes wrong.

Your first qdiv version uses gcc's integer division, which is properly optimized and should not be slower than gcc's floating-point division (nor much faster, for that matter).

I suspect there is something wrong with your code, toolchain or compilation flags.

There are many ARM asm division examples on Google, BTW.
 
I have done some testing of my own with (long long) calculations with GCC.
In some simple cases GCC optimizes the code really, really well.
I don't know much about GCC internals, but from what I have understood GCC can emit
small optimized procedures for simple (long long) operations.
So small functions like:
Code:
FP32 mul(FP32 a, FP32 b)
{
  return ((long long)a * (long long)b) >> 16;
}
get some really nice optimization.

However, when those functions get inlined and GCC, for some reason I don't know,
can't emit those optimized code snippets, the result is quite horrible.

That's what I've noticed in some cases with GCC 4.1 anyway; GCC 3.4 seems to handle
it slightly better.

So, one thing to try could be to make sure the functions you profile don't
get inlined, or build with GCC 3 (if you used GCC 4).
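For the first suggestion, GCC's noinline attribute should be enough; a minimal sketch, assuming FP32 is the 16.16 int32_t from the snippet above:

Code:
#include <stdint.h>

typedef int32_t FP32;	/* assumed: 16.16 fixed point */

/* the attribute keeps GCC from inlining the helper you want to profile */
FP32 __attribute__((noinline)) mul(FP32 a, FP32 b)
{
	return ((long long)a * (long long)b) >> 16;
}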
 
Ok. I see two reasons why floating point could be faster than fixed point:
1) Your test function is optimised very cleverly by the compiler. If your test is something like:
Code:
a = 3.141592
b = 6.666
c = a * b
the compiler is able to precompute 3.141592 * 6.666 for the floating-point version of this code, so the compiled code will be
Code:
c = 20.941852272
It's the fastest code in the West or anywhere else you want :lol:
In contrast, with a fixed-point version of this code, the compiler is no longer able to precompute the result, so the compiled code will actually do the computation, which is slower than copying a constant.
Conclusion for case 1) --> You MUST check the generated assembler code.

2) Your algorithm is badly designed. A naive use of fixed point leads to slowness and uncontrolled numerical precision. Show it to us, don't be that shy :p

lol I'm not stupid, I'm not using constants :). Well, now that I think about it I may have a few values that are constant.. heh, I'll have to change that and test again. OK, let's test again.. as low as 13.12 seconds for fixed, 12.54 for floats. No, there's no mistake, and all my values are modified as often as they should be, so there's none of the kind of optimization you mentioned.

Now, depending on the chosen qdiv() function, the shortest one makes the program take 15.52 s while the longest one takes 13.42. The shortest qmul2() takes 13.42 s while the longest takes 13.77, and the shortest qmul() takes 13.42 while the longest takes 13.12 s (didn't even know it was faster, now I'm gonna use that one).

Anyways, my algorithm along with my fixed-point functions are designed to use the necessary precision, but alright, I'll show you my code

Code:
void distance(coord p1, coord p2, coord p3, coord *p0)	//fixed
{
	int32_t u;	//u is in 8.24, the coords are in 24.8

	//u = normalized projection of p3 onto the segment p1-p2; it gets clamped below, and *p0 receives the per-axis distance
	u = qdiv(qmul((p3.x - p1.x), (p2.x - p1.x)) + qmul((p3.y - p1.y), (p2.y - p1.y)), qmul((p2.x - p1.x), (p2.x - p1.x)) + qmul((p2.y - p1.y), (p2.y - p1.y)));
	if (u<0)
	{
		p0->x = p3.x - p1.x;
		p0->y = p3.y - p1.y;
	}
	else
		if (u>(1<<24))
		{
			p0->x = p3.x - p2.x;
			p0->y = p3.y - p2.y;
		}
		else
		{
			p0->x = p1.x + qmul2(u, (p2.x - p1.x));
			p0->x -= p3.x;
			p0->y = p1.y + qmul2(u, (p2.y - p1.y));
			p0->y -= p3.y;
		}

	if (p0->x<0)	//negativity check
		p0->x = -p0->x;
	if (p0->y<0)	//negativity check
		p0->y = -p0->y;
}

coord is made of either int32_t x, y or float x, y, depending on whether I'm testing fixed or floats. Now here's the float version of it

Code:
void distance(coord p1, coord p2, coord p3, coord *p0)	//float
{
	float u;

	u = ((p3.x - p1.x) * (p2.x - p1.x) + (p3.y - p1.y) * (p2.y - p1.y)) / ((p2.x - p1.x) * (p2.x - p1.x) + (p2.y - p1.y) * (p2.y - p1.y));
	if (u<0.0)
	{
		p0->x = p3.x - p1.x;
		p0->y = p3.y - p1.y;
	}
	else
		if (u>1.0)
		{
			p0->x = p3.x - p2.x;
			p0->y = p3.y - p2.y;
		}
		else
		{
			p0->x = p1.x + u * (p2.x - p1.x);
			p0->x -= p3.x;
			p0->y = p1.y + u * (p2.y - p1.y);
			p0->y -= p3.y;
		}

	if (p0->x<0)	//negativity check
		p0->x = -p0->x;
	if (p0->y<0)	//negativity check
		p0->y = -p0->y;
}

This function is called with the following code in main(), in a loop (and yes, I know this is highly dumb and inefficient, but it was all about checking whether the results were good, not making something efficient):

Code:
		p2.y = (120 + (i<<4))%240 << 8;
		p2.x = (160 + (i<<4))%320 << 8;
		p1.y = (0 + (i<<3))%240 << 8;
		p1.x = (0 + (i<<2))%320 << 8;

		for (iy=0; iy<240; iy++)
			for (ix=0; ix<320; ix++)
			{
				p3.x = ix<<8;
				p3.y = iy<<8;
				distance(p1, p2, p3, &p0);
				if (p0.x<GAUSS_SIZE && p0.y<GAUSS_SIZE)
				{
					#ifdef fixed
					g_val = g_lut[p0.y*GAUSS_SIZE + p0.x];
					#else
					g_val = g_lut[((int32_t) p0.y)*GAUSS_SIZE + ((int32_t) p0.x)];
					#endif
					screen16[iy*320 + ix] = ((g_val&248)<<8) | ((g_val&252)<<3) | ((g_val&248)>>3);
				}
				else
					screen16[iy*320 + ix] = 0xffff;
			}
 
Heh, my guess is it's the optimization. gcc seems to do some very small 'safe' optimizations even when you don't ask for them, as I've found out by going through generated assembler. To make it a valid comparison, you'll have to run the float operations in a separate function, I think, so make a
Code:
float floatmult(float a,float b){return (a*b);}
to do it, and I think that will keep the compiler from optimizing it :)

I see what you mean, but I'd rather keep these functions inlined, since I wouldn't call them through separate functions in a real implementation, so testing it that way would be irrelevant.

Lint: The 32-bit IEEE-754 float has a 1-bit sign, an 8-bit excess-127 exponent and a 23-bit mantissa, I know this :)

Your first qdiv version uses gcc's integer division, which is properly optimized and should not be slower than gcc's floating-point division (nor much faster, for that matter).

I suspect there is something wrong with your code, toolchain or compilation flags.

There are many ARM asm division examples on Google, BTW.

As I've just tested, my first qdiv() version is god-awfully slow, at least compared to my other version of qdiv(). As for my compilation flags, I use -lm -O3 -msoft-float -ffast-math (the last two are only useful for the float version, of course).

mithris: yeah, I heard that gcc 4 handles long longs badly compared to gcc 3 on ARM, so I guess I could optimize by hand over there, dunno..
 
A_SN said:
Code:
	int32_t u;	//u is in 8.24

	u = qdiv(qmul((p3.x - p1.x), (p2.x - p1.x)) + qmul((p3.y - p1.y), (p2.y - p1.y)), qmul((p2.x - p1.x), (p2.x - p1.x)) + qmul((p2.y - p1.y), (p2.y - p1.y)));

Code:
	float u;

	u = ((p3.x - p1.x) * (p2.x - p1.x) + (p3.y - p1.y) * (p2.y - p1.y)) / ((p2.x - p1.x) * (p2.x - p1.x) + (p2.y - p1.y) * (p2.y - p1.y));

One thing that may matter is that gcc probably did not recognize the squaring in your fixed-point computation.
Try writing a specific fixed-point square routine and let us know how it goes :)
 
Just to say that I reproduced the same behavior: your code runs here (400MHz ARM920T, obviously not a GP2X :)) approximately 10% faster with libfloat than with fixed point.

I'm investigating. Very interesting.
 
rixed: Have you looked at the generated code? ... This is really strange. Could the soft-float and fast-math optimizations be doing things without the float library?
 
OK, understood!

The softfloat library has a _big_ advantage here: nowhere does it have to multiply or divide 64-bit numbers.

Take qmul for example (return (int32_t) (((int64_t) a*b) >> 8)): although a single smull should be enough, because both a and b are 32 bits, in C we cannot have operands of a different size than the result, so we cast a to 64 bits, which, as a consequence, implies that b will also be cast to 64 bits and the mul will in fact multiply two 64-bit integers, giving a 64-bit result. This version of qmul leads to 2 MULs instead of one.

Take now qdiv (return (int32_t)((((int64_t) a << 32) / b) >> 8)): here again we are asking GCC to divide two 64-bit integers, although we only have 32 significant bits per value.

Not using the 64-bit cast in qdiv yields an improvement of about 40% in speed, while removing the 64-bit cast from qmul adds roughly another 10% (very rough numbers). The result is then twice as fast, which is much faster than the float version. But, of course, the results are then wrong :)
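Written out in full, the naive long long versions under discussion would be roughly these (qmul and qdiv as quoted above; qmul2 is my guess from the 8.24 format of u, assuming 24.8 coordinates):

Code:
/* sketch of the slow 64-bit versions -- not A_SN's exact code */
static inline int32_t qmul(int32_t a, int32_t b)	/* 24.8 * 24.8 -> 24.8 */
{
	return (int32_t)(((int64_t)a * b) >> 8);
}

static inline int32_t qmul2(int32_t a, int32_t b)	/* 8.24 * 24.8 -> 24.8 (guessed) */
{
	return (int32_t)(((int64_t)a * b) >> 24);
}

static inline int32_t qdiv(int32_t a, int32_t b)	/* 24.8 / 24.8 -> 8.24 */
{
	return (int32_t)((((int64_t)a << 32) / b) >> 8);
}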

What you need: code an inlined qmul function in asm that uses smull (very easy).
Then, code your own divide that does not use int64_t integers (should be easy in C, more tricky in asm).

And I will do exactly the same in gpu940: I just checked that my Fix_mul function suffers from the same problem :)

Thank you for drawing some attention to this!
 
Code:
	int32_t u;	//u is in 8.24

	u = qdiv(qmul((p3.x - p1.x), (p2.x - p1.x)) + qmul((p3.y - p1.y), (p2.y - p1.y)), qmul((p2.x - p1.x), (p2.x - p1.x)) + qmul((p2.y - p1.y), (p2.y - p1.y)));

Code:
	float u;

	u = ((p3.x - p1.x) * (p2.x - p1.x) + (p3.y - p1.y) * (p2.y - p1.y)) / ((p2.x - p1.x) * (p2.x - p1.x) + (p2.y - p1.y) * (p2.y - p1.y));

One thing that may matter is that gcc probably did not recognize the squaring in your fixed-point computation.
Try writing a specific fixed-point square routine and let us know how it goes :)

Interesting idea, but how should my square function do it? Just like a multiplication?

rixed posted on Feb 1 2007 at 12:22 AM said:
OK, understood!

The softfloat library has a _big_ advantage here: nowhere does it have to multiply or divide 64-bit numbers.

Take qmul for example (return (int32_t) (((int64_t) a*b) >> 8)): although a single smull should be enough, because both a and b are 32 bits, in C we cannot have operands of a different size than the result, so we cast a to 64 bits, which, as a consequence, implies that b will also be cast to 64 bits and the mul will in fact multiply two 64-bit integers, giving a 64-bit result. This version of qmul leads to 2 MULs instead of one.

Take now qdiv (return (int32_t)((((int64_t) a << 32) / b) >> 8)): here again we are asking GCC to divide two 64-bit integers, although we only have 32 significant bits per value.

Not using the 64-bit cast in qdiv yields an improvement of about 40% in speed, while removing the 64-bit cast from qmul adds roughly another 10% (very rough numbers). The result is then twice as fast, which is much faster than the float version. But, of course, the results are then wrong :)

What you need: code an inlined qmul function in asm that uses smull (very easy).
Then, code your own divide that does not use int64_t integers (should be easy in C, more tricky in asm).

And I will do exactly the same in gpu940: I just checked that my Fix_mul function suffers from the same problem :)

Thank you for drawing some attention to this!

Interesting, but as I said earlier, the other version of qmul (the one that doesn't use a long long) is faster. Same for qdiv: the other version, which doesn't use a long long or a divide, is much faster. However, it's the shortest version of qmul2 that is the fastest, because the longest version has to perform a whole other fixed-point multiplication inside itself, with either a 64-bit multiplication and shift or a function similar to the longest version of qmul. This is because at some point it has to multiply the fractional part of the 8.24 number by the integer part of the 24.8 number, which can't be done simply since it would need 48 bits.
 
Depending on how accurate you need the result to be, you could drop some fractional bits to reduce the chance of overflow. That might meet your needs, but as always, fixed-point math will have some issues with accuracy. If the integer part of the 24.8 times the integer part of the 8.24 will not exceed 32767 or be less than -32768, then you could right-shift the 8.24 by 16 to make it an 8.8, and then do the multiply by the 24.8 to produce a number that will fit in a 16.16. Based on the number ranges, you could play with both or just one of the numbers to reduce the chance of overflow.
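A minimal sketch of what that looks like in code (the names are made up, and the result comes out in 16.16, so it would need a >> 8 to go back to 24.8 coordinates):

Code:
/* drop 16 fractional bits from the 8.24 value so the product fits in 32 bits */
static inline int32_t qmul2_small(int32_t u_8_24, int32_t d_24_8)
{
	int32_t u_8_8 = u_8_24 >> 16;	/* 8.24 -> 8.8, loses precision */
	return u_8_8 * d_24_8;	/* 8.8 * 24.8 -> 16.16, valid only if the product fits */
}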
 
A_SN posted on Feb 1 2007 at 07:24 AM said:
Interesting, but as I said earlier, the other version of qmul (the one that doesn't use a long long) is faster.

The one that needs 4 muls? Forget it.

Here is a simple qmul that does 16.16 x 16.16 in a single mul:

Code:
static inline int32_t Fix_mul(int32_t a, int32_t b) {
		int32_t ret;
		__asm__(
				"smull %[lo], r1, %[a], %[b]\n"	// 64-bit product in r1:lo
				"mov %[lo], %[lo], lsr #16\n"	// keep the 16 fractional bits
				"orr %[lo], %[lo], r1, lsl #16\n"	// merge in the high part
				: [lo] "=&r" (ret)	// early clobber: smull needs RdLo, RdHi and Rm in distinct registers
				: [a] "r" (a), [b] "r" (b)
				: "r1"
		);
		return ret;
}
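The same thing for A_SN's 24.8 format would just change the shift amounts (a sketch along the same lines, not tested; the name is only for the example):

Code:
static inline int32_t qmul_24_8(int32_t a, int32_t b) {
		int32_t ret;
		__asm__(
				"smull %[lo], r1, %[a], %[b]\n"	// 64-bit product in r1:lo
				"mov %[lo], %[lo], lsr #8\n"	// keep 8 fractional bits
				"orr %[lo], %[lo], r1, lsl #24\n"	// merge in the high part
				: [lo] "=&r" (ret)
				: [a] "r" (a), [b] "r" (b)
				: "r1"
		);
		return ret;
}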

Same for qdiv: the other version, which doesn't use a long long or a divide, is much faster.

I haven't tested the handmade qdiv. I'm going to, so more on that later.
 
Try this:

Code:
#include <stdio.h>
#include <stdint.h>

static inline uint32_t qdiv(uint32_t R, uint32_t D)
{
		if (R < D) return 0;
		// plain shift-and-subtract division: first align D's top bit under R's...
		uint32_t normD;
		uint64_t next_normD = D;	// 64 bits so the alignment loop cannot overflow
		uint32_t Q = 0;
		uint32_t e = 0, next_e = 1;	// e = quotient bit that normD currently represents
		do {
				normD = next_normD;
				e = next_e;
				next_normD <<= 1U;
				next_e <<= 1U;
		} while (next_normD <= R);
		do {
				// ...then subtract back down, setting one quotient bit per step
				// (can not be done more than 32 times)
				if (normD <= R) {
						Q += e;
						R -= normD;
				}
				if (R < D) return Q;
				normD >>= 1U;
				e >>= 1U;
		} while (1);
}

int main(void)
{
		unsigned i;
#	   define RUN 9999999
		for (i=1; i<RUN; i++) {
				uint32_t volatile ax, bx, q;
				ax = 1<<27;
				bx = i;
				q = ax/bx;
		}
		printf("GCC done\n\n\n\n");
		for (i=1; i<RUN; i++) {
				uint32_t volatile ax, bx, q;
				ax = 1<<27;
				bx = i;
				q = qdiv(ax, bx);
		}
		printf("QDIV done\n\n\n\n");
		for (i=1; i<RUN; i++) {
				float volatile af, bf, q;
				af = 1<<27;
				bf = i;
				q = af/bf;
		}
		printf("SOFTFLOAT done\n\n\n\n");
		return 0;
}

GCC integer division is the clear winner.
My small div function is about twice as slow.
Float divs are way behind.

I tried your qdiv but got strange results (plus it was slower). You should probably run this test yourself.
 
Gary Miller posted on Feb 1 2007 at 02:21 PM said:
Depending on how accurate you need the result to be, you could drop some fractional bits to reduce the chance of overflow. That might meet your needs, but as always, fixed-point math will have some issues with accuracy. If the integer part of the 24.8 times the integer part of the 8.24 will not exceed 32767 or be less than -32768, then you could right-shift the 8.24 by 16 to make it an 8.8, and then do the multiply by the 24.8 to produce a number that will fit in a 16.16. Based on the number ranges, you could play with both or just one of the numbers to reduce the chance of overflow.

Actually, my fixed-point math gives results much more precise than with floats; this is because floats only have about 7 significant digits. I think you're right, though: I need to determine how much precision I really need. That being said, my final algorithm will be completely different from this one anyways.

rixed: yeah, I guess I could try to make it a 16.16 * 16.16 multiplication instead. I don't really understand the mix of asm and C (usually I deal with functions in separate .s files); I think I'll have to compile your function to figure it out.

As for your qdiv function, yours is unsigned, mine is signed. BTW, what are your input and output fixed-point formats? Did you try both of my qdiv functions? As for the strange results you got, did you make sure that both your inputs were in 24.8 and that your result was in 8.24?
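In case you want a signed version to compare against without rewriting yours, wrapping your unsigned qdiv should be enough; just a sketch, it only deals with the sign, not with the 24.8 -> 8.24 scaling I use:

Code:
static inline int32_t qdiv_signed(int32_t a, int32_t b)
{
	uint32_t ua = a < 0 ? 0u - (uint32_t)a : (uint32_t)a;	/* magnitudes */
	uint32_t ub = b < 0 ? 0u - (uint32_t)b : (uint32_t)b;
	uint32_t q = qdiv(ua, ub);	/* the unsigned divide above */
	return ((a < 0) != (b < 0)) ? -(int32_t)q : (int32_t)q;
}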
 