About Floating Point On The Pandora


dflemstr
OK, I know that this has been mentioned and discussed before, but there's never been a general consensus on the subject, nor a centralized source of information. I therefore want to put a few specific questions out there that I hope will be answered quickly, so that this thread can die a quick and painless death.
  1. Is there a significant performance penalty for using floats on the Pandora (significant enough to matter for those of us who don't count instruction cycles manually)?
  2. If so, how big is the impact, specifically? E.g., by what percentage can I expect performance to drop in float-dense code (relative to a processor with a full-fledged pipelined FPU)?
  3. Can I use some kind of special compiler, e.g. the CodeSourcery 2007q3 one that supposedly possesses magical powers, or another one that will delegate float operations to a coprocessor at the flip of an "-f" switch, so that floats perform better?
  4. Are explicit (compiler-implemented) softfloats to be preferred?
  5. If floats are absolutely out of the question, can anyone recommend a good fixed-point C math library that has about the same functionality as math.h?
  6. There was a thread about this a while ago, but what became of it? What do I as a developer gain by using that library?
Thanks, and please excuse the hasty, practical approach to this issue... I'm kind of in a hurry.
 
1. there's a significant performance penalty compared to a properly-pipelined FPU. whether that's a deal breaker for you depends entirely on what you intend to do.
2. depends on the processor you compare the A8's VFP to. most mobile-class CPUs are not exactly FP monsters anyway. the best such CPU i've seen myself had an FP throughput of 2 ops / 3 clocks (apparently fully pipelined) - clearly better than the A8's VFP, no argument there, but a modern desktop super-scalar FPU would still easily beat that under many scenarios. so if you have some desktop-league FP workload, you may be out of luck on the A8.
4. just stick to VFP's RunFast mode (if possible in your case) and no softfloat will ever come close performance-wise.
6. you gain certain speed improvements for certain functions commonly found in gcc's math libraries. definitely worth using, if you can live with potential minor precision drops here and there.
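
For reference, here's a minimal sketch of switching the VFP into RunFast mode, assuming GCC inline asm on a VFP-enabled build (e.g. -mfpu=vfp); it sets the flush-to-zero and default-NaN bits in the FPSCR and clears the exception trap enables:

Code:
/* Minimal sketch, assuming GCC inline asm and a VFP-enabled build.
   RunFast mode requires flush-to-zero (FZ, bit 24) and default-NaN
   (DN, bit 25) set, plus all exception trap enables cleared, so the
   VFP can stay in its fast, non-IEEE mode. */
static void enable_runfast(void)
{
    unsigned int fpscr;
    __asm__ volatile("fmrx %0, fpscr" : "=r"(fpscr));
    fpscr |=  (1u << 24) | (1u << 25);  /* FZ + DN */
    fpscr &= ~0x00009f00u;              /* clear trap enable bits 8-12, 15 */
    __asm__ volatile("fmxr fpscr, %0" : : "r"(fpscr));
}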
 
I can tell you my experience from the iPhone 3GS, which uses the same CPU.

Essentially, I am using a lot of handwritten VFP/NEON asm code to handle various transformations, plus C versions of the same code.
The compiler on the 3GS actually generates NEON code (not very optimal and not vectorized, but it does generate genuine NEON code).

OK, here are some very specific numbers...

Transforming a model with 400,000 vertices (position + normal) - I just happened to be testing this stuff recently...

Code:
1. iPhone 3G  (ARM11)      C code         - 270 ms
2. iPhone 3G  (ARM11)      asm VFP code   -  90 ms
3. iPhone 3GS (Cortex-A8)  C code         -  90 ms
4. iPhone 3GS (Cortex-A8)  asm VFP code   - 160 ms
5. iPhone 3GS (Cortex-A8)  asm NEON code  -  40 ms

As you can see, the VFP unit on the Cortex-A8 is really slow... the same hand-tuned VFP code running on the Cortex-A8 is actually almost twice as slow as on the ARM11!
On the other hand, the C code, which IS using basic NEON instructions, is actually almost twice as fast as my VFP asm code on the Cortex-A8.

Of course, reasonably optimized NEON code outperforms everything.


The bottom line is this - if your compiler is able to generate basic NEON float code (as it does on the iPhone 3GS), then your floating point stuff will run very fast.
On the other hand, if it ends up generating VFP code, it will be slow as hell... actually slower than the same code on the ARM11.
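
Whether you get NEON or VFP out of GCC mostly comes down to flags. Here's a plausible invocation for the Pandora's Cortex-A8 (the file name is hypothetical, the flags are real GCC options; exact behavior depends on the GCC version):

Code:
# Hypothetical example. -mfpu=neon makes NEON available; -ftree-vectorize
# plus -ffast-math lets the compiler actually vectorize float loops,
# since NEON drops denormals and strict IEEE semantics and GCC won't
# use it for floats without relaxed math rules.
gcc -O3 -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=softfp \
    -ftree-vectorize -ffast-math -c transform.c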
 
1. Yes.
2. Most x86 processors can issue 2 FP ops per cycle. On the A8's VFP it's one FP op every 7 or 8 cycles (I can't remember exactly).
3. How could a compiler get around a chip limitation? :)
4. No.
5. Unless you are able to do the math to compute error ranges, fixed-point and FP are not interchangeable.
6. The approach used by Adventus is the best one to get good speed out of an A8. Use NEON whenever you can (that is, when you don't need double precision and when you need neither special rounding nor NaN handling). I think his
 
So the big question is, of course, whether the compiler in the Pandora OS suite is going to be able to generate NEON instructions or not. I too am looking for the 'best' FP performance on the Pandora (soft-synths, yo), and I'm not seeing much that will just 'solve' the problem for me... looks like a lot of homework is necessary. Great thread, btw.
 
Hey, thanks for the great replies so far!

Quick amendment to #5, however, since it seems to have been misunderstood in my hurry. I'm not saying that I'm looking for a 1:1 replacement for math.h, but rather a fixedp library that provides similar functionality; e.g. "fixedp_tan(FIXEDP_PI)" would return 0x7fff.ffff instead of some kind of NaN, to hell with standards compliance. The point is that I don't want to create a fixed point library from scratch if there already is one around.

BTW, 16.16 fixedp numbers are enough for my needs, but what about 32.32 - how fast would those theoretically be on the Cortex, or with the support of SIMD stuff?

Another BTW: if there aren't any good fixed point libraries around, I'll be making my own fixed.h in C++ that would allow for something like "typedef fixed<23, 9> my_fixed;", with tight integration with floats and doubles for conversions, so give me a shout if you're interested. No need to reinvent the wheel.
 
dflemstr: I'm interested and am monitoring this thread for progress! I'd like nothing more than for someone else to fix the FP headache on ARM for good... ;)
 
i've written a 16.16 fixed-point math replacement for the wiz - i compile the same app on windows with floating point, and with fixed point on the wiz. works fine so far. interested? :)

*edit*
trenki (lurking around here in the forums) also has some fixed point stuff on his page:
http://www.trenki.net/content/view/17/37/

*edit2*
don't know if it applies to the pandora (don't know much about that), but maybe this http://code.google.com/p/vfpmathlibrary/ is also of interest...
 
crow_riot said:
i've written a 16.16 fixed-point math replacement for the wiz - i compile the same app on windows with floating point, and with fixed point on the wiz. works fine so far. interested? :)

*edit*
trenki (lurking around here in the forums) also has some fixed point stuff on his page:
http://www.trenki.net/content/view/17/37/

*edit2*
don't know if it applies to the pandora (don't know much about that), but maybe this http://code.google.com/p/vfpmathlibrary/ is also of interest...
Trenki's stuff is great... I've been using it for my vectors in Penjin.
 
dflemstr said:
Quick amendment to #5, however, since it seems to have been misunderstood in my hurry. I'm not saying that I'm looking for a 1:1 replacement for math.h, but rather a fixedp library that provides similar functionality; e.g. "fixedp_tan(FIXEDP_PI)" would return 0x7fff.ffff instead of some kind of NaN, to hell with standards compliance. The point is that I don't want to create a fixed point library from scratch if there already is one around.

BTW, 16.16 fixedp numbers are enough for my needs, but what about 32.32 - how fast would those theoretically be on the Cortex, or with the support of SIMD stuff?
32.32 wouldn't be good, as you'd need 64-bit integer operations for it to perform well.

I'd be surprised if any fixed-point implementation was faster than NEON (which you could use, given you don't care about compliance :) ).

EDIT: I meant fixed-point on the ARM core vs. FP on NEON. Perhaps fixed-point on NEON could be good, but probably not faster than FP, so why bother?
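
To make the integer-width point concrete, here's a minimal sketch (names hypothetical) of a Q16.16 multiply: it already needs a 64-bit intermediate on a 32-bit core, so by the same logic a 32.32 multiply would need a 128-bit intermediate, which stdint.h doesn't offer.

Code:
#include <stdint.h>

/* Hypothetical Q16.16 multiply: the product of two 32-bit fixed-point
   values needs a 64-bit intermediate before shifting back down. A
   32.32 format would need a 128-bit intermediate for the same trick,
   which is exactly why it performs poorly. */
typedef int32_t fx16_16;

static inline fx16_16 fx_mul(fx16_16 a, fx16_16 b)
{
    return (fx16_16)(((int64_t)a * b) >> 16);
}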
 
Hey everyone, I checked out your links but wasn't quite satisfied, and it turns out that using floats won't be possible for me after all (for reasons unrelated to anything mentioned here), so I started on my own fixed point implementation that uses some modern C++ features (Trenki's version is awesome, but maybe not for arbitrary precision). Credits go to Evan Teran, who gave me the idea from his own similar implementation.

The current implementation lets you write e.g. "fixedp<true, 8, 24> myvar(5.3);" to get a signed 8.24 fixed point variable with the value 5.3.
You can use any two numbers as the second and third template arguments as long as they add up to 8, 16 or 32 (64 would be supported if stdint.h provided a 128-bit type, which is needed for multiplication), and the first argument is true for signed, false for unsigned.
You can of course add a typedef if you want to save yourself some typing; I've added some by default (sfix8, ufix8, sfix16, etc.). There's no overhead to using the template or the class; the values still use their advertised amounts of memory, and all operations should be optimized.

You'll find the code at http://gist.github.com/294959 (hosted as a Gist), and you can *contribute* to the class by updating the Gist (just click "Fork", make your changes, and I can pull them back in, à la Git).

What's missing now is an implementation of math.h-style functions (which might become a PITA to implement) and some operators, e.g. "(int)+(fixedp)" and so on (currently only "(fixedp)+(int)" is defined).
There are also some bugs related to signedness and bit shifts, but I'll work them out.
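
For readers who don't want to follow the Gist link, here is a minimal sketch of the kind of template described above - not the actual Gist code, just an illustration of the signed-flag/integer-bits/fraction-bits idea, assuming a C++11 compiler and a 32-bit total:

Code:
#include <cstdint>
#include <type_traits>

// Minimal sketch, not the actual Gist code: only 32-bit totals and a
// couple of operators, to show the fixedp<Signed, I, F> idea.
template<bool Signed, int I, int F>
struct fixedp {
    static_assert(I + F == 32, "this sketch only handles 32-bit totals");
    typedef typename std::conditional<Signed, int32_t,  uint32_t>::type raw_t;
    typedef typename std::conditional<Signed, int64_t, uint64_t>::type wide_t;

    raw_t raw;

    fixedp() : raw(0) {}
    explicit fixedp(double d) : raw((raw_t)(d * (double)((int64_t)1 << F))) {}
    double to_double() const { return (double)raw / (double)((int64_t)1 << F); }

    fixedp operator+(fixedp o) const { fixedp r; r.raw = raw + o.raw; return r; }
    fixedp operator*(fixedp o) const {
        fixedp r;
        r.raw = (raw_t)(((wide_t)raw * o.raw) >> F);  // widen, multiply, shift back
        return r;
    }
};

typedef fixedp<true, 8, 24> sfix8_24;   // e.g. sfix8_24 myvar(5.3);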
 
dflemstr said:
What's missing now is an implementation of math.h-style functions (which might become a PITA to implement) and some operators, e.g. "(int)+(fixedp)" and so on (currently only "(fixedp)+(int)" is defined).
There are also some bugs related to signedness and bit shifts, but I'll work them out.
What about mixing fixed point values of different precision? I.e., adding a 16.16 to an 8.24 - how do you decide whether the final value is 16.16 or 8.24? I've found that fixed point is typically a real pain in the arse: you either have to be very aware of the range of each variable, or you have to add overhead to make sure overflow/underflow doesn't occur.

I have absolutely no idea how you could do a decent-precision math.h in fixed point... I suspect it would be slower than cmath, because a lot of the functions use floating point tricks. The first trick in the cmath bible is to reduce the range of the value so the polynomial only needs to be accurate over a subset; often this is achieved by handling the exponent separately from the mantissa (see the sketch just below). The fixed point cmaths I've seen use loops and division, and I suspect they are not very fast or accurate. It would probably be better/faster to convert to float, do the cmath function, then convert back to fixed point.
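
As a concrete illustration of that exponent/mantissa range reduction, here is a sketch using logf (the three-term polynomial is deliberately short and nowhere near production accuracy):

Code:
#include <math.h>

/* Sketch of the exponent/mantissa split for logf: frexpf gives
   x = m * 2^e with m in [0.5, 1), so the polynomial only has to be
   accurate on that small interval; the exponent contributes e*ln(2)
   exactly. A fixed-point format has no exponent field, which is why
   this trick doesn't carry over. */
static float log_sketch(float x)
{
    int e;
    float m = frexpf(x, &e);                     /* x = m * 2^e      */
    float t = m - 1.0f;                          /* t in [-0.5, 0)   */
    float log_m = t - 0.5f*t*t + (t*t*t)/3.0f;   /* ln(1+t), 3 terms */
    return log_m + (float)e * 0.693147181f;      /* + e * ln(2)      */
}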

You might spend your time better by instead writing a class that wraps the NEON intrinsics (I've been thinking about this myself), or simply making your code vector-friendly and using my math library (it handles matrix/vector functions now).
 
I was starting to wonder if I was alone in thinking fixed-point wasn't a good idea :)

Also, dflemstr: look at the assembly generated by your code. Oh, and note that division will be extremely slow, as it will call a library function (there's no hardware integer division on ARM, but you do have a fast reciprocal for FP on NEON).
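
For reference, a sketch of that fast NEON reciprocal using intrinsics (assuming a toolchain that ships arm_neon.h): vrecpe gives a rough estimate, and each vrecps step is one Newton-Raphson refinement.

Code:
#include <arm_neon.h>

/* Sketch: approximate 1/x for four floats at once. vrecpeq_f32 gives
   an ~8-bit estimate; vrecpsq_f32(x, r) computes (2 - x*r), so each
   multiply by it is one Newton-Raphson step, roughly doubling the
   number of correct bits. */
static inline float32x4_t reciprocal4(float32x4_t x)
{
    float32x4_t r = vrecpeq_f32(x);
    r = vmulq_f32(r, vrecpsq_f32(x, r));   /* ~16 bits */
    r = vmulq_f32(r, vrecpsq_f32(x, r));   /* ~full single precision */
    return r;
}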
 
I wish someone would write a NEON FP library in Assembly that can be dropped in as a replacement for math.h .. this would be very useful, but I don't have the skills with NEON to do such a thing .. ah well, one for the list.
 
torpor said:
I wish someone would write a NEON FP library in Assembly that can be dropped in as a replacement for math.h .. this would be very useful, but I don't have the skills with NEON to do such a thing .. ah well, one for the list.
+1 for this. Won't be using floats for my current project, but might do for another.
 
torpor said:
I wish someone would write a NEON FP library in Assembly that can be dropped in as a replacement for math.h .. this would be very useful, but I don't have the skills with NEON to do such a thing .. ah well, one for the list.
I assume by FP you mean fixed point. I don't think it would be very suited to NEON. The fixed point cmath functions I've seen implemented rely on loops to get decent accuracy, and loops do not agree with NEON: every time a NEON-dependent branch is necessary, a NEON register must be transferred to the ARM core... a >20-cycle stall. You might be able to unroll the loops, but you'll probably need many more iterations than float to get decent accuracy, because you cannot range-reduce as effectively.

However, the best point of all is that NEON INT can do 4 adds/cycle (twice what the A8 core can) but only 1 mul/cycle (same as the A8 core)... NEON float does 2 adds/cycle and 2 muls/cycle. If you use NEON, you may as well use floats.
 
I believe they meant floating point. They wouldn't need to use fixed point if fast NEON-accelerated floating point were a drop-in alternative.
 