How to make your ARM device run 20-40% faster


bluramon

I came across this powerdeveloper forum thread today on Hacker News (http://news.ycombinator.com/item?id=3337246) about a significant performance gain in Ubuntu for ARM from changing a compilation option, which made me wonder: the Pandora being an ARM device, would it gain a speed improvement from this compilation option?
 
We've been discussing this in the Forget-Me-Not topic, and yes I feel that there can be substantial performance increases. However, until the toolchains used to build Pandora games and apps are updated to support hard-abi we'll not be able to make use of this option.


D.
 
We've been discussing this in the Forget-Me-Not topic, and yes I feel that there can be substantial performance increases. However, until the toolchains used to build Pandora games and apps are updated to support hard-abi we'll not be able to make use of this option.


D.
Would this always mean a performance increase, or only for certain applications that use the parts of C that would benefit from this, while other applications would be worse off?


How about for those who build on the Pandora (I tend to)? Should this be something I use?
 
It's not only a toolchain thing; all system libraries would have to be rebuilt with hardfp, or they will simply not link. If you rebuild the whole system, all software would have to be rebuilt too, meaning every released .pnd so far would become incompatible and would have to be rebuilt. The SGX drivers would also become incompatible and impossible to use.


A possible solution might be to use something like Debian multiarch, i.e. have both soft- and hard-ABI libraries in the filesystem, but that would probably no longer fit in the Pandora's NAND, not to mention that I'm not aware of any distribution allowing this right now.


In general I really don't think it would make the device 20-40% faster; this only affects a specific subset of programs (those that use floating point heavily AND use lots of functions with float arguments).
 
In general I really don't think it would make the device 20-40% faster; this only affects a specific subset of programs (those that use floating point heavily AND use lots of functions with float arguments).

But I do think that it's a shame we're not able to utilise the Pandora's power as much as we could - of course, I want the best compilation options possible for my port (FMN in particular makes heavy use of FP being passed many thousands of times per frame in a great many procedures and functions and hard-abi might make it a lot more efficient). I'm not sure what other differences there are between the iPhone 3GS's toolchain and the Pandora's (though I'm fairly sure I read that iOS apps now use hard-abi), but apps built for both systems run noticeably slower on the Pandora - and that's just not right.


And because the OS wasn't built with hard-abi from the outset, we're stuck with this problem now? There must be some way to fix this, surely?


D.
 
There must be some way to fix this, surely?
From the sounds of it, changing the underlying libraries breaks everything else in the grand scheme of things.


However, there's nothing stopping individual PNDs from shipping with their whole set of libraries (I've done it for a couple). So, for FMN, you could compile all the libraries it uses with the optimized flag, and ship those in the PND itself. Sure, the PND would be larger, but if it's making the game run better, most people would accept that.
 
There must be some way to fix this, surely?
From the sounds of it, changing the underlying libraries breaks everything else in the grand scheme of things.


However, there's nothing stopping individual PNDs from shipping with their whole set of libraries (I've done it for a couple). So, for FMN, you could compile all the libraries it uses with the optimized flag, and ship those in the PND itself. Sure, the PND would be larger, but if it's making the game run better, most people would accept that.

Certainly looks like that's what we're going to have to do, assuming that those who build the OS make the optimised binaries available... And a compiler that can take advantage of them.


Was there some reason why we didn't use hard-abi from the outset?


D.
 
But I do think that it's a shame we're not able to utilise the Pandora's power as much as we could - of course, I want the best compilation options possible for my port (FMN in particular makes heavy use of FP being passed many thousands of times per frame in a great many procedures and functions and hard-abi might make it a lot more efficient).
You could cheat with this a little and make wrapper C files that #include other files which do float arguments. With that gcc will see all functions at once and be able to inline functions eliminating VFP->ARM moves needed for argument passing and performance penalty that results from it.


Another option is to store all floats in structures and pass pointers to those structures around instead of the float values themselves; the compiler will then be able to load/store them directly from/to VFP, avoiding costly VFP->ARM->VFP moves.
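For illustration, here is a minimal sketch of that struct idea with made-up names (nothing here is from FMN or the firmware): the five floats live in a structure and only its address crosses the call, so the arguments don't need VFP->ARM moves under the softfp ABI.

Code:
/* Sketch only - hypothetical names. Under the softfp ABI each float
 * argument of draw_by_value() travels through an ARM core register
 * (VFP->ARM move in the caller, ARM->VFP move in the callee), while
 * draw_by_pointer() only receives an address and can load the floats
 * from memory straight into VFP registers. */
typedef struct {
    float x, y, w, h, alpha;
} draw_params;

void draw_by_value(float x, float y, float w, float h, float alpha);

void draw_by_pointer(const draw_params *p);

void caller(void)
{
    draw_params p = { 10.0f, 20.0f, 64.0f, 64.0f, 1.0f };
    draw_by_pointer(&p);            /* one pointer instead of five floats */
}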

However, there's nothing stopping individual PNDs from shipping with their whole set of libraries (I've done it for a couple). So, for FMN, you could compile all the libraries it uses with the optimized flag, and ship those in the PND itself. Sure, the PND would be larger, but if it's making the game run better, most people would accept that.
That won't work because FMN uses GLES, and that is only provided as softfp libraries by TI. Last time I checked they had no plans to provide hardfp libraries, but even if they changed their plans it would no longer help, because they dropped support for our chip from their drivers.

Was there some reason why we didn't use hard-abi from the outset?
There were several reasons:


1. gcc did not support the hardfp ABI at the time of release. OK, when the first Pandora shipped there was probably some early support in gcc, but I don't think the entire OS would have built cleanly with it (I don't even know if Angstrom supports it now).


2. The SGX blobs are only provided with the softfp ABI.


Note that "softfp" here only means the ABI, the hardware FPU (VFP) is still fully utilized, it's that the ABI requires to pass arguments as integers, and moving from VFP to integer registers is slow because of the way CortexA8 pipeline was designed.
 
You could cheat with this a little and make wrapper C files that #include other files which do float arguments. With that gcc will see all functions at once and be able to inline functions eliminating VFP->ARM moves needed for argument passing and performance penalty that results from it.

People might be surprised (though some not!) that I have no idea what this means - I'm not a C or C++ coder at all :)


Can you point me to any documentation or examples that I can study? I've never come across this before!

Another option is to store all floats in structures and pass pointers to those structures around instead of the float values themselves; the compiler will then be able to load/store them directly from/to VFP, avoiding costly VFP->ARM->VFP moves.

Now that I can do - though it will require quite a lot of grind to convert all the calls. I'll see what happens.


Thanks for your input :)


D.
 
You could cheat with this a little and make wrapper C files that #include other files which do float arguments. With that gcc will see all functions at once and be able to inline functions eliminating VFP->ARM moves needed for argument passing and performance penalty that results from it.
People might be surprised (though some not!) that I have no idea what this means - I'm not a C or C++ coder at all :)
Basically you usually have many C files in a project:



Code:
example.c
file_with_float_arg_functions.c
file_with_float_function_callers.c
...

and a Makefile:



Code:
...
OBJECTS = example.o file_with_float_arg_functions.o file_with_float_function_callers.o



The idea is to write a new c file (say super.c) that just has this:



Code:
#include "example.c"
#include "file_with_float_arg_functions.c"
#include "file_with_float_function_callers.c"

And change OBJECTS in the Makefile to just super.o.


Now when gcc compiles super.c it will see the callers and the functions from all those C files in one place and can perform an optimization - merging them into a single block of code, eliminating the function calls altogether, along with the float argument penalty.


Of course this heavily depends on the project structure; not all files can be compiled together (some may have symbol clashes and such), and gcc might still decide not to inline because a function is too large or for some other reason.
 
If you're going to try building everything as one module, make all of your functions static. Then gcc is not only able to inline them but can also decide on whatever calling convention makes the most sense instead of sticking to the standard ABI. I don't know if this will make it pass arguments in VFP registers, but it might.
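A tiny sketch of the point about static (invented names, not FMN code): because blend() can only be called from inside this translation unit, gcc is free to inline it or change how its arguments are passed, which it cannot do for an externally visible function.

Code:
/* Sketch only - invented names. */
static float blend(float a, float b, float t)
{
    return a + (b - a) * t;
}

/* The only caller lives in the same file, so gcc may inline blend()
 * or use whatever argument-passing scheme it likes for it. */
float fade_step(float from, float to)
{
    return blend(from, to, 0.25f);
}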


If you really want to utilize the Pandora's power as much as you can in a situation where the performance of small float operations is important, you should rewrite the code to use NEON assembly over larger input batches, assuming you don't need double precision. You'll never get good performance using VFP.
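For a flavour of what batching work for NEON looks like, here is a sketch using NEON intrinsics rather than hand-written assembly (invented function, single precision only; it assumes count is a multiple of 4 and that -mfpu=neon is passed to gcc).

Code:
/* Sketch only: scale an array of floats four at a time with NEON. */
#include <arm_neon.h>

void scale_batch(float *dst, const float *src, int count, float factor)
{
    float32x4_t f = vdupq_n_f32(factor);        /* broadcast the factor */
    int i;
    for (i = 0; i < count; i += 4) {
        float32x4_t v = vld1q_f32(src + i);     /* load four floats */
        vst1q_f32(dst + i, vmulq_f32(v, f));    /* multiply, store four */
    }
}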
 
Same rules as usual apply; do a profile first and then act on the results. If hard-abi support was all there ready to be used, and you just had to add the compile flags to your project then sure, why not, get the extra performance gains. But typically there will be many (many, many, many) optimizations that can be made using everything currently available before moving onto completely unsupported features (which it sounds like are very unlikely to ever be supported for the current hardware version of the Pandora).


I'm also wondering which functions really are that float heavy. If I had a float-heavy function that got called a large number of times per frame, I would definitely be considering making that function inline, as presumably the function is relatively short. If the function is actually quite long, one would need to consider what the overhead of the argument passing actually is (somewhat large, admittedly, with the stall). If you really can't trust your compiler to inline when you feel it is absolutely critical (and all the hints you throw at the compiler to inline are being ignored), you can always fall back to using a macro; OK, it's a lot uglier, but you can then be sure it will be inlined.
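As a sketch of the two fallbacks mentioned here (hypothetical function; always_inline is a GCC extension):

Code:
/* Sketch only - hypothetical function. The attribute asks gcc to inline
 * regardless of its size heuristics; the macro is the uglier but
 * guaranteed-to-be-expanded fallback. */
static inline __attribute__((always_inline))
float lerp(float a, float b, float t)
{
    return a + (b - a) * t;
}

#define LERP(a, b, t) ((a) + ((b) - (a)) * (t))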


Steve
 
Same rules as usual apply; do a profile first and then act on the results. If hard-abi support was all there ready to be used, and you just had to add the compile flags to your project then sure, why not, get the extra performance gains. But typically there will be many (many, many, many) optimizations that can be made using everything currently available before moving onto completely unsupported features (which it sounds like are very unlikely to ever be supported for the current hardware version of the Pandora).


I'm also wondering which functions really are that float heavy. If I had a float-heavy function that got called a large number of times per frame, I would definitely be considering making that function inline, as presumably the function is relatively short. If the function is actually quite long, one would need to consider what the overhead of the argument passing actually is (somewhat large, admittedly, with the stall). If you really can't trust your compiler to inline when you feel it is absolutely critical (and all the hints you throw at the compiler to inline are being ignored), you can always fall back to using a macro; OK, it's a lot uglier, but you can then be sure it will be inlined.

The DrawImage() function is called by every function that wants to draw something. This means all the sprites, the tilemap and the flowers in the game. The game updates at 60fps, so that's a few hundred to a thousand calls every frame. It takes three ints, one pointer and five floats every time it's called. It then condenses those down into a rect and a pointer, and passes a pointer to the rect along with the other pointer to another function that buffers that rect.
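Tying this back to the struct suggestion above, a hypothetical before/after of that call shape (the names and parameter meanings here are guesses, not the real FMN code):

Code:
/* Hypothetical - not the real FMN signatures. Before: five floats are
 * passed by value, each crossing the softfp call boundary via ARM
 * registers or the stack. */
void DrawImage(int layer, int flags, int frame, void *image,
               float x, float y, float w, float h, float alpha);

/* After: the caller builds the rect itself and passes one pointer, so
 * the floats can stay in memory/VFP the whole way down. */
typedef struct { float x, y, w, h, alpha; } image_rect;

void DrawImageRect(int layer, int flags, int frame, void *image,
                   const image_rect *r);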


Although it's not a significant amount of CPU spent calling this function, it would help enormously if it could be reduced. I'll have a look at what Exophase and Notaz have said and try to implement it. I'm not a very good coder though (C/C++ is possibly the worst syntax I've ever come across, I just cannot understand it), so I may well fail. A compile-flag would be much more to my tastes!


D.
 
Hi,


@ ZXDunny: I hope these new tips will help you improve FMN's speed :)


Bye and many thanks, Magic Sam
 
@ ZXDunny: I hope these new tips will help you improve FMN's speed :)

Hehe :)


I've not done any of the changes detailed above yet, but even so I think you'll be pleasantly surprised by the next version ;-)


D.
 