So slow on an actual Pandora


Miner49er

Hi folks,


(forgive me if I've asked this exact same question before)


I've been working on a game for a while now. It's a full-screen-scrolling platformer using SDL for graphics/input. On my computer at home it uses about 4-9% CPU (quad-core 2.5 GHz CPU with Intel graphics), which is just fine and dandy.


Last night, I compiled it for my pandy and was saddened to see it use upwards of 90% CPU!! And it was noticeably slower than on my computer.


This makes me very sad, as I'd worked to get CPU usage down as low as possible.


Can _anybody_ give me any advice/tips on how to improve this?


So as to not waste time, here's what I've done so far:


1. Converted all loaded graphics to the screen's bit depth (bpp) - see the sketch after this list


2. Removed the delay from the main loop (for the Pandora), so in theory it should be able to use 100% CPU
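
Here's roughly what I mean by point 1, as a sketch assuming SDL 1.2 with SDL_image (load_converted is just a name I made up):

Code:
#include <SDL/SDL.h>
#include <SDL/SDL_image.h>

/* Convert a surface to the screen's format once, at load time,
   so later blits skip the per-pixel conversion. Call this after
   SDL_SetVideoMode() has established the display format. */
SDL_Surface *load_converted(const char *path)
{
    SDL_Surface *raw = IMG_Load(path);          /* whatever format is on disk */
    if (!raw)
        return NULL;
    SDL_Surface *fast = SDL_DisplayFormat(raw); /* match the screen's bpp */
    SDL_FreeSurface(raw);
    return fast;                                /* NULL if conversion failed */
}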


If anyone has any clues or tips I would very much appreciate it :)


cheers,


m
 
Well there could be many reasons, maybe you're using lots of float math? Alpha blending? Something like SDL_image (I think), which is slow on ARM? What are your compile flags?


You could try some profiling tools like oprofile or perf, or you could give me a non-stripped binary and I could have a go with those.
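
For example, with perf (an illustrative invocation - the binary name is a placeholder):

Code:
perf record ./platformer    # sample where the time is spent
perf report                 # browse the hottest functions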


Edit: also, you're likely only getting 90% because you're running in a window; ~10% is taken by X to copy your window contents to the screen. If you use "my" SDL in fullscreen mode (included in the firmware), you can remove that overhead, see http://notaz.gp2x.de/cgi-bin/gitweb.cgi?p=sdl_omap.git;a=blob;f=README.OMAP .
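
For what it's worth, a minimal sketch of requesting fullscreen under SDL 1.2 (800x480 is the Pandora's panel resolution; 16 bpp is an assumption):

Code:
/* Ask for a fullscreen mode so blits go straight to the display
   instead of through an X window copy. */
SDL_Surface *screen = SDL_SetVideoMode(800, 480, 16,
                                       SDL_SWSURFACE | SDL_FULLSCREEN);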
 
Not saying this works for all programs, but the performance of my Fachoda-Complex port was greatly improved by setting the compiler to unroll loops. Add this flag (-funroll-loops) to the appropriate environment variables, or hack it into the configure file if you're using one.
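
For example, with a typical autoconf-style build (illustrative - adjust to whatever build system you use):

Code:
CFLAGS="-O2 -funroll-loops" ./configure
make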
 
Using notaz' SDL also makes most blitting much faster, so that should help too. Also, of course, don't forget to use compiler flags to optimize for the Pandora's specific CPU.


Other than that, it's probably best to use a profiler to see what the major bottlenecks are and try to improve those.
 
Well there could be many reasons, maybe you're using lots of float math? Alpha blending? Something like SDL_image (I think), which is slow on ARM? What are your compile flags?


You could try some profiling tools like oprofile or perf, or you could give me a non-stripped binary and I could have a go with those.


Edit: also, you're likely only getting 90% because you're running in a window; ~10% is taken by X to copy your window contents to the screen. If you use "my" SDL in fullscreen mode (included in the firmware), you can remove that overhead, see http://notaz.gp2x.de/cgi-bin/gitweb.cgi?p=sdl_omap.git;a=blob;f=README.OMAP .

I'm not using any floats, so it's not that. Also not using alpha blending...but I am using the color-key functionality.
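
For reference, the keyed surfaces are set up roughly like this (a sketch - 'sprite' and the magenta key are made-up names). One thing I haven't tried yet is the SDL_RLEACCEL flag, which is supposed to make keyed blits cheaper:

Code:
/* Enable the colour key with RLE acceleration: runs of transparent
   pixels are then skipped wholesale during blits. */
SDL_SetColorKey(sprite, SDL_SRCCOLORKEY | SDL_RLEACCEL,
                SDL_MapRGB(sprite->format, 255, 0, 255));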


I am, however, relying quite heavily on SDL_image, but that's only used for loading images, is it not?


I am blitting three extra screens' worth of graphics (for scrolling). It's lazy, I know, but there was absolutely no performance impact on my PC, as SDL must clip graphics that are off-screen - could the Pandora SDL not be doing this, perhaps?
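
Roughly what I mean, as a sketch with made-up names (world is the pre-rendered level, cam_x the scroll offset) - blitting only the camera's slice would sidestep the clipping question entirely:

Code:
/* Blit only the visible window of the world surface, rather than
   pushing all four screens' worth through memory every frame. */
SDL_Rect src = { cam_x, 0, screen->w, screen->h };
SDL_BlitSurface(world, &src, screen, NULL);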


I'm not really up on compile flags - what specifically could I use for my Pandora build?


I haven't tried Notaz's SDL yet, will give that a crack tonight.
 
I probably mixed up SDL_image with SDL_gfx; the latter is the one that's slow.


When compiling something for pandora, you should use at least:



Code:
-O2 -mcpu=cortex-a8 -mtune=cortex-a8 -mfloat-abi=softfp -mfpu=neon
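
As a full invocation it might look something like this (the compiler prefix and sdl-config usage are just an example - use whatever your toolchain provides):

Code:
arm-none-linux-gnueabi-gcc -O2 -mcpu=cortex-a8 -mtune=cortex-a8 \
    -mfloat-abi=softfp -mfpu=neon \
    main.c -o game `sdl-config --cflags --libs`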


As far as clipping goes, it should not differ much from the PC version, as the Pandora is mostly using the same thing. But if you really are blitting three screens' worth of graphics each frame, it will be slow, as the Pandora doesn't have much memory bandwidth. It might be that you are also using a type of blit that is not NEON-accelerated yet; if you give me your binary I could check whether anything can be done on the SDL side.
 
I'm not sure what is going on from your description... I only managed to max out the CPU when using lots of alpha and copying surfaces around for effects...


Anyway, any code snippets you could provide?


EDIT: Yeah, SDL_gfx is very slow - if you are using it for scaling and rotation, I would suggest caching your surfaces.
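
The caching idea, sketched with made-up names (rotozoomSurface is SDL_gfx's rotation call) - pre-rotate once at load time instead of every frame:

Code:
#include <SDL/SDL_rotozoom.h>

#define ROT_STEPS 16
static SDL_Surface *rot_cache[ROT_STEPS];

/* Pre-compute ROT_STEPS rotations once; at draw time, index the
   table by angle instead of calling rotozoomSurface per frame. */
void build_rot_cache(SDL_Surface *sprite)
{
    int i;
    for (i = 0; i < ROT_STEPS; i++)
        rot_cache[i] = rotozoomSurface(sprite, i * (360.0 / ROT_STEPS),
                                       1.0, SMOOTHING_ON);
}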
 
No, I'm not using SDL_gfx...but rotation! I like the sound of it :)


I'll try to sort out some [relevant] code to demo what I'm doing.


But for now, here's my binary that I compiled last night:


http://lessermatters.homeunix.com/LemmingsSDL/Platformer_pandora.zip


Please be kind: I haven't actually made a game as such yet - just an engine... and my 8-year-old helped 'design' the levels that are there right now!
 
I probably mixed up SDL_image with SDL_gfx; the latter is the one that's slow.


When compiling something for pandora, you should use at least:



Code:
-O2 -mcpu=cortex-a8 -mtune=cortex-a8 -mfloat-abi=softfp -mfpu=neon

Ah, this looks good! I can't wait to try this out! Exciting stuff...damn, I have to wait till home-time to try this...grrrr...unless...unless I set up the cross-compiler here at work! No, best do what I'm paid to do :-(


EDIT: I've tried both things and now the game runs a lot faster - not as fast as on my computer, but fast enough, I reckon :) Thanks for all your help!


EDIT AGAIN: Just halved the resolution (something I had been intending to do for a while) and enabled vsync - it's amazing now!
 