SDL (Simple DirectMedia Layer) Speed


RotsiserMho

Hi all,

I've not been following this closely, so I apologize if there's another topic out there that I missed.

I don't have my Pandora yet (somewhere around #3500+ so it'll be a while) but I love SDL and I'm guessing tons of applications make use of it. So I'm wondering what's going on with the Pandora port of it that makes it so slow. I'm assuming there's no hardware acceleration going on? I have a lot of embedded development experience and am fairly familiar with Linux and SDL so I might be able to help out if someone can point me in the right direction. I'm guessing it just needs a back-end written to take advantage of the Pandora's LCD/framebuffer.

Thoughts/findings anyone?
 
I think paeryn is working on a hardware-accelerated version of SDL - he did the same for the GP2X years ago, too.
 
How slow is it exactly? How are you quantifying this? If you don't have a Pandora then you're probably going on hearsay. I'm not questioning it, but I want to know what's really being measured, so it's clear what's SDL and what isn't, and what actually stands any chance of being improved.

What it needs is hardware acceleration for screen blits (SDL_UpdateRects/SDL_Flip), giving free double buffering, waiting for vsync, and doing any necessary color space conversion in the process. I don't really know the exact options for this in windowed mode, whether overlays can really work well for instance, or maybe texture streaming or just DMA. Fullscreen isn't much of an issue; just improving the fbcon backend and making SDL use it when fullscreen would be sufficient. Requiring fullscreen for full performance isn't that bad a scenario.
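From the application side the fast path already looks like stock SDL 1.2; here's a minimal sketch (the flags and calls are standard SDL, the open question is whether the Pandora backend can actually honour them):

    #include <SDL/SDL.h>

    /* Request the fullscreen, double-buffered path; SDL silently falls
       back to a software surface if the backend can't provide it. */
    SDL_Surface *init_video(void)
    {
        if (SDL_Init(SDL_INIT_VIDEO) < 0)
            return NULL;
        SDL_Surface *screen = SDL_SetVideoMode(800, 480, 16,
                SDL_FULLSCREEN | SDL_HWSURFACE | SDL_DOUBLEBUF);
        /* ... draw into screen->pixels or via blits ... */
        SDL_Flip(screen);  /* page flip + (ideally) vsync on a real double buffer */
        return screen;
    }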

Anything further, i.e. accelerated surface-to-surface blits and fills (SDL_Blit/SDL_FillRect), probably won't gain you very much: the hardware wouldn't support RLE or alpha blending, and since SDL isn't asynchronous there isn't much for hardware acceleration to win on these anyway, except where something makes hardware much faster at the job than software (and in this case nothing would).
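For the record, these are the features I mean; a surface set up like this goes down SDL 1.2's software blitter paths, which is why surface-to-surface acceleration wouldn't buy much:

    #include <SDL/SDL.h>

    /* Colour-keyed, RLE-accelerated blits and per-surface alpha are
       both CPU-side paths in SDL 1.2; RLE runs are decoded by the CPU,
       so such a surface can't simply be handed to a hardware blitter. */
    void setup_sprite(SDL_Surface *sprite)
    {
        SDL_SetColorKey(sprite, SDL_SRCCOLORKEY | SDL_RLEACCEL,
                        SDL_MapRGB(sprite->format, 255, 0, 255));
        SDL_SetAlpha(sprite, SDL_SRCALPHA, 128);
    }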
 
SDL's not that slow at the moment. The test program I use, a sort of starfield blitting program, runs just over twice as fast on the Pandora as it did on the GP2X (both software blitting).

I'm merging the X11 and fbcon drivers so that fullscreen will use fbcon. There's no real way of getting vsync in windowed mode (X11 has no concept of it), nor double buffering.

Using the SGX is out; for the few cases where it would be faster, it's not worth it overall.

Plain or colour-keyed blits and fillrects can be done by DMA on hardware surfaces, and I'll try to write some NEON code to speed up software blits and alpha blending.
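Something along these lines, as a first sketch of the idea only (constant alpha over 8-bit channels; a real 16bpp RGB565 path would also need to unpack and repack the pixels):

    #include <arm_neon.h>
    #include <stdint.h>

    /* Blend src over dst with a constant alpha, 8 channel bytes per
       iteration: dst = (src*a + dst*(255-a)) / 255, using the exact
       divide-by-255 trick (t + (t>>8) + 1) >> 8. */
    void blend_const_alpha(uint8_t *dst, const uint8_t *src,
                           int n, uint8_t alpha)
    {
        uint8x8_t va  = vdup_n_u8(alpha);
        uint8x8_t vna = vdup_n_u8(255 - alpha);
        int i;
        for (i = 0; i + 8 <= n; i += 8) {
            uint16x8_t t = vmull_u8(vld1_u8(src + i), va);  /* src * a        */
            t = vmlal_u8(t, vld1_u8(dst + i), vna);         /* + dst * (255-a) */
            t = vsraq_n_u16(t, t, 8);                       /* t += t >> 8    */
            t = vaddq_u16(t, vdupq_n_u16(1));
            vst1_u8(dst + i, vshrn_n_u16(t, 8));            /* narrow to u8   */
        }
        /* any leftover (n % 8) bytes would be handled with scalar code */
    }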

SDL can do asynchronous blitting; I enforced asynchronous hardware blitting on the GP2X. The DMA engine can be quicker in that it bypasses the caches and the CPU can get on with other things.
 
paeryn said:
SDL can do asynchronous blitting; I enforced asynchronous hardware blitting on the GP2X.

How does it work? Is it enforced by requiring the surfaces to be locked to modify them? Do you queue the actual blits in another thread? Or do you just allow it to get one ahead?

paeryn said:
The DMA engine can be quicker in that it bypasses the caches and the CPU can get on with other things.

Memory allocated by fbcon won't go through cache.
 
paeryn said:
SDL's not that slow at the moment. The test program I use, a sort of starfield blitting program, runs just over twice as fast on the Pandora as it did on the GP2X (both software blitting).
I guess it's because the GP2X has less than half the MHz of the Pandora? ^^ Or was the test done at equal clock speed? I would like to know the per-MHz power of the Pandora compared to that of the GP2X. Can the Pandora make more out of one MHz than the GP2X could? (Weird comparison, I know, but for me as an average user it would help to see how fast the Pandora really is when the higher clocking is ignored :) )
 
fusion_power said:
I guess it's because the GP2X has less than half the MHz of the Pandora? ^^ Or was the test done at equal clock speed? I would like to know the per-MHz power of the Pandora compared to that of the GP2X. Can the Pandora make more out of one MHz than the GP2X could? (Weird comparison, I know, but for me as an average user it would help to see how fast the Pandora really is when the higher clocking is ignored :) )

It isn't that simple. If the application is purely computation-bound and cache-resident then it'll usually perform better per clock than the GP2X. If it's purely memory-bound then its performance is dictated more by the Pandora's memory bus, which is not clocked synchronously with the CPU like it is on the GP2X. Most real applications fall somewhere between the two.

In terms of raw write bandwidth the Pandora has more than twice what the GP2X has; at stock clock speeds, 166MHz DDR vs 100MHz SDR, it's about 3.33x. Unfortunately, memory write tests show that you only get some fraction of this; you get the most if you use NEON (and for memory-to-memory copies it pays to use the cache preload engine). The problem is probably with how the L2 cache is configured. From what I understand, L2 is normally configured as write-allocating, although it can be configured as write-through, and this can be done differently for different memory regions. Write allocation means every write has to go through L2 cache and causes a miss, which costs several cycles of overhead. It's worse if your writes don't cover an entire cache line (either through bursts or possibly write combining in the write buffer), because then they have to be merged with the unmodified parts read back from memory. This is probably why NEON code tends to run better: it's easier to get it to burst whole cache lines.
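(To spell out the arithmetic, assuming equal bus widths on both machines:

    Pandora: 166 MHz x 2 transfers/clock (DDR) = 332 MT/s
    GP2X:    100 MHz x 1 transfer/clock  (SDR) = 100 MT/s
    ratio:   332 / 100 = 3.32

which is where the "about 3.33x" figure comes from, and "more than twice" at a minimum.)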

So you can see that with standard C code performing blits to normal L2-cached memory, this isn't going to be 100% memory-bandwidth limited, but getting twice the performance is still pretty good.

When writing straight to the framebuffer you don't go through L2 cache, so you don't have these problems. But if you want to do alpha blending you'd prefer the framebuffer to be cached if it fits (320x240x16bpp will). If you compare SDL alpha blending performance between the GP2X and the Pandora, I bet the Pandora has a much bigger advantage. For alpha blending the best choice is probably memory configured without L2 write allocation; maybe this could be an option in ofbset or something.

One other thing: even without caching interfering you still can't get full bandwidth, because some of it is used for scanning out the screen. This is the case on the GP2X too, but it's only driving 320x240 vs 800x480 on the Pandora. I think that overhead only goes away if you completely turn off the non-overlay DSS layer and use the overlaid ones scaled (actually, ofbset /dev/fb0 -en 0 might work, but it might also cause terrible things to happen if you don't turn it back on afterwards? Need to try this later)
 
Exophase said:
paeryn said:
SDL can do asynchronous blitting; I enforced asynchronous hardware blitting on the GP2X.

How does it work? Is it enforced by requiring the surfaces to be locked to modify them? Do you queue the actual blits in another thread? Or do you just allow it to get one ahead?

In SDL you should always lock a surface before manually modifying it; the lock is there to let SDL finish any work it's doing on the surface. When you requested a lock on a surface I checked whether a blit was happening to or from it, and if so waited until the blitter had finished before returning the lock.
Straight after the blitter is started, the blit function returns. If you can run some code that doesn't touch either surface for a short while after a blit, you get the blit done (bandwidth permitting) practically for free. Needless to say, back-to-back blitting only gains you the set-up time for free.

I could only keep one blit active; a second blit would set up the blitter but then wait until the previous blit had ended before starting. I wanted to be able to queue them, but that would've needed access to the blitter interrupt (kernel level) and I never managed to get Open2X to compile properly.
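Roughly this scheme, as a sketch of the idea rather than the actual GP2X driver code (blitter_start() and blitter_wait() here are stand-ins for the real register-level pokes):

    #include <SDL/SDL.h>

    /* Hypothetical hardware hooks standing in for the real register access: */
    void blitter_start(SDL_Surface *src, SDL_Rect *sr,
                       SDL_Surface *dst, SDL_Rect *dr);
    void blitter_wait(void);

    static SDL_Surface *blit_src, *blit_dst;  /* surfaces of the blit in flight */
    static int blit_in_flight;

    static int hw_lock(SDL_Surface *s)
    {
        /* Only stall if the pending blit touches this surface. */
        if (blit_in_flight && (s == blit_src || s == blit_dst)) {
            blitter_wait();
            blit_in_flight = 0;
        }
        return 0;
    }

    static int hw_blit(SDL_Surface *src, SDL_Rect *sr,
                       SDL_Surface *dst, SDL_Rect *dr)
    {
        if (blit_in_flight)
            blitter_wait();              /* only one blit in flight at a time */
        blitter_start(src, sr, dst, dr); /* program the registers, kick off */
        blit_src = src;
        blit_dst = dst;
        blit_in_flight = 1;
        return 0;                        /* return without waiting */
    }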

Exophase said:
paeryn said:
The DMA engine can be quicker in that it bypasses the caches and the CPU can get on with other things.

Memory allocated by fbcon won't go through cache.
Exactly, and that slows down the software blit even more. I meant that DMA will do its copy without tying up the CPU, but it's only available for hardware surfaces. If we don't use DMA then we don't need hardware surfaces (other than the screen).

Sorry for any ambiguities, I'm full of cold and not thinking entirely straight.
 
paeryn said:
In SDL you should always lock a surface before manually modifying it; the lock is there to let SDL finish any work it's doing on the surface. When you requested a lock on a surface I checked whether a blit was happening to or from it, and if so waited until the blitter had finished before returning the lock.
Straight after the blitter is started, the blit function returns. If you can run some code that doesn't touch either surface for a short while after a blit, you get the blit done (bandwidth permitting) practically for free. Needless to say, back-to-back blitting only gains you the set-up time for free.

I could only keep one blit active; a second blit would set up the blitter but then wait until the previous blit had ended before starting. I wanted to be able to queue them, but that would've needed access to the blitter interrupt (kernel level) and I never managed to get Open2X to compile properly.

Yeah, I understand how it works... embarrassingly, I completely forgot about locking surfaces, I guess because I don't use blits in SDL and I never lock the screen, since I've never had to on the platforms/options I've messed with.

Maybe you'll be able to queue them on the Pandora. Some kernel interface has to exist for the DMA anyway, so if it could include waiting for the interrupt, or polling, that'd be good; then you could have a queue thread.
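i.e. something shaped like this; everything here (the queue and the kernel calls) is hypothetical, just sketching the queue-thread idea:

    #include <pthread.h>
    #include <stdlib.h>

    /* Hypothetical: a thread-safe queue plus a kernel interface that can
       block until the DMA completion interrupt fires. */
    struct blit_req { void *src, *dst; int pitch, w, h; };
    struct blit_req *queue_pop_blocking(void);   /* NULL = shut down     */
    void dma_submit(const struct blit_req *req); /* program the channel  */
    void dma_wait_irq(void);                     /* block on completion  */

    static void *blit_queue_thread(void *arg)
    {
        (void)arg;
        for (;;) {
            struct blit_req *req = queue_pop_blocking();
            if (!req)
                return NULL;
            dma_submit(req);
            dma_wait_irq();
            free(req);
        }
    }

    /* started once at init: pthread_create(&tid, NULL, blit_queue_thread, NULL); */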

paeryn said:
Exactly, and that slows down the software blit even more. I meant that DMA will do its copy without tying up the CPU, but it's only available for hardware surfaces. If we don't use DMA then we don't need hardware surfaces (other than the screen).

For blitting to uncached memory the CPU should be able to achieve close to full memory bandwidth. Write buffering gives you the same spatial-locality benefits a cache would, letting you burst efficiently without the overhead of going through the cache. On the Wiz, for instance, you can achieve full write bandwidth this way.
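If anyone wants to check this on real hardware, a quick and dirty test is to mmap the framebuffer and time a streaming fill. A rough sketch, assuming a 16bpp 800x480 /dev/fb0 and omitting error handling:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t len = 800 * 480 * 2;          /* assumes 16bpp 800x480 */
        int fd = open("/dev/fb0", O_RDWR);
        uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 100; i++)              /* 100 full-screen fills */
            memset(fb, i, len);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.1f MB/s\n", 100.0 * len / s / 1e6);
        munmap(fb, len);
        close(fd);
        return 0;
    }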
 
What can we expect to see in terms of (hardware) scaling? I'm trying to weigh up whether it's worth doing any more SDL -> framebuffer conversions or better to just wait for a Pandora-optimised SDL.
 