SDL: Lots Of Clips Or Lots Of Surfaces?


Miner49er
Hi there,

I've recently introduced myself to SDL and I'm finding it an excellent library. My intention is to write some little multiplayer games for the Pandora when it arrives.

Anyway, I'm just looking at animating sprites, and I have a question that I should probably know the answer to, but here goes anyway. Is it more efficient to have one whole SDL Surface containing all animation frames, with lots of clip rectangles to blit them onto the screen, or is it better to have lots of Surface pointers?

To me, it seems like the former method has to be slower, while the latter would have more memory overhead?

Or am I way off?

Don't laugh, I'm new to SDL...

cheers,

m
 
You're off a bit. Your surfaces, in whatever form, will be loaded by SDL into memory; SDL gives you a pointer to each surface.
You can use SDL_Rects, which are nothing more than structs specifying coordinates and dimensions, i.e. x, y, w, h.
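For instance, a minimal sketch (the names sheet and screen and the 32x32 frame size are just assumptions here, not from any tutorial):

CODE
#include "SDL.h"

/* Sketch: blit frame number `frame` of a horizontal strip of
   32x32 cells from `sheet` onto `screen` at (x, y). */
void draw_frame(SDL_Surface *sheet, SDL_Surface *screen,
                int frame, int x, int y)
{
    SDL_Rect src, dst;
    src.x = frame * 32;   /* step along the strip */
    src.y = 0;
    src.w = 32;
    src.h = 32;
    dst.x = x;            /* w/h are ignored for the destination */
    dst.y = y;
    SDL_BlitSurface(sheet, &src, screen, &dst);
}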

I highly recommend that you find the Lazy Foo tutorials; that's what I learned from :)
 
I recommend using a single file for your spritesheet, and at runtime building individual SDL_Surfaces for each sprite. I remember noticing a significant speed drop on the GP2X when blitting parts of a very large surface. Aside from perhaps more subtle reasons, I'm sure that it's simply faster for SDL to blit an entire surface instead of needing to add pixel offsets to each line to get to a subarea.
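Something along these lines, roughly (a sketch only; the helper name is made up and error checks are omitted):

CODE
#include "SDL.h"

/* Sketch: copy one w x h cell out of a sheet into its own surface
   with the same pixel format as the sheet. */
SDL_Surface *cut_sprite(SDL_Surface *sheet, int x, int y, int w, int h)
{
    SDL_PixelFormat *f = sheet->format;
    SDL_Surface *s = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h,
                         f->BitsPerPixel,
                         f->Rmask, f->Gmask, f->Bmask, f->Amask);
    SDL_Rect src;
    src.x = x; src.y = y; src.w = w; src.h = h;
    SDL_SetAlpha(sheet, 0, 255);     /* raw copy, no blending, for the cut
                                        (re-enable alpha afterwards if used) */
    SDL_BlitSurface(sheet, &src, s, NULL);
    return s;
}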
 
It's always a balance of CPU vs. RAM.
As Alex says, using a single surface and clips will give a little CPU overhead... possibly. This is the method I use, however.
Using individual surfaces will have a little memory overhead, so it really is horses for courses.
 
A bit off topic, but recently I was messing with canvas elements in Firefox. It has blitting that's a mix between SDL and Java2D for method names and parameter order.

I discovered that it handles sprite sheets very poorly. First, it makes a copy of the image you're blitting. Then, it figures out the pixels you want to copy. Then it copies that data to an RGBA buffer. Next it copies those pixels into the target canvas.

With a sprite sheet, rendering a frame was taking 15-16 seconds. With separate 16x16 images (all 1500 of them), just 30 milliseconds. ;)

I'm biased towards separate images. They always work. If you go for sprite sheets, I'd separate each logical sprite. Sometimes I do animation frames on the horizontal, animations on the vertical, and each sprite is a separate image.
 
'Alex.' said:
I recommend using a single file for your spritesheet, and at runtime building individual SDL_Surfaces for each sprite. I remember noticing a significant speed drop on the GP2X when blitting parts of a very large surface. Aside from perhaps more subtle reasons, I'm sure that it's simply faster for SDL to blit an entire surface instead of needing to add pixel offsets to each line to get to a subarea.
I was thinking that the difference in speed was likely due to the fact that, when using the sprite sheet and breaking it down into individual surfaces, you were making the new surfaces the same pixel format as the screen. In SDL this is a huge deal and makes all the difference to the blitter. If you're loading individual files you have to take the extra step to convert each one to the same pixel format as the screen; otherwise, each time an image gets blitted it has to get converted first, which SDL will do silently and happily for you, no questions asked. Oh, by the way, it takes FOREVER! in terms of cycles.

I remember running into this problem when making Death Trap Remix, and noticing a huge performance boost from pre-converting my single-file sprites at load-time to the screen's pixel format. This is a habit you should get into with SDL: you can't always match your resource files to the screen format, because of endianness or pixel depths not being available on all platforms. That is, as long as you care about portability.
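i.e. something like this at load time (a sketch, SDL 1.2; the function name is made up):

CODE
#include "SDL.h"

/* Sketch: load an image and convert it once to the screen's pixel
   format, so every later blit is a straight copy. */
SDL_Surface *load_converted(const char *path)
{
    SDL_Surface *raw = SDL_LoadBMP(path);  /* or IMG_Load from SDL_image */
    SDL_Surface *fast;
    if (!raw)
        return NULL;
    fast = SDL_DisplayFormat(raw);         /* match the screen format */
    SDL_FreeSurface(raw);
    return fast;
}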
 
'Kramy' said:
A bit off topic, but recently I was messing with canvas elements in Firefox. It has blitting that's a mix between SDL and Java2D for method names and parameter order.

...

With a sprite sheet, rendering a frame was taking 15-16 seconds. With separate 16x16 images (all 1500 of them), just 30 milliseconds. ;)

I'm biased towards separate images. They always work. If you go for sprite sheets, I'd separate each logical sprite. Sometimes I do animation frames on the horizontal, animations on the vertical, and each sprite is a separate image.
30ms? That's almost playable..
Anyway, this is interesting. I didn't care to learn about SDL, so I've been doing more stuff with hardware-accelerated OGL sprites, but I'll keep this in mind.
I suppose one could ask whether OpenGL has a similar slowdown due to extra texture coordinate processing, but one might respond, "It's so fast that it doesn't matter unless you're animating something like individual bullets in a Touhou game".
 
'mindlord' said:
I was thinking that the difference in speed was likely due to the fact that when using the sprite sheet and breaking it down into individual surfaces you were making the new surfaces the same pixel format as the screen.
That is of course something to look out for, but in this case the large surface was converted to the screen format too. I was blitting the table background in Airplyr, and being a large screen-high sprite the speed gain I got from making it a separate surface was surprisingly big.
 
There shouldn't be any (performance) difference between blitting from a sprite sheet or from separate surfaces. The same internal functions get called, i.e. if you don't supply a srcrect (full surface blit) then the internal routine gets called with a temporary one filled with the dimensions of the surface.

You don't need cliprects on the source surface; the srcrect should be sufficient. Likewise for the destination: a global one is okay if you need it, but wasteful if you keep defining one for each blit.

'Alex.' said:
Aside from perhaps more subtle reasons, I'm sure that it's simply faster for SDL to blit an entire surface instead of needing to add pixel offsets to each line to get to a subarea.
Nope, the starting pixel of the whole blit is always calculated as

CODE
lineStart = surfaceBitmapStart + (y * surfacePitch + x * pixelWidth)

and after each horizontal line the next start address is calculated by

CODE
lineStart = lineStart + surfacePitch

So there is no difference in speed between a full-surface blit and a partial-rect blit.
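In C the idea looks roughly like this (illustrative only, not SDL's actual blitter; locking and clipping omitted):

CODE
#include <string.h>
#include "SDL.h"

/* Illustrative sketch: copy a w x h area from (sx,sy) in src to
   (dx,dy) in dst, assuming identical pixel formats. One address
   calculation up front, then one pitch-add per line, whether the
   blit covers the full surface or just a sub-rect. */
void blit_rows(SDL_Surface *src, int sx, int sy,
               SDL_Surface *dst, int dx, int dy, int w, int h)
{
    int bpp = src->format->BytesPerPixel;
    Uint8 *s = (Uint8 *)src->pixels + sy * src->pitch + sx * bpp;
    Uint8 *d = (Uint8 *)dst->pixels + dy * dst->pitch + dx * bpp;
    int row;
    for (row = 0; row < h; row++) {
        memcpy(d, s, (size_t)(w * bpp));
        s += src->pitch;
        d += dst->pitch;
    }
}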
 
I'd say that cache usage and memory bandwidth would be far better utilized with independent bitmaps than having to read a discontiguous block of RAM for each line of a subrect in a huge bitmap.

So yeah, one surface per frame. Memory bandwidth tends to be a precious commodity especially on handhelds.
 
Well, thanks for those replies :)

I think paeryn's explanation seems to make the most sense.

To pickle: I've been using the Lazy Foo tutorials; that was what started the confusion! I would have used separate surfaces had it not been for that.

I will create an array of rects :)
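Something like this, I imagine (the frame size and count are made up):

CODE
#include "SDL.h"

#define FRAME_W 32   /* made-up frame size */
#define FRAME_H 32
#define NFRAMES 8    /* made-up frame count */

/* One clip rect per animation frame of a horizontal strip. */
SDL_Rect frames[NFRAMES];

void init_frames(void)
{
    int i;
    for (i = 0; i < NFRAMES; i++) {
        frames[i].x = i * FRAME_W;
        frames[i].y = 0;
        frames[i].w = FRAME_W;
        frames[i].h = FRAME_H;
    }
}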

thanks,

m
 
'Miner49er' said:
I think paeryn's explanation seems to make the most sense.
Trust Paeryn... Trust Paeryn... :) Seriously, he's the author of one of the accelerated ports of SDL to the GP2X, so he probably knows what he's talking about.

You will really want to watch the Surface formats, though, because as has been pointed out, blitting between surfaces of different formats hits performance hard. You'll probably want to just make sure that you convert all surfaces to the same (screen) format before you do anything else.

I don't think there is any "right" answer, though.

With one large "contact sheet" of sprites you:

- Might have a larger image than necessary (i.e. your images might have to be made to a fixed size, which means some would waste space etc.)
- Have one large contiguous block of memory instead of many disparate blocks (and you would know instantly if you had enough RAM to run the game, rather than finding out when you try to load the 1000th sprite).
- Would use a small array of probably only byte-sized offsets to describe each potential sprite.
- Have one bitmap decode once into memory for the whole program.
- Have a lot of stuff in RAM that may *never* get used.

With many small sprites you:

- Can load and unload them dynamically (i.e. only load the particular sprites necessary at that moment).
- Can easily change a single sprite without having to re-ship the whole bitmap.
- May incur a lot of alignment / surface / pointer / malloc / free overhead if you have a LOT of images.
- Will have to do lots of small decodes on small image files.
- Will waste some disk space in filesystem allocation (unless you store the sprites as a contact sheet but split them into individual surfaces once they are in RAM).

I would personally use small, independent bitmaps to start with, until you have a way to access those bitmaps as an array, some nicely structured bitmaps on disk to play with and, most importantly, a game to play. That's purely for simplicity, and because until you actually run into performance issues, the easiest way is usually more than good enough.

If I started going further with that particular program, then I'd probably have larger per-sprite sheets with their complete animations for a single sprite and load them as and when necessary (e.g. the sheet for the boss character or whatever will only be in RAM when he's around, but I would load *every* animation frame to do with him in one hit as soon as he appeared by loading one large bitmap of him and blitting individual frames from it).

If in doubt, test, and test on the actual hardware you are targeting. I always find the best function to include in any program I write is one that prints a message preceded by the exact time... then I can just add a line (usually inside pre-processor macros) in my code to time how long individual functions/lines of code last. (Yes, you can do this with fancy profilers and debuggers, but I find manual printfs are easier to customise.) Then you just try a small experiment with two sprite loaders and see if they differ... if they differ by a significant amount, try to find out why (it might well be something stupid like surface formats, etc.), and if you can't find out why, then use the one that gives the best performance. Such things are usually so dependent on your particular needs (i.e. does it need to multitask with another thread, does it need to load a large boss character *instantly*, does it do heavy animation, are your bitmaps large enough to kill caches, etc.) that the only way is to try for yourself and see.
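For what it's worth, the sort of thing I mean is no more than this (a sketch, with SDL_GetTicks providing the timestamp; the macro name is made up):

CODE
#include <stdio.h>
#include "SDL.h"

/* Sketch: a timestamped message that compiles away unless TIMING is
   defined, so it can be left in place around suspect functions. */
#ifdef TIMING
#define TLOG(msg) printf("%8u ms: %s\n", (unsigned)SDL_GetTicks(), (msg))
#else
#define TLOG(msg)
#endif

Then you bracket the code under suspicion with TLOG("before LoadSprite"); and TLOG("after LoadSprite"); and read the differences off the console.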

When starting out in coding, write *everything* as a function/procedure/whatever you want to call it, with a fixed interface... then when LoadSprite and BlitSpriteToScreen are slow, you just write replacement functions that try a different way (e.g. a full contact sheet instead of individual bitmaps, even if that means that LoadSprite becomes effectively a null function). If it doesn't work faster, you can keep the code for the corner cases that don't need to be fast, or as a double-check that your fast function works the same. If it does work faster, a simple search and replace means the whole program benefits instantly.
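e.g. (a sketch, all names hypothetical):

CODE
#include "SDL.h"

/* Sketch: fix the interface, then swap implementations behind a
   single function pointer when you want to try the other way. */
typedef SDL_Surface *(*LoadSpriteFn)(const char *name);

static SDL_Surface *load_individual(const char *name)
{
    return SDL_LoadBMP(name);   /* simplest case: one file per sprite */
}

/* A load_from_sheet() with the same signature could slot in here,
   and flipping this one pointer benchmarks the other approach. */
LoadSpriteFn LoadSprite = load_individual;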

I did this with one of my programs... I needed a "Draw Polygon Outline To Screen" function, and I had SDL_gfx's version, but I was having debugging problems: my polygons weren't always closing properly. So I quickly knocked up a function that took the same interface and used SDL_gfx's "Draw Line" in a loop to draw the entire polygon. It helps you test performance, see whether you can reduce code use, and spot problems in your functions (in this case, the function was over-running the data given because of a stupid off-by-one and drawing the last lines of the polygon to a random location).

I've even deliberately gone back and rewritten entire game-solver routines (which are NOT easy to write) in a mathematically different way in order to check that my code wasn't doing anything too stupid on corner cases that I didn't want to verify manually. Once, I got stuck on a particular function I needed, so I whipped up a stupid, slow version that worked, built the rest of the program that needed to use it (which was the bit that takes WEEKS, and would have just been held back if I'd spent a long time trying to perfect the small function it used), and after I was getting the results I wanted, just too slowly, I got other people to write a replacement function that worked the same (I think paeryn even submitted an entry for that!). Because of the modular approach, it was simply a matter of copy/paste to test ten people's submissions to see if they worked better, and a small edit to put two functions side by side and check they always got the same answers.

Think modular... don't necessarily tie your program into one way of working and when you *do* decide to write the other method out of interest, you instantly get verification of results, performance comparisons, and plug-in replacement of critical functions. Then, if you move from SDL onto something else, you can just replace the functions to do similar things for Allegro, DirectX or whatever and not have to completely re-write the program.
 
'lulzfish' said:
Anyway, this is interesting. I didn't care to learn about SDL, so I've been doing more stuff with hardware-accelerated OGL sprites, but I'll keep this in mind.
I suppose one could ask whether OpenGL has a similar slowdown due to extra texture coordinate processing, but one might respond, "It's so fast that it doesn't matter unless you're animating something like individual bullets in a Touhou game".
Unless I've been woefully mistaken, it's always best to pack OpenGL sprites into as few textures as possible. GPUs will routinely create textures up to the next largest power of 2 in width and height, which will probably waste a ton of memory unless your separated sprites are always a power of 2 in width and height themselves.

And even then, glBindTexture calls are already one of the places that most devs optimize, by drawing everything that uses the same texture in one single go; batching sprites into atlases in OpenGL will cut down on the number of those calls even more and be even faster.
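Roughly, the win looks like this (a fixed-function sketch; Sprite is a made-up struct holding each sprite's atlas UVs and screen position):

CODE
#include <GL/gl.h>

typedef struct {
    float u0, v0, u1, v1;   /* this sprite's corner UVs in the atlas */
    float x, y, w, h;       /* where and how big to draw it */
} Sprite;

/* Sketch: bind the atlas once, then vary texture coordinates per
   sprite instead of calling glBindTexture for each one. */
void draw_batch(GLuint atlas, const Sprite *s, int n)
{
    int i;
    glBindTexture(GL_TEXTURE_2D, atlas);
    glBegin(GL_QUADS);
    for (i = 0; i < n; i++, s++) {
        glTexCoord2f(s->u0, s->v0); glVertex2f(s->x,        s->y);
        glTexCoord2f(s->u1, s->v0); glVertex2f(s->x + s->w, s->y);
        glTexCoord2f(s->u1, s->v1); glVertex2f(s->x + s->w, s->y + s->h);
        glTexCoord2f(s->u0, s->v1); glVertex2f(s->x,        s->y + s->h);
    }
    glEnd();
}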
 
'Miner49er' said:
Well, thanks for those replies :)

I think paeryn's explanation seems to make the most sense.

To pickle: I've been using the Lazy Foo tutorials; that was what started the confusion! I would have used separate surfaces had it not been for that.

I will create an array of rects :)

thanks,

m
It's also worth noting that you can simply write functions to handle both ways... I've actually realised that I do this, although I still favour spritesheet loading rather than separate images.
 
Yeah, it's much cheaper to send different texture coordinates to the GPU than to wait for it to swap textures out.
But since SDL is all software, I'm going to go with the "I don't know, why don't you run a benchmark and tell us?" response.
 
'Eniko' said:
GPUs will routinely create textures up to the next largest power of 2 in width and height, which will probably waste a ton of memory unless your separated sprites are always a power of 2 in width and height themselves.

As far as I know, this is a myth. I remember reading that this was fixed around the GeForce 2 era.

But since I have no knowledge about the SGX... doesn't hurt to play it safe. ;)

I keep all my textures at shiftable sizes, just for good measure.
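i.e. when in doubt I just round dimensions up (a trivial sketch):

CODE
/* Sketch: round a texture dimension up to the next power of two,
   the safe assumption on older GPUs. */
unsigned next_pow2(unsigned n)
{
    unsigned p = 1;
    while (p < n)
        p <<= 1;
    return p;
}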
 
'Kramy' said:
'Eniko' said:
GPUs will routinely create textures up to the next largest power of 2 in width and height, which will probably waste a ton of memory unless your separated sprites are always a power of 2 in width and height themselves.

As far as I know, this is a myth. I remember reading that this was fixed around the GeForce 2 era.

But since I have no knowledge about the SGX... doesn't hurt to play it safe. ;)

I keep all my textures at shiftable sizes, just for good measure.
I'm fairly certain this is an OpenGL thing. There are plenty of tutorials out there explaining how to use non-power-of-2 textures in OpenGL. But for the purposes of SDL and surfaces, size doesn't matter; they're not aligned on a power-of-2 boundary.
 
Since SDL puts surfaces in system memory and I specified GPU, yes, it's an OpenGL thing.
'Kramy' said:
As far as I know, this is a myth. I remember reading that this was fixed around the GeForce 2 era.
It may have been fixed, but as far as I know it's not exactly standard, and differences between graphics hardware vary enough from one setup to the next that it's easier to just assume it's true, in my experience.

Of course this assumes that you're not developing exclusively for the Pandora and/or that the SGX can't do non-power-of-two textures.
 
'Eniko' said:
Since SDL puts surfaces in system memory and I specified GPU, yes, it's an OpenGL thing.
'Kramy' said:
As far as I know, this is a myth. I remember reading that this was fixed around the GeForce 2 era.
It may have been fixed, but as far as I know it's not exactly standard, and differences between graphics hardware vary enough from one setup to the next that it's easier to just assume it's true, in my experience.

Of course this assumes that you're not developing exclusively for the Pandora and/or that the SGX can't do non-power-of-two textures.
If you are going to use OpenGL-backed SDL on the Pandora then atlas textures are DEFINITELY the way to go. In fact, with the OpenGL backend, having separate textures will kill your performance big time; glBindTexture is pretty darn expensive.

If you are not going to use OpenGL-based SDL then, on a device like the Pandora, you are wasting your time :)
 