Another Little SDL Question


Awakening

In the last blog update I noticed this line:
There is room for speed improvement in the future, SDL is not using hardware surfaces yet, but that will follow.

I also noticed that the SDL_SetVideoMode function has an SDL_HWSURFACE flag. I changed this in my game code for my screen surface and the program ran fine. I also did the same with my SDL_CreateRGBSurface call (so far only used for one surface). So I'm interested in knowing whether this will increase the speed of my program, and whether I need to do anything else in my code to take advantage of hardware surfaces. For example, do I need to somehow convert all my surfaces to SDL_HWSURFACE?
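Roughly what I changed, in case it helps; the 640x480x32 mode here is just a placeholder, not my actual settings, and SDL_Init( SDL_INIT_VIDEO ) is assumed to have been called already:

Code:
#include <SDL/SDL.h>

SDL_Surface *screen = NULL;
SDL_Surface *buffer = NULL;

void init_video()
{
    // Request the screen surface in video memory instead of system memory
    screen = SDL_SetVideoMode( 640, 480, 32, SDL_HWSURFACE );

    if( screen != NULL ) {
        // Same flag for an off-screen surface, reusing the screen's pixel format
        buffer = SDL_CreateRGBSurface( SDL_HWSURFACE, 640, 480,
                                       screen->format->BitsPerPixel,
                                       screen->format->Rmask, screen->format->Gmask,
                                       screen->format->Bmask, screen->format->Amask );
    }
}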
 
Awakening said:
In the last blog update I noticed this line:
There is room for speed improvement in the future, SDL is not using hardware surfaces yet, but that will follow.

I also noticed that the SDL_SetVideoMode function has an SDL_HWSURFACE flag. I changed this in my game code for my screen surface and the program ran fine. I also did the same with my SDL_CreateRGBSurface call (so far only used for one surface). So I'm interested in knowing whether this will increase the speed of my program, and whether I need to do anything else in my code to take advantage of hardware surfaces. For example, do I need to somehow convert all my surfaces to SDL_HWSURFACE?

If SDL is unable to get an SDL_HWSURFACE for the main video display, it will probably silently ignore any requests to create hardware surfaces after that point. It depends on the implementation; you'd have to look at the source code.
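One thing you can do without digging into the source is check the flags SDL actually set on the surface it handed back, since a refused SDL_HWSURFACE request silently falls back to software. A minimal sketch (mode values are placeholders):

Code:
#include <SDL/SDL.h>
#include <cstdio>

void check_video_mode()
{
    SDL_Surface *screen = SDL_SetVideoMode( 640, 480, 32, SDL_HWSURFACE );

    if( screen == NULL ) {
        std::fprintf( stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError() );
    } else if( ( screen->flags & SDL_HWSURFACE ) == 0 ) {
        // Request not honored: SDL gave us a software surface instead
        std::fprintf( stderr, "No hardware surface available, using software\n" );
    }
}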
 
I think you misunderstood my questions.

1. Would this speed up blitting surfaces in my game?

2. Do I need to do anything with my surfaces to make them hardware? Most are created by loading images using the function below.

Code:
#include <string>

#include <SDL/SDL.h>
#include <SDL/SDL_image.h>

// Load an image with SDL_image and convert it to the display's pixel
// format (keeping alpha), so later blits skip a per-blit conversion.
SDL_Surface *load_image( std::string filename )
{
    SDL_Surface *loadedImage = NULL;
    SDL_Surface *optimizedImage = NULL;

    // IMG_Load returns NULL if the file is missing or can't be decoded
    loadedImage = IMG_Load( filename.c_str() );

    if( loadedImage != NULL ) {
        // Creates a new surface in the display format with an alpha channel
        optimizedImage = SDL_DisplayFormatAlpha( loadedImage );

        // The raw decoded surface is no longer needed
        SDL_FreeSurface( loadedImage );
    }

    // NULL on failure, otherwise the converted surface
    return optimizedImage;
}
 
Awakening said:
2. Do I need to do anything with my surfaces to make them hardware?
No, it's just an SDL switch that gives it a hint about which memory to store a surface in. Software surfaces are easier to write to, but hardware surfaces might blit faster. I don't know exactly how SDL uses hardware acceleration; I've usually just asked it to create an OpenGL context and then done everything there, though now I'm planning to have Qt do that instead.
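For what it's worth, the SDL side of that is tiny; the setup I use looks roughly like this (window size is arbitrary):

Code:
#include <SDL/SDL.h>

void init_gl_context()
{
    // Ask for a double-buffered OpenGL context instead of a plain SDL surface
    SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
    SDL_Surface *screen = SDL_SetVideoMode( 640, 480, 0, SDL_OPENGL );

    if( screen != NULL ) {
        // ... all drawing happens through OpenGL calls from here on ...

        // With SDL_OPENGL you swap buffers instead of calling SDL_Flip()
        SDL_GL_SwapBuffers();
    }
}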
 
Looking through the documentation for Pygame, a Python wrapper around SDL, I spotted this guide. Take a look at rule 7; apparently hardware surfaces are not as nice as you might expect (including potentially being slower). But I don't know enough to tell you how those issues will affect the Pandora.
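As I understand it, a big part of the cost they describe is direct pixel access: a surface that lives in video memory has to be locked before you can touch its pixels, and that lock can be slow. A rough sketch of the locking dance in SDL (the 32-bit pixel format is an assumption):

Code:
#include <SDL/SDL.h>

void write_first_pixel( SDL_Surface *surface, Uint32 color )
{
    // Surfaces in video memory must be locked before direct pixel access
    if( SDL_MUSTLOCK( surface ) ) {
        if( SDL_LockSurface( surface ) < 0 ) {
            return;  // could not lock, skip the pixel work
        }
    }

    // Write a single pixel (assumes a 32-bit pixel format for simplicity)
    Uint32 *pixels = (Uint32 *)surface->pixels;
    pixels[0] = color;

    if( SDL_MUSTLOCK( surface ) ) {
        SDL_UnlockSurface( surface );
    }
}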
 
Awakening said:
In the end I guess I'll have to try this out myself when I get my Pandora.

I'm in exactly the same boat (SDL, without OpenGL).
I was going to post a similar question (about using DGA/fullscreen and double buffering), but then I thought for now it's better to stick with SDL_SWSURFACE (compiling for my x86-64 PC) and try to make it run as well as possible that way.
I'm only going to worry about SDL_HWSURFACE, SDL_DOUBLEBUF, DGA/fullscreen etc. when I have a real device to compile for. I wouldn't be particularly surprised if SDL_SWSURFACE turns out to run best anyway.
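When the device arrives I'll probably just try the fast path and fall back if SDL refuses it; untested, and 800x480x16 is only my guess at a sensible Pandora mode:

Code:
#include <SDL/SDL.h>

SDL_Surface *open_screen()
{
    // Try the "fast" combination first
    Uint32 flags = SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN;
    SDL_Surface *screen = SDL_SetVideoMode( 800, 480, 16, flags );

    if( screen == NULL ) {
        // Fall back to a plain software surface if that mode is refused
        screen = SDL_SetVideoMode( 800, 480, 16, SDL_SWSURFACE );
    }

    return screen;
}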
 