SDL Smooth Scrolling Tips?


skeezix said:
And the base of it all is.. from your description and window size, there is no way it should be too much load for anything :) On a 100MHz GP32 I was rendering a full-screen background and 50 or 100 little crappy sprites without slowing it down much (20fps or so) .. There must be something very unexpected going on for it to slow down any modern machine.

(ie: like all your artwork is being scaled and rotated and colour-shifted every render, or you have an audio thread kicking in and consuming all your CPU or something :) Are you playing mp3s in the background?

jeff

The slowdown is ONLY visible on horizontal scrolling (I only have horizontal...) and even that is barely visible. I think I may have just become ever so slightly obsessed with having things perfect.

I am, as it happens, scaling the entire image before flipping it (some monitors don't like 320x240, so I added a 2x scale); this has barely any effect on CPU usage. I switch scaling off and still notice jitter. Like I said though, my workmate reckons it looks absolutely fine, so perhaps I'm just obsessing too much (I don't think I am obsessing TOO much - I mean, I want this to look lovely, not half-arsed).
 
Try this:
Code:
#define FRAME_AVG_COUNT 16
#define WANTED_FPS 60.0f

        float frameDeltas[FRAME_AVG_COUNT];
        const float wantedDelta = 1.0f / WANTED_FPS;
        int frameIdx = 0;
        Uint32 oldTicks = SDL_GetTicks();
        int i;

        /* seed the history with the ideal delta so the average starts sane */
        for(i = 0; i < FRAME_AVG_COUNT; i++)
                frameDeltas[i] = wantedDelta;

        while(!exitProgram)
        {
                Uint32 newTicks = SDL_GetTicks();
                float delta = (float)(newTicks - oldTicks) / 1000.0f;
                oldTicks = newTicks;

                /* store the newest delta in the ring buffer */
                frameDeltas[frameIdx] = delta;
                frameIdx = (frameIdx + 1) % FRAME_AVG_COUNT;

                /* average the whole buffer to smooth out single-frame spikes */
                delta = 0.0f;
                for(i = 0; i < FRAME_AVG_COUNT; i++)
                        delta += frameDeltas[i];
                delta /= (float)FRAME_AVG_COUNT;

                //Now use delta to scale all actions (don't rely on anything else)

                if(menu->screenMode == PlayingGame)
                {
                        gameLoop(delta);//this checks input and animates one frame
                        game->Draw(screen);

                        //example:
                        myObject->x += 3 * delta; //moves 3 pixels per second; x has to be a float too btw
                }
                else
                {
                        menuLoop(delta);//this checks input and animates one frame
                        menu->Draw(screen);
                }

                SDL_Flip(screen);

                /* if we finished early, sleep off the rest of the frame */
                if(delta < wantedDelta)
                        SDL_Delay((Uint32)((wantedDelta - delta) * 1000.0f));
        }

I may have made a few mistakes since I don't have gcc available but you get the general idea. Use multiple smoothing systems on top of one another and floats for everything, and you can't fail.
 
dflemstr said:
Try this: [code snipped]

I may have made a few mistakes since I don't have gcc available but you get the general idea. Use multiple smoothing systems on top of one another and floats for everything, and you can't fail.

Okay, so you're actually altering the amount an object moves by, depending on the average time taken to draw each frame?
What if you want to move one pixel at a time? I shall look into this tonight, thanks :)

Edit: Although, I was always taught to use ints and scale up/down to avoid costly floats. Does the Pandora CPU have a maths unit that allows floats? I like not having a maths unit - more challenging!
 
Miner49er said:
Edit: Although, I was always taught to use ints and scale up/down to avoid costly floats. Does the Pandora CPU have a maths unit that allows floats? I like not having a maths unit - more challenging!
It does not have an FPU in the core IIRC, but I think that you get access to floats anyway through the DSP or some related chip, and I was told that the float performance you get out of it is only slightly worse than on i686.

If that's not the case, then just replace all the floats with fixed-point values, or even ints with manual scaling; I don't care, as long as you have the ability to store fractional values somehow. I still believe that your jump issues are caused by an inaccurate timer and accumulating rounding errors, and using averages and fractions respectively are the only solutions to those problems that I'm aware of.
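For what it's worth, a minimal sketch of the fixed-point route dflemstr describes, using 16.16 format (the format choice is mine, and drawSprite/spriteY are placeholder names, not anything from this thread):

Code:
/* 16.16 fixed point: top 16 bits are whole pixels, bottom 16 the fraction */
typedef int fixed;

#define FIX_SHIFT 16
#define INT_TO_FIX(x) ((x) << FIX_SHIFT)
#define FIX_TO_INT(x) ((x) >> FIX_SHIFT)

fixed x = INT_TO_FIX(10);             /* start at pixel 10 */
fixed speed = INT_TO_FIX(3) / 60;     /* 3 pixels per second at a fixed 60fps */

/* each frame: accumulate the fraction, draw at the whole-pixel position */
x += speed;
drawSprite(FIX_TO_INT(x), spriteY);

The fraction accumulates in the low bits, so the sprite only moves a whole pixel every few frames, but the sub-pixel position is never thrown away.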
 
Miner49er said:
dsh said:
I believe that if you could post your main loop's code that would allow us to help you more.

Code:
	while(!exitProgram)
	{
		fps.start();

		if(menu->screenMode == PlayingGame)
			gameLoop();//this checks input and animates one frame
		else
			menuLoop();//this checks input and animates one frame

		if(menu->screenMode == PlayingGame)
			game->Draw(screen);
		else
			menu->Draw(screen);

		SDL_Flip(screen);

		diff = (1000 / updateFPS) - fps.GetTicks();
		if( diff > 0 )
			SDL_Delay( diff );
	}

Edited: because I pasted a load of crap by accident. The fps class just records the tick count (GetTicks returns the time passed since Start).
What if you move the SDL_Flip call after the delay? Then it should update the screen at a regular rate, rather than drawing to the buffer at a regular rate (unless I'm missing something)
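Reusing the identifiers from the loop Miner49er posted, the reordering would look something like this (an untested sketch):

Code:
while(!exitProgram)
{
        fps.start();

        if(menu->screenMode == PlayingGame)
        {
                gameLoop();
                game->Draw(screen);
        }
        else
        {
                menuLoop();
                menu->Draw(screen);
        }

        /* wait out the rest of the frame first... */
        diff = (1000 / updateFPS) - fps.GetTicks();
        if(diff > 0)
                SDL_Delay(diff);

        /* ...then present, so the flips themselves happen at a regular rate */
        SDL_Flip(screen);
}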
 
http://olofson.net/examples.html has some SDL examples that use OpenGL for sub-pixel-precision rendering and parallax scrolling. You'd have to update them to use OpenGL-ES, but they should get you started.
 
And watch for the classic blunder --

Once you start factoring delay in, you can easily screw up. Old FPSs in the Quake era famously had this issue -- based on lag, you would move the objects further, so if you're a few frames behind, you move more and keep the same effective speed. Great.. until you run into a 1fps situation, and suddenly your guy is jumping 60 frames' worth in one move .. and if you don't calculate every possible collision for each of those steps that were not taken, you end up teleporting through walls. (ie: most of the time you calculate collisions where you are and where you wish to enter .. but you have to be careful to allow for all the possible in-betweens as well, especially when you potentially skip around due to delay lag..)

jeff
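A sketch of the usual guards against the blunder skeezix describes (not his code; collides() and MAX_STEP are placeholders made up for illustration):

Code:
#define MAX_STEP (1.0f / 30.0f)  /* never integrate more than 1/30s at once */

/* clamp a huge delta so a 1fps hiccup can't move you a full second's worth */
if(delta > MAX_STEP)
        delta = MAX_STEP;

/* or: split a big move into pixel-sized substeps and collide each one */
float remaining = speed * delta;
while(remaining > 0.0f)
{
        float step = (remaining > 1.0f) ? 1.0f : remaining;
        if(collides(myObject->x + step, myObject->y))
                break;                /* stop at the wall instead of inside it */
        myObject->x += step;
        remaining -= step;
}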
 
dflemstr said:
It does not have an FPU in the core IIRC, but I think that you get access to floats anyway through the DSP or some related chip, and I was told that the float performance you get out of it is only slightly worse than on i686.

If that's not the case, then just replace all the floats with fixed-point values, or even ints with manual scaling; I don't care, as long as you have the ability to store fractional values somehow. I still believe that your jump issues are caused by an inaccurate timer and accumulating rounding errors, and using averages and fractions respectively are the only solutions to those problems that I'm aware of.

No...

Cortex-A8 on Pandora has an FPU called VFPlite, but it's not pipelined so most useful operations take a minimum of several cycles, somewhere around 7. That's throughput, not just latency. VFPlite can handle both single and double precision and can conform to IEEE754 standards for formats and operations.

It also has a vector FPU called NEON, which can start a two-wide single-precision operation per cycle. It's not IEEE754 compliant, and as far as I know the compiler currently won't generate scalar code (mis)using it. It can vectorize code to use it, but not everything is vectorizable and I doubt GCC is that amazing at it anyway.

The DSP is purely fixed point.

I'm certain his problem has nothing to do with accumulated roundoff error from moving things, because he's moving at one pixel per interval. Making a 2D game time based instead of frame based is a waste if you can guarantee that the framerate will be acceptably high. Consoles have had fixed framerate and frame based timing for years. Since 2D doesn't have that dynamic a load it's not that hard to do it, and if it really does become a problem you can use frameskip to alleviate it, which will have similar results. The main thing is that you want to time to vsync and you want to have a consistent framerate as much as possible.
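(A rough sketch of that kind of fixed-rate loop with frameskip -- an illustration, not Exophase's code, borrowing updateFPS and the draw calls from the loop posted earlier:)

Code:
Uint32 next = SDL_GetTicks();
int wait;

while(!exitProgram)
{
        gameLoop();                     /* the logic always runs at the fixed rate */
        next += 1000 / updateFPS;

        if((int)(next - SDL_GetTicks()) >= 0)
        {
                game->Draw(screen);     /* on time: draw and present */
                SDL_Flip(screen);
        }
        /* else: skip drawing this frame so the logic can catch up */

        wait = (int)(next - SDL_GetTicks());
        if(wait > 0)
                SDL_Delay((Uint32)wait);
}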

Anyway, here's what I think.. yes, SDL_Delay is not going to have perfect precision, probably no better than 1Hz, and it errs liberally. Even regardless of the precision you can end up being late. If SDL_Flip is also waiting for vsync (depends on your setup, but this is at least what you want it to be doing - and I'm going to guess it is because you said you're using double buffering) then your delay being late by even the most minute amount will cause it to miss the vsync you wanted to wait for, and SDL_Flip will now be waiting for the next one. Meaning an entire frame will be missed and it'll look jumpy.

The first thing you want to do is verify the problem. Keep a running average of frames per second by using SDL_GetTicks and a frames displayed counter. Run it for a long time. If it doesn't stabilize towards the fps you wanted then you know it's busted and it's not just mid-frame jitter. Now, to fix it, what you're going to want to do is delay for a lower period. You might have to experiment to find out how much you can get away with. In fact, you can make a routine which does this at startup - try several delays back to back in tight loops and see how off you end up in the end by looking at the ticks. You can use this to find the timer's precision and overhead. You want to take that precision, round down towards it, and then add the overhead. The goal is to wait for as much as you feel safe waiting for, then let SDL wait for vsync for the rest. This is only necessary because I'm assuming SDL is spin-locking to wait for vsync; I say this because I don't think there are good generic ways of having the OS do it for its Windows or Linux targets. If I'm wrong then that means you should take out the delay entirely. In fact, you may want to try that first just to see if it fixes things.
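(A sketch of that startup measurement -- the loop count and the interpretation are not from this thread:)

Code:
/* measure what SDL_Delay(1) actually costs on this machine */
#define CAL_LOOPS 100

Uint32 start;
float delayCost;
int i;

start = SDL_GetTicks();
for(i = 0; i < CAL_LOOPS; i++)
        SDL_Delay(1);

/* e.g. 1000 elapsed ticks means SDL_Delay(1) really sleeps ~10ms here.  */
/* Only ever ask for delays in units you've measured, and leave the rest */
/* of the frame for the vsync wait in SDL_Flip.                          */
delayCost = (float)(SDL_GetTicks() - start) / CAL_LOOPS;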

Pandora won't have this problem because it will have a way to wait for vsync by using Linux's event system. Linux will be free to schedule other things in the mean time or halt the CPU, but once the vsync interrupt triggers the device driver waiting for it will wake up your process, which will probably be scheduled immediately thereafter.
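(For the curious, on a Linux framebuffer target that would mean something along these lines -- assuming the kernel exposes the FBIO_WAITFORVSYNC ioctl, which not all drivers implement; the OMAP driver has its own OMAPFB_WAITFORVSYNC variant:)

Code:
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/fb.h>

int fb = open("/dev/fb0", O_RDWR);
if(fb >= 0)
{
        int arg = 0;
        /* blocks in the kernel until the vsync interrupt fires; the */
        /* scheduler is free to run other processes in the meantime  */
        ioctl(fb, FBIO_WAITFORVSYNC, &arg);
}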
 
Didn't read the whole thread.

You're describing Vsync. There's a "tear" somewhere across the screen whenever it jumps. The reason you think it's jumping is that part of the screen (say, the top part) is displaying frame #32 while the bottom part is displaying frame #33. It makes it look like it skipped a frame, but really it's just vsync being off and your timing not matching your monitor's.

Vsync is pretty much the only way to match up. No timer will sync with an LCD.
 
Kramy said:
Didn't read the whole thread.

You're describing Vsync. There's a "tear" somewhere across the screen whenever it jumps. [snipped]

No he isn't. Read the rest of the thread ;P
 
calc84maniac said:
What if you move the SDL_Flip call after the delay? Then it should update the screen at a regular rate, rather than drawing to the buffer at a regular rate (unless I'm missing something)

My thoughts exactly... as you can read from my post before :)
 
Exophase said:
[snipped] The first thing you want to do is verify the problem. Keep a running average of frames per second by using SDL_GetTicks and a frames displayed counter. Run it for a long time. If it doesn't stabilize towards the fps you wanted then you know it's busted and it's not just mid-frame jitter.

Just quickly tried something along these lines (can't fully investigate as I'm at work!).
I added a bit to my main loop to display the FPS whenever the elapsed time reaches 1000ms or more. I also print out the actual elapsed time:

fps: 25, out: 1005
fps: 25, out: 1003
fps: 25, out: 1007
fps: 25, out: 1003
fps: 25, out: 1002
fps: 25, out: 1003
fps: 25, out: 1002
fps: 25, out: 1002
fps: 25, out: 1004
fps: 25, out: 1006
fps: 25, out: 1003
fps: 25, out: 1004
fps: 25, out: 1004
fps: 25, out: 1002
fps: 25, out: 1003
fps: 25, out: 1002
fps: 25, out: 1009
fps: 25, out: 1003
fps: 25, out: 1001
fps: 25, out: 1003
fps: 25, out: 1005
fps: 25, out: 1009
fps: 25, out: 1005
fps: 25, out: 1005
fps: 25, out: 1003
fps: 25, out: 1005
fps: 25, out: 1002
fps: 25, out: 1004
fps: 25, out: 1002
fps: 25, out: 1004
fps: 25, out: 1003
fps: 25, out: 1002

As you can see it differs by up to 9ms - could this explain the jitter? I'm not sure how to compensate for it right now though, and I should get back to the not-quite-so-interesting C# :-(
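(The counter being described is presumably something along these lines -- a reconstruction, not Miner49er's actual code:)

Code:
/* needs <stdio.h> */
Uint32 fpsStart = SDL_GetTicks();
int frames = 0;

while(!exitProgram)
{
        /* ...game loop, draw, SDL_Flip as before... */

        frames++;
        if(SDL_GetTicks() - fpsStart >= 1000)
        {
                printf("fps: %d, out: %u\n", frames, SDL_GetTicks() - fpsStart);
                frames = 0;
                fpsStart = SDL_GetTicks();
        }
}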
 
You're trying to get it to update at 25fps, but what refresh rate is your display running at? It has to be some multiple of the rate you're trying to synchronize at or you'll get jitter for sure. At 60Hz, for example, 25fps works out to 60/25 = 2.4 refreshes per frame, so frames alternate between being displayed for 2 and 3 refreshes and the motion visibly stutters.
 
Exophase said:
You're trying to get it to update at 25fps, but what refresh rate is your display running at? It has to be some multiple of the rate you're trying to synchronize at or you'll get the jitter for sure.

He said Windows in the first post, and it's probably on an LCD. If I had to guess, 60Hz, since that's what most LCDs default to.

I must be honest - I've never seen a 25Hz or 50Hz LCD. :)
 
Kramy said:
He said Windows in the first post, and it's probably on an LCD. If I had to guess, 60Hz, since that's what most LCDs default to.

I think this LCD monitor is running at 60Hz. Okay, I'll try changing to 30fps.

Result: no noticeable difference.

I just tried Giana's Return and the scrolling on that is pretty abysmal IMHO. Perhaps I just need to accept that scrolling on PCs is rubbish? I certainly remember perfectly smooth scrolling on my Amiga...
 
Yeah. There's loads of stuff going on in the background on a PC that Amiga programmers would have just killed off.

You said earlier that other people said it looks OK.

They're right. Don't get hung up on the small stuff - I do it too, but I'm officially mental.

*edit* syntax
 
In my game scrolling is smooth, so I highly doubt it's impossible to achieve on a PC running Windows, or anything else for that matter.

It uses OpenGL though. But the first versions used 100% software blitting and I didn't notice any jitter there either.
 