A Query About Game Engines And Timers


timbobsteve

Hi All,

Just a quick query to see how everyone else manages timers in their own code. Let me give some background. Currently I have a sprite class that loads a descriptor file telling it which sprite-sheet to use, the number of frames, the frame-rate, the dimensions of each frame on the sheet, and whether to oscillate the animation. This is all working fine, but I'm experiencing some slow-down, and I think it might be because my Sprite class has its own LastTicks variable and recalculates it every time Sprite->Animate() is run. e.g.
Code:
void Sprite::Animate() {
    Uint32 newTicks = SDL_GetTicks();   // milliseconds since SDL was initialised
    // Note the parentheses: subtract first, then convert to seconds
    float secondsPassed = ((float)newTicks - (float)oldTicks) / 1000.0f;

    // If we need to animate based on time then do it here
    if(secondsPassed >= 1) {
         // Advance the animation by one frame
         frameCount++;
         oldTicks = newTicks;
    }
}
I was wondering if querying SDL for ticks on every cycle could be what's affecting the performance (e.g. when my sprite moves it "jitters" across the screen).

Would it be better to move the timer stuff into the main Engine code and then pass the ticks-passed value to each object when calling OnLoop()? How else would you let child objects query their creator (the Engine) whenever they need to know the time passed? (I wouldn't want to start passing pointers to the Engine just so children could call engine->GetTicks()... it seems unsafe.)

Any help is appreciated :)
Timbobsteve.
 
Did you profile your code or look at the SDL Source?

However, I would still do what you suggested from the beginning: pass a parameter or use a global variable to tell all functions how much time has passed since the last call (or what the current time is), instead of sprinkling calls to some other timing function everywhere. That call should return the same value for every object within a frame anyway, and worse, it can run out of sync during an update if one object takes longer to calculate than another; you might then see objects moving through each other.
 
What most engines do, and what I do, is render everything for one frame, then wait/sleep for whatever time is left over. So if you wanted 60 FPS you would measure how long the frame took and sleep for the difference from ~17 ms.
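Roughly like this, as a minimal sketch using SDL (updateAndRender() is just a hypothetical stand-in for whatever one frame of your game does):
Code:
// Cap the main loop at roughly 60 FPS by sleeping away the leftover time.
const Uint32 TARGET_FRAME_MS = 1000 / 60;   // 16 ms (integer division of the ~16.7 ms target)

bool running = true;
while (running) {
    Uint32 frameStart = SDL_GetTicks();

    running = updateAndRender();            // game logic + drawing for one frame

    Uint32 elapsed = SDL_GetTicks() - frameStart;
    if (elapsed < TARGET_FRAME_MS) {
        SDL_Delay(TARGET_FRAME_MS - elapsed);   // sleep off the remainder
    }
    // If elapsed >= TARGET_FRAME_MS the frame overran; we simply don't sleep.
}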
 
OK Cool,

I will try wiring the ticks value into the engine and changing the Object template class to use OnLoop(int ticks) to pass the values across.
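Something along these lines, maybe (just a rough sketch; apart from OnLoop(int ticks) itself, all the names and members are assumptions):
Code:
// Hypothetical base class: the Engine hands each object the elapsed ticks,
// so no object ever calls SDL_GetTicks() itself.
class Object {
public:
    virtual ~Object() {}
    virtual void OnLoop(int ticksPassed) = 0;   // ticksPassed = ms since last frame
};

class Sprite : public Object {
public:
    void OnLoop(int ticksPassed) override {
        msAccumulated += ticksPassed;
        if (msAccumulated >= msPerFrame) {      // enough time for the next animation frame
            frameCount++;
            msAccumulated -= msPerFrame;
        }
    }
private:
    int msAccumulated = 0;
    int msPerFrame = 100;   // assumed: 10 animation frames per second
    int frameCount = 0;
};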

Cheers,
Timbobsteve.
 
Pickle said:
What most engines do, and what I do, is render everything for one frame, then wait/sleep for whatever time is left over. So if you wanted 60 FPS you would measure how long the frame took and sleep for the difference from ~17 ms.
This, of course, assumes that the engine will be able to render a steady 60 FPS under all circumstances on all machines.

I always do it like this instead: I have one drawing thread and one "logic" thread. The logic thread is run like this (pseudocode):
Code:
old time = current time
loop forever {
    new time = current time
    delta = new time - old time
    old time = new time

    for each game object o {
        o.update(delta)
    }
    thread.yield()
}
I.e., all objects are updated as fast as possible.

Then I have a drawing thread that also runs as quickly as possible, but if it runs faster than 60 FPS I let the thread sleep() for as long as it takes to bring down the FPS to 60, like Pickle described.

I can then balance drawing and updating by giving the threads different priorities. Works like a charm.
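In C++, the logic thread could be sketched roughly like this (GameObject and its update() are made-up stand-ins, and in a real program the object list would of course need to be synchronised with the drawing thread):
Code:
#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

struct GameObject {
    // Hypothetical object: moves at 'speed' pixels per second.
    float x = 0.0f, speed = 50.0f;
    void update(float deltaSeconds) { x += speed * deltaSeconds; }
};

std::atomic<bool> running{true};

void logicThread(std::vector<GameObject>& objects) {
    auto oldTime = std::chrono::steady_clock::now();
    while (running) {
        auto newTime = std::chrono::steady_clock::now();
        float delta = std::chrono::duration<float>(newTime - oldTime).count();
        oldTime = newTime;

        for (auto& o : objects)
            o.update(delta);          // every object sees the same delta

        std::this_thread::yield();    // give the render thread a chance to run
    }
}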
 
dflemstr said:
... snip ...
Then I have a drawing thread that also runs as quickly as possible, but if it runs faster than 60 FPS I let the thread sleep() for as long as it takes to bring down the FPS to 60, like Pickle described.

I can then balance drawing and updating by giving the threads different priorities. Works like a charm.

I might have to try this. I've never touched threading in C++ (only in C#) so I'm not too sure what is involved. It is definitely worth thinking about though.
 
Having everything update in fractional, variable intervals makes things a lot more difficult to handle, and having to deal with thread synchronization adds a ton of complication to a 2D game design. It's a lot easier if you have all of the game logic update on a fixed interval internally, then drop frames if you're running behind or incur slowdown. You can still separate game logic and video into separate threads, which can make frameskip a little easier. Audio should be in a different thread as well.

For 2D a fixed framerate is fairly attainable and quite desirable - i.e., a fixed 30 FPS will look smoother than something varying between 35 and 45 FPS. Even 3D games often choose this, but for 2D games it makes an even bigger difference, since the screen motion is more constrained and the animation is less continuous.
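A rough sketch of that kind of fixed-timestep loop with frame skipping, on top of SDL (updateGame() and drawFrame() are placeholder names):
Code:
// Fixed 30 Hz game logic with frame skipping.
const Uint32 LOGIC_STEP_MS = 1000 / 30;   // one logic tick = ~33 ms
const int    MAX_SKIPPED_FRAMES = 5;      // cap catch-up work to avoid a "spiral of death"

Uint32 nextTick = SDL_GetTicks();
bool running = true;
while (running) {
    int loops = 0;
    // Run as many fixed logic ticks as we owe; rendering is skipped while catching up.
    while (SDL_GetTicks() >= nextTick && loops < MAX_SKIPPED_FRAMES) {
        running = updateGame();           // always advances exactly one tick
        nextTick += LOGIC_STEP_MS;
        ++loops;
    }
    drawFrame();                          // drawn at most once per outer iteration
}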
 
Here's a related question for everyone to ponder...

If you have a loop that is constantly updating fractional time, is this more of a drain on the battery than an app that does what it needs to do within a target framerate, and then sleeps the rest of the time?

Conversely, if you have a game that has slower animation and doesn't have a lot of things moving every animation tick (this applies to menus too), are you burning a lot of battery by rendering every animation tick rather than only rendering when things change? (This is a question I'm pondering right now with my IMGUI widgets sitting on top of SDL.)

As for the original poster's question, I'd query the time from the system once, perform the "amount of time passed" calculation once, and then pass that down to the objects that need it (or have it accessible from a known global variable / function). That way all of your objects keep in sync, rather than time elapsing between checking your first object and your last object. Usually in my main game thread I'll step through all active objects, update them in time, and then step through them a second time to render them (or rather, render only the visible ones).
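In sketch form, that two-pass idea might look something like this (Engine, objects, OnLoop, OnRender and IsVisible are illustrative names, not anything from a specific codebase):
Code:
// One frame of a hypothetical Engine: the delta is computed exactly once,
// then every object is updated and, in a second pass, rendered.
void Engine::RunFrame() {
    Uint32 now = SDL_GetTicks();
    int ticksPassed = (int)(now - lastTicks);   // same value handed to every object
    lastTicks = now;

    for (Object* o : objects)
        o->OnLoop(ticksPassed);                 // pass 1: advance game logic

    for (Object* o : objects)
        if (o->IsVisible())
            o->OnRender(screen);                // pass 2: draw only the visible ones
}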

Cheers,
Michael
 
dflemstr has the right idea.

You have to think more OOP. A "frame" is basically a single object. Every single thing happening within the frame happens at the same time, even though technically the CPU does it all sequentially. So the delta must be the same for every object.


Exophase makes good points. Collision is far easier to handle with fixed framerates. Rather than setting a sprite's speed to x pixels per 1000 ms, you set it to x pixels per tick. ;) If the ticks slow down... well, the game slows down (unless you implement frame skipping). Big deal - it'll rarely happen, and it avoids introducing logic/collision bugs.
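A tiny illustration of the difference (the Sprite members here are invented for the example):
Code:
// Variable timestep: speed expressed per unit of real time.
void Sprite::UpdateVariable(float deltaSeconds) {
    x += pixelsPerSecond * deltaSeconds;    // depends on how long the frame took
}

// Fixed timestep: speed expressed per logic tick.
void Sprite::UpdateFixed() {
    x += pixelsPerTick;                     // every tick moves exactly the same amount
}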


mduffor: No, it's not a bigger drain. How do you think something comes out of sleep? Somewhere, something is looping.

SDL has many rendering layers. From what I remember when I worked with it, it has a final blit to the framebuffer/screen that always happens - but you also have the compositing layer. In this layer, you can gain speed by scrolling the entire layer and blitting around the edges. Of course it would be faster to do direct framebuffer updates, but then you cannot exceed the time available per frame, even for a single frame, and if your LCD refresh is slow you can still get tearing. Note - I'm not using SDL terminology (surfaces, etc.).
 
mduffor said:
Here's a related question for everyone to ponder...

If you have a loop that is constantly updating fractional time, is this more of a drain on the battery than an app that does what it needs to do within a target framerate, and then sleeps the rest of the time?

It doesn't matter what you're running; it's always going to use about the same amount of energy, since the clock speed is the same. The CPU is still doing NOPs and other background stuff.
 
Pickle said:
It doesn't matter what you're running; it's always going to use about the same amount of energy, since the clock speed is the same. The CPU is still doing NOPs and other background stuff.

That's not really true. First, if all threads yield, the kernel will put the CPU into a sleep state, to be woken up by an interrupt. That sleep state is the most power-hungry of the sleep modes, but it still draws less than an active CPU. Second, there is actually some potential for variation in active consumption based on what the code is doing, for instance which CPU pipelines are hit and how often the L2 cache or main memory is accessed.
 
Exophase said:
Pickle said:
It doesn't matter what you're running; it's always going to use about the same amount of energy, since the clock speed is the same. The CPU is still doing NOPs and other background stuff.

That's not really true. First, if all threads yield, the kernel will put the CPU into a sleep state, to be woken up by an interrupt. That sleep state is the most power-hungry of the sleep modes, but it still draws less than an active CPU. Second, there is actually some potential for variation in active consumption based on what the code is doing, for instance which CPU pipelines are hit and how often the L2 cache or main memory is accessed.

Yeah, I didn't want to say that they would be exactly the same, since you could be using more hardware in one case and not the other. Also, I think it would depend on the CPU type and its power-saving support. In my statement I had a device like the GP2X in mind, where the clock doesn't change; there I think any two apps would generally use about the same amount of power at a given clock.
 
The Waterphoenix uses time-based animation, and it's kind of weird because the slowdowns and speed-ups are noticeable. But then I think about it: on a machine where people can multitask, I can't really guarantee everyone will see the same performance, even on the same device.
 
darien said:
The Waterphoenix uses time-based animation, and it's kind of weird because the slowdowns and speed-ups are noticeable. But then I think about it: on a machine where people can multitask, I can't really guarantee everyone will see the same performance, even on the same device.
I'm going to write a program that eats CPU cycles when I hold down the R trigger; it will be like a free bullet-time effect.
 
What we usually do to hide that noticeable slowdown is use a computed delta time.
We never feed the real delta time to game objects; instead we use a weighted value from the last few frames (as few as 3, as many as 10 - it differs between the engines I've used).
It is not technically correct, as on some frames you will have advanced more than you should have, but the difference is negligible compared to the aesthetic gain.
Say you use the last 4 frames; you could go: 50% of the time taken by the last frame, 25% of the one before, 15% of the one before that, and 10% of the one before that.
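For example, as a sketch (the function is made up, and rawDelta[0] is assumed to be the most recent frame):
Code:
// Smooth the delta time by blending the last four raw frame deltas.
// rawDelta[0] is the most recent frame, rawDelta[3] the oldest.
float SmoothedDelta(const float rawDelta[4]) {
    return 0.50f * rawDelta[0]
         + 0.25f * rawDelta[1]
         + 0.15f * rawDelta[2]
         + 0.10f * rawDelta[3];   // weights sum to 1.0
}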
 
gosse said:
What we usually do to hide that noticeable slowdown is use a computed delta time.
We never feed the real delta time to game objects; instead we use a weighted value from the last few frames (as few as 3, as many as 10 - it differs between the engines I've used).
It is not technically correct, as on some frames you will have advanced more than you should have, but the difference is negligible compared to the aesthetic gain.
Say you use the last 4 frames; you could go: 50% of the time taken by the last frame, 25% of the one before, 15% of the one before that, and 10% of the one before that.
Well, let's think about this for a second.

For a human to perceive a game as "fluid", I'd say that the game would have to "sync" every 80 milliseconds (so if you "took a screenshot" every 80 ms, the speed of the game should look stable across those screenshots, kinda... or, put differently: the game shouldn't change its speed noticeably within a time interval of 80 ms). This number could of course be adjusted for more intense or slow-paced games, etc.

Now, how do we do the syncing correctly? Without going into frame prediction and the like (e.g. it would be possible, to some extent, to extrapolate from the frame delta time history), it should be pretty easy to implement what gosse suggested: you store the delta times from previous frames, add them together with distance-proportional weights, divide by the number of "history frames", and use that for your delta.

Considering these two aspects, I come to the conclusion that we should keep a frame history long enough to keep the frame rate stable for at least 80 ms. So, let's say that we want to keep our game at a constant 60 FPS and want to sync to that frame rate as well as we can. 60 FPS would mean a frame time of 16.7 ms, so our sync frame would have to be at least 5 frames long to cover the 80 ms we want to compensate for. But it isn't possible to keep 80 ms stable if you only have frame timings for those 80 ms to go by, so let's do the following: we double the number of frames in our sync frame (to 10 frames, or 167 milliseconds) and use a linear(ish) distribution for the weights so that, for the FPS curve over time, we get a steady Bezier-style function that only changes "slope" by a maximum of 0.5 units per 80 ms (which is what we want).

So, in code, this would mean that you keep a sync frame of 10 game frames in a ring buffer, and to calculate the delta, you take the contents of the buffer and multiply them by:
Code:
0.2, 0.18, 0.16, 0.14, 0.12, 0.1, 0.08, 0.06, 0.04, 0.02
This distribution doesn't add up to exactly 1 (it adds up to 1.1 in fact) but it can be approximated easily by bit shifts so that's why I chose these numbers. You could of course subtract 0.01 from each number to get exactly 1 as the sum...
Then you sum the results and average that with your current calculated delta. This should give a stable enough frame rate...
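A compilable sketch of that ring-buffer scheme, using the weights above (the class and method names are invented for the example):
Code:
#include <array>
#include <cstddef>

// Keeps the last 10 raw frame deltas and produces a weighted, smoothed delta.
class DeltaSmoother {
public:
    // Push this frame's raw delta (in seconds) and get the smoothed value back.
    float Push(float rawDelta) {
        static const float weights[10] = {
            0.2f, 0.18f, 0.16f, 0.14f, 0.12f, 0.1f, 0.08f, 0.06f, 0.04f, 0.02f};

        history[head] = rawDelta;
        head = (head + 1) % history.size();

        float weighted = 0.0f;
        for (std::size_t i = 0; i < history.size(); ++i) {
            // i == 0 is the newest sample, i == 9 the oldest.
            std::size_t idx = (head + history.size() - 1 - i) % history.size();
            weighted += weights[i] * history[idx];
        }
        // As described above: average the weighted sum with the raw delta.
        return 0.5f * (weighted + rawDelta);
    }

private:
    std::array<float, 10> history{};   // ring buffer of recent frame deltas
    std::size_t head = 0;
};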

If we want absolutely, mathematically correct curves where the max deviation per 80 ms period is equal to 1/e, we can use a Gaussian distribution instead, but that needs calibration that I don't want to deal with right now.
 