Overclocking GPU/RAM: Let's get the facts :)


@ptitSeb (If you read this):

Does YGOPro use the GPU?

In fast or slow mode or both?

Couldn't notice some difference but I can't tell for sure.
No, the current version is completely CPU-bound. No GPU is used (no luck so far on getting the GPU in; YGOPRO seems to use everything that doesn't work with the GLES1 port of Irrlicht, like reading textures, so there's still a lot of work to do on this)
 
I can get the (max?) clock speed of the RAM from /proc/pandora/sys_mhz_max (dividing it by 2, assuming this divider is always 2). As for the clock speed of the GPU, I wouldn't know how to tell whether the divider is 2 (on the new DM3730 units) or 3 (on the older OMAP3 units).
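For what it's worth, a minimal sketch of how that reading could be done, assuming /proc/pandora/sys_mhz_max holds a single integer in MHz and that the divider really is always 2:

```c
/* Minimal sketch: derive the RAM clock from /proc/pandora/sys_mhz_max.
 * Assumes the file contains one integer (MHz) and a fixed divider of 2,
 * which, as noted above, may not hold on all units. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/pandora/sys_mhz_max", "r");
    int sys_mhz;

    if (!f) {
        perror("sys_mhz_max");
        return 1;
    }
    if (fscanf(f, "%d", &sys_mhz) != 1) {
        fclose(f);
        fprintf(stderr, "unexpected file contents\n");
        return 1;
    }
    fclose(f);
    printf("bus: %d MHz, RAM: %d MHz (divider 2 assumed)\n",
           sys_mhz, sys_mhz / 2);
    return 0;
}
```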

I have no clue where to find any information on how much use the GPU gets (I suspect that the information isn't available anywhere in procfs or sysfs), and I don't know if and what idle states it may have, let alone where to find information on their use.

I'll add the RAM clock speed to sysinfo when I find the time; it's probably the most important number anyway.

Some preliminary testing indicates that (as one would expect) the RAM clock speed does have an effect on power consumption. Here are some numbers for my ReBirth unit with the CPU clocked at 980MHz, OPP5, and nothing running except the sysinfo battery panel:

168mA when RAM is overclocked to 380/2 MHz (haven't tried anything higher yet)

165mA when RAM is clocked at the default 332/2 MHz

159mA when RAM is underclocked to 250/2 MHz

154mA when RAM is underclocked to 150/2 MHz

(when I try to underclock further, there's no more power reduction - maybe the very slow RAM starts causing the CPU to waste more cycles, and if I go too low it crashes)

So for improved suspend-to-RAM power consumption, it probably makes sense to underclock the RAM as far down as possible.

@notaz: when overclocking, is it the RAM or the GPU that causes instability? I don't care that much about the GPU, and if it's the one causing the problems, maybe it would be nice to have a way to use a larger divider for the GPU so I can get faster RAM while staying in a safe range for the GPU.
 
I have no clue where to find any information on how much use the GPU gets (I suspect that the information isn't available anywhere in procfs or sysfs), and I don't know if and what idle states it may have, let alone where to find information on their use.
Yeah, that info is not available, and the GPU power states are controlled by the binary blob anyway (the kernel could probably collect some stats on requests from the blob, but I see that as a lot of work for little gain).

@notaz: when overclocking, is it the RAM or the GPU that causes instability? I don't care that much about the GPU, and if it's the one causing the problems, maybe it would be nice to have a way to use a larger divider for the GPU so I can get faster RAM while staying in a safe range for the GPU.
I suspect it's the RAM; when you are on the desktop or running PCSX/DraStic, the GPU is completely off anyway. Remember that when you overclock this, you also overclock an internal L3 bus in the OMAP3, which might cause some OMAP-internal transfers to fail/corrupt, not only RAM accesses.
 
^

Why is the RAM speed so important for you?

It was a 3% speed improvement in those games not using the GPU.

Which applications benefit more from RAM speed?

Isn't the GPU important for games?

Especially for a system monitor, RAM isn't that important for me.

I set it once and that's it.

GPU usage is more interesting for me.

@notaz: when overclocking, is it the RAM or the GPU that causes instability? I don't care that much about the GPU, and if it's the one causing the problems, maybe it would be nice to have a way to use a larger divider for the GPU so I can get faster RAM while staying in a safe range for the GPU.

I think it's the other way round.

If I remember correctly, Notaz said you could probably clock the GPU higher if you could do it separately from the RAM.

Here: http://boards.openpandora.org/index.php/topic/8960-set-clockspeed-for-sgx/#entry234796

See Notaz's post of March 22.

EDIT:

Notaz :ph34r:
 
^

Why is the RAM speed so important for you?

It was a 3% speed improvement in those games not using the GPU.

Which applications benefit more from RAM speed?
Well, basically everything benefits from faster RAM, because everything is using RAM all the time. The GPU, on the other hand, is not really that commonly used, although that may change with the new OpenGL to GL|ES wrapper. So in terms of getting speedups by overclocking, I'm more interested in faster RAM than in a faster GPU.

Also in terms of getting lower power consumption by underclocking, there's more to gain by underclocking RAM than by underclocking the GPU, since the GPU is turned off most of the time anyway, while RAM is of course always on.

Isn't the GPU important for games?

Especially for a system monitor, RAM isn't that important for me.

I set it once and that's it.

GPU usage is more interesting for me.
Oh, I agree that it would be nice to have GPU usage stats, but based on notaz' reply, I don't think that's going to be possible. I can only show statistics that can be (easily) collected.

@notaz: when overclocking, is it the RAM or the GPU that causes instability? I don't care that much about the GPU, and if it's the one causing the problems, maybe it would be nice to have a way to use a larger divider for the GPU so I can get faster RAM while staying in a safe range for the GPU.

I think it's the other way round.

If I remember correctly, Notaz said you could probably clock the GPU higher if you could do it separately from the RAM.
That makes sense. Would it be possible, then, to use a higher divider for the RAM, so you can clock the GPU higher without making the RAM unstable?
 
168mA when RAM is overclocked to 380/2 MHz (haven't tried anything higher yet)

165mA when RAM is clocked at the default 332/2 MHz

159mA when RAM is underclocked to 250/2 MHz

154mA when RAM is underclocked to 150/2 MHz
This means 2% more power usage. At 10 hours = 600 minutes, that's 12 minutes less uptime when you use it for 10 hours. Seems OK to me :) .
 
It was a 3% speed improvement in those games not using the GPU.

Which applications benefit more from RAM speed?
By your claims, PCSX-reARMed gave a 10% improvement, and it doesn't use the GPU at all - at least for the one game you gave speeds for; "3 FPS faster" for the other could mean anything.

DraStic isn't going to be representative of everything, and you'll get very different performance testing a game that doesn't use the 3D engine much. When emulating the 3D engine the emulator spends a lot of time in very slow FPU instructions (especially divisions), NEON to scalar stalls, and branchy code. PCSX-reARMed may be an opposite extreme if you're using it with the high resolution mode, since it has to do a lot of memory accesses for the framebuffer that's much less likely to stay in L2 cache.
 
168mA when RAM is overclocked to 380/2 MHz (haven't tried anything higher yet)

165mA when RAM is clocked at the default 332/2 MHz

159mA when RAM is underclocked to 250/2 MHz

154mA when RAM is underclocked to 150/2 MHz
This means 2% more power usage. At 10 hours = 600 minutes, that's 12 minutes less uptime when you use it for 10 hours. Seems OK to me :) .
Are you comparing the default with the overclocked situation? If you compare the underclocked value with the default one, the difference is not 12 minutes but 2 hours (from ~25 hours to ~27 hours).
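For the record, here's the arithmetic behind those estimates, assuming the stock 4200mAh battery (the capacity is an assumption on my part; runtime is just capacity divided by draw):

```c
/* Sketch of the runtime arithmetic. The 4200mAh capacity is an
 * assumption (stock battery), not a measured number. */
#include <stdio.h>

int main(void)
{
    const double capacity_mah = 4200.0;              /* assumed capacity */
    const double draw_ma[] = { 168, 165, 159, 154 }; /* measured idle draws */

    for (int i = 0; i < 4; i++)
        printf("%3.0f mA -> %4.1f h\n", draw_ma[i], capacity_mah / draw_ma[i]);
    return 0;
}
/* prints 25.0 h at 168mA and 27.3 h at 154mA - the ~25 vs ~27 hours above */
```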

Under load (I used the CPU stress test to put some load on it), I get the following numbers (still at 980MHz, OPP5):

- 150/2 MHz: ~640mA or 6h38m to empty a full battery

- 250/2 MHz: ~665mA or 6h22m to empty a full battery

- 332/2 MHz (default): ~679mA or 6h14m to empty a full battery

- 380/2 MHz: ~686mA or 6h10m to empty a full battery

So there's a noticeable but small impact on power consumption.

I hoped to get better suspend-to-RAM power consumption with this, but I couldn't measure any difference; whether clocked at 150/2 MHz or 380/2 MHz, the suspended power consumption stayed around 19.5mA (~9 days). So I'm assuming that while suspended, the effective clock speed is much lower anyway, so it doesn't matter what max clock speed you selected.

In standby (lid closed, nothing running except sysinfo), I also couldn't measure any difference; in fact, it even seemed to consume slightly _more_ power when the RAM was underclocked. My standby power consumption seems to hover around 27mA (~6.5 days).
 
I hoped to get better suspend-to-RAM power consumption with this, but I couldn't measure any difference; whether clocked at 150/2 MHz or 380/2 MHz, the suspended power consumption stayed around 19.5mA (~9 days). So I'm assuming that while suspended, the effective clock speed is much lower anyway, so it doesn't matter what max clock speed you selected.
On that note, are you able to time how long it takes to go to and from suspend-to-RAM with the overclocked bus speed?
 
It was a 3% speed improvement in those games not using the GPU.


Which applications benefit more from RAM speed?
By your claims, PCSX-reARMed gave a 10% improvement, and it doesn't use the GPU at all - at least for the one game you gave speeds for; "3 FPS faster" for the other could mean anything.


DraStic isn't going to be representative of everything, and you'll get very different performance testing a game that doesn't use the 3D engine much. When emulating the 3D engine the emulator spends a lot of time in very slow FPU instructions (especially divisions), NEON to scalar stalls, and branchy code. PCSX-reARMed may be an opposite extreme if you're using it with the high resolution mode, since it has to do a lot of memory accesses for the framebuffer that's much less likely to stay in L2 cache.
Doh.

My bad.

I confused FPS with %.

PCSX-reARMed shows the FPS, not %.

So that's good then.

:)
 
I hoped to get better suspend-to-RAM power consumption with this, but I couldn't measure any difference; whether clocked at 150/2 MHz or 380/2 MHz, the suspended power consumption stayed around 19.5mA (~9 days). So I'm assuming that while suspended, the effective clock speed is much lower anyway, so it doesn't matter what max clock speed you selected.
On that note, are you able to time how long it takes to go to and from suspend-to-RAM with the overclocked bus speed?
I wouldn't know how to time that accurately, at least not from userspace. Either way, it seems to take about a second or so (probably more if wifi is enabled).

Also, for the record: on my ReBirth unit, I can overclock to 388/2MHz without any problem (it seems), while at 390/2MHz it immediately crashes.

Small benchmark result for Microbes:

test setup: first 1000 frames of the main menu of "Microbes 2" on difficulty setting "insane" and graphics detail set to "full", with frameskip and the frame limiter disabled; CPU clocked to 980MHz (this does not use the GPU at all, but RAM speed may have an impact on all the blitting)

at 332/2MHz (default), I get 36.38 FPS

at 388/2MHz, I get 40.64 FPS (an 11.71% speedup for a 16.87% overclock)

at 300/2MHz, I get 32.50 FPS

at 200/2MHz, I get 20.51 FPS

at 100/2MHz, I get 7.32 FPS

so the performance of Microbes seems to scale more or less linearly with the bus/RAM speed, which is not surprising given the amount of framebuffer blitting that's going on.
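To put a number on "more or less linearly", here's FPS per MHz of RAM clock (RAM clock = bus/2), using the figures above:

```c
/* FPS per MHz of RAM clock, numbers copied from the measurements above. */
#include <stdio.h>

int main(void)
{
    const double ram_mhz[] = { 194, 166, 150, 100, 50 };
    const double fps[]     = { 40.64, 36.38, 32.50, 20.51, 7.32 };

    for (int i = 0; i < 5; i++)
        printf("%3.0f MHz: %5.2f FPS, %.3f FPS/MHz\n",
               ram_mhz[i], fps[i], fps[i] / ram_mhz[i]);
    return 0;
}
/* the ratio stays around 0.20-0.22 FPS/MHz down to 100MHz and only
 * drops off at 50MHz, i.e. roughly linear over the useful range */
```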
 
Wow, you're disturbingly main-RAM-limited. That screams sub-optimal usage patterns. There are probably more cache-friendly ways you can be using that framebuffer.
 
Wow, you're disturbingly main-RAM-limited. That screams sub-optimal usage patterns. There are probably more cache-friendly ways you can be using that framebuffer.
No doubt. Every frame of the game starts by blitting an 800x480 static background (that only changes every second or so), then drawing all the enemies and effects on top of that, then blitting an 800x480 overlay (at some offset) with alpha-blending on top of that, and then in the main menu that's followed by blitting all of the text surfaces and other menu stuff on top of that. So that's quite RAM-intensive, but I wouldn't know how to make something like that cache-friendlier. Do you have any suggestions?

Oh, and I did the benchmarks with a build without compiler optimizations (-O0), so that may also have a small impact - although I doubt it, given that most time is spent in the SDL blitters and not in my code. But let me check anyway with a more typical -O2 build...

at 388/2MHz, I get 44.11 FPS (12.75% speedup for a 16.87% overclock)

at 332/2MHz, I get 39.12 FPS

at 300/2MHz, I get 34.99 FPS

at 200/2MHz, I get 21.39 FPS

at 100/2MHz, I get 7.6 FPS

Yep, looks like it's still RAM-bottlenecked.

It's not really a big problem for Microbes - it's designed for 30 FPS and the actual game is graphically less demanding than the main menu because 1) the actual game window is a bit smaller than 800x480 and 2) there's no menu to render, so it shouldn't be a big problem to get the 30 FPS it needs. But I would still be interested in optimizing the performance, if only to save some battery life :)
 
No doubt. Every frame of the game starts by blitting an 800x480 static background (that only changes every second or so), then drawing all the enemies and effects on top of that, then blitting an 800x480 overlay (at some offset) with alpha-blending on top of that, and then in the main menu that's followed by blitting all of the text surfaces and other menu stuff on top of that. So that's quite RAM-intensive, but I wouldn't know how to make something like that cache-friendlier. Do you have any suggestions?
If you want to make it less RAM-limited, there are two things you should do: tiling and prefetching. Tiling means splitting the framebuffer into smaller sections and performing an entire frame's worth of processing one tile at a time. The tile should at least comfortably fit into L2 cache (going all the way down to something that fits in L1 cache may even make sense). You probably want to be able to fit at least two tiles so the thing you're blitting doesn't push it out.

The downside to tiling is that you have to clip the things you're blitting against the tile edges. This isn't that bad for 2D sprite-based stuff, since all the blits are rectangles anyway. It's not that hard to set up SDL to do this for you: make the tile a surface and go through a loop that blits everything to it, then blit the tile to the screen (see the sketch below). If you're rendering lots and lots of little blits, like a tilemapped background, you may want to do some higher-level culling to quickly ignore most of the stuff that definitely won't be visible. If your sprites have some hierarchical ordering to them (maybe to make collision detection faster), you can do that for them too; otherwise you'll have to iterate over all of them, but that's probably not that big of a deal since the bounding box test should be fast.
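As a minimal SDL 1.2 sketch of that loop - tile size and the draw_scene() callback are placeholders, and the real renderer would have to offset every blit by the tile origin:

```c
/* Hypothetical tiled-rendering loop with SDL 1.2. TILE_W/TILE_H and
 * draw_scene() are placeholders, not anything from Microbes. */
#include <SDL/SDL.h>

#define TILE_W 400
#define TILE_H 160   /* 400x160 at 16bpp = 128000 bytes, ~half of L2 */

/* placeholder: draws everything whose bounding box intersects the tile,
 * with all coordinates shifted by (-ox, -oy) */
extern void draw_scene(SDL_Surface *tile, int ox, int oy);

void render_tiled(SDL_Surface *screen)
{
    SDL_Surface *tile = SDL_CreateRGBSurface(SDL_SWSURFACE,
                                             TILE_W, TILE_H, 16,
                                             0xF800, 0x07E0, 0x001F, 0);
    for (int y = 0; y < screen->h; y += TILE_H) {
        for (int x = 0; x < screen->w; x += TILE_W) {
            draw_scene(tile, x, y);          /* whole frame, one tile */
            SDL_Rect dst = { x, y, 0, 0 };   /* w/h ignored by blit */
            SDL_BlitSurface(tile, NULL, screen, &dst);
        }
    }
    SDL_FreeSurface(tile);
    SDL_Flip(screen);
}
```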

Prefetching can be used to hide some of the latency of bringing the big things you're blitting into cache. It's pretty self-explanatory: you can do it with inline ASM and probably with intrinsics. If you're really hardcore, you can use the dedicated preload engine instead, but that's really tricky.
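With GCC, __builtin_prefetch (which emits PLD on ARM) is the easy route - a rough sketch on a plain 16bpp copy loop, with the lookahead distance being a tuning parameter rather than gospel:

```c
/* Rough prefetching sketch for a straightforward 16bpp copy loop.
 * 32 uint16_t = one 64-byte Cortex-A8 cache line; prefetching two
 * lines ahead is a guess that would need tuning on real hardware. */
#include <stddef.h>
#include <stdint.h>

void copy_row(uint16_t *dst, const uint16_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if ((i & 31) == 0)                    /* once per cache line */
            __builtin_prefetch(&src[i + 64]); /* two lines ahead */
        dst[i] = src[i];
    }
}
```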

The biggest expense right now is most likely that 800x480 alpha blended blit since it needs to read from the framebuffer. Not only do the loads themselves cause cache misses but they probably force the write buffer to drain which prevents it from working as effectively for the stores to the framebuffer.
 
Thanks for the suggestions!

Prefetching is already done by notaz' NEON blitters in his tweaked SDL. It made a noticeable difference indeed. It's because these blitters are so efficient that the RAM speed becomes the bottleneck - with a crappy naive blitter, the RAM would have much less trouble keeping the CPU busy.

Tiling would be tricky because the enemy microbes and effects aren't sprites; they're drawn using primitive drawing functions for lines, circles, discs, and disc segments (pies) - I use that method because it makes it easier to smoothly animate and scale things. Most of the time a disc will have parts of it on 4 tiles (or more if the disc is large and the tiles are small), which means that it essentially gets drawn 4 times instead of once, because the clipping happens at the pixel level for those objects - the total number of pixels actually drawn would of course be the same, but there would be lots of clipped pixels that still have a cost. How big a tile size are we talking here to fit in L2 cache?

The big 800x480 alpha blended blit is purely cosmetic and disabled in the default setting of graphics detail. It is costly indeed. Maybe with some trickery I could get a similar but slightly different effect in a more efficient way, e.g. by using some kind of dithering instead of real transparency, or by replacing the blit image (which is kind of random cloudy stuff) by some simple mathematical formula that looks similar (would improve memory read locality since only the target surface has to be read if the source "image" is computed), and/or by blitting only 1 color plane (e.g. green) instead of all 3 and hoping that that still looks nice.
 
Tiling would be tricky because the enemy microbes and effects aren't sprites; they're drawn using primitive drawing functions for lines, circles, discs, and disc segments (pies) - I use that method because it makes it easier to smoothly animate and scale things. Most of the time a disc will have parts of it on 4 tiles (or more if the disc is large and the tiles are small), which means that it essentially gets drawn 4 times instead of once, because the clipping happens at the pixel level for those objects - the total number of pixels actually drawn would of course be the same, but there would be lots of clipped pixels that still have a cost. How big a tile size are we talking here to fit in L2 cache?
L2 cache is 256KB. How big a tile can be depends on your pixel format. If it's 16bpp, then that's 64K pixels for half of L2 (128KB). That can be a 256x256 tile, but the tile doesn't have to be square. In fact, if you use tiles that are 800 wide or 480 tall, then you don't have to clip in both dimensions, but that's moot if you're already clipping against the edge of the screen. The ideal sizing will be whatever minimizes clipping and divides the screen into a whole number of tiles (so instead of 256x256, something like 400x160 or 800x80, 6 tiles total). I don't know what your graphics look like, but if they're made up of a bunch of small vector primitives then the individual primitives will mostly not clip.
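A quick sanity check of those candidate sizes at 16bpp:

```c
/* Verify that the tile sizes mentioned above fit in half of the
 * 256KB L2 cache at 2 bytes per pixel. */
#include <stdio.h>

int main(void)
{
    const int bpp = 2;              /* bytes per pixel at 16bpp */
    const int budget = 128 * 1024;  /* half of the 256KB L2 */
    const int dims[][2] = { { 256, 256 }, { 400, 160 }, { 800, 80 } };

    for (int i = 0; i < 3; i++) {
        int bytes = dims[i][0] * dims[i][1] * bpp;
        printf("%dx%d: %d bytes, %s\n", dims[i][0], dims[i][1], bytes,
               bytes <= budget ? "fits" : "too big");
    }
    return 0;
}
```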
 
Also, for the record: on my ReBirth unit, I can overclock to 388/2MHz without any problem (it seems), while at 390/2MHz it immediately crashes.
Strangely enough, the above overclocking result was only true with SZ 1.53; I just upgraded to 1.54-final (which has a newer kernel and GPU driver), and now I can only overclock the bus clock to 362MHz, with an immediate crash at 364MHz (while previously 388MHz worked fine).

Does anyone (notaz?) have a clue what changed (presumably in the kernel or GPU driver) to cause this difference?
 