GP2X Leading By Inappropriate Examples?


Laser

Still Fresh
Joined
Feb 28, 2007
Messages
51
Location
UK
Website
lasernet.plus.com
Well it's usually a bad idea for your first forum post to be contentious, but I'm going to anyway! :p I hope people will take this seriously and discuss the subject in the constructive manner intended. :ph34r:

For some time I've been following the GP2X scene and am on the verge of buying one. I've some experience of both desktop and embedded development and I love the concept of an open handheld. There is, however, one particular aspect I've seen repeatedly which I feel really gives a bad impression...

Upon downloading the SDK, the curious newbie is presented with some example event-driven SDL code to open a display, listen to the stick, and exit. Fair enough. Except the example busy-waits on an event by using GetEvent() in a loop, rather than the functionally-equivalent WaitEvent() style. I have, sadly, seen this replicated any number of times in the (otherwise highly commendable B) ) programs created by the community.

Why is this a big deal?

I feel the GP2X is struggling with a couple of usability-related issues. 1) Battery life and 2) the frequent advice to mess about with the CPU clock frequencies. Both of these could potentially be lessened by appropriate use of idle-wait-style programming instead of busy-waiting.

The busy-wait style came about when old computers had a single-thread system, or inadequate interrupt resources. Large numbers of MS-DOS programs are written like this. On a desktop machine it makes relatively little difference, although multi-threading performance can be badly affected. On an embedded device, particularly a battery-powered device, arranging for the CPU(s) to be inactive when nothing is happening can drastically reduce power consumption.

Now, it may be that the SDL Wait does nothing more than internally poll the Get, which of course will get us nowhere. However, if it doesn't already, it certainly could be arranged so that it would halt or idle the CPU until it needed to wake up, assuming the timer, keypad, vsync, etc sources had interrupt capability. During this time there is very little power consumption, reducing the average power consumption of the device.

This also removes the need to find out what the slowest acceptable clock frequency is for any given application, since the device can run at its maximum rate, but for less time. Only in situations where unsynchronised activity is happening (very rare in an interactive application) and the user is happy to wait longer for it does the clock speed need to be reduced.

The reduction in the need to educate the average user about CPU clocks is desirable, and if we could extend the battery life by even 10%, it has got to be worth something.

It is entirely likely that some programmers are already using these techniques, but IMHO it needs to pervade the whole platform, including the system firmware. I would like to hear the opinions of other developers here on this subject. In particular, I propose that the SDK examples could be changed to idle-wait models, and the wiki resources updated to educate new programmers about the benefits. I'd also like to hear from anyone who knows whether the current SDL Wait function(s) idle the processor, or indeed whether the underlying Linux kernel supports this. Since uCLinux is becoming ever more popular for embedded environments, it is reasonable to assume it is feasible.
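
To make the proposal concrete, here is the sort of idle-wait main loop I have in mind - only a rough, untested sketch built from standard SDL 1.2 calls, with a made-up function name, rather than anything from the actual SDK examples:

Code:
/* Idle-wait main loop sketch: block in SDL_WaitEvent() until something
   happens instead of spinning on the event queue. Assumes SDL video and
   joystick initialisation has already been done elsewhere. */
#include <SDL/SDL.h>

void idle_wait_loop(void)
{
	SDL_Event event;
	int running = 1;

	while (running && SDL_WaitEvent(&event)) /* returns 0 only on error */
	{
		switch (event.type)
		{
			case SDL_JOYBUTTONDOWN:
				/* react to the button and redraw here */
				break;
			case SDL_QUIT:
				running = 0;
				break;
		}
	}
}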


Thanks for reading. I await your comments with interest! :unsure:
 
Code:
int SDL_WaitEvent (SDL_Event *event)
{
	while ( 1 ) 
	{
		SDL_PumpEvents();
		switch(SDL_PeepEvents(event, 1, SDL_GETEVENT, SDL_ALLEVENTS)) 
		{
			case -1: return 0;
			case 1: return 1;
			case 0: SDL_Delay(10);
		}
	}
}
 
Some of the devices in the system do not have an I/O wait mode, so when you read from them you either get data, get an indication that there is no data, or get the last data that was read if no new data is available. I am unsure from the SDL interface which of these types of devices it is set up to read from, since I am just starting to use SDL. It would appear that not all of the battery-conserving approaches have been used at the lowest levels, so trying to use appropriate approaches at some levels is "difficult".
 
critical posted on Mar 1 2007 at 12:39 PM said:
Code:
int SDL_WaitEvent (SDL_Event *event)
{
	while ( 1 ) 
	{
		SDL_PumpEvents();
		switch(SDL_PeepEvents(event, 1, SDL_GETEVENT, SDL_ALLEVENTS)) 
		{
			case -1: return 0;
			case 1: return 1;
			case 0: SDL_Delay(10);
		}
	}
}
That's the actual SDL code? Bit of a downer. :angry:

Although it is doing a Delay() (which may or may not idle the CPU - technically it could/should) and will therefore reduce battery consumption, it unfortunately means WaitEvent() doesn't return the instant an event happens.

By the standards of other embedded systems, this is quite broken. :blink:

I am not familiar enough with the inner workings of the Linux kernel to comment further on what could be done, but I feel that SDL could potentially implement this better (or an alternative offered) if the kernel does allow idling of the CPU.


Are there other systems than SDL that people are using for GP2X development?

Do we know how much the SDL implementation relies on the Linux kernel, versus trying to re-implement everything itself?

If someone came up with an SDL-like (but not directly compatible) library that offered better battery life if used, do you think people would generally consider using it, or would it be ignored?


Thanks for the interesting replies so far. :D
 
Laser posted on Mar 1 2007 at 02:08 PM said:
That's the actual SDL code? Bit of a downer. :angry:
Although it is doing a Delay() (which may or may not idle to CPU - it technically could/should) and will therefore reduce battery consumption, it is unfortunately going to mean WaitEvent() doesn't return immediately an event happens.

By the standards of other embedded systems, this is quite broken. :blink:
You don't get hardware interrupts for most button presses, so that's a big problem.

I think squidge suggested a while ago sampling the buttons on vblank would be the best option.

I've dredged up a previous thread on a similar-ish topic:
http://www.gp32x.de/board/index.php?showtopic=33788

EDIT: and this one...

http://www.gp32x.de/board/index.php?showtopic=31979
 
We've had a discussion about select here


Edit: I wouldn't expect Delay() to idle the CPU; the best you can expect is for your process to be descheduled - Linux is running other processes as well as yours.
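
For anyone who hasn't met it, the usual shape is something like this (just a sketch - it assumes you already have a file descriptor for some input device, and leaves out error handling):

Code:
/* Sketch: block in select() until the fd becomes readable or the timeout
   expires. While blocked, the process is descheduled rather than spinning. */
#include <sys/select.h>
#include <sys/time.h>

int wait_for_input(int fd, long timeout_ms)
{
	fd_set readfds;
	struct timeval tv;

	FD_ZERO(&readfds);
	FD_SET(fd, &readfds);

	tv.tv_sec  = timeout_ms / 1000;
	tv.tv_usec = (timeout_ms % 1000) * 1000;

	/* >0 = readable, 0 = timeout, -1 = error */
	return select(fd + 1, &readfds, NULL, NULL, &tv);
}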
 
critical posted on Mar 1 2007 at 02:28 PM said:
You don't get hardware interrupts for most button presses, so that's a big problem.

I think squidge suggested a while ago sampling the buttons on vblank would be the best option.
It's a shame the buttons didn't get an interrupt, considering the target applications here, but as you say, a periodic poll is probably acceptable, since the code is likely already doing other things at a fairly high rate - things the button would be pressed in response to.

On a 200MHz processor, even waiting a single millisecond between scans can be a considerable saving.
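
Something along these lines (only a sketch - the function name is made up, and I'm assuming the buttons arrive as SDL joystick events, which I believe is how the GP2X SDL port presents them) would already be far kinder than a tight loop:

Code:
/* Sketch: drain any pending events, then sleep ~1ms so the kernel can
   deschedule us between scans instead of letting us spin. */
#include <SDL/SDL.h>

void poll_then_rest(int *running)
{
	SDL_Event event;

	while (SDL_PollEvent(&event))   /* handle everything already queued */
	{
		if (event.type == SDL_QUIT)
			*running = 0;
		/* joystick/button handling goes here */
	}
	SDL_Delay(1);                   /* give the CPU a rest between scans */
}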

I've dredged up a previous thread on a similar-ish topic:
http://www.gp32x.de/board/index.php?showtopic=33788
Heh! I tried to be particularly careful with my words to avoid an argument like that. :rolleyes:

Just like the OP in that thread, my motivation is to improve things constructively, not to attack the excellent work that has already been done.


Parkydr posted on Mar 1 2007 at 02:53 PM said:
I wouldn't expect Delay() to idle the CPU; the best you can expect is for your process to be descheduled - Linux is running other processes as well as yours.
That is of course a very valid point. Unless an application kills the system, other things may well be happening. (And indeed the wise programmer may take advantage of that.) It would be hoped though, that if nothing much is expected to be going on then other activities wouldn't interfere to any great degree.

I assume (hope?) that the Linux kernel is not as offensive as MS Windows in regularly stealing the CPU for significant periods. :eek:
 
Linux can and does put the processor into an idle mode when nothing is wanting power, but there are some things you can do yourself depending on your application's target:

Disable the backlight - for music applications where you rarely see the screen, for example (saves 40mA)
Put the NetChip 2272 in sleep mode - requires no USB lead connected, saves another 40mA.
Decrease the CPU clock (up to 90mA saving)
Let Linux deschedule your app, resulting in a WFI (Wait For Interrupt) on the ARM9 core (up to 90mA saving)

There's a 140mA "standard" current draw by the looks of it too. By "standard" I mean it's always there, but I've no idea where it's actually being used.

However, providing 3.7V via the battery compartment instead of 2.4V gives a noticeable improvement. I assume this is because there is a DCDC convertor in the gp2x to get 3.3V out of 2.4V or whatever. This is bypassed when you use the DC-In jack, which explains the random crashes if you try and provide battery voltage via that socket.
 
Squidge posted on Mar 1 2007 at 06:17 PM said:
Linux can and does put the processor into an idle mode when nothing is wanting power,
....
Decrease the CPU clock (up to 90mA saving)
Let Linux deschedule your app, resulting in a WFI (Wait For Interrupt) on the ARM9 core (up to 90mA saving)
Hm, perhaps you have answered my original points indirectly here! I'm no Linux or ARM guru, but from those three statements it seems the CPU itself is using around 90mA, so any savings from idle-waiting etc. are unlikely to amount to more than that.

Two cores at 90mA, 40mA backlight, 40mA USB chip, 140mA other stuff... (frantically adds up in head...) 400mA accounted for, which must be the better part of the total power consumption? (5-6 hours with decent batteries.)

So for many intensive apps, taking small fractions of that 90mA away isn't going to perform any miracles. :( Menus and stuff might still benefit, though.

However, providing 3.7V via the battery compartment instead of 2.4V gives a noticeable improvement. I assume this is because there is a DCDC convertor in the gp2x to get 3.3V out of 2.4V or whatever.
And that is very interesting. Normally one would expect the current to remain the same or go up, so it does imply the DC-DC converter. For once, adding batteries may actually increase battery life! There are a number of interesting possibilities here, depending on the actual implementation.
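
To put rough numbers on it (purely illustrative - say the internals need about 400mA at 3.3V, roughly 1.3W, and ignoring converter losses): fed from 2.4V the batteries would have to supply around 1.3W / 2.4V ≈ 540mA, but from 3.7V only about 1.3W / 3.7V ≈ 350mA, so the higher input voltage means less current drawn from the cells and less wasted in the step-up.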

Hmmm!...

Thank you. Good post. B)
 
Squidge posted on Mar 1 2007 at 06:17 PM said:
However, providing 3.7V via the battery compartment instead of 2.4V gives a noticeable improvement. I assume this is because there is a DCDC convertor in the gp2x to get 3.3V out of 2.4V or whatever. This is bypassed when you use the DC-In jack, which explains the random crashes if you try and provide battery voltage via that socket.

Sorry about going off topic here, but does this mean you could power the device with a standard Li-ion battery (those easy-to-get cylindrical cells could work) if it is connected to the socket instead of the battery compartment? Li-ions are 3.7V - would that work, or is that voltage too high? Have you ever tested something like that?
 
2nd cpu is disabled by default, so it's 180mA for the first core. When the second is enabled, it doesn't use much more power at all.

DaveC: Yes, 3.7V seems to be the "best" voltage for the gp2x - less power is wasted trying to increase the voltage, so more of it can be used for the rest of the 2x.
 
Squidge posted on Mar 1 2007 at 09:18 PM said:
2nd cpu is disabled by default, so it's 180mA for the first core. When the second is enabled, it doesn't use much more power at all.

Thanks Squidge, that is very interesting to know! :)
 
Squidge posted on Mar 1 2007 at 09:18 PM said:
2nd cpu is disabled by default, so it's 180mA for the first core. When the second is enabled, it doesn't use much more power at all.

DaveC: Yes, 3.7V seems to be the "best" voltage for the gp2x - less power is wasted trying to increase the voltage, so more of it can be used for the rest of the 2x.


Hmmm. How do you connect 2 batteries in parallel but keep them from discharging into each other? Some sort of blocking diode? How do you connect?
 
DaveC posted on Mar 2 2007 at 03:37 AM said:
Hmmm. How do you connect 2 batteries in parallel but keep them from discharging into each other? Some sort of blocking diode? How do you connect?
You don't! You may damage your batteries. At best, you could use some ballast resistors or diodes, but this would be wasteful and still poor practice.

Since Squidge points out that there's a constant-power regulator (DC-DC switching converter), the best solution here is more volts. That means batteries in series, not parallel.

When I get my GP2X I'll have to have the lid off to determine what the acceptable max voltage is, and what consequences it may have. Unless GPH have released schematics somewhere?

EDIT: The thunder-pack currently on the front page seems to be taking advantage of this effect.
 
Laser posted on Mar 2 2007 at 09:08 AM said:
DaveC posted on Mar 2 2007 at 03:37 AM said:
Hmmm. How do you connect 2 batteries in parallel but keep them from discharging into each other? Some sort of blocking diode? How do you connect?
the best solution here is more volts. That means batteries in series, not parallel.

Well not really. I wanted to make a pack with 2 of these in parallel:

http://www.all-battery.com/index.asp?PageA...amp;ProdID=1600
 
DaveC posted on Mar 2 2007 at 04:37 AM said:
Squidge posted on Mar 1 2007 at 09:18 PM said:
2nd cpu is disabled by default, so it's 180mA for the first core. When the second is enabled, it doesn't use much more power at all.

DaveC: Yes, 3.7V seems to be the "best" voltage for the gp2x - less power is wasted trying to increase the voltage, so more of it can be used for the rest of the 2x.


Hmmm. How do you connect 2 batteries in parallel but keep them from discharging into each other? Some sort of blocking diode? How do you connect?

Why would batteries in parallel discharge each other? Parallel means connecting the + with the + and the - with the - pole. No current flows in that configuration. But be careful with those Lithium-Ion batts, they need a special discharge chip or something like that.

My idea (already explained in detail in the thunderpack thread) would be to use 6 AAAs (3 in a row in series, then those packs in parallel), giving you 2400mAh @ 3.6V, and they would fit into the (slightly modified on the inside) battery compartment of the GP2X.
I have calculated that those should last 7-8 hours on one charge, and AAs could probably still be used.
 
If you just want to multiply the capacity and not the voltage, then yeah, just connect them together in parallel, i.e. +ve to +ve, -ve to -ve. Make sure they are all charged first though. If some are charged more than others, then the batteries could eventually be damaged, as they would be taken past their "recharge" point.
 