GP2X Demo Development


Dzz posted on May 28 2006 at 09:49 PM said:
Squidge posted on May 28 2006 at 02:39 PM said:
value = MSP_GPIOAPINLVL;
Yeah, that makes it obvious just by looking at it what's going on! :lol:

Well, it makes far more sense to me than what would otherwise be a seemingly random number :)

Dzz posted on May 28 2006 at 09:49 PM said:
That HH thing sounds pretty nifty. Now that the gp2x boots pretty fast I might be interested in working in that environment. Can you do file I/O to the SD card from an HH app?

No. The file system and sound are still missing.
 
Well, I am having bizarre results from my experiments with the locking. Unfortunately I will be away from the forum for the next week, but will look into the issue further on my return.

Until then, I advise everybody NOT to use the mutual exclusion mechanism from the demos; it is not stable (although the code itself is correct, something else is going on that I do not understand yet).
 
Dzz posted on May 29 2006 at 01:46 AM said:
Well, I am having bizarre results from my experiments with the locking. Unfortunately I will be away from the forum for the next week, but will look into the issue further on my return.

Until then, I advise everybody NOT to use the mutual exclusion mechanism from the demos; it is not stable (although the code itself is correct, something else is going on that I do not understand yet).

The semaphore locking is working fine for me. It was at one point not working at all, but that was down to my own bugs (not having a stack in IRQ mode, for one). Doh! :rolleyes:

Also, mine is in uncached memory; at the mo I'm running 4 meg cached, 12 meg uncached on the 940. This will change when I start caching everything bar the command buffer and 'important' vars. I'm staying away from the upper 16 meg minefield (only a little bit of it is dangerous) till I know my code is correct. ;)
 
As an aside, I linked the demo4 code using uClibc, after fixing all of the references to refer to POSIX functions, and the unoptimized binary only went from 9k to 29k.

20K might not be too much of a burden for most people if it brings along the convenience of not having to implement every syscall you need to access.

If you're trying to fit under 64k, it might be a problem, but I know I don't plan on keeping under 64k.
 
This thread has been great! Please add more when you get time. It's all good.
 
This thread has been great! Please add more when you get time. It's all good.
Glad it has been helpful.

I have spent some time trying to unravel the mutex mystery but so far I still don't have an explanation.

In the process of this study I have come up with some interesting performance numbers. These are crudely measured, but it seems as if the various costs are roughly:

Cache miss (on a read): 25 cycles
Reading a long from uncached memory: 40 cycles
Writing a long to uncached memory: 15 cycles

Those numbers are all for the 920.

So far the overhead for a cache miss on a write to uncached/unbuffered memory seems to be quite small, smaller than makes sense to me. Since the root of the synchronization issue seems to be that the write of the 0 to reset the lock is not being propagated correctly, I wonder if this is somehow related.

If the "uncached" memory on the 920 is buffered but not cached, that could explain the behavior. I will attempt to determine this next.
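For anyone who wants to play with this, a crude harness for this kind of measurement could look roughly like the sketch below (this is not the actual test code; the 0x03000000 physical offset for an uncached region and the 200 MHz clock figure are assumptions for illustration only, and the loop overhead is included in the result):

#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <unistd.h>

#define LOOPS    1000000
#define CPU_HZ   200000000.0      /* assumed 920 clock */
#define PHYS_OFF 0x03000000       /* assumed upper-memory (uncached) offset */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *unc = mmap(0, 4096, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, PHYS_OFF);
    if (unc == MAP_FAILED) { perror("mmap"); return 1; }

    struct timeval t0, t1;
    uint32_t sink = 0;
    int i;

    gettimeofday(&t0, NULL);
    for (i = 0; i < LOOPS; i++)
        sink += unc[0];                       /* one uncached long read per pass */
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("~%.1f cycles per uncached read (loop overhead included, sink=%u)\n",
           secs * CPU_HZ / LOOPS, (unsigned)sink);

    munmap((void *)unc, 4096);
    close(fd);
    return 0;
}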
 
I can confirm that when you mmap an address >32MB, Linux clears both the "cacheable" and "bufferable" bits in the MMU table descriptors.
 
One thing I have noticed is that when you mmap an area, Linux sets up the range you mmap as "readonly" and then modifies it later on when you access it. I've no idea why it does this (perhaps some leftovers from an allocate-on-write or similar algorithm?), but if you mmap, and then do a memset on the range, everything is guaranteed to be set up properly from then onwards.

Have you tried that?
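In code, that suggestion amounts to roughly the sketch below (the physical offset and length are placeholders for wherever you keep your shared data):

#include <fcntl.h>
#include <string.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a chunk of upper memory through /dev/mem and touch every page
   once, so the kernel finishes setting up the page tables before the
   region is used for anything timing- or lock-sensitive. */
void *map_and_touch(off_t phys_offset, size_t length)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0)
        return NULL;

    void *p = mmap(0, length, PROT_READ | PROT_WRITE, MAP_SHARED,
                   fd, phys_offset);
    close(fd);                    /* the mapping stays valid after close */
    if (p == MAP_FAILED)
        return NULL;

    memset(p, 0, length);         /* force the whole range to be faulted in now */
    return p;
}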
 
One thing I have noticed is that when you mmap an area, Linux sets up the range you mmap as "readonly" and then modifies it later on when you access it. I've no idea why it does this (perhaps some leftovers from an allocate-on-write or similar algorithm?), but if you mmap, and then do a memset on the range, everything is guaranteed to be set up properly from then onwards.

Have you tried that?
In this case, it's only a single 4-byte value holding the lock. That value is initialized to zero at the start and goes through many lock/unlock cycles before the problem pops up, which is only when the lock is actually needed (that is, when one processor wants access while the lock is held by the other). It looks to me as if the SWP instruction on the 920 returns "1" for as much as 50-100 cycles after the "0" was written from the 940. The mechanism relies on getting that zero, which is never delivered -- I assume because the SWP itself stuck a 1 in the lock.

I have not yet designed a reciprocal test where the 920 is the one holding the lock, but it's on my list of things to try.

It is possible that I have not properly set up the memory regions and maybe the lock variable is buffered on the 940 side. That would explain the behavior I think (I have to study how the buffered memory works on the 940), and it's probably the next thing I'll look at.
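For context, the mechanism in question is essentially an SWP-based spinlock. A minimal sketch of the idea (not the exact demo code) is below; it assumes the lock word lives in memory that is uncached and unbuffered on both cores, which is exactly the property being questioned here:

/* Spin until we atomically swap a 1 into the lock and see that the old
   value was 0, i.e. nobody else held it. */
static inline void lock_acquire(volatile unsigned int *lock)
{
    unsigned int old;
    do {
        __asm__ __volatile__("swp %0, %1, [%2]"
                             : "=&r"(old)
                             : "r"(1), "r"(lock)
                             : "memory");
    } while (old != 0);
}

/* Release by writing 0 back; the write must actually reach memory for
   the other core to ever see the lock as free. */
static inline void lock_release(volatile unsigned int *lock)
{
    *lock = 0;
}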
 
Squidge posted on Apr 11 2006 at 06:14 PM said:
@Squidge: As we aim for a 64K demo here, we will need malloc to get mem for precalculated LUTs (I am thinking of a sine table) and even iterative generation of gfx, as 64K is very little...

Someone obviously didn't research what the system call 'brk' does :)

So did you get malloc working? With newlib it is fairly simple:

syscalls.c:

unsigned long heap_end = 0x80000;   /* hardcoded heap start address */
unsigned long prev_heap_end;

...

caddr_t
_sbrk (int incr)
{
  prev_heap_end = heap_end;
  heap_end += incr;                 /* no check for collision with the stack */
  return (caddr_t) prev_heap_end;
}

...

At least a quick and dirty one with a hardcoded starting address.
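A slightly less quick-and-dirty variant lets the linker decide where the heap starts. This sketch assumes the usual newlib-style linker script that defines an end symbol after .bss:

#include <sys/types.h>

extern char end;                  /* defined by the linker script, after .bss */
static char *heap_end;

caddr_t _sbrk(int incr)
{
    char *prev;

    if (heap_end == 0)
        heap_end = &end;          /* first call: start the heap after .bss */

    prev = heap_end;
    heap_end += incr;             /* still no check against the stack */
    return (caddr_t)prev;
}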
 
I have been working exclusively with the 940T the last few days, and am currently standing on the shoulders of all the major players on this website (meaning, with respect, that they did the dirty work and left info about it lying around so that we can pick it up and learn from it). My preference is a separate gcc build, clean, made for non-Linux embedded stuff like the 940T. Unfortunately I have not been able to build on Windows for a while now; I need to figure out what the latest tricks are (generally you have to find that magic option in the makefile and delete it, or remove a file here or there), so I do these on Linux (note that VMware Player is a good way to get Linux on Windows if you are not interested in dual booting or having two computers side by side). http://www.dwelch.com/gp2x has the files.

There should also be a generic loader based wholly on rlyeh's hello world example; I just added a call to the file-based loader in main (and re-wrote his file-based loader to not assume a hardcoded binary size). I hope rip.c is included in there somewhere; it reads the ELF file and extracts the real code, quite useful for dealing with ELFs. There is also a trick I struggled with: using a memory map to force the vector table to be the first thing linked.

I use newlib because the cross-compiler instructions I borrowed from way back when did. I will have to look at this uClibc or whatever the spelling is. With newlib I have malloc and printf working; they were pretty easy to get going, and now I just carry the code around. Anyway, maybe I should try another library and figure out printf and malloc all over again (even though IMO you should never malloc in an embedded program). Currently I have the 940T set up at 0x03000000 so I/O is at 0xBD000000; the HH header files make that a breeze.

What was the term I just saw? "I don't know how buffered memory works on the 940." What is buffered memory? Maybe I need to re-read the thread (but I don't have days to spare). The cache and write buffer on the 940 are the same as on the 920, only smaller. If you happened to be talking about the write buffer, I assume it is just a small FIFO, or you can think of it like a cache. Your code writes to memory, but the write goes into the FIFO fast and returns control, so that your code can execute the next instruction and the write buffer can wait for an opportunity when the processor is not talking to memory to write out your data. Probably a good idea to have the cache on with the write buffer. I assume an STM with a few registers followed by an LDRB from the location where the last of those registers' data will land will return the value before the write and not after (a synchronization problem). Will have to test that; with a cache I would assume it writes through the cache to the write buffer and prevents these sorts of problems. I could be talking out my ass at this point, sounds like a project to research and test.

I saw some questions/talk about this in this thread. As far as worrying about the MMU on the 920 side affecting the 940: it doesn't; the MMU is in the 920 core and the MPU in the 940 core, and the same goes for the caches and write buffers. So caching some region controlled by some mmap on the 920 has no effect on the 940 in that region; there is no way that it can. What you do have to worry about is timing. If you are using shared memory to transfer from one to the other, you have to wait for the cache to flush out. Probably a good idea to disable the cache and write buffer for any region you want to share between processors. The memory bus is the long pole in the tent anyway, so cache or no cache, write buffer or no write buffer, it still should take the same amount of time (sounds like another research project). If you have the cache off and the write buffer on to smooth out performance, you still have to wait for the write buffer to drain the last few items out (as many as 16 words) before you can signal the 940 to take the data. And I think there is a command for that.

My thinking on the 940 for aiding the 920 is to keep it I/O-bound, with a small bit of code that hopefully runs completely out of cache. Have it only talk to peripheral registers (like page flipping the video). The MMSP2 data sheet states that it was designed to have four of its nine bus masters talking in parallel. I assume this means the video hardware can be pulling pixel data out of memory while the 940 is polling the keyboard and/or video registers, while the 920 is running and reading and writing from another segment of memory, and none of them causing wait states and slowing the others down. Here again, another research project...

Anyway, there were a lot of 940 questions in this thread; hopefully by building on the work of rlyeh, squidge, dzz, rob and others I have made something more useful than not. BTW, it should be real easy to throw out my video code and replace it with the HH libraries.
If anyone knows the trick for getting gcc4 to build on MinGW I would be eternally grateful.
 
dwelch posted on Jul 15 2006 at 02:40 AM said:
good idea to have the cache on with the write buffer. I assume an STM with a few registers followed by an LDRB from the location where the last of those registers' data will land will return the value before the write and not after (a synchronization problem). Will have to test that; with a cache I would assume it writes through the cache to the write buffer and prevents these sorts of problems. I could be talking out my ass at this point, sounds like a project to research and test.


[00069C] 0xE92D03F0 STMDB R13!,{R4,R5,R6,R7,R8,R9}
[0006A0] 0xE3A00A09 MOV R0,#0x00009000 ;@(0x09 RR 20)
[0006A4] 0xE59F1024 LDR R1,[R15,#+0x024] ;@(000006D0)
[0006A8] 0xE59F2024 LDR R2,[R15,#+0x024] ;@(000006D4)
[0006AC] 0xE59F3024 LDR R3,[R15,#+0x024] ;@(000006D8)
[0006B0] 0xE59F4024 LDR R4,[R15,#+0x024] ;@(000006DC)
[0006B4] 0xE59F5024 LDR R5,[R15,#+0x024] ;@(000006E0)
[0006B8] 0xE880001E STMIA R0,{R1,R2,R3,R4}
[0006BC] 0xE5950000 LDR R0,[R5,#+0x000]
[0006C0] 0xE8BD03F0 LDMIA R13!,{R4,R5,R6,R7,R8,R9}
[0006C4] 0xE1A0F00E MOV R15,R14,LSL#0
...
[0006D0] 0x12345678 EORNES R5,R4,#0x07800000 ;@(0x78 RR 12)
[0006D4] 0x56789ABC LDRPLBT R9,[R8]-R12, LSR #21!
[0006D8] 0xABCDEF12 BLGE S0337C328
[0006DC] 0xAB1234CD BLGE S0048DA18
[0006E0] 0x0000900C ANDEQ R9,R0,R12,LSL#0

There are no synchronization problems related to the write buffer from a single-side perspective. After the quick test above, a review of the TRM says that a load from noncacheable memory will drain the write buffer. So I don't know why you would want to set up a register to do an MCR p15, 0, Rd, c7, c10, 4 when a simple LDR will do the same job in one instruction.

0x69C STMDB on the stack would have filled up 6/8ths of the write buffer.
0x6A0 the MOV R0 is a freebie; it executes in the shadow of the write buffer, assuming it had already been prefetched and was decoding while the STMDB was executing (the cache is off for this test). With the write buffer off this instruction would have had to wait. Although if there was an instruction fetch during this execution cycle, then that instruction fetch would have drained the write buffer instead of the next instruction.
0x6A4 LDR R1 would have drained the write buffer (if an instruction fetch had not already)
0x6B8 STMIA fills up 1/2 of the write buffer
0x6BC LDR would have drained it before reading so that the value that had been in R4 is written to 0x900C before R0 tries to read it.

Note the above test used only one memory region for data, code, and the test area. The cache was off and the write buffer on, which is why instruction fetches could also drain the write buffer. To do a real test I should have put the memory under test in its own region separate from the program. It doesn't matter now because I RTFM'd, and the answer is right there.

Now, saying that, there is nothing (in the 940 or 920) to prevent synchronization problems between the cores. Say you were to do a good-sized STMIA (on the 940), then write to one of the dual-CPU registers to cause an interrupt on the 920. The race is on. With the number of things that have to happen I can't imagine that the 920 would win: it gets an interrupt, has to finish the current instruction/activity, read the vector table, execute that instruction, and then no doubt the first thing in the interrupt handler is an STMDB with a bunch of registers, which would take as long or longer than the write buffer on the 940. You could try, but I doubt you could beat the 940's write buffer (using an interrupt for dual-CPU communication). Going the other direction you might have a better chance: if you filled up the write buffer on the 920 (which is twice the size of the 940's) then interrupted the 940, which may have nothing better to do than wait for interrupts, and may be written as an event/interrupt-driven application (in the sense that it doesn't necessarily have to preserve registers), you might get lucky and have the 940 read a location before the 920 write buffer writes it. I would recommend that the 920 does a read in that region before writing to the dual-CPU register, to flush the write buffer. It is extremely unlikely that you will have problems, but possible.

So basically, having the write buffer on for shared memory is safe and actually a good idea for performance reasons. DO NOT have it cached though; that will definitely kill you. You would have to keep flipping the cache on and off or invalidating parts of the cache, and it wouldn't gain you any performance improvement. Instead, do a fast copy (four-register LDM/STMs instead of the ldrb/strb a memcpy probably does) to a cached memory region and operate on it from there. It is going to cost you the same amount of time as trying to invalidate the cache and then re-fill it as you read the region.

So in a nutshell, for shared memory between the 920 and 940 (IMO; a rough code sketch follows at the end of this post):
1) enable the write buffer on both sides
2) do not enable the data cache on either side
3) use LDM/STM pairs, probably 4-register, to fast-copy from a cached region into the shared memory just before signalling the other side
4) read at least one address in the shared region before signalling the other side
5) when you receive a signal from the other side to read the shared memory, use LDM/STM pairs to quickly move the new data into a data-cached memory region
6) operate on the copy of the shared buffer in your application

I am not sure you can do much better than that as a balance of performance and safety, but now that I have written this it is open for all the negative comments you can think of...flame on...
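To make steps 3 and 4 above concrete, a sketch of the sending side could look like the code below. The names and the signalling register are placeholders, not real HH or MMSP2 definitions, and the compiler may turn each group of four assignments into an LDM/STM pair at -O2 (or you can drop to assembler to guarantee it):

#include <stdint.h>

/* Step 3: copy in 4-word bursts from a cached working buffer into the
   uncached, write-buffered shared region. */
static void burst_copy(volatile uint32_t *dst, const uint32_t *src, int words)
{
    int i;
    for (i = 0; i + 3 < words; i += 4) {
        dst[i + 0] = src[i + 0];
        dst[i + 1] = src[i + 1];
        dst[i + 2] = src[i + 2];
        dst[i + 3] = src[i + 3];
    }
    for (; i < words; i++)        /* tail, if the count is not a multiple of 4 */
        dst[i] = src[i];
}

/* shared: uncached + write-buffered region visible to both cores.
   signal_reg: whatever dual-CPU/communication register you use.     */
void hand_off_to_940(volatile uint32_t *shared, volatile uint32_t *signal_reg,
                     const uint32_t *work, int words)
{
    burst_copy(shared, work, words);   /* step 3: fast copy into shared memory */
    (void)shared[0];                   /* step 4: a read drains the write buffer */
    *signal_reg = 1;                   /* then signal the other side */
}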
 
Thanks for this research, it is very interesting, even if I don't follow exactly what your example code is attempting to test.

Intuitively it seems that using the dualcpu interrupts to implement safe interprocessor communication is way overkill, both in terms of implementation difficulty (how do you replace the 920 interrupt handler in a nice way on the 920 running linux?) and in terms of overhead. I had thought this was exactly what the SWP instruction was invented for, but I never could figure out why my attempted implementation based on that was unreliable. I like writing games more than dealing with this type of issue so I gave up and moved on.
 
Dzz posted on Jul 15 2006 at 11:27 AM said:
Thanks for this research, it is very interesting, even if I don't follow exactly what your example code is attempting to test.

Intuitively it seems that using the dualcpu interrupts to implement safe interprocessor communication is way overkill, both in terms of implementation difficulty (how do you replace the 920 interrupt handler in a nice way on the 920 running linux?) and in terms of overhead. I had thought this was exactly what the SWP instruction was invented for, but I never could figure out why my attempted implementation based on that was unreliable. I like writing games more than dealing with this type of issue so I gave up and moved on.

Ahh, see I like doing this stuff more than writing games.

One thing you can do is shoot Linux in the head, be done with it, and do what you want. Another is to design your system to be one-directional: only shoot data to the 940 and never back. Then only the 940 would need an interrupt handler, and not the 920. Have the 920 do some of the image processing, let's say, but not all of it. Let the 940 finish the processing and display it, allowing the 920 more time to do whatever, say emulate. For example, if you wanted to emulate the GBA on the gp2x (which I see a number of threads on), the 940 could do the tile engine; passing data from the 920 to the 940 is not far removed from what the GBA does with the hardware video, but here the 940 could pick it up and finish the job, and not have to bother the 920. Even something as simple as the task of page flipping during a so-called refresh would improve the performance of a 920 app that might normally have to poll. Especially if the 920 wanted to poll vs. interrupt.

Another example would be math-intensive real-time video decompression. Do part of the job on one side, toss it over the wall to the other processor, and let it finish the work and display it. Wait, I think this platform has already used both processors for something like that <g>

I would think that if both sides were polling to share data back and forth, that would force both sides to dwell on the same memory space or I/O space (memory mapped), and this would create a conflict in the same area and slow both sides down (to slower than the speed one could manage by itself). What I really need to understand more is how this chip is supposed to allow four bus masters at once (assuming that is the term they used), and then take advantage of that. Otherwise two processors are no faster than one... But we know that cannot be true, otherwise MagicEyes flat out lied in their data sheet. If for no other reason than that each core has a cache, there is no excuse for two processors not being faster than one; it's a system engineering problem, not a software engineering problem.

Here is the quote "MP2520F can handle four parallel data access operations through allowing four of the nine bus masters to access one of the four internal data busses independently and concurrently."

Anything you can do to keep the 940 I/O-bound and running from its cache is a freebie to the 920: not only will it not slow the 920 down in any way, it will take some tasks away from the 920 so it can do more.
 
Thanks for these tutorials DZZ, totally having to re-learn ASM (thought I'd done enough, but evidently - NOT!).

Whilst I was trying to understand what's going on in all the code, I googled some (read: EVERY) command you used in ASM, in some vain attempt to figure out what's going on. Whilst doing that, I stumbled across this:
http://www.arm.com/support/faqip/14676.html
It says that ARM are trying to "discourage the use of" MOV PC, LR (when returning from a function) in favour of BX LR.

While I don't fully understand what's going on with the differences, I thought it might be of use/interest to some of you.
 
"BX LR" is indeed the best instruction to use for future-proofing, but the current GP2X only uses V4T, and I'd be very surprised if the next gp2x is a V7 core (Basically, BX can be decoded more efficiently than a MOV PC, LR, so it would be better on the pipeline...)
 