Caching And Array Value Copying Slowness


A_SN

As some of you know, I'm working on a rotation function, and having just tested it on my GP2X, I noticed that the way I do things is really slow. Let me explain.

Basically, my rotation function is a loop containing two parts: the first calculates the indexes to read from and write to, and the second part does the actual read/write.

The problem is that the second part, the read/write, is too slow. Normally it consists of: o[index_o]=i[index_i]; however, with foo = i[index_i]; instead, it takes exactly 10 times less time, and with o[index_o] = bar; it takes 6 times less time, so the two combined (while not being run at the same time) would still be almost 4 times faster than the way I do it now.

So I'm guessing it has to do with caching, but the question is: how do I get the content of i[index_i] into o[index_o] faster than that?

PS : On a side note, for those who care, my rotation function rotates a 255x255 24-bit color image at 15.7 FPS, so far.
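
For reference, the three variants being compared, shown in isolation (an illustrative sketch, not the original code; foo, bar and the indexes are placeholders, and each variant was timed in a separate build):

Code:
#include <stdint.h>

/* illustration only: the three variants under comparison */
static void variants(uint8_t *o, const uint8_t *i,
                     int32_t index_o, int32_t index_i)
{
	uint8_t foo, bar = 0;

	o[index_o] = i[index_i];   /* combined read+write: the slow case */
	foo = i[index_i];          /* read only: about 10x faster        */
	o[index_o] = bar;          /* write only: about 6x faster        */
	(void)foo;                 /* silence the unused-variable warning */
}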
 
I'm not sure how rotation functions work, so don't take my word for it; this might not even be related to your problem in any way. I'm just trying to help with some basics of OpenGL I studied while I was still in uni:

Whenever I used to rotate an object and felt it was kinda slow, I increased the rotation step: instead of making the loop rotate the object half a degree per iteration, I increased that figure to two degrees, for example. This decreases the total number of iterations and thus speeds up the rotation, but it won't look as smooth as it would with a smaller step.

Hope that helped.
 
sehs33 posted on Sep 13 2006 at 07:37 PM said:
I'm not sure how rotation functions work, so don't take my word for it; this might not even be related to your problem in any way. I'm just trying to help with some basics of OpenGL I studied while I was still in uni:

Whenever I used to rotate an object and felt it was kinda slow, I increased the rotation step: instead of making the loop rotate the object half a degree per iteration, I increased that figure to two degrees, for example. This decreases the total number of iterations and thus speeds up the rotation, but it won't look as smooth as it would with a smaller step.

Hope that helped.

Oh, don't worry about that, I can skip as many degrees as needed (I say skip because my degrees are indexed in a lookup table), that's not a problem. I'll just try to squeeze as much performance as I can out of my thing, and it appears that the one line where all the power goes is actually pretty much unrelated to rotation; we can see my problem as a pure array-to-array value copying problem.
 
Are you just copying one array to another? If so, use pointers rather than array indexes.
Can you post up the entire code section so I can see what you are trying to do in general?
 
yaustar posted on Sep 13 2006 at 08:50 PM said:
Are you just copying one array to another? If so, use pointers rather then array indexes.
Can you post up the entire code section so I can see what you are trying to do in general?

It's not a plain copy of one array to another; the two indexes don't have a lot in common, that's the problem.

I wanted to avoid having to paste the entire function because I wouldn't like this to drift off topic, you know. Disclaimer: there's nothing to optimize in the code I'm about to paste besides the line this whole topic is about.

Code:
void rotation(uint8_t *i, uint8_t *o, uint16_t diameter, rotentry *rot, uint16_t rot_s, uint16_t ia, uint16_t rot_a, uint16_t rot_r, uint8_t i_power)
{
	int32_t iy_o, iyx_o=-4, i_index, bar, foo;
	uint16_t ix, iy, rot_x, rot_y, radius=diameter>>1;
	int32_t offset_x, prev_offset_x, offset_ye, offset_yw, ia_mod=(ia%(rot_a-1))<<(rot_r<<1), ia_mod2=(rot_a-(ia%(rot_a-1))-1)<<(rot_r<<1);
	int8_t ic, step=1<<rot_s, quarter=ia/(rot_a-1);

	foo = (diameter>>rot_s)*(diameter>>rot_s)<<(rot_s-1)<<(rot_s-1);

	for (iy=0; iy<diameter; iy+=step)	//TODO incrementations optimizations
	{
		bar = (iy - radius)*(iy - radius );
		if (iy>=radius)		//South
		{
			offset_ye = ia_mod + (((iy - radius)>>rot_s) << rot_r);
			offset_yw = ia_mod2 + (((iy - radius)>>rot_s) << rot_r);
		}
		else			//North
		{
			offset_ye = ia_mod2 + (((-iy + radius)>>rot_s) << rot_r);
			offset_yw = ia_mod + (((-iy + radius)>>rot_s) << rot_r);
		}

		for (ix=0; ix<diameter; ix+=step)
		{
			if (ix!=radius+1 && iy!=radius+1)
				iyx_o+=4;
			if ((ix - radius)*(ix - radius) + bar <= foo)
			{
				if (iy>=radius)		//South
				{
					if (ix>=radius)		//East
					{
						offset_x = offset_ye + ((ix - radius)>>rot_s);
						rot_y = rot[offset_x].y + radius  - 1;
						rot_x = rot[offset_x].x + radius  - 1;
						switch (quarter)
						{
							case 0:
								i_index = ((rot_y<<i_power) + rot_x)<<2;
								break;
							case 1:
								i_index = (((diameter - rot_x - step)<<i_power) + rot_y)<<2;
								break;
							case 2:
								i_index = (((diameter - rot_y - step)<<i_power) + diameter - rot_x - step)<<2;
								break;
							case 3:
								i_index = ((rot_x<<i_power) + (diameter - rot_y - step))<<2;
								break;
						}
					}
					else		//West
					{
						offset_x = offset_yw + ((-ix + radius)>>rot_s);
						rot_y = rot[offset_x].y + radius  - 1;
						rot_x = rot[offset_x].x + radius  - 1;
						switch (quarter)
						{
							case 0:
								i_index = ((rot_x<<i_power) + rot_y)<<2;
								break;
							case 1:
								i_index = (((diameter - rot_y - step)<<i_power) + rot_x)<<2;
								break;
							case 2:
								i_index = (((diameter - rot_x - step)<<i_power) + (diameter - rot_y - step))<<2;
								break;
							case 3:
								i_index = ((rot_y<<i_power) + (diameter - rot_x - step))<<2;
								break;
						}
					}
				}
				else		//North
				{
					if (ix>=radius)		//East
					{
						offset_x = offset_ye + ((ix - radius)>>rot_s);
						rot_y = rot[offset_x].y + radius  + 1;
						rot_x = rot[offset_x].x + radius  + 1;
						switch (quarter)
						{
							case 0:
								i_index = (((diameter - rot_x)<<i_power) + (diameter - rot_y))<<2;
								break;
							case 1:
								i_index = (((rot_y - step)<<i_power) + (diameter - rot_x))<<2;
								break;
							case 2:
								i_index = (((rot_x - step)<<i_power) + (rot_y - step))<<2;
								break;
							case 3:
								i_index = (((diameter - rot_y)<<i_power) + (rot_x - step))<<2;
								break;
						}
					}
					else		//West
					{
						offset_x = offset_yw + ((-ix + radius)>>rot_s);
						rot_y = rot[offset_x].y + radius  + 1;
						rot_x = rot[offset_x].x + radius  + 1;
						switch (quarter)
						{
							case 0:
								i_index = (((diameter - rot_y)<<i_power) + (diameter - rot_x))<<2;
								break;
							case 1:
								i_index = (((rot_x - step)<<i_power) + (diameter - rot_y))<<2;
								break;
							case 2:
								i_index = (((rot_y - step)<<i_power) + (rot_x - step))<<2;
								break;
							case 3:
								i_index = (((diameter - rot_x)<<i_power) + (rot_y - step))<<2;
								break;
						}
					}
				}
				/*for (ic=0; ic<3; ic++)
					o[iyx_o + ic] = i[i_index + ic];	//THATS THE VERY LINE I WAS TALKING ABOUT*/
				((uint32_t *)o)[iyx_o>>2] = ((uint32_t *)i)[i_index>>2]; //that's what I'm using now
			}
			if (ix==diameter-step && iy!=radius+1)
				iyx_o+=4;
		}
	}
	for (ic=0; ic<3; ic++)
		o[(((((radius-1)>>1)*radius) + radius-1)<<2) + ic] = 0;
}

Edit: changed the line we're interested in to what I'm using now.
 
Okay, I don't think I am understanding the problem correctly.

Are you saying that this section of code is slow?
Code:
for (ic=0; ic<3; ic++)
	o[iyx_o + ic] = i[i_index + ic];	//THATS THE VERY LINE I WAS TALKING ABOUT

Try this (Warning: unchecked):
Code:
uint8_t * pO = &(o[iyx_o]);
uint8_t * pI = &(i[i_index]);

uint8_t * pIEnd = &(i[i_index + 3]);

while(pI != pIEnd)
{
	*pO++ = *pI++;
}

edit: Noticed an error in the code;
 
I hope this works for you A_SN. yaustar, your code rang many bells in my head; respect for your optimization B)
 
yaustar posted on Sep 13 2006 at 09:39 PM said:
Okay, I don't think I am understanding the problem correctly.

Are you saying that this section of code is slow?
Code:
for (ic=0; ic<3; ic++)
	o[iyx_o + ic] = i[i_index + ic];	//THATS THE VERY LINE I WAS TALKING ABOUT

Try this (Warning: unchecked):
Code:
uint8_t * pO = &(o[iyx_o]);
uint8_t * pI = &(i[i_index]);

uint8_t * pIEnd = &(i[i_index + 3]);

while(pI != pIEnd)
{
	*pO++ = *pI++;
}

edit: Noticed an error in the code;

For a loop of 3 I doubt that will make any significant difference; in fact, simply getting rid of the loop and performing the 3 assignments directly would be quicker!

Are you running the code in a debugging environment? If so, does it have array bounds checking switched on? That might explain why

foo = i[index_i] is faster than o[index_o] = i[index_i], as there are fewer array bounds to be checked in the first statement.
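
A sketch of the unrolled form suggested above, using the names from the posted function (untested on the GP2X):

Code:
#include <stdint.h>

/* unrolled 3-byte copy, no loop and no branch per byte */
static void copy_rgb_unrolled(uint8_t *o, const uint8_t *i,
                              int32_t iyx_o, int32_t i_index)
{
	o[iyx_o + 0] = i[i_index + 0];
	o[iyx_o + 1] = i[i_index + 1];
	o[iyx_o + 2] = i[i_index + 2];
}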
 
yaustar posted on Sep 13 2006 at 10:39 PM said:
Okay, I don't think I am understanding the problem correctly.

Are you saying that this section of code is slow?
Code:
for (ic=0; ic<3; ic++)
	o[iyx_o + ic] = i[i_index + ic];	//THATS THE VERY LINE I WAS TALKING ABOUT

Try this (Warning: unchecked):
Code:
uint8_t * pO = &(o[iyx_o]);
uint8_t * pI = &(i[i_index]);

uint8_t * pIEnd = &(i[i_index + 3]);

while(pI != pIEnd)
{
	*pO++ = *pI++;
}

edit: Noticed an error in the code;

Now that's why I didn't want to paste the code. If I understood what you posted correctly, what you tried optimizing was the for loop in my code; I left that detail out of my original post because I knew it was not what caused the performance hit. The performance hit is, to me, quite obviously due to something about caching/memory/the MMU not behaving the same as when I do foo = i[index_i]; and o[index_o] = bar; separately. The reason I say that is that the values of iyx_o and i_index are obviously cached, and the time it takes to add ic to them is insignificant.

I still think the performance hit is due to accesses going back and forth between distant places in memory. Which makes me think I could read all 3 bytes at once and write them all at once instead of doing it one by one, but that's not the solution I'm looking for. I'm looking for a solution where whatever happens when I do o[index_o] = i[index_i]; (and that doesn't happen when I do things separately) gets "fixed".

Upon further thought, I guess I could have an intermediate array to put a certain number of read values in and write them all at once, however I know too little about how caching and all that works on the GP2X to know how I should do it, and I'm pretty sure we can find a simpler way that would be easier to implement. Which makes me think I should probably ask Squidge directly.
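
A rough sketch of the intermediate-array idea (not code from the thread): gather a handful of scattered source pixels into a small contiguous buffer, then write the whole batch to consecutive destination words.

Code:
#include <stdint.h>
#include <string.h>

#define BATCH 8   /* arbitrary batch size for illustration */

static void copy_batch(uint32_t *o, const uint32_t *i,
                       const uint32_t src_idx[BATCH], uint32_t dst_start)
{
	uint32_t staging[BATCH];
	int n;

	for (n = 0; n < BATCH; n++)
		staging[n] = i[src_idx[n]];                 /* scattered reads */

	memcpy(&o[dst_start], staging, sizeof staging); /* one sequential write */
}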

spoyser posted on Sep 14 2006 at 12:18 AM said:
yaustar posted on Sep 13 2006 at 09:39 PM said:
Okay, I don't think I am understanding the problem correctly.

Are you saying that this section of code is slow?
Code:
for (ic=0; ic<3; ic++)
	o[iyx_o + ic] = i[i_index + ic];	//THATS THE VERY LINE I WAS TALKING ABOUT

Try this (Warning: unchecked):
Code:
uint8_t * pO = &(o[iyx_o]);
uint8_t * pI = &(i[i_index]);

uint8_t * pIEnd = &(i[i_index + 3]);

while(pI != pIEnd)
{
	*pO++ = *pI++;
}

edit: Noticed an error in the code;

For a loop of 3 I doubt that will make any significant difference; in fact, simply getting rid of the loop and performing the 3 assignments directly would be quicker!

Are you running the code in a debugging environment? If so, does it have array bounds checking switched on? That might explain why

foo = i[index_i] is faster than o[index_o] = i[index_i], as there are fewer array bounds to be checked in the first statement.


I'm not running it in a debugging environment. It's compiled statically with -O3 and simply run on the GP2X, just plain normally. I'd never heard of array bounds checking before; I wonder why it would do that, and mostly, if it did, it probably wouldn't have to do it every time. Well, I don't know, but I don't think that's the problem.
 
Try -O2 instead of -O3. -O3 can actually give worse results than -O2.

Have you tried hinting to the compiler that you want certain variables cached using the 'register' keyword? Also try declaring the variables closer to where they are actually used.

eg
Code:
// At the top where the variables are declared:
register int32_t iyx_o, i_index;
register int8_t ic;

for (ic=0; ic<3; ic++)
	o[iyx_o + ic] = i[i_index + ic];

Something you might want to look at: http://www.gamasutra.com/features/20060913/whitaker_02.shtml
 
yaustar posted on Sep 14 2006 at 03:35 AM said:
Try -O2 instead of -O3. -O3 can actually give worse results than -O2.

Have you tried hinting to the compiler that you want certain variables cached using the 'register' keyword? Also try declaring the variables closer to where they are actually used.

eg
Code:
// At the top where the variables are declared:
register int32_t iyx_o, i_index;
register int8_t ic;

for (ic=0; ic<3; ic++)
	o[iyx_o + ic] = i[i_index + ic];

Something you might want to look at: http://www.gamasutra.com/features/20060913/whitaker_02.shtml

Oh yeah I have tried everything from -O0 to -O4 and measured the performance. -O2 and -O3 give just the same performance.

I didn't even know about the register keyword, so thanks for telling me about that :) I'll have to try it, although I think we're still avoiding the core of the problem.

Oh, and thanks for the link; unfortunately it's not like they're giving me the solution to my problem straight away ;)

EDIT: I have tried using register for all 3 of these variables and surprisingly enough it makes things slower by 4.2%, and yes, that's significant (the timings I get from one full rotation to another vary by less than 0.14%, so when I average ten or twenty of them...)
 
This is going to be a pure educated guess. In foo = i[index_i]; index_i is cached and stays there because it can fit (32 bits total). However, with o[index_o] = i[index_i]; the combined bit size of index_o + index_i is larger than the cache (64 bits), so it suffers from a cache miss every loop. If you can, try using int16_t instead of int32_t for the variables and post up the results. I am curious myself to see if this works.

If this doesn't work, you may have to resort to bit masks and use one 32-bit variable to hold the two counters: one in the lower 16 bits and one in the upper.
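
A hypothetical sketch of that packed-counter fallback (illustration only, not code from the thread; it only works while both indexes stay below 65536):

Code:
#include <stdint.h>

/* keep both indexes in one 32-bit word: iyx_o in the high 16 bits,
   i_index in the low 16 bits */
static uint32_t pack_indexes(uint16_t iyx_o, uint16_t i_index)
{
	return ((uint32_t)iyx_o << 16) | i_index;
}

static void copy_one(uint8_t *o, const uint8_t *i, uint32_t packed)
{
	o[packed >> 16] = i[packed & 0xFFFFu];   /* unpack on use */
}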
 
For loops introduce a branch at the end of every iteration, which causes a prefetch flush and reload, so it'll be quicker if you just do each assignment manually.

Secondly, if it's still too slow, write the slow part in assembler, then you can use LDM (Load Multiple) and STM (Store Multiple), which means your entire 'for' loop can be done in two assembler instructions with zero branching. This means using 32-bit arrays instead of 8-bit, but don't assume 8-bit is faster - the compiler may be placing additional instructions to ensure the results are 8-bit.
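
A minimal sketch of the LDM/STM idea in C (an illustration under the assumption of word-aligned 32-bit data; the 4-word block size is arbitrary): copying a small struct of words usually lets gcc emit a load-multiple/store-multiple pair instead of a byte loop.

Code:
#include <stdint.h>

typedef struct { uint32_t w[4]; } block4;

static inline void copy_block4(block4 *dst, const block4 *src)
{
	*dst = *src;   /* typically compiles to one LDMIA plus one STMIA on ARM */
}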
 
I'm assuming this is arbitrary rotation and not simply 90 degrees (otherwise why not use the hardware rotate?), which I would expect to have poor cache locality. If that's the case then there probably isn't much you can do on that line.

You're copying 4 bytes... are they 32-bit aligned? Could you cast the pointers to 32-bit and copy once? (No wait, you're copying 3 bytes... RGB? Can you make it RGBx?)
 
rabidcow posted on Sep 15 2006 at 09:47 PM said:
I'm assuming this is arbitrary rotation and not simply 90 degrees (otherwise why not use the hardware rotate?), which I would expect to have poor cache locality. If that's the case then there probably isn't much you can do on that line.

You're copying 4 bytes... are they 32-bit aligned? Could you cast the pointers to 32-bit and copy once? (No wait, you're copying 3 bytes... RGB? Can you make it RGBx?)

Yup, this is arbitrary rotation. I'll fix the cache issue by calculating the index into the output array instead of the other way around; that way I won't have reads scattered all over and should hardly have any cache issues.

I am now casting it so it directly copies data by words, and I had already made it RGBx from the start, although I'll be using the Alpha channel now (since it's essentially free).

yaustar posted on Sep 14 2006 at 02:01 PM said:
This is going to be a pure educated guess. In foo = i[index_i]; index_i is cached and stays there because it can fit (32 bits total). However, with o[index_o] = i[index_i]; the combined bit size of index_o + index_i is larger than the cache (64 bits), so it suffers from a cache miss every loop. If you can, try using int16_t instead of int32_t for the variables and post up the results. I am curious myself to see if this works.

If this doesn't work, you may have to resort to bit masks and use one 32-bit variable to hold the two counters: one in the lower 16 bits and one in the upper.

I'm gonna stick to 32 bits now and use the last byte as the alpha channel. My original plan was to use RGB and sometimes rotate the alpha layer separately if needed, but now that I know what I know, I'm better off doing RGBA8888, especially since the alpha rotation is now essentially free, as I said before.

As for the cache misses, as I just replied to someone else, I'll read things in order and write them in whatever order they end up in, since apparently the order doesn't matter as much when it comes to writing.
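
A skeleton of that inversion (illustration only; rotated_index() is a hypothetical stand-in for the lookup-table math in the real function):

Code:
#include <stdint.h>

extern uint32_t rotated_index(uint16_t x, uint16_t y);   /* hypothetical */

static void rotate_read_sequential(uint32_t *o, const uint32_t *i,
                                   uint16_t diameter)
{
	uint16_t x, y;
	uint32_t src = 0;

	for (y = 0; y < diameter; y++)
		for (x = 0; x < diameter; x++, src++)
			o[rotated_index(x, y)] = i[src];  /* sequential read, scattered write */
}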
 
I was referring to the bit size of "index_i" and "index_o", NOT the arrays. Since they are both used several times in the loop, only one variable can fit in the cache (speculative), hence the cache miss per loop.
 
yaustar posted on Sep 16 2006 at 12:35 AM said:
I was referring to the bit size of "index_i" and "index_o", NOT the arrays. Since they are both used several times in the loop, only one variable can fit in the cache (speculative), hence the cache miss per loop.

Alright, that makes me wonder: is there anywhere where caching is actually useful to me, or is it only good at slowing me down with cache misses?
 
Please, somebody, JUST DISASSEMBLE THE CODE AND LOOK AT IT, don't guess.

Pointers are not always faster than indexes; it's all relative to the platform and the specific code.

-O2 is sometimes faster than -O3 on gcc and sometimes -O3 is faster than -O2.

Unless this is something new to gcc 4.x, -O4 doesn't exist; gcc uses a modulo three, so -O4 is the same as -O0, which is the same as no optimization, so you don't want that.
-O2 is the same as -O6, for example.
At least it was this way in the gcc 2.x and 3.x days.

I need to see how you came to the conclusion that
foo = i[index_i];
o[index_o] = bar;
Are different?

One is
ldr r2, [r0, r3, asl #2]
and the other:
str r3, [r1, r2, asl #2]
In the purest form.

Unfortunately gcc is not that great a compiler for performance, so you will see that
both the address of the array and the index are loaded from memory each time, and if the index changes it may or may not get loaded again and then stored, so:

foo = i[index_i];

ldr r0,[rx,#y] go get the address of the array
ldr r3,[rx,#y] go get the index
ldr r2, [r0, r3, asl #2] do the operation
str r2,[rx,#y] you changed foo so it has to get stored
assuming index_i changes:
ldr r3,[rx,#y] sometimes it will do this, sometimes not
add r3,r3,#4 this assumes a counter, which I assume you are not using, so there are more reads and writes to get the value of index_i to change
str r3,[rx,#y] and lastly index_i has to get stored at some point before moving on

Yes, some quick tests showed that gcc with -O2 or -O3 behaves very much as if volatile was used when it wasn't.

To see the generated code, disassemble with arm-thumb-elf-objdump -D myprog.elf, or even better (no disassembly needed), simply run arm-thumb-elf-gcc -O3 -S myprog.c.
You will get nowhere with assumptions about what the compiler is doing. It all matters: how long or short your functions are (too short can be bad, too long may or may not be bad), how many things are passed in, how many local variables, how many globals (global is usually good and local bad, which is probably the opposite of what you were taught), how many (if any) variables or other bits of code you have in there to test the code you think isn't working, printing results, and where that test code sits relative to the code or function under test.

There are only so many registers to work with. You may or may not think your code uses a lot of registers, but if you add that one extra line of C code in a loop and the compiler runs out of registers, it has to dump a register back to memory to free it up, use it, and then go back and restore what was in it. The volatile keyword has a huge performance hit; register will help but won't guarantee your variable remains in a register the whole time.

gcc did something interesting in a test:

while(1)
{
a[ra]=rb;
}

.L22:
ldr r2, [ip, #0]
ldr r3, [r0, #0]
str r3, [r1, r2, asl #2]
b .L22
.L25:


while(1)
{
ra=b[rb];
}

add r2, r3, r1, asl #2
.L28:
ldr r3, [r2, #0]
str r3, [r0, #0]
b .L28


Is that the smoking gun? Is ra=b[rb] faster than a[ra]=rb because of this? No, one got lucky and the optimizer pre-calculated a fixed address but chose not to precalculate the other. Make the indexes change and that optimization goes away, and the two end up the same.

Now that I see your code
o[iyx_o + ic] = i[i_index + ic];

is a HUGE difference from o[index_o] = i[index_i] in your original question (for this platform).

As far as that little loop goes, if you are worried about performance, use a single ldm/stm pair...DONE.

How did you isolate foo=a[ra] and b[rb] = bar? Do you have that example and where did you do your timing? Did you check the code generation to see that the timing itself was done at the right place, I have seen optimizers do funny things with where the timer was actually read.

I'm not sure how any of this (foo=a[ra] and b[rb] = bar) relates to the cache; that is another point of confusion I'm having. I can check again, but your cache lines are not measured in bits here, they are measured in bytes. Here is what will affect your performance from a cache perspective. Say this is how your arrays are declared in your source:

unsigned long o[100];
unsigned long i[100];
unsigned long someotherdata[100];

Doing this:

unsigned long o[100];
unsigned long someotherdata[100];
unsigned long i[100];

Can sometimes make a huge difference because of caching, sometimes faster, sometimes slower, sometimes no real change at all.

Or even this

unsigned long o[100];
unsigned long x;
unsigned long i[100];
unsigned long someotherdata[100];

Can make a difference, but probably only if you are dwelling on a single specific rotate on a single specific data pattern on a single test.

What matters is where your cache line boundaries are.

if this were some tight loop executed a zillion times because there was no way to optimize it:


.L22:
ldr r2, [ip, #0]
ldr r3, [r0, #0]
str r3, [r1, r2, asl #2]
b .L22
.L25:
add
sub

If the whole loop (this is just code I borrowed from above; don't try to interpret what it is doing or relate it to the discussion) plus the two instructions after it are all within one cache line, that's a good thing: it will hopefully stay in the cache, since it only takes one cache line to store it. If a cache line boundary falls within the loop, then it takes two cache lines to hold it, and your chances double that it will get kicked out and cause misses.

Something I assume people don't realize (I could be wrong) is that the instruction after the branch gets fetched before the branch executes. So in this case, if the cache line boundary is between the b .L22 and the add (which I clearly threw in without registers), it still takes two cache lines to execute this loop: you get a prefetch of the add after the prefetch of the branch. If the branch executes (unconditional in this case, but pretend it was conditional like a real loop), then the add and perhaps the sub are kicked out of the pipe (a pipe flush) and execution stops while the two ldrs and the str work their way through the pipe to the execution unit. This is why unrolling loops increases performance (sometimes). I was taught to call this the branch shadow. Some processors actually execute the code in the branch shadow instead of flushing it, which makes for some really interesting optimization choices; normally, I assume, nops follow most branches on those architectures.

So where am I headed with this? If a cache line boundary lies in or just after your loop, it takes two cache lines to store a heavily used loop. If the loop uses memory whose address has a similar pattern (for example the code is at address 0x1000000 and the memory it is accessing is at 0x2000000), they will compete for the same cache line. If you had a 4-deep cache, say, then after a few iterations the code would settle in one place and the data in another, and they may never kick each other out; but if you had one array at 0x1000000, another at 0x2000000, another at 0x3000000, another at 0x4000000 and the code at 0x5000000, THAT is where you would get cache misses every time through the loop. This is an extreme example of course. Where I was headed by re-arranging the definitions of arrays in your code: by rearranging, you may end up moving 0x2000000 to 0x22320134, putting it on a different cache line, and now there are four items in a four-deep cache; everything else held equal, those four items may settle into the cache and not get bumped, making for maximum performance.

Also, adding a single unsigned long x; can cause sections of the program to move by 4 bytes in memory, and those four bytes can move either data segments or code segments into or out of cache line boundaries, sometimes resulting in noticeable performance differences (faster OR slower). Likewise one, two or ten asm("nop"); lines in the code can have the same effect. Rearranging function definitions in the code, or re-arranging the order the .c files are listed in on your gcc command line, will affect your cache performance because you are moving heavily and lightly used code segments around relative to cache line boundaries and relative to other busy code/data that share the same cache lines.

Having a 16-kbyte cache, for example, doesn't mean you can write 16 kbytes worth of program and have it live "within" the cache; you may not even get 4 kbytes to stay in the cache. You might if you know what you are doing; you might royally screw it up if you don't. Caches are black magic, and even if you know a lot of the details about the cache, you probably don't know exactly how it chooses which cache line to dump when the time comes. If it is a randomizer, how is it calculated, and where are you in the random pattern at any given time?

Sequential code segments (the a[ra] = and = b[rb] parts) are guaranteed not to have a cache effect on each other as far as the code itself goes; they are either on the exact same cache line or the one next to it, and can't bump each other.

Then once you understand all of this, you get to understand the write buffer: all writes go to memory eventually (meaning writes to the cache work their way out to memory later). To optimize for the write buffer you want larger items; two byte writes in a row are worse than a single halfword write, as they take twice as much of the write buffer, and the ideal is multiple registers in an stm. As soon as any MEMORY read occurs (not a cache read), whether a prefetch, a data read or a cache line fill, everything stops and waits for the write buffer to drain. So this code:

stm r1,{r2,r3,r4,r5} trying to optimize here
add r7,r8,r9
sub r7,r7,r3
sub r7,r7,r1

The stm only does you any good for performance if these instructions are in the cache: if they are, then while the add and the two subs are being fetched and executed, the stm-initiated write buffer writes are still going on. But if the prefetch of the add, for example, causes a read from memory (a prefetch or cache line read), then the write buffer has to flush before even fetching the add, so there is no performance gain.

Well, that is enough for now; I could ramble on and on. What is the short version of this? If that is your code, a 3-item copy, and you are completely convinced that of all the code in that function that loop is the timing killer, make a custom memcpy using ldr/str or ldm/stm, retest performance to see if it made a difference, and then move on.

The short answer to understanding the cache: don't worry about it, it's black magic. Don't try too hard to optimize for it; every (in)significant line of code you add to or remove from the project is going to affect your cache performance. It's like predicting the weather: you ain't gonna get it.

If there are places where you do good-sized or large memory moves/copies, lining the data up on word boundaries and having a good ldm/stm-based memcpy will make a huge difference.

As for what the compiler is doing and how it is optimizing: you have to disassemble and examine EVERY time you make a(n) (in)significant change to the code. One tiny little change or re-arrangement of lines in a function can make a huge difference to the optimized output. If you are serious about performance you need to know the asm, the cache, the write buffer, and your compiler very well (which will lead you to find a different compiler before long).

Certainly don't be misled into thinking that writing your code in asm will result in faster code. 100% asm means to me that someone wasted a huge amount of time and has a project they can never port to another platform without starting over. It may run faster than C, but I can guarantee that for a large project, a good compiler, C, and hand-optimizing the problem areas will outrun 100% asm projects EVERY TIME.

If I am speaking some foreign language with the cache line and write buffer talk, I can write another long post to cover the basics (at least what I know about it). I am sure somewhere out there on the net someone has a nice writeup (they may use different terminology though).
 
dwelch posted on Sep 16 2006 at 10:00 AM said:
I need to see how you came to the conclusion that
foo = i[index_i];
o[index_o] = bar;
Are different?

Those two lines are never in the same build of the program; in other words, they are run separately. Btw, I'm now using -Os; it turned out to be faster than either -O2 or -O3 in my case.

dwelch posted on Sep 16 2006 at 10:00 AM said:
How did you isolate foo=a[ra] and b[rb] = bar? Do you have that example and where did you do your timing? Did you check the code generation to see that the timing itself was done at the right place, I have seen optimizers do funny things with where the timer was actually read.

It's a shame you focused on the use of foo and bar, because at no point did I use those two lines to get a viable result. You can check the function I posted; I've updated it because I'm no longer using a loop anyway, and instead I'm copying entire words. Anyway, for my timing measurement I did not isolate the data-copying line: I placed the timing outside the function and made it run once every full 360° rotation. To tell how long that one line was taking, I then did the measurement again with the line commented out and did the subtraction.
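
A sketch of that measure-and-subtract approach (the timer call and full_rotation() wrapper are assumptions; the thread does not say which clock was used):

Code:
#include <sys/time.h>

extern void full_rotation(void);   /* hypothetical: one complete 360 degree pass */

static double seconds_now(void)
{
	struct timeval tv;
	gettimeofday(&tv, NULL);
	return tv.tv_sec + tv.tv_usec / 1e6;
}

static double time_full_rotation(void)
{
	double t0 = seconds_now();
	full_rotation();
	return seconds_now() - t0;
}

/* run (and average) this once with the copy line enabled and once with it
   commented out; the difference between the two averages is the cost of
   that one line */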

Anyway, you guessed it, I hardly know anything about write buffers or caches; that's all new to me. However, I have performance issues due to index_i jumping all over the place depending on the angle of rotation, while index_o increases by 4 (I should make it 1 now that I'm using whole words) or sometimes by 0 from one loop iteration to the next. I'm considering making it work the other way around, which means having the read index, index_i, increment in a regular manner and having the write index, index_o, jump all over the place.

While it will undoubtedly make the reading process much faster, how would the write buffer like that in terms of performance?
 
Okay, that was partially horse siht (I am talking about my own post)... I shouldn't ramble that late at night, perhaps...

The 920 has 8 words per cache line (32 bytes per line).

dcache and icache are separate.

Address bits 31 down to 8 are used for the tag, bits 7 down to 5 are used for the segment, and bits 4 down to 2 select the word within the cache line. So if bits 7 down to 5 don't match for two addresses, they can't affect each other in the cache (sequential execution, small loops). You are looking for bits 31 down to 8 to differ and bits 7 down to 5 to match for two addresses to have an effect on each other: 0x1000000 vs 0x2000000, or 0x1000000 and 0x1000100,
but not 0x1000000 and 0x1000080.
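
A quick sketch of that mapping (illustration only, following the bit fields described above):

Code:
#include <stdint.h>

/* ARM920T dcache as described above: tag = bits 31:8, segment = bits 7:5 */
static int same_segment(uint32_t a, uint32_t b)
{
	return ((a >> 5) & 7u) == ((b >> 5) & 7u);
}

/* two addresses can only compete for a cache line when the segments match
   but the tags differ */
static int can_compete_in_cache(uint32_t a, uint32_t b)
{
	return same_segment(a, b) && (a >> 8) != (b >> 8);
}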

a[ra+ic]=b[rb+ic];

a[ra+ic] and b[rb+ic] should not affect each other, and the code would need to be longer than 256 bytes, i.e. 64 instructions (8 cache lines), before it starts competing with itself.

This:

int one ( void )
{

for(ic=0;ic<3;ic++)
{
a[ra+ic]=b[rb+ic];
}
return(0);
}

gives this:

.L2:
ldr r3, [r5, #0]
ldr r2, [r6, #0]
add r3, r0, r3
ldr r1, [lr, r3, asl #2]
add r2, r0, r2
str r1, [r4, r2, asl #2]
ldr r3, [ip, #0]
add r0, r3, #1
cmp r0, #2
str r0, [ip, #0]
bls .L2

for both -O2 and -O3 (gcc 4.1.1)

12 words plus the branch shadow (this is assuming you were accessing 32-bit quantities):
two or three cache lines depending on how they line up, so it won't kick itself out of the cache. I would assume the first time through the cache lines are most likely going to miss; the next two times through they will hit (so long as you don't have any interrupts going on).

If a[] and b[] are 16 bit quantities it actually unrolled the loop in my simple test:

ldr r3, .L3+12
ldr lr, [r2, #0]
mov ip, r1, asl #1
ldr r4, [r3, #0]
ldrh ip, [ip, lr]
mov r5, r0, asl #1
strh ip, [r5, r4] @ movhi
add r3, r1, #1
mov r3, r3, asl #1
ldrh r3, [r3, lr]
add r2, r0, #1
mov r2, r2, asl #1
strh r3, [r2, r4] @ movhi
add r1, r1, #2
mov r1, r1, asl #1
ldrh r1, [r1, lr]
add r0, r0, #2
mov r0, r0, asl #1
ldr r3, .L3+16
strh r1, [r0, r4] @ movhi
mov r2, #3
mov r0, #0
str r2, [r3, #0]

Much more likely to have all of it miss in the cache: 23 instructions, so 4 or 5 cache lines. No branches though, so no dumping of the pipe and losing two clocks (or more; how deep is the pipe?).


But you are really wanting to do bytes,

.L2:
ldr r3, [r4, #0]
ldr r1, [r5, #0]
ldr r2, [r7, #0]
add r1, r1, r3
ldr r3, [r6, #0]
ldrb r0, [r1, ip] @ zero_extendqisi2
add r2, r2, r3
strb r0, [r2, ip]
ldr r3, [lr, #0]
add ip, r3, #1
cmp ip, #2
str ip, [lr, #0]
bls .L2

gcc rolled it up again. 14 instructions, two or three cache lines, second and third time through no cache misses.



Does this loop iterate three times because of red, green and blue? (I am heading off on a wild tangent based on that assumption.) Why do you keep the three separate; why not work with 16-bit pixel colors instead of the component colors? You should see some performance gains by going to 4 bytes per pixel (red, green, blue, don't-care), assuming you don't want to work with 16-bit colors but with the red, green, blue components instead. This is because you can then do things like this rotate on word boundaries (no need for the loop): a[ra]=b[rb]. The loop goes away and it becomes a single word transfer:

ldr r3, [lr, #0]
ldr r2, [r4, #0]
ldr r1, [r0, r3, asl #2]
str r1, [ip, r2, asl #2]

Much simpler, faster, still one or two cache lines, and there are odds that they will hit; but even allowing for that, it's faster than either of the two above.

This only works of course if you take the time to align your pixels on word boundaries:

unsigned char myscreendata[320*240*4];


won't work; you either need to align it:

unsigned char rawscreen[(320*240*4)+8];
unsigned char *myscreendata;
unsigned long ra;
...

/* one-time initialization code */
ra=(unsigned long)rawscreen;
if(ra&3) ra=(ra+4)&0xFFFFFFFC;   /* round up to the next word boundary */
myscreendata=(unsigned char *)ra;


Or:

unsigned long rawscreen[320*240];
unsigned char *myscreendata;

...
(one time init)
myscreendata=(unsigned char *)rawscreen;


Then you can deal with the component colors using myscreendata[].


I think with the ARM you are likely to find it best to use unsigned longs for all pixels and do all pixel-based work at the unsigned long level; then, if you need to get at the component colors, use a shift:

red=(unsigned char)(mypixeldata[index]>>16);
green=(unsigned char)(mypixeldata[index]>>8);
blue=(unsigned char)(mypixeldata[index]>>0);

As the ARM has a good barrel shifter.


red=(unsigned char)(a[ra]>>16);
green=(unsigned char)(a[ra]>>8);
blue=(unsigned char)(a[ra]>>0);


ldr r3, [r1, #0]
mov r3, r3, lsr #16
strb r3, [lr, #0]
ldr r2, [r1, #0]
mov r2, r2, lsr #8
strb r2, [ip, #0]
ldr r3, [r1, #0]
strb r3, [r0, #0]


And if you like you can even help the compiler:

rb=a[ra];
red=(unsigned char)(rb>>16);
green=(unsigned char)(rb>>8);
blue=(unsigned char)(rb>>0);

ldr r3, [r0, #0]
mov r2, r3, lsr #16
mov r1, r3, lsr #8
strb r2, [r4, #0]
strb r1, [lr, #0]
strb r3, [ip, #0]
str r3, [r5, #0]


Rebuilding a pixel, perhaps something like this:

rb=red; rb<<=8;
rb|=green; rb<<=8;
rb|=blue;
a[ra]=rb;

ldrb r1, [r7, #0] @ zero_extendqisi2
ldrb r3, [r6, #0] @ zero_extendqisi2
ldrb r2, [r5, #0] @ zero_extendqisi2
orr r3, r3, r1, asl #8
ldr r0, [lr, #0]
orr r2, r2, r3, asl #8
str r2, [r4, #0]
str r2, [ip, r0, asl #2]

Or

a[ra]= (((unsigned long)red)<<16) | (((unsigned long)green)<<8) | blue;

ldrb r1, [r4, #0] @ zero_extendqisi2
ldrb r3, [lr, #0] @ zero_extendqisi2
ldrb r0, [r5, #0] @ zero_extendqisi2
orr r3, r3, r1, asl #16
ldr r2, [r6, #0]
orr r3, r3, r0, asl #8
str r3, [ip, r2, asl #2]

probably would have been the same had I not had the intermediate rb on the one above this one.


Maybe:

ucptr=(unsigned char *)&a[ra];
ucptr++;   /* skip the don't-care byte */
*ucptr++=red;
*ucptr++=green;
*ucptr++=blue;

OUCH, that was a step in the wrong direction:

ldr r1, [r6, #0]
ldr r3, [r5, #0]
ldrb r2, [lr, #0] @ zero_extendqisi2
add r3, r3, r1, asl #2
strb r2, [r3, #1]
ldrb r1, [ip, #0] @ zero_extendqisi2
add r3, r3, #1
strb r1, [r3, #1]
add r3, r3, #1
ldrb r2, [r0, #0] @ zero_extendqisi2
add r1, r3, #2
strb r2, [r3, #1]
str r1, [r4, #0]


Anyway, I was just guessing that your loop from zero to three in a pixel rotate might be related to the red, green and blue bytes. I think you definitely want to burn the extra 25% of memory to word-align those pixels and do all pixel work at the word level. It's all relative though; you know how much pixel work there is vs. color work (I would assume all the artwork is done at pre-compile time, which is why I don't understand why there would be any color work anyway, and thus you could use pre-computed 16-bit colors ready for video memory).



Using this:
((uint32_t *)o)[iyx_o>>2] = ((uint32_t *)i)[i_index>>2];

Instead of this:

for (ic=0; ic<3; ic++)
o[iyx_o + ic] = i[i_index + ic];


Only helps if your triplets are word aligned. It should still result in three byte reads and writes whether you use an index or a pointer... Let's see, just disassemble it...


int one ( unsigned char *a, unsigned char *b )
{

while(1)
{
((unsigned long *)a)[ra>>2]=((unsigned long *)b)[rb>>2];
}
return(0);
}


one:
@ args = 0, pretend = 0, frame = 0
@ frame_needed = 0, uses_anonymous_args = 0
stmfd sp!, {r4, lr}
ldr r4, .L6
ldr lr, .L6+4
mov ip, r1
.L3:
ldr r3, [lr, #0]
ldr r2, [r4, #0]
mov r3, r3, lsr #2
ldr r1, [ip, r3, asl #2]
mov r2, r2, lsr #2
str r1, [r0, r2, asl #2]
b .L3



Does the non-loop code even work? You are forcing unaligned word accesses, which most non-ARM gurus, and even many ARM gurus, jump up and down about, pointing at the ARM ARM and saying you can't do that; in reality what is supposed to happen is something like:

Before:

[0x1000] 0x11
[0x1001] 0x22
[0x1002] 0x33
[0x1003] 0x44
[0x1004] 0x55
[0x1005] 0x66


[0x2000] 0xAA
[0x2001] 0xBB
[0x2002] 0xCC
[0x2003] 0xDD
[0x2004] 0xEE
[0x2005] 0xFF


*i = 0x1000
*o = 0x2000

Byte based (for(ic=0;ic<3;ic++) o[base+ic]=i[base+ic];)

[0x2000] 0x11
[0x2001] 0x22
[0x2002] 0x33
[0x2003] 0xDD
[0x2004] 0xEE
[0x2005] 0xFF

Word based o[index>>2]=i[index>>2];

[0x2000] 0x11
[0x2001] 0x22
[0x2002] 0x33
[0x2003] 0x44
[0x2004] 0xEE
[0x2005] 0xFF

An innocent bystander was taken out


but if you are not word aligned

*i=0x1001
*o=0x2000

byte based:

[0x2000] 0xAA
[0x2001] 0x11
[0x2002] 0x22
[0x2003] 0x33
[0x2004] 0xEE
[0x2005] 0xFF

Word based should do something like:

[0x2000] 0x44
[0x2001] 0x11
[0x2002] 0x22
[0x2003] 0x33
[0x2004] 0xEE
[0x2005] 0xFF

So you wipe out a byte.

Or for the case

*i = 0x1000
*o = 0x2002

byte based:

[0x2000] 0xAA
[0x2001] 0xBB
[0x2002] 0x11
[0x2003] 0x22
[0x2004] 0x33
[0x2005] 0xFF


word based:

[0x2000] 0x33
[0x2001] 0x44
[0x2002] 0x11
[0x2003] 0x22
[0x2004] 0xEE
[0x2005] 0xFF


Not the same at all and you hose two innocent bystanders.


At least this is the way I remember it working on the ARM7; I cannot find my reference that explains unaligned addressing on the ARM, but I am pretty sure it is as I have described above. I have not tested this on the GP2X (yet).
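
A defensive sketch prompted by this discussion (not code from the thread): take the single-word path only when both byte offsets are word aligned, assuming the o and i base pointers themselves are word aligned, and otherwise fall back to byte copies.

Code:
#include <stdint.h>

static void copy_pixel_safe(uint8_t *o, const uint8_t *i,
                            uint32_t iyx_o, uint32_t i_index)
{
	if (((iyx_o | i_index) & 3u) == 0) {
		/* both offsets word aligned: one 32-bit load and store */
		*(uint32_t *)(o + iyx_o) = *(const uint32_t *)(i + i_index);
	} else {
		/* unaligned: copy the three colour bytes individually */
		int ic;
		for (ic = 0; ic < 3; ic++)
			o[iyx_o + ic] = i[i_index + ic];
	}
}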

The newer ARM ARM mentions "if alignment faults are enabled"; that one's new to me.
I guess we can go complain to gcc about this bug in the compiler:

int one ( unsigned char *a, unsigned char *b )
{
while(1)
{
((unsigned long *)a)[ra>>2]=((unsigned long *)b)[rb>>2];
}
return(0);
}

Producing this:

one:
@ args = 0, pretend = 0, frame = 0
@ frame_needed = 0, uses_anonymous_args = 0
stmfd sp!, {r4, lr}
ldr r4, .L6
ldr lr, .L6+4
mov ip, r1
.L3:
ldr r3, [lr, #0]
ldr r2, [r4, #0]
mov r3, r3, lsr #2
ldr r1, [ip, r3, asl #2]
mov r2, r2, lsr #2
str r1, [r0, r2, asl #2]
b .L3

And see what they say about it; most likely they will explain that it is a bug in the C code for this architecture, not a bug in the backend.
 