Dingoo MIPS-Specific Optimizations?


dsh
Hello, I started writing my own blitter because I suspect that SDL performs poorly at alpha-blended blitting.

So I wrote a basic, simple, unoptimized reference implementation to see how it goes.

On my laptop I get (quite a nice result for an unoptimized version):
~250fps with SDL
~210fps with my blitter (even though it memcpy's the final image to sdl's screen surface every frame)

On dingoo (dingux) on the other hand I get*:
~90-100fps with SDL
~20fps with my blitter

*no SDL video - straight-to-mmapped-fb transfer; and this is not the problem, since the memcpy-to-sdl-screen version runs with similar performance

This difference is quite mind-numbing, since from what I've heard SDL is highly optimized with hand-crafted assembly as far as x86 is concerned, but lacks such optimizations for architectures often used in embedded solutions, i.e. ARM and MIPS.

I use only integer math. Is there something I should know about MIPS? Any operations that are particularly slow? Any types I should avoid? Any compiler switches I should use?

I tried to compile with various levels of optimization and compiler flags, to no avail.

The blitting function: http://pastebin.com/47kZ5yvb
The macros: http://pastebin.com/6vWWiwq6

Could somebody look at this and tell me what's wrong with it? 50% of SDL's speed would be pretty slow but acceptable given no optimizations, but 20% of SDL (taking into account that it's ~85% on the PC)? Come on.
 
I can only guess, but chances are that you're hitting things like cache misses, memory misalignment and such. Without an *intimate* knowledge of the processor you're targeting, the chances are you won't be able to beat the supplied SDL version. Compiler optimisations will sort some of those things out for you but are no substitute for *knowing* the hardware.

Considering that your code hits similar speeds on the PC, these look to be the most likely reasons. I'd try recompiling with every possible compiler optimisation turned on and I would guess that would get you closer but if it got you performance that was identical / better than the supplied SDL, I'd be very surprised. There's always room to manoeuvre when it comes to optimising already-optimised code, but I don't think you'll get any ground-breaking increases unless you have an intimate knowledge of the exact architecture that you're aiming for. Assembly-level, and hardware-level, optimisation is damn hard nowadays. It used to be as simple as counting cycles used for each instruction and finding cycle-saving tricks by using unusual instructions to get the job done, but now lots of caching and bus accesses make it almost impossible to be certain you have the best possible way of doing things on a particular architecture.

Good luck with your endeavour, though.
 
I don't think it's cache misses; I don't see anything you're doing that could be hindering cache performance vs whatever the SDL version is doing.

As you note yourself in the code, there are several things that are bad for performance. Some of these the compiler will optimize out, but the problem is that GCC for MIPS isn't really that good, and bizarrely these deficiencies sometimes show up even in places you'd expect to be completely frontend- and platform-neutral.

Unless I see the compiler generated ASM I can't really know for sure what the compiler is doing and not doing and how to best massage the code for it. Maybe you could post an ASM listing (-S to gcc).

A big issue is all the work generated around srcImage->bytesPerPixel being variable within the loop. If the loop were multiplexed to different byte variations srca and srcad would become constants and it would eliminate the if. This is something gcc is not likely to do. It's uncertain whether or not it'll even cache the value in a register. So in the 16bpp case you're probably looking at a cmp, failed branch, move, move, subtraction, addition, and possibly three loads per loop iteration.
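
A sketch of that specialization, just to illustrate the shape (the names here are hypothetical, not from dsh's pastebin): pick a row blitter per source format once, outside the loops, so srca/srcad become effectively constant and the per-pixel if disappears.

Code:
#include <stdint.h>

/* Hypothetical row blitters, one per source pixel format */
static void blitRow16bpp(uint16_t *dst, const uint8_t *src, int w);
static void blitRow24bpp(uint16_t *dst, const uint8_t *src, int w);

static void blit(uint16_t *dst, const uint8_t *src, int w, int h,
                 int dstPitch, int srcPitch, int bytesPerPixel)
{
    /* the format test runs once here instead of once per pixel */
    void (*row)(uint16_t *, const uint8_t *, int) =
        (bytesPerPixel == 2) ? blitRow16bpp : blitRow24bpp;
    while (h--) {
        row(dst, src, w);
        dst += dstPitch;    /* dst pitch in pixels */
        src += srcPitch;    /* src pitch in bytes  */
    }
}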

Another problem is the way you're actually doing the blending, in that you're not taking advantage of any parallelization. You're performing 6 ANDs, 9 shifts, 2 ORs, and 6 multiplies. The multiplies are probably not very fast - I doubt they have an early out for small sizes on this platform; it'll likely take 4 or more cycles. You can perform blending in 5 ANDs, 3 shifts, 3 ORs, and 2 multiplies, a substantial improvement over what you're doing. (something subtle and minor that won't hurt you on MIPS but might on other platforms is that you'd be better off shifting then ANDing to save on needing distinct and large immediates).

This post happens to describe the algorithm:

http://www.gp32x.de...post__p__884947

Admittedly this is a little different since it restricts the multiplication range you can use, but for 16bpp there's not much benefit to having 256 levels instead of, say, 17 levels and using a true range division without the add. But I don't remember what SDL uses.

(although, I haven't tested that exact code so there might be a mistake.. but I've done blending like this all the time)
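
For reference, a minimal sketch of this style of parallel blend, in my own arrangement (the exact op counts come out slightly differently depending on how you shuffle it). Note that alpha is 0..31 here, so 31 weights the source by 31/32, i.e. not quite fully opaque.

Code:
#include <stdint.h>

/* Spread each RGB565 pixel over a 32-bit word (G in bits 21-26,
 * R in 11-15, B in 0-4) so all three channels blend with two
 * multiplies; the zero bits above each channel absorb the carries. */
static inline uint16_t blend565(uint16_t s16, uint16_t d16, uint32_t alpha)
{
    uint32_t s = (s16 | (uint32_t)s16 << 16) & 0x07E0F81F;
    uint32_t d = (d16 | (uint32_t)d16 << 16) & 0x07E0F81F;
    uint32_t r = ((s * alpha + d * (32 - alpha)) >> 5) & 0x07E0F81F;
    return (uint16_t)(r | r >> 16);
}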

Finally, you should invert the loop so you count downwards, instead of counting a variable upwards and comparing it (against something that might remain resident in memory, no less). You would think gcc would make this simple loop-inversion optimization for you, but in my experience it doesn't.
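
In sketch form (a hypothetical row fill, just to show the loop shape):

Code:
#include <stdint.h>

/* Counting down to zero: the loop test becomes a single branch against
 * zero (bnez on MIPS) instead of a compare against a bound that may
 * not even stay in a register. */
static void fill_row(uint16_t *dst, uint16_t color, int width)
{
    int n = width;              /* assumes width > 0 */
    do {
        *dst++ = color;
    } while (--n != 0);
}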

So you're right that SDL won't have MIPS optimized versions (probably), but there's still a lot you can do better in the C code. Examining the ASM can help you do things a little differently in the C to avoid certain problems. gcc still tends to do some boneheaded things both in frontend and backend - the former you can often avoid by changing your C (sometimes to something pretty ugly), the latter you're less likely to. So chances are pretty decent that for pure MIPS you can do maybe slightly better than GCC on this, even after massaging your code. You can also use the XBurst instructions on Dingoo, which will probably be more helpful for an application like this.

One last word: pipelined, single-issue, in-order architectures with L1 cache, simple write buffers and no automatic prefetching aren't very hard to predict performance for. You won't know when a cache miss happens exactly, but neither will the compiler, so it's all pretty moot. What I'm saying is that a lot of still currently used CPUs, like the one in the Dingoo or, say, ARM9, don't require some great complex understanding to write ASM for.
 
I think you can reduce the two muls to one.

If it's color = color1 * alpha + (1 - alpha) * color2,
it can be color = color2 + (color1 - color2) * alpha.
That's assuming color is a float from 0 to 1.

But because we want the integer version, here is some old code I wrote in the past:
/* alpha is 0..255; >>8 divides by 256, so alpha = 255 blends 255/256 of the way */
r = r2 + (((r1 - r2) * alpha) >> 8);
g = g2 + (((g1 - g2) * alpha) >> 8);
b = b2 + (((b1 - b2) * alpha) >> 8);

I am not sure how this could apply to Exophase's version (it's interesting, I hadn't thought of this way to do it before; I have to study it more to understand the bit twiddling here).
But maybe in yours. I am not sure why you add 1 at the end (OK, I am a bit tired now, had a hard flight :p)

I remember that the alpha blending I used in some GP2X demo (with the r,g,b code above; I never used the one provided by SDL, didn't even know there was one) managed slightly more than 60fps crossfading two pictures. I think. Or maybe over 30. I don't remember. But that was ARM and this is MIPS. I seriously don't know.

Interesting discussion though.
 
Sorry, guys, for not responding earlier; I thought I had email notifications turned on.

So, cutting to the chase. The first problem was alignment. When I changed it so that each pixel of the alpha-blended images takes 4 bytes instead of 3, performance increased by 150%, but it was still slower than SDL. I don't worry about malloc, since according to the docs it should return addresses that are multiples of 4.

That's when I started optimizing. The best approach I found to this problem is the one described here: http://www.virtualdub.org/blog/pivot/entry.php?id=117. In short, if you have enough headroom above each component, you can blend two or even all of them at once.
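
A sketch of that trick at 32bpp, as I read the article: R and B share one multiply and G gets the other. alpha is assumed here to run 0..256 (so 256 means fully opaque); a plain 0..255 alpha needs a small adjustment or special cases.

Code:
#include <stdint.h>

/* R (bits 16-23) and B (bits 0-7) are blended together in one packed
 * word; the 8 zero bits above each of them absorb the intermediate
 * 16-bit products. G (bits 8-15) is done separately the same way. */
static inline uint32_t blend8888(uint32_t src, uint32_t dst, uint32_t alpha)
{
    uint32_t srb = src & 0x00FF00FF, drb = dst & 0x00FF00FF;
    uint32_t sg  = src & 0x0000FF00, dg  = dst & 0x0000FF00;
    uint32_t rb = ((srb * alpha + drb * (256 - alpha)) >> 8) & 0x00FF00FF;
    uint32_t g  = ((sg  * alpha + dg  * (256 - alpha)) >> 8) & 0x0000FF00;
    return rb | g;
}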

I was still very curious how SDL does it, so I read/skimmed its blitter sources:
- SDL's blitter doesn't have any ARM or MIPS optimizations; it's pure software on this hardware
- It has MMX, 3DNow! and AltiVec (PPC) optimizations only
- The software blitting routine of interest blends all components at once, as described above, with Duff's device loop unrolling

It turned out that SDL stores 16-bit + alpha as 4 bytes per pixel, so I looked at the function void BlitARGBto565PixelAlpha(SDL_BlitInfo *info). Basically, it converts the source ARGB8888 pixel and the destination RGB565 pixel to a spread 0G0R0B 565565 format and blends all channels at once. So I thought that since I'm writing a 16-bit optimized library and that space is already wasted anyway, I could store the source pixels in that format from the beginning. Since we have only 5 bits for R and B, there will be no loss of precision on those channels when we have only 5 bits of alpha; the only (negligible) loss could take place on the G channel. It's a tradeoff we can live with.
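
A minimal sketch of what such an inner loop could look like. The exact layout is my assumption (channels spread as in the 0x07E0F81F mask, 5-bit alpha stashed in the otherwise-unused top bits), not necessarily dsh's actual format:

Code:
#include <stdint.h>

/* Source pixels stored pre-spread (G in bits 21-26, R in 11-15,
 * B in 0-4), 5-bit alpha assumed in bits 27-31; only the destination
 * is spread on the fly.  Note alpha 31 gives a 31/32 weight here. */
static void blit_row_spread(uint16_t *dst, const uint32_t *src, int n)
{
    do {                                      /* assumes n > 0 */
        uint32_t s = *src++;
        uint32_t a = s >> 27;                 /* 0..31 */
        uint32_t d = (*dst | (uint32_t)*dst << 16) & 0x07E0F81F;
        s &= 0x07E0F81F;
        d = ((s * a + d * (32 - a)) >> 5) & 0x07E0F81F;
        *dst++ = (uint16_t)(d | d >> 16);
    } while (--n != 0);
}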

Since the previous benchmark wasn't good enough, I wrote a more synthetic one: it just blits a specified number of images without displaying anything and prints the time it took. This version easily outperformed SDL on my x86 Intel Core box! I don't know what optimizations my SDL was compiled with, since I use Arch Linux, but I'm pretty sure it includes at least MMX since the packages are i686. That was impressive, so I rushed to test it on the dingoo. To my astonishment SDL is still faster there! WTF?

How is this possible? I have basically the same code as SDL but with fewer instructions, since I don't have to reformat the source pixel. Also, I tried Duff's loop unrolling - it only speeds up bigger blits a bit... I tried different compiler options and optimization levels and nothing yielded any improvement, but I learned that -O1 produces exactly the same code as -O2 and -O3 with this compiler...

So here are the bench results:
x86 - Intel Core Duo 1.6GHz running stock SDL from Arch Linux repos
Code:
Starting bench with 5000 blits.
***UFB***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.020000 s == 144.000000 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.920000 Mpx / 0.650000 s == 126.030769 Mpx/s

***SDL***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.030000 s == 96.000000 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.920000 Mpx / 0.720000 s == 113.777778 Mpx/s

x86 with duff's loop
Code:
Starting bench with 5000 blits.
***UFB***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.020000 s == 144.000000 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.920000 Mpx / 0.580000 s == 141.241379 Mpx/s

***SDL***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.030000 s == 96.000000 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.920000 Mpx / 0.710000 s == 115.380282 Mpx/s

Dingoo A320 - without Duff's loop, compiled with: -O3 -finline-functions -fomit-frame-pointer -fno-exceptions -mips32 -funroll-loops
Code:
Starting bench with 5000 blits.
***UFB***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.260000 s == 11.076923 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.919998 Mpx / 8.160000 s == 10.039216 Mpx/s

***SDL***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.230000 s == 12.521739 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.919998 Mpx / 6.010000 s == 13.630615 Mpx/s

The second strange thing is that SDL gets faster with bigger blits while my code gets slower. Since they're almost exactly the same, how is this possible?

If you have some spare time, take a look at the code:
ufb_blitter.c
sdl's blitting function
[ASM] ufb_blitter.c compiled for dingoo with debugging symbols
[ASM] ufb_blitter.c compiled for dingoo without debugging symbols

The possible explanations that come to my mind:
- SDL uses some other function than the one I posted. Very unlikely - this is the fastest software alpha-blending blit function I found in SDL; any other would be slower.
- SDL on dingoo is a patched version with some crazy MIPS optimizations (checked it - not true)
- The compiler is really broken - unlikely, since the same compiler/toolchain (also prepared by BooBoo) was used to compile the SDL that is used on dingoo right now
- I missed some crucial compiler options, or the build process is screwed up in some other way
- The most likely: I have a big plain ol' BUG

Does anyone have a clue? This is driving me crazy and I've got other projects needing some love.
 
Post ASM of SDL's too and it'll be clear. You should actually disassemble the SO here, not compile with -S (use $(CROSS_COMPILE)objdump -D)

EDIT: Forgot to ask - is this still going straight to framebuffer in your version? Framebuffer tends to be setup with write buffering on but caching off, which is bad for reads. If you tried going to the same position on the 24x24 instead of random ones you'd probably see very different results on the SDL version, if this is what's happening. Still, even if you're blowing cache 100% on most or every blit you most likely get a big benefit on using it vs not using it, because the loads from the framebuffer will benefit from spatial locality of reference and a cacheline load is most likely faster than each pixel loaded individually, even as 32-bits each (in fact, sometimes uncached loads can be awfully slow, like on the Wiz).

If it's at all possible, it's nice to split your screen updates into several few-scanline spans and bin blits into them, so that you get the benefit of temporal locality by reusing cache per span, and your memory bandwidth requirements don't scale with the number of blits you're doing. Think of IMG's tile-based rendering for an example of this sort of thing. If you have any sort of prefetch instruction, that can also help a lot.
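
A sketch of the binning idea (x clipping and the actual per-row blend are left out, and all the names are illustrative):

Code:
#include <stdint.h>

/* Walk the screen in narrow horizontal bands and draw every blit's
 * slice of the current band before moving on, so the band's pixels
 * stay in L1 cache across blits instead of being refetched per blit. */
#define BAND_H 16                       /* scanlines per band; tune it */

typedef struct { int x, y, w, h; const uint32_t *pixels; } Blit;

static void draw_banded(uint16_t *fb, int fb_w, int fb_h,
                        const Blit *blits, int nblits)
{
    for (int band = 0; band < fb_h; band += BAND_H) {
        int band_end = band + BAND_H < fb_h ? band + BAND_H : fb_h;
        for (int i = 0; i < nblits; i++) {
            const Blit *b = &blits[i];
            int y0 = b->y > band ? b->y : band;
            int y1 = b->y + b->h < band_end ? b->y + b->h : band_end;
            for (int y = y0; y < y1; y++) {
                uint16_t *row = fb + y * fb_w + b->x;
                const uint32_t *srow = b->pixels + (y - b->y) * b->w;
                /* ... blend b->w pixels from srow into row ... */
                (void)row; (void)srow;
            }
        }
    }
}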
 
Exophase said:
Post ASM of SDL's too and it'll be clear. You should actually disassemble the SO here, not compile with -S (use $(CROSS_COMPILE)objdump -D)

disassembled BlitARGBto565PixelAlpha
ufb_blitter.o disassembled for convenience <- it has calcRects inlined

Exophase said:
EDIT: Forgot to ask - is this still going straight to framebuffer in your version? Framebuffer tends to be setup with write buffering on but caching off, which is bad for reads. If you tried going to the same position on the 24x24 instead of random ones you'd probably see very different results on the SDL version, if this is what's happening. Still, even if you're blowing cache 100% on most or every blit you most likely get a big benefit on using it vs not using it, because the loads from the framebuffer will benefit from spatial locality of reference and a cacheline load is most likely faster than each pixel loaded individually, even as 32-bits each (in fact, sometimes uncached loads can be awfully slow, like on the Wiz).

The benchmark doesn't display anything; it just does the blits to a pure software image/surface. I have also written some convenience functions to set up the FB, and when testing whether blitting actually works I blit to an offscreen image, then memcpy it to the mmapped FB once per frame. BTW, the current dingux setup prepared by BooBoo uses uClibc; its memset and memcpy seem a bit too slow for this hardware, but I'm no expert in this...

PS: In case you have the impression that I didn't read your previous post: I did read all of the posts, but since I already had some progress I decided to write up what I have done so far instead of replying to each post point by point. So thanks, everybody, for your tips!

---

I tried looking at the asm, but I'm no asm guy; I usually do higher-level stuff. This is exactly why I'm playing with this - to learn something. A lot of things look suspicious even to me, but I'm probably wrong. Why is it splitting the bitwise ANDs like that? A lot of operations look like they're done on 16-bit halves instead of whole 32 bits?
 
I'm looking at the code right now. Quick question - do you have any alpha = 0 or alpha = 31 pixels? SDL is special casing them, you aren't. Otherwise your version looks like it should be faster.
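
In sketch form, the special cases being discussed, continuing the hypothetical pre-spread layout from the earlier sketch (5-bit alpha in the top bits; this is my illustration, not SDL's or dsh's code):

Code:
#include <stdint.h>

/* Skip fully transparent pixels, plain-store fully opaque ones, and
 * only do the blend arithmetic for the in-between alphas. */
static void blit_row_special(uint16_t *dst, const uint32_t *src, int n)
{
    do {                                      /* assumes n > 0 */
        uint32_t s = *src++;
        uint32_t a = s >> 27;
        if (a == 0) {                         /* fully transparent: skip */
            dst++;
        } else {
            s &= 0x07E0F81F;
            if (a == 31) {                    /* fully opaque: no blend */
                *dst++ = (uint16_t)(s | s >> 16);
            } else {                          /* general blend */
                uint32_t d = (*dst | (uint32_t)*dst << 16) & 0x07E0F81F;
                d = ((s * a + d * (32 - a)) >> 5) & 0x07E0F81F;
                *dst++ = (uint16_t)(d | d >> 16);
            }
        }
    } while (--n != 0);
}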
 
Exophase said:
I'm looking at the code right now. Quick question - do you have any alpha = 0 pixels? SDL is skipping them, you aren't.

Yes, a lot of them. There's no skipping in the SDL sources, so I commented it out in mine to make the comparison fairer. I wonder where BooBoo got the SDL from... I examined the SDL 1.2.14 sources, and SDL on dingoo is 1.2.13. The only patches BooBoo applied seem to be some joystick and mixer ones.

F**k, that explains almost everything. I'll try alpha=0 skipping and post the results.


x86 with alpha=0 skipping
Code:
***UFB***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.020000 s == 144.000000 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.920000 Mpx / 0.500000 s == 163.840000 Mpx/s

***SDL***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.030000 s == 96.000000 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.920000 Mpx / 0.710000 s == 115.380282 Mpx/s

dingoo with skipping
Code:
***UFB***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.250000 s == 11.520000 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.919998 Mpx / 7.470000 s == 10.966533 Mpx/s

***SDL***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.230000 s == 12.521739 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.919998 Mpx / 6.190000 s == 13.234249 Mpx/s

That's unbelievable. On x86 larger blits got faster, as with SDL, but on dingoo larger blits are still slower than smaller ones, and overall performance is still lower than SDL's. That's just insane.

Comparing these results with the previous ones shows that the performance gains from skipping are negligible on dingoo. WTF?
 
Reread my post (I edited it since your reply), having a fast case for alpha == 31 is vital too. Could you post the code again?

It's strange you say SDL didn't have alpha 0 and 31 special casing in your source because you can see it in the C file you linked.
 
Exophase said:
Reread my post (I edited it since your reply), having a fast case for alpha == 31 is vital too. Could you post the code again?

It's strange you say SDL didn't have alpha 0 and 31 special casing in your source because you can see it in the C file you linked.

I was previously looking at a different SDL blending function (without the special-case handling) and I think it somehow stayed in my mind... I don't know how I could have missed that. I'll test it and post the results.

Thanks a lot Exophase!

With both special cases:

x86
Code:
***UFB***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.020000 s == 144.000000 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.920000 Mpx / 0.420000 s == 195.047619 Mpx/s

***SDL***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.030000 s == 96.000000 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.920000 Mpx / 0.710000 s == 115.380282 Mpx/s

dingoo
Code:
***UFB***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.220000 s == 13.090909 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.919998 Mpx / 6.600000 s == 12.412121 Mpx/s

***SDL***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.230000 s == 12.521739 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.919998 Mpx / 6.180000 s == 13.255664 Mpx/s

It's a bit better but still crazy.
 
Yeah, this is all pretty weird. I think at this point I'd start incrementally chewing out pieces of the function until you broke the trends and isolated what's causing it. Actual profiling won't work that well since it's not fine grained enough :/
 
Exophase said:
Yeah, this is all pretty weird. I think at this point I'd start incrementally chewing out pieces of the function until you broke the trends and isolated what's causing it. Actual profiling won't work that well since it's not fine grained enough :/

Valgrind (cachegrind/callgrind) can profile single lines of code (and probably single instructions) as far as I know. I've used it a few times, and kcachegrind has a "source view" that shows the percentage of CPU time spent on each source line next to it. Unfortunately, nobody has ported valgrind to the dingoo (probably not a trivial task [uclibc and stuff...]), and on x86 everything works as expected, so profiling doesn't give any hints.

I used clock() to measure the time, but it doesn't work as expected, so I switched to SDL_GetTicks(); now the results for x86 are quite different for smaller blits, but on dingoo they're still almost the same. This thing is bugging me.

x86 results with SDL_GetTicks() for measuring time
Code:
***UFB***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.017000 s == 169.411765 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.920000 Mpx / 0.311000 s == 263.408360 Mpx/s

***SDL***
        Testing 5000 random position blits of image of size 24x24
                2.880000 Mpx / 0.034000 s == 84.705882 Mpx/s
        Testing 5000 random position blits of image of size 128x128
                81.920000 Mpx / 0.725000 s == 112.993103 Mpx/s

I have a few ideas for optimization. Instead of using Duff's loop, manually unroll the line loop (with a macro) and add a second loop to process the line leftovers (done, works great - see the sketch below). Also, what do you think about this: I figured that you usually blit sprites onto the backbuffer, so there are a lot of those blits, while copying the backbuffer to the screen is a once-per-frame operation. So I could store the backbuffer in the same format as the images with alpha (without the alpha, of course); then reformatting the source upon read and write wouldn't be necessary. The only costs are moving the green channel back to its place when "blitting" the backbuffer onto the screen, and the backbuffer taking twice the memory, since each pixel would take 4 bytes instead of 2.
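
The unroll-plus-leftovers shape, sketched (BLEND1 is a stand-in for the real per-pixel blend, not the actual code):

Code:
#include <stdint.h>

/* Macro-expanded body repeated four times per iteration, plus a plain
 * loop for the 0..3 leftover pixels at the end of the line. */
#define BLEND1() do { *dst++ = *src++; /* real blend here */ } while (0)

static void blit_row_unrolled(uint16_t *dst, const uint16_t *src, int width)
{
    int n = width >> 2;
    while (n--) { BLEND1(); BLEND1(); BLEND1(); BLEND1(); }
    n = width & 3;                      /* leftovers */
    while (n--) BLEND1();
}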
 
How much did the non-Duff's unroll improve things? The only real benefit of Duff's device is saving space/icache pressure, which is usually not worth it, as you found (non-Duff's unrolls get the benefit of improved scheduling, not just reduced looping overhead). But I doubt it's going to make a big difference at this point. If I were you, I'd just give the blits a size-multiple requirement and scrap the leftovers altogether. That's the nice thing about not writing a general-purpose library.

You shouldn't be copying backbuffer to framebuffer, you should be flipping the pointers. If Dingoo doesn't support this then you should start questioning it. This is especially true if you aren't using DMA to accomplish this, and if there's room for changing the format then you aren't. If Dingoo's write buffering is any good you'll probably be overlapping a lot of your work with the memory operations anyway, especially if you're blitting things from cache (useful tip, if you can at all try to blit the same bitmaps in succession). Unless your games are sprite hell you'll probably have more background and otherwise non-alpha pre-overdraw real estate than alpha stuff anyway. On a platform like Dingoo with presumably weak memory characteristics and no L2 cache you'll probably be hit pretty hard with doubling your backbuffer size.
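
For reference, the generic Linux fbdev page-flip mechanism; it needs a driver whose virtual y-resolution is at least twice the visible one, and whether the Dingoo's fb driver supports that is exactly the open question here, so this is a sketch of the mechanism, not a confirmed fix:

Code:
#include <linux/fb.h>
#include <sys/ioctl.h>

/* Pan the display to page 0 or 1 of a double-height virtual buffer. */
static int flip_page(int fb_fd, struct fb_var_screeninfo *vinfo, int page)
{
    vinfo->yoffset = page * vinfo->yres;   /* show page 0 or 1 */
    return ioctl(fb_fd, FBIOPAN_DISPLAY, vinfo);
}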

Try using gettimeofday for measuring time.. and increase the number of iterations, especially on x86.
 
Exophase said:
You shouldn't be copying backbuffer to framebuffer, you should be flipping the pointers. If Dingoo doesn't support this then you should start questioning it. This is especially true if you aren't using DMA to accomplish this, and if there's room for changing the format then you aren't.

Flipping pointers is not possible, unless I missed something. It's just an mmapped /dev/fb0, and the buffer is probably in kernel space, or it's the LCD controller's buffer; I'll investigate, since I don't know much about this kind of stuff. One thing is sure: SDL doesn't do any better than my memcpy-to-mmapped-fb approach.

I suppose there is no write-buffering. First I wanted to optimize the blits, then start doing the rest.

I've seen some headers with functions for DMA transfers specific to this chip somewhere, so I'll try using DMA to copy the backbuffer later.

EDIT: I checked - the LCD controller has only 172.8 KB of memory, so not enough for two buffers.

Exophase said:
If Dingoo's write buffering is any good you'll probably be overlapping a lot of your work with the memory operations anyway, especially if you're blitting things from cache (useful tip, if you can at all try to blit the same bitmaps in succession). Unless your games are sprite hell you'll probably have more background and otherwise non-alpha pre-overdraw real estate than alpha stuff anyway. On a platform like Dingoo with presumably weak memory characteristics and no L2 cache you'll probably be hit pretty hard with doubling your backbuffer size.

I'll try this. But first I'll test it with the XBurst instructions. The problem is that they (Ingenic, the JZ4740 chip makers) didn't write gcc extensions for their instruction set, so first I have to compile with -S, then feed the result through their awk script, and finally assemble that into a .o. The thing is that my scons build script isn't so easy to adapt to this scenario, so I'll spend more time rewriting the build system than actually doing the work... Does anybody know a way to do this properly and fairly elegantly in scons?

Exophase said:
Try using gettimeofday for measuring time.. and increase the number of iterations, especially on x86.

I doubt that gettimeofday will change much. I had been too lazy to test with more iterations; of course, with 5000 the values aren't stable enough on x86. I tested with 100K iterations and the values don't differ much from what I posted, but they are much more stable. Thanks.
 
Last edited by a moderator:
dsh said:
The problem is that they (Ingenic, the JZ4740 chip makers) didn't write gcc extensions for their instruction set, so first I have to compile with -S, then feed the result through their awk script, and finally assemble that into a .o. [...] Does anybody know a way to do this properly and fairly elegantly in scons?

you can use "awk script" to retrieve the way it issues words to do the same with gcc :

#define VADD16(dd1, dd0, ds1, ds0, dt1, dt0) \
    __asm__ __volatile__ (".word (0xXXXXXXXX | %0 | %1 | %2 | %3 | %4 | %5)" \
        : : "i"(DD0(dd0)), "i"(DD1(dd1)), "i"(DS0(ds0)), "i"(DS1(ds1)), "i"(DT0(dt0)), "i"(DT1(dt1)))

and #define DD0(x) (((x) & DD0_MASK) << DD0_SHIFT)

etc.

or :

Code:
__asm__ __volatile__ (
    VADD16(1,0, 2,0, 3,0) 
    ...
);

where :

#define VADD16(dd1, dd0, ds1, ds0, dt1, dt0) ".word (0xXXXXXXXX|" DD0(dd0) "|" DD1(dd1) "|" DS0(ds0) "|" DS1(ds1) "|" DT0(dt0) "|" DT1(dt1) ")\n\t"

(here VADD16 expands to just a string, since it's used inside the __asm__ block above)

and #define DD0(x) "(((" #x ")&DD0_MASK)<<DD0_SHIFT)"

etc. (in this variant DD0_MASK and DD0_SHIFT must be visible to the assembler, e.g. defined with .set)
 
Thanks for the help, I really appreciate it, but I decided not to go with the XBurst instructions. Initially I thought that they would enable me to do calculations for at least 2 pixels at once, but apparently they can't do that, so unless I'm wrong it stays as it is for now...
 