sparrow3D - multi-platform game engine


Hi all :)

This is slightly off topic, though related to your discussion about NEON...

Did you guys ever try ProjectNe10 in your projects? @Ziz: could its vector functions be useful to you?

Cheers, Magic Sam
Hi,
I had never heard of this library. However, I don't think it will help: I already tested using NEON just for some vector instructions, but that didn't help (Exophase explained why). The library is neat, but I know gcc well enough to use the NEON intrinsics on my own. This part of sparrow3d needs to be very, very fast; if I have to call a function (which is not inlined) for every pixel, I have already lost.
 
Okay, the last days I played a lot with NEON and I learned:

Sparrow3d was not designed to be very NEON-friendly; I would need to change the whole triangle rendering code to get a good result. Right now the NEON code is often even slower than the code not using NEON, for several reasons.

My approach is the following: for every pixel I load the texel and then run some tests, like alpha test, z test, pattern test, etc. If a test fails, I can stop considering this pixel and move on to the next one.

So what is the problem with NEON? With NEON I process 4 pixels at a time in parallel. However, I can't simply cancel execution for one of the four lanes; I can only set a mask and tell NEON to use that mask, e.g. when drawing the pixels. So I run all the tests for all pixels, even if a very early test has already failed. Furthermore, I have to leave NEON to load the texels. I also can't easily bail out when all lane masks are set to "not drawing", because there is no NEON instruction for that; I would need to move the mask values to memory before I could test them, and in the time that takes I can just as well keep executing useless NEON instructions. ;)
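To illustrate the masking (a toy sketch with made-up names, not actual sparrow3d code): four 16-bit pixels go through every step, a failed test only clears lanes in the mask, and a bit-select at the end decides what actually reaches the framebuffer.

Code:
#include <arm_neon.h>
#include <stdint.h>

/* Process 4 pixels at once. The "alpha test" cannot skip a lane; it only
 * produces an all-ones/all-zeros mask per lane, and vbsl selects between
 * the new texel and whatever is already in the framebuffer. */
static void set_4_pixels(uint16_t *fb, const uint16_t *texel,
                         const uint16_t *alpha)
{
    uint16x4_t old_px = vld1_u16(fb);
    uint16x4_t new_px = vld1_u16(texel);
    /* Lanes with alpha != 0 become 0xFFFF, the rest 0x0000. */
    uint16x4_t mask = vtst_u16(vld1_u16(alpha), vdup_n_u16(0xFFFF));
    /* Every lane was computed; the mask only decides what is written. */
    vst1_u16(fb, vbsl_u16(mask, new_px, old_px));
}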

All in all, NEON is only faster for me in some very special cases.

Fortunately, Hase is such a special case, so I now use NEON in Hase to get more FPS and lower the needed CPU clock. However, I ran into an interesting problem that I didn't have with my non-NEON approach (my theory is that that approach is easier for the compiler to optimize): cache misses in the texture buffer at certain rotations. I iterate over screen x, but if the texture is placed orthogonally, that means iterating over y in texture space, which is bad for the cache. In Hase this is the reason I sometimes get 35 FPS (at 700 MHz) and sometimes only ~20 FPS.
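For illustration (names like tex_pitch are my own, not from sparrow3d): with the texture rotated 90 degrees, each step in screen x moves a whole texture row in memory, so nearly every texel load starts on a new cache line.

Code:
#include <stdint.h>

/* Copy one span of a 90-degree-rotated texture: the texture index steps
 * by tex_pitch per screen pixel instead of by 1, which is what causes
 * the cache misses described above. */
static void span_rotated(uint16_t *dst, const uint16_t *texture,
                         int tex_pitch, int u, int v0, int len)
{
    for (int x = 0; x < len; x++)                    /* screen x */
        dst[x] = texture[(v0 + x) * tex_pitch + u];  /* stride: a full row */
}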

However, I don't have any other game that would profit, so I will not continue implementing NEON optimizations (e.g. for triangles without textures). ;)

Also, the NEON optimization is not enabled by default. If you need it or want to play with it, build with "PARAMETER=-DPANDORA_NEON".

Anyway: thanks for your help, Exophase! The little vector machine in the Pandora is very neat.

I am considering using this knowledge for some SSE optimizations in the PC build. Hopefully I won't run into the same bottlenecks there. ;)

Greetings, Ziz
 
Optimizing a 3D renderer for NEON is not for the faint of heart. Definitely a lot of work from the ground up if you want to get something generally worth using.

I touched a bit on the masking issue you bring up in my earlier post. You're right, you no longer have the ability to take an early exit on individual pixels. This isn't quite as bad as it seems, since at least for some polygons that early-exit branch is bad for branch prediction anyway, but it can hurt sometimes. There are still some things you can do, though:

- Check whether all of the pixels in the polygon fail and abort the whole thing; this doesn't take much overhead (a sketch follows below this list). Let's say you calculate visibility for 16 pixels at a time using a vector of 16 8-bit lanes, so 0xFF means visible and 0x00 means not visible. If you keep a 16x8-bit accumulator register and subtract the masks from it at each iteration, then parallel-add the results together when you're done, you get the number of visible pixels. If that number is 0, you don't have to render the polygon. If it equals the total number of pixels in the polygon, you can use a function that doesn't check the masks. If you're rendering to tiles instead of the entire screen, large polygons will often intersect the edges of the tile, which gives you more chances to find polygon segments that are entirely obscured.

- Do some kind of hierarchical or partial Z check, where you don't render pixels in a tile if they all fail the Z test. This is a lot more complex, maybe not well suited to CPU operation or worth the overhead; I haven't ever tried it.

- Run-length compress the pixel stream to remove obscured pixels. I haven't tried this either.
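Before the compression example, here is a minimal sketch of the counting trick from the first bullet, using NEON intrinsics. The function name, the Z-comparison used to build the masks, and the 16-pixel grouping are my assumptions for illustration, not code from the thread:

Code:
#include <arm_neon.h>
#include <stdint.h>

/* Counts visible pixels across 16-lane visibility masks (0xFF = visible,
 * 0x00 = obscured). Subtracting 0xFF (i.e. -1) from an 8-bit lane adds 1,
 * so the accumulator counts visible pixels per lane; a pairwise-add tree
 * then sums the lanes. Assumes fewer than 256 iterations per lane so the
 * 8-bit counters cannot overflow. */
static unsigned count_visible(const uint16_t *z_buf, const uint16_t *z_poly,
                              int groups)
{
    uint8x16_t acc = vdupq_n_u8(0);
    for (int i = 0; i < groups; i++) {
        /* Build one 16x8-bit mask from two 8x16-bit Z tests. */
        uint16x8_t lo = vcltq_u16(vld1q_u16(z_poly),     vld1q_u16(z_buf));
        uint16x8_t hi = vcltq_u16(vld1q_u16(z_poly + 8), vld1q_u16(z_buf + 8));
        uint8x16_t mask = vcombine_u8(vmovn_u16(lo), vmovn_u16(hi));
        acc = vsubq_u8(acc, mask);          /* each lane is 0 or -1 */
        z_poly += 16;
        z_buf  += 16;
    }
    /* Horizontal sum: widen pairwise until one value per 64-bit half. */
    uint64x2_t sum = vpaddlq_u32(vpaddlq_u16(vpaddlq_u8(acc)));
    return (unsigned)(vgetq_lane_u64(sum, 0) + vgetq_lane_u64(sum, 1));
}

If the result is 0, the whole segment can be skipped; if it equals groups * 16, a variant that never checks the masks can be used.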

Compression would work something like this: let's say your polygon is two rows, with visibility masks that look like this:

0xFF 0xFF 0xFF 0xFF 0xFF 0x00 0x00 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0x00 0x00 0x00 0xFF 0x00 0x00

0xFF 0xFF 0xFF 0xFF 0x00 0x00 0x00 0x00 0xFF 0xFF 0xFF 0xFF 0x00 0x00 0x00 0xFF

First you can convert this to a bitmap per row, so it looks like this:

1111100111111000100b

1111000011110001b

(where the most significant bit is the leftmost one)

This can be done fairly efficiently with NEON.
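For instance, the conversion could look roughly like this (a sketch assuming 16-pixel rows and 0xFF/0x00 byte masks, not code from the thread): AND each mask byte with its bit weight, then pairwise-add each 8-lane half down to one byte.

Code:
#include <arm_neon.h>
#include <stdint.h>

/* Turn a 16-lane visibility mask into a 16-bit bitmap, leftmost pixel
 * in the most significant bit. */
static uint16_t mask_to_bitmap(uint8x16_t mask)
{
    static const uint8_t w[16] = { 0x80, 0x40, 0x20, 0x10, 8, 4, 2, 1,
                                   0x80, 0x40, 0x20, 0x10, 8, 4, 2, 1 };
    /* Each visible lane keeps exactly its weight bit. */
    uint8x16_t bits = vandq_u8(mask, vld1q_u8(w));
    /* Pairwise-add tree: each 8-lane half collapses to one byte value. */
    uint64x2_t sums = vpaddlq_u32(vpaddlq_u16(vpaddlq_u8(bits)));
    uint32_t hi = (uint32_t)vgetq_lane_u64(sums, 0);  /* pixels 0..7  */
    uint32_t lo = (uint32_t)vgetq_lane_u64(sums, 1);  /* pixels 8..15 */
    return (uint16_t)((hi << 8) | lo);
}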

Then you iterate through those bitmaps to create a list of spans. You load up a 32-bit value, use clz (__builtin_clz in C, there's also a NEON version) to see how many 0 bits to skip, then another clz on the inverted value (or maybe clo, count leading ones) to see how large the span is. Shift off the spans you accounted for as you go. Repeat until the bitmap is zero. So you'd have a list that looks like:

5, 2, 6, 3, 1

4, 4, 4, 3, 1

Which says how many pixels to render then how many to skip.
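A plain-C sketch of that span extraction (function and parameter names are my own; rows wider than 32 pixels would need an outer loop):

Code:
#include <stdint.h>

/* Convert one row's visibility bitmap into an alternating draw/skip
 * span list. The row is at most 32 pixels wide with pixel 0 as its
 * leftmost bit; if the row starts obscured, the first draw entry is
 * simply 0. Returns the number of entries written: draw, skip, ... */
static int bitmap_to_spans(uint32_t bits, int width, uint8_t *spans)
{
    int n = 0;
    bits <<= (32 - width);             /* left-align: pixel 0 in the MSB */
    while (bits) {
        /* Leading ones = visible run. clz(0) is undefined, so the
         * all-ones case is handled separately. */
        int draw = (~bits == 0) ? 32 : __builtin_clz(~bits);
        spans[n++] = (uint8_t)draw;
        if (draw == 32)
            break;
        bits <<= draw;
        /* Leading zeros = obscured run. */
        int skip = bits ? __builtin_clz(bits) : 0;
        spans[n++] = (uint8_t)skip;
        bits <<= skip;
    }
    return n;
}

For the two example rows this yields 5, 2, 6, 3, 1, 0 and 4, 4, 4, 3, 1, 0 (with a trailing skip of 0 once the bitmap runs out).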

Then you can compress the pixel data so the ones to be skipped are removed, and resume processing the compact version with SIMD functions. Whether or not this is worth it really depends on how much work you're doing on the pixels after the depth test. It's probably not worth doing if you have tests that need the texel loaded, like alpha test. Pattern test you can handle by ANDing the visibility bitmap with the pattern.

Then when it comes time to write out the pixel and update depth you use the list that says how many to draw and how many to skip to determine where to write it.
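And a sketch of that final write-out step (assumed names again; the depth-buffer update would follow the same walk):

Code:
#include <stdint.h>

/* Write a compacted pixel stream back to the framebuffer using the
 * span list: spans[] alternates draw count and skip count. */
static void write_spans(uint16_t *fb, const uint16_t *compact,
                        const uint8_t *spans, int entries)
{
    for (int i = 0; i < entries; i += 2) {
        for (int d = 0; d < spans[i]; d++)   /* draw run: copy pixels */
            *fb++ = *compact++;
        if (i + 1 < entries)
            fb += spans[i + 1];              /* skip run: leave as-is */
    }
}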

I'm not doing anything too extreme for hidden surface removal in DS emulation, because the system requires that opaque polygons are rendered from the top of the screen to the bottom (and if you don't, some games break, with missing text and things like that). And because higher things on the screen tend to be further away, this works against early Z. But in your case, games can apply some level of sorting (for example, on a per-object basis) to render the objects nearest to the camera first and maximize early Z. This could be beneficial even with the early-Z exit you have now.
 
Thanks for the long explanation. Yes, I would need to rearrange a lot. However, if I wanted to start from the very beginning, I could just use GL ES. Writing a software renderer again (one which will not work on any device other than the Pandora) is not very motivating. :D

Pattern test you can handle by ANDing the visibility bitmap with the pattern
What I actually do: ;)
Code:
/* alternate between the two pattern vectors every four pixels
 * and AND the result into the lane mask */
mask = vand_u16(((x >> 2) & 1) ? pattern_2 : pattern_1, mask);
Greetings, Ziz
 
I have ~32 functions, which differ only in details (like with or without z test). I need to write one big #ifdef monster (which I already did for the function itself, but not for the pixel set...).
If it is C++ (vs. plain C), you have the option of using templates here too. I actually did something similar many years back on the PS2: there was a function that rendered any dynamic object, and it checked/handled many different things depending on the type of object. In one use case the game wanted to render huge numbers of extremely simple geometry, and the profiler showed this single dynamic render function was a huge bottleneck, so I created a stripped-down version that did exactly what that use case required, and performance for that particular code got hugely better.

Rather than having various functions that were almost the same, I instead created a single templated function, where the template parameters control which features are required. I checked the generated assembly and could see that in release builds it had created multiple variations of the function, with all the code stripped out as per the template parameters. I personally preferred this over a #ifdef approach (where presumably you include the same piece of code multiple times, with different defines set). Horses for courses and all that, but worth a ponder!
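A rough sketch of what that could look like (all names are illustrative, not from sparrow3d or the PS2 code): the bool template parameters play the role of the #ifdefs, and the compiler emits a stripped-down variant for each combination that is actually instantiated.

Code:
#include <cstdint>

template <bool WithZTest, bool WithAlphaTest>
static void set_pixel(uint16_t *fb, uint16_t *zb,
                      uint16_t texel, uint16_t z, uint16_t alpha)
{
    if (WithAlphaTest && alpha == 0)   // dead code when the parameter
        return;                        // is false; the compiler drops it
    if (WithZTest) {
        if (z >= *zb)
            return;
        *zb = z;
    }
    *fb = texel;
}

// Each instantiation becomes its own specialized function:
//   set_pixel<true,  false>(fb, zb, texel, z, alpha);  // z test only
//   set_pixel<false, true>(fb, zb, texel, z, alpha);   // alpha test only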
 
Interesting idea. I will use this the next time I run into such problems. I will not convert this dead horse from C to C++ just for this cosmetic change, but it's nice that C++ can do that. Regarding this part:
where presumably you include the same piece of code multiple times, with different defines set
No, I have the code only once, but with ifdefs like
Code:
x += x_d;
y += y_d;
#ifdef Z_FUN
  z += z_d;
#endif
However, I have 32 different function defines... ^^ I could improve this too, but I don't see a reason to (for now).
 