What the hell is NEON?


I've been hearing notaz and Exophase and others mention NEON a lot.  So what exactly is it?  Why is it so good?  And why does it help emulators on Pandora run faster?  I probably can google this but would like to hear from the real persons and pros here :)
 
[Image: neon green flower]
 
If you consider the main processing elements of the Pandora's SoC, you get:

  • General-purpose processing (the CPU): Capable of pretty much any computation, and responsible for almost everything happening on your Pandora.
  • Graphics processing (the GPU): Brilliant at drawing 3D projections of triangles, almost useless at anything else.
  • DSP: Designed to perform the same operations repeatedly on a constant stream of data. Good for audio processing and the like.
  • NEON (SIMD): Good for performing parallel operations on groups of similar data.
The last three are harder to code for (since you have to use them in conjunction with the general-purpose processor), but if your program requires the specific sort of operation each unit specialises in, they can provide a dramatic performance boost.

GPUs are better at drawing triangles than CPUs, for example.

Hopefully that will suffice until someone comes up with a more accurate answer - I am by no means an expert on this stuff.
 
http://upload.wikimedia.org/wikipedia/commons/3/3e/Electron_shell_010_Neon_-_no_label.svg

I'd sum up NEON on Pandora's Cortex-A8 processor with the following things that make it nicer than the normal instructions:

- It's wider. Where normal instructions let you do something like adding two 32-bit numbers, NEON lets you add four 32-bit numbers in one instruction (there's a short intrinsics sketch after these points).

- It's more specialized. There's a bunch of operations like widening/narrowing, min/max, abs, select, averages, negative variable shifts, and other more exotic stuff that would take a few instructions otherwise. There's even a small amount of 64-bit integer arithmetic (add/sub and shifts)

- It has (single precision) floating point instructions that don't suck, but also don't fully conform to the floating point standard. ARM has an instruction set called VFP for the normal floating point stuff but on Cortex-A8 it's slow, so if you want good floating point you want to use NEON. There are some specialized instructions here too, like reciprocal approximation.
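
As a rough, made-up sketch of what some of that looks like with intrinsics (all names invented for illustration; assumes an ARMv7 toolchain with NEON enabled, e.g. GCC with -mfpu=neon):

    #include <arm_neon.h>
    #include <stdint.h>

    void neon_examples(const int32_t *a, const int32_t *b,
                       int32_t *sum, int32_t *mx,
                       const float *x, float *recip)
    {
        /* One instruction adds four 32-bit integers at once. */
        int32x4_t va = vld1q_s32(a);
        int32x4_t vb = vld1q_s32(b);
        vst1q_s32(sum, vaddq_s32(va, vb));

        /* Specialized per-lane max; the scalar equivalent needs a compare
           and a select per element. */
        vst1q_s32(mx, vmaxq_s32(va, vb));

        /* Reciprocal approximation plus one Newton-Raphson refinement:
           vrecpsq_f32(d, e) computes 2 - d*e, so e * vrecpsq_f32(d, e)
           is one refinement step towards 1/d. */
        float32x4_t vx  = vld1q_f32(x);
        float32x4_t est = vrecpeq_f32(vx);
        est = vmulq_f32(est, vrecpsq_f32(vx, est));
        vst1q_f32(recip, est);
    }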

The downside is that it's hard to write NEON code that's actually fast. GCC has a long way to go before it even understands how to turn a lot of complex C code into good NEON, and if you use intrinsics (basically NEON instructions from inside C/C++ code) you still often end up with a bunch of garbage in between. So the best course of action - at least as far as I've found - is to write assembly code by hand. But it's not easy. Getting NEON code fast on Cortex-A8 is very challenging because pretty much all of the instructions have longer latency than their scalar counterparts, and there's a bunch of stuff that will randomly slow you down tremendously if you're not careful (and that ARM doesn't really document). So you have to interleave independent instructions, which can involve some creative approaches (unrolling, software pipelining) and very tight use of registers.
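
As a made-up sketch of the interleaving idea (written with intrinsics for readability, though in practice you'd hand-schedule the equivalent in assembly): unrolling with two independent accumulators means back-to-back adds don't depend on each other's results, so their latencies can overlap.

    #include <arm_neon.h>

    /* Illustrative only: sum a float array, assuming n is a multiple of 8.
       The acc0 chain and the acc1 chain are independent, so consecutive
       adds don't stall waiting for each other. */
    float sum_f32(const float *src, int n)
    {
        float32x4_t acc0 = vdupq_n_f32(0.0f);
        float32x4_t acc1 = vdupq_n_f32(0.0f);

        for (int i = 0; i < n; i += 8) {
            acc0 = vaddq_f32(acc0, vld1q_f32(src + i));
            acc1 = vaddq_f32(acc1, vld1q_f32(src + i + 4));
        }

        /* Reduce the two vector accumulators to a single float. */
        float32x4_t acc = vaddq_f32(acc0, acc1);
        float32x2_t lo  = vadd_f32(vget_low_f32(acc), vget_high_f32(acc));
        return vget_lane_f32(vpadd_f32(lo, lo), 0);
    }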

I encourage people to try if they're interested, but always benchmark your code to make sure you're actually making something faster. Consider yourself very lucky if your benchmark can come even remotely close to a hand-count of the timing.
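
For the benchmarking part, even something crude like this is enough to tell whether a change actually helped (scale_buffer is just a stand-in for whatever routine you're optimising - swap in the scalar and NEON versions and compare the times):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Crude timing harness sketch; may need -lrt on older toolchains. */
    static double seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    /* Stand-in for the routine under test; replace with the real thing. */
    static void scale_buffer(float *buf, int n)
    {
        for (int i = 0; i < n; i++)
            buf[i] *= 1.0009765625f;
    }

    int main(void)
    {
        enum { N = 1 << 16, ITERS = 1000 };
        float *buf = calloc(N, sizeof(float));

        double start = seconds();
        for (int i = 0; i < ITERS; i++)
            scale_buffer(buf, N);
        printf("%d iterations: %f s (buf[0]=%f)\n",
               ITERS, seconds() - start, buf[0]);

        free(buf);
        return 0;
    }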
 
What about compilers other than GCC (e.g. FreeBSD's recent switch to Clang)?
 
They are improving: http://www.phoronix.com/scan.php?page=news_item&px=MTMwMTk

But in all the recent benchmarks I've seen (on Linux x86, to be fair), GCC is slightly faster than Clang on most of the tested software, across very different kinds of tasks. I won't cite them all, but the ones I saw were from the same site linked above. Look for yourself. Maybe Clang is better on ARM.
 
This is why IMHO devs are the smartest folks on this planet!
 
Please continue to refer to us by the word "dev". I don't appreciate it when people use the word "programmer" or "coder", because we do more than just code.

And I'm sure there are smart people in every field, not only among devs.

;)
 
I'm an engineer, but without the programs you devs develop, we engineers, doctors, and everyone else whose profession requires computers would just be sitting ducks...
 