Neon-ized Math.h Like Library


Exophase said:
So of course I want to know the nature of these problems. Are you talking about compiler bugs in CodeSourcery? Bugs specific to OMAP35xx? Bugs specific to Cortex-A8 that have not been resolved? Or are you just uncomfortable about people using it because you're concerned about implementation costs in future CPUs due to the mixed length decoding?
First of all, let me say this is personal preference, not the view of my company.

Without going into too many details, the variable length instructions cause huge complexity in instruction fetching and branch prediction. That complexity can lead to hardware bugs in the CPU, especially since validating all the corner cases can be extremely difficult. Enough said I guess :)

If it were up to me, T2 would just be deprecated and removed from CPUs that have the ARM instruction set (that is, leave T2 for CPUs designed for platforms where code density is a must). An alternative would be to make an evolution of T2 that removes 16-bit instructions, but then the code density gain would vanish.
 
Laurent said:
First of all, let me say this is personal preference, not the view of my company.

Without going into too many details, the variable length instructions cause huge complexity in instruction fetching and branch prediction. That complexity can lead to hardware bugs in the CPU, especially since validating all the corner cases can be extremely difficult. Enough said I guess :)

If it were up to me, T2 would just be deprecated and removed from CPUs that have the ARM instruction set (that is, leave T2 for CPUs designed for platforms where code density is a must). An alternative would be to make an evolution of T2 that removes 16-bit instructions, but then the code density gain would vanish.

I understand and respect your personal stance, but don't you think that telling people not to use something because you think it might have bugs due to its complex nature is unconvincing? Other platforms (i.e. x86) do a good enough job with much more complex instruction lengths, so I have faith that ARM can handle Thumb-2. You probably know things we don't, though.
 
I dislike T2 not only because it makes the I-cache and branch predictors more complex, hence making it potentially buggy, but also because supporting 2 different ISAs on the same CPU is counterproductive. You'd better focus on one ISA and make it better. And since T2 has variable-length instructions, let's focus on the ARM ISA :)

And x86 does a more than decent job with VL instructions because it devotes a lot of silicon to that (and also there are some shortcuts here and there that can kill your performance, such as Intel cores being able to only fetch 16 bytes of instructions at once, hence creating a bottleneck for x86_64 [AMD does 32 bytes]).
 
Laurent said:
I dislike T2 not only because it makes the I-cache and branch predictors more complex, hence making it potentially buggy, but also because supporting 2 different ISAs on the same CPU is counterproductive. You'd better focus on one ISA and make it better. And since T2 has variable-length instructions, let's focus on the ARM ISA :)

And x86 does a more than decent job with VL instructions because it devotes a lot of silicon to that (and also there are some shortcuts here and there that can kill your performance, such as Intel cores being able to only fetch 16 bytes of instructions at once, hence creating a bottleneck for x86_64 [AMD does 32 bytes]).

I thought the primary aim of supporting both was that Thumb2 is the desired future path for ARM (this seems to be the implication from the fact that certain things are now supported ONLY in T2, such as NEON predication), but for an app processor there is a significant advantage in offering both - for example, if you are Apple and you want to speed up your new phone-like product! In fact, any company that relies on binary compatibility will require that ARM is still supported, but ARM are obviously looking ahead, and Thumb2 is, at least in their mind, the future.

As long as there are no fundamental issues with Thumb2 on the A8 (and ARM are very annoying and don't provide errata publicly, or so it would seem), then it would seem the better approach, providing a denser instruction encoding and hopefully better icache usage, surely?
 
Laurent said:
I dislike T2 not only because it makes the I-cache and branch predictors more complex, hence making it potentially buggy, but also because supporting 2 different ISAs on the same CPU is counterproductive. You'd better focus on one ISA and make it better. And since T2 has variable-length instructions, let's focus on the ARM ISA :)

And x86 does a more than decent job with VL instructions because it devotes a lot of silicon to that (and also there are some shortcuts here and there that can kill your performance, such as Intel cores being able to only fetch 16 bytes of instructions at once, hence creating a bottleneck for x86_64 [AMD does 32 bytes]).

I'm still waiting for those errata you talked about. All I'm hearing is speculation backed by your own personal interests. You might not like Thumb-2's presence on Cortex-A8, but since it's there I don't see why you'd be against using it. That, and I don't think it's going to be going away. If anything I agree with andys: since ARM has already transitioned its deeply embedded cores to Thumb-2 only, it makes sense that they'd want to push this trend further. Of course I understand that these cores have a much simpler pipeline and no branch prediction or cache, alleviating a lot of the headaches with variable-length decoding (and still having some anyway), but I can't see ARM introducing Thumb-2 across the board then dropping it, AND probably dropping support for 16-bit Thumb at the same time.

In terms of functionality per instruction I believe Thumb-2 to be the preferred ISA. I'd rather have the extended immediates, movw, movt, wide add/subtract, the bit field operations, compare + branch, and the other less interesting instructions than predication per instruction. I feel that ARM was right to ditch those 4 bits per instruction in favor of improved instruction capability, at the expense of having an IT instruction. I wouldn't be surprised if that IT instruction can eventually be folded for 0-cycle cost ahead of time (if something isn't already doing this). Improvements to branch prediction should come naturally as well, with fewer penalties due to the word-unaligned instructions. But all told, I feel that 32-bit-only Thumb-2 would be an improvement over ARM - if they're going to go with one ISA then I'd prefer they pick the one that was rethought to better fit new trends. Although the Thumb-2 32-bit encoding space is rather polluted as it is now by the inclusion of 16-bit Thumb.
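
As a small illustration of the immediate handling (my own made-up example, not from ARM's docs): movw/movt build any 32-bit constant in two instructions, with no literal-pool load, and the pair encodes in both ARM and Thumb-2 on ARMv7.

Code:
#include <stdint.h>

/* movw writes the low halfword, movt the high halfword; together they load
   a full 32-bit constant without touching memory. */
static inline uint32_t make_const(void)
{
    uint32_t r;
    __asm__ ("movw %0, #0x5678\n\t"   /* low 16 bits  */
             "movt %0, #0x1234"       /* high 16 bits */
             : "=r" (r));
    return r;                          /* r == 0x12345678 */
}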

Maybe a further compromise would be to require that 32-bit instructions be aligned to 32-bit boundaries. I think there's no way ARM would go for this, though (especially since it'd break compatibility anyway).
 
Exophase said:
In terms of functionality per instruction I believe Thumb-2 to be the preferred ISA. I'd rather have the extended immediates, movw, movt, wide add/subtract, the bit field operations, compare + branch, and the other less interesting instructions than predication per instruction. I feel that ARM was right to ditch those 4 bits per instruction in favor of improved instruction capability, at the expense of having an IT instruction. I wouldn't be surprised if that IT instruction can eventually be folded for 0-cycle cost ahead of time (if something isn't already doing this). Improvements to branch prediction should come naturally as well, with fewer penalties due to the word-unaligned instructions. But all told, I feel that 32-bit-only Thumb-2 would be an improvement over ARM - if they're going to go with one ISA then I'd prefer they pick the one that was rethought to better fit new trends. Although the Thumb-2 32-bit encoding space is rather polluted as it is now by the inclusion of 16-bit Thumb.

Maybe a further compromise would be to require that 32-bit instructions be aligned to 32-bit boundaries. I think there's no way ARM would go for this, though (especially since it'd break compatibility anyway).

I think that handling mixed 16/32-bit instructions, while not as "nice" as plain word-aligned 32-bit instructions, is not that evil anyway - I think when you go superscalar you would want to be able to handle two instructions straddling a cache line by pre-fetching (or whatever), in which case you already have to handle the nasty cases in the A8 or you would have dire performance anyway, and according to some documents I just randomly looked up (http://www.arm.com/products/CPUs/archi-thumb2.html - the whitepaper bit) it gives something like a 30% code density improvement, which they probably consider well worth the added space and verification requirements - especially given the massive differences in running in and out of cache. Also it's certainly nowhere near as nasty as the Intel instruction set - I believe it's meant to be pretty obvious with a quick look at the first 16 bits exactly how long the instruction is - there's no complex prefix malarkey (which on x86 means instructions can be 13 bytes or so long) - it's 2 or 4 bytes, and it's obvious from the first couple of bytes exactly how long it is.
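
For what it's worth, here's a minimal sketch of that length check (my own illustration of the rule, not lifted from ARM's docs) - the first halfword alone tells you whether the instruction is 16 or 32 bits wide:

Code:
#include <stdint.h>

/* In Thumb-2 the width of an instruction is determined entirely by the top
   5 bits of its first halfword: 0b11101, 0b11110 and 0b11111 mean "first
   half of a 32-bit instruction"; everything else is a 16-bit instruction. */
static int thumb2_insn_size(uint16_t first_halfword)
{
    unsigned top5 = first_halfword >> 11;
    return (top5 == 0x1D || top5 == 0x1E || top5 == 0x1F) ? 4 : 2;
}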

From discussion on realworldtech it seems like quite a few processors are going this way - having a variable-length instruction set to improve icache usage, including POWER at least (http://en.wikipedia.org/wiki/Power_Architecture - search for VLE) and I'm sure somebody mentioned MIPS, but I can't find a link - although they do have a 16-bit Thumb-esque thing. I don't disagree that a single fixed size is very nice for programmers, but I think as long as the encoding is fairly regular it's not a disaster, and from a cursory examination I think Thumb2 is pretty regular, merely giving multiple ways of encoding some instructions, rather than creating a lot of complexity. That said, there may be some hidden warts I missed while looking!
 
Heh, I said it from the very beginning: that's my point of view as a member of a CPU design team, nothing else.

About errata, they're unsurprisingly related to branch prediction and 4 KB page crossing. Some of them require disabling some features of the branch predictors, which will reduce code performance.

BTW almost all the T2 instructions you describe are in the ARM instruction set (the only missing ones being IT and compare-and-branch [which is so CISCy that its implementation is probably bad on most advanced cores]).

Now you should explain to me how to fold an IT instruction, some of my colleagues would be very interested :lol: This instruction creates a total mess by making all the following instructions in the block dependent on a supplementary register. It also means you get a supplementary state machine in your decoder.

I've seen programs going slightly faster running T2 (reduced Icache pressure). The problem is that the gain is so small that if the transistor budget and the engineering time had been spent on ARM, every program would have been faster.

Now if you really want some CISC core (to me it's what T2 looks like, and ARM too but to a lesser extent), wait for the next generation of Atom ;)
 
andys said:
I believe it's meant to be pretty obvious with a quick look at the first 16 bits exactly how long the instruction is - there's no complex prefix malarkey (which on x86 means instructions can be 13 bytes or so long) - it's 2 or 4 bytes, and it's obvious from the first couple of bytes exactly how long it is.
The problem is that looking at these 2 bytes and doing some masking has to be done with transistors, and these might very well be on a critical path. Do you really want to pay a pipe stage or have to reduce frequency?

I don't disagree that a single fixed size is very nice for programmers, but I think as long as the encoding is fairly regular it's not a disaster, and from a cursory examination I think Thumb2 is pretty regular, merely giving multiple ways of encoding some instructions, rather than creating a lot of complexity. That said, there may be some hidden warts I missed while looking!
The problem is not that 32-bit instructions are nicer to programmers, the problem is that VL instructions are a pain for CPU architects and designers :)
 
Laurent said:
Heh, I said it from the very beginning: that's my point of view as a member of a CPU design team, nothing else.

About errata, they're unexpectedly related to branch prediction and 4 KB page crossing. Some of them require to disable some features of branch predictors, which will reduce code performance.

You could stand to be more specific still ;P Or are errata secrets too?

Laurent said:
BTW almost all the T2 instructions you describe are in ARM instruction set (the only missing ones being IT and compare and branch [which is so CISCy that its implementation is probably bad on most advanced cores]).

Then I think ARM needs to change its quick reference card so that it doesn't label them all as "T2." You have a pretty liberal definition of "CISCy" to consider that instruction SO much so - but I would compromise for a way to load and set flags. The load-use penalty is probably going to get you for an instruction like that anyway.

Laurent said:
Now you should explain me how to fold an IT instruction, some of my colleagues would be very interested :lol: This instruction creates a total mess by making all the following instructions in the block dependent on a supplementary register. It also makes sure you get some supplementary state machine in your decoder.

Not that I thought about it very much. It didn't seem much worse than folding branches - I suppose it'd depend on how your icache is organized/annotated to try to help this.

Laurent said:
I've seen programs going slightly faster running T2 (reduced Icache pressure). The problem is that the gain is so small that if the transistor budget and the engineering time had been spent on ARM, every program would have been faster.

Then why do you think ARM included support for Thumb-2? Is it just a holdover from when CPUs like ARM7TDMI were cacheless and not guaranteed a 32-bit interface to code memory? Thumb-2 makes obvious sense in Cortex-M3/M0 but I can't really imagine ARM caring so much about compatibility with those.

Laurent said:
Now if you really want some CISC core (to me it's what T2 looks like, and ARM too but to a lesser extent), wait for the next generation of Atom ;)
This is why I don't like "CISC" and "RISC", because the terminology is so subjective. I'll admit that variable length instruction sets are more of a CISC staple, but 2/4 byte is hardly the same as what most variable length ISAs provide and there are a whole host of other things to differentiate an ISA like x86 from ARM or Thumb-2.
 
Ahhh, ye olde "grill Laurent for answers" game. :)

Just posting to say I've entered this library in the Beagleboard sponsored projects scheme. If they're interested they'll send me a beagleboard, which will be highly useful considering my Pandora looks to be ~3 months away (I'm near the end of the list).

Anyway, one question that I'm unsure about. Would it be beneficial, instead of transferring directly from a NEON to an ARM register, to VSTR from the NEON side and then, when it's needed, LDR on the ARM side? Would this allow the processor to hide the nasty 20-cycle latency if the result is not required straight away?
 
There's no way I'll help you since we are now fighting in the contest :p

In fact I don't know the answer to your question :)
 
Adventus said:
Anyway, one question that I'm unsure about. Would it be beneficial, instead of transferring directly from a NEON to an ARM register, to VSTR from the NEON side and then, when it's needed, LDR on the ARM side? Would this allow the processor to hide the nasty 20-cycle latency if the result is not required straight away?
There are some benchmarks on this:
http://hardwarebug.org/2008/12/31/arm-neon-memory-hazards/
 
Adventus said:
Anyway, one question that I'm unsure about. Would it be beneficial, instead of transferring directly from a NEON to an ARM register, to VSTR from the NEON side and then, when it's needed, LDR on the ARM side? Would this allow the processor to hide the nasty 20-cycle latency if the result is not required straight away?
While I haven't tried this out in practice yet, I've pondered on the subject, and a highly speculative answer would be "yes, as long as you mind the pipeline hazards".

The reasoning is quite simple: while the MRC is effectively an interlock mechanism between the two pipelines, a store-load path would employ the load/store units in each pipeline, ergo, it will be pipelinable in both pipelines. What remains is watching out for load/store hazards (partially described in the article notaz referred to).
 
The reasoning is quite simple: while the MRC is effectively an interlock mechanism between the two pipelines, a store-load path would employ the load/store units in each pipeline, ergo, it will be pipelinable in both pipelines. What remains is watching out for load/store hazards (partially described in the article notaz referred to).
Yeah, that's what I was thinking.

ssvb on the beagleboard IRC kindly tested it for me. Here are the results: http://pastebin.ca/1495517

Basically it confirms what I thought. The VMOV instruction produces a stall, while the VSTR/LDR pair is worse when the two occur directly after one another but significantly better when you do some ARM work in between. In real-world terms this means that the vector versions of my functions may be faster in certain circumstances because they return by reference.
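
To make the two patterns concrete, here's a rough intrinsics sketch (my own illustration - the library itself is hand-written assembly, and whether the compiler actually emits VMOV versus VSTR/LDR here depends on the compiler and optimisation level):

Code:
#include <arm_neon.h>

/* Direct transfer: typically compiles to a VMOV from a NEON register to an
   ARM register, stalling the ARM pipeline until the NEON result is ready. */
float result_direct(float32x2_t v)
{
    return vget_lane_f32(v, 0);
}

/* Store/load transfer: store the lane from the NEON side, do independent ARM
   work, then read it back on the ARM side; with enough work in between, the
   NEON-to-memory latency is hidden. */
float result_via_memory(float32x2_t v, const int *work, int n)
{
    float tmp;
    int acc = 0, i;

    vst1_lane_f32(&tmp, v, 0);      /* NEON side: store one lane to memory */
    for (i = 0; i < n; i++)         /* independent ARM work */
        acc += work[i];

    return tmp + (float)acc;        /* ARM side reads tmp back here */
}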

There's no way I'll help you since we are now fighting in the contest
Ohhh, we're neck and neck. One point each. :)
 
Well, I've just spent my first weekend with a devkit; it was an interesting ride. Since it has almost zero dependencies, the first task I set myself was to get my math-neon project up and running. Here are some tests with the softfp ABI:

Code:
RUNFAST: Disabled
Function	Range			Number	Max Error (%)	Time (us)
---------------------------------------------------------------------------
sinf       	[-3.14, 3.14]		1000000	0.00e+00	1312500
sinf_c     	[-3.14, 3.14]		1000000	8.38e-03	1101562
sinf_neon  	[-3.14, 3.14]		1000000	8.38e-03	492188
expf       	[0.00, 50.00]		1000000	0.00e+00	2210938
expf_c     	[0.00, 50.00]		1000000	2.19e-04	1046875
expf_neon  	[0.00, 50.00]		1000000	2.19e-04	406250
logf       	[1.00, 10000.00]	1000000	0.00e+00	1570312
logf_c     	[1.00, 10000.00]	1000000	1.70e-03	953125
logf_neon  	[1.00, 10000.00]	1000000	1.70e-03	406250
floorf     	[1.00, 10000.00]	1000000	0.00e+00	437500
floorf_c   	[1.00, 10000.00]	1000000	0.00e+00	421875
floorf_neon	[1.00, 10000.00]	1000000	0.00e+00	289062
sqrtf      	[1.00, 10000.00]	1000000	0.00e+00	1078125
sqrtf_c    	[1.00, 10000.00]	1000000	1.06e-03	1046875
sqrtf_neon 	[1.00, 10000.00]	1000000	2.94e-05	437500
---------------------------------------------------------------------------

RUNFAST: Enabled
Function	Range			Number	Max Error (%)	Time (us)
---------------------------------------------------------------------------
sinf       	[-3.14, 3.14]		1000000	0.00e+00	960938
sinf_c     	[-3.14, 3.14]		1000000	8.38e-03	757813
sinf_neon  	[-3.14, 3.14]		1000000	8.38e-03	484375
expf       	[0.00, 50.00]		1000000	0.00e+00	2078125
expf_c     	[0.00, 50.00]		1000000	2.19e-04	671875
expf_neon  	[0.00, 50.00]		1000000	2.19e-04	390625
logf       	[1.00, 10000.00]	1000000	0.00e+00	1140625
logf_c     	[1.00, 10000.00]	1000000	1.70e-03	609375
logf_neon  	[1.00, 10000.00]	1000000	1.70e-03	382813
floorf     	[1.00, 10000.00]	1000000	0.00e+00	429688
floorf_c   	[1.00, 10000.00]	1000000	0.00e+00	390625
floorf_neon	[1.00, 10000.00]	1000000	0.00e+00	281250
sqrtf      	[1.00, 10000.00]	1000000	0.00e+00	1078125
sqrtf_c    	[1.00, 10000.00]	1000000	1.06e-03	726563
sqrtf_neon 	[1.00, 10000.00]	1000000	2.94e-05	437500
---------------------------------------------------------------------------
Notes:
- You can see how I'm testing here: http://code.google.com/p/math-neon/source/browse/trunk/math_debug.c
- Clearly enabling RunFast mode is a good idea (see the sketch after these notes).
- A significant amount of time is being spent transferring from NEON->ARM registers, but I cannot test the hard floating-point ABI yet. For instance, disabling the transfer in the sinf_neon function gave me a RunFast time of 382812, which is about a 20% difference.
- The *_c functions are C implementations of the *_neon algorithms.
- Some of the *_neon functions could be doing two of these operations in the same time, i.e. sqrtf_neon and floorf_neon are only using half the NEON pipeline.
- "Time" represents how long it took to do "Number" of calls to the function over the full "Range".
 
Nice :)

Why does runfast mode affect NEON versions? I thought it only had an impact on VFP instructions. Or are you using VFP instructions?

I took a very quick look at your code. Given how inexperienced I am with NEON, the only advice I can offer is to ensure that static arrays are aligned to a good value (play with __attribute__((aligned(n)))).
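
Something along these lines is what I mean (made-up names and values, purely illustrative):

Code:
#include <arm_neon.h>

/* A 16-byte-aligned coefficient table lets the NEON load use an aligned
   128-bit access. */
static const float coeffs[4] __attribute__((aligned(16))) = {
    1.0f, 0.5f, 0.25f, 0.125f
};

static inline float32x4_t load_coeffs(void)
{
    return vld1q_f32(coeffs);   /* loads all four floats in one go */
}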
 
Why does runfast mode affect NEON versions? I thought it only had an impact on VFP instructions. Or are you using VFP instructions?
I think it's purely because my testing code uses a few float operations, like this: for(float x = x0; x < x1; x += dx) (*func)(x). I have to test over a decent range because the built-in functions sometimes have branches that optimize for certain cases.
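
Roughly, the timing loop looks like this (simplified - the real code is in math_debug.c linked above, so treat the names here as illustrative). The loop's own float compare and add still go through VFP under softfp, which is why RunFast affects every variant:

Code:
#include <sys/time.h>

typedef float (*test_func)(float);

/* Sweep x across [x0, x1) and call the function under test through a
   pointer; return the elapsed time in microseconds. */
static long time_func(test_func f, float x0, float x1, float dx)
{
    struct timeval t0, t1;
    float x;

    gettimeofday(&t0, NULL);
    for (x = x0; x < x1; x += dx)
        (*f)(x);
    gettimeofday(&t1, NULL);

    return (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
}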

I took a very quick look at your code. Given how inexperienced I am with NEON, the only advice I can offer is to ensure that static arrays are aligned to a good value (play with __attribute__((aligned(n)))).
That's a good point, I did add a macro for it but I forgot to use it anywhere... :)
 
BTW you should probably state what libm you're using.

What prevents you from using hardfp ABI? Lack of compiler and/or lib? My understanding is that CodeSourcery arm2009q1 supports it.
 
BTW you should probably state what libm you're using.
I'm linking to libm.so.6
What prevents you from using hardfp ABI? Lack of compiler and/or lib? My understanding is that CodeSourcery arm2009q1 supports it.
Yeah, the compiler supports it... kind of. It compiles the individual objects fine, but when it links them it says that your program doesn't use VFP arguments. I vaguely remember someone saying it's because libgcc/libc were compiled with softfp...
 
There are a few open-source 3D/2D engines/libraries available for the iPhone (oolong, cocos2d)... with the current licence I don't think it would be possible to use your code in these engines, simply because there is no way to deploy dynamic libraries on the iPhone and everything has to be statically linked - of course if you don't care about the iPhone then it makes no difference :)
 