Avoid Doubles And Longs?


QUOTE
On a somewhat related topic, how do we use the NEON unit for floats?
Some versions of CodeSourcery G++ will generate vectorised NEON instructions under the correct circumstances. I would use either 2008q72 or 2007q51 ... the 2008 version has better vectorisation, but some bugs have been found.

To enable Neon acceleration something like this should work: " -O3 -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=softfp".

Obviously, you could also hand-write the ASM if you're keen.
 
oh, look, a dead horse! *puts on kicking shoes*

as has been said and repeated time and again in this thread, if there's an fpu in a system, chances are you'll want to use it for most types of full-blown arithmetics. and this is before we consider things like missing integer ops in the system (e.g. idiv on cortex-a8).

floats address one fundamental problem of finite discrete arithmetics - the run-off bits, which any integer-carried fixed-point arithmetics would need to take extra care handling. or in other words, for any arbitrary sequence of multiplications/divisions carried by a 'fractional-point-unaware' unit, the programmer needs to take special care of run-off bits, through means of explicit 're-normalization' of (some of) the transients and/or end results. while floats are not devoid of the issue (by their 'virtue' of being finite and discrete, too), they tackle the issue really well, by utilizing the empirical fact that most of the time tracking ranges is more important than providing maximal precision at a fixed range. actually, IMHO floats are one of the brighter human inventions in arithmetics, and have been in use long before computers came to be.
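
to make the renormalization point concrete, here's a minimal sketch assuming a Q16.16 fixed-point format (the type and helper names are made up for illustration, not taken from any particular library):

CODE
#include <stdint.h>

/* Q16.16 fixed point: 16 integer bits, 16 fractional bits */
typedef int32_t fix16;

/* after a multiply the product carries 32 fractional bits, so the
   programmer must 're-normalize' by shifting 16 of them back out -
   by hand, after every multiplication */
static fix16 fix16_mul(fix16 a, fix16 b)
{
    return (fix16)(((int64_t)a * b) >> 16);
}

/* with floats the hardware tracks the 'fractional point' (the
   exponent) for us - no shifts, no range bookkeeping */
static float float_mul(float a, float b)
{
    return a * b;
}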

bottomline being, floats are your friends - use them whenever available (but know their quirks too), as chances are that for most 'fuzzy-range' arithmetics, you'll want to emulate floating-point behavior anyway (man, aren't we lucky to have hardware-implemented floats? ; )
 
darkblu; I take it by "full blown arithmetics" you mean anywhere where integer won't suffice (bearing in mind that integer has a place in a majority of applications, even scientific computation).

Suggesting to "use floats whenever they're available" is over-generalized advice. Usually there are integer multiplication instructions that cover the entire range of results - if your results end up staying around the same order of magnitude within a few bits (and yes, there are applications where this happens) or if you can shift around where the decimal point is at different stages of computations because the order of magnitudes are a compile time known parameter then fixed point can suffice just fine. If the CPU doesn't have a full speed (or any) FPU then you really don't necessarily have to use it. Then you end up with more significant bits in 32bit int than 32bit float like I originally said.

You don't really just have to take my word for it - just look at the programming languages that have an explicit "fixed" type in addition to floating point support. You don't think they made these only for CPUs that don't have FPUs, do you?

Saying that floating point is always more precise or always should be used if available is probably even more foolish than saying that fixed point is always faster than floating point.
 
Exophase said:
darkblu; I take it by "full blown arithmetics" you mean anywhere where integer won't suffice (bearing in mind that integer has a place in a majority of applications, even scientific computation).
by 'full-blown' i mean arithmetics where ranges vary and expressions are more 'complex', particularly those featuring 'deranging' ops like multiplication and exponentiation. of course integers can be made to suffice for everything a float can reach, but it's a matter of convenience, and occasionally, performance (i.e. if you're going to effectively emulate floats anyway).

QUOTE
Suggesting to "use floats whenever they're available" is over-generalized advice. Usually there are integer multiplication instructions that cover the entire range of results - if your results end up staying around the same order of magnitude within a few bits (and yes, there are applications where this happens) or if you can shift around where the decimal point is at different stages of computations because the order of magnitudes are a compile time known parameter then fixed point can suffice just fine.


see the 'full-blown arithmetics' remark, and try to re-apply the bolded part of your post ; )

every fixed-point implementation of a 'full-blown arithmetics' is riddled with re-normalizations (i.e. shifts chasing the fractional point) - 'oh, look - a multiplication - quick, let's renormalize!' - and careful considerations of what ranges can be expected where (as people really don't care to emulate complete floating-point behavior in software these days), so their integers would not blow up. heck, even when we one day move to 128-bit integers and tracking ranges is not much of a problem anymore (128 bits suffice to cover the span of the universe down to a nm * 10^-2), the re-normalizations will still be there in fixed point! in floating point the fpu does that for you.

QUOTE
If the CPU doesn't have a full-speed (or any) FPU then you don't necessarily have to use it. Then you end up with more significant bits in a 32-bit int than in a 32-bit float, like I originally said.


what's a 'full speed fpu'? i'm yet to see an fpu which is slower than a reasonable sw emulation on the respective system.

QUOTE
You don't really just have to take my word for it - just look at the programming languages that have an explicit "fixed" type in addition to floating point support. You don't think they made these only for CPUs that don't have FPUs, do you?


you seem to think i said somewhere people should not use integers per se, which i did not. integers, and thus, fixed point, have their place, but it's not in full-blown arithmetic scenarios.

QUOTE
Saying that floating point is always more precise or always should be used if available is probably even more foolish than saying that fixed point is always faster than floating point.


first, i did not say anything about precision, and second, floating point should be used always if available when you need full-blown arithmetics, and there are no if's, but's and maybe's. particularly for such things as 3d engine spatial arithmetics - there are 1001 things you'd rather spend your time optimizing in such code than trying to beat the fpu to the clock in your lowest-level arithmetics. trust me, i do game engines for a living.
 
I had a program that did floats, and my math got totally screwed up, as adding one to the number made it become 2.0000000000031 or something like that. Not sure why, but it totally screwed up my entire program.
 
PSyMastR said:
I had a program that did floats, and my math got totally screwed up, as adding one to the number made it become 2.0000000000031 or something like that. Not sure why, but it totally screwed up my entire program.

I'm not an expert on this, but I ran across it a little while back. The following computation was done using floating point:

0.808611 + 1232783568 = 1232783616.000000

This is obviously wrong; only the leading digits are accurate. The best explanation I can give is that you'll run into errors when doing math on extremely large and extremely small floats together. You have to keep their magnitudes within a certain range of each other to preserve accuracy. Maybe someone who understands this better could add to this :)
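
The effect is easy to reproduce. With a 24-bit significand, the gap between adjacent floats around 1.2 billion is 128, so the small addend is simply absorbed (a quick demo, assuming IEEE 754 single precision):

CODE
#include <stdio.h>

int main(void)
{
    float big   = 1232783568.0f;  /* ~1.2e9: adjacent floats here are 128 apart */
    float small = 0.808611f;
    /* 'big' itself already rounds to 1232783616, and adding 'small'
       cannot move a value whose smallest representable step is 128 */
    printf("%f\n", big + small);  /* prints 1232783616.000000 */
    return 0;
}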
 
darkblu said:
by 'full-blown' i mean arithmetics where ranges vary and expressions are more 'complex', particularly those featuring 'deranging' ops like multiplication and exponentiation. of course integers can be made to suffice for everything a float can reach, but it's a matter of convenience, and occasionally, performance (i.e. if you're going to effectively emulate floats anyway).
Of course, you never made that clear and no one would be able to guess such a definition, but in the real world not every application is "full blown" as you put it. Far from it. The bottom line is that you don't have an intimate understanding of the nature of the operations being used in the applications the topic creator is working with, so you should keep in mind that the "full blown" applications you keep mentioning may not apply. A major part of optimization (and that's more or less what this topic is about) is understanding the contexts in which the inputs can reside. Since this hasn't been explained to us, the best we can really do is help the topic creator understand it.

And I actually do understand how floating point works and why it's useful, so you don't need to educate me on it. What you said is that an FPU should be used whenever available. Yes, shifts are necessary after fixed point operations, by definition. This is something you keep in mind when you decide to use it. Telling people "just use floats so you don't have to think about it" is not sufficient advice.

Okay, since you didn't know what I meant by "full speed FPU": I meant one that's pipelined and generally on the same speed scale as integer operations, as long as heavy dependency chains are avoided. In other words, not what Cortex-A8 has under normal conditions (in particular not with double precision operations, which are what you need to get the same level of precision that 32-bit fixed point would give you). Software emulated FPU operations can be orders of magnitude slower than hardware, so it's pretty natural that there will be speed ranges in between. Anyone can see (by looking at the benches in this topic) that if Pandora is going to give a lot of scalar float performance, it's not doing it right now.

darkblu said:
first, i did not say anything about precision, and second, floating point should be used always if available when you need full-blown arithmetics, and there are no if's, but's and maybe's. particularly for such things as 3d engine spatial arithmetics - there are 1001 things you'd rather spend your time optimizing in such code than trying to beat the fpu to the clock in your lowest-level arithmetics. trust me, i do game engines for a living.
Oh please, I'm really tired of the "I've been doing it for X years, I do it for a living" etc argument.. you don't know what I do for a living so please don't assume you have so much on me. Look, the guy never said what he's doing with it, and if you think that it's "never worth trying to beat the FPU" then you very well might not have a good idea of exactly how fast the IEEE compatible FPU on Pandora is (or rather, isn't). But obviously 3D can't be that impossible using fixed point or there wouldn't be various 3D platforms using it (PS1, Saturn, Nintendo DS, etc). And these are fixed without the kind of flexibility that you'd get in a software implementation. Now don't get me wrong, if you're doing 3D on Pandora then you should be using NEON or the vertex shading on the GPU, and if that's where the game's computations are going then I'd wonder why the 3D math part wasn't done in OGL in the first place (or if the game is recent enough for it to really matter). Nonetheless, C float types for the geometry calculations isn't really the right move (it'd be up there with software rasterization); but that hardly seems to describe his situation because 3D shouldn't have used doubles either. Not that that isn't what happened.

I'm just going to say it again. Your notions of "full blown arithmetic" are over-generalizations, and so is insisting that an FPU must always be used under any vague understanding of the problem, regardless of how capable the actual FPU is, or what the exact performance requirements are.
 
PSyMastR said:
I had a program that did floats, and my math got totally screwed up, as adding one to the number made it become 2.0000000000031 or something like that. Not sure why, but it totally screwed up my entire program.
If you'd like to know why, I suggest reading this: http://docs.sun.com/app/docs/doc/800-7895/...l=en&a=view

If I remember right, 1.10 is another of those magical numbers that can't be stored exact in a float or double. ;)
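
Right: 1.1 has no exact binary representation, so neither float nor double stores it exactly. Easy to check (the exact digits printed may vary by platform, but the pattern holds for any IEEE 754 double):

CODE
#include <stdio.h>

int main(void)
{
    double d = 1.1;           /* actually the nearest double to 1.1 */
    printf("%.17f\n", d);     /* e.g. 1.10000000000000009 */
    /* moral: never compare floats with ==, compare against a tolerance */
    return 0;
}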
 
I admit not having read through all the various responses, but just to throw in my half penny: a lot of professional games that you can buy off the shelf (for PC/PS2/PSP/PS3/360/Wii) have next to *no* doubles in the source code, as in general there is no need for the extra precision. Even in cases where the extra precision would be 'nice', it's normally still avoided because of the cost in terms of memory and performance (on those platforms that handle doubles badly). The PS2, for example, performs amazingly badly as soon as you start using doubles, as all double precision arithmetic is done in software, which is crazy expensive. Using 'long' integers, on the other hand, is fine as far as I'm concerned; in general, using the largest integer data type a given platform can handle natively seems fine from my experience. Although saying 'long' doesn't really mean anything, as different platforms give 'long' different numbers of bits: on PS2 it's 64 bits, whereas (I think, could be wrong) on PC it's only 32.
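
On the 'long' point: if you need a specific width, the portable move (assuming a C99 toolchain) is the <stdint.h> types rather than guessing what 'long' means on each platform:

CODE
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 'long' is 32 bits on most 32-bit targets, 64 bits on others */
    printf("sizeof(long) = %u bytes\n", (unsigned)sizeof(long));

    int32_t a = 1;   /* exactly 32 bits wherever this compiles */
    int64_t b = 1;   /* exactly 64 bits wherever this compiles */
    (void)a; (void)b;
    return 0;
}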

In GCC (well, some builds of it at least) you can actually get the compiler to give you warnings whenever double precision floating point numbers are used, which is pretty handy for tracking them down; although it does get a bit annoying, as it'll warn for any varargs function you pass a float to (like printf/sprintf), since the compiler will treat a varargs %f argument as a double (although the standard library can be recompiled with alternative flags to force varargs to default to float over double).
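
The usual culprits look like this (a hedged sketch; GCC's -Wdouble-promotion warning only exists in newer releases, so whether your build has an equivalent is an assumption):

CODE
/* compile with something like: gcc -Wdouble-promotion ... (if available) */
#include <stdio.h>

int main(void)
{
    float x = 1.5f;
    float a = x * 2.0;    /* 2.0 is a double literal: the multiply is done
                             in double precision, then truncated back */
    float b = x * 2.0f;   /* 2.0f keeps the whole expression in single */
    printf("%f %f\n", a, b);  /* floats passed through varargs are promoted
                                 to double - that part is mandated by the
                                 C standard, not a compiler choice */
    return 0;
}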

Hope that's of some help,

Steve
 
Exophase said:
Of course, you never made that clear and no one would be able to guess such a definition, but in the real world not every application is "full blown" as you put it. Far from it. The bottom line is that you don't have an intimate understanding of the nature of the operations being used in the applications the topic creator is working with, so you should keep in mind that the "full blown" applications you keep mentioning may not apply. A major part of optimization (and that's more or less what this topic is about) is understanding the contexts in which the inputs can reside. Since this hasn't been explained to us, the best we can really do is help the topic creator understand it.

i admit i did mix and match in my mind a few of the posts in this thread, ending up with the impression that the OP was referring to a dynamic-range-sensitive and computationally-rich game, which now, when i re-read his posts, i see was not given. in this regard, yes, it is possible that his original arithmetics may be served fine by an integer-based implementation.

QUOTE

And I actually do understand how floating point works and why it's useful, so you don't need to educate me on it. What you said is that an FPU should be used whenever available. Yes, shifts are necessary after fixed point operations, by definition. This is something you keep in mind when you decide to use it. Telling people "just use floats so you don't have to think about it" is not sufficient advice.

i did not mean to educate you, i was just giving examples for my argument. actually, i never assumed you did not know how floats work. yes, my original post was somewhat educational, but that was not directed at you, personally, so you don't need to be that touchy.

QUOTE

Okay, since you didn't know what I meant by "full speed FPU": I meant one that's pipelined and generally on the same speed scale as integer operations, as long as heavy dependency chains are avoided. In other words, not what Cortex-A8 has under normal conditions (in particular not with double precision operations, which are what you need to get the same level of precision that 32-bit fixed point would give you). Software emulated FPU operations can be orders of magnitude slower than hardware, so it's pretty natural that there will be speed ranges in between. Anyone can see (by looking at the benches in this topic) that if Pandora is going to give a lot of scalar float performance, it's not doing it right now.

fair enough. a question, then - where are those benches you're referring to? if they were given in this thread then i admit i've totally missed them. to my understanding, a8's vfp, even though not pipelined within itself, is still execution-independent from the integer pipeline, and thus parallelizable with the rest of the workflow. and a8's integer pipelines do have their quirks too - according to the specs, a8's imuls can only be paired with one of a8's ALU pipelines (ALU0), which would hurt the statistical case for imuls on the platform.

QUOTE

Oh please, I'm really tired of the "I've been doing it for X years, I do it for a living" etc argument.. you don't know what I do for a living so please don't assume you have so much on me. Look, the guy never said what he's doing with it, and if you think that it's "never worth trying to beat the FPU" then you very well might not have a good idea of exactly how fast the IEEE compatible FPU on Pandora is (or rather, isn't).
huh? i mentioned my background just to back up my claim that in a 3d engine there are tons of things of higher performance impact than tweaking your spatial arithmetics away from floating point. again, that was in the context of my impression that the OP was referring to such a scenario.

QUOTE
But obviously 3D can't be that impossible using fixed point or there wouldn't be various 3D platforms using it (PS1, Saturn, Nintendo DS, etc). And these are fixed without the kind of flexibility that you'd get in a software implementation.

who said it was impossible? and quoting fpu-less platforms as examples of '3d done with fixed-point' does not exactly make a point in this discussion. since you deliberately asked not to be taken wrongly, i'll just have to disregard the above ; )

QUOTE

Now don't get me wrong, if you're doing 3D on Pandora then you should be using NEON or the vertex shading on the GPU, and if that's where the game's computations are going then I'd wonder why the 3D math part wasn't done in OGL in the first place (or if the game is recent enough for it to really matter). Nonetheless, C float types for the geometry calculations isn't really the right move (it'd be up there with software rasterization); but that hardly seems to describe his situation because 3D shouldn't have used doubles either. Not that that isn't what happened.


until a reliable benchmark proves a8's vfp's performance worthless, my default stance remains unchanged: code with floats, in an auto-vectorization-friendly manner, so that with a suitable compiler you could go neon simd at little-to-no effort.
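
for reference, 'auto-vectorization-friendly' in practice means loops shaped roughly like this (a sketch; whether gcc actually emits neon here depends on the version and on flags like -mfpu=neon -ftree-vectorize -ffast-math):

CODE
/* simple counted loop, single precision, no aliasing, no branches -
   the kind of code a vectorizer has a realistic chance with */
void scale_add(float *restrict dst, const float *restrict src,
               float k, int n)
{
    int i;
    for (i = 0; i < n; i++)
        dst[i] = src[i] * k + 1.0f;
}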

as for utilizing the GPU, generally, in a 3d engine there are tons of (high-dynamic range) spatial calculations you can't, or don't want to, offload to the GPU. for instance, and without getting into details, early-out visibility tests, or any form of early-out logic, which are often largely-spatial in nature.

QUOTE
I'm just going to say it again. Your notions of "full blown arithmetic" are over-generalizations, and so is insisting that an FPU must always be used under any vague understanding of the problem, regardless of how capable the actual FPU is, or what the exact performance requirements are.



maybe, then again, maybe not. if i've learned anything fundamental in my career, it's that the most valuable resource in a software project is the developer's time. in this regard, spending that resource wisely and not chasing waterfalls is crucial to the success of a project. as for the 'full blown arithmetics' generalization, if that would help the discussion, then think of it as the arithmetics required in any spatially-intensive game engine.

there's a reason why even such former bastions of fixed-point computations as the graphics rasterizers are almost entirely floating-point capable (and optimized) these days. the simple reason behind that is that the game developers require this functionality for their everyday tasks. not because they cannot achieve that by any other means, but because it's either more convenient, or faster, or both, to have floating-point in hw.
 
darkblu said:
fair enough. a question, then - where are those benches you're referring to? if they were given in this thread then i admit i've totally missed them. to my understanding, a8's vfp, even though not pipelined within itself, is still execution-independent from the integer pipeline, and thus parallelizable with the rest of the workflow. and a8's integer pipelines do have their quirks too - according to the specs, a8's imuls can only be paired with one of a8's ALU pipelines (ALU0), which would hurt the statistical case for imuls on the platform.
Sorry, I thought it was this thread but it's this one: http://www.gp32x.de/board/index.php?showt...45583&st=45

You'll see what I mean.

The IEEE 754 compliant FPU in Cortex-A8 is not pipelined; yes, it runs in parallel with the integer pipes, but the FPU operations themselves cannot be overlapped, and a simple add will take a minimum of around 8 cycles. If you can make this much integer work independent of the FPU work then the floating point computations were probably never that important to begin with. The limitations in pairing on the integer pipes are nothing compared to this.
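
For what it's worth, on a pipelined FPU the standard way around dependency chains is to keep several independent accumulators in flight; on the A8's non-pipelined VFP even this doesn't help, which is exactly the problem. A sketch of the technique (hedged: it reassociates the sum, so results can differ in the last bits):

CODE
/* one accumulator: every add must wait for the previous one */
float sum1(const float *a, int n)
{
    float s = 0.0f;
    int i;
    for (i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* four accumulators: four independent chains a pipelined FPU can overlap */
float sum4(const float *a, int n)
{
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}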

QUOTE
huh? i mentioned my background just to back up my claim that in a 3d engine there are tons of things of higher performance impact than tweaking your spatial arithmetics away from floating point. again, that was in the context of my impression that the OP was referring to such a scenario.


My argument is that that's not necessarily true if you don't have floating point that's fast enough.

QUOTE
until a reliable benchmark proves a8's vfp's performance worthless, my default stance remains unchanged: code with floats, in an auto-vectorization-friendly manner, so that with a suitable compiler you could go neon simd at little-to-no effort.


The NEON SIMD in Cortex-A8 is obviously not useless, but assuming that the compiler will be able to produce good auto-vectorized code, especially early in the game, is naive. This is especially not going to happen with doubles, which I suppose is another good reason to back away from them. Right now I don't even think the compiler can auto-vectorize, at least not the version that isn't problematically buggy.

QUOTE
maybe, then again, maybe not. if i've learned anything fundamental in my career, it's that the most valuable resource in a software project is the developer's time. in this regard, spending that resource wisely and not chasing waterfalls is crucial to the success of a project. as for the 'full blown arithmetics' generalization, if that would help the discussion, then think of it as the arithmetics required in any spatially-intensive game engine.


If there's one thing that annoys me, it's programmers treating every project as if it's a commercial venture and applying every common software engineering axiom to it. If the person is doing a Pandora project in his or her free time then maybe "developer time" isn't the most valuable resource. Sometimes you really don't have a choice, commercial or otherwise. You do what you have to do to get the performance you need; in the case of enthusiast development, maybe you spend twice as much time making it 10% faster so a few more people can use it, or use it off the battery for longer.

You use language such as "required" when referring to "full blown arithmetic" in a "spatially intensive game engine"; the fact is, these kinds of games have been done on fixed point HARDWARE that does not allow for dynamic ranges at all. So it isn't required at all. Yes, having floating point hardware is better, I'm not arguing that. And ARM has addressed this with NEON. But if you need the IEEE compliant doubles that you're at least guaranteed to get from the compiler, then you'll get something that may not be good enough. The VFP unit is "medium performance", which might not cut it. It's something cheap, there mainly for compatibility purposes. Just because it's there doesn't mean that it's the best solution.
 
cb88 said:
Flags to pass to gcc: -O3 -fomit-frame-pointer -mfloat-abi=softfp -mfpu=neon -mcpu=cortex-a8 -ftree-vectorize -ffast-math

somebody might wanna check my facts on that since I'm not at the top of my game atm... amid finals..(brain power being sapped)
That is right, and the bad news is that current gcc only knows about softfp, meaning that it passes all floats (and doubles) in ARM core registers instead of the VFP/NEON register file. So you end up with lots of moves between the two sets of registers, and moving data back from VFP/NEON is an especially expensive operation.
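
In concrete terms (an illustration of the ABI behaviour described above, not of any particular compiler's output):

CODE
/* under -mfloat-abi=softfp even this one-liner pays the toll: 'x'
   arrives in core register r0, must be moved into a VFP/NEON register
   for the multiply, and the result moved back to r0 for the return */
float scale(float x)
{
    return x * 2.0f;
}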

In the gcc repository there is already an ARM/hard_vfp_4_4_branch, which hopefully will be merged to mainline soon.
 
Exophase said:
Sorry, I thought it was this thread but it's this one: http://www.gp32x.de/board/index.php?showt...45583&st=45

You'll see what I mean.

ok, i see. those tests are not particularly encouraging, but then again, neither are they particularly conclusive. also, i didn't see anybody actually checking the code that was generated (sorry if somebody did and i missed that), so it's not clear what the compiler did in all those cases. tajuma's post above raises some questions in this regard.

Exophase said:
The IEEE 754 compliant FPU in Cortex-A8 is not pipelined; yes, it runs in parallel with the integer pipes, but the FPU operations themselves cannot be overlapped, and a simple add will take a minimum of around 8 cycles. If you can make this much integer work independent of the FPU work then the floating point computations were probably never that important to begin with. The limitations in pairing on the integer pipes are nothing compared to this.

true, i did check the instruction timing specs of the a8, and its vfp v3 is nothing to write home about*, though it does have the option to delegate work to the neon (nfp) pipeline (fast mode), which improves things a little. unsurprisingly, it does appear that neon simd is the way to go on the a8 (particularly for spatial (vector) arithmetics that'd be a no-brainer), but for that you can still use good old c floats. which brings us to..

QUOTE
The NEON SIMD in Cortex-A8 is obviously not useless, but assuming that the compiler will be able to produce good auto-vectorized code, especially early in the game, is naive. This is especially not going to happen with doubles, which I suppose is another good reason to back away from them. Right now I don't even think the compiler can auto-vectorize, at least not the version that isn't problematically buggy.

auto-vectorization is presently a domain in turmoil, but it is clearly the way to go in the mid-to-long term. that said, even today there are good vectorizing compilers. gcc 4.4 is getting there in leaps and bounds (re auto-vectorization in general), and even though neon support may still be in the oven, one can check the current rvct (arm's official compiler suite) for a taste of what's to come from neon - http://www.jp.arm.com/event/pdf/forum2007/t1-5.pdf - Tatsuya Kobayashi's Cortex-A8 and NEON Field Application Engineering presentation, 16th Oct 2007 - check page 29.

also, staying with c floats, and being auto-vectorization-conscious, is the way to go if you're concerned in the slightest with your app's portability (i.e. it not suffering on other platforms where (simd) fpu's are doing great in comparison), but that's somewhat beyond the topic of this thread.

QUOTE
You use language such as "required" when referring to "full blown arithmetic" in a "spatially intensive game engine"; the fact is, these kinds of games have been done on fixed point HARDWARE that does not allow for dynamic ranges at all. So it isn't required at all.

well, strictly speaking, nothing is 'required' besides a tape-equipped turing machine, right? almost everything said by me in this thread has been about development convenience, or programmer's efficiency, if you wish. in this regard, floats in the context of 'full-blown arithmetics' will be 'required' to the degree that the programmer will most likely wish they had floats under those scenarios. simple fact is, floats are a better abstraction for those purposes, just as high level languages are a better abstraction vis-a-vis assembly for the greater part of today's programming tasks, allowing programmers not to care about things that ultimately do not affect the end quality of their software.

QUOTE
Yes, having floating point hardware is better, I'm not arguing that. And ARM has addressed this with NEON. But if you need the IEEE compliant doubles that you're at least guaranteed to get from the compiler, then you'll get something that may not be good enough. The VFP unit is "medium performance", which might not cut it. It's something cheap, there mainly for compatibility purposes. Just because it's there doesn't mean that it's the best solution.


see, i'm not arguing about floats being best (performance-wise) on the a8 - i'm arguing about them being adequate, and then they'd win from a convenience perspective ; )


* not the case with arm v6's vfp11, which is a worthy performer, as found in the arm1176.
 