Constant Folding And Floating Point


ari64
I mentioned this in another thread, but didn't want to clutter up that thread with too many details.

If you compile the mupen64plus update that I posted a few days ago, you will find that since I fixed the DSRLV instruction, Super Smash Bros runs for a while, but crashes when you try to play. This is because of another bug...

I do constant folding to combine the results from LUI, ADDI and similar instructions. When the instructions are decoded, a series of flags are set which identify registers with known constants. During assembly, the normal assembly of these instructions is suppressed, and instead load_consts() calls get_final_value() to determine what the combined constant ends up being. This value is then loaded with movw/movt (or from the constant pool if generating ARMv5 code).
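
Roughly, the decode-time bookkeeping looks something like this. This is only a simplified sketch with made-up names (is_const, const_value, decode_lui, decode_addiu), not the actual dynarec data structures:
Code:
  #include <stdint.h>

  static uint32_t is_const;         /* bitmask: register currently holds a known constant */
  static int64_t  const_value[32];  /* the known value for each register */

  /* LUI rt, imm: the result is a known constant. */
  static void decode_lui(int rt, uint16_t imm)
  {
      is_const |= 1u << rt;
      const_value[rt] = (int64_t)(int32_t)((uint32_t)imm << 16);
  }

  /* ADDIU rt, rs, imm: the result is a known constant only if the source is. */
  static void decode_addiu(int rt, int rs, int16_t imm)
  {
      if (is_const & (1u << rs)) {
          is_const |= 1u << rt;
          const_value[rt] = (int32_t)(const_value[rs] + imm);  /* 32-bit result, sign-extended */
      } else {
          is_const &= ~(1u << rt);  /* result depends on an unknown value */
      }
  }

  /* At assembly time the LUI/ADDIU pair is suppressed and the folded value
     (what get_final_value() would return) is emitted directly with movw/movt. */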

The mistake I made was that I used the same bit to flag when the output of the instruction is a constant (as in LUI) and when the input is a constant (as in LW). Super Smash Bros contains the following code:
Code:
  80137478: LUI r8,80140000
  8013747c: ADDIU r8,r8,-29648 (ffff8c30)
  80137488: LW r10,r8+0 (00000000)
  ...
  801374b8: LW r9,r8+12 (0000000c)
  801374bc: LUI r8,80140000
  ...
  801374e0: LW r8,r8-26976 (ffff96a0)
The result was that things got mixed up, since the isconst flag for r8 remained set through this entire sequence, including the last instruction. I'm rather amazed that SSB was the only game that triggered this bug.

Once I made separate flags for inputs and outputs, it worked correctly.
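
For illustration, the fix amounts to something like the following, continuing the sketch above (again with made-up names; the real code tracks this per instruction rather than with global bitmasks):
Code:
  static uint32_t input_is_const;   /* sources that are known constants for this instruction */
  static uint32_t output_is_const;  /* destinations that become known constants */

  /* LW rt, offset(rs): the address can be folded if rs is a known constant,
     but the loaded value is unknown, so rt must lose its constant status.
     Sharing one bit for both roles is what kept r8 marked constant above. */
  static void decode_lw(int rt, int rs)
  {
      if (is_const & (1u << rs))
          input_is_const |= 1u << rs;   /* fold the address at assembly time */
      output_is_const &= ~(1u << rt);
      is_const        &= ~(1u << rt);   /* rt no longer holds a known value */
  }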

The game still doesn't work well due to really high CPU usage. This is especially noticeable in the "vs Yoshi Team" battle, where there are a lot of characters on the screen. The N64 has to compute 3D vectors on the MIPS CPU, so CPU load tends to scale with polygon count.

This could possibly be improved by reducing the FPU precision somewhat (e.g. using NEON), but I am loath to do that, since it will make it very difficult to find bugs like the example above amongst a whole bunch of differences caused by floating-point values being slightly inaccurate.
 
What would the feasibility be of implementing NEON support, but having it on a define, so you can compile it on/off? That way you would have your base (non-NEON) build to compare against, to track down whether a bug is connected to the NEON support or comes from elsewhere. On the other hand, maybe it is best you just keep it as 'pure' as possible, and just try to fix up as many bugs and speed up CPU emulation as much as possible, as potentially more devs will be able to help out with FPU support once lots of people have Pandoras in their hands! :)

Steve
 
Rockthesmurf said:
What would the feasibility be of implementing NEON support, but having it on a define, so you can compile it on/off?

His problem is that he doesn't think he'll be able to synchronize between the NEON and non-NEON versions. There could always be slightly different code paths that are taken due to differences in precision, but that don't actually adversely affect emulation. If the two versions still produce synchronized paths, then you know that any divergence is a bug, and it's relatively easy to find where they split and then find why. If they're not synchronized, it's much harder to find the bug.
 
I don't really know how much can be gained, but have you put any more thought into OS HLE? Is it feasible? There seems to be a fair number of FP matrix/math functions that can be HLE'd. DaedalusX64 appears to have a fairly complete implementation; you can browse their source here: http://daedalusx64.svn.sourceforge.net/viewvc/daedalusx64/Source/OSHLE/

I was going to suggest using the NEON integer pipelines to help emulate the FP, but I can't see many advantages except the wider bus, possibly less pressure on the ARM registers, and the (slim) possibility of vectorization... and you get the notable disadvantage of NEON->ARM transfers for branches, etc.
 
What Exophase said.

Generally, the way I track down this kind of bug is to compare the ARM version to the x86 version, and the x86 recompiler to the interpreter. Super Smash Bros had the added complication that I needed to clear the SRAM and script the button presses to get a repeatable result, but that is how I tracked down the bug.

If I were to try optimizing floating point, I would have to make the changes conditional so I could turn them off for debugging. Sometimes I have to do that with the liveness analysis too.

If I was going to HLE anything it would probably be the virtual memory in Goldeneye and other Rare games. But there is also Paper Mario, which maps the kseg3 page (0xE0000000) for who knows what.

The VFP has the same delays as NEON for transfers to ARM registers, does it not?
 
The VFP has the same delays as NEON for transfers to ARM registers, does it not?
Yep, you get a >20 cycle stall on both the VFP/NEON and ARM pipelines. You can hide the stall by writing the value to memory on the VFP/NEON side and then loading it about 20 cycles later on the ARM side.
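
As a rough illustration of the store/reload trick (this assumes GCC inline assembly and compiling with the VFP enabled, e.g. -mfpu=vfp; it is not code from the emulator):
Code:
  #include <stdint.h>

  /* Direct transfer: VFP/NEON register -> ARM register (stalls the ARM pipeline). */
  static inline uint32_t fp_to_arm_direct(float f)
  {
      uint32_t r;
      __asm__ volatile("vmov %0, %1" : "=r"(r) : "t"(f));
      return r;
  }

  /* Indirect transfer: store on the VFP/NEON side, schedule ~20 cycles of
     independent ARM work, then load the value back on the ARM side. */
  static inline uint32_t fp_to_arm_via_memory(float f, volatile uint32_t *spill)
  {
      __asm__ volatile("vstr %1, [%0]" : : "r"(spill), "t"(f) : "memory");
      /* ... independent ARM instructions go here to hide the latency ... */
      return *spill;
  }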
 
Ari64 said:
This could possibly be improved by reducing the FPU precision somewhat (e.g. using NEON), but I am loath to do that, since it will make it very difficult to find bugs like the example above amongst a whole bunch of differences caused by floating-point values being slightly inaccurate.

Why not just switch (optionally) into runfast mode?

It won't be as quick as NEON, but it will give you semi-pipelined floating point, as long as the code is actually pipelinable (if you had 9 independent multiply-adds, they would run pipelined).

This should be a fairly togglable setting, I would have thought - you simply need to fiddle with the floating point control register before you start, and if the operations are mostly ADDs or MULs the latency will be substantially less.
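
For reference, switching the VFP into runfast mode is just a matter of setting the flush-to-zero and default-NaN bits in FPSCR and disabling the exception traps. A minimal sketch (assuming GCC inline assembly and a VFP-enabled build; bit positions are taken from the ARM architecture manual):
Code:
  #include <stdint.h>

  static void vfp_enable_runfast(void)
  {
      uint32_t fpscr;
      __asm__ volatile("vmrs %0, fpscr" : "=r"(fpscr));
      fpscr |=  (1u << 24) | (1u << 25);   /* FZ (flush-to-zero), DN (default NaN) */
      fpscr &= ~0x00009f00u;               /* clear IOE/DZE/OFE/UFE/IXE/IDE trap enables */
      __asm__ volatile("vmsr fpscr, %0" : : "r"(fpscr));
  }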
 
andys said:
Why not just switch (optionally) into runfast mode?

It won't be as quick as NEON, but it will give you semi-pipelined floating point, as long as the code is actually pipelinable.

That's what I thought too, but runfast mode doesn't actually pipeline. It just shaves off a cycle or two, or somesuch. The wording is pretty misleading. Look at the timing example.

I'm curious if Ari64 is actually using VFP or not yet, since last I heard Mupen64plus was still using GCC soft float. If VFP is being used are transfers strictly to memory?
 
Exophase said:
I'm curious if Ari64 is actually using VFP or not yet, since last I heard Mupen64plus was still using GCC soft float. If VFP is being used are transfers strictly to memory?

It calls the functions in r4300/cop1*.c, and values are written to memory. I got a few percent speedup just by replacing the calls that I was using for MTC1/MFC1, so presumably I can get some more improvement by inlining other floating point functions, without affecting the precision.
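
To see why inlining pays off: the MTC1/MFC1 helpers amount to little more than a 32-bit copy between the emulated GPR and CP1 register files, so the call/return overhead dwarfs the actual work. A rough sketch with illustrative names (not the actual r4300/cop1*.c code):
Code:
  #include <stdint.h>

  static int64_t  gpr[32];   /* emulated MIPS general-purpose registers */
  static uint32_t fpr32[32]; /* emulated CP1 registers, 32-bit view */

  /* MTC1 rt, fs: copy the low 32 bits of a GPR into an FPU register. */
  static inline void emu_mtc1(int rt, int fs)
  {
      fpr32[fs] = (uint32_t)gpr[rt];
  }

  /* MFC1 rt, fs: copy an FPU register into a GPR, sign-extended to 64 bits. */
  static inline void emu_mfc1(int rt, int fs)
  {
      gpr[rt] = (int32_t)fpr32[fs];
  }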
 
If Ari64 wants to keep accuracy at the moment, I'm not sure using hardware FP will work. There are many seemingly small implementation-defined behaviours in IEEE754. For instance, sNaN representation...
 
Exophase said:
That's what I thought too, but runfast mode doesn't actually pipeline. It just shaves off a cycle or two, or somesuch. The wording is pretty misleading. Look at the timing example.

Hrm, a fair point.

It implies that it is pipelined, but, as you say, the documentation states:

those crafty people at ARM said:
VFP instructions that execute in the NFP pipeline have results that are 32-bit single-precision writes to the upper or lower half of the 64-bit register value. A restriction that applies to VFP instructions executing in the NFP pipeline is that instruction results cannot be forwarded early to subsequent instructions. Each VFP instruction takes 7 cycles to execute in the NFP pipeline because of this restriction.

So the question is: is that 7 cycles blocking the next operation, or 7 cycles pipelined but unforwardable?

I can't see it mentioning forwarding if it doesn't apply.

That said, it should be easy to check - if you ran 10 independent values (multiplying each by itself, or something) in a tight loop, then you would either see a 10x speedup or about a 10% speedup.

Anybody with A8 hardware want to test?
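
Something like the following would answer the question (a standalone sketch, not emulator code; compile with -O2 and the VFP enabled, and without -ffast-math so the multiplies aren't reordered). If the NFP pipeline overlaps independent operations, the second loop should take far less time per multiply than the first; if each result blocks the next issue, they should come out about the same per multiply:
Code:
  #include <stdio.h>
  #include <time.h>

  #define ITERS 10000000

  int main(void)
  {
      float dep = 1.0f, ind[10];
      int i, j;
      clock_t t0, t1, t2;

      for (j = 0; j < 10; j++)
          ind[j] = 1.0f + j * 1e-7f;

      /* Dependent chain: each multiply needs the previous result. */
      t0 = clock();
      for (i = 0; i < ITERS; i++)
          dep = dep * 0.9999999f;
      t1 = clock();

      /* Ten independent chains: no result is reused until the next pass. */
      for (i = 0; i < ITERS; i++)
          for (j = 0; j < 10; j++)
              ind[j] = ind[j] * 0.9999999f;
      t2 = clock();

      printf("dependent:   %.3f s for %d multiplies\n",
             (double)(t1 - t0) / CLOCKS_PER_SEC, ITERS);
      printf("independent: %.3f s for %d multiplies\n",
             (double)(t2 - t1) / CLOCKS_PER_SEC, 10 * ITERS);

      /* Keep the results live so the compiler cannot discard the loops. */
      return (dep + ind[0] > 0.0f) ? 0 : 1;
  }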
 
I said the exact same things you're telling me, Laurent, but he really insisted, and the instruction timing analysis they have does seem to support it.

Laurent said:
If Ari64 wants to keep accuracy at the moment, I'm not sure using hardware FP will work. There are many seemingly small implementation-defined behaviours in IEEE754. For instance, sNaN representation...

I doubt Ari64 cares about more than what the games require. That said, I think the differences are probably correctable and still faster than software.
 