Mupen64Plus


Ari64 said:
mostly I'm just looking to see if there is any way to speed up the games that do work. Mainly that means figuring out what's slow. Several things that I investigated actually turned out to be contrary to my expectations, so I need to look into this some more.
Were these expectations BinTrans-related or N64-specific?

My experience with QEMU has taught me that one shouldn't trust that something that looks clever will speed anything up unless a *wide* range of tests is done :( [And before someone bites, I am experienced with optimizing traditional software; BinTrans is just not the typical beast :) ]
 
Laurent said:
Ari64 said:
mostly I'm just looking to see if there is any way to speed up the games that do work. Mainly that means figuring out what's slow. Several things that I investigated actually turned out to be contrary to my expectations, so I need to look into this some more.
Were these expectations BinTrans-related or N64-specific?

My experience with QEMU has taught me that one shouldn't trust that something that looks clever will speed anything up unless a *wide* range of tests is done :( [And before someone bites, I am experienced with optimizing traditional software; BinTrans is just not the typical beast :) ]
A performance issue with the OMAP3530 is the small L1 cache. Since the dynamic recompiler tends to output a lot of instructions, it can potentially cause a high i-cache miss rate.

One source of code bloat is the writeback of modified registers. When a cached register is modified, a 'dirty' flag is set, and when a branch exits the block, the dirty registers are written out.
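
Roughly, the bookkeeping behind this looks like the following (a simplified sketch with illustrative names, not the actual new_dynarec structures):
Code:
 #include <stdio.h>

 #define HOST_REGS 8

 struct regcache {
     int  guest[HOST_REGS];   /* MIPS register held in each host register, -1 = none */
     char dirty[HOST_REGS];   /* set whenever the cached copy is modified */
 };

 /* stand-in for the real store emitter; the generated code is a
    "str rN,[r11,#offset]" (register file layout here is illustrative) */
 static void emit_storereg(int mips_reg, int host_reg)
 {
     printf("str r%d,[r11,#%d]\n", host_reg, mips_reg * 8);
 }

 /* when a branch exits the block, flush every dirty cached register */
 static void writeback_dirty(struct regcache *rc)
 {
     int hr;
     for (hr = 0; hr < HOST_REGS; hr++) {
         if (rc->guest[hr] >= 0 && rc->dirty[hr]) {
             emit_storereg(rc->guest[hr], hr);
             rc->dirty[hr] = 0;   /* clean until the next write */
         }
     }
 }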

Consider the following example:
Code:
 ADDI r3,r3,256
 BGTZ r3,L1
 SLL  r0,r0,0        # nop in the branch delay slot
 J    L2
The problem is that writing out registers at each branch results in code like this:
Code:
 add r3,r3,#256
 (cycle count stuff)
 cmp r3,#1
 blt +16              @ MIPS branch not taken: skip ahead to the L2 path
 str r3,[r11,#152]    @ write back low word of r3
 asr r3,r3,#31        @ sign-extend to get the high word
 str r3,[r11,#156]    @ write back high word
 b L1
 str r3,[r11,#152]    @ the same writeback, repeated on the fall-through path
 asr r3,r3,#31
 str r3,[r11,#156]
 (cycle count stuff)
 b L2
Wouldn't it be better to only write r3 once, like this:
Code:
 add r3,r3,#256
 str r3,[r11,#152]    @ write back low word once, before the branch
 asr r14,r3,#31       @ sign-extend into a temporary, keeping r3 live
 str r14,[r11,#156]   @ write back high word
 (cycle count stuff)
 cmp r3,#1
 bge L1
 (cycle count stuff)
 b L2
Making this change reduces code size by about 10% on average, but mysteriously makes it run slower. I am trying to figure out why.
 
Is it significantly slower?

I see two things that might make the code slower (forgetting about Icache footprint reduction):
1. some branch prediction oddity
2. add -> str dependency.

BTW does the (cycle count stuff) touch flags? I guess it does...
 
Check section 16.3 of "ARM DDI 0344J"; it might have something to do with it.

In your first code the store and the shift are probably dual-issued in one cycle, and the store and the branch too.
In your second code, however, the store probably has to wait for the result of the add; the asr can then continue, but the next store instruction has to wait as well.

But I only just noticed that section in the documentation and haven't looked into it much yet, so it's just a guess. Handle with care.
 
Laurent said:
Is it significantly slower?

I see two things that might make the code slower (forgetting about Icache footprint reduction):
1. some branch prediction oddity
2. add -> str dependency.

BTW does the (cycle count stuff) touch flags? I guess it does...
About 1% slower, but significant in the sense that the measurements are consistent and repeatable.

Changing the instruction ordering eliminated the problem, so I assume it's because of the dependency. There does seem to be a small effect on the branch prediction from replacing the blt...b with bge, but it's much less than the difference due to the instruction scheduling.

The cycle count is usually either adds/bpl or cmn/bpl/add.

JayFoxRox said:
Check section 16.3 of "ARM DDI 0344J"; it might have something to do with it.

In your first code the store and the shift are probably dual-issued in one cycle, and the store and the branch too.
In your second code, however, the store probably has to wait for the result of the add; the asr can then continue, but the next store instruction has to wait as well.

But I only just noticed that section in the documentation and haven't looked into it much yet, so it's just a guess. Handle with care.
The store happens at the end of the pipeline, so in theory the add and the str can issue in the same cycle. I'm guessing that the stall is due to the shift. Or maybe the documentation is wrong. Anyway, putting another instruction between the add and the str seems to fix the problem. Where there are no other instructions before the branch, there will have to be a str on both forks of the branch.
 
I got this to work, although the speed improvement is not much.

The branch prediction thing is driving me nuts. The problem is that the code that is being compiled often contains a lot of loops, and if a loop is generated with a conditional branch at the end, the Cortex-A8 has a tendency to predict the branch not taken. This causes a branch misprediction the first time through the loop.

So the solution is to output backward branches as unconditional branches, and jump over them with a conditional branch to exit the loop. This resolves the branch prediction problem, but exposes another issue. There appears to be a significant performance penalty for immediately following a conditional branch with another branch. I don't know if this is due to a pipeline issue or limited granularity of the BTB, but to avoid this it is necessary to put another instruction (possibly a NOP) between the branch instructions, e.g.
Code:
 beq +8        @ loop exit: hop over the backward branch
 mov r0,r0     @ filler so the two branches aren't back to back
 b loop        @ unconditional backward branch
After making this change, putting the stores before the branch no longer affects the branch prediction, and the result is a slight improvement due to lower cache pressure.
 
Did you try turning off branch prediction in general?
I guess the N64 didn't have branch prediction, so I'm not sure whether the games are optimized for that and whether it makes any sense to use it.

If you already tried that: How did that perform?

5.5 Enabling program flow prediction
You can enable program flow prediction by setting the Z bit in the CP15 c1 Control Register to 1. See c1, Control Register on page 3-58 for details. Reset disables program flow prediction, invalidates the BTB, and resets the GHB to a known state. No software intervention is required to prepare the prediction logic before enabling program flow prediction.
(FTR, you probably know about that already)
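
For reference, poking that bit looks something like this (just a sketch; it needs a privileged mode, so it's kernel/boot-code territory rather than something the emulator could do itself):
Code:
 /* set the Z bit (bit 11) of the CP15 c1 Control Register to enable
    program flow prediction -- must run in a privileged mode */
 static inline void enable_flow_prediction(void)
 {
     unsigned int ctrl;
     __asm__ volatile ("mrc p15, 0, %0, c1, c0, 0" : "=r"(ctrl));
     ctrl |= (1u << 11);   /* Z bit */
     __asm__ volatile ("mcr p15, 0, %0, c1, c0, 0" : : "r"(ctrl));
 }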

//Edit: thinking about it... no, that makes no sense, because without it they'd get the 13-cycle penalty anyway... ignore me :)
 
Disabling branch prediction won't improve performance. :)

The problem was that when I moved the stores, I replaced the pair of branches with a single branch. This affected the branch prediction, and the effect was so large that I couldn't tell if rearranging the stores made an improvement or not.

After working around the branch prediction problem, the speed improvement was only about half a percent, so this didn't turn out to be too productive.

So now I'm looking at other things. The compiler is still generating too much 64-bit code when it doesn't need to, for example generating a 64-bit compare for SLT when it really only needs to compare 32 bits. It handles the obvious cases correctly, but in some situations it is hard to guarantee that the inputs are always 32-bit. I will probably need to add some global data structures to keep track of which blocks contain only 32-bit code.
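
Something along these lines is what I have in mind (hypothetical names, just to illustrate the kind of global data structure):
Code:
 #include <stdint.h>

 #define MAX_BLOCKS 4096

 /* one record per compiled block, with a bit per guest register that is
    known to hold a 32-bit value when the block exits */
 struct block_info {
     uint32_t start_addr;   /* MIPS address of the block */
     uint32_t is32;         /* bit n set: guest reg n is 32-bit on exit */
 };

 static struct block_info blocks[MAX_BLOCKS];

 /* can SLT rs,rt be compiled as a single 32-bit compare? */
 static int slt_is_32bit(const struct block_info *pred, int rs, int rt)
 {
     uint32_t need = (1u << rs) | (1u << rt);
     return (pred->is32 & need) == need;
 }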
 
Ari64 said:
The branch prediction thing is driving me nuts. The problem is that the code that is being compiled often contains a lot of loops, and if a loop is generated with a conditional branch at the end, the Cortex-A8 has a tendency to predict the branch not taken. This causes a branch misprediction the first time through the loop.

So the solution is to output backward branches as unconditional branches, and jump over them with a conditional branch to exit the loop. This resolves the branch prediction problem, but exposes another issue. There appears to be a significant performance penalty for immediately following a conditional branch with another branch. I don't know if this is due to a pipeline issue or limited granularity of the BTB, but to avoid this it is necessary to put another instruction (possibly a NOP) between the branch instructions, e.g.
Code:
 beq +8        @ loop exit: hop over the backward branch
 mov r0,r0     @ filler so the two branches aren't back to back
 b loop        @ unconditional backward branch
After making this change, putting the stores before the branch no longer affects the branch prediction, and the result is a slight improvement due to lower cache pressure.
Welcome to the wonderful world of branch prediction oddities :)

Why don't you insert the instruction that is @loop in place of the nop?
 
Laurent said:
Welcome to the wonderful world of branch prediction oddities :)

Why don't you insert the instruction that is @loop in place of the nop?
It may be possible to do that in some cases. Usually I can put the cycle count adjustment there.

I never thought I'd have to schedule delay slots on ARM. I might as well be writing a MIPS code generator.
 
JayFoxRox said:
(10:29:30 PM) Pickle: mupen64plus: r4300/new_dynarec/assem_arm.c:56: get_pointer: Assertion `(*ptr&0x0ff00000)==0x05900000' failed. Aborted

(04:35:21 PM) JayFoxRox: mupen64plus: r4300/new_dynarec/assem_arm.c:56: get_pointer: Assertion `(*ptr&0x0ff00000)==0x05900000' failed. Aborted
BTW I found the cause of this. emit_extjump could output a literal pool without updating a pointer, so the pointer would point to the wrong location.

This went away once I started using movw/movt. I only noticed it because I benchmarked the old code to look for performance regressions.
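
For reference, the movw/movt form loads the whole 32-bit constant inline instead of via a literal pool. Roughly (a sketch, with output_w32 standing in for the instruction-word emitter; encodings per the ARM ARM):
Code:
 #include <stdint.h>
 #include <stdio.h>

 /* stand-in for the recompiler's "emit one instruction word" routine */
 static void output_w32(uint32_t insn) { printf("%08x\n", insn); }

 /* load a full 32-bit constant with a movw/movt pair (ARMv7) */
 static void emit_movimm32(uint32_t imm, int rd)
 {
     /* movw rd,#(imm & 0xffff) -- low halfword */
     output_w32(0xE3000000u | (((imm >> 12) & 0xFu) << 16)
                            | ((uint32_t)rd << 12) | (imm & 0xFFFu));
     /* movt rd,#(imm >> 16) -- high halfword */
     output_w32(0xE3400000u | (((imm >> 28) & 0xFu) << 16)
                            | ((uint32_t)rd << 12) | ((imm >> 16) & 0xFFFu));
 }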
 
Ari64 said:
JayFoxRox said:
(10:29:30 PM) Pickle: mupen64plus: r4300/new_dynarec/assem_arm.c:56: get_pointer: Assertion `(*ptr&0x0ff00000)==0x05900000' failed. Aborted

(04:35:21 PM) JayFoxRox: mupen64plus: r4300/new_dynarec/assem_arm.c:56: get_pointer: Assertion `(*ptr&0x0ff00000)==0x05900000' failed. Aborted
BTW I found the cause of this. emit_extjump could output a literal pool without updating a pointer, so the pointer would point to the wrong location.

This went away once I started using movw/movt. I only noticed it because I benchmarked the old code to look for performance regressions.
That's great news.

Now for the bad news. After many hours of debugging, I finally asked about the framebuffer corruption on the PowerVR forums. They think it might be a driver error exposed by the large number of glDrawArrays() calls that I make. Looks like we'll need to wait for a new driver before it's fixed. In the meantime I'll help them isolate a test case and possibly rewrite some of the code to improve geometry batching (not sure how much can be done, however; it seems most N64 games did very little batching since they had lower-level access).

In other news, I've rewritten the combiner compiler and the OGL renderer so they don't depend on WES/fixed function any more. I think this has improved compatibility (the fancy N64 logo at the beginning of Ocarina of Time displays properly now, anyway). It took a while for me to realize that the N64 combiner is just doing ( a - b ) * c + d, where a, b, c, d are a variety of colour/alpha sources. A lot of the combiner code in glN64 was just trying to optimize this equation and then map it to glTexEnv stages, as well as mapping the constant colour sources to glColor and glSecondaryColor (now they're just uniforms in the shader).
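
Per channel it boils down to this (just the equation in C, not the actual gles2n64 shader code):
Code:
 /* a, b, c, d are whatever colour/alpha sources the current combine
    mode selects; the constant ones become uniforms in the shader */
 typedef struct { float r, g, b, a; } color4;

 static color4 combine(color4 a, color4 b, color4 c, color4 d)
 {
     color4 o;
     o.r = (a.r - b.r) * c.r + d.r;
     o.g = (a.g - b.g) * c.g + d.g;
     o.b = (a.b - b.b) * c.b + d.b;
     o.a = (a.a - b.a) * c.a + d.a;
     return o;
 }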

EDIT: Ari64, have you set up a repo, or is there any way we can access your current code base?
 
Adventus said:
In other news, I've rewritten the combiner compiler and the OGL renderer so they don't depend on WES/fixed function any more. I think this has improved compatibility (the fancy N64 logo at the beginning of Ocarina of Time displays properly now, anyway). It took a while for me to realize that the N64 combiner is just doing ( a - b ) * c + d, where a, b, c, d are a variety of colour/alpha sources. A lot of the combiner code in glN64 was just trying to optimize this equation and then map it to glTexEnv stages, as well as mapping the constant colour sources to glColor and glSecondaryColor (now they're just uniforms in the shader).

You're probably all set now, but just in case: have you read MooglyGuy's blog post on the N64 color combiner? It's extremely informative. I have his AIM too, if you want to ask anything in particular. He probably knows more about N64's graphics (and maybe N64 in general) than any other emu coder. But I've never actually IMed him because he scares me. :s
 
Exophase said:
You're probably all set now, but just in case: have you read MooglyGuy's blog post on the N64 color combiner? It's extremely informative. I have his AIM too, if you want to ask anything in particular. He probably knows more about N64's graphics (and maybe N64 in general) than any other emu coder. But I've never actually IMed him because he scares me. :s
I wish I had read that stuff; it basically describes everything I just figured out. At the moment my best source of information is rice_video. I'll probably end up adding some of the features from rice_video to gles2n64 (i.e. Conker's Bad Fur Day ucode, alternative texture packs, etc.). MooglyGuy does seem to have a... reputation, though I think he's more knowledgeable about the actual hardware than about how I might get good performance.
 
Adventus said:
Ari64, have you set up a repo, or is there any way we can access your current code base?
http://bunnitude.com/ari64/mupen64plus-arm-20091209.tar.gz


Changes since -20090909:

Compile-time option to generate either ARMv5 or ARMv7 instructions.

Fixed 64-bit shift instructions.

During a return from interrupt, check which registers contain 32-bit values instead of assuming they are all 64-bit (avoids unnecessary recompilation).

Optimize clean/dirty state to avoid excessive writeback of registers, and avoid writeback inside of loops.

Delay writeback of modified registers by one instruction to avoid stall on dual-issue CPUs.

Optimize branch prediction on Cortex-A8 by using unconditional branches in loops (compile-time option).

Fixed literal pool bugs in emit_extjump and do_dirty_stub.
 
I have some SH2 code where a conditional branch lands in the delay slot of another conditional branch, and this messes up my recompiler. Do you handle that case on MIPS?
 
notaz said:
I have some SH2 code where a conditional branch lands in the delay slot of another conditional branch, and this messes up my recompiler. Do you handle that case on MIPS?
Yeah, it messes up the recompiler.

Right now I leave the branch unresolved in cases like this, so when the branch is taken, it jumps back to the recompiler and recompiles that part. It ought to be possible to just compile that one instruction again as part of the branch that jumps to it, though I'd need to make sure the register mapping was correct, and adjust the cycle count to account for it. Kind of a pain in the butt, so I haven't done it yet.
 
http://bunnitude.com/ari64/mupen64plus-arm-20091214.tar.gz


Changes since -20091209:

Fix bug in constant propagation

Optimize MTC1/MFC1 instructions

Preserve 32-bit flag when using DADD as a register move operation

Fixed a bug in the data flow analysis (didn't count delay slots)
 
There is a new version of Mupen64Plus out: http://www.emutalk.net/showthread.php?t=50098 It is the first beta (christened 1.99.1) of the forthcoming and long-awaited Mupen64Plus 2.0.
Overall, this version is more portable than the previous one:
# Modular architecture: instead of monolithic Mupen64Plus releases, the core, front-end, and all plugins will be released separately in the future
# Simplified, more portable emulator Core
# Removed GUI code from plugins, making them simpler and more portable
 