Mupen64Plus


Can I also please give you a patch for minor changes to Opengl.cpp that get the current code to run on the Pandora? It would help if the changes made it into the SVN.
Sure, any patches would be most welcome. Just PM them to me or send them to my Gmail account. Also, if anyone wants SVN commit rights, just contact me.

I've tried the NEON additions and the plugin runs for a bit and then segfaults. No 3D is displayed, though I did get the Mario Kart background of the intro to display.
Did it fail on both __NEON_OPT's?

Not too surprised about the NEON not working straight away; I really need to get my board working properly so I can test this stuff for myself. I was hoping someone could debug it for me. Currently my Mupen compile fails on the SDL_SetVideoMode() function. If I can't fix it myself, I guess I'll have to wait for the stable Rev4 distribution.
 
Exophase said:
Ari64 said:
I suspect that the more complex instruction decoding (due to variable-length instructions) is going to hurt performance, but it'd be nice to see some actual numbers.

The Thumb-2 instruction decoding is free on Cortex-A8. If anything, averaging more instructions per fetch improves throughput.
I think you misunderstood what Ari64 meant: he wanted to say that code generation will be slowed down because instruction selection will be more complex; for instance when doing an add, you'd have to choose the 16-bit or the 32-bit variant depending on the operands.
 
Laurent said:
Exophase said:
Ari64 said:
I suspect that the more complex instruction decoding (due to variable-length instructions) is going to hurt performance, but it'd be nice to see some actual numbers.

The Thumb-2 instruction decoding is free on Cortex-A8. If anything, averaging more instructions per fetch improves throughput.
I think you misunderstood what Ari64 meant: he wanted to say that code generation will be slowed down because instruction selection will be more complex; for instance when doing an add, you'd have to choose the 16-bit or the 32-bit variant depending on the operands.
That's not quite what I was getting at. The issue is how the CPU decodes the instructions as they enter the pipeline. With fixed-size instructions, the CPU can decode two instructions at once, and both will be valid.

If you have variable-size instructions, and you initially assume that it's two 16-bit instructions, then when it turns out to be one 32-bit instruction, your decoding is limited to one 32-bit instruction per clock cycle. Of course it would be possible to have additional decode units, and discard the instructions that turn out to be misaligned, but this does not make for a low-power design, so I'm wondering if ARM really did this.

What happens on some x86 CPUs is that the instruction lengths are cached, so the first time the code executes it will only do one instruction per clock, but the second time it will decode two. Of course this makes the cache more complicated.

For code generation, the assembler already has to figure out whether the immediate value will fit within a rotated 8-bit constant, so the complexity of this would not significantly change.
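
For anyone curious what that check looks like, here is a minimal sketch in C of testing whether a value fits an ARM rotated 8-bit immediate (illustration only, not the actual assembler or new_dynarec code; the function name is made up):

```c
#include <stdint.h>

/* An ARM data-processing immediate is an 8-bit value rotated right by an
   even amount (0, 2, ..., 30). Return nonzero if 'val' can be encoded
   that way; an assembler or recompiler falls back to a longer sequence
   otherwise. */
static int fits_rotated_imm8(uint32_t val)
{
    for (int rot = 0; rot < 32; rot += 2) {
        /* Rotating left by 'rot' undoes a right rotation by 'rot'. */
        uint32_t v = rot ? ((val << rot) | (val >> (32 - rot))) : val;
        if ((v & ~0xFFu) == 0)
            return 1;
    }
    return 0;
}
```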
 
Adventus: Both Pickle and I removed the SDL_SetVideoMode() call from the plugin. Pickle didn't give a reason for this, but I think he mentioned it. For me, whenever I leave it in, the framebuffer gets resized and my TV-Out doesn't work anymore (picture of the bad framebuffer).
It still displays at the correct size no matter what resolution you use (probably hardcoded elsewhere), so you should just comment out that call for now.
However, I had no problems keeping it in; it just resulted in the problems described above.
I have yet to see whether there is any use for my patches once Pickle supplies his.
(However, my makefile might be a bit better. Even if I suck at writing makefiles, mine is a bit cleaner now, which makes it easy to use on another system, and it will copy the *.so to your plugin directory, which is also useful.)


Ari64:

(10:29:30 PM) Pickle: mupen64plus: r4300/new_dynarec/assem_arm.c:56: get_pointer: Assertion `(*ptr&0x0ff00000)==0x05900000' failed. Aborted

(04:35:21 PM) JayFoxRox: mupen64plus: r4300/new_dynarec/assem_arm.c:56: get_pointer: Assertion `(*ptr&0x0ff00000)==0x05900000' failed. Aborted

This is something we both had, as you can see. I get this assertion in about 50% of all tries, sometimes more, sometimes less. It might be a memory alignment or range problem; it's still kind of annoying. It usually happens right after the title screen of Mario 64, just before the "Mario's head" screen where you can drag his face around, after the screen has already faded away. It's one of the major problems right now in my opinion, because it makes development of plugins so much harder if you have to restart your emulator every time, waiting for the bug to happen or for the game to continue so you can test your plugin in action.
I'm still using the version from the first post; if there is a newer version, it would be nice if you could add a new link to the first post.
 
JayFoxRox said:
Ari64:

(10:29:30 PM) Pickle: mupen64plus: r4300/new_dynarec/assem_arm.c:56: get_pointer: Assertion `(*ptr&0x0ff00000)==0x05900000' failed. Aborted

(04:35:21 PM) JayFoxRox: mupen64plus: r4300/new_dynarec/assem_arm.c:56: get_pointer: Assertion `(*ptr&0x0ff00000)==0x05900000' failed. Aborted

This is something we both had, as you can see. I get this assertion in about 50% of all tries, sometimes more, sometimes less. It might be a memory alignment or range problem; it's still kind of annoying. It usually happens right after the title screen of Mario 64, just before the "Mario's head" screen where you can drag his face around, after the screen has already faded away. It's one of the major problems right now in my opinion, because it makes development of plugins so much harder if you have to restart your emulator every time, waiting for the bug to happen or for the game to continue so you can test your plugin in action.
I'm still using the version from the first post; if there is a newer version, it would be nice if you could add a new link to the first post.
Try this version: http://www.gp32x.de/board/index.php?/topic/49358-mupen64plus/page__view__findpost__p__754233

I've run the Mario intro many times and not seen this. If it's still happening, let me know and I'll try to find a reproducible case to debug.
 
Laurent said:
I think you misunderstood what Ari64 meant: he wanted to say that code generation will be slowed down because instruction selection will be more complex; for instance when doing an add, you'd have to choose the 16-bit or the 32-bit variant depending on the operands.

It should be clear from his response that I didn't misunderstand what he was saying.

Ari64 said:
If you have variable-size instructions, and you initially assume that it's two 16-bit instructions, then when it turns out to be one 32-bit instruction, your decoding is limited to one 32-bit instruction per clock cycle. Of course it would be possible to have additional decode units, and discard the instructions that turn out to be misaligned, but this does not make for a low-power design, so I'm wondering if ARM really did this.

What happens on some x86 CPUs is that it caches the instruction lengths, so the first time the code executes, it will only do one instruction per clock, but the second time it will decode two. Of course this makes the cache more complicated.

This is an over-generalization. You don't need a full decode sequence to determine whether an instruction is 16-bit or 32-bit. The core most likely resolves which combination of 16- and 32-bit instructions the next two in the fetch packet are before real decoding begins. There are only four combinations, and determining whether a Thumb instruction is 16-bit or 32-bit is fairly simple; it doesn't require a lot of interim decoding like determining x86 instruction length does. I can't find a good description of exactly how it's performed, but your assumptions are invalidated by the lack of a performance penalty mentioned by ARM, not to mention the Thumb-2 benchmarks, which are consistently close to ARM speed. If all 32-bit instructions were forced to single-issue, the performance wouldn't be nearly as good and I'm sure ARM would say something.

Biggest proof: if you were right, Laurent would have been preaching this from day one as the biggest disadvantage of Thumb-2.
 
Exophase said:
It should be clear from his response that I didn't misunderstand what he was saying.
Indeed :)

This is an over-generalization. You don't need a full decode sequence to determine whether an instruction is 16-bit or 32-bit. The core most likely resolves which combination of 16- and 32-bit instructions the next two in the fetch packet are before real decoding begins. There are only four combinations, and determining whether a Thumb instruction is 16-bit or 32-bit is fairly simple; it doesn't require a lot of interim decoding like determining x86 instruction length does. I can't find a good description of exactly how it's performed, but your assumptions are invalidated by the lack of a performance penalty mentioned by ARM, not to mention the Thumb-2 benchmarks, which are consistently close to ARM speed. If all 32-bit instructions were forced to single-issue, the performance wouldn't be nearly as good and I'm sure ARM would say something.

Biggest proof: if you were right, Laurent would have been preaching this from day one as the biggest disadvantage of Thumb-2.
As I previously wrote, discussing certain subjects might reveal too much :)

Deciding between 16- and 32-bit instructions is indeed not very difficult: you just have to check whether the top five bits of the first halfword are 11101, 11110, or 11111 (note this can add delay to an already critical path, but let's put that aside).
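
As a rough sketch of that width check in C (illustration of the Thumb-2 encoding rule only, nothing A8-specific):

```c
#include <stdint.h>

/* Return 4 if this halfword starts a 32-bit Thumb-2 encoding, else 2.
   A halfword begins a 32-bit instruction when its top five bits are
   0b11101, 0b11110 or 0b11111; everything else is a 16-bit instruction. */
static int thumb2_insn_size(uint16_t first_halfword)
{
    unsigned top5 = first_halfword >> 11;
    return (top5 == 0x1D || top5 == 0x1E || top5 == 0x1F) ? 4 : 2;
}
```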

However this raises many difficulties:

- potential increased density of branches
- a 32-bit instr can cross a cache line, and even worse that instr could be a branch
- a branch can jump to an instruction that crosses a cache line
- etc.

As far as benchmarks go, it's always the same problem: what is a good benchmark? And benchmarks made for marketing are even more dubious.

OTOH I have seen T2 programs faster than ARM ones, using the ARM C compiler (armcc). Sometimes saving I-cache has higher benefits than everything else.

But I'll stick to my position (which is a personal one): making T2 efficient has a cost that, if spent on other parts of the chip, would make the chip better.
 
Laurent said:
However this raises many difficulties:

- potential increased density of branches
- a 32-bit instr can cross a cache line, and even worse that instr could be a branch
- a branch can jump to an instruction that crosses a cache line
- etc.

Also dealing with instructions crossing a TLB boundary. Let's just say that it's a good thing that ARM is handling TLB loading in software, unlike some other RISC platforms. Do you know if there's an actual penalty for fetches broken between multiple cache lines? I guess there would almost have to be. Just the same, a compiler/ASM programmer can at least do their best to minimize these situations, although it's not especially easy.

Laurent said:
As far as benchmarks go, it's always the same problem: what is a good benchmark? And benchmark for marketing is even more dubious.

The performance problem that Ari64 speculated would most likely manifest noticeably in just about anything. It certainly wouldn't be absent from all known tests.

Laurent said:
But I'll stick to my position (which is a personal one): making T2 efficient has a cost, that if spent on other parts of the chip would make it better.

I don't disagree with you, but really this has nothing to do with how it performs on Cortex-A8, and if all you care about is optimizing for A8 then that doesn't make it an invalid option.
 
Exophase said:
Also dealing with instructions crossing a TLB boundary. Let's just say that it's a good thing that ARM is handling TLB loading in software, unlike some other RISC platforms. Do you know if there's an actual penalty for fetches broken between multiple cache lines? I guess there would almost have to be. Just the same, a compiler/ASM programmer can at least do their best to minimize these situations, although it's not especially easy.
Didn't you mean "isn't handling TLB loading in software"?
For the penalty I can't say for sure, but I guess there is one, because otherwise it would cost a lot of power (basically you'd have to make two parallel Icache requests).

Laurent said:
As far as benchmarks go, it's always the same problem: what is a good benchmark? And benchmark for marketing is even more dubious.

The performance problem that Ari64 speculated would most likely manifest noticeably in just about anything. It certainly wouldn't be absent from all known tests.
You're right, sorry. I went too far due to being frustrated by benchmarking in general.

I don't disagree with you, but really this has nothing to do with how it performs on Cortex-A8, and if all you care about is optimizing for A8 then that doesn't make it an invalid option.
That's exactly why I want to test T2 on QEMU ;)
 
Laurent said:
Didn't you mean "isn't handling TLB loading in software"?

Yeah, specifically I meant to write "in hardware."

Laurent said:
For the penalty I can't say for sure, but I guess there is one, because otherwise it would cost a lot of power (basically you'd have to make two parallel Icache requests).

I wonder if GCC knows to try to optimize around this, at any rate. Since you can always use a 32-bit instruction instead of a 16-bit one, you can opt to do this to keep instructions aligned where necessary; it just means that a 16-bit instruction earlier in the block has to be converted to 32 bits to push the misaligned instruction over the boundary.
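
A rough sketch of that idea in C (purely illustrative; I'm not claiming GCC does exactly this, and the cache-line size is an assumption):

```c
#include <stddef.h>

#define LINE_BYTES 64  /* assumed I-cache line size */

/* sizes[i] is 2 or 4 (bytes) for each Thumb-2 instruction of a block that
   starts on a cache-line boundary. If a 4-byte instruction would straddle
   a line and the instruction right before it is 2 bytes, widen that one to
   its 32-bit form so the straddler slides onto the next line. A real pass
   would have to look further back (or iterate) when the predecessor is
   already 4 bytes. */
static void pad_for_cache_lines(int *sizes, size_t n)
{
    size_t offset = 0;  /* byte offset of instruction i within the block */
    for (size_t i = 0; i < n; i++) {
        if (sizes[i] == 4 && offset % LINE_BYTES == LINE_BYTES - 2 &&
            i > 0 && sizes[i - 1] == 2) {
            sizes[i - 1] = 4;  /* the predecessor stays within its line,
                                  since it started 4 bytes before the
                                  boundary once widened */
            offset += 2;       /* everything from here on shifts by 2 */
        }
        offset += sizes[i];
    }
}
```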

Laurent said:
That's exactly why I want to test T2 on QEMU ;)

I look forward to it :D
 
Cpasjuste: No, that's another story. The SDL function is used to set the resolution to 640x480, as that's not done by EGL. But as mentioned before, this has the side effect that the TV-Out doesn't work anymore (or it might even be the SGX, because it can't handle the resolution change of the framebuffer). So if you want your framebuffer at 640x480 AND want EGL, do both calls. However, if you only want to use EGL at whatever resolution is present, don't do it. You'd better not do it anyway, because the Pandora will probably set the resolution itself, so the LCD resolution is fine (800x480) and TV-Out is in an acceptable range too. The better solution would be to get the framebuffer resolution and change your viewport, or rescale the output in the shader.
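
For the viewport approach, a minimal sketch (assuming a Linux framebuffer at /dev/fb0 and an already-created EGL/GLES context; the function name is made up and this is not taken from the plugin):

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fb.h>
#include <GLES/gl.h>   /* glViewport is the same in GLES 1.x and 2.0 */

/* Instead of forcing 640x480 with SDL_SetVideoMode(), read whatever mode
   the framebuffer is currently in and stretch the output to it. */
static void setup_viewport(void)
{
    struct fb_var_screeninfo vinfo;
    int fd = open("/dev/fb0", O_RDONLY);
    if (fd >= 0) {
        if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) == 0)
            glViewport(0, 0, (GLsizei)vinfo.xres, (GLsizei)vinfo.yres);
        close(fd);
    }
}
```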
 
I've got mupen working on my board... it was just as simple as removing the set video call and fixing some linking errors. I've also begun debugging some of the NEON optimisations. At the moment I have NEON matrix multiplication and matrix-vector multiplication working.
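
For reference, a NEON 4x4 matrix * vector multiply with intrinsics looks roughly like this (a sketch of the general technique with a column-major matrix assumed; illustration only, not the plugin's actual code):

```c
#include <arm_neon.h>

/* out = m * v for a column-major 4x4 float matrix: each column is scaled
   by one component of v and accumulated. */
static void mat4_mul_vec4_neon(const float m[16], const float v[4], float out[4])
{
    float32x4_t acc = vmulq_n_f32(vld1q_f32(m + 0),  v[0]);
    acc = vmlaq_n_f32(acc, vld1q_f32(m + 4),  v[1]);
    acc = vmlaq_n_f32(acc, vld1q_f32(m + 8),  v[2]);
    acc = vmlaq_n_f32(acc, vld1q_f32(m + 12), v[3]);
    vst1q_f32(out, acc);
}
```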

I'm running at the default Pandora devboard CPU speed (500 MHz?) and 800x640 resolution. For Super Mario 64 I get 15-19 fps during the in-game intro sequences. The Mario face is about 18 fps at first, but after the first in-game sequence it drops to about 10 fps... which is weird, since it's rendering the exact same thing.

@Ari64 I get the same error as the others (with the newest source); 75% of the time it doesn't make it past the title screen with an assertion failure.

@JayFoxRox Could you possibly send me the input plugin? Having to power cycle to quit isn't exactly ideal. :)
 
The PSP N64 emulator seems to get lower FPS than that but gives the illusion of faster speed via some kind of auto frame skipping. Although I'm not sure if the PSP one is optimised with HLE for some games.

Changing the rendering resolution does not seem to make any difference speed wise in anything we have tried, so it might as well be the full LCD res.
 
craigix said:
The PSP N64 emulator seems to get lower FPS than that but gives the illusion of faster speed via some kind of auto frame skipping. Although I'm not sure if the PSP one is optimised with HLE for some games.

Changing the rendering resolution does not seem to make any difference speed wise in anything we have tried, so it might as well be the full LCD res.
Well, Super Mario 64 plays very smoothly and at really full speed, but only at the castle.
And you can set frame skipping yourself inside the emulator settings :)
 
Adventus said:
@Ari64 I get the same error as the others (with the newest source); 75% of the time it doesn't make it past the title screen with an assertion failure.
I am not able to reproduce this.

The recompiler keeps track of all the branches that it generates, so that it can remove them when code is invalidated or it needs to free memory. This assertion happens when the recompiler follows one of these pointers, and does not find the instructions that it expects to be there.

I'm guessing something is not getting initialized properly or not getting cleaned up properly. Are you reloading the ROM, or restarting it somehow, or using save states?

Perhaps you could send me a full source tree so I can build exactly what you are, and I can try to debug that.
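
For anyone hitting that assertion, here is a rough illustration in C of what it amounts to (simplified, not the real code in assem_arm.c): once the condition code, registers and 12-bit offset are masked out, the word at the recorded location is expected to be an ARM "LDR Rd, [Rn, #+imm]" (single word load, immediate offset, pre-indexed, no writeback), whose bits 27..20 are 0x59:

```c
#include <stdint.h>

/* Illustration of the check behind the get_pointer() assertion: if the
   masked word is not the expected LDR encoding, the recompiler's
   branch-tracking list and the emitted code have gone out of sync. */
static int looks_like_expected_ldr(const uint32_t *ptr)
{
    return (*ptr & 0x0ff00000u) == 0x05900000u;
}
```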
 
Adventus said:
@JayFoxRox Could you possibly send me the input plugin? Having to power cycle to quit isn't exactly ideal. :)

A general-purpose driver is in the works so the Pandora hardware is also available via a real joystick device that original plugins can handle. At the same time I'm writing another tool to create a joystick over the network which can replace the Pandora hardware devices (in case your nubs are missing). I'll upload it tomorrow hopefully - I'm rewriting it all for release atm.
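
For the curious, one common way to expose such a replacement device so that unmodified plugins see a normal joystick is the Linux uinput interface; a bare-bones sketch (an assumption about the approach, not necessarily what the tool actually does):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>
#include <linux/uinput.h>

/* Create a virtual two-axis device that joystick-reading plugins can open
   like real hardware, then feed it one X-axis sample. In a network tool,
   the values would come from a socket instead of being hard-coded. */
int main(void)
{
    int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
    if (fd < 0) return 1;

    ioctl(fd, UI_SET_EVBIT, EV_ABS);
    ioctl(fd, UI_SET_ABSBIT, ABS_X);
    ioctl(fd, UI_SET_ABSBIT, ABS_Y);

    struct uinput_user_dev dev;
    memset(&dev, 0, sizeof(dev));
    strncpy(dev.name, "network-nub", UINPUT_MAX_NAME_SIZE - 1);
    dev.absmin[ABS_X] = -32768; dev.absmax[ABS_X] = 32767;
    dev.absmin[ABS_Y] = -32768; dev.absmax[ABS_Y] = 32767;
    write(fd, &dev, sizeof(dev));
    ioctl(fd, UI_DEV_CREATE);

    struct input_event ev;
    memset(&ev, 0, sizeof(ev));
    ev.type = EV_ABS; ev.code = ABS_X; ev.value = 12000;
    write(fd, &ev, sizeof(ev));
    ev.type = EV_SYN; ev.code = SYN_REPORT; ev.value = 0;
    write(fd, &ev, sizeof(ev));

    ioctl(fd, UI_DEV_DESTROY);
    close(fd);
    return 0;
}
```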
 