Why Can't The Pandora Emulate The PS2?


Basically, this is the process:

Source Code -> Compiler -> Compiled Code (machine code)

Just looking at it like this, it seems like you could do

Source Code <- decompiler <- Compiled Code

The problem is that the compiler step is not a 1:1 function. It makes some alterations that cannot be reversed.

The compiler applies various optimization techniques (removing 'dead' (unreachable) code, unrolling some loops, converting switch statements to jump tables, etc.) that can't be undone.
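As a rough C illustration of why that's irreversible (my own made-up example, nothing from any real game): a typical optimizing compiler will reduce both of these functions to the same "return 10" machine code, so a decompiler has no way of telling which source you actually wrote.

/* Hedged illustration: two different source functions that an optimizing
 * compiler will usually fold into identical machine code. */

int version_a(void)
{
    int total = 0;
    for (int i = 1; i <= 4; i++)   /* loop the optimizer can unroll...      */
        total += i;
    return total;                  /* ...and then fold down to a constant   */
}

int version_b(void)
{
    if (0) {                       /* dead code the compiler simply removes */
        return -1;
    }
    return 10;                     /* same constant as version_a            */
}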

So no, you can't get the original source code out of the compiled code. However, one thing you could do is...

Suppose you have a program like this (this is not actual code it's just an example) on one machine:

ADD 1 to X and Y
STORE X in MEMORY at location 3000


And now you want to move it to another machine.
Say it doesn't know how to ADD to X and Y at the same time, but all the other operations are the same.
So you could just rewrite this program as

ADD 1 to X
ADD 1 to Y
STORE X in MEMORY at location 3000

Notice that you aren't converting it to the original C source code or anything, you're just replacing instructions with other instructions that do 'the same thing'.
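Something like this rough C sketch shows the idea (the opcode names are made up for illustration; this isn't any real emulator's code): walk the guest instructions once and emit equivalent host operations.

#include <stdio.h>

/* Hypothetical guest instruction set, just for this example. */
enum guest_op { ADD1_XY, STORE_X };

struct guest_insn { enum guest_op op; int operand; };

/* Translate each guest instruction into one or more host operations.
 * Here the "host code" is just printed as pseudo-assembly; a real static
 * recompiler would emit actual machine code (or C) instead. */
static void translate(const struct guest_insn *code, int count)
{
    for (int i = 0; i < count; i++) {
        switch (code[i].op) {
        case ADD1_XY:                 /* host can't add to X and Y at once, */
            printf("ADD 1 to X\n");   /* so split it into two instructions  */
            printf("ADD 1 to Y\n");
            break;
        case STORE_X:
            printf("STORE X in MEMORY at location %d\n", code[i].operand);
            break;
        }
    }
}

int main(void)
{
    struct guest_insn program[] = { { ADD1_XY, 0 }, { STORE_X, 3000 } };
    translate(program, 2);
    return 0;
}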

This is called Static Recompilation. It's similar to Dynamic Recompilation (Dynarec) that you hear about in emulators.
The difference is that static recompilation happens ONCE, ahead of time; dynamic recompilation runs every time you run the program.

So why would you EVER want to do dynamic recompilation? Static sounds better!


The problem is that in most cases, code has the ability to MODIFY itself.
So you might have something like this:
ADD 1 to X
IF X is 3 CHANGE next instruction to SUBTRACT 1 from X
ADD 1 to X

So now, since we don't know what 'X' is (whether it's going to be 3), we don't know if the third line should be ADD or SUBTRACT. So basically, static recompilation can't really handle situations like this.
Dynamic recompilation would just wait until it knew what the value of X was, then change the line to ADD or SUBTRACT, and then compile it. It has the current state of the program so it can properly handle self-modifying code.
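A common way dynarecs cope with self-modifying code is block invalidation; here's a minimal C sketch of that idea (made-up names and a deliberately crude cache, not any particular emulator's implementation):

#include <stdbool.h>
#include <stdint.h>

#define BLOCKS 1024

/* One entry per translated block of guest code (plus, in a real dynarec,
 * a pointer to the generated host code). */
struct block { bool valid; };
static struct block cache[BLOCKS];

/* Called on every guest memory write: if the write lands in a region we
 * already translated, throw that translation away. */
void on_guest_write(uint32_t addr)
{
    cache[(addr / 64) % BLOCKS].valid = false;
}

/* Called before executing a block: (re)translate it if needed, reading the
 * guest memory as it is *right now*, so self-modified code is picked up. */
void run_block(uint32_t pc)
{
    struct block *b = &cache[(pc / 64) % BLOCKS];
    if (!b->valid) {
        /* recompile_block(pc);  -- hypothetical helper */
        b->valid = true;
    }
    /* execute_translated(b);    -- hypothetical helper */
}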

Some emulators have used static recompilation before. The emulator Corn is known, essentially, as the fastest N64 emulator ever made, but its compatibility was low because it couldn't handle games that used self-modifying code. Mario 64 wasn't one of them, so Corn could run Mario 64 on a lot of machines that no other emulator could.

It's a pain and because of the low compatibility, not many people do it.

I'm actually pretty interested in static recompilation, but I can't seem to find any statistics on which systems really self-modify code. Everyone just does dynarecs these days 'cause in most cases it's not worth the compatibility hit to do static recompilation. But if you do have a program that can be statically recompiled it should be pretty fast :)
 
My understanding is that compiled machine code can be converted to uncommented assembly for the architecture it was compiled for. While porting could be done with such a disassembly, it would be very impractical, especially since source and target architectures are unlikely to have a one-to-one correspondence of assembly instructions. The only practical use for disassembling a binary I am aware of is reprogramming ROMs intended to run on the same system as the original.

Realistically speaking, if the PS2 had any professional-grade games with open or leaked source code, would an optimized port be able to run with decent performance on the Pandora?

Edit: :ph34r:
 
Jeffery Mewtamer said:
Realistically speaking, if the PS2 had any professional-grade games with open or leaked source code, would an optimized port be able to run with decent performance on the Pandora?

[edit to completely change my post]

It seems like it would be very hard. It depends on whether there's a very well-defined SDK that you also have access to, and you'd probably have to port the SDK first.
 
XxionxX said:
Hm... Nothing on Google for "static recompiling", but I found stuff on "static compiling"; is that the same thing? There was some information, but it seemed to be above my pay grade. Should I just ask Exophase? Figure it out, got it!
Static recompiling is the process of taking the compiled code for a game and producing code which has the same result on a different machine. Generally a technique called dynamic recompiling, where this is done as the game runs, is preferred, as it allows it to handle situations that can't be predicted, such as self-modifying code (you can't convert an instruction that doesn't exist until the code runs).

HOWEVER, this is by no means simple. Differences in hardware are a lot more complicated than just a different "language", as it were. For example, the processor you want to use may have fewer registers (spaces to temporarily store data the processor is working on) than the original machine. Handling an instruction to "copy something into register 17" when you only have 16 registers requires a lot more work than it did on the original machine, however you handle it. This is just one relatively simple example of a possible problem, and there are many others, all of them slowing down the running of the program.
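To make the register example concrete, here's a hedged C sketch (invented names, not taken from any real recompiler): the guest's registers simply live in an array in host memory, and any guest register that can't be kept in a host register turns into memory traffic.

#include <stdint.h>

/* The guest's 32 MIPS-style registers, kept in ordinary memory on the
 * host because the host has fewer hardware registers than the guest. */
static uint32_t guest_reg[32];

/* "Copy a value into guest register 17": if the recompiler can't keep
 * r17 cached in a host register, it becomes a store (a "spill") and a
 * later reload -- extra work the original machine never had to do for
 * the same instruction. */
void write_guest_reg(int r, uint32_t value)
{
    guest_reg[r] = value;
}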

So it is impossible to get the source code from just looking at the compiled code? Or illegal? Or did WizardStan have a better answer to my question?
It is extremely difficult to determine source code from a compiled binary, and is generally considered to be practically impossible.
I believe there have been various attempts to create a decompiler (a program that does this) and none of them have been particularly successful.

The fact is, compiled code is not just some vague set of instructions to make stuff happen.
It is a very precise set of instructions for a very specific set and arrangement of hardware, and without taking very careful consideration of that hardware, you cannot do anything with those instructions.

EDIT:
@rabidpoobear
While that is an excellent explanation of static and dynarec (better than mine), you might want to avoid giving the impression that using them would make PS2 emulation possible.
Also, Corn used extensive HLE, which probably was a lot more beneficial to its speed than the static recompilation process itself.
 
Prometheus said:
That's a double-negative.

You're double negative. ~_~

Corn isn't really a static recompiler; it's just a deep recursive one (same as, say, gpSP)... it'll still recompile blocks dynamically as it finds them. Static recompilation requires predicting where blocks start even if they're not linked by direct branches, which can be difficult or even impossible. It's a worse problem than self-modifying code.
 
Exo, thanks for the info; I had trouble finding much info on Corn. I'm a complete NOOB at all this emu-writing business, so I apologize in advance for other errors I'll no doubt make :p

Aninhumer, I didn't think I gave that impression, which section made it sound like that?
 
Oh I don't blame anyone for Corn, since the author himself called it static recompilation (or at least said it was like that). It's mainly semantics, but I think it has to be distinguished from what people generally consider static recompilation.

On the other hand, this is also inferred from how he described it - since the source wasn't released, I can't know for sure.

Interestingly, there's actually someone who goes by ContraSF on an IRC channel I've been in, but he's never responded to me. I always wondered if he could be the real thing, heh.
 
Oh nice, I wonder if it is! That would be cool.

Yeah, Corn is awesome, before my time though. Too bad Contra never open-sourced it.

btw is exophase.com your site?
 
No, read the disclaimer at the bottom that I made them put in for stealing my name. :c
 
Exophase said:
No, read the disclaimer at the bottom that I made them put in for stealing my name. :c
What does "Exophase" mean anyway? It kinda reminds me of the stages of mitosis.
 
rabidpoobear said:
Aninhumer, I didn't think I gave that impression, which section made it sound like that?
This bit:
But if you do have a program that can be statically recompiled it should be pretty fast :)
 
rabidpoobear said:
Basically, this is the process:

Source Code -> Compiler -> Compiled Code (machine code)

Just looking at it like this, it seems like you could do

Source Code <- decompiler <- Compiled Code

The problem is that the compiler step is not a 1:1 function. It makes some alterations that cannot be reversed.
There's one other catch as well: source code contains programming and data. When you say
print "hello world"
print gets turned into a string of bytes the computer recognizes as an instruction, but "hello world" is also stored in the compiled code somewhere. Figuring out whether "hello world" (and not just strings; it could be any random set of data that may or may not mean anything to a human, such as a set of grade scores or the hex values for some colours) is something that should (or even can) be decompiled is a huge problem.
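As a hedged C illustration of the point (my example, not WizardStan's): once this is compiled, both the string and the little table of numbers end up as anonymous bytes sitting next to the instruction bytes, and nothing in the binary marks which is which.

#include <stdio.h>

/* After compilation, the string and the "grades" below are just bytes in
 * the binary, stored alongside the instruction bytes that use them.  A
 * disassembler or decompiler has to guess which bytes are code and which
 * are data, and a table like this could easily be mistaken for either. */
static const char message[] = "hello world";
static const unsigned char grades[] = { 72, 85, 90, 64 };

int main(void)
{
    printf("%s %u\n", message, grades[0]);
    return 0;
}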
 
Neko said:
Exophase said:
No, read the disclaimer at the bottom that I made them put in for stealing my name. :c
What does "Exophase" mean anyway? It kinda reminds me of the stages of mitosis.

:D Now I know where I heard this before...
 
Aninhumer said:
HOWEVER, this is by no means simple. Differences in hardware are a lot more complicated than just a different "language", as it were. For example, the processor you want to use may have fewer registers (spaces to temporarily store data the processor is working on) than the original machine. Handling an instruction to "copy something into register 17" when you only have 16 registers requires a lot more work than it did on the original machine, however you handle it. This is just one relatively simple example of a possible problem, and there are many others, all of them slowing down the running of the program.

That one is easy to handle though, to be fair - after all, most compilers do this in their sleep.

The PS2 has far harder problems - the big one is that it has weird floating point.

There are no NaNs, and there is no infinity.

Thus you have several unpleasant choices.

1) Ignore it and hope that people don't abuse it (which, according to the PCSX2 blog, they do).
2) Write code to try and handle it by checking for NaNs and turning them into zeros, and clamping infinite values down to non-infinite maximums (a rough sketch of this follows the list below).
- This has huge performance impacts (it would cost you 3-4 times as much), AND it is still inaccurate because the PS2 has a higher range due to its missing infinities.
3) Emulate it accurately by implementing the floating point in integer code. Probably a non-starter because of the massive performance slowdown.
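Option 2 looks roughly like this in C (a sketch of the general idea only, not PCSX2's actual code): squash NaNs to zero and pull infinities back to the largest finite float, and pay for that check around every emulated operation.

#include <math.h>
#include <float.h>

/* Force a host float back into the range of values the PS2's FPU can
 * produce (no NaN, no infinity).  Wrapping every emulated float op in
 * this is what makes the approach 3-4 times as expensive, and FLT_MAX
 * still isn't quite the PS2's true maximum. */
static float ps2_clamp(float x)
{
    if (isnan(x))
        return 0.0f;                        /* the PS2 has no NaNs       */
    if (isinf(x))
        return x > 0 ? FLT_MAX : -FLT_MAX;  /* ...and no infinities      */
    return x;
}

/* e.g. an emulated ADD would become:  result = ps2_clamp(a + b); */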

It also has operations such as "take this 16x8 SIMD value and divide it by this scalar", which are hard to do efficiently on a Cortex-A8.

Oh, and the GS is very different - it doesn't support multi-texturing, which means people emulate it, and it supports paletted textures, which the SGX doesn't. So that's also a pain to emulate.
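Paletted textures, for example, are usually handled by expanding them on the CPU before uploading; a rough, generic C sketch (not any particular emulator's code):

#include <stdint.h>

/* Expand an 8-bit paletted texture into RGBA so a GPU without palette
 * support (like the SGX) can sample it.  Sketch only; a real emulator
 * also has to re-expand whenever the game swaps the palette (CLUT). */
void depalettize(const uint8_t *indices, const uint32_t *clut,
                 uint32_t *out_rgba, int pixel_count)
{
    for (int i = 0; i < pixel_count; i++)
        out_rgba[i] = clut[indices[i]];   /* look each index up in the CLUT */
}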

The problems with static recompilation in general are quite well explained by Exophase. While not necessarily completely unsolvable, it's not a simple problem, or people would have done it already.
 
Nupfi said:
Neko said:
What does "Exophase" mean anyway? It kinda reminds me of the stages of mitosis.

:D Now I know where I heard this before...

I ain't mitosis :( This thread is mean.

The meaning behind "Exophase" is for me to pretend to know and you to pretend to find out.

andys said:
That one is easy to handle though, to be fair - after all, most compilers do this in their sleep.

He was referring to runtime, not difficulty of implementation.

People have a tendency to compare conventional compilers and recompilers - in reality, they're very different things. Machine code is nothing like intermediate language. A lot of higher order structure and metadata that seems subtle and is easy to take for granted is lost.

Specifically for register allocation a compiler will be working with local variables that have a known and limited scope. A recompiler will be working with effectively a bunch of global variables - only some of them will have visibly limited scope revealed due to liveness analysis. Basic blocks in a recompiler will also often be smaller than those in a compiler because of branches generated due to control flow structures, and they'll lack the same hierarchy.
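As a hedged sketch of what liveness analysis means here (made-up three-address instruction format, not from any real recompiler): scan a block backwards, and any guest register that is overwritten before it's read again is dead, so its value never needs to be written back to the in-memory register file.

#include <stdbool.h>

/* Made-up guest instruction form for illustration: dest = src1 op src2. */
struct insn { int dest, src1, src2; };

/* Backward scan over one basic block: given which of the 32 guest
 * registers are still needed after the block (live_out), compute which
 * are live on entry.  Registers that come out dead don't need to be
 * kept up to date while the block runs. */
void liveness(const struct insn *block, int n,
              const bool live_out[32], bool live_in[32])
{
    bool live[32];
    for (int r = 0; r < 32; r++)
        live[r] = live_out[r];          /* start from what later code needs */

    for (int i = n - 1; i >= 0; i--) {
        live[block[i].dest] = false;    /* overwritten: old value is dead   */
        live[block[i].src1] = true;     /* read: value must be available    */
        live[block[i].src2] = true;
    }

    for (int r = 0; r < 32; r++)
        live_in[r] = live[r];
}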

Inter-block allocation schemes in recompilers are possible but they're more difficult than inter-function in compilers, which is actually a pretty hard problem in its own right.
 
<-- Reads thread *foom* brain melts with information overload. *reboot* OK, so I think I got it...

Since the compilation process is not reversible, there is no way to get the source code back out of already compiled code; even to the extent it can be done, it is extremely difficult (there have been several different attempts at this?). On top of that, the PS2 also has weird floating point behaviour that the Pandora's hardware doesn't match, the code is designed to run on different hardware, and the graphics system has to be emulated because the GPU is different. I don't think I missed anything. *system crash*
 
CokeCanNinja said:
I was looking at the hardware of the PS2 and it seems like it has enough processing power. Or am I missing something?

The Pandora can't even emulate the DS at full speed, so PS2...
 
I haven't read through the whole thread... But asking this seems similar to asking "Why can't my netbook run Crysis well?" I don't even get what kind of answer the OP was expecting... :blink:
 
Exophase said:
Specifically for register allocation a compiler will be working with local variables that have a known and limited scope. A recompiler will be working with effectively a bunch of global variables - only some of them will have visibly limited scope revealed due to liveness analysis. Basic blocks in a recompiler will also often be smaller than those in a compiler because of branches generated due to control flow structures, and they'll lack the same hierarchy.
Do you know of any other recompilers that do liveness analysis? I'm not aware of any, and I did it only because I needed a way to reduce the amount of register swapping due to the large number of registers that MIPS has. It's a slightly dangerous optimization since you end up with garbage values in registers if an exception occurs. Because of this I often turn off the liveness analysis when debugging.

Exophase said:
Inter-block allocation schemes in recompilers are possible but they're more difficult than inter-function in compilers, which is actually a pretty hard problem in its own right.
Daedalus does it, sort of. It does hot-path compilation, eliminating the jumps.
 
In short, the "emulation" of a target system (in this case the PlayStation 2) is not as simple as having enough "power" or "speed" in the computer running it. Due to a great number of variables (in most circumstances the "quirks" of a console, or the type of processor/co-processors used and the general "lay of the land" of the hardware), it can be a complicated ordeal to try to do this in an effective way... let alone at 30/60 FPS. For example, the SEGA Saturn used a system of co-processors (among them custom Digital Signal Processors, set up in a complicated way), including a pair of main processors that executed code and sent off specific functions to the rest of the machine.

Emulators that tackle such a system first need a software environment to load and then run the game originally meant for the console. To do this you need to know what sort of code it is (is it for a MIPS-based processor, some sort of Zilog processor like in a Game Boy, an ARM setup like in some newer handhelds?). After you know what "language" it is in, you then need to know fairly accurately how it would be passed around the various doodads inside the machine. It takes in-depth knowledge of the system first to be able to "fake" the ROM, ISO or what have you into loading up. There are various methods, and some buzz terms that tend to be passed around forums are things like high-level emulation, registers, byte code emulation...

Emulating a system requires a great many things to happen in the correct order, the right way, and in a quick enough manner that you can actually PLAY a game at something approaching 30 FPS or higher. That's not even taking into account that many software developers take liberties with the hardware and either cut corners, frankly replace internal firmware code, or hack the console to get every last bit of speed out of it. A good case is the Factor 5 games developed for the N64. They created their own low-level microcode to replace the internal functions of the system responsible for handling real-time lighting IN the hardware itself. This resulted in a noticeable increase in the quality and speed of executing the effect, but due to the "special" nature of how the software actually interacted with the console... well, it can be next to impossible to run these games. You'd have to have the full-blown schematics and details of how the console was made to work to even start. And since Nintendo, Sony and Microsoft don't really like to give that kind of info away, well, there you go.

Also remember that "emulating" is different from creating a "compatibility layer". It's a small but important difference. A good analogy is this: if console "Z" originally had an "X" processor, and the newer computer "V" still has an "X" processor, even if it is newer and much faster/redesigned, you can still run mostly if not totally unmodified software/games in the "emulator" on your "V" computer, because it will still be running on something that reads and runs software almost exactly alike. If this weren't the case, you would need to "translate" (think Latin to Japanese) and then find the best/closest command on your current computer to bridge the gap, and then, after your processor has done this, translate it again into something the game can understand. Great examples of compatibility layers are WINE (Wine Is Not an Emulator) and the "Classic" environment for PowerPC-based Macs running Mac OS X. It's roughly the same environment; you just need the libraries and rules so that the game can do its stuff. In short, in those cases a LOT less "stuff" needs to be done, and because of this, compatibility is greater.

This may be partly why the Pandora has a good chance of running a decent Dreamcast emulator: the 3D GPU (Graphics Processing Unit, if you didn't know) is of the same family as, and a derivative of, the one originally in the Dreamcast. That fact may speed up the task greatly. It also helps that the system was developer-friendly (SEGA learned from the Saturn in this department) and had only one main processor and a programmable GPU.


One of the main reasons many people HATE this sort of thread/question is that it HAS been asked a thousand times. "Why can't X do Y?" is fairly repetitive, and after a while many people stop wanting to educate and simply lash out. Another factor is that a great many individuals consider this a moot point. I mean, if it could run PS2 games, wouldn't you already know about it...
 