Release wine + qemu


Exophase said:
zhasha said:
It doesn't have to be JIT; it could easily be a full recompilation + optimization.
Actually, static recompilation has many complications that make it less attractive than dynamic recompilation, and it comes with poorer compatibility. Indirect branch target sets can't be known directly, and while heuristics exist, they may not resolve all targets and will most likely cause more code to be generated than is ever branched to. The closer you get to covering the complete target set, the more false positives (and hence unused "code") you'll end up with. Dynamically loaded and self-modifying code can't be handled at all, which is a more fundamental problem than it may appear. And if you intend to store the translated executables on disk, you'd also be passing a large amount of the conversion bloat on to your filesystem.
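
(To make the indirect-branch point concrete, here is a small hypothetical C fragment - not from QEMU or any real recompiler - of the kind of code that defeats static target discovery: the branch target comes from runtime data, so a static translator can't enumerate the destinations just by inspecting the binary.)

Code:
/* Hypothetical example (not from QEMU): an indirect call through a
 * table indexed by runtime input. In the compiled x86 binary this is
 * just "call [table + idx*8]" - the real target set depends on data
 * the static recompiler never sees. */
#include <stdio.h>
#include <stdlib.h>

static void op_add(void)  { puts("add");  }
static void op_sub(void)  { puts("sub");  }
static void op_halt(void) { puts("halt"); }

static void (*const handlers[])(void) = { op_add, op_sub, op_halt };

int main(int argc, char **argv)
{
    /* The index comes from the command line, i.e. from outside the binary. */
    unsigned idx = (argc > 1) ? (unsigned)atoi(argv[1]) % 3u : 0u;
    handlers[idx]();            /* compiles to an indirect call */
    return 0;
}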
Could (or does) qemu do something in between? JIT recompilation, but it saves the results as it goes to some sort of scratch file, so the next time you run the program, it doesn't need to recompile that section at all? Or am I just being stupid?
 
WizardStan said:
Could (or does) qemu do something in between? JIT recompilation, but it saves the results as it goes to some sort of scratch file, so the next time you run the program, it doesn't need to recompile that section at all? Or am I just being stupid?

I see this brought up a lot. This is my opinion on the approach - it's not worth it. I think there's a tendency to vastly overestimate how much time is necessary for quality recompilation. I think this impression originates largely with the JVM bytecode JITs, which perform a lot of analysis and optimization that would not be necessary for binary translation. That, and I don't think they're really that well optimized anyway.

You could probably achieve very good x86-to-ARM recompilation without a lot of overhead - just enough linear-time forward/backward scanning to do propagation and dead-code elimination, followed by a quick linear conversion pass. A simple IR that's optimized for fast manipulation and maps well onto both instruction sets also helps.
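
(As a rough illustration of the kind of lightweight passes being described - a sketch only, with invented structures rather than anyone's actual IR - a flat array of block-local three-address ops can be constant-propagated in one forward scan and dead-eliminated in one backward scan, both linear in the block length.)

Code:
/* Sketch of a flat, block-local IR with one forward constant-propagation
 * pass and one backward dead-code elimination pass. All structures and
 * names here are invented for illustration. */
#include <stdbool.h>
#include <stdint.h>

enum ir_op { IR_CONST, IR_ADD, IR_STORE };   /* deliberately tiny op set */

struct ir_insn {
    enum ir_op op;
    int dst, src1, src2;      /* virtual register numbers */
    int32_t imm;              /* immediate, for IR_CONST */
    bool dead;                /* set by dead_elim() */
};

#define NREGS 64

/* Forward scan: fold IR_ADDs whose inputs are both known constants. */
static void const_prop(struct ir_insn *b, int n)
{
    bool known[NREGS] = { false };
    int32_t val[NREGS] = { 0 };

    for (int i = 0; i < n; i++) {
        struct ir_insn *in = &b[i];
        if (in->op == IR_CONST) {
            known[in->dst] = true;
            val[in->dst] = in->imm;
        } else if (in->op == IR_ADD && known[in->src1] && known[in->src2]) {
            in->op = IR_CONST;                 /* rewrite as a constant */
            in->imm = val[in->src1] + val[in->src2];
            known[in->dst] = true;
            val[in->dst] = in->imm;
        } else if (in->op != IR_STORE) {
            known[in->dst] = false;            /* result is not a constant */
        }
    }
}

/* Backward scan: mark instructions whose results are never read.
 * IR_STORE has a side effect (it writes guest state) and is always kept. */
static void dead_elim(struct ir_insn *b, int n)
{
    bool live[NREGS] = { false };

    for (int i = n - 1; i >= 0; i--) {
        struct ir_insn *in = &b[i];
        if (in->op == IR_STORE) {
            live[in->src1] = true;             /* the stored value is used */
            continue;
        }
        if (!live[in->dst]) {
            in->dead = true;                   /* defined but never read */
            continue;
        }
        live[in->dst] = false;                 /* this def satisfies the use */
        if (in->op == IR_ADD) {
            live[in->src1] = true;
            live[in->src2] = true;
        }
    }
}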

Bottom line: I think it could actually be slower to load a recompiled file from disk than to generate it fresh - that is, if the file isn't already cached in RAM, of course.
 
Actually, the big hit in llvm-qemu comes from generating the LLVM code from the intermediate macros rather than going straight from x86 (which would have been much, much more work - you can see why no one has done it yet). But what's telling is that it couldn't even do better than standard QEMU. Think about what was happening: normally QEMU (as of then) would compile blocks of code by pasting GCC-generated function bodies together, so each code block was compiled in isolation. llvm-qemu would paste llvm-gcc-generated function bodies together and then run the LLVM optimizer over the entire block. That should have provided some level of register allocation, liveness analysis, propagation, and so on, and yet it was still worse than the original version. It could have been llvm-gcc's fault, but I have to wonder.

Okay, I need to know. What does GCC have to do with anything here?
LLVM has LLVM-IR and QEMU has TCG ops. Those are two possibly very different approaches to an intermediate representation, but they have absolutely nothing to do with gcc, llvm-gcc, or any other gcc. gcc is a C compiler frontend and backend, and neither LLVM nor TCG uses either.

EDIT: llvm-gcc is just gcc's C frontend, but using LLVM as its backend for code generation rather than GNU's own.
 
zhasha said:
Okay, I need to know. What does GCC have to do with anything here?
LLVM has LLVM-IR and QEMU has TCG ops. Those are two possibly very different approaches to an intermediate representation, but they have absolutely nothing to do with gcc, llvm-gcc, or any other gcc. gcc is a C compiler frontend and backend, and neither LLVM nor TCG uses either.

EDIT: llvm-gcc is just gcc's C frontend, but using LLVM as its backend for code generation rather than GNU's own.

*sigh* You really must think I'm an idiot or something.

QEMU didn't always use TCG. Back when llvm-qemu was done, it was using a very different approach. The intermediate language was defined as a series of functions that performed operations on the intermediate data set. The intermediate language was never stored in a discrete form; instead, target instructions were converted to the intermediate representation and then to host code on the fly. The host conversion was accomplished by a set of C functions that implemented the intermediate language. QEMU would have a GCC-compiled object file of these functions and would extract their bodies (skipping the prologue/epilogue code) and paste the contents into the output recompiled blocks.
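
(For anyone who hasn't seen that scheme, the micro-ops looked roughly like the sketch below - simplified, and not verbatim QEMU source. Each op was an ordinary C function working on a few global temporaries; GCC compiled them into an object file, and the "dyngen" tool extracted each compiled body, minus prologue and epilogue, so the translator could copy them end to end into a translation block.)

Code:
/* Simplified sketch of dyngen-era micro-ops (not verbatim QEMU code).
 * T0/T1 stand in for the temporaries the real ops worked on; in QEMU
 * they were pinned to host registers via GCC's register-variable
 * extension, and immediates were patched into the copied code rather
 * than passed as ordinary arguments. */
static long T0, T1;

void op_movl_T0_im(long param1) { T0 = param1; }   /* T0 = immediate */
void op_movl_T1_im(long param1) { T1 = param1; }   /* T1 = immediate */
void op_addl_T0_T1(void)        { T0 += T1; }      /* T0 = T0 + T1   */
void op_andl_T0_T1(void)        { T0 &= T1; }      /* T0 = T0 & T1   */

/* Translation then amounted to concatenating the compiled bodies of
 * these functions, in guest-instruction order, into the output block. */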

In llvm-qemu these blocks were being generated by llvm-gcc, for obvious reasons; in the normal version of QEMU, the normal version of GCC was used. Obviously the quality of GCC's output played a role in how well QEMU's translation performed. Do you understand what I was saying now?

In theory this copy-and-paste object file approach seemed like a good idea because it was more portable. In practice, the rules for parsing the function bodies required almost as much platform-specific information as writing code generators would have. The approach also had far less optimization potential because every IR instruction was translated in total isolation. So, eventually, they moved to TCG.
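
(The contrast with a TCG-style design is that the intermediate ops are stored in a discrete buffer first and host code is generated afterwards, so the backend can look across the whole block. Below is a toy sketch of that shape, using invented types and names rather than TCG's actual API.)

Code:
/* Toy sketch of a translator that keeps a discrete IR buffer, loosely in
 * the shape of TCG but with invented types and names. The frontend
 * appends ops; a separate backend walks the finished buffer, so it can
 * do liveness analysis and register allocation across op boundaries
 * instead of translating each op in isolation. */
#include <stddef.h>

enum mir_opcode { MIR_LOAD_GUEST_REG, MIR_MOVI, MIR_ADD, MIR_STORE_GUEST_REG };

struct mir_op {
    enum mir_opcode opc;
    int args[3];               /* temps, guest register index, or immediate */
};

struct mir_block {
    struct mir_op ops[512];
    size_t nops;
};

static void mir_emit(struct mir_block *b, enum mir_opcode opc,
                     int a0, int a1, int a2)
{
    struct mir_op *op = &b->ops[b->nops++];
    op->opc = opc;
    op->args[0] = a0;
    op->args[1] = a1;
    op->args[2] = a2;
}

/* Frontend: decode one guest "add reg, imm" into IR ops. */
static void translate_add_imm(struct mir_block *b, int guest_reg, int imm)
{
    int t0 = 0, t1 = 1;        /* temp numbering kept trivially simple */
    mir_emit(b, MIR_LOAD_GUEST_REG, t0, guest_reg, 0);
    mir_emit(b, MIR_MOVI, t1, imm, 0);
    mir_emit(b, MIR_ADD, t0, t0, t1);
    mir_emit(b, MIR_STORE_GUEST_REG, guest_reg, t0, 0);
}

/* Backend (omitted): iterates b->ops once the whole block has been built
 * and only then emits host instructions. */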
 
But then, that would pretty much make all those tests moot; however, I see your point. It makes sense that TCG would perform better in a real-time scenario, whereas LLVM doing a more exhaustive and generalized job wouldn't be optimal. I'm not suggesting you use LLVM the way QEMU uses its translator, but rather that, in the case of a single application and with a few modifications to LoadLibrary and the like, you create an environment in which the application itself is recompiled with LLVM into native ARM code and then executed. I'm not sure how WINE traps the API calls, however; that part of WINE would have to be rewritten for ARM. It would be interesting to see how it would perform compared to a streamlined approach - bloat aside.
I don't think you're an idiot. In fact, I think you're way smarter and more experienced in this field than I am, but since we're arguing over completely irrelevant data, let's stop (for now). Based on the Windows applications I've decompiled, I'd say you can very easily recreate the entire tree of conditionals and loops with very little effort on the x86 side of things. Sure, there are also things that might produce somewhat poor code, but we can't really know how well it would perform until we try; and no, I won't do it. I'd rather spend my time playing with a mupen64plus LLVM backend so it can run (even at reduced speed) on the Pandora.
In summation: I couldn't care less if WINE got ported to Pandora. I just wanted to drop off a suggestion.
 