Floating point operations aren't converted directly to host instructions; they're passed through QEMU's SoftFloat engine instead. We could probably hack this to be fast for our case.
https://github.com/qemu/qemu/blob/master/fpu/softfloat.c
Holy guacamole, there's some accurate but scary stuff in there. Take a look at float32_muladd: over 250 lines of code to perform:

result = (a * b) + c
I wonder how often these sorts of functions get called, and what would happen if a non-exact version were used instead, e.g.:

result = a * b
result += c
I guess a lot of stuff would be okay, but some stuff would break: the fused version rounds (a * b) + c once, while the split version rounds after the multiply and again after the add, so results can differ in the low bits. That probably rules out such a harsh optimization in general, although it feels like the sort of thing that could be enabled on a per-emulated-application basis.
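To make that concrete, here's a tiny standalone C program (not QEMU code, just an illustration) showing the single-rounding vs double-rounding difference. On an IEEE-754 host the fused and split versions of a*b + c give different answers for these inputs; build with something like cc -ffp-contract=off fma_demo.c -lm so the compiler doesn't quietly fuse the split version back together.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        float a = 1.0f + 0x1p-12f;          /* 1 + 2^-12 */

        /* Split version: the product is rounded to float first,
         * dropping the 2^-24 bit, then the subtraction happens.
         * The volatile stops the compiler contracting this into an FMA. */
        volatile float prod = a * a;
        float split = prod - 1.0f;

        /* Fused version: a*a - 1 is computed exactly, then rounded once. */
        float fused = fmaf(a, a, -1.0f);

        printf("split = %a\n", split);      /* expect 0x1p-11                 */
        printf("fused = %a\n", fused);      /* expect 0x1.0008p-11,
                                               i.e. 2^-11 + 2^-24             */
        return 0;
    }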
EDIT: From some of the stuff I've been reading (a couple of years old now), all floating point calculations are translated into series of integer instructions in order to guarantee exact precision. Is that still the case? Would it be possible to translate them into ARM floating point instructions instead? Obviously lots of stuff could go wrong, but it would be interesting to see whether anything runs at all, and what the speed is like. Forgive me if this is a stupid question; I hadn't looked at the insides of QEMU until today. It just feels like turning 250 lines into one ARM instruction would be a massive win if it's possible.
EDIT2: There seem to be some 'native' implementations of softfloat, e.g.
https://github.com/hackndev/qemu/blob/master/fpu/softfloat-native.h
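For anyone curious, the idea in that header is roughly the following (my own sketch, not code copied from the file; the real API also carries rounding-mode and exception-flag state, and covers float64/floatx80 as well): the softfloat types collapse to host types and each operation becomes a single host FPU instruction.

    /* Rough sketch of the softfloat-native approach: implement the
     * softfloat API straight on top of the host FPU. Names follow the
     * softfloat style; rounding-mode/exception state is left out here.
     */
    typedef float  float32;
    typedef double float64;

    static inline float32 float32_add(float32 a, float32 b)
    {
        return a + b;               /* one host add */
    }

    static inline float32 float32_mul(float32 a, float32 b)
    {
        return a * b;               /* one host multiply */
    }

    /* The non-fused muladd from above: two host ops, two roundings
     * (unless the compiler contracts it into a host FMA). */
    static inline float32 float32_muladd_fast(float32 a, float32 b, float32 c)
    {
        return a * b + c;
    }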