Sega Dreamcast emulator on Pandora - is it possible?


Exophase said:
You shouldn't trust marketing numbers too easily. Multiple Dreamcast emulator authors claim that real games get as little as 0.5 instructions per clock. And the GFLOP rating is even more misleading because it is based solely on a 1 cycle dot product (that isn't accurate to full 32bit precision, and it has very few other vector instructions) - when it comes to normal floating point operations it's much more ordinary and cannot do more than one at a time under any circumstance. It's better at doing floating point dot products than Cortex-A8, but the flexibility of NEON goes a long way to make up for it. According to drkIIRaziel, Dreamcast games tended to not push more than 1 million polygons a second to begin with, so you don't exactly need all of that geometry transformation power in the CPU.
correct me if i'm wrong, but don't you care more about the maximal throughput of the emulated target, rather than a statistical average? i.e. wouldn't you aim to meet the worst case? for instance, that mentioned dot product - indeed quite potent on the sh4 - how would you handle it at its maximal throughput when emulated on the cortex/neon? maybe you could 'catch up' on a statistically-long sequence of dp's with neon's fused mads, but any short burst of dp's in the emulated flow would create 'lag bubbles' in the emulator, which may or may not be maskable with subsequent faster-than-realtime compensation.
 
blu said:
correct me if i'm wrong, but don't you care more about the maximal throughput of the emulated target, rather than a statistical average? i.e. wouldn't you aim to meet the worst case? for instance, that mentioned dot product - indeed quite potent on the sh4 - how would you handle it at its maximal throughput when emulated on the cortex/neon? maybe you could 'catch up' on a statistically-long sequence of dp's with neon's fused mads, but any short burst of dp's in the emulated flow would create 'lag bubbles' in the emulator, which may or may not be maskable with subsequent faster-than-realtime compensation.

Is it the dot product? I thought FTRV was the big one, which is a 4x4 matrix by vector multiplication. According to the docs I've read it has a 4 cycle cost. Is there another instruction that I've missed? From my readings the equivalent NEON code sequence would be 8 cycles, which means it would be covered by the higher clock speed, although the larger instruction encoding would hurt (4x32 bits versus 1x16 bits).
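
To make that concrete, the kind of sequence I have in mind looks something like this - just a rough, untested sketch using the GCC arm_neon.h intrinsics (the ftrv_neon name and the column-major matrix layout are my own assumptions, not taken from any real emulator):

Code:
/* Hypothetical FTRV-style transform on NEON: four 4-wide ops
   (one VMUL plus three VMLAs), roughly 8 cycles on a Cortex-A8
   at ~2 cycles per quad-word op. */
#include <arm_neon.h>

static inline float32x4_t ftrv_neon(const float *mat, /* 4x4, column-major */
                                    float32x4_t v)
{
    float32x4_t c0 = vld1q_f32(mat + 0);
    float32x4_t c1 = vld1q_f32(mat + 4);
    float32x4_t c2 = vld1q_f32(mat + 8);
    float32x4_t c3 = vld1q_f32(mat + 12);

    /* result = c0*v[0] + c1*v[1] + c2*v[2] + c3*v[3] */
    float32x4_t r = vmulq_n_f32(c0, vgetq_lane_f32(v, 0));
    r = vmlaq_n_f32(r, c1, vgetq_lane_f32(v, 1));
    r = vmlaq_n_f32(r, c2, vgetq_lane_f32(v, 2));
    r = vmlaq_n_f32(r, c3, vgetq_lane_f32(v, 3));
    return r;
}

If the matrix were register-cached the loads would drop out, so it's really the one VMUL plus three VMLAs that set the throughput.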
 
andys said:
Is it the dot product? I thought FTRV was the big one, which is a 4x4 matrix by vector multiplication. According to the docs I've read it has a 4 cycle cost. Is there another instruction that I've missed? From my readings the equivalent NEON code sequence would be 8 cycles, which means it would be covered by the higher clock speed, although the larger instruction encoding would hurt (4x32 bits versus 1x16 bits).

Do you think those instructions are executed for 99% of the game's run time? It's not sensible to focus on such instructions to claim SH4 superiority over the Cortex-A8. What Exophase was saying is that the SH4 is not the obstacle to emulation it's claimed to be, for the reasons he gave: theoretical figures that real games never reach.
 
andys said:
Is it the dot product? I thought FTRV was the big one, which is a 4x4 matrix by vector multiplication. According to the docs I've read it has a 4 cycle cost. Is there another instruction that I've missed? From my readings the equivalent NEON code sequence would be 8 cycles, which means it would be covered by the higher clock speed, although the larger instruction encoding would hurt (4x32 bits versus 1x16 bits).
yes, sh4 has a verbatim dp4 op - the FIPR - 'inner product'. actually, i brought it up as it poses a worse case for emulation than FTRV, which can be efficiently broken into MADDs. FIPR has a throughput of 1 cycle - rather hard to beat with FMADDs, unless the latter work on a statistically-long batch of dp4's, in which case the average cost of a single dp4 drops. unfortunately, due to the universal application of dp's in anything (3d) spatial, you can expect to have those sprinkled all over a game's code, in statistically-small groups.
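
to illustrate what a lone dp4 costs, a naive translation would look something like this (untested sketch with the arm_neon.h intrinsics; fipr_neon is just a name i made up):

Code:
/* a single isolated FIPR-style dot product on NEON: there's no dp4
   instruction, so it takes a 4-wide multiply plus two pairwise adds
   just to reduce the lanes into one scalar result. */
#include <arm_neon.h>

static inline float fipr_neon(float32x4_t a, float32x4_t b)
{
    float32x4_t p = vmulq_f32(a, b);                 /* a0*b0 .. a3*b3 */
    float32x2_t s = vadd_f32(vget_low_f32(p),        /* (p0+p2, p1+p3) */
                             vget_high_f32(p));
    s = vpadd_f32(s, s);                             /* p0+p1+p2+p3 in both lanes */
    return vget_lane_f32(s, 0);
}

that's three or four neon ops for one result, vs a single-cycle FIPR on the sh4 - exactly the kind of short-burst case i'm worried about.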
 
blu said:
correct me if i'm wrong, but don't you care more about the maximal throughput of the emulated target, rather than a statistical average? i.e. wouldn't you aim to meet the worst case? for instance, that mentioned dot product - indeed quite potent on the sh4 - how would you handle it at its maximal throughput when emulated on the cortex/neon? maybe you could 'catch up' on a statistically-long sequence of dp's with neon's fused mads, but any short burst of dp's in the emulated flow would create 'lag bubbles' in the emulator, which may or may not be maskable with subsequent faster-than-realtime compensation.

I care more about what the majority of games conform to. For basically any emulator, especially a recompiler, it's possible to find a game that treats it very badly speed-wise but is well out of the norm (and it's also possible that to get more than 95% accuracy the speed of the entire emulator has to be taken down a lot). In this case drk||Raziel specifically said that no game he has ever seen pushes more than 1 million polygons a second. Besides that, what is there to "consider" exactly? You do your best and if that's only enough to run half of all games at full speed then that is what you get.

Anyway, I don't really see that you're much better off with FTRV; either way the issue rate seems to be every other cycle for the 4-way operations on NEON. The latency shouldn't really be any worse than on SH4 since the multiplies are forwarded to the addition part of the pipeline; I don't know if any register forwarding is possible inside those pipelines (I'd guess not).

andys said:
Is it the dot product? I thought FTRV was the big one, which is a 4x4 matrix by vector multiplication. According to the docs I've read it has a 4 cycle cost. Is there another instruction that I've missed? From my readings the equivalent NEON code sequence would be 8 cycles, which means it would be covered by the higher clock speed, although the larger instruction encoding would hurt (4x32 bits versus 1x16 bits).

The matrix multiply is a macro instruction. Yeah, it saves on space but it probably isn't going to matter; transformation kernels (the innermost loops) aren't usually huge amounts of code.

Judging execution time solely by throughput is misleading, but in the real world 3D math does tend to be easy to parallelize.
 
Exophase said:
Besides that, what is there to "consider" exactly? You do your best and if that's only enough to run half of all games at full speed then that is what you get.
i have no issue with that whatsoever - my original concern was with complete emulation, albeit of the cpu only.

Exophase said:
Anyway, I think you have a bad idea of the relative performance between NEON and SH4 (I think I did too), because SH4 indeed has a 4 cycle latency for its FIPR and NEON only has 2 cycles of latency (unless I'm reading the charts in the TRM horribly wrong). Both can issue them every cycle.
unless i'm reading something wrong on my end, NEON does not have a dp4 op at all. you'll need multiple fmadds to get the same result.

The matrix multiply is a macro instruction. Yeah, it saves on space but it probably isn't going to matter; transformation kernels (the innermost loops) aren't usually huge amounts of code.

Judging execution time solely by throughput is misleading, but in the real world 3D math does tend to be easy to parallelize. The NEON code would be 4 cycles throughput, 5 cycles latency (vs 4 throughput, 7 latency for SH4).
i'm definitely not a NEON guy, but according to this doc by Tatsuya Kobayashi, a 4x4 by 4x1 product on NEON has 8 cycles throughput, vs 4 cycles throughput for sh4's FTRV, as you rightly noted. but that particular op should not be an issue for emulation as the cortex has a sufficiently higher clock, any recompiler inefficiencies aside. anyhow, 4x4 by 4x1 is a sufficiently large batch of linear ops to be efficiently handled by fmadds.

apropos, that same document rates NEON's dp capabilities at 3 cycles throughput per 3-term vector.
 
blu said:
apropos, that same document rates NEON's dp capabilities at 3 cycles throughput per 3-term vector.

Assuming a dot product is

a1*b1 + a2*b2 + a3*b3 + a4*b4

Then my reading of the instruction cycle timings would be that you would take 4 cycles, but you could do two of them in that time (assuming MUL, then MAC, MAC, MAC).

My reading of the NEON stuff is that it pretty much can only do two floating point ops per clock (assuming you don't count the "fused" adds), so as soon as you do a quadword op, it just splits it into two ops and then performs them separately. At least, that would gel with the cycle timings.
 
blu said:
i'm definitely not a NEON guy, but according to this doc by Tatsuya Kobayashi, a 4x4 by 4x1 product on NEON has 8 cycles throughput, vs 4 cycles throughput for sh4's FTRV, as you rightly noted. but that particular op should not be an issue for emulation as the cortex has a sufficiently higher clock, any recompiler inefficiencies aside. anyhow, 4x4 by 4x1 is a sufficiently large batch of linear ops to be efficiently handled by fmadds.

apropos, that same document rates NEON's dp capabilities at 3 cycles throughput per 3-term vector.

You'll notice I edited my post since then (strange that you quoted the old one much later...). The docs are pretty hard to read, but yes, it's 2 cycles per 4-way VMAC.

And yes, it doesn't have a dot product per se. So the question becomes one of what the Dreamcast code is doing. If the dot products are spread out so as not to be repeated in a loop, then the performance overhead for them is negligible. If they're done repeatedly in a tight loop, then that loop will probably be unrolled to improve parallelization because of the latency of the dot product on Dreamcast. If that happens, it's entirely possible that a recompiler can analyze this and vector pack the elements (if they don't already fit the transposed format you'd register cache in for the FTRVs) to do multiple FIPRs in 8 cycles (if more than two can be scheduled). Of course, this depends on the complexity of the recompiler and what the Dreamcast code is doing, but I still think the FIPRs would be one of the less troublesome things for the emulator to worry about.
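
In concrete terms, the packing I mean would look something like the sketch below (hypothetical and untested, with arm_neon.h intrinsics; the fipr_x4_neon name and the transposed layout are just my illustration, not anything an actual recompiler emits):

Code:
/* Four FIPR-style dot products at once against a common vector b.
   ax/ay/az/aw hold the x/y/z/w components of four different vectors
   (the transposed, "vector packed" layout): one VMUL plus three VMLAs,
   i.e. four quad-word ops, roughly 8 cycles on Cortex-A8. */
#include <arm_neon.h>

static inline float32x4_t fipr_x4_neon(float32x4_t ax, float32x4_t ay,
                                       float32x4_t az, float32x4_t aw,
                                       float32x4_t b)
{
    float32x4_t r = vmulq_n_f32(ax, vgetq_lane_f32(b, 0));
    r = vmlaq_n_f32(r, ay, vgetq_lane_f32(b, 1));
    r = vmlaq_n_f32(r, az, vgetq_lane_f32(b, 2));
    r = vmlaq_n_f32(r, aw, vgetq_lane_f32(b, 3));
    return r;  /* lane i = dot(vector i, b) */
}

Whether the recompiler can actually gather four FIPRs into that shape is the hard part, of course - the arithmetic itself is cheap once the data is packed.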
 
Exophase said:
You'll notice I edited my post since then (strange that you quoted the old one much later...). The docs are pretty hard to read, but yes, it's 2 cycles per 4-way VMAC.
bah, no mysteries here - i had started posting a reply to you approximately .5h before i actually posted it - i was doing a bunch of things simultaneously, just kept refreshing the future reply so it would not expire : )

And yes, it doesn't have a dot product per se. So the question becomes one of what the Dreamcast code is doing. If the dot products are spread out so as not to be repeated in a loop, then the performance overhead for them is negligible. If they're done repeatedly in a tight loop, then that loop will probably be unrolled to improve parallelization because of the latency of the dot product on Dreamcast. If that happens, it's entirely possible that a recompiler can analyze this and vector pack the elements (if they don't already fit the transposed format you'd register cache in for the FTRVs) to do multiple FIPRs in 8 cycles (if more than two can be scheduled). Of course, this depends on the complexity of the recompiler and what the Dreamcast code is doing, but I still think the FIPRs would be one of the less troublesome things for the emulator to worry about.
yes, at the end of the day the recompiler will have to be very cunning, as it has little-to-no margin to play with re this op. as for the statistical importance of the dp, i'd keep it in the 'concern' zone, as it will come in all shapes and forms (i.e. size of batches and dispersion over the code), me thinks. but that's just me arm-chair hypothesizing - emulators are not my element. *shrug*
 