Theoretically Speaking, Is The Pandora At Least Capable Of Dreamcast Emulation?


i like where this is going, so I'm going to contribute to the emulation scene by making a post in this thread

this is a joint effort people!
 
CyruzDraxs said:
MagicPants said:
For every page in this thread the Pandora gets 10% faster and DC emulation gets 10% easier. We're almost there! :rolleyes:
Maybe if we make it to 100 pages we'll get fullspeed PS3 emulation! :eek:



NEED... MORE... TROLLS... :D
 
Last edited by a moderator:
Chip said:
Back on topic, please.
Technically, this post isn't on topic either :p

Realistically, this has been good for the n64 emulator's image. We're so concerned with why DC won't work that we're ignoring n64 :D

Realistically though, people were highly skeptical of N64 on the PSP, and now Mario 64 is playable on it. It's not fair to automatically assume that at least a title or two couldn't be playable.
 
Last edited by a moderator:
I would love to be able to play at least Worms (Armageddon or World Party) on the Pandora via a DC emulator :) The N64 version looked nowhere near as crisp as the Dreamcast one.

I think that it would be one of the easier DC games to emulate at full speed too.

Another important thing is that the NullDC dev said the source will be released once it has reached a stable point. That would allow someone here to add DSP support for even better speed if it isn't supported at first.
 
greendots said:
I would love to be able to play at least Worms (Armageddon or World Party) on the Pandora via a DC emulator :) The N64 version looked nowhere near as crisp as the Dreamcast one.

I think that it would be one of the easier DC games to emulate at full speed too.

Another important thing is that the NullDC dev said the source will be released once it has reached a stable point. That would allow someone here to add DSP support for even better speed if it isn't supported at first.
Or you could play free games like LieroX and OpenLiero, which IMHO, are much better than Worms.

-God Ginrai
 
Last edited by a moderator:
Exophase said:
Jaguar is probably another good arch for this discussion. The RISC CPUs can perform instructions in as little as 1 cycle, but because of register dependencies they'll be stalled all the time. I doubt a Jaguar emulator would emulate this if it wanted high performance, unless it had a recompiler, so effectively cutting the number of cycles per second would work to the same effect. Unfortunately these stalls probably vary a lot from game to game.
I've been having a think about this...

In the Underground Dox, it says that the Jag's RISC CPU pipeline stages go load-ALU-store. Do you mean that the pipelines will be stalled "all the time" because instructions are either a load, a store, an ALU op, or a division, and so any one instruction won't be doing useful work for most of the pipeline?
 
Last edited by a moderator:
Firefox said:
In the Underground Dox, it says that the Jag's RISC CPU pipeline stages go load-ALU-store. Do you mean that the pipelines will be stalled "all the time" because instructions are either a load, a store, an ALU op, or a division, and so any one instruction won't be doing useful work for most of the pipeline?

Note: I don't know anything about the Jag's RISC CPU, so I'm not sure whether what follows applies.

On a pipelined processor you have "forwarding paths" that go from one pipeline stage to the other without having to wait for the result to be stored in a register.

Example :
1. r1 <- r0 + 2
2. r2 <- r1 + 1

cycle 0: load r0 ; ---
cycle 1: compute r0 + 2 ; load r1
cycle 2: store r1; compute r1 + 1
cycle 3: ---; store r2

Cycle 2 will be able to use r0 + 2 (=r1) computed at cycle 1 immediately thanks to forwarding paths (the cycle 1 load r1 is useless).

This example is oversimplified; if you want more details, take a look at the Hennessy and Patterson books :).
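If it helps to see the idea in code, here is a tiny, purely illustrative C sketch (made-up names, a made-up 16-register file, not how any real emulator does it): the consumer takes the value straight off the forwarding path instead of waiting for it to reach the register file.

CODE
#include <stdio.h>

#define NO_REG (-1)

/* One in-flight result: the register it targets and the value
   currently sitting on the forwarding (bypass) path.              */
typedef struct {
    int reg;
    int value;
} Forward;

static int regs[16];   /* architectural register file */

/* Read a source operand. With forwarding, a value computed on the
   previous cycle can be used before it has been written back.     */
static int read_operand(int reg, const Forward *fwd, int forwarding)
{
    if (forwarding && fwd->reg == reg)
        return fwd->value;    /* taken straight off the ALU output */
    return regs[reg];         /* otherwise the (still stale) file  */
}

int main(void)
{
    Forward fwd = { NO_REG, 0 };
    regs[0] = 5;

    /* 1. r1 <- r0 + 2 : result goes onto the forwarding path      */
    int r1 = read_operand(0, &fwd, 1) + 2;
    fwd.reg = 1; fwd.value = r1;        /* writeback happens later  */

    /* 2. r2 <- r1 + 1 : without forwarding this would have to
          stall, because r1 has not been written back yet.         */
    int r2 = read_operand(1, &fwd, 1) + 1;

    printf("r1=%d r2=%d\n", r1, r2);    /* prints r1=7 r2=8         */
    return 0;
}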
 
Last edited by a moderator:
Firefox said:
Exophase said:
Jaguar is probably another good arch for this discussion. The RISC CPUs can perform instructions in as little as 1 cycle, but because of register dependencies they'll be stalled all the time. I doubt a Jaguar emulator would emulate this if it wanted high performance, unless it had a recompiler, so effectively cutting the number of cycles per second would work to the same effect. Unfortunately these stalls probably vary a lot from game to game.
I've been having a think about this...

In the Underground Dox, it says that the Jag's RISC CPU pipeline stages go load-ALU-store. Do you mean that the pipelines will be stalled "all the time" because instructions are either a load, a store, an ALU op, or a division, and so any one instruction won't be doing useful work for most of the pipeline?


Do you have the official Jaguar Development Manual? Although you're not technically supposed to, I doubt there's a whole lot wrong with obtaining it with Atari in the state they're in now. If you don't have it then definitely pick it up, because it's actually pretty well written as far as official console documentation tends to go.

The pipeline stage names you gave might be a little misleading. This is what the stages do:

- decode
- read operands from registers
- execute operation
- write result to register

Most instructions do actually do something useful in each of these stages. The problem is that since results are written two stages after they're read, they are not available until two cycles after the instruction; if you try to use them before that you'll cause a stall. Other architectures use register forwarding to get around this problem, but that's not present in the Jaguar. Another problem is that the register file only has two ports, so only two registers can be touched per cycle between reads and writes. This means the writeback stage has to write to one of the registers being read in that same cycle (i.e. by the instruction two slots later), or another stall will occur.

So I think it works like this...

CODE
nop
nop
add r0, r1 @ instruction a: reads r0 and r1, writes r0
add r2, r3 @ instruction b: reads r2 and r3, writes r2
add r0, r5 @ instruction c: reads r0 and r5 as instruction a writes r0, OK


CODE
nop
nop
add r0, r1 @ instruction a: reads r0 and r1, writes r0
add r2, r3 @ instruction b: reads r2 and r3, writes r2
mov r4, r5 @ instruction c: reads r5 as instruction a writes r0, OK


CODE
nop
nop
add r0, r1 @ instruction a: reads r0 and r1, writes r0
add r2, r3 @ instruction b: reads r2 and r3, writes r2
add r4, r5 @ instruction c: reads r4 and r5 as instruction a writes r0, port overload, stalls 1


CODE
nop
nop
add r0, r1 @ instruction a: reads r0 and r1, writes r0
add r2, r0 @ instruction b: reads r0 and r2, but instruction a needs 2 more cycles to give r0, stalls 2


A lot of code can be parallelized to avoid these dependency chains, but sometimes it's just unavoidable. This is probably compounded by having two-address instructions, and a latency of 2 is much worse than a latency of 1. The other stall condition is probably even harder to avoid, although moves and operations with immediates alleviate pressure on the register file. It's too bad that the narrow 16-bit instruction set with lots of registers only allows for tiny immediates.

You can see how complex this could get, so I'm sure actual Jaguar code has internal stalls all over the place, before you even get to the stalls from constantly having to shuffle things in and out of main RAM.
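If someone wanted an emulator to account for this, I imagine it would boil down to something like the C sketch below: a toy cost model of the two stall rules as I've described them, with the penalties left as tunable guesses (the leading nops are omitted). To be clear, this is just an illustration, not code from any real Jaguar emulator.

CODE
#include <stdio.h>

#define NO_REG (-1)

/* One two-operand RISC instruction, as in the examples above.      */
typedef struct { int src1, src2, dst; } Insn;

/* Tunable penalties -- my reading of the examples above, not
   measured hardware figures.                                        */
enum { DEP_DIST1_STALL = 2,   /* source written by the previous insn */
       DEP_DIST2_STALL = 0,   /* written two back: lands just in time */
       PORT_STALL      = 1 }; /* writeback aimed at a third register */

static int reads(const Insn *i, int reg)
{
    return reg != NO_REG && (i->src1 == reg || i->src2 == reg);
}

/* Very rough cycle estimate for a straight-line block.              */
static long estimate_cycles(const Insn *code, int n)
{
    long cycles = 0;
    for (int i = 0; i < n; i++) {
        cycles += 1;                        /* the issue slot itself  */

        /* result of the previous instruction is not ready yet       */
        if (i >= 1 && reads(&code[i], code[i-1].dst))
            cycles += DEP_DIST1_STALL;
        /* result written two back lands as we read it: no stall     */
        else if (i >= 2 && reads(&code[i], code[i-2].dst))
            cycles += DEP_DIST2_STALL;
        /* port overload: we read two registers while the insn two
           back writes a third, and the file only has two ports      */
        else if (i >= 2 && code[i].src2 != NO_REG
                        && code[i-2].dst != NO_REG
                        && !reads(&code[i], code[i-2].dst))
            cycles += PORT_STALL;
    }
    return cycles;
}

int main(void)
{
    /* add r0,r1 / add r2,r3 / add r4,r5 : the "port overload" case  */
    Insn block[] = { {0, 1, 0}, {2, 3, 2}, {4, 5, 4} };
    printf("estimated cycles: %ld\n", estimate_cycles(block, 3)); /* 4 */
    return 0;
}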
 
Last edited by a moderator:
Thanks for that Exophase! I completely understand what you mean now. I'd confused myself by trying to square what it says in the Underground Dox with what you'd said.

Exophase said:
Do you have the official Jaguar Development Manual? Although you're not technically supposed to, I doubt there's a whole lot wrong with obtaining it with Atari in the state they're in now. If you don't have it then definitely pick it up, because it's actually pretty well written as far as official console documentation tends to go.
I do now! I'm just off to read it, I may be some time... :)

Laurent said:
Absolutely no offense taken! :)
 
Last edited by a moderator:
Yes, the Jaguar is a particularly fascinating case of this.

The 't2k only' unthrottled emulator dodges the problem entirely; the DSP is replaced by a native code routine and the GPU basically runs to completion so the 68000 never spins (which we can get away with because T2K is pretty much linear code, just running on multiple CPUs). In that mode, for the webs it's executing something of the order of 20-40 million instructions per second IIRC at 60fps.

In unthrottled mode, I just defaulted it to 0.5 instructions per clock. 'Good code' is supposed to achieve about 0.75 in practice, but I'm not sure there's a lot of that about.

For the current work I've been doing on the Jaguar I'm trying to get the accuracy up, so unthrottled really hasn't been a concern. At this point in time, I'm running it at a bit under 0.25 instructions per clock (a default of 4 cycles per instruction, with a few instructions where there are known pipe stalls adding a couple of clocks) and to be honest that looks about the right kind of rate to me, for Tempest 2000 at least (which I reckon runs in about 3-5 frames on the real hardware depending on the web complexity: ~30 million instructions per second over 4 frames per web comes to about 7.5 MIPS, so 0.25 is pretty much in line with the 27MHz clock rate).

I haven't taken memory wait states into account yet. That will change things about a lot. Any ROM space access is 5 cycles plus, I believe, and I think there is probably very little page reuse in DRAM for the GPU (unlike the OP and the blitter), so any RISC memory access is a page miss - that will be 10 cycles a time or so. The Jaguar had a load/store unit that could defer the hit from a couple of loads or stores, but looking at the kind of code involved I doubt it helped very much.

So my guess is that the 0.25 is about right, composed of:
- 1 cycle per instruction
- average of 1 cycle of pipeline stall per instruction from an uninterleaved dependency chain and the 2-port register file
- 5 or 10 cycle memory waitstates
- a few minor stalls along the way

I'm not currently planning to add loads of drag trying to track the second stall condition or the load/store unit. I don't see any reason it wouldn't be possible (DRAM page misses would be tough to get a really good figure on, I guess), but it would be a large and potentially buggy quantity of flab for (at this point) nebulous gain and significant overheads. Instead, I'm going to pick the low-hanging fruit with the memory stalls and the minor stalls and then set the global rate at 2 or 3 cycles per instruction to get something that feels right. After that it's just easier to dial the clock around, I'd have thought...

But that's still from a small sample of games at the moment of course. I haven't been able to get an AvP or a Doom yet and it's fifteen years since I played the former on the real thing :).

One side point mentioned above that I'd agree with is that this sort of thing would almost certainly be easier with a dynarec, where you might have a little more time to calculate this sort of thing (and you would need some similar information knocking about anyway).
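To put very rough numbers on that, the accounting amounts to something like the little C calculation below. The memory figures are just the ones quoted above and the access fractions are outright guesses for illustration; it isn't lifted from the emulator.

CODE
#include <stdio.h>

/* Back-of-envelope figures from this post -- guesses and defaults,
   not measured hardware numbers.                                    */
#define BASE_CYCLES      1.0   /* ideal: one instruction per clock   */
#define DEP_STALL        1.0   /* average pipeline stall per insn    */
#define ROM_WAIT         5.0   /* cycles per ROM-space access        */
#define DRAM_MISS_WAIT  10.0   /* cycles per DRAM page miss          */
#define CLOCK_MHZ       27.0   /* RISC clock, as quoted above        */

int main(void)
{
    /* Assumed fractions of instructions touching each kind of
       memory -- pure guesswork, just to show the shape of it.       */
    double rom_frac  = 0.05;
    double dram_frac = 0.10;

    double cpi = BASE_CYCLES + DEP_STALL
               + rom_frac  * ROM_WAIT
               + dram_frac * DRAM_MISS_WAIT;

    printf("~%.2f cycles/insn, ~%.2f insns/clock, ~%.1f MIPS at %.0f MHz\n",
           cpi, 1.0 / cpi, CLOCK_MHZ / cpi, CLOCK_MHZ);
    return 0;
}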
 
Hey Dio, nice Jaguar information. I always wanted to know more about how you did this stuff, heh. Although maybe this should be split into a different thread now.

One impression I got, and please correct me if you have information to the contrary, is that the memory controller transparently decides when it needs to reload the DRAM page. One thing I heard a lot of people complain about on AtariAge is that games have the 68k spinning around doing nothing all the time, and that it has priority in stealing DRAM bus cycles, so this could screw that up anyway. This of course implies bus contention to begin with. Trying to emulate that in a global sense sounds very difficult if you want to keep any performance at all.

A recompiler would be great, but I fear that the small amount of scratchpad RAM that is used for everything would mean that code is constantly being shuffled in and out of it. Handling this with decent performance is not necessarily out of the question, but depending on the patterns games take it can be very, very tricky. If there are games that just keep a fixed portion of code in the fast RAM then they'd be a good baseline for getting things working. I take it there are only a few games people really want to play anyway, so maybe one of those does.
 
Glad to be of service :).

Yes, the memory controller does control the page access and doesn't close a page until it gets a request to a new one.

If you have the 68000 spinning on a DRAM location that's going to completely bugger up the whole memory system. ROM accesses and SRAM accesses (the two RISC accesses) don't cause paging problems, but they do tie up the IO bus (which connects the 68000, DSP and cartridge port) and also steal accesses from the RISC chips.

So yeah, a sensible busy-wait scheme would use STOP #$2000 and rely on interrupts (vblank, or the GPU can force a CPU interrupt). I have not yet seen this method used in a game :). It illustrates again that there aren't a lot of people who really get asm and holistic systems programming...

Certainly modelling this is hard. Given the above, my thought at the moment is to assume that all CPU DRAM accesses are page misses, that all blitter accesses are hits unless the source and destination are in different pages, and that the OP doesn't hit the system at all. The OP can easily consume more than half of the memory bandwidth, so if I'm overestimating on the page misses it's still probably too fast, if anything!
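In code terms that rule of thumb amounts to something like the sketch below (a hypothetical illustration with a guessed 2 KB DRAM page size; the OP isn't modelled at all, as above). It's not actual emulator source.

CODE
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DRAM_PAGE_BITS 11      /* assumed page size of 2 KB          */

enum requester { REQ_CPU, REQ_BLITTER };

/* Crude DRAM-timing assumption described above: the 68000/RISC
   always pays a page miss, while the blitter only pays one when
   its source and destination fall in different DRAM pages.          */
static bool dram_page_miss(enum requester who,
                           uint32_t src_addr, uint32_t dst_addr)
{
    if (who == REQ_CPU)
        return true;                                  /* always miss */
    return (src_addr >> DRAM_PAGE_BITS) != (dst_addr >> DRAM_PAGE_BITS);
}

int main(void)
{
    /* 68k/RISC access: treated as a miss (prints 1)                 */
    printf("%d\n", dram_page_miss(REQ_CPU, 0x1000, 0x1000));
    /* blitter copy within one 2 KB page: treated as a hit (prints 0) */
    printf("%d\n", dram_page_miss(REQ_BLITTER, 0x0100, 0x0200));
    return 0;
}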

Yeah, the small SRAM does mean that a dynarec is going to run lots of times per frame. It wouldn't be cheap, although it still might be a net win overall if the recompiler is efficient. It's not something I'm planning.
 
gibberish said:
jakshep2 said:
I reckon it's theoretically possible but it will be nearly at the end of the pandos lifespan before it gets perfected.

We should ask the guy for the nullDC sourcecode.

i know you mean well but please stop posting rubbish.


how ironic, a guy with a username of "gibberish" telling me to stop posting rubbish lol :p

P.S. Yes I will, but that's what someone else told me in another thread.

conso said:
CyruzDraxs said:
MagicPants said:
For every page in this thread the Pandora gets 10% faster and DC emulation gets 10% easier. We're almost there! :rolleyes:
Maybe if we make it to 100 pages we'll get fullspeed PS3 emulation! :eek:



NEED... MORE... TROLLS... :D


i'm sure I can fill in a place :p
 
Last edited by a moderator:
On the subject of Dreamcast emulation, how would the Pandora handle the DC's analogue shoulder buttons?
 
Pleng said:
On the subject of Dreamcast emulation, how would the Pandora handle the DC's analogue shoulder buttons?
It wouldn't. Or you could map them to the right analogue stick maybe.
 
Last edited by a moderator: