Does PCSX ReARMed Use the GPU?


rohezal

Does the pcsx4all GPU plugin in PCSX ReARMed use the GPU? I noticed big speed improvements from turning off lighting. Does all the emulation happen on the CPU?
 
Do you have some documentation? I don't have time to work on it, but I'm still curious. I think the rendering is the main bottleneck (since we can't do floating point stuff in hardware on the CPU). The Pandora has GLES too, doesn't it? Does it work on the Pandora? Do you think it could work here? The 16 MB of GPU memory should be more than enough, right?
 
If the GPU can improve the emulation speed, I could donate to notaz.

This is a personal question to notaz: how much do you think your work is worth?

I hope my question isn't taken badly, it's only a question.
 
If the GPU can improve the emulation speed, I could donate to notaz.

I don't have numbers, but GPU-style operations are very costly on a normal CPU. Matrix multiplication and the "multiply-add" instruction are fast on the GPU and slow on the CPU.

If you disable lighting you get a major speed boost in almost all games. Same for blending (the "see-through" effects like glass, dust and water) or skipping every second line of the picture. If we could use the GPU for it, the CPU could hand that work to the GPU, which is faster at it, and continue with the next image.
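To make that concrete, here is a rough sketch (my own illustration, not code from PCSX ReARMed or any plugin) of the per-pixel work a software renderer does when lighting and blending are on, assuming the PS1's 15-bit BGR555 pixel format:

```c
/* Rough illustration of why software lighting + blending is expensive:
 * every textured pixel costs several multiplies, shifts and clamps on the
 * CPU, which a hardware rasterizer does for free.
 * Pixel format assumed: PS1-style 15-bit BGR555 (red in bits 0-4). */
#include <stdint.h>

static inline uint16_t shade_blend(uint16_t texel, uint16_t dst,
                                   int lr, int lg, int lb) /* light 0..255, 128 = 1.0 */
{
    /* unpack the texel */
    int r = (texel      ) & 0x1f;
    int g = (texel >>  5) & 0x1f;
    int b = (texel >> 10) & 0x1f;

    /* "lighting": modulate by the interpolated vertex colour (3 multiplies + clamps) */
    r = (r * lr) >> 7;  if (r > 0x1f) r = 0x1f;
    g = (g * lg) >> 7;  if (g > 0x1f) g = 0x1f;
    b = (b * lb) >> 7;  if (b > 0x1f) b = 0x1f;

    /* "blending": 50% transparency against the framebuffer pixel */
    int dr = (dst      ) & 0x1f;
    int dg = (dst >>  5) & 0x1f;
    int db = (dst >> 10) & 0x1f;
    r = (r + dr) >> 1;
    g = (g + dg) >> 1;
    b = (b + db) >> 1;

    return (uint16_t)(r | (g << 5) | (b << 10));
}
```

Turn lighting and blending off and all of those multiplies and adds disappear for every pixel on screen, which is why the speedup is so visible.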

@Notaz

May I ask whether you investigated the problem? Maybe a wrong texture format for the GPU? Or an unsupported triangle format (3f instead of 4f, quads instead of triangle strips)? Is there documentation on how the vertices / triangles / lights are generated on the PSX and how they are mapped?
 
Flat Eric said:
Is there an emulator at all that uses the Caanoo GPU, or is the GPU useless so far?
There is some homebrew successfully using the GPU, like Audiorace I think. Also some Quake ports.

rohezal said:
Do you have some documentation? I don't have time to work on it, but I'm still curious. I think the rendering is the main bottleneck (since we can't do floating point stuff in hardware on the CPU). The Pandora has GLES too, doesn't it? Does it work on the Pandora? Do you think it could work here? The 16 MB of GPU memory should be more than enough, right?
The Pandora has GLES and yes, it works for some games. The Caanoo has something they call "GLES lite", and the same code from the Pandora does start, but it's awfully slow and there is lots of corruption. What would work for PSX GPU emulation is some low-level code that programs the Caanoo's 3D chip directly. And yes, the 3D chip is documented in the Pollux databook, see post #3 here:
http://www.gp32x.de/board/index.php?/topic/52886-some-technical-questions-about-the-wiz-ive-got-my-console-today-d/

Cory said:
This is a personal question to notaz: how much do you think your work is worth?

I hope my question isn't taken badly, it's only a question.
Well it's my free/hobby time where I do what I want and it's not for sale.

rohezal said:
I don't have numbers, but GPU-style operations are very costly on a normal CPU. Matrix multiplication and the "multiply-add" instruction are fast on the GPU and slow on the CPU.
That's not really the problem: the PSX GPU operates in 2D coordinates, so no transformation is needed. The PSX had a coprocessor for that stuff (the GTE); it's emulated separately along with the main CPU, and you can't move that to a real GPU because of latencies and incompatible data formats anyway.
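To picture what that means: by the time a draw command reaches the (emulated) PSX GPU, it already looks roughly like this (a simplified illustration, field names made up, not the emulator's actual structures):

```c
/* Simplified sketch of what the PS1 GPU actually receives: the GTE has
 * already projected everything, so a draw command carries 2D screen
 * coordinates plus a flat/Gouraud colour, not 3D vertices or matrices.
 * Names and layout are illustrative only. */
#include <stdint.h>

struct psx_gpu_vertex {
    int16_t x, y;        /* screen-space position in VRAM pixels */
    uint8_t r, g, b;     /* vertex colour for flat/Gouraud shading */
    uint8_t u, v;        /* texture coordinates into VRAM */
};

struct psx_gpu_triangle {
    struct psx_gpu_vertex v[3];
    uint16_t clut;       /* palette (CLUT) location for 4/8-bit textures */
    uint16_t tpage;      /* texture page / blend mode bits */
};
```

So there is no per-vertex transform left to move onto a real GPU; what remains is pure 2D rasterisation with texturing and blending.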

rohezal said:
May I ask whether you investigated the problem? Maybe a wrong texture format for the GPU? Or an unsupported triangle format (3f instead of 4f, quads instead of triangle strips)? Is there documentation on how the vertices / triangles / lights are generated on the PSX and how they are mapped?
The existing GLES code from the Pandora works badly because of the Caanoo's driver bugs/missing features. According to Exophase, the Caanoo's 3D chip does have enough features to be useful for PSX GPU emulation, but currently no such code exists and it would be rather difficult to write. And no, I don't have plans to work on this, sorry.
 
OK, thanks for the answer. I hope you haven't taken it the wrong way. We all appreciate your work, so we do desperate things :p
 
I read the documentation. It sounds pretty much like a small, normal GPU to me (why the hell do they use 24-bit floats oO?).

This thing can do floating point stuff, and the CPU can read the results. And it can do MADs, which are really nice for some things. But if the 3D work is done in the PSX's coprocessor and can't be separated, how can the GPU of the Caanoo help? Maybe calculate the brightness for lighting, or the blending of textures?

Do you think Exophase will do some coding? I'm scared to ask him; he helped me with some work on an emulator for the Freerunner phone, and I stopped the work before it really began (I had my reasons, not Exophase / emulator related)...
 
I read these forums too you know ;) I find the prospect of doing PS1 GPU code on Wiz/Caanoo very interesting, but I'm afraid I have a lot of higher priority stuff to work on instead.. such a task is just too platform constrained. But, if you want to undertake it yourself, I can at least maybe give you ideas.

The first thing I would recommend doing is grabbing the Pollux data book. It has documentation for how the 3D subsystems work. If you can directly access the registers for setting up triangles w/o going through the GTE you'll be in good shape. Then you can play around with their parameters (mainly colors and textures although depth is useful for some PS1 emulation stuff), try different combining and blending modes, maybe get DMA working..
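A minimal sketch of that first step, assuming the usual /dev/mem + mmap approach used by Wiz/Caanoo homebrew; the base address, window size and register offsets are placeholders that have to be replaced with the real values from the Pollux databook:

```c
/* Minimal sketch of direct register access on the Caanoo/Wiz.
 * GRP3D_BASE, GRP3D_SIZE and any offsets are PLACEHOLDERS - take the real
 * physical address and register offsets from the Pollux databook. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define GRP3D_BASE  0xC0000000u   /* placeholder physical base address */
#define GRP3D_SIZE  0x00010000u   /* placeholder window size */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, GRP3D_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, (off_t)GRP3D_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* From here you can read/write Register3D entries directly, e.g.
     * regs[SOME_OFFSET / 4] = value;  -- offsets from the databook. */
    printf("first register word: %08x\n", (unsigned)regs[0]);

    munmap((void *)regs, GRP3D_SIZE);
    close(fd);
    return 0;
}
```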

If I recall correctly, Pollux supports all of PS1's texture modes directly, meaning that you may not need texture caching to emulate it. But it does not support subtractive blending, so I'm not sure what you'd do to emulate it - I think it supports subtractive combining, so you might be able to do something indirectly. Or you can use some alpha blending based approximations that I've seen used in PC emulation code. The other tricky part is that textures are swizzled in large blocks. I think you can render swizzled, so you might not have a problem. This way you should be able to keep the emulated VRAM as a 1:1 match to the PS1's, although you'll still have to swizzle/unswizzle on operations copying from the PS1 CPU to/from the GPU (PS1 games almost never read back VRAM, for what it's worth).
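The swizzle/unswizzle copy between the emulated linear VRAM and a block-linear texture buffer could look roughly like this; the block width and height here are made up for illustration and would have to match whatever the databook specifies:

```c
/* Rough sketch of copying a linear PS1 VRAM rectangle into a block-linear
 * ("swizzled") buffer. BLOCK_W/BLOCK_H are illustrative; the real block
 * geometry must be taken from the Pollux documentation. Assumes the
 * rectangle size is a multiple of the block size. */
#include <stdint.h>

#define VRAM_W   1024            /* PS1 VRAM is 1024x512 16-bit pixels */
#define BLOCK_W  32              /* placeholder block width  */
#define BLOCK_H  8               /* placeholder block height */

void swizzle_rect(uint16_t *dst, const uint16_t *vram,
                  int x0, int y0, int w, int h)
{
    int out = 0;
    for (int by = 0; by < h; by += BLOCK_H)
        for (int bx = 0; bx < w; bx += BLOCK_W)
            for (int y = 0; y < BLOCK_H; y++)
                for (int x = 0; x < BLOCK_W; x++)
                    dst[out++] = vram[(y0 + by + y) * VRAM_W + (x0 + bx + x)];
}
```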

I could be missing something, no idea.. I don't mind reading through the documentation again and talking to you on AIM/MSN/whatever (sorry don't remember what you use) about this. But if you decide to pursue this it probably will be a lot of work.

On a slightly different note, I noticed that documentation for the Freerunner's Glamo chip was released, maybe you could try the same thing there too :D
 
Hey Exo :)

I find the prospect of doing PS1 GPU code on Wiz/Caanoo very interesting, but I'm afraid I have a lot of higher priority stuff to work on instead..
Same here, I'm currently writing my bachelor thesis and doing some changes to Savage 2... We need 48-hour days ^^

On a slightly different note, I noticed that documentation for the Freerunner's Glamo chip was released, maybe you could try the same thing there too :D
Yes, finally hardware scaling. This was one of the major problems for Game Boy emulation.
 
The 3D core of the POLLUX consists of several functional blocks (sub-modules). The 3D Graphic Engine block consists of the Command Processor, Primitive Processor, GTE/Clipper, TSE and Rasterizer blocks.
The CPU can control the operation of the sub-modules either directly or through the Command Processor.

The Register3D stores the parameters that control the sub-modules and sends operation signals to them when an external device such as the CPU writes data into the control register (GRP3D_CONTROL). The Command Processor and Primitive Processor can also control the sub-modules through the Register3D.
The control priority is Primitive Processor > Command Processor > CPU.


Controls all sub-modules. Nice. So we can skip the vector multiply stuff. Notaz said it's a strange format and bound closely to the PSX CPU, so we can't separate it. That sucks, I still believe vector math is expensive on a CPU :( . But hey, better than nothing to use the other modules.
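Just to illustrate what I mean by expensive vector math (a generic GL-style example, nothing PSX-specific): one vertex transform is a 4x4 matrix times a vec4, i.e. 16 multiplies and 12 adds in software floating point:

```c
/* Generic illustration of the per-vertex work in a GL-style pipeline.
 * Done in software floating point on an ARM core without an FPU, this is
 * exactly the kind of math that is cheap on a GPU and painful on the CPU. */
typedef struct { float m[4][4]; } mat4;
typedef struct { float v[4]; }    vec4;

vec4 transform(const mat4 *m, vec4 p)
{
    vec4 r;
    for (int i = 0; i < 4; i++)
        r.v[i] = m->m[i][0] * p.v[0] + m->m[i][1] * p.v[1] +
                 m->m[i][2] * p.v[2] + m->m[i][3] * p.v[3];
    return r;
}
```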

I only know the OpenGL pipeline, where vertices are floats and the matrix stuff is vector calculation. I don't know how the PSX handles this. But maybe we could use this:

The GTE can be used as a floating point Coprocessor because the CPU can read the calculation results.

High latencies, but if we use the Command Processor we could maybe push the floating point operations to it and calculate other stuff during the latency, like normal GPUs do (rough sketch after the excerpt below).

The Command Processor can operate all 3D operations independently, without the intervention of the CPU.
The Command buffer (or command queue) is a memory area that stores commands. It contains commands and other additional information which will be copied into the Register3D block. The CPU assigns the command buffer area (refer to the Command buffer section).
The CPU controls the Command Processor directly, without the Register3D. For the Register3D, the Command Processor has priority over the CPU.
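So the rough idea would be something like the following; this is only a conceptual sketch, and the command encoding and register names are placeholders, not the real ones from the databook:

```c
/* Conceptual sketch of using the Command Processor to hide latency:
 * the CPU fills a command buffer and kicks it off, then goes back to
 * emulating the PSX while the 3D core drains the queue on its own.
 * All register indices and the command encoding are PLACEHOLDERS. */
#include <stdint.h>

#define CMDBUF_WORDS 1024

static uint32_t cmdbuf[CMDBUF_WORDS];
static int cmd_len;

static void cmd_push(uint32_t reg_index, uint32_t value)
{
    /* placeholder encoding: "write value into Register3D[reg_index]" */
    cmdbuf[cmd_len++] = reg_index;
    cmdbuf[cmd_len++] = value;
}

void submit_frame(volatile uint32_t *regs)
{
    cmd_len = 0;

    /* ... cmd_push() one primitive's worth of parameters at a time ... */

    /* hand the buffer to the Command Processor (placeholder registers;
     * in reality this would have to be a physical, DMA-visible address) */
    regs[/* GRP3D_CMDBUF_ADDR */ 0x10] = (uint32_t)(uintptr_t)cmdbuf;
    regs[/* GRP3D_CMDBUF_LEN  */ 0x11] = (uint32_t)cmd_len;
    regs[/* GRP3D_KICK        */ 0x12] = 1;

    /* the CPU is now free to keep emulating the PSX; only wait (poll or
     * interrupt) when the emulated code actually needs the result */
}
```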


This is interesting:

22.1.5. TSE (Triangle Setup Engine)
The TSE converts the vertex data in the input registers to fit the Rasterizer. The vertex data is stored into three input registers by the Clipper block. The TSE is controlled by the Register3D or the GTE/Clipper.
22.1.6. Rasterizer
The Rasterizer has two operations, as given below.
Render triangles or rectangles:
Shading / texture mapping / fog blending / alpha blending / z-buffering / ...
Memory fill:
Fill certain memory areas (rectangles) with certain values; fast z-buffer/screen/texture clear.
The Rasterizer is controlled by the Register3D and the TSE. The Rasterizer has a 1-depth FIFO to accept the next parameters during operations.

This means: OK, we have to calculate our triangles in the PSX GTE, which is bound to the PSX CPU. But the texturing, blending and probably lighting can be done by the rasterizer pipeline. That moves a lot of load from the CPU to the GPU and would speed things up a lot. No UV lookups etc. The GPU RAM has the right "swizzle" for the textures too (I'm writing an exam on this in one week; it can speed things up a lot when caching textures). Not sure if the Pollux supports clever GPU caching though...

Since the GPU has a 64-bit interface to its own RAM chip and the CPU has its own RAM chip with a separate 32-bit interface, they could use more bandwidth and wouldn't interfere with each other.

So the CPU can keep compiling / executing PSX code while the rasterizer takes the triangles and textures and renders them. The CPU won't have to use its registers for colouring since that's done in the 3D registers, and it doesn't have to waste RAM since the GPU chip has its own.

I want to point out that the general overview page says the CPU can use 32 bits of RAM bus 1 and 32 bits of RAM bus 2, while RAM bus 2 is 64 bits wide. So even if we use both banks, the CPU only uses 64 (32x2) bits out of the 96 bits of total memory bus width.

I really think this is worth doing... but I can't do it, not enough time. Same for some friends who could do it... damn, we need more skilled people with more time ^^
 
It would be awesome! But the time and the skills... :( I don't have either of them, so the only thing left is to hope that one day...
 