Release: PCSX-ReARMed


Coldbird said:
Tested German Grandia (PAL) - playable, but awful... it crashes the emulator on the intro movie if you don't disable XA decoding.

The voice acting repeats and overlaps all the time, which ruins it; other than that it's stable and full speed (40-50 fps).

Will retest Grandia NTSC with XA decoding off tomorrow to see if it now also passes the intro movie.
Another Grandia test... the NTSC-US one still hangs on a white screen, even with XA decoding off.
The German PAL one works if XA decoding is off, but crashes the emulator if you use a save point.
Savestates work.
 
notaz said:
EvilDragon said:
I can select "Resume game" and it resumes - but only for half a second and then it goes back to the menu. I can repeat this over and over.
Tried this with different GFX plugins - always happened.
Never seen that, give me a savestate. Could be caused by overclocking - as said, the limit is lower here than usual because of the use of NEON.

Hmm, but then it should crash and not go to menu and resume... I'll make a savestate for you.

EvilDragon said:
One other issue I have is that very often PCSX Rearmed doesn't start. I have to delete its config and then it starts again.
Nothing special in the log, it just goes immediately back to the main menu.
Never seen this either, I need to see your config that causes this.

I'll send you a non-working one.

EvilDragon said:
notaz, how have you implemented filter selection?
If I select "none" it still has the default (blurry) filter on.
I'm on the latest version (HF5 Beta), which combined all filter files into one, but if you're using the op_videofir script the command line is exactly the same.
As said on IRC I'll fix it up so that it plays nice with the new script.

I also fixed it with HF5 Beta 2 - the script now checks for *_up if * is not found ;)
 
Exophase said:
notaz said:
Maybe, but not every game I guess (not saying I can do it though). The PSP has the advantage of low-level access to its GPU, so it can convert PSX drawing commands into PSP display lists and run them in parallel (maybe something similar can be done with GLES, I don't know much about that). I'd guess the PSP is doing most audio emulation on its second CPU, which has its own dedicated RAM and should be good for this job. Maybe some other tricks - I wonder if anyone did any analysis of that POPS emu.

hlide has looked at it, you could ask him.

I think there will always be much more overhead with GL ES on the Pandora than with the GPU on the PSP, just by the nature of the drivers and architecture. The PSP has the further advantage of supporting the PS1's texture formats, so theoretically it doesn't need texture caching. It can also do a little better with GTE emulation using the VFPU, but I don't think they ever employed the kinds of flag-calculation tricks hlide came up with. Calculating flags and other values that often go unused is probably the big cost of GTE emulation, and I imagine it could be sped up with some simple liveness analysis and versions that skip the redundant work.

The PS1's GPU shouldn't actually take that much to emulate in software without enhancement. The DSP could probably do a really good job at this. Is everyone testing with the P.E.Op.S. driver instead of Unai's? Because that's probably a ton slower, and not just from being more accurate. Even without the DSP, a renderer optimized with ARM asm + NEON could probably do nicely, although for texture-mapped rendering with NEON you have to do multiple passes.
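
To make the liveness idea concrete, here is a minimal, hypothetical C sketch - not code from PCSX-ReARMed or POPS, all names invented - that skips the expensive flag/saturation bookkeeping whenever the FLAG register would be overwritten before anything reads it:

```c
#include <stdint.h>

/* One decoded GTE operation in a block being translated. */
struct gte_op {
    int reads_flag;   /* 1 if the guest reads the FLAG register here      */
    int writes_flag;  /* 1 if this op would overwrite FLAG anyway         */
};

/* Scan forward from op i: if FLAG is clobbered before anyone reads it,
 * the flag/saturation computation for op i can safely be skipped. */
static int gte_flag_is_live(const struct gte_op *ops, int n, int i)
{
    for (int j = i + 1; j < n; j++) {
        if (ops[j].reads_flag)
            return 1;          /* value is observed -> must compute it     */
        if (ops[j].writes_flag)
            return 0;          /* clobbered first -> dead, emit fast path  */
    }
    return 1;                  /* end of block: be conservative            */
}
```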

Sony designed the PSX and Sony also designed the PSP. They had all the resources necessary to build the best emulator, and presumably their software programmers had the skills.

I noticed they used asynchronous IO instead of blocking IO a lot, for everything. And yes, the audio is probably handled by the second processor (sceMeAudio, implemented in popsman.prx). MDEC is done by the main CPU through intensive use of the VFPU. They massively used VFPU load/store with the write-buffer to clear/transfer. GTE instructions seem to be executed through a call to a function which mixes VFPU, FPU and integer operations - so no liveness analysis and no GTE recompilation. The GPU seems to be converted into GE commands. The GE has some nice features OpenGL doesn't have, but it cannot handle RGB24, which has to be transformed into RGBA32.
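
To make the RGB24 point concrete, here is a plain C sketch of the repacking involved (not the actual POPS routine; in practice NEON or the VFPU write-buffer would be used to make this fast):

```c
#include <stddef.h>
#include <stdint.h>

/* Expand n packed 24-bit RGB pixels into 32-bit RGBA with opaque alpha. */
static void rgb24_to_rgba32(const uint8_t *src, uint32_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint32_t r = src[3 * i + 0];
        uint32_t g = src[3 * i + 1];
        uint32_t b = src[3 * i + 2];
        dst[i] = r | (g << 8) | (b << 16) | (0xFFu << 24);  /* A = 0xFF */
    }
}
```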

The GPU on the PSX is quite simple compared with the GE on the PSP: no direct VRAM, no real 3D. NEON + OpenGL ES 2.0 with smart use of shaders should help.

Since NEON instructions can work on integers, it's probably a better fit than the VFPU (which only works on floats) for GTE emulation.
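
As a rough illustration of why integer SIMD suits GTE math, here is a tiny NEON-intrinsics sketch of a 3-element dot product on 16-bit fixed-point values (illustrative only - real GTE emulation also has to handle the shift, saturation and flag behaviour):

```c
#include <arm_neon.h>
#include <stdint.h>

/* row and vec each hold three signed 16-bit GTE values; lane 3 must be 0. */
static int32_t gte_dot3(int16x4_t row, int16x4_t vec)
{
    int32x4_t prod = vmull_s16(row, vec);   /* widening 16x16 -> 32-bit    */
    return vgetq_lane_s32(prod, 0)
         + vgetq_lane_s32(prod, 1)
         + vgetq_lane_s32(prod, 2);         /* sum the three useful lanes  */
}
```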

EDIT:
@Exophase: in fact, looking at a set of functions for emulated GTE instructions in POPS from FW 5.00, I can clearly see those functions are now using the VFPU intensively, and if I'm not wrong the FLAG register goes in VFPU register S330, instead of the mix of VFPU, FPU and integer instructions I used to see in the first version of POPS.
 
Just had a bash at Road Rash - at 900 MHz it plays sweet and at constant full speed with music off... Enable music, though, and when you ride up a hill the game slows down noticeably. Disable music and the slowdowns no longer happen.

Only happens when you go uphill! How odd :)

D.
 
EvilDragon said:
notaz said:
EvilDragon said:
I can select "Resume game" and it resumes - but only for half a second and then it goes back to the menu. I can repeat this over and over.
Tried this with different GFX plugins - always happened.
Never seen that, give me a savestate. Could be caused by overclocking - as said, the limit is lower here than usual because of the use of NEON.

Hmm, but then it should crash and not go to menu and resume... I'll make a savestate for you.
I've just got that with Silent Hill and fixed it, so no need for a savestate.
Still, overclocking does not always cause crashes; it might result in any kind of unexpected behavior - for example, Prometheus had games glitching but still working (not the case here, though).
 
notaz said:
DaveC said:
Do you think this emu will be, or can be, full speed at 600 MHz, or is that not possible with the current hardware? I am curious how the PSP at 200-odd MHz can be full speed. Is that because the CPU is similar, or does the PSP have a superior hardware layout (for PSX) or something?
Maybe, but not every game I guess (not saying I can do it though). The PSP has the advantage of low-level access to its GPU, so it can convert PSX drawing commands into PSP display lists and run them in parallel (maybe something similar can be done with GLES, I don't know much about that). I'd guess the PSP is doing most audio emulation on its second CPU, which has its own dedicated RAM and should be good for this job. Maybe some other tricks - I wonder if anyone did any analysis of that POPS emu.
Oh, I am sure you could do it. If it is possible, I think you have the coding skill to pull it off. Whether you have the time or motivation to do it is a different story...

Anyway, it seems that without low-level access on the Pandora, many things will suffer and not reach the full potential of the hardware. Will low-level access to the OMAP features ever be possible?
 
DaveC said:
Anyway, it seems that without low-level access on the Pandora, many things will suffer and not reach the full potential of the hardware. Will low-level access to the OMAP features ever be possible?

It is possible, but unlikely, as is the case with most modern systems and SoCs.
The technical documentation for the OMAP is a book of over 3000 pages... the old GP2X SoC was WAY simpler, it can't even be compared - and even that wasn't really used low-level...

I don't know of ANY system nowadays with a recent CPU that really runs in low-level mode... PSP, PS3, Xbox 360, etc. - nothing. They all have an OS (or kernel) that handles that stuff for the games.
 
notaz said:
I've just got that with Silent Hill and fixed it, so no need for a savestate.

Heh - funny coincidence. I started to play Silent Hill while waiting to be able to continue with Suikoden 2 and haven't encountered that issue there :D
 
EvilDragon said:
DaveC said:
Anyway, it seems that without low-level access on the Pandora, many things will suffer and not reach the full potential of the hardware. Will low-level access to the OMAP features ever be possible?

It is possible, but unlikely, as is the case with most modern systems and SoCs.
The technical documentation for the OMAP is a book of over 3000 pages... the old GP2X SoC was WAY simpler, it can't even be compared - and even that wasn't really used low-level...

I don't know of ANY system nowadays with a recent CPU that really runs in low-level mode... PSP, PS3, Xbox 360, etc. - nothing. They all have an OS (or kernel) that handles that stuff for the games.
So basically, the more powerful systems get, the more the OS sucks up that power. Now we have the Pandora, which is much more powerful than the Wiz, but because of a bloated OS the PSX emu runs only a little faster.

It seems a little counterproductive. What is the point of more power if you can't really get to it?
 
DaveC said:
So basically, the more powerful systems get, the more the OS sucks up that power.

Well, the more features an SoC offers (like NEON optimization, etc.), the more complicated it is to use that power.
We've come to a point where it's not easily possible to just increase the raw CPU power. Speed nowadays comes from including optimized hardware (e.g. a dedicated unit for 3D, NEON for special tasks, etc.).
However, the more you include, the more complicated it gets.

The downside is that you can't use the raw potential as easily, as you need a lot more code to use those optimized features.
The good thing, though, is that an OS or compiler that uses these optimized features is still way faster than an old SoC with just more raw horsepower.

Now we have the Pandora, which is much more powerful than the Wiz, but because of a bloated OS the PSX emu runs only a little faster.

Errm... only a little?
I tried a lot of games on the WIZ - games like Spyro the Dragon, etc.
While I get about 20-30 fps WITH frameskip on the WIZ (overclocked to 700 MHz), I get the same game full speed on the Pandora clocked to 800 MHz (and nigh full speed at 700 MHz).

I've yet to find a game I find enjoyable on the WIZ PSX emulator. It's really impressive work, but for me, no game is enjoyable there.

Have you tried that yourself?

It seems a little counterproductive. What is the point of more power if you can't really get to it?

It's a technical limitation. You can't just create a CPU that runs at 10 GHz.
So to gain more speed you need to add optimized features to the CPU - dedicated hardware for common tasks.
That has been the case with a lot of CPUs for a LOOONG time.
Just think about MMX, SSE, SSE2, SSE3 on desktop PCs.

The WIZ SoC is five times slower than the Pandora's SoC, even though they both run at the same MHz.
But the OMAP has a lot more optimizations included and therefore runs a lot faster.

So even with all that bloated stuff (which isn't really that bloated at all, since we CAN use all the optimized features the OMAP offers), it is still a lot faster than forgetting about those optimizations and using raw CPU power, low-level.
 
EvilDragon said:
It's a technical limitation. You can't just create a CPU that runs at 10 GHz.
So to gain more speed you need to add optimized features to the CPU - dedicated hardware for common tasks.
That has been the case with a lot of CPUs for a LOOONG time.
Just think about MMX, SSE, SSE2, SSE3 on desktop PCs.

Well, you could create a CPU that ran at 10 GHz, but I think Intel gave up on the MHz race with the Pentium 4. Intel was pushing 3.5 GHz on a 0.13 micron process, so I'm relatively confident they could have continued, but AMD put pressure on them to change. I'm not too sure why a longer instruction pipeline allowed a higher frequency, but branch mispredictions hurt the P4 pretty badly.

Also, what is interesting to note is that the extended instruction sets like MMX, SSE, etc. are being replaced by dedicated chips with ARM (as far as I can see it). I mean, the only thing ARM has is NEON (not too sure how important Thumb is from a performance/legacy standpoint), but other things are getting bundled onto the SoC, like a DSP, ISP and GPU. I know the Tegra 2 has an ISP and the LG Optimus 2X uses it quite well.

So how I see the computer race going is many different specialized architectures running inside an SoC. Multi-core CPUs definitely have a diminishing-returns thing going on, and they just keep slapping more band-aids on the thing, i.e. cache. But x86 is finally dying. Windows is going ARM, and with IE9 they are finally going hardware-accelerated, GPGPU-wise. But I may be giving MS too much credit here.

The bottom line is that ARM, with their licensing model, will usher in a commoditization of ICs, and the main difference brought about will be optimizations via software. So yeah, I agree with ED that optimizing will be more complicated, but once optimized correctly, you will see a system fly WAY faster than a 10 GHz computer ought to, while consuming far less power.

The only problem is manpower. Look at the state of the drivers for the SGX530 and the compilers. Manpower will come once the world switches over to ARM. But emulating this gen, let alone last gen, seems like a daunting task, and emulating stuff from two generations further back is still causing problems. Hell, I hear the Saturn is a beast to emulate.

Hopefully the source code for everything gets saved. I'd hate to see some gems lost to old consoles.
 
Phawx said:
EvilDragon said:
It's a technical limitation. You can't just create a CPU that runs at 10 GHz.
So to gain more speed you need to add optimized features to the CPU - dedicated hardware for common tasks.
That has been the case with a lot of CPUs for a LOOONG time.
Just think about MMX, SSE, SSE2, SSE3 on desktop PCs.

Well, you could create a CPU that ran at 10 GHz, but I think Intel gave up on the MHz race with the Pentium 4. Intel was pushing 3.5 GHz on a 0.13 micron process, so I'm relatively confident they could have continued, but AMD put pressure on them to change. I'm not too sure why a longer instruction pipeline allowed a higher frequency, but branch mispredictions hurt the P4 pretty badly.

Also, what is interesting to note is that the extended instruction sets like MMX, SSE, etc. are being replaced by dedicated chips with ARM (as far as I can see it). I mean, the only thing ARM has is NEON (not too sure how important Thumb is from a performance/legacy standpoint), but other things are getting bundled onto the SoC, like a DSP, ISP and GPU. I know the Tegra 2 has an ISP and the LG Optimus 2X uses it quite well.

So how I see the computer race going is many different specialized architectures running inside an SoC. Multi-core CPUs definitely have a diminishing-returns thing going on, and they just keep slapping more band-aids on the thing, i.e. cache. But x86 is finally dying. Windows is going ARM, and with IE9 they are finally going hardware-accelerated, GPGPU-wise. But I may be giving MS too much credit here.

The bottom line is that ARM, with their licensing model, will usher in a commoditization of ICs, and the main difference brought about will be optimizations via software. So yeah, I agree with ED that optimizing will be more complicated, but once optimized correctly, you will see a system fly WAY faster than a 10 GHz computer ought to, while consuming far less power.

The only problem is manpower. Look at the state of the drivers for the SGX530 and the compilers. Manpower will come once the world switches over to ARM. But emulating this gen, let alone last gen, seems like a daunting task, and emulating stuff from two generations further back is still causing problems. Hell, I hear the Saturn is a beast to emulate.

Hopefully the source code for everything gets saved. I'd hate to see some gems lost to old consoles.

Well, one of the biggest reasons they stopped making faster and faster processors (MHz-wise) is that it was costing more and more to cool them. That's why they switched to the multi-core approach, so that people could still get fast programs if they are coded with multiple cores in mind, and each core would run a lot cooler because it would be at a lower clock speed.

Also, I've only heard rumors about Windows going ARM. Could you link me to your source? o_ô

-God Ginrai
 
Hm, I thought it was getting more and more difficult to make CPUs faster MHz-wise because of the speed-of-light limitation: the core has to get smaller, but then quantum effects become nasty. Yes, seriously...
 
God Ginrai said:
Also, I've only heard rumors about Windows going ARM. Could you link me to your source? o_ô

-God Ginrai

http://www.microsoft.com/Presspass/Features/2011/jan11/01-05SinofskySOC.mspx
 
frefol said:
God Ginrai said:
Also, I've only heard rumors about Windows going ARM. Could you link me to your source? o_ô

-God Ginrai

http://www.microsoft.com/Presspass/Features/2011/jan11/01-05SinofskySOC.mspx

ooh. Thanks for the read. :)

-God Ginrai
 
Emnasut said:
Hm, I thought it was getting more and more difficult to make CPUs faster MHz-wise because of the speed-of-light limitation: the core has to get smaller, but then quantum effects become nasty. Yes, seriously...

The speed of light doesn't really have anything to do with it. Heat has always been the issue, and the heat was generated because of electrical leakage. The leakage was caused by the process continuing to shrink, forcing electrons through smaller and smaller pathways. Intel created a process called strained silicon, while AMD used SOI and later low-k dielectrics as an insulator.
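
For reference, the usual first-order relation behind that heat argument (with alpha the switching activity, C the switched capacitance, V the supply voltage and f the clock frequency):

```latex
P_{\text{total}} \approx \underbrace{\alpha\, C\, V^{2} f}_{\text{switching}} \;+\; \underbrace{V\, I_{\text{leak}}}_{\text{static leakage}}
```

Since pushing f higher generally also requires raising V, switching power grows faster than linearly with clock speed, and process shrinks made the leakage term worse until the new materials and processes mentioned above reined it in.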

Intel *could* have gone the same route and we would have 15-20 GHz CPUs. But that insanely long pipeline does more harm than good, and the MHz race was merely a PR movement - just as megapixels in cameras are no longer relevant and the ISO rating matters more.

The number associated with frequency really doesn't matter all that much, because it's what you can *do* with that clock that matters. Cores are the PR movement of today, and after 16 cores for general-purpose CPUs, diminishing returns *really* start to kick in. Specialized architectures are what we are moving towards. In about 10 years, your mobile phone will have a transforming UI/UX. Microsoft is well suited to create this experience: they have the WP7 UI for the personal UX (touch-based), keyboard & mouse for the 2 ft UX, and now they have Kinect for the 10 ft UX (body gestures/hand gestures/voice). It will all just be a matter of where you dock it, and it will transform the UI.

To see this happening now, you only need to look at the Motorola Atrix 4G. It can dock to a desktop/laptop/TV and transform its UI based on the experience needed. Obviously, this is using Android and a suite of tools proprietary to Motorola, but Google is doing this now. Eventually they will merge Android/Chrome OS/Google TV into the same product, and it will all be carried out through a mobile device you dock.

It's pretty easy to see with Google and Microsoft, especially since they announced Windows on ARM at CES '11. It's tougher to pinpoint what Apple is doing, especially with the Mac App Store. It seems like they are moving the desktop towards iOS ways. I wouldn't be surprised if Apple hid the filesystem from the user in future versions of OS X, especially when you see that apps are going TRUE full screen and they are removing DMG support.

I see GPUs supplanting CPUs in 10 years, because (so far) they seem to be the only IC that can be used to carry out general tasks far faster than CPUs can. They are going direct-compute now, and software, especially the UI, will need to make use of hardware acceleration to meet consumer expectations of fluidity and the computer "just working". I fully expect the cache band-aids to be replaced by dedicated, specialized ICs in the future.
 
Phawx said:
Well, you could create a CPU that ran at 10 GHz, but I think Intel gave up on the MHz race with the Pentium 4. Intel was pushing 3.5 GHz on a 0.13 micron process, so I'm relatively confident they could have continued, but AMD put pressure on them to change. I'm not too sure why a longer instruction pipeline allowed a higher frequency, but branch mispredictions hurt the P4 pretty badly.

Increasing pipeline length means that in an ideally balanced pipeline (this is probably a lot harder than it sounds, but I'm not a CPU designer so I don't really know) each stage will require less time, and therefore all stages can complete in a smaller clock period.

That time to complete a pipeline stage isn't going to keep scaling down linearly with feature-size reduction, although I don't know the specific limitations. Higher frequency also uses more power and generates more heat (as lulzfish pointed out), and I would imagine there are limitations that make generating and driving the clock itself harder the higher the frequency gets.
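
In rough terms (with t_logic the total combinational delay of the un-pipelined datapath, N the number of stages, and t_overhead the per-stage latch and clock-skew cost), the attainable clock is about:

```latex
f_{\max} \approx \frac{1}{\dfrac{t_{\text{logic}}}{N} + t_{\text{overhead}}}
```

which is why deeper pipelines raise frequency at first but run into diminishing returns: the fixed per-stage overhead eventually dominates, on top of the power, heat and branch-misprediction costs.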

Phawx said:
Also, what is interesting to note is that the extended instruction sets like MMX, SSE, etc. are being replaced by dedicated chips with ARM (as far as I can see it). I mean, the only thing ARM has is NEON (not too sure how important Thumb is from a performance/legacy standpoint), but other things are getting bundled onto the SoC, like a DSP, ISP and GPU. I know the Tegra 2 has an ISP and the LG Optimus 2X uses it quite well.

It's the other way around. NEON is completely analogous to MMX/SSEn; those dedicated chips don't really replace a SIMD coprocessor. It's desktop x86 processors that have been integrating more and more on-die, like IMCs, GPUs and QuickPath on Sandy Bridge.
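
A small illustration of that analogy - the same operation (adding eight 16-bit lanes) written once with SSE2 intrinsics for x86 and once with NEON intrinsics for ARM; the helper function name is made up, and the right path is picked per target at compile time:

```c
#include <stdint.h>

#if defined(__SSE2__)
#include <emmintrin.h>
void add8_i16(const int16_t *a, const int16_t *b, int16_t *out)
{
    __m128i va = _mm_loadu_si128((const __m128i *)a);   /* load 8 lanes  */
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    _mm_storeu_si128((__m128i *)out, _mm_add_epi16(va, vb));
}
#elif defined(__ARM_NEON)
#include <arm_neon.h>
void add8_i16(const int16_t *a, const int16_t *b, int16_t *out)
{
    int16x8_t va = vld1q_s16(a);                         /* load 8 lanes  */
    int16x8_t vb = vld1q_s16(b);
    vst1q_s16(out, vaddq_s16(va, vb));
}
#endif
```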

Phawx said:
So how I see the computer race going is many different specialized architectures running inside an SoC. Multi-core CPUs definitely have a diminishing-returns thing going on, and they just keep slapping more band-aids on the thing, i.e. cache. But x86 is finally dying. Windows is going ARM, and with IE9 they are finally going hardware-accelerated, GPGPU-wise. But I may be giving MS too much credit here.

Meanwhile, it's ARM that has been scaling up core counts, and their roadmaps suggest they want to enter the server market with more cores than x86. In 2011 we'll already be seeing 4-core Cortex-A9s and similar.
 
@Exophase The point I should have made about NEON is how tightly coupled it is with the Cortex-A8 right now. Scaling clock frequency is prohibitive with NEON instructions active, so you are left with the same problems x86 currently faces in terms of compromises that have to be made with the chip. And only recently have desktop x86 processors integrated on-die GPUs: the A64 was the first x86 with an on-die memory controller, and only this year have commercially available x86 chips been bundled with a GPU, whereas ARM SoCs like that have been available for quite some time.

As for ARM "coring up", I can totally see a market for ARM to do thousands of cores on a chip if they could. The TDP and heat numbers alone would make most people flip at the chance. When it comes to virtualization, I don't know of any limit, barring caching, to the appropriate number of cores. But for single-user use cases, 4 cores is awesome, yet 16 cores won't give you the performance difference that going from 1 to 4 did. I know I've seen you remark on data parallelism before and on the realities of how many things could realistically benefit from it, but you get to a point where the only thing that is going to "go faster" is a different architecture.

You can often see it as simply as in how popular the iPhone was, and is, just because the UI is GPU-accelerated. The smoothness and transitions without jerkiness are what make the iPhone king (of course, apps help a whole bunch as well). Even though every half year Android gets a handset with better specs, the iPhone still feels more responsive. Hell, the original iPhone still feels leagues better than most Android phones.

So for server markets that can use virtualization I don't really see a core limit, but for consumer uses, 8-16 cores is the definite max I think we will be seeing.

Though I can easily be wrong - I am only weighing things as far as I can see them, and my view can easily be skewed by a false assumption of how things work. But I do look forward to being enlightened.
 
Exophase said:
That time to complete a pipeline stage isn't going to keep scaling down linearly with feature-size reduction, although I don't know the specific limitations. Higher frequency also uses more power and generates more heat (as lulzfish pointed out), and I would imagine there are limitations that make generating and driving the clock itself harder the higher the frequency gets.

I think the heat is the problem and limits the GHz you can use. At some point it starts to melt the traces - unless you use bigger traces, but then you need a biiiig CPU.
 