[announce] c64_tools (DSP loader and IPC)


@Letalis Sonus: Sounds like every major player has its own API then, Intel=VA-API, AMD=XvBA, NVidia=VDPAU, Imagination=also VA-API?, TI/OMAP=?
As far as I've heard, OpenMAX is the first choice on ARM systems. That whole mess is mostly just x86-related and basically boils down to VA-API vs VDPAU, and it seems there are wrappers for both to make use of the other API.
 
XvBA hasn't been closed since 24.02.2011, when AMD released the official XvBA SDK. ...
That didn't change much about requiring additional closed-source software to use it, however.
I think that, with the exception of Intel, you need additional closed-source software to use HW acceleration (via VA-API, VDPAU, ...). But I'm not disputing the rest of your post.
Anyway, I don't have much hope for proper video acceleration on OMAP3. Hasn't happened in 5 years so why should it now.
What do you consider proper video acceleration to be? I think the OMAP3 DSP/video accelerator is not capable of more than the codecs TI already released. There's also a Theora decoder for the DSP, but it's not full speed.
@M-HT: That sprite engine uses a fairly generic commandlist interface. From a design point of view, I intended it to serve as a simple interface to the DSP for a collection of 'various' functions, rather than a graphics-framework-specific interface with lots of options that, if combined, would imply a lot of different render loops, one for each possible feature combination. It should be fairly easy to integrate your scaler functions. If you don't object, could you please point me to the latest version of your scalers so I can give that a try?
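Just to illustrate the kind of interface I mean, here is a rough sketch; the struct layout and names below are made up for illustration only, not the actual c64_tools API:

/* Hypothetical commandlist layout -- illustrative only, not the real c64_tools interface. */
typedef struct {
   unsigned int cmd_id;     /* e.g. CMD_BLIT, CMD_SCALE, CMD_FILLRECT, .. */
   unsigned int num_args;   /* number of 32bit arguments that follow */
   unsigned int args[14];   /* source/dest addresses, pitches, sizes, .. */
} dsp_cmd_t;

static dsp_cmd_t cmd_list[256];  /* lives in memory shared with the DSP */
static unsigned int num_cmds;

void cmdlist_add(unsigned int cmd_id, const unsigned int *args, unsigned int num_args) {
   dsp_cmd_t *c = &cmd_list[num_cmds++];
   unsigned int i;
   c->cmd_id   = cmd_id;
   c->num_args = num_args;
   for(i = 0u; i < num_args; i++)
      c->args[i] = args[i];
}

/* at the end of the frame: flush the commandlist buffer, send one message to the DSP,
   and the DSP-side loop dispatches each command to its handler function (blit, scale, ..). */

That way a new function like a scaler is just another cmd_id plus a handler on the DSP side.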
I haven't released the latest version (it's not really ready for release). But I can give you what I have. (When I get home, I'll put it in my thread about scalers.)
Regarding OPL emulation: Are you really sure that your performance measurements are correct? Even when overclocked to 800 MHz, 1% CPU usage would mean 8,000,000 cycles per second. OPL3 has up to 18 channels and even when used in 3*2-op + 6*4-op + 5 drum channels (6 ops) mode, that's still 14 channels / 36 FM operators that need to be emulated. At a sample rate of 48 kHz (OPL3 uses 49.7 kHz), that only leaves (8,000,000 / (48,000 * 36)) = 4.63 cycles per operator. That doesn't sound right to me (and that's still without envelopes/LFOs).
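For reference, here is the back-of-the-envelope calculation behind those numbers (800 MHz, 1% CPU usage, 36 operators and 48 kHz are the assumptions from above):

#include <stdio.h>

int main(void) {
   double cycles_per_sec = 800000000.0 * 0.01; /* 1% of an 800 MHz core = 8,000,000 cycles/s */
   double sample_rate    = 48000.0;            /* OPL3 actually runs at ~49.7 kHz */
   double operators      = 36.0;               /* 14 channels worth of FM operators */
   printf("cycles per operator per sample: %.2f\n",
          cycles_per_sec / (sample_rate * operators)); /* prints 4.63 */
   return 0;
}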

I still think that OPL emulation would be a nice task for the DSP and since in that DOS-emu scenario the DSP wouldn't be used for anything else, free DSP cycles could be used to improve audio quality and/or add some ear candy (e.g. reverb, eqs, ..).
I don't know how OPL really works, but it's possible that in my test the OPL wasn't using all capabilities/channels/..., so it required less CPU time. Also, DOSBox has two OPL emulators. The default one, which I tested, is faster but less accurate (I don't know how much faster). And I think the default OPL sample rate is 44100 Hz.
Anyway, I won't be working on OPL emulation on the DSP, but someone else could take a look at it.

It would be nice if the people who previously reported these issues on their CC/Rebirth units (magic_sam, M-HT) would update their firmware and run the DSP stresstest to confirm that the fix works on their Pandoras as well.
I trust the testing by notaz :) , but I'll run the stresstest.
 
And I don't agree about the technical background; you just have to know C, since c64_tools already hides all the DSP specifics.
Fair enough. But parallel programming is not straightforward, as you mentioned in your post, so that's an extra level of difficulty in implementing any solution using the DSP. 
 
I think that, with the exception of Intel, you need additional closed-source software to use HW acceleration (via VA-API, VDPAU, ...).
Even if you count the firmware required for using the UVDs with the free radeon driver as "additional closed-source software", I already provided a completely free example: the generic shader-based VDPAU implementation in Gallium, which can be used e.g. with the entirely closed-source-firmware-free nouveau driver (that implementation is pretty much limited to MPEG2 so far, though). I'm currently using that implementation myself, as I have one of the few UVD2 Radeon cards that still lack the required firmware.
 
Whenever I develop for the Pandora and hit a bottleneck, the bottleneck always seems to be the GPU (I do not have a 1 GHz unit, for what it's worth), so the DSP's power doesn't really help any projects I have looked at so far.

I get the feeling there are a few people on the boards that would quite like to do something cool with the DSP, but just don't quite know what. The ideal case would be where the end product really is twice as fast thanks to the DSP (rather than something contrived, using the DSP for the sake of it, potentially only getting modest gains). Sure I can think of things I could do on the DSP that would benefit a few projects, and maybe have a little gain, but I am nowhere near interested enough in any of these ideas to work on them to completion.

Is there already a list of worthwhile ideas of what could be done on the DSP knocking around?
 
There were some posts about this somewhere. Some ideas:

1.

DSP scalers for better images; this could speed up the GPU too.

Let me explain:

In OpenGL ES 2 you have per-pixel lighting, so you have to calculate the normal and the angle between the light source(s) and the fragment for every pixel. When you render at 400x240 you only have to calculate 1/4 of the pixels. That's a big win.

There must be a way to map the memory area where the texture is stored as shared memory for the DSP, so it won't have to be copied around. (A minimal sketch of such a scaler loop follows after this list.)

2. A video decoder for YouTube videos. Higher-resolution YouTube videos are laggy, which is a shame since we have some nice YouTube apps. A video decoder on the DSP could be a big speed improvement.

3. Some parallel AI stuff, like a fast iterative algorithm to speed up games which have lots of units.
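To make idea 1 a bit more concrete, here is a minimal sketch of what a DSP-side upscale kernel could look like (plain C, a simple 2x nearest-neighbour upscale from 400x240 to 800x480; it assumes the source and destination buffers are already in memory the DSP can see, so nothing gets copied):

/* Minimal sketch: 2x nearest-neighbour upscale of a 16bpp (RGB565) frame.
   src and dst are assumed to live in memory shared with the DSP. */
void upscale2x_rgb565(const unsigned short *src, unsigned short *dst,
                      unsigned int src_w, unsigned int src_h) {
   unsigned int x, y;
   unsigned int dst_w = src_w * 2u;
   for(y = 0u; y < src_h; y++) {
      const unsigned short *s  = src + y * src_w;
      unsigned short       *d0 = dst + (y * 2u)      * dst_w;
      unsigned short       *d1 = dst + (y * 2u + 1u) * dst_w;
      for(x = 0u; x < src_w; x++) {
         unsigned short p = s[x];
         d0[x * 2u] = p; d0[x * 2u + 1u] = p;
         d1[x * 2u] = p; d1[x * 2u + 1u] = p;
      }
   }
}

A real scaler (scale2x, hq2x, ...) would replace the pixel duplication with its filter kernel, but the memory-layout question stays the same.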
 
DSP scalers for better images; this could speed up the GPU too. [...] There must be a way to map the memory area where the texture is stored as shared memory for the DSP, so it won't have to be copied around.
GPU alone can do that quickly. I tried that on an FBO, and blitting the FBO back to the screen doesn't take much time (a few ms). Not sure there is much to gain here.
 
GPU alone can do that quickly
Not if the GPU is already struggling with other stuff. If the GPU is the bottleneck then you need to take as much away from it as possible.
I really don't think so. You will gain 2 ms per frame, maximum, GPU struggling or not. And to parallelize that blitting operation, you need to double the FBO, and you also need to create an EGL context that still permits blitting to the framebuffer. A lot of technical difficulties for 2 ms per frame. If your game struggles at, let's say, 10 fps, that means you need 100 ms per frame, so you get 98 ms per frame at best, i.e. about 10.2 fps... Sorry, I really don't think it's worth a try (but you can prove me wrong).
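Just to spell the arithmetic out (same assumptions as above: a 10 fps baseline and an optimistic 2 ms saved per frame):

#include <stdio.h>

int main(void) {
   double old_fps  = 10.0;
   double frame_ms = 1000.0 / old_fps;   /* 100 ms per frame at 10 fps */
   double saved_ms = 2.0;                /* optimistic saving from offloading the blit */
   double new_fps  = 1000.0 / (frame_ms - saved_ms);
   printf("new frame time: %.0f ms -> %.2f fps\n", frame_ms - saved_ms, new_fps); /* 98 ms -> ~10.2 fps */
   return 0;
}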

Better to move the OpenAL 3D sound code to the DSP; that would be much more effective (you may gain 1 to 2 fps, maybe 3).
 
Hi,

OK, I have updated my Rebirth Pandora with the Upgrade Pandora OS tool, and ran the stresstest program (dsp_stresstest-14Jan2014b.tar.gz).

I'm sorry to report it still hangs, without any error message ...

Wi-fi stopped working, the XFCE menu disappeared and I had to reboot the Pandora :(

I have also noticed some slight data corruption (files starting with question marks ??? ) at the root of the SD-card.

Bye, and sorry for the bad news.

Magic Sam
 
Strange, maybe you forgot to reboot after doing "Upgrade Pandora OS"? In that case you were still running old code.

Could you also start up the terminal and type "uname -a" and report what date it prints there?
 
No, I'm sure I did reboot the Pandora after the upgrade.

And "uname -a" says the kernel is from Tue Jan 21 23:45:27 EET 2014.
 
Did you run "go64.sh" from dsp_stresstest-14Jan2014b.tar.gz ? If so you shouldn't do it, because it loads the old kernel module with all the problems.

After a fresh boot, run /usr/pandora/scripts/op_dsp_c64.sh, then ./hugetlb.sh, and then ./stress_test.sh
 
Yes I did run go64.sh, silly me :unsure:

I'll try again with your method and report back if I face any more issues.

PS: is there something I can do about the data corruption, besides re-flashing the Pandora and reformatting the SD card?
 
@bsp, would you have any idea how well suited the DSP would be for matrix palette skinning?

I've been trying to think of uses for the DSP, and thought skinning would fit best, so NEON can be used for other tasks and the GPU can focus on drawing.
 
@Cloudef which part of the skinning? The per-vertex part is presumably going to be fastest via a shader (although I take your point that having a simpler shader and moving that work onto the DSP could theoretically bring some improvement). Or were you thinking more of the work required to calculate the joint/bone matrices? For this part, I know Exophase mentioned in the past that it would be a task that could be NEON-optimized very well (although again, if the DSP can free up CPU/NEON time, it could ultimately give an overall performance improvement).
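For context, the per-vertex work we are discussing is roughly the following (a generic C sketch of matrix palette skinning with up to 4 bone influences per vertex; nothing Pandora- or c64_tools-specific, just to show how much arithmetic lands on whatever unit ends up doing it):

/* Generic matrix palette skinning sketch: blend up to 4 bone matrices per vertex.
   palette[] holds 3x4 bone matrices (row-major, implicit last row 0 0 0 1). */
typedef struct { float m[3][4]; } mat3x4_t;

void skin_vertex(const mat3x4_t *palette,
                 const unsigned char bone_idx[4], const float weight[4],
                 const float in[3], float out[3]) {
   int i, r;
   out[0] = out[1] = out[2] = 0.0f;
   for(i = 0; i < 4; i++) {
      float w = weight[i];
      const mat3x4_t *b;
      if(w == 0.0f) continue;
      b = &palette[bone_idx[i]];
      for(r = 0; r < 3; r++) {
         out[r] += w * (b->m[r][0] * in[0] + b->m[r][1] * in[1] +
                        b->m[r][2] * in[2] + b->m[r][3]);
      }
   }
}

Normals get the same treatment (minus the translation column), and the loop over a few thousand vertices per frame is what would be offloaded to the DSP.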
 
The vertex part indeed; this would be used in GLES1 mode, where shaders are not available

(and it might even have merit for GLES2, as my experience with GLES2 on the Pandora SGX has been meh)

The animator part can indeed be SIMD-ified.
 
Well, you can sign me up as being interested in the answer; I have some skinned animation code running on the Pandora. It could be optimized in many ways (I'm sure), but the DSP's potential contribution is certainly interesting to me.
 