[announce] c64_tools (DSP loader and IPC)


Looks like you can compile the source without x86 assembly optimizations, so I would say yes, you can run that.

Question is whether it will, without further optimizations, run faster than a Cortex-A8 build of it (..probably not).

Making the decoder use the OMAP h264 video acceleration HW could (should?) make a big(?) difference speed-wise, but it requires intimate knowledge of video decoders and will most likely involve a lot of hard work.

Apparently there is an OSS video acceleration library/API http://www.freedesktop.org/wiki/Software/vaapi/ that already supports various HW backends (Broadcom, Intel, Imagination, NVidia, Via, ATI). Judging from its description, it probably makes sense to add a TI/OMAP backend to it. I don't have any idea if or how this can be integrated with the Cisco codec, though.
 
NVidia, ATI
Actually, those aren't direct HW VA-API backends, they are just wrappers for other APIs. AMD's driver uses XvBA, which is totally closed down (access to the API docs already requires an NDA, therefore that wrapper is closed source), and VDPAU is Nvidia's open counterpart.
VDPAU is currently getting more popular, though. The free radeon and nouveau drivers, as well as the (limited) generic shader-based implementation in Gallium and apparently even S3, prefer it over Intel's VA-API. Additionally, it is the only API supported by Flash on x86.
 
That's up to ED to decide. As there is no "real" application using it yet, it's not a pressing matter I guess; everyone can run the online update for now.
Thanks, I understand, it makes sense at this stage. 

It would be interesting to know if anyone's working on anything using the DSP (apparently MH-T, bsp have some plans for it, but is there anyone else as well?)
 
Could OpenH264 from Cisco (http://www.openh264.org/) be run on the DSP?
bsp already answered this, but I'll add that the decoder only supports constrained baseline profile, so it can't decode movies encoded using main profile or higher.
AMD's driver uses XvBA, which is totally closed down (access to the API docs already requires an NDA
XvBA hasn't been closed down since 24.02.2011, when AMD released the official XvBA SDK.
It would be interesting to know if anyone's working on anything using the DSP (apparently MH-T, bsp have some plans for it, but is there anyone else as well?)
I don't really have plans for the DSP. I am testing scaling algorithms on it, but I'm not planning to use it anywhere. Unless someone else uses it, it will go unused.

Other than that, I'm still thinking of something that the DSP could be used for. For instance, on the GP2X someone offloaded OPL processing in DOSBox to the second core, so I thought something similar could be done with the DSP. But by my measurements, OPL processing takes less than 1% of CPU time, so offloading it seems meaningless.
 
Maybe we need some wrapper for the DSP: a simple DSP C++ class that can do everything needed and acts as an interface to the images on the DSP, so a normal coder can use it.

Like this example, which scales the rendered image almost without extra "costs", since the work is done on the DSP:

1. The CPU renders the image.

2. The DSP takes the image and scales it.

3. While the DSP scales the image, the CPU already calculates the next image.

This could be used for flashenv, to upscale the reduced-resolution output to nicer images without sacrificing CPU power, which is already almost too low.

#include "dsp.h"

int main()
{
    DSP dsp;

    // One-shot example: have the DSP 2xSaI-scale a single image.
    Image x;
    Image* bigX = dsp.scaleImage2xSaI(x);

    // Double-buffered 800x480 image buffers (one byte per pixel);
    // static, so the ~750KB don't live on the stack.
    static unsigned char imageBuffer[2][800][480];

    while (true)
    {
        // Async call: the DSP starts scaling the current buffer into
        // the framebuffer (SCALE_2XSAI selects the 2xSaI algorithm)
        // and returns immediately.
        dsp.scaleImageToFrameBuffer(imageBuffer[getCurrentBuffer()], SCALE_2XSAI);

        flipBuffer();

        // Meanwhile, the CPU emulates the next frame into the other buffer.
        emulate_next_frame(imageBuffer[getCurrentBuffer()]);

        // Wait for the DSP to finish before touching its buffer again.
        while (!dsp.hasFinished())
        {
        }
    }
}
 
This could be used for flashenv, to upscale the reduced-resolution output to nicer images without sacrificing CPU power, which is already almost too low.
 well your suggested functionality is already there: it's the hardware scaler, and this again is already used in flashenv :)

besides that, ... i don't have any ideas for using the DSP right now. time is limited and there's already too much on my plate :) ... and i'm glad i didn't do anything with the DSP yet, because I'm not a fan of SD or NAND corruption. I usually can't stand such issues, getting too anxious about whether i broke, bricked or ruined something :)
 
well your suggested functionality is already there: it's the hardware scaler, and this again is already used in flashenv :)
Oh, I thought the hardware scaler could only do linear scaling or nearest neighbour. Can it do 2xSaI?

I'm not a fan of SD or NAND corruption.
Same here, but I thought it was fixed in bsp's last update.
 
XvBA hasn't been closed down since 24.02.2011, when AMD released the official XvBA SDK.
That didn't change much about needing additional closed-source software to use it, however. Pretty much every piece of free software that actually did anything to support it (about two projects?) has dropped it by now as deprecated, pointing to the VDPAU support of the free driver. People are still using the old closed-source wrapper (which is pretty much deprecated by now as well, and nobody seems to care enough to start an open alternative), as the driver still does not support any other API.
 
@Letalis Sonus: Sounds like every major player has its own API then: Intel=VA-API, AMD=XvBA, NVidia=VDPAU, Imagination=also VA-API?, TI/OMAP=? Anyway, I don't have much hope for proper video acceleration on OMAP3. It hasn't happened in 5 years, so why should it now? Hopefully this will be different with OMAP5, although from a HW point of view the video accelerator looks similar to the one in OMAP3. It also contains the iLF and iME modules and adds iPE (intra prediction), MC3 (motion compensation), CALC3 ((I)DFT?), and ECD3 (whatever that is). Maybe there will be software available for that next time..

@eumnehS: ekianjo is probably right, there really is just a handful of people who bother to optimize for this particular platform, and a DSP compo would not make much sense, especially if this handful of devs were to compete on one particular subject, like "fastest scaler" or "fastest blitter". What does make sense, IMHO, is to use these limited dev resources to create different kinds of DSP 'libs', accompanied by small and easy-to-use GPP wrapper libraries so other devs can easily use them. Ideally these libs should also be available as portable C versions so development can still be done on a PC, for the most part (this was already suggested much earlier in this thread, but back then there were other issues that had to be dealt with first).

Like M-HT already indicated, there are different kinds of devs. Some like to spend hours optimizing tiny fragments of code to make them as efficient as possible, some like to create libraries and 'frameworks' just because that can be fun, too. Some are interested in looking at and porting existing code (much to learn, instant user gratification in case of apps), some like to write new software by using building blocks created by other devs, and so on.

As I mentioned in the first paragraph, the people who bother to write new DSP code (or port/optimize existing code to/for the DSP) should invest some time to make these 'building blocks' easily accessible to other devs so the code eventually gets used in actual 'end-user' software.

For my part, I'll package my sprite engine, i.e. create a GPP wrapper library for it. Maybe it would make sense to merge this with M-HT's scaler efforts? Some of the code in the GPP wrapper will be common to both 'engines', so that would avoid duplicate effort.

@M-HT: That sprite engine uses a fairly generic commandlist interface. From a design point of view, I intended it to serve as a simple interface to the DSP for a collection of 'various' functions, rather than as a graphics-framework-specific interface with lots of options that, if combined, would imply a lot of different render loops, one for each possible feature combination. It should be fairly easy to integrate your scaler functions. If you don't object, could you please point me to the latest version of your scalers so I can give that a try?

Regarding OPL emulation: Are you really sure that your performance measurements are correct? Even when overclocked to 800MHz, 1% CPU usage would mean 8,000,000 cycles per second. OPL3 has up to 18 channels, and even when used in 3*2-op + 6*4-op + 5 drum channels (6 ops) mode, that's still 14 channels / 36 FM operators that need to be emulated. At a sample rate of 48kHz (OPL3 uses 49.7kHz), that only leaves 8,000,000/(48,000 * 36) = 4.63 cycles per operator. That doesn't sound right to me (and that's still without envelopes/LFOs).
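
As a quick sanity check, that cycle budget can be reproduced with a few lines of C; all the numbers are taken straight from the estimate above:

#include <stdio.h>

int main(void)
{
    const double cpu_hz = 800e6;          /* 800MHz, overclocked          */
    const double budget = cpu_hz * 0.01;  /* the reported 1% CPU usage    */
    const double rate   = 48000.0;        /* sample rate (OPL3: 49.7kHz)  */
    const double ops    = 36.0;           /* 14 channels, 36 FM operators */

    /* prints 4.63 -- cycles available per operator per sample */
    printf("%.2f cycles/operator\n", budget / (rate * ops));
    return 0;
}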

I still think that OPL emulation would be a nice task for the DSP and since in that DOS-emu scenario the DSP wouldn't be used for anything else, free DSP cycles could be used to improve audio quality and/or add some ear candy (e.g. reverb, eqs, ..).

@ekianjo: besides my comments regarding the DSP compo (see above), I just wanted to say that you really shouldn't compare programming the DSP to rocket science :) It's far easier/less work than GLES2 / shader programming, for example. As far as my big plans for the DSP are concerned: I don't really have any, either. However, I will use the DSP, but I'll rather treat it as some special-purpose coprocessor and will mostly run plain C code on it. E.g. for 2D graphics it's faster than the GPU (or the GPP) and also more flexible.

@slaeshjag: Then you have little imagination :p   But really, I can think of tons of software that could be written for the DSP but which I won't write (also not for any other processor) since I rather spend my time doing something else. Spare time is limited, after all. Besides, if I had known beforehand how much time this whole DSP driver task was going to take in the end, I might not have started it at all. Didn't want to leave it unfinished, though.

@rohezal: Yep, that's what we need. Except that the code will look a bit different (you can't simply pass pointers to paged memory to the DSP).

@crow_riot: The hardware scaler is not a replacement for Scale2x / 2xSaI. They can be combined, though: the DSP scalers could e.g. scale 320x240 to 640x480, then the HW scaler could expand that image to fill the screen, if you don't care about the wrong aspect ratio or non-integer scaling artefacts.
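
To make that two-stage combination concrete, here is a minimal sketch of the DSP-side step, with a plain nearest-neighbour 2x upscale standing in for Scale2x/2xSaI (the function name and the 16-bit pixel format are just illustrative assumptions); the HW scaler would then stretch the 640x480 result to the full screen:

#include <stdint.h>

/* Stage 1 (DSP side): integer 2x upscale, e.g. 320x240 -> 640x480.
   A real implementation would run Scale2x/2xSaI here instead of
   plain pixel duplication. Stage 2 is the HW scaler stretching the
   640x480 image to the screen. */
void upscale_2x(const uint16_t *src, uint16_t *dst, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            uint16_t p = src[y * w + x];
            int dy = 2 * y, dx = 2 * x, dw = 2 * w;
            /* replicate the source pixel into a 2x2 block */
            dst[dy * dw + dx]           = p;
            dst[dy * dw + dx + 1]       = p;
            dst[(dy + 1) * dw + dx]     = p;
            dst[(dy + 1) * dw + dx + 1] = p;
        }
    }
}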

As far as DSP stability / NAND issues are concerned: Notaz seemed quite confident that these issues are now resolved. The stress test on his unit ran fine for 24h. Before the fix, things went haywire rather quickly.

It would be nice if the people who previously reported these issues on their CC/Rebirth units (magic_sam, M-HT) would update their firmware and run the DSP stress test to confirm that the fix works on their Pandoras as well.

@Levi: exactly -- but thanks to the ppl who did, it eventually became stable.
 
Again, most people here would not even have the technical background to know how to use it. "Why don't we do a competition to go to the moon?" => it won't make it easier to build rockets and fly there.
I don't think this is the real problem. The problem is that the vast majority of Pandora's software consists of ports from somewhere, and the few remaining things that aren't are written for the Pandora and something else (there are of course exceptions, like _wb_'s stuff). And ported things, whatever they are, are not designed to take advantage of extra custom asymmetric processors on the system, so it's quite difficult to fit that in in retrospect. Not to mention porters rarely have a deep understanding of the program they are porting; it's usually "compile, adapt controls and maybe resolution, release", not "spend months analyzing and profiling the code, rewrite some parts in NEON and move some other parts to the DSP". There is also the general problem of parallelization: figuring out how to do multiple things at once and get consistent results in the end, plus a whole new class of potential bugs and pitfalls you get when you start to deal with such code.

That said the potential is still there, in a perfect case you could almost double the processing power of the system with it. I do have some ideas, maybe sometime in 2014..

And I don't agree about the technical background: you just have to know C; c64_tools already hides all the DSP specifics.
 
(you can't simply pass pointers to paged memory to the DSP)
May I ask how you would do it?

Something like this?

dsp.function(syscall_virtual_to_physical_address(pointer));

If this is true, couldn't you hide this in the DSP class to make it invisible? Lots of high-level programmers are scared when there is no MMU ;)
 
You have to allocate physically contiguous shared memory; just take a look at one of the testcases in c64_tc.c (look for dsp_shm_*).

It's almost as easy as using malloc/free. Actually, you just need two extra calls to set up a shared memory heap (dsp_shm_alloc() and dsp_mspace_create()), then you can use dsp_mspace_alloc() and dsp_mspace_free() to allocate memory just like you would do with malloc() and free().
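
A rough sketch of that workflow, with plain malloc() standing in for the actual c64_tools calls (only the function names in the comments come from the post above; their real signatures are not shown here, so see c64_tc.c for the authoritative usage):

#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* 1. dsp_shm_alloc(): reserve a physically contiguous block that
          both the GPP and the DSP can address. Stand-in: malloc(). */
    unsigned char *shm = malloc(1u << 20);
    if (shm == NULL)
        return 1;

    /* 2. dsp_mspace_create(): set up a heap inside that block. The
          stand-in simply uses the block directly.                   */
    unsigned char *heap = shm;

    /* 3. dsp_mspace_alloc()/dsp_mspace_free(): from here on,
          allocation works just like malloc()/free(), except the
          returned buffers are safe to hand to the DSP.              */
    unsigned char *frame = heap;
    memset(frame, 0, 800 * 480);  /* e.g. one 800x480 8-bit frame    */

    free(shm);  /* stand-in for releasing the shared memory          */
    return 0;
}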

An application specific GPP wrapper library would hide this detail, though, just like e.g. SDL does for hardware surfaces.

And there is an MMU. But if you are not a seasoned C programmer, or the code is a bit more complex, you can always debug it on your PC first before running it on the DSP.
 