What happened to the Coding Competitions?


Can we ever hope to use gcc with the DSP as a target? Or at least some open source compiler? I would love to try coding some stuff on the DSP, but I do all my coding on the Pandora itself, and I'm not going to do it in qemu just to be able to use TI's x86 toolchain :)
GCC does support it, but the quality of the generated code is much, much worse. The DSP needs a very good compiler to have any chance of scheduling code decently, and if the code quality is awful it kind of defeats the whole purpose.
 
Rob Hubbard got hired to do sound for a helicopter game, and the guy had used the SID for graphics so he couldn't do anything.
You must be misremembering something you read. There's no way you can use the SID for anything more than audio: you set some registers and it configures some waveform generators which get mixed and turned into analog output. It has some interesting features like a configurable filter and ring modulation, and it has some other peripheral functions (like an ADC for some I/O inputs), but there's no way you could use it for graphics.
 
I guess one question would be what the goal of the competition is, for example is it:

  • Get some projects using the DSP
  • Get more optimized software on the Pandora (and potentially teach/improve developers optimizing skills)
  • Create some games/applications that are either good entries by themselves, or starting points to turn into good games/applications with continued development
  • Sell more Pandora units
I like the optimizing/DSP-related ideas, although I do worry about who would actually enter these. Perhaps there could be multiple threads to the competition, something like:

  1. Develop a new game/application
  2. Optimize something for the Pandora (needs much more stipulation, but with or without the DSP, with some benchmarks as Exophase suggested)
  3. Port/enhance an existing project
I personally prefer having 1. and 3. separate. Having so many cool ports on the Pandora is amazing and I don't want to undervalue that, but I always think it is exciting to see brand new projects released for competitions.
 
How about an open competition where you get some bonus points for making use of the DSP? Really using it, of course, not just sending messages back and forth :)
 
I don't think forcing use of the DSP or any other component is important. The main point is the result.

If you want a competition based on technology, judge the "wow effect" of the software/games, regardless of the technique (NEON, GPU, DSP, something else, or a combination of them).
 
The most important thing in a compo is the range of productions to choose from, not the quality. It needs to be fun for devs in the first place, not for voters. Otherwise, beginner programmers don't even start to think about competing in a compo.

i say, "crap game" contest should be the way to go. By the way -> Crap game is a metaphor. Make any game, and dont bother if it will be crap :D  
 
I think a "demo" compo would be pretty good, whether you used the DSP or not. Gives something for ED to put on the screens when he takes them to expos ;)

I like the idea of an optimisation compo, but not sure there'd be enough entrants.

I've also always wanted to produce a "good" scene demo; my previous efforts have always been rather lacklustre.
 
Running a toy fractal benchmark that wasn't particularly well optimized for the CPU and using that to make blanket statements that the DSP is faster is way over the top.
Exophase, of course more benchmarking will need to be done. You wanted an optimization compo, so here's your chance: the DSP version of the fractal benchmark was 50% faster than M-HT's optimized ARM version, all that at 80% of the ARM clockrate and at around 60% of the power consumption of the ARM-only version. If you think I'm making "blanket statements", go ahead, make a faster ARM version and prove me wrong. I deliberately chose the fractal algorithm because I consider it quite DSP-unfriendly (not assembly optimized, lots of branches). I expect the performance advantage of DSP-friendly algorithms ("block" processing code like graphics blitters, audio filters/resamplers, ..) to be much higher -- but _that_ remains to be seen.


But you're also right of course, no need to create hype here, so in addition to my last statement I have to add that it is much less convenient/comfortable to code for the DSP: there's no symbolic debugger for it (actually no debugger of any kind), programming it in assembly is hell (though TI's optimizing C/C++ compiler is very good, so you do not have to), you cannot do any high-level things like accessing the file system or using peripherals (no drivers for most of them), and using it comes with a certain overhead (see below).


@comradekingu


So that was actually a real question and you really did not know? Sorry for the sarcasm, then.


As WizardStan already mentioned, the DSP really shines at stream/block processing, but it can also be used as a general purpose processor. You can even run Linux on it (yes, someone ported it once), so technically it would be possible to reverse the roles and use the ARM as a coprocessor. That would not make much sense, though, since the ARM is optimized for low power consumption when idle (the DSP has a slightly higher baseline power consumption, ~80mW from what I have measured), and the DSP MMU is very basic and can only handle 16 address ranges.


Since in the Pandora/Angstrom Linux configuration the DSP is the coprocessor, there is a certain overhead when using it.

Using interrupt-based messaging, simple functions on the DSP can be called ~25000 times per second.

Using the c64_tools "fastcall" messaging protocol, the same DSP function can be called ~456000 times per second, with the drawbacks that the DSP has to stay locked to the calling ARM process for as long as fastcalls can be made, and that polling is used (bad for power consumption).
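Roughly, the difference between the two styles looks like this. To be clear, this is a placeholder sketch: the function names below are NOT the real c64_tools API, they only illustrate the trade-off.

```c
#include <stdint.h>

/* interrupt-based: sleeps until the DSP raises a mailbox IRQ (~25k calls/s) */
extern int dsp_call_irq(uint32_t func_id, uint32_t arg, uint32_t *result);

/* fastcall: busy-polls a shared-memory flag (~456k calls/s); the DSP stays
 * locked to the calling process and the ARM burns power while spinning */
extern int dsp_call_fast(uint32_t func_id, uint32_t arg, uint32_t *result);

uint32_t scale_one(uint32_t sample)
{
    uint32_t out;
    /* one-off request: the ~0.04 ms IRQ round trip hardly matters */
    dsp_call_irq(1 /* hypothetical func id */, sample, &out);
    return out;
}

void scale_block(const uint32_t *in, uint32_t *out, int n)
{
    /* tight call loop: fastcalls avoid paying the IRQ latency n times over */
    for (int i = 0; i < n; i++)
        dsp_call_fast(1, in[i], &out[i]);
}
```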


Since you were mentioning audio resamplers, let's assume you have an audio stream running at 48000 stereo samples (fragments) per second. You are using that for a video game, so the audio output latency should not exceed one frame at 50 fps = 20 ms, i.e. you will need to fill a new output buffer every 48000/50 = 960 fragments and call the DSP 50 times a second.

You would use the interrupt-based messaging for that to be multiprocess/power-consumption friendly, so that gives you a base DSP overhead of 1000/(2*25000) = 0.02 milliseconds per PAL video frame, or 20000 ARM clockcycles (assuming a 1GHz clock). I divided by 2 since for practical reasons you would start the DSP calculation, then do something else on the ARM, and when you are done with that the result will most likely already be available without further waiting.
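For reference, here is that arithmetic spelled out as a tiny C program (plain math only, nothing Pandora-specific):

```c
#include <stdio.h>

int main(void)
{
    const double sample_rate = 48000.0;  /* stereo fragments per second */
    const double frame_rate  = 50.0;     /* PAL frames per second       */
    const double calls_per_s = 25000.0;  /* IRQ-based DSP call rate     */
    const double arm_hz      = 1e9;      /* assumed 1 GHz ARM clock     */

    double frags_per_buffer = sample_rate / frame_rate;    /* 960     */
    double call_ms          = 1000.0 / calls_per_s;        /* 0.04 ms */
    /* halved: the call overlaps with other ARM work */
    double overhead_ms      = call_ms / 2.0;               /* 0.02 ms */
    double overhead_cycles  = overhead_ms * 1e-3 * arm_hz; /* 20000   */

    printf("fragments per buffer: %.0f\n", frags_per_buffer);
    printf("overhead per frame:   %.2f ms (%.0f ARM cycles)\n",
           overhead_ms, overhead_cycles);
    return 0;
}
```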

Depending on the application, some tricks can be used, though; a common one is to collect processing requests in a shared memory area and notify the DSP to process these requests at certain intervals.


Let's assume you use the DSP to "emulate" a soundchip / implement a synthesizer.

You will have the replay control code on the ARM, but you will notice that, "oh noes", 960-sample intervals are much too long for properly timed music playback.

So what you do is fill a "request" shared memory buffer with commands like "process 10 samples, then change the volume of channel 'x' to 'y', process another 140 samples, then stop channel 'z' and change the frequency of channel 'w' to 'v'", and so on, then let the DSP parse that buffer and do the audio rendering.
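A minimal sketch of what such a request buffer could look like. The command set, struct layout, and helper names are made up for illustration; they are not c64_tools or any real synth code:

```c
#include <stdint.h>

enum {
    CMD_PROCESS, /* render the next N sample fragments      */
    CMD_VOLUME,  /* set the volume of channel 'a' to 'b'    */
    CMD_FREQ,    /* set the frequency of channel 'a' to 'b' */
    CMD_STOP,    /* stop channel 'a'                        */
    CMD_END      /* end of this buffer                      */
};

typedef struct {
    uint16_t op; /* one of the CMD_* values                 */
    uint16_t a;  /* channel index or fragment count         */
    uint32_t b;  /* parameter value, if any                 */
} synth_cmd;

/* assumed DSP-side synth primitives (placeholders) */
extern int16_t *render_samples(int16_t *out, int frags); /* returns advanced ptr */
extern void set_volume(int ch, uint32_t vol);
extern void set_frequency(int ch, uint32_t freq);
extern void stop_channel(int ch);

/* DSP side: walk the command list the ARM wrote into shared memory,
 * rendering a whole 960-fragment output buffer in a single DSP call */
void dsp_render(const synth_cmd *cmd, int16_t *out)
{
    for (;; cmd++) {
        switch (cmd->op) {
        case CMD_PROCESS: out = render_samples(out, cmd->a); break;
        case CMD_VOLUME:  set_volume(cmd->a, cmd->b);        break;
        case CMD_FREQ:    set_frequency(cmd->a, cmd->b);     break;
        case CMD_STOP:    stop_channel(cmd->a);              break;
        case CMD_END:     return;
        }
    }
}
```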


The same strategy could be used for graphics rendering, of course. E.g. in an emulator you could use the fastcalls to call the DSP per scanline (or whatever makes sense), let it do the actual rendering and use the ARM only to create these "process requests" / command lists.

I don't think forcing use of DSP or other componant is important.  The main point is the result.
Exactly my thoughts (I said something similar on the previous page).

I think a "demo" compo would be pretty good
sure, if people here can write some nice effects, create interesting screen designs and music, and it's not just XScreensaver ports :)


(Personally, I never was any good at demo coding, but I never really tried, either. I did some intros on home computers back in the late '80s and early '90s, but nothing worth writing home about. I always preferred application and game programming (i.e. interactive things). With demos back in the day, everyone knew they were realtime since there was no other way to do it. Nowadays many people do not get that and think it's just a video (and therefore compare the demo to videos, which is very unfair, of course). Unfortunately, you really need a pricey gamer PC to run today's demos, so many people just stick to watching the videos to be able to see the demos at all.)

For a Pandora demo compo, I'd therefore suggest making it an interactive one. Then again, I could understand if people decided to make a demo-ish game with fancy abstract visuals instead :)
 
sure, if people here can write some nice effects, create interesting screen designs and music, and it's not just XScreensaver ports :)
Yeah, a demo is not just throwing a couple of effects and putting a soundtrack on it. You'll just end up with a crappy demo if you do that :)

It takes a lot of talent to make a great demo. Probably 90% of the demos out there are just junk, 5% are good, and the remaining 5% can be very, very good.

The demos from ASD come to mind. These guys have been producing strong demos one after the other, with innovative effects and matching soundtracks. Superb work.
 
Are the samples clocked in time-critical order, or is it just a (varying) number of operations per second?

Interesting stuff, I wonder what will come of it. The co-processor of the GP2X didn't come to much use; maybe that will be different with the Pandora.
 
i say, "crap game" contest should be the way to go. By the way -> Crap game is a metaphor. Make any game, and dont bother if it will be crap :D
I like optimized stuff, so how about a compromise? One "Crap Contest" where you just hammer the code in and see what comes out at the end, and one "Optimized Contest" where coders can make small but highly optimized demos and programs that make good usage of the hardware. ^^
Well, so far the second CPU on the GP2X was more widely used than the DSP on the Pandora.
Was it easier to use than the Pandora DSP?
 
Yeah, a demo is not just throwing a couple of effects and putting a soundtrack on it [..] The demos from ASD come to mind.
@ekianjo: yes, it takes a lot more. ASD is one of my favourite demo groups, too. The amount of work and dedication that goes into the top demos nowadays borders on insane :)


I also like what some people are doing with exotic/unusual, even custom, hardware.

The Pandora would make a nice demo platform. It's fixed hardware, like home computers of the past or consoles, and it has some unusual but very powerful hardware features, like the DSP, hardware scaling, and multiple graphics layers.

Are the samples clocked in time-critical order, or is it just a (varying) number of operations per second?
I am not sure what you mean. The samples have to be output at constant intervals (the sample rate); the kernel driver uses DMA transfers for that.


You usually end up with at least two buffers: one that is being played / output to the DAC (digital-to-analog converter), and another one that must be filled with new data by the application.

If the app takes too long to fill the next buffer, you end up with "buffer underruns", which cause sound clicks/crackles/stutter -- you have surely heard them before.
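A minimal double-buffering sketch; audio_write() is a made-up stand-in for a blocking write to the sound device (real code would go through ALSA/OSS):

```c
#include <stdint.h>

#define FRAGS 960                  /* 48000 Hz / 50 fps, as computed above */

static int16_t buf[2][FRAGS * 2];  /* two stereo (L+R) buffers             */

extern void fill_buffer(int16_t *dst, int frags);       /* app render code */
extern void audio_write(const int16_t *src, int frags); /* hypothetical    */

void audio_loop(void)
{
    int cur = 0;
    for (;;) {
        /* render into one buffer while the other is being played via DMA;
         * taking longer than 20 ms here means an underrun (audible crackle) */
        fill_buffer(buf[cur], FRAGS);
        audio_write(buf[cur], FRAGS);
        cur ^= 1;
    }
}
```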

Well, so far the second CPU on the GP2X was more widely used than the DSP on the Pandora.

Was it easier to use than the Pandora DSP?
I would say that the second CPU on the GP2X was more difficult to use since, last time I looked at tutorials/example code for it, there was not any easy-to-use toolchain/framework that helped you by defining a "standard" interface to access it and e.g. allowed sharing it among multiple Linux processes.

Last but not least, the hard part is not running code on a secondary processor but rather thinking in parallel and designing appropriate algorithms.

Any system with multiple cores/processors poses this kind of challenge.
 
ASD is one of my favourite demo groups, too. The amount of work and dedication that goes into the top demos nowadays borders on insane :)
Yeah, they are super cool. I interviewed them a couple of years ago and they were very friendly and open as well. Love this kind of attitude and humility.
 
Exophase, of course more benchmarking will need to be done. You wanted an optimization compo, so here's your chance: the DSP version of the fractal benchmark was 50% faster than M-HT's optimized ARM version, all that at 80% of the ARM clockrate and at around 60% of the power consumption of the ARM-only version. If you think I'm making "blanket statements", go ahead, make a faster ARM version and prove me wrong. I deliberately chose the fractal algorithm because I consider it quite DSP-unfriendly (not assembly optimized, lots of branches). I expect the performance advantage of DSP-friendly algorithms ("block" processing code like graphics blitters, audio filters/resamplers, ..) to be much higher -- but _that_ remains to be seen.
Better optimizing a fractal program for the Cortex-A8 isn't needed to support the claim that you made a blanket statement based on one very synthetic benchmark. It doesn't help that the measurements weren't done by the same person to ensure all the variables were controlled, weren't done on the same hardware (instead assuming linear scaling with clock speed), not to mention the assumption that his fixed-point ARM code would use the same amount of power as your VFP-based ARM code.

This fractal bench isn't at all what I'd consider lots of branches, and I don't see a branch in the inner loop like you do. Rather, the main loop has an early exit (one branch coalesced with the main branch). That loop has 10 operations, but since you're emulating floating point it's way more than that in terms of instructions, so there's a pretty good amount of distance between iterations. This isn't a DSP-ideal situation like a big convolution loop, but it's far from being a very bad case, or even something representative of normal code. It's normal to see code where 15% of instructions are branches; that's about 5-6 instructions between branches. And it's normal for a lot of code to chase pointers, even if that's not visibly what the code is doing - like loading a value from a LUT, then using that value to index into another LUT. That's the kind of thing that's really going to hurt the DSP, not this.

At the same time, this fractal bench is awful to optimize on the Cortex-A8. You can parallelize fractal generation across multiple pixels with SIMD without losing that much (you have to throw out some iterations, but it's only really bad at the fractal edges - btw, using only one particular input set makes this test even more ridiculously synthetic), but the flow control based on the results is poison for the Cortex-A8 because of the huge penalty for moving data from NEON to the scalar pipeline. This applies to plain VFP too. I'm really baffled as to how M-HT didn't get better performance than he did, but without knowing details of the precision it's hard to say.

You could instead do a single iteration over a pass of many pixels, then prune the ones that hit the limit and reconstruct the list. After 24 iterations, or when the list is empty, you'd convert the list of iterations/positions to pixels and spit the pixels out - not stored in order - to the screen. That'd have to be done one pixel at a time, and you'd have to load and store a fair amount of data for each pixel. There'd probably be at least a few cycles per pixel of overhead for this. I'd have to run some numbers on the data set characteristics to see if the overhead is worth it and to estimate how it could do vs M-HT's numbers.
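Sketched in plain scalar C, such a prune pass might look something like this (names, the iteration cap, and details are illustrative only; the real gain would come from vectorizing the inner loop with NEON):

```c
#include <stdint.h>

#define MAX_ITER 24

typedef struct { float zr, zi, cr, ci; int px; } point;

/* One Mandelbrot step over the whole active list, then compact it:
 * escaped points are written out, surviving points stay in the list. */
static int iterate_and_prune(point *list, int n, int iter, uint8_t *pixels)
{
    int kept = 0;
    for (int i = 0; i < n; i++) {
        point p = list[i];
        float zr2 = p.zr * p.zr, zi2 = p.zi * p.zi;
        if (zr2 + zi2 > 4.0f) {
            pixels[p.px] = (uint8_t)iter;  /* escaped: emit out of order   */
        } else {
            p.zi = 2.0f * p.zr * p.zi + p.ci; /* z = z^2 + c               */
            p.zr = zr2 - zi2 + p.cr;
            list[kept++] = p;              /* survived: keep for next pass */
        }
    }
    return kept;
}

void render(point *list, int n, uint8_t *pixels)
{
    for (int iter = 0; iter < MAX_ITER && n > 0; iter++)
        n = iterate_and_prune(list, n, iter, pixels);
    for (int i = 0; i < n; i++)            /* never escaped: interior      */
        pixels[list[i].px] = MAX_ITER;
}
```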

Maybe I'll do this as a random optimization exercise. But it has nothing to do with disproving the blanket statement you made. It stays a blanket statement regardless of who does fractals faster. If I did a faster version than you did, would you really turn around and say the Cortex-A8 is faster than the DSP? No, you should rightfully say that the DSP is better for some things and the Cortex-A8 is better for other things.

But you're also right of course, no need to create hype here, so in addition to my last statement I have to add that it is much less convenient/comfortable to code for the DSP: there's no symbolic debugger for it (actually no debugger of any kind), programming it in assembly is hell (though TI's optimizing C/C++ compiler is very good, so you do not have to), you cannot do any high-level things like accessing the file system or using peripherals (no drivers for most of them), and using it comes with a certain overhead (see below).
You did this before: starting with "it's faster and it's more power efficient" but then adding the caveat "but it's more work to use". That doesn't reduce hype; it just makes people who won't program this stuff themselves call devs lazy for not doing it.
 
Some DSP libs would be great for the compo :) . A blitter, audio/video decoding, pathfinding.
 
It would be interesting to see your optimization process for this, Exophase (not in conjunction with proving anything about the DSP; just because it would be interesting to see how you achieve the optimizations you do).
 