What happened to the Coding Competitions?


but using assumptions of linear scaling in clock speed

This was not an assumption: I actually changed the clock speed and the performance scaled exactly linearly.


The results were posted in the c64_tools thread (or try for yourself if you do not believe me).

based on one very synthetic benchmark

I never claimed that this was not a synthetic benchmark. I repeatedly stated that more (i.e. different) benchmarks need to be done.

I don't see a branch in the inner loop like you do

I left the compiler-generated C64p .asm source in the .tar.gz so you can see for yourself, although it can also be seen in the C source code.


Usually a graphics render loop consists of the mandatory for(y..) and for(x..) loops, but the fractal benchmark requires an additional inner loop, and such a loop requires a branch (surprise, surprise). In fact, up to 24 branches (the default maximum iteration depth) are executed per pixel.
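For readers who want to picture it, here is a minimal sketch of that loop structure. It is illustrative only, not the actual benchmark source (which uses fixed-point arithmetic); the names are mine.

    /* Minimal sketch of the loop structure -- illustrative only, not the
       actual benchmark source (which uses fixed-point arithmetic). */
    static void render(unsigned char *framebuffer, const unsigned char *palette,
                       int width, int height,
                       float x_min, float y_min, float x_step, float y_step,
                       float c1, float c2)
    {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                float zr = x_min + x * x_step;   /* map pixel to the complex plane */
                float zi = y_min + y * y_step;
                int iter = 0;
                /* the extra per-pixel loop: iterate until the value escapes or
                   the maximum iteration depth (24 by default) is reached -- this
                   exit test is the branch that runs up to 24 times per pixel */
                while (iter < 24 && (zr * zr + zi * zi) < 4.0f) {
                    float t = zr * zr - zi * zi + c1;
                    zi = 2.0f * zr * zi + c2;
                    zr = t;
                    iter++;
                }
                framebuffer[y * width + x] = palette[iter];
            }
        }
    }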

It doesn't help that the measurements weren't done by the same person to ensure all the variables were correct

Totally agree. I had not gotten around to integrating M-HT's optimized code, so I just did that.


First of all, he turned off the graphics output / video memory writes. That was not a good idea, since otherwise he would have seen that his version of the benchmark rendered only half of the screen! (To be exact: the bottom half was almost constant color and needed very few iterations per pixel.)


It was a simple mistake and easily fixed (_FP(-1.8f) instead of just -1.8f).
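To illustrate what the mistake does: the _FP() name is taken from the attached source, but its definition below and the affected variable are only my assumptions about a typical Q16.16 setup.

    /* Assumed Q16.16 conversion macro -- the real one is in the attachment: */
    #define _FP(x) ((int)((x) * 65536.0f))

    int y_min_ok  = _FP(-1.8f);  /* -1.8 properly converted to Q16.16          */
    int y_min_bug = -1.8f;       /* implicit float->int truncation gives -1,   */
                                 /* i.e. only ~ -0.000015 when interpreted as  */
                                 /* Q16.16, so the coordinate range collapses  */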


A quick glance at the source code also revealed a huge (and missed) optimization opportunity:


A C compiler will not generate a plain arithmetic right shift when a signed integer is divided by 65536: signed division rounds toward zero, so the compiler has to emit extra correction code around the shift!


Fixed that (replaced the division with the shift) and the performance improved by 44%.


(the precision difference is negligible).
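For the curious, this is the shape of the change. The actual _FPmpy() macro is in the attachment; the definitions below are an assumed typical Q16.16 multiply, not a copy of it.

    /* Assumed shape of a Q16.16 multiply macro (the real _FPmpy() is in the
       attachment). Signed division by 65536 rounds toward zero, so the
       compiler emits correction code around the shift: */
    #define _FPmpy_div(a, b)   ((int)(((long long)(a) * (long long)(b)) / 65536))

    /* Plain arithmetic shift right rounds toward negative infinity instead,
       which differs from the division only for negative products (by one LSB)
       -- hence the negligible precision difference. (>> on a negative value is
       implementation-defined in ISO C, but is an arithmetic shift on GCC/ARM.) */
    #define _FPmpy_shift(a, b) ((int)(((long long)(a) * (long long)(b)) >> 16))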

like loading a value from a LUT then using that value to load into another LUT. That's the kind of thing that's really going to hurt the DSP, not this
You mean like some kind of "movetable effect", e.g. the classic '90s Winamp visualizations?


I have to dig through my old source backups. Back then I wrote an implementation of that, and a friend of mine created some nice patterns.


I'll add this to the benchmarks/demos and I am very curious to see whether that really brings the DSP to its knees. These kinds of table lookups are definitely not too uncommon.
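For reference, this is the kind of dependent ("chained") lookup meant here -- a minimal sketch only, not the planned demo code, with names made up for the example.

    /* Minimal sketch of chained table lookups (a "movetable"-style effect):
       the second load depends on the result of the first, so the loads
       serialize and the DSP's load latencies are harder to hide. */
    static void move_effect(unsigned char *dst_fb, const unsigned char *src_fb,
                            const unsigned int *move_tab,
                            const unsigned char *palette_tab, int num_pixels)
    {
        for (int i = 0; i < num_pixels; i++) {
            unsigned int  ofs = move_tab[i];     /* LUT #1: source offset        */
            unsigned char c   = src_fb[ofs];     /* load depends on previous one */
            dst_fb[i]         = palette_tab[c];  /* LUT #2 indexed by that load  */
        }
    }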

btw, using only one particular input set makes this test even more ridiculously synthetic
Bollocks, sry. All test parameters are passed via a struct; they are not constants that the compiler can optimize away. Plus, the main parameters (c1, c2) change each frame, which is why some frames render a lot faster than others. Only the screen size and the maximum iteration depth are constant, but even those are passed via the struct.
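Roughly like this -- a sketch of the idea only; the field names are my guesses, not the actual struct from the benchmark source:

    /* Sketch of the parameter passing -- field names are illustrative: */
    typedef struct {
        int width;      /* constant, but still read from the struct each frame */
        int height;
        int max_iter;   /* 24 by default, also passed via the struct           */
        int c1;         /* Q16.16 main parameter, changes every frame          */
        int c2;         /* Q16.16 main parameter, changes every frame          */
    } fractal_params_t;

    void render_frame(const fractal_params_t *params, unsigned short *framebuffer);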

not to mention the assumption that his fixed-point ARM code would use the same amount of power as your VFP-based ARM code
I have to admit that I did not consider that. The fixed-point version does indeed use less power (~200 mW less) than the float version compiled with -mfpu=neon (or -mfpu=vfp; it does not make a difference).


I reran the "stress_test.sh" script: The ARM-only float version uses ~1.8 W, the fixed point ARM-only version of the benchmark ~1.6 W, and the DSP-only version ~1.1 W.

I'm really baffled as to how M-HT didn't get better performance than he did. But without knowing details of precision it's hard to say
I attached the updated source code so you can see for yourself. Except for the mistake mentioned above, he did a good job of converting the source to fixed point (maybe he took the DSP variant and just replaced the macro implementations? I did not compare the sources; it does not really matter anyway).

If I did a faster version than you did would you really turn around then and say the Cortex-A8 is faster than the DSP?

I would say that for rendering this (and similar) effects, the Cortex-A8 is faster.


In fact, after optimizing the code as described above, it turns out that the GPP _is_ faster in this case:


Results: fractal_benchmark-09Oct2013.txt


GPP only: (float, -mfpu=vfp)
[...] 3600 iterations in 65343 millisecs.

DSP only: (fixpoint via iqmath)
[...] 3600 iterations in 10841 millisecs.

GPP only: (fixpoint, original _FPmpy)
[...] 3600 iterations in 11054 millisecs.

GPP only: (fixpoint, optimized _FPmpy)
[...] 3600 iterations in 7652 millisecs.

GPP+DSP:
[...] 3600 iterations in 5054 millisecs.


This is what I had initially anticipated and why I considered the benchmark a worst-case (or at least bad-case) scenario for the DSP -- it hates branches.


Just so we are on the same page here: my statement on the previous page was sarcastic and exaggerated, but from what I have seen so far it is not very far from the truth. In the c64_tools thread I already said that more benchmarks need to be done before coming to a final conclusion.


The original purpose of this exercise - as I already said - was to see how a piece of source code run on the DSP would stack up against the same one run on the GPP.


I assumed that when people consider using the DSP, they do not want to learn too much about it and just write plain C code and use whatever libraries are readily available.


Sure, the algorithm itself could be optimized (always the best kind of optimization), and your suggestions sound reasonable, but the same optimizations could be done on the DSP side.


Let's just leave it at that and focus our energy on creating new benchmarks or working on real applications.

You did this before.. starting with "it's faster and it's more power efficient" but then adding a caveat "but it's more work to use." That doesn't reduce hype, that just makes people who won't program this stuff themselves call devs lazy for not doing it.
devs _are_ lazy. That's another thing I already said and I included myself.


Hey, if every last bit of software were optimized to the fullest, we would not need to buy new hardware every few years to make up for "sloppy" coding, would we? (By that I mean doing the same tasks with newer software versions on newer HW at the same speed as with older SW versions on older hardware, of course.)


I do not expect everyone to suddenly become DSP assembly / optimization experts. Personally I would use some intrinsics for some small loops but generally stick to plain "C".
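As an example of what I mean by "some intrinsics for some small loops" -- a sketch only, assuming TI's code generation tools and their c6x.h intrinsics header; the function name and the packing/alignment assumptions are mine:

    #include <c6x.h>   /* TI C6000 compiler intrinsics (assumed toolchain) */

    /* Small hot loop using one C64x+ intrinsic, plain C everywhere else.
       Assumes n is even and a/b are 32-bit aligned. */
    int dot16(const short *a, const short *b, int n)
    {
        const int *pa = (const int *)a;   /* treat as packed 16-bit pairs */
        const int *pb = (const int *)b;
        int sum = 0;
        for (int i = 0; i < n / 2; i++)
            sum += _dotp2(pa[i], pb[i]);  /* two 16x16 multiplies + add   */
        return sum;
    }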


There are many other platforms out there which do not have a DSP core but maybe one or three additional GPP cores.


It makes sense to optimize software for that, if processing power is the bottleneck.


The DSP should be seen as just another core which is simply not as tightly integrated into the dev toolchain (compiler/debugger) as a regular, additional GPP core would be. Once people start writing software for the DSP, they will realize that this is not _that_ much different from using multi-threading.


It should be fairly easy to port a c64_tools DSP component to another multi-core architecture. I already mentioned that I am thinking about writing a version of c64_tools that can be used on standard PCs. The DSP would simply be replaced by a thread. Could be useful for development purposes (I guess not everyone writes/compiles code directly on the Pandora).
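Just to illustrate the idea of "the DSP replaced by a thread" -- this is purely an analogy sketch, NOT the c64_tools API:

    /* Analogy only -- not the c64_tools API: a "component" dispatched to a
       worker thread instead of to the DSP. */
    #include <pthread.h>

    typedef struct {
        void (*run)(void *args);   /* component entry point */
        void *args;
    } component_job_t;

    static void *worker(void *p)
    {
        component_job_t *job = (component_job_t *)p;
        job->run(job->args);       /* on the Pandora: a message to the DSP */
        return NULL;
    }

    void run_component(component_job_t *job)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, job);
        pthread_join(tid, NULL);   /* wait for the "remote" call to finish */
    }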
 

Attachments

  • c64_fractal.c.tar.gz
I reran the "stress_test.sh" script: The ARM-only float version uses ~1.8 W, the fixed point ARM-only version of the benchmark ~1.6 W, and the DSP-only version ~1.1 W.
I assume these numbers are with display and everything?

Because the graph you posted a while ago ( http://boards.openpandora.org/index.php/topic/14334-announce-c64-tools-dsp-loader-and-ipc/?p=278282 ) looks totally wrong: "Actual power" is the only real measurement from the battery monitoring chip, and _wb_'s CPU power use estimation (over 2 W) is way above the maximum real measurement of total power (< 1.5 W) in that graph.
 
Yes, the numbers are with display on, backlight at medium brightness, USB-power connected, WiFi/USB-Host off. During the "gpponly" benchmarks, the DSP was powered down, too.

In the graph you linked to, only the actual power consumption (i.e. the red line, or rather the red '+' markers) really matters, of course.

The quoted power consumption figures therefore refer to that, not the sysinfo estimations.

EDIT: for completeness' sake, here's the sysinfo graph:

plot-power-breakdown-09Oct2013_stress_test.png


legend:

@~16:00: "c64_fractal_gppfpu" -- the original FPU code, ARM-only, not optimized in any way

@~17:20: "c64_fractal_gpponly" -- M-HT's original ARM-only fixed point version (w/ minor bugfixing)

@~17:35: "c64_fractal_gpponly2" -- with optimized _FPmpy() macro (arithmetic shift right instead of division by 65536)

@~18:00: "c64_fractal_dsponly" -- DSP fixed point version using TI's iqmath library/macros

@~18:20: "c64_fractal_gppdsp" -- both processor cores working together

p.s.: this is with the lid closed, display off, WiFi/USB-host off, no USB power connected.

p.p.s.: the timestamps are wrong because the Pandora clock does not tick on while the system is suspended via the power switch. It is 2am here.

p.p.p.s.: the constant power spikes (every second) are due to _wb_'s current sysinfo tool release. The previous version did not do that (at least not that frequently).
 
devs _are_ lazy. That's another thing I already said and I included myself.


Hey, if every last bit of software were optimized to the fullest, we would not need to buy new hardware every few years to make up for "sloppy" coding, would we? (By that I mean doing the same tasks with newer software versions on newer HW at the same speed as with older SW versions on older hardware, of course.)
Maybe, but we'd also have exponentially-ish longer development times and a lot more bugs. Fully optimizing something takes time and often reduces the code's flexibility dramatically. Because of this, changes more often than not result in rewrites with little to no code reuse, leading to new bugs and a whole lot of time lost doing the same thing over and over again.

(to the non-developers, devs should already know this and correct me if I'm wrong)

There are different types of optimization. I categorize them as follows:

  • "What I should've done" is fixing doing things stupidly (because of derps or fast prototyping or whatever), like bad algorithm or data structure choices. These are optimizations one should do after the initial prototyping and continue to do whenever an opportunity presents itself. These are usually about reducing asymptotic complexity.
  • "What I could've done" is making non-flexible/hacky/platform-specific tuning. This should be done when it's needed or when the code is not expected to ever change. Usually this makes changes to the optimized code require a lot more work. These are usually about utilizing platform-specific features (like ASM implementations, DSP, NEON and so forth) and speedups that don't change asymptotic complexity (like writing the code to be more easily vectorized by a compiler).
There is value in optimizing, but sometimes there's also value in not optimizing. Optimization is not a magical thing to make things work better without affecting anything else.

End rant.
 
Maybe, but we'd also have exponentially-ish longer development times and a lot more bugs. Fully optimizing something takes time and often reduces the code's flexibility dramatically. Because of this, changes more often than not result in rewrites with little to no code reuse, leading to new bugs and a whole lot of time lost doing the same thing over and over again.
Exactly. That's why I said "sloppy" (in quotes), not sloppy.


I like to classify optimizations like this:

  • Runtime optimizations that are _not_ disproportionate to the implementation effort (wise choice of overall program layout / data structures, reasonable use of multiple processor cores, including DSPs or GPUs (if appropriate), reasonable optimization of time-critical code sections, ..) ("what I should do")
  • Runtime optimizations that are disproportionate to the implementation effort (assembly optimizations, heavy use of platform-specific features and/or peripherals) ("what I should do if someone pays me for it")
  • Development time optimizations (use of (very) high level languages or VMs for control code, UIs, and to make software more robust in general) ("what I should do to get things done in limited time")
For some time now I have seen a shift in the software industry towards "development time optimizations", since the tools/ecosystem involved (VHLLs) mean you can hire less experienced (and cheaper) devs, shorten development times (more money saved), and release your product earlier (add the "banana soft" mentality to that and there's even more money saved).


I guess that's how we end up with bloated, resource-squandering software that requires much more hardware resources than necessary.


But yeah, enough of that.

End rant.
 