Release Some DSP Codecs Out


I have to say I thought you were talking more generally than this - you would need books to answer that question in more detail than has already been given here! I myself haven't looked into this more than fleetingly, but basically there is a whole load of Linux devices that are used to communicate with and control the DSP (including loading and starting programs, communication etc.). The DSP has its own C compiler provided by TI. The best place to get started is probably the specification on http://dspgateway.sourceforge.net/

Edit: Exophase got there first! In light of what he's said, here's a bit more detail... there are devices in Linux such as a DSP task device, DSP control device, DSP memory device etc. There are also sysfs entries for monitoring the DSP from the command line or whatever. The DSP programs are compiled independently using the TI tools, and then loaded, executed, communicated with and monitored using standard Linux device driver calls to the relevant DSP device (i.e. open, close, ioctl, read, write, poll, select, lseek etc., though not all the devices implement all of these calls, obviously). I'm not really sure what role sysfs plays, but probably the same as sysfs/procfs does for any other CPU: general information about what the CPU is doing etc. (not generally used in programming the device, I wouldn't have thought, just for getting stats on the command line, or monitoring what the DSP is doing in a general sense). A userspace library for doing useful, regular tasks (e.g. loading a program) would probably be useful, and may well already be available with the tools.
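To make the device-file idea concrete, here's a minimal sketch of the open/write/read pattern against a DSP task device. The node path /dev/dsptask/mytask and the two-byte message format are made up for illustration; the real names and protocols come from the dspgateway spec and from whatever task you load:

CODE
/* Minimal sketch: talking to a DSP task through its device node.
 * The node path and message size are illustrative assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical task node registered by the DSP-side program */
    int fd = open("/dev/dsptask/mytask", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    short cmd = 42;                 /* word to send to the DSP task */
    write(fd, &cmd, sizeof cmd);    /* hands the data to the DSP side */

    short reply;
    read(fd, &reply, sizeof reply); /* blocks until the task answers */
    printf("DSP replied: %d\n", reply);

    close(fd);
    return 0;
}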
 
'Exophase' said:
'lulzfish' said:
? It's not part of the compiled machine code?
Probably not. Will probably be an object file or binary that the interface takes.

You compile it using TI's freely available C6x C compiler. Or assembler, if you're good at that sort of thing.

You wouldn't "call" it from your program, you'd upload it using their tools, which probably consists of something available at runtime (not something you compile in), but I don't really know.

Linux isn't running on the DSP and you wouldn't want to try interfacing with it directly. You'd probably want to write audio to somewhere the host program running on the Cortex-A8 can see it and have it stream it to the audio device.


So I'd need a separate C or assembly program for the DSP. That makes sense, it's similar to how graphics shader programs work.

This 'upload' thing is very weird. Is it similar to compiling GLSL shaders, but the compiling is done before your program touches it?

That's what I meant, copy data from the DSP's output into something ALSA can access [using the CPU to manage the memory]

'EdCa22' said:
I have to say I thought you were talking more generally than this - you would need books to answer that question in more detail than has already been given here! I myself haven't looked into this more than fleetingly, but basically there is a whole load of Linux devices that are used to communicate with and control the DSP (including loading and starting programs, communication etc.). The DSP has its own C compiler provided by TI. The best place to get started is probably the specification on http://dspgateway.sourceforge.net/
This term 'devices' confused me.
I guess you mean 'device' as in a device file, like /dev/whatever?
For some reason I thought "hardware device" [separate from the Pandora] and it didn't make any sense. But now I get it.

Well, that certainly simplifies things.
 
'lulzfish' said:
So I'd need a separate C or assembly program for the DSP. That makes sense, it's similar to how graphics shader programs work.

This 'upload' thing is very weird. Is it similar to compiling GLSL shaders, but the compiling is done before your program touches it?

That's what I meant, copy data from the DSP's output into something ALSA can access [using the CPU to manage the memory]
Yes, compilation is done first. I've not used ALSA, but I guess you'd just get the data from the DSP and then pass it to ALSA using whatever mechanism it uses (or just spit it at /dev/dsp and ignore ALSA the dirty way hehe... no, only for testing of course)
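For the 'dirty way', here's a minimal test sketch that pushes a 440 Hz tone straight at /dev/dsp, assuming the OSS default format (8 kHz, 8-bit unsigned, mono):

CODE
/* Quick-and-dirty playback test via the OSS node /dev/dsp --
 * fine for testing, not for a real app. */
#include <fcntl.h>
#include <math.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0)
        return 1;

    unsigned char buf[8000];        /* one second at the default 8 kHz rate */
    for (int i = 0; i < 8000; i++)  /* 440 Hz test tone, 8-bit unsigned */
        buf[i] = (unsigned char)(128 + 127 * sin(2 * M_PI * 440 * i / 8000.0));

    write(fd, buf, sizeof buf);
    close(fd);
    return 0;
}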

'lulzfish' said:
This term 'devices' confused me.
I guess you mean 'device' as in a device file, like /dev/whatever?
For some reason I thought "hardware device" [separate from the Pandora] and it didn't make any sense. But now I get it.

Well, that certainly simplifies things.



Yeah, that's it. Generally Linux device drivers work using a device file in /dev, or at least they do if they can be thought of as something that can be opened, closed, read from and written to in the same way as a file. ioctls are used for other things, like sending a direct command to a device to do something specific, and if a device only needs ioctls it doesn't (generally speaking) need a /dev entry... having said all that, /dev entries are fairly optional anyway... you could implement almost any device driver without using them (e.g. network drivers don't use them), but it wouldn't make much sense in most cases. Anyway, you might know all that already.

Yeah, the term 'Linux device' is slightly confusing. What you're really communicating with is a Linux device driver which controls (in this case) a single device. The /dev entries are device 'nodes', but they don't really follow a strict behavioural pattern... what I mean is, you could have one for each device of one type (all managed by the same driver), or just a single entry for many devices, or one device with many entries etc.
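As a generic illustration of that split between stream-style calls and ioctls (the device name and ioctl number below are invented, not any real DSP interface):

CODE
/* Generic shape of the calls described above: open a device node,
 * use read()/write() for stream data, use ioctl() for direct commands. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define MYDEV_RESET 0x1234   /* hypothetical ioctl command number */

int main(void)
{
    int fd = open("/dev/mydevice", O_RDWR);  /* hypothetical node */
    if (fd < 0)
        return 1;

    ioctl(fd, MYDEV_RESET);       /* direct command, no data stream involved */

    char buf[64];
    read(fd, buf, sizeof buf);    /* stream-style access, just like a file */

    close(fd);
    return 0;
}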
 
I already knew that stuff but I wasn't thinking about it. I think I have PulseAudio on my system, and I have no idea where the sound device would be.

Well, I'll have to see what sound server Angstrom uses. Qt and SDL both do audio, but I don't think they expose the actual sound device or any buffers, so I don't think that will really work with the DSP.
 
Oh I think any sound system will work... what I said about /dev/dsp was probably confusing now that I look at it again... /dev/dsp is the audio card device node on some Linux systems, so if you have no sound system installed but do have a driver you can do "cat /dev/dsp > somefile" to record raw audio and then do "cat somefile > /dev/dsp" to play it back.

In what you want to do, I don't think the DSP would be able to play the sound itself; I don't think it has any direct access to peripherals (could be wrong here though)... what I think you'd do is get the DSP to do the crunching (whatever sound effects/functions you'd like) and then keep picking up the result in your main program and passing it to whatever sound mixer/server you want as normal. This may mean that you have to bypass the SDL or Qt audio stuff and interface with the sound server directly (I've not used SDL or Qt, so again I could be wrong, but I imagine they don't deal with raw audio sample data... more 'play this file' or 'make this frequency noise'?)
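A minimal sketch of that host-side loop using the ALSA library; get_dsp_block() is a hypothetical stand-in for however you actually fetch the DSP's output (here it just produces silence so the sketch is self-contained):

CODE
#include <alsa/asoundlib.h>
#include <string.h>

/* Hypothetical stand-in for reading the DSP's output buffer. */
static int get_dsp_block(short *buf, int frames)
{
    static int blocks = 0;
    memset(buf, 0, frames * 2 * sizeof(short));  /* silence, stereo */
    return (blocks++ < 100) ? 0 : -1;            /* stop after 100 blocks */
}

int main(void)
{
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    /* 16-bit signed, interleaved stereo, 44.1 kHz, up to 0.5 s latency */
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED, 2, 44100, 1, 500000);

    short buf[1024 * 2];
    while (get_dsp_block(buf, 1024) == 0)
        snd_pcm_writei(pcm, buf, 1024);   /* blocks until ALSA has room */

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}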
 
'EdCa22' said:
Oh I think any sound system will work... what I said about /dev/dsp was probably confusing now that I look at it again... /dev/dsp is the audio card device node on some Linux systems, so if you have no sound system installed but do have a driver you can do "cat /dev/dsp > somefile" to record raw audio and then do "cat somefile > /dev/dsp" to play it back.

In what you want to do, I don't think the DSP would be able to play the sound itself; I don't think it has any direct access to peripherals (could be wrong here though)... what I think you'd do is get the DSP to do the crunching (whatever sound effects/functions you'd like) and then keep picking up the result in your main program and passing it to whatever sound mixer/server you want as normal. This may mean that you have to bypass the SDL or Qt audio stuff and interface with the sound server directly (I've not used SDL or Qt, so again I could be wrong, but I imagine they don't deal with raw audio sample data... more 'play this file' or 'make this frequency noise'?)
I wish SDL would make beeping noises, but I don't think it even does that. Which is understandable, it's just a wrapper for libraries that probably don't have that function either.

Yeah, I don't have anything specific planned, but I'd like to use the Pandora to do something like one of the following:

1. Procedural music generation
2. Procedural game data generation from music
3. Warping of sound effects / music
4. Some sort of voice changer program that uses the microphone... I think iPhone has one.
 
'EdCa22' said:
I'd have to disagree here... high-quality audio processing certainly can be handled well and in real time on fixed-point DSPs, you just have to be a bit careful about what you're doing... I'd also argue that most devices that are doing real-time audio processing (independently of a desktop PC) are implemented on non-floating-point DSP chips or even just low-power general-purpose microprocessors. Note that here there is a distinct difference between general audio processing and compression/decompression (codecs), because there is a lot more number crunching required to implement a codec compared to the majority of audio processing applications, which are generally just fairly simple digital filters. So it really depends on the application. Floating point is certainly always an advantage.

I'm not a specialist for sure but given that:

- TI high-end audio solutions use the floating-point C67x DSP, and they recommend the C67x for all professional audio
- the Csound DSP board was based on a floating-point chip (SHARC); also, Csound uses floating-point all over the place

I'd assume that even when you're not doing audio codec stuff (TI is for audio processing, Csound does procedural sound generation), floating-point is highly desirable. I never said it was mandatory; it's like 3D stuff, you can do it using fixed-point, but that quickly becomes a nightmare. As an exercise, I'll leave it to you to evaluate FFT computation errors in both fixed-point and floating-point (and I'll admit that I can't do that exercise myself :p ). And the OP wanted to know about procedural sound generation, not digital filtering, which I agree is typically done using fixed-point.
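A toy version of that comparison (not an FFT, just the same repeated multiply done in Q15 and in float, Q15 meaning 1 sign bit and 15 fractional bits):

CODE
/* Accumulate the same product chain in Q15 and in float and
 * watch the truncation error grow. */
#include <stdio.h>

int main(void)
{
    short q = (short)(0.99 * 32768);   /* 0.99 in Q15 (actually 32440/32768) */
    float f = 0.99f;
    short qacc = 32767;                /* ~1.0 in Q15 */
    float facc = 1.0f;

    for (int i = 0; i < 100; i++) {
        qacc = (short)(((int)qacc * q) >> 15);  /* truncating Q15 multiply */
        facc *= f;
    }
    printf("Q15: %f  float: %f\n", qacc / 32768.0, facc);
    return 0;
}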

References:
- TI Aureus High Performance Digital Audio Processors: focus.ti.com/apps/docs/mrktgenpage.tsp?appId=1&contentId=14700
- TI Professional Audio Solutions (click on the DSP link to see recommended DSPs): focus.ti.com/apps/docs/mrktgenpage.tsp?contentId=14468&appId=1
- short article about the Csound DSP board: www.media.mit.edu/~bv/papers/extended%20csound.pdf
- Csound: www.csounds.com

'EdCa22' said:
The best place to get started is probably the specification on http://dspgateway.sourceforge.net/

dspgateway was Nokia's way of interacting with the DSP on previous OMAP platforms. It's obsolete.
The new recommended way of doing things is TI dspbridge.
 
'Laurent' said:
floating-point is highly desirable. I never said it was mandatory; it's like 3D stuff, you can do it using fixed-point, but that quickly becomes a nightmare.
Maybe there is some optimized fixed or even floating point library already provided by TI? There was a Q15 (1 sign bit, 15 fractional bits) library with the C55x on OMAP1. Or maybe gmplib.org could be ported to the DSP and still run faster than the ARM core doing double FP math? Or maybe not :)
 
@Laurent: fair enough, I have to admit that I know nothing about these high-end audio processors. What I know is that the majority of fairly average SISO devices (again using my example of programmable guitar pedals) use low-end, fixed-point-only chips... however, they are basically only playing with the coefficients of a set of transfer functions, which sounds less complex than what you are talking about. Unfortunately I can't check your references or provide any, as I am now on my iPhone on a plane that is about to take off!

With regards to the C64x development, looks like I've been checking the wrong stuff...will have to check out the dspbridge when I'm back from holiday!
 
> Again, do some research, and you'll see that most audio DSP stuff is floating-point, at least everything that's supposed to provide good quality.


Not true, really. So much of the DSP-based gear that we find in pro studio rigs is based on fixed-point processors. Floating point is just an 'easy' way to handle the programming for audio; it isn't a requirement, and certainly great results can be attained with fixed-point DSPs... like the Access Virus synthesizer, for example.

Me, personally, I'm hoping we'll get the toolkit together that we need to write fresh new DSP code for the Pandora, and when that happens I'll be working on effects processing programs, soft synths, and so on, for the Pandora...
 
'fanoush' said:
'Laurent' said:
floating-point is highly desirable. I never said it was mandatory; it's like 3D stuff, you can do it using fixed-point, but that quickly becomes a nightmare.
Maybe there is some optimized fixed or even floating point library already provided by TI? There was a Q15 (1 sign bit, 15 fractional bits) library with the C55x on OMAP1. Or maybe gmplib.org could be ported to the DSP and still run faster than the ARM core doing double FP math? Or maybe not :)


It looks like TI does have a fixed-point math library for the C64x+:

http://focus.ti.com/docs/toolsw/folders/print/sprc542.html

From this link:

https://community.ti.com/forums/p/1858/10069.aspx

it looks like the current release is focused on Windows, but you can get it to work in Linux (and they hope to support Linux in the future).
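For flavour, a minimal sketch in the style of TI's IQmath API (the header, type and macro names follow TI's general IQmath documentation; the exact names in the C64x+ release may differ):

CODE
#include <stdio.h>
#include "IQmathLib.h"   /* from the TI library linked above */

int main(void)
{
    _iq a = _IQ(1.5);          /* convert constants to the global Q format */
    _iq b = _IQ(2.5);
    _iq c = _IQmpy(a, b);      /* fixed-point multiply: ~3.75 */

    printf("result = %f\n", _IQtoF(c));
    return 0;
}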
 
That's pretty awesome, looks like the software is slowly coming together.

But

We still don't have the damn thing.

I don't think we have an official date right now either
 
'Laurent' said:
Certainly. The only issue is that typically procedural sound generation uses floating-point and the C64x is fixed-point-only. Search on the web for Csound + DSP.


*oh, gosh, what am i doing - arguing for the wrong scalar format..?* ; )

ok, while floats represent signals quite naturally, fixed-point can be quite capable at sound synthesis too. i have one little Korg DS-10 here that does miracles (alas, it does occasionally run out of juice). for the record, the DS-10 is an emulator of an analogue synth (Korg MS-10 + Kaoss Pad), running on the Nintendo DS (33 MHz ARMv4 + 66 MHz ARMv5TE, 10-bit sound engine)

here are some good examples of what the DS-10 can do:

http://soundcloud.com/warptoken
http://soundcloud.com/broseybrose
http://soundcloud.com/blu/flute-2

that last one is by yours truly, and was added to the list of great DS-10 works just to spoil the impression ; )
 
The beginning for a DSP coder:
http://focus.ti.com/dsp/docs/dspsplash.tsp?contentId=52451
 
'darkblu' said:
'Laurent' said:
Certainly. The only issue is that typically procedural sound generation uses floating-point and the C64x is fixed-point-only. Search on the web for Csound + DSP.
*oh, gosh, what am i doing - arguing for the wrong scalar format..?* ; )

ok, while floats represent signals quite naturally, fixed-point can be quite capable at sound synthesis too. i have one little Korg DS-10 here that does miracles (alas, it does occasionally run out of juice). for the record, the DS-10 is an emulator of an analogue synth (Korg MS-10 + Kaoss Pad), running on the Nintendo DS (33 MHz ARMv4 + 66 MHz ARMv5TE, 10-bit sound engine)
Oh please don't get me wrong :) I know perfectly well one can do wonders with fixed-point. Sometimes it's just so much easier to use floating-point...

BTW: groups.google.com/group/beagleboard/browse_thread/thread/0a88dccbb7acc06c#
By Koen Kooi:

There's a new demo image + kernel available from http://angstrom-distribution.org/demo/beagleboard/

It now includes *all* the things needed to get gstreamer to use the DSP for decoding audio and video using the infrastructure from gstreamer.ti.com.
 
'Laurent' said:
Oh please don't get me wrong :) I know perfectly well one can do wonders with fixed-point. Sometimes it's just so much easier to use floating-point...
true. i guess the reason i brought that up was that i'd be more than happy to have a DS-10-like piece of software on the Pandora, as fixed-point as it might be ; )
 
Archos just released new firmware for their 5 series (same CPU and DSP as the Pandora) that allows WMV / VC-1 playback at 720p @ 24 fps, 6 Mbps maximum, as well as 'MPEG-4 ASP w/o qpel and gmc at 720p / 24 fps / 6 Mbps max' - all of course with major DSP (probably TI) help. We need these codecs! :unsure:
 
'fischju2000' said:
Archos just released new firmware for their 5 series (same CPU and DSP as the Pandora) that allows WMV / VC-1 playback at 720p @ 24 fps, 6 Mbps maximum, as well as 'MPEG-4 ASP w/o qpel and gmc at 720p / 24 fps / 6 Mbps max' - all of course with major DSP (probably TI) help. We need these codecs! :unsure:
*Walks to Archos offices, smashes the window and steals their HDDs*
 
'lulzfish' said:
I already knew that stuff but I wasn't thinking about it. I think I have PulseAudio on my system, and I have no idea where the sound device would be.

Well, I'll have to see what sound server Angstrom uses. Qt and SDL both do audio, but I don't think they expose the actual sound device or any buffers, so I don't think that will really work with the DSP.
SDL_mixer has an old version of MikMod which does pitch changing on musical instruments in music module formats. The actual sound effects do not yet support pitch alterations in real time. I think version 1.3 of SDL may correct that in its version of SDL_mixer.
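A minimal SDL_mixer sketch of that module playback (assuming the SDL 1.2-era API; "song.mod" is just a placeholder filename):

CODE
#include <SDL/SDL.h>
#include <SDL/SDL_mixer.h>

int main(void)
{
    SDL_Init(SDL_INIT_AUDIO);
    Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024);

    Mix_Music *mus = Mix_LoadMUS("song.mod");  /* MikMod handles module formats */
    if (mus) {
        Mix_PlayMusic(mus, 1);        /* play once */
        while (Mix_PlayingMusic())
            SDL_Delay(100);
        Mix_FreeMusic(mus);
    }

    Mix_CloseAudio();
    SDL_Quit();
    return 0;
}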
 