A Possible Performance Bump From OpenCL?


pseudomind

Still Fresh
Joined: Oct 8, 2008
Messages: 13
I know this is a bit preemptive considering that the OpenGL support on the Pandora is not currently functional. However, for those of you who have not gotten wind of this thus far, OpenCL was recently announced as an open standard for utilizing a system's GPU for general-purpose parallel processing. Granted, I don't think there is any software currently using these libraries and whatnot, but I don't really see a reason why we couldn't be getting a little SMP action out of the Pandora in the not too distant future.

Take a look and see what you think.
http://en.wikipedia.org/wiki/OpenCL
http://www.khronos.org/opencl/
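
For anyone curious what this actually looks like in code, here's a bare-bones sketch of an OpenCL C kernel, going purely off the spec and the examples floating around (nothing Pandora-specific, obviously):

Code:
/* Minimal OpenCL C kernel sketch: adds two float arrays element by element.
   Each work-item handles one index; the runtime decides how the work-items
   map onto the hardware's parallel units. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out)
{
    int i = get_global_id(0);   /* this work-item's position in the 1D range */
    out[i] = a[i] + b[i];
}

The host program hands that source to the OpenCL runtime at run time and launches it over however many elements it has, which is where the "general purpose parallel processing" part comes in.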
 
The 3D driver for the Pandora will be functional very soon according to Craigix.
 
mazza558 said:
The 3D driver for the Pandora will be functional very soon according to Craigix.
Yes, but will it be able to understand OpenCL?
 
CandidStan said:
Yes, but will it be able to understand OpenCL?
To the best of my knowledge, the drivers do not have OpenCL support built in. Perhaps this is an area where maciek_urbanski's GPU deconstruction work could come in handy. A particularly ambitious person might create OpenCL drivers from MU's low level framework.

I wouldn't hold my breath for this, though. :D
 
Isn't there any OpenCL implementation on top of OpenGL ES 2.0? It should be possible, in a similar way to ShivaVG, which is an OpenVG implementation on top of OpenGL.
 
I am not an expert in these matters, but as I understand it, the abstraction layer (in this case, OpenCL) works directly with the hardware. I think that the person creating the driver builds the API out of the lowest-level silicon commands in order to be as efficient as it can be. If you were to build OpenCL commands out of OpenGL commands (which were specifically designed for graphics work), then you lose any efficiency gains you might have had otherwise.

Perhaps I'm completely wrong with how this works, but my understanding goes like this: Let's say you want to create a new written language. The lowest-level bits of information (the processor's assembly language) you can use are lines and curves. You then create an alphabet (an abstraction layer or API) with these tools. So now you have a set of letters, but you also want to have a way to represent numbers. You could just spell out every number with letters (use one API to create another), but it would make more sense to go back to the basic building blocks of the line and curve and create a whole specialized character set. You need far fewer lines and curves for "129" than you do for "One Hundred Twenty Nine".

Your OpenVG example is a bit different because they are both graphics-centric APIs. They have similar purposes, just a different vocabulary. For that, it would be more like going from "One Hundred Twenty Nine" to "Ciento Veintinueve" :D
 
efegea said:
Isn't there any OpenCL implementation on top of OpenGL ES 2.0? It should be possible, in a similar way to ShivaVG, which is an OpenVG implementation on top of OpenGL.
I think a language interpreter on top of a vector graphics renderer is more than a little different to a vector graphics renderer on top of a vector graphics renderer.
 
Well, I think I see what you guys are saying a little better. I guess my misunderstanding stems from this one line in the Wikipedia entry...

"The purpose is to recall OpenGL and OpenAL, which are open industry standards for 3D graphics and computer audio respectively, to extend the power of the GPU beyond graphics (GPGPU)."

To me, this seems to mean that it is going to use the existing OpenGL and OpenAL APIs in some sort of contorted way to do calculations without needing to be rewritten for all hardware platforms. At any rate, you guys have not completely dashed my hopes yet, because the list of companies cited in creating the open standard is as follows (from the Khronos page):

"OpenCL is being created by the Khronos Group with the participation of many industry-leading companies and institutions including 3DLABS, Activision Blizzard, AMD, Apple, ARM, Barco, Broadcom, Codeplay, Electronic Arts, Ericsson, Freescale, HI, IBM, Intel, Imagination Technologies, Kestrel Institute, Motorola, Movidia, Nokia, NVIDIA, QNX, RapidMind, Samsung, Seaweed, Takumi, Texas Instruments and Umeå University."

As you can see, two of those groups have pretty close ties to what is at the heart of the Pandora hardware. This would particularly make sense considering that there seems to be a push for the ARM platform into the netbook arena with Ubuntu, TI, and friends. So I am thinking that either ARM or TI themselves will be working on getting this particular setup working, if only to make the netbooks more viable. Granted, this is all complete speculation, but I am thinking that by this means we may find support for OpenCL even if it wouldn't just work outright.
 
pseudomind said:
Well, I think I see what you guys are saying a little better. I guess my misunderstanding stems from this one line in the Wikipedia entry...

"The purpose is to recall OpenGL and OpenAL, which are open industry standards for 3D graphics and computer audio respectively, to extend the power of the GPU beyond graphics (GPGPU)."

To me, this seems to mean that it is going to use the existing OpenGL and OpenAL APIs in some sort of contorted way to do calculations without needing to be rewritten for all hardware platforms. At any rate, you guys have not completely dashed my hopes yet, because the list of companies cited in creating the open standard is as follows (from the Khronos page):

"OpenCL is being created by the Khronos Group with the participation of many industry-leading companies and institutions including 3DLABS, Activision Blizzard, AMD, Apple, ARM, Barco, Broadcom, Codeplay, Electronic Arts, Ericsson, Freescale, HI, IBM, Intel, Imagination Technologies, Kestrel Institute, Motorola, Movidia, Nokia, NVIDIA, QNX, RapidMind, Samsung, Seaweed, Takumi, Texas Instruments and Umeå University."

As you can see, two of those groups have pretty close ties to what is at the heart of the Pandora hardware. This would particularly make sense considering that there seems to be a push for the ARM platform into the netbook arena with Ubuntu, TI, and friends. So I am thinking that either ARM or TI themselves will be working on getting this particular setup working, if only to make the netbooks more viable. Granted, this is all complete speculation, but I am thinking that by this means we may find support for OpenCL even if it wouldn't just work outright.



I think the fact that ImgTech is on the list is MUCH more interesting...
 
surt said:
efegea said:
Isn't there any OpenCL implementation on top of OpenGL ES 2.0? It should be possible, in a similar way to ShivaVG, which is an OpenVG implementation on top of OpenGL.
I think a language interpreter on top of a vector graphics renderer is more than a little different to a vector graphics renderer on top of a vector graphics renderer.


And doesn't a vector graphics renderer such as the SGX535 feature a programmable shader engine? Can't that language be interpreted by the shaders?
 
I would assume that the basic OpenCL compiler/implementation would compile down to the lowest level it can, but the default would likely be to use OpenGL with GLSL of some form. At least, that is how I would implement it.
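
To make that concrete, here's a rough illustration (just two string constants in C, not working translator code) of what such a layer would have to do for even a trivial kernel - turn OpenCL C into an equivalent GLSL ES fragment shader, with the buffer packed into a texture and the result read back through the framebuffer:

Code:
/* Hypothetical example only: the same element-wise scale expressed twice. */

/* 1) The OpenCL C kernel the application would write. */
static const char *opencl_kernel_src =
    "__kernel void scale(__global float *buf, float k) {\n"
    "    int i = get_global_id(0);\n"
    "    buf[i] *= k;\n"
    "}\n";

/* 2) The GLSL ES 2.0 fragment shader an OpenCL-on-OpenGL layer might
   generate: the buffer becomes a texture, the work-item index becomes a
   texture coordinate ('uv', fed in by a trivial vertex shader), and the
   result comes back via the framebuffer. */
static const char *generated_glsl_src =
    "precision mediump float;\n"
    "uniform sampler2D buf;\n"
    "uniform float k;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    float v = texture2D(buf, uv).r;\n"
    "    gl_FragColor = vec4(v * k, 0.0, 0.0, 1.0);\n"
    "}\n";

Workable for simple stuff like this, but anything involving scattered writes or synchronization between work-items doesn't map onto the fragment pipeline nearly as cleanly.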
 
Chip said:
CandidStan said:
Yes, but will it be able to understand OpenCL?
To the best of my knowledge, the drivers do not have OpenCL support built in. Perhaps this is an area where maciek_urbanski's GPU deconstruction work could come in handy. A particularly ambitious person might create OpenCL drivers from MU's low level framework.

I wouldn't hold my breath for this, though. :D
Oh, well it would be nice considering what this language could do in terms of games.
 
I think that using the SGX535 for anything not related to the graphical pipeline would be unwise. It doesn't have a ton of shading power (just how much is pretty unclear), especially compared to the video card beasts that are out now. If you're not using the SGX for anything else it might be worthwhile, but you should probably try to maximize utilization of the DSP first.
 
Would be nice for mathematical software too.

/me goes to read up on these things in preparation
 
What I am more interested in is using OpenCL as a CPU and DSP language.
From the current design it seems possible to write your application in one language and run the specific kernels on the CPU, DSP and GPU with a thin wrapper program in C and OpenCL.
The only thing I am not sure about is using libraries on the CPU side of OpenCL C.

Some downsides would be that the runtime would have to be available for the target platform and that you are bound to the OpenCL C language. Another would be that you can't use the GPU to its full graphical potential, but that can be done with OpenGL (ES) or OpenVG.

On the other hand, you get platform independence and optimized platform code if you distribute by source, and the loading time shouldn't be too high if you use binary mode, though that may sacrifice some platform independence.

Maciek's project might help with the GPU side of OpenCL on the Pandora, but someone still has to code an OpenCL implementation for it.
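
By "thin wrapper" I mean something along these lines - a sketch of the host side in plain C against the standard OpenCL 1.0 API, which asks for a GPU device and falls back to the CPU if the runtime doesn't offer one. Illustrative only: error handling is mostly left out and none of this has been tried on Pandora hardware.

Code:
#include <stdio.h>
#include <CL/cl.h>

/* Kernel source is the same no matter which device ends up running it. */
static const char *src =
    "__kernel void scale(__global float *buf, float k) {\n"
    "    buf[get_global_id(0)] *= k;\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float data[N];
    size_t n = N;
    for (size_t i = 0; i < n; i++) data[i] = (float)i;

    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    /* Prefer a GPU device, fall back to the CPU if there is none. */
    cl_device_id dev;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Compile the kernel at run time for whichever device we got. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "scale", NULL);

    /* Copy the data in, run one work-item per element, read it back. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                n * sizeof(float), data, NULL);
    float factor = 2.0f;
    clSetKernelArg(kern, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kern, 1, sizeof(float), &factor);
    clEnqueueNDRangeKernel(q, kern, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, n * sizeof(float), data, 0, NULL, NULL);

    printf("data[10] = %f\n", data[10]);   /* expect 20.0 */
    return 0;
}

The application logic stays the same whichever device the runtime hands back; for the DSP side you would need a vendor runtime that exposes it as an OpenCL device, and as far as I know nothing like that exists for the OMAP3 yet.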
 
Just to clear things up - Pandora does not have SGX535.

Unit inside Pandora is SGX530 - which is (surprisingly) quite different in terms of capabilities.

Creating an OpenCL compiler on top of OpenGL [ES] would be very inefficient (and perhaps next to impossible). One thing that comes to mind is that OpenCL supports in-kernel synchronization, which has no direct (read: effective) correspondence in OpenGL.
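
To illustrate what I mean by in-kernel synchronization, here's a textbook-style work-group reduction in OpenCL C (sketch only, nothing Pandora-specific): each work-item copies one element into shared local memory, and barrier() forces the whole work-group to meet at that point before anyone touches a neighbour's value. A fragment shader has no equivalent of either the shared memory or the barrier.

Code:
/* Sketch: per-work-group sum using local memory and barrier().
   Assumes the local work size is a power of two. */
__kernel void block_sum(__global const float *in,
                        __global float *out,
                        __local float *scratch)
{
    size_t lid = get_local_id(0);
    scratch[lid] = in[get_global_id(0)];
    barrier(CLK_LOCAL_MEM_FENCE);            /* whole work-group syncs here */

    /* Tree reduction within the work-group. */
    for (size_t s = get_local_size(0) / 2; s > 0; s >>= 1) {
        if (lid < s)
            scratch[lid] += scratch[lid + s];
        barrier(CLK_LOCAL_MEM_FENCE);        /* ...and after every step */
    }

    if (lid == 0)
        out[get_group_id(0)] = scratch[0];   /* one partial sum per group */
}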

(...but don't take my word for it - I've been proven wrong before). B)
 
maciek_urbanski said:
Just to clear things up - Pandora does not have SGX535.

Unit inside Pandora is SGX530 - which is (surprisingly) quite different in terms of capabilities.
Good or bad surprise?
 
maciek_urbanski said:
ashdjones said:
Good or bad surprise?
Good one. Capability bits suggest that SGX530 has more features than SGX535. B)


But the 530 is able to do far fewer polygons per second than the 535.

EDIT: according to Wikipedia.
 