Direct (close-to-the-metal) open-source SGX driver


warmi said:
A waste of time imho..

Imagine trying to write code based on your API ... with 50% of features being in "experimental" state, half-implemented and basically based on guesswork ... that would make for some seriously stressful coding sessions…

Thanks, but no thanks, I will stick to OpenGL ES.

It's not quite true. Designers of HW create 'their view' of the graphics pipeline. It has custom 'stages' (most, but not all, programmable) and a custom feature set. And it's ready, done in HW, so there's no need to implement it.

Then there are APIs like Direct3D, OpenGL and OpenGL ES. Essentially, a driver for an API on a given HW architecture is a compiler that takes the state of the input graphics pipeline (OGL for example) and compiles it into the state of the HW's custom pipeline. This process is very non-trivial, and it's done for, well, compatibility with the standard.
A problem arises when the input API (OGL for example) has a smaller feature set than the HW itself.
Another problem arises if the compiler is poorly written (compilation time penalty, low code quality).
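
To illustrate the 'compiler' idea, here is a toy sketch (all types, fields and names below are invented for illustration - this is not the real OGL or SGX state):

```c
/* Hypothetical sketch of the "driver as a compiler" idea: take the state of
 * the API's pipeline and translate it into the HW's custom pipeline state.
 * Every type and field here is made up purely for illustration. */

typedef struct {                     /* what the API (e.g. OGL ES) exposes  */
    int         blend_enabled;
    int         depth_test_enabled;
    const char *vertex_shader_src;   /* shader source text                  */
} ApiPipelineState;

typedef struct {                     /* what the HW actually consumes       */
    unsigned int raster_ctrl_reg;    /* packed blend/depth bits             */
    const void  *shader_microcode;   /* compiled shader program             */
} HwPipelineState;

/* The "compiler": non-trivial in a real driver (shader compilation, state
 * validation, workarounds); trivial here just to show the direction. */
void compile_api_state(const ApiPipelineState *in, HwPipelineState *out)
{
    out->raster_ctrl_reg = (in->blend_enabled      ? 1u << 0 : 0u)
                         | (in->depth_test_enabled ? 1u << 1 : 0u);
    out->shader_microcode = 0; /* a real driver would compile the source */
}
```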

I suggest finding out how to program the HW pipeline and programming it directly, thus using all the features already available in the HW.

If somebody would like to use this information to write an OGL driver - be my guest, but that would not be me. :p
 
If it makes you happy, go for it! We would all benefit if you figure it out. Good luck.
 
maciek_urbanski,

Do you think that GPGPU will be possible?

I'm a fan of the PowerVR arch since their first graphics board for PC :), that's one of the reasons I love the Dreamcast ^_^

Thanks for trying. :wink:
 
Alphacore said:
Do you think that GPGPU will be possible?
I think so. But I'm talking about GPGPU operations performed via its internal graphics pipeline. I'm writing this because the pipeline is fixed in all GFX architectures I know of. Of course, parts of it can be disabled, but the GPU is not fully 'general purpose'. There are rumors that in DirectX 12 there will be a variable number of pipeline stages. As far as GPGPU goes, in DirectX 11 we have the compute shader (link).
But I digress...
SGX is described as being DirectX 10.1-compatible. For shaders that means dynamic flow control and arbitrary shader length. So yes - it will be possible.
But how fast would it be? That's another matter. For highly parallel operations - speedup should be significant (like in every GPGPU algorithm).

At the same time I have a distinct feeling that the DSP module would be a better match for this...
 
If everything is fine, be my guest and go for it! ;-)

But personally, as many people have pointed out, it is most likely that for a small % of new functions you will lose compatibility with OpenGL ES (a custom API - or do you want to implement something like DX 10, or just use OGL extensions?).

And I bet people at ImgTech have been spending years to tune their drivers...
So, yes, having fully and nicely documented source code for a driver would be nice.
But in realistic usage, I am not sure that your driver could compete in terms of performance against the original drivers.
(Not even talking about subtle bugs...) Even with "thinner" APIs.

But anyway, I bet it is not about writing something better at first, but about making something cool and having fun. If you feel it is the thing you want to do, just go for it!

Of course, all things mature very well over time if effort is constantly put in,
and any project that started as something that looked silly may end up as something really nice and usable.

Personally, if I had to start a project I would go in the following direction:
- Remake OGL completely.
--> This allows you to find bugs against the original drivers, compare performance, more easily find the registers and hardware internals, etc.

- Write a DX 10-like API set, where you can "store" states and which is lighter than OGL.

I would not be surprised if finding out how it works and writing a basic OGL required at least a year of work in your "hobby" time... One more year for the tuning. If you are alone...

Just my 2 cents.
 
maciek_urbanski said:
For highly parallel operations - speedup should be significant (like in every GPGPU algorithm).

But there are only two USSEs and each can only do 32bit operations (meaning 4-way SIMD only on 8bit data). Considering the clock rate (only up to 166MHz confirmed so far) this hardly looks like it'll compete with NEON on the Cortex-A8, much less the DSP.
 
Concerning GPGPU stuff,

I would bet that a 450 MHz DSP and 600 MHz NEON would get FAR more power
than a 150 MHz GPU, with all the hassle of texture formats, data transfer and hardware synchronization...

Just my 2 cents.
 
But there are only two USSEs and each can only do 32bit operations (meaning 4-way SIMD only on 8bit data)
Doesn't each USSE contain multiple 32bit SIMD Execution units?

Anyway, good luck with this, surely it would benefit the community if this information was available.
 
laxer3a said:
And I bet people at ImgTech have been spending years to tune their drivers...

I wouldn't count on that. I followed the discussion about the missing Linux driver for the MBX in Nokia's internet tablets. Some devs who came in touch with an ImgTech driver said it was crap. I don't know about the SGX, but it's a question of whether a good Linux driver is a priority for ImgTech.

Every road that leads to more insight into the PowerVR should be taken, in my opinion.
 
Let's see what we can deduce from tidbits all around the web:
  1. SGX @ 200MHz should have 1.2 gpix/s and 13.5 mpoly/s - source(link)
  2. SGX should have multiple unified processing units, support multiple HW threads - source(link)
  3. SGX's execution units support 16 HW threads each - source(link)
  4. SGX should have dedicated HW for texture access, pixel writeout, and tiling - source(link)
  5. SGX processes data 32 bits at a time and has floating-point registers - source(link)

From these numbers it seems that:

1.2*giga pixels in RGBA means 4.8*giga component values. Since a shader unit of the SGX can process 1 float at a time (it has 32-bit SIMD registers) it might mean:
  • ( 4.8*giga [values/s] )/( 200*mega [1/s] )/( 1 [float/register] ) = 24 execution units (if data processing is done on floats)
  • ( 4.8*giga [values/s] )/( 200*mega [1/s] )/( 4 [bytes/register] ) = 6 execution units (if data processing is done on SIMD bytes)
I'm guessing it's option 2 (6 execution units).

If it's in fact option 2, vertex processing (assuming a 2/3 vertex cache hit ratio, or a long triangle strip) gets ( 200*mega [1/s] )*( 6 [execution units] )/( 13.5*mega [vertex/s] ) = 88.8(8) execution-unit clocks per vertex. This includes vertex fetch, vertex shader execution, tiling calculation, and adding the triangle to the tiles it covers.
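
For anyone who wants to poke at the arithmetic, here is the same back-of-envelope calculation in C (it only re-derives the numbers from the published figures above, nothing here is measured):

```c
#include <stdio.h>

int main(void)
{
    /* Published (marketing) figures, taken at face value. */
    const double fill_rate_pix = 1.2e9;   /* pixels/s   */
    const double poly_rate     = 13.5e6;  /* vertices/s */
    const double clock_hz      = 200e6;   /* SGX clock  */

    /* 1.2 Gpix/s in RGBA -> 4.8 G component values per second. */
    const double values_per_sec = fill_rate_pix * 4.0;

    /* Option 1: one 32-bit float per register per clock. */
    printf("exec units (float):     %.1f\n", values_per_sec / clock_hz / 1.0);

    /* Option 2: 4 bytes per 32-bit SIMD register per clock. */
    const double units = values_per_sec / clock_hz / 4.0;
    printf("exec units (byte SIMD): %.1f\n", units);

    /* Compute budget per vertex under option 2. */
    printf("unit-clocks per vertex: %.2f\n", clock_hz * units / poly_rate);
    return 0;
}
```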

Because of the tiling architecture with an on-chip buffer, anti-aliasing should be almost free.

It seems that access to external memory will be slow (think textures, vertex buffers, index buffers, constant buffers, etc.), because each execution unit has 16 HW threads. Those threads are a latency-hiding mechanism - when something slow starts (a texture fetch, for example), the context is switched to the next thread.
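
A rough illustration of why that many threads helps (the latency and ALU-work numbers below are made up, just to show the mechanism):

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical numbers - actual SGX latencies are unknown.       */
    const int fetch_latency = 100;  /* clocks for one texture fetch   */
    const int alu_work      = 8;    /* clocks of ALU work per fetch   */
    const int hw_threads    = 16;   /* contexts per execution unit    */

    /* While one thread waits ~100 clocks, the other threads must
     * supply ~100 clocks of ALU work to keep the unit busy.          */
    int needed = fetch_latency / alu_work + 1;
    printf("threads needed to hide the latency: %d (HW has %d)\n",
           needed, hw_threads);
    return 0;
}
```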

Since the ARM page size is 4096 bytes, and here(link) we can read that SGX can map 4GB of RAM, it's a reasonable assumption that the SGX page directory has two levels (like in x86).
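
To make that guess concrete, a sketch of an x86-style two-level split (the 10/10/12 bit layout is an assumption, not confirmed SGX behaviour):

```c
#include <stdint.h>

/* Hypothetical two-level translation for a 32-bit (4 GB) virtual space with
 * 4096-byte pages, split x86-style: 10 directory bits, 10 table bits,
 * 12 offset bits. None of this is confirmed for the SGX MMU. */
static inline void split_gpu_address(uint32_t va,
                                     uint32_t *dir_idx,
                                     uint32_t *tbl_idx,
                                     uint32_t *offset)
{
    *dir_idx = (va >> 22) & 0x3FF;   /* 1024 page-directory entries */
    *tbl_idx = (va >> 12) & 0x3FF;   /* 1024 page-table entries     */
    *offset  =  va        & 0xFFF;   /* byte within the 4 KB page   */
}
```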

But we have to test it to see...
 
Those fill rate values are crazy, sorry. 1.2 gigapixels/sec is an outright lie. The real number is 400 megapixels/sec, the * 3 is given because they think that it saves you 3x in overdraw prevention. That they both advertise TBDR heavily AND claim a 3x fillrate is, in my opinion, quite dishonest. So the real numbers: 200MHz * 2 pixels per cycle = 400mpixels/sec. There are 2 USSEs, 2 execution units. Since it isn't stated anywhere that float colors with 32bit precision per component is used (which is just overkill, think of the memory bandwidth on this thing) and we KNOW that it has 4-way 8888 32bit SIMD we can probably throw out your first figure and just go with the second. Divide the dishonest 3 out and we get 2, or parallel 64bits, which at 200MHz is a lot worse than the 64bits SIMD you get with NEON at 600MHz+, or the 256bits VLIW you get with the DSP at 430MHz.

Also, you assume that the polygon throughput is limited by vertex shading (which is not mandatory), but actually as far as I'm aware it's limited by the binning required by TBDR.
 
Exophase said:
Oh man, you guys fell for the same trap I did.
Nope, we didn't.

Exophase said:
16 hardware threads doesn't mean 16 execution units. It means 16 fine-grained contexts that it can quickly switch out of. From all I could gather the USSEs do NOT have multiple execution units, they have one each.
Yes I know. Read the part about 'access to external memory' above.

Exophase said:
Also, those fill rate values are crazy, sorry. 1.2 gigapixels/sec is an outright lie.
This is not a lie. You should not accuse others of lying if you cannot prove they are. :p
But I agree - this value is throughput in a very unlikely scenario. Essentially they might benchmark a shader setting an R8G8B8A8 render target to a solid color... but that's how marketing numbers are generated.

Exophase said:
The real number is 400 megapixels/sec, the * 3 is given because they think that it saves you 3x in overdraw prevention.(...)
Please provide a link to some benchmarks.

Exophase said:
Since it isn't stated anywhere that float colors with 32bit precision per component is used (which is just overkill, think of the memory bandwidth on this thing) and we KNOW that it has 4-way 8888 32bit SIMD we can probably throw out your first figure and just go with the second.
Execution units often represent data internally in one unified format. In the majority of cases it's IEEE float. Even if the HW is able to perform SIMD on 2 half-floats (or 4 bytes) in parallel, it does not mean that the compiler will generate such code. And often it doesn't.
But I agree - 24 execution units is rather improbable.

...and yes - I know that these numbers should be taken with a grain of salt.

But let's base our 'guesswork' on published information. Building guesswork on guesswork seems without merit. :p
 
Sounds like you are gonna do just what these guys are doing for NVIDIA cards... render something and check what registers have changed...
http://nouveau.freedesktop.org/wiki/

I believe there is also a new trace functionality in the latest Linux kernel, so you will probably want to build it yourself. I think it exposes the hardware memory I/O trace via debugfs so you can read it with cat
http://nouveau.freedesktop.org/wiki/MmioTraceHowto

They have a utility for generating dumps as well, but I'm not sure it would be of use since you already seem to know how that works.

So basically what you are doing is writing the part of the driver that Gallium3D would sit on top of, potentially giving us a slew of 3D APIs.

@Exophase chill out man... emulators aren't necessarily easy to code or figure out either, but look how many of them are already ported to the Pandora. Also, from what I hear, some people happen to enjoy hacking hardware - if that's his thrill, by all means let him have it. Also check out this status matrix http://nouveau.freedesktop.org/wiki/FeatureMatrix - as you can see they have the 2D driver pretty much done and are actually working on the shaders now... and beyond that there is even a video on YouTube of the driver running OpenArena, so yeah, it is definitely doable. And to make it that much easier he only has to figure out 1 chip and not a whole bunch of them like the nouveau team - even though a lot of their code is shared, it is the little differences between the cards that throw a wrench in the works, like different registers to init or shut down the card. I don't claim to know how to do any of this myself, but I like to stay informed, being an NVIDIA owner X.x
 
cb88 said:
Sounds like you are gonna do just what these guys are doing for NVIDIA cards... render something and check what registers have changed...
http://nouveau.freedesktop.org/wiki/
Yup, that's the general idea. But I want to dump the entire memory visible via the SGX translation table too.
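
A minimal sketch of that 'render, then diff the registers' approach - the register window base and size below are placeholders, not the real SGX values (those would have to come from the OMAP TRM or from probing):

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Placeholder values - NOT the real SGX register window.            */
#define SGX_REG_BASE  0x00000000u   /* FIXME: physical base from TRM */
#define SGX_REG_SIZE  0x10000u      /* size of window to snapshot    */

/* Snapshot the register window via /dev/mem so two dumps (before and
 * after a GL call through the binary driver) can be diffed.          */
static int dump_regs(uint32_t *out)
{
    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd < 0)
        return -1;

    volatile uint32_t *regs = mmap(NULL, SGX_REG_SIZE, PROT_READ,
                                   MAP_SHARED, fd, SGX_REG_BASE);
    if (regs == MAP_FAILED) {
        close(fd);
        return -1;
    }

    for (size_t i = 0; i < SGX_REG_SIZE / 4; i++)
        out[i] = regs[i];

    munmap((void *)regs, SGX_REG_SIZE);
    close(fd);
    return 0;
}

int main(void)
{
    static uint32_t before[SGX_REG_SIZE / 4], after[SGX_REG_SIZE / 4];

    dump_regs(before);
    /* ... issue one GL ES call through the vendor driver here ...    */
    dump_regs(after);

    for (size_t i = 0; i < SGX_REG_SIZE / 4; i++)
        if (before[i] != after[i])
            printf("reg 0x%05zx: %08x -> %08x\n",
                   i * 4, before[i], after[i]);
    return 0;
}
```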

cb88 said:
I believe there is also a new trace functionality in the latest Linux kernel, so you will probably want to build it yourself. I think it exposes the hardware memory I/O trace via debugfs so you can read it with cat
http://nouveau.freedesktop.org/wiki/MmioTraceHowto
That's very handy, THX. :)
I'll merge it with the kernel release that comes with the Pandora. It will be the -omap1 branch, right?

cb88 said:
so basically what you are doing is writing the part of the driver that gallium3d would sit on top of potentially giving us a slew of 3d apis
Yes, but I hope I can program it directly, because most (small/embedded) 3D accelerators are no more complex to program than any 3D API (if there is no need for HW workarounds...).

THX, cb88. :)
 
Exophase said:
Those fill rate values are crazy, sorry. 1.2 gigapixels/sec is an outright lie. The real number is 400 megapixels/sec, the * 3 is given because they think that it saves you 3x in overdraw prevention. That they both advertise TBDR heavily AND claim a 3x fillrate is, in my opinion, quite dishonest. So the real numbers: 200MHz * 2 pixels per cycle = 400mpixels/sec. There are 2 USSEs, 2 execution units. Since it isn't stated anywhere that float colors with 32bit precision per component is used (which is just overkill, think of the memory bandwidth on this thing) and we KNOW that it has 4-way 8888 32bit SIMD we can probably throw out your first figure and just go with the second. Divide the dishonest 3 out and we get 2, or parallel 64bits, which at 200MHz is a lot worse than the 64bits SIMD you get with NEON at 600MHz+, or the 256bits VLIW you get with the DSP at 430MHz.

Also, you assume that the polygon throughput is limited by vertex shading (which is not mandatory), but actually as far as I'm aware it's limited by the binning required by TBDR.

Surely with such limited speed it would have some issues doing vertex geometry? For example, just transforming a single point to the screen would cost 16 multiply-adds (assuming a 4x4 matrix and a 4-element vector). Or does it have custom hardware to perform transformation?
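
For reference, the operation being counted - a plain C sketch of a 4x4 matrix times a 4-element vector, i.e. 16 multiply-accumulates per transformed vertex:

```c
/* 4x4 matrix * 4-element vector: 16 multiply-accumulate operations per
 * transformed vertex, before any lighting or pixel work is done. */
static void transform_vertex(const float m[4][4], const float in[4],
                             float out[4])
{
    for (int row = 0; row < 4; row++) {
        float acc = 0.0f;
        for (int col = 0; col < 4; col++)
            acc += m[row][col] * in[col];   /* one MAC per iteration */
        out[row] = acc;
    }
}
```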

I mean, even assuming it used 9 MACs per vertex (3x3 matrix), that would cap it at 44 million vertices before it did any additional transformation or pixel stuff, which would obviously burn more.

I guess maybe their polygon count is calculated by doing flat-shaded polygons only, with a basic fixed-function pipeline and triangle strips, which is a bit of a sucky way of calculating it because it's utterly unrealistic in the real world.
 
too much guesswork in this thread and too little reading of related documentation.

here's Intel System Controller Hub (Intel SCH) datasheet, which, as you can guess, features an SGX part @200MHz (AKA intel GMA500). it gives answers to a few of the questions circulating here. check out chapters:

9.1.1 3D Core Key Features
9.1.2 Shading Engine Key Features
9.1.5 Unified Shader

@Exophase
Divide the dishonest 3 out and we get 2, or parallel 64bits, which at 200MHz is a lot worse than the 64bits SIMD you get with NEON at 600MHz+, or the 256bits VLIW you get with the DSP at 430MHz
your last post was pretty much to the point until you made the above comparison.

sorry but the above makes no sense. on one hand you have a part capable of hiding latencies like nobody's business (that's what GPUs do in general - they nearly eliminate data-flow latencies for a certain class of computational tasks). on the other you have a regular SIMD cpu extension, generally orders of magnitude less-efficient at what the GPU does, and a VLIW unit which i admit to know nothing about, but i would be utterly surprised if it could be as flexible and as good at hiding latencies as the GPU is, without requiring insane levels of manual data micro-management. the mere fact that TI threw an SGX in there along with their DSP should tell you something.
 
maciek_urbanski said:
Nope, we didn't.
Yes I know. Read the part about 'access to external memory' above.

I wasn't referring to you, but you can see why I deleted that part.

maciek_urbanski said:
This is not a lie. You should not accuse others of lying if you cannot prove they are. :p
But I agree - this value is throughput in a very unlikely scenario. Essentially they might benchmark a shader setting an R8G8B8A8 render target to a solid color... but that's how marketing numbers are generated.

Yes it is a lie, it's ImgTech's lie. A lie that you weren't aware of, apparently.

maciek_urbanski said:
Please provide a link to some benchmarks.

http://www.gp32x.de/board/index.php?sh ... ntry617065

dmdm is a PowerVR rep, so please drop this nonsense about me building "guesswork on guesswork", you're being all kinds of cocky.

cb88 said:
@Exophase chill out man... emulators aren't necessarily easy to code or figure out either, but look how many of them are already ported to the Pandora...

Great, porting emulators is as difficult as reverse engineering GPUs now.

blu said:
your last post was pretty much to the point until you made the above comparison.

sorry but the above makes no sense. on one hand you have a part capable of hiding latencies like nobody's business (that's what GPUs do in general - they nearly eliminate data-flow latencies for a certain class of computational tasks). on the other you have a regular SIMD cpu extension, generally orders of magnitude less-efficient at what the GPU does,

Orders of magnitude? Would you like to tell me what a 32bit SIMD scalar processor can do in one cycle that's ORDERS OF MAGNITUDE more efficient than the 2-way FPU SIMD that NEON can do? Yes, the hardware threads hide latency, but there are other mechanisms to hide latency even on CPUs, such as prefetching. They just take more work. On the other hand, the CPU has 256KB of L2 cache that is quite fast, although we don't know how much cache the GPU has.

blu said:
and a VLIW unit which i admit to know nothing about, but i would be utterly surprised if it could be as flexible and as good at hiding latencies as the GPU is,

It's much more flexible. Go read the documentation. It can issue to 8 execution units per cycle, which basically addresses 4 units with 2x redundancy each, but it can do many similar ALU operations across nearly all of them. And of course it also has prefetching and decent caches. I'd like to know why you think the USSEs have such an amazing instruction set. 16-way threading is nice, but the USSEs were obviously made to be scaled, with the SGX 530/535 being roll-out parts. The newer SGXs already have more USSEs.

blu said:
without requiring insane levels of manual data micro-management. the mere fact that TI threw an SGX in there along with their DSP should tell you something.

OR it could be that the highest end PowerVR chip available has shaders anyway, or it could be that you can't do pixel shading using a DSP... take your pick? Your sentence suggests that the SGX is only good for its shaders. I entirely expect that some people will use NEON for transformation and lighting, especially if they want to maximize pixel shader computational throughput.
 
blu said:
here's Intel System Controller Hub (Intel SCH) datasheet, which, as you can guess, features an SGX part @200MHz (AKA intel GMA500).(...)
That's a fantastic find, blu. :)

Let's dissect it a bit.

  1. At 9.1.1 '3D Core Key Features' there is information that it has a fill rate of 2 pixels/clock.
  2. At 9.1.2. 'Shading Engine Key Features':
    • Multi-threaded with four concurrently running threads
    • 2048 32-bit registers
  3. At 9.1.5 'Unified Shader' there are these fragments:
    • The unified shader core also has a task and thread manager which tries to maintain
      maximum performance utilization by using a 16-deep task queue to keep the 16
      threads full.
    • The unified store contains 16 banks of 128 registers.

So it has 4 execution engines, each with 4 HW threads. Each thread has a separate 128-entry register file.

Looks nice, but I'm wondering how they arrived at this 1.2 gpixel/s fill rate...
 
maciek_urbanski said:
Let's dissect it a bit.

  1. At 9.1.1 '3D Core Key Features' there is information that it has a fill rate of 2 pixels/clock.
  2. At 9.1.2. 'Shading Engine Key Features':
    • Multi-threaded with four concurrently running threads
    • 2048 32-bit registers
  3. At 9.1.5 'Unified Shader' there are these fragments:
    • The unified shader core also has a task and thread manager which tries to maintain
      maximum performance utilization by using a 16-deep task queue to keep the 16
      threads full.
    • The unified store contains 16 banks of 128 registers.

So it has 4 execution engines, each with 4 HW threads. Each thread has a separate 128-entry register file.

Looks nice, but I'm wondering how they arrived at this 1.2 gpixel/s fill rate...
well, my version is a bit different.

the document mentions 'two pipelines', which i believe constitute:

2 shader units, 2 TMUs, 2 ROPs, configured as 1x shader + 1x TMU + 1x ROP per 'pipeline'.

those 2 shader units are SIMD, and can be fed from 4 thread contexts, hence the mention of '4 concurrent threads' (9.1.2 Shading Engine Key Features), which threads are picked from a queue of 16 threads by the scheduler, hence the 16 hw contexts maintained, each with 128 (32-bit) registers, amounting to 2048 registers in total.

the 1.2 gpixel/s is, as already mentioned by Exophase, the result of multiplying by 3 to account for TBDR's efficiency at overdraw (3 is a reasonable overdraw to consider), but that brings nothing to the discussion at hand.
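
Putting the datasheet and marketing numbers side by side (plain arithmetic, no new information):

```c
#include <stdio.h>

int main(void)
{
    /* Figures quoted in this thread / the SCH datasheet.        */
    const double clock_hz  = 200e6;  /* SGX clock                */
    const double pix_clock = 2.0;    /* fill rate, pixels/clock  */
    const double overdraw  = 3.0;    /* assumed overdraw factor  */

    /* 200 MHz * 2 pix/clk = 400 Mpix/s; * 3 overdraw = 1.2 Gpix/s */
    printf("raw fill rate:        %.0f Mpix/s\n",
           clock_hz * pix_clock / 1e6);
    printf("'effective' (x%.0f):   %.0f Mpix/s\n",
           overdraw, clock_hz * pix_clock * overdraw / 1e6);

    /* 16 banks of 128 registers = 2048 32-bit registers total.   */
    printf("unified store:        %d registers\n", 16 * 128);
    return 0;
}
```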
 