SGX OpenGL ES 2.0 Application Development Recommendations


Exophase

I saw darkblu mention this before in one topic, but I haven't seen a link to it anywhere. Please delete/merge this topic if I'm wrong. At the very least, I've added it to the relevant page on the wiki (http://pandorawiki.org/OpenGLES_On_the_Pandora)

http://www.imgtec.com/factsheets/SDK/POWERVR%20SGX.OpenGL%20ES%202.0%20Application%20Development%20Recommendations.1.1f.External.pdf

I found this guide very informative on how to use the SGX efficiently.

Section 6.3 in particular relates to something I know a number of people here have been wondering about:

6.3. Texture Upload

When you upload textures to the OpenGL ES driver via glTexImage2D, the input data is usually in
linear scanline format. Internally, though, POWERVR SGX uses a twiddled layout (i.e. following a
plane-filling curve) to greatly improve memory access locality when texturing. Because of this different
layout uploading textures will always require a somewhat expensive reformatting operation,
regardless of whether the input pixel format exactly matches the internal pixel format or not.
For this reason we recommend that you upload all required textures at application or level start-up
time in order to not cause any framerate dips when additional textures are uploaded later on.
You should especially avoid uploading texture data mid-frame to a texture object that has already
been used in the frame.

This means that you'll never be able to do texture uploads that don't cost CPU time, for instance when scaling a virtual 2D framebuffer. I also doubt that there's any way to upload pre-swizzled data. So we're at the mercy of the driver to convert + copy the framebuffer, which may or may not be heavily optimized for the platform, i.e. using NEON. If you need to do simple integer scaling w/o filtering then you might be better off doing it in software with NEON, depending on the scale ratio. If you're scaling down by an integer amount then you'll always be better off doing it that way.
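To illustrate the software path, here's a rough sketch of a 2x nearest-neighbour upscale of one 16bpp scanline with NEON intrinsics (the function name and the 16bpp / width-multiple-of-8 assumptions are just for the example):

Code:
#include <arm_neon.h>
#include <stdint.h>

/* Scale one 16bpp scanline to 2x width, nearest neighbour.
   'width' is the source width and is assumed to be a multiple of 8. */
static void scale_line_2x_16bpp(const uint16_t *src, uint16_t *dst, int width)
{
    for (int x = 0; x < width; x += 8) {
        uint16x8_t   p = vld1q_u16(src + x);     /* load 8 source pixels      */
        uint16x8x2_t d = vzipq_u16(p, p);        /* duplicate each pixel      */
        vst1q_u16(dst + 2 * x,     d.val[0]);    /* store first 8 output px   */
        vst1q_u16(dst + 2 * x + 8, d.val[1]);    /* store second 8 output px  */
    }
}
/* Vertical 2x is then just writing (or DMA-copying) each output line twice. */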

For fullscreen scaling the OMAP3530's built-in scaler will be the best option, but for partial scaling things aren't so clear cut. I'm thinking that delegating this to the DSP will still be a good option, since it can be done without taking any CPU time. It'd still take bus bandwidth, though - in this sense, the built-in scaler would be the best-performing option since the operation is merged with framebuffer I/O.
 
Exophase said:
For fullscreen scaling the OMAP3530's built-in scaler will be the best option, but for partial scaling things aren't so clear cut. I'm thinking that delegating this to the DSP will still be a good option, since it can be done without taking any CPU time. It'd still take bus bandwidth, though - in this sense, the built-in scaler would be the best-performing option since the operation is merged with framebuffer I/O.

Maybe I'm missing what you're trying to accomplish, but once you've uploaded the texture, why wouldn't you just scale it using OpenGL? Upload the texture once, mapped to a quad or rectangle, then scale the rectangle via glScale or something similar.
 
Yes, you are missing the point. I'm referring to applications that render 2D content in software at a high rate, i.e. 60 frames per second. Virtually all emulators fall under this category. Emulators need a way to go to 800x480 (or some aspect-ratio-correct subset thereof) from a substantially smaller resolution. Some emulators will have the headroom to do this: it's certainly a straightforward option. Others won't. All software will benefit from wasting as little CPU time as possible in order to maximize battery life.
 
If a custom driver were made then it'd be possible to upload textures pre-swizzled via an extension. I have no idea if the hardware is capable of rendering non-swizzled textures or not. I would certainly not expect a custom or extensible (open source) OpenGL driver to ever surface.

For more on this topic it's probably worth checking this out: http://imaginationtec.net/forum/forum_posts.asp?TID=232&PN=1

At 256x256x32bpp you already don't have enough CPU time for 60Hz output. So you can see this is completely useless, even for emulators/games with a very low CPU footprint. I suppose 16bpp would help things a lot, but it'd still be a very large expense.
 
Exophase said:
If a custom driver were made then it'd be possible to upload textures pre-swizzled via an extension. I have no idea if the hardware is capable of rendering non-swizzled textures or not. I would certainly not expect a custom or extensible (open source) OpenGL driver to ever surface.
pre-swizzled texture uploads would be a handy option, but i would not expect that to happen anytime soon on a handheld near you - you need either a fully-OSS driver edge that you could tinker with, or a way to somehow plug your API in between the workings of a proprietary edge, which might prove to be impossible for any practical purposes. i think Maciek could throw in his 2 eurocents here, if he's still reading this board.

generally, re the ability of the hw to use linear textures: every GPU that supports render-targets (i.e. FBOs) should have provisions for handling such layouts. that is, it should be able to internally 're-route' linear updates into its native swizzled format. for FBOs in particular, that could be implemented at various stages - at the ROPs stage, as an intermediate step before tex-sourcing an image buffer, etc. but that should happen internally, lest the whole concept of FBOs goes out the window.

Exophase said:
At 256x256x32bpp you already don't have enough CPU time for 60Hz output. So you can see this is completely useless, even for emulators/games with a very low CPU footprint. I suppose 16bpp would help things a lot, but it'd still be a very large expense.
FWIW, during my musings with the iphone (read: ipod touch) i noticed the following:

i was doing lazy (read: sloppy) on-demand texture uploads, where i'd take a 256x256 24bit png source image, pass it through the system codecs to obtain raw image data, and subsequently upload that as a 32bit texture. all that would occur once per N frames, at the beginning of the frame, before i draw whatever uses the texture.

on the original ipod touch (412MHz cpu, guessed ~50MHz mbx) that approach costs the loss of a frame (from the original 60 fps). on the ipod touch 2G, though (533MHz cpu, guessed 66MHz mbx), that passes smoothly, without a hiccup.

i think on the pandora we may still be able to do a per-frame tex upload @ 60fps, given the emu leaves some spare cpu time, and our source images are in the 256x256x24 vicinity.
 
There's a neat little trick to get round this on the MBX which I suspect will also work on the SGX: Bind the texture to a pbuffer. :)

You never have to make a context current on the pbuffer, so there's no context-switching overhead or anything. It just seems to force the pixel data to be stored as linear RGB. Of course it will be slower to render, but if you're uploading a new texture each frame it's well worth it. I was seeing glTexImage2D speed up by ~600% if I bound it to a pbuffer.

If anyone's got a pandora prototype, it might be worth just checking you get the same improvements on the SGX.
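For reference, the setup being described looks roughly like this (standard EGL attributes; dpy and tex are assumed to be an already-initialized EGLDisplay and a generated texture object, and whether this really changes the storage layout is up to the driver):

Code:
/* Choose a config whose pbuffers can be bound to an RGB texture. */
EGLint cfg_attrs[] = {
    EGL_SURFACE_TYPE,        EGL_PBUFFER_BIT,
    EGL_BIND_TO_TEXTURE_RGB, EGL_TRUE,
    EGL_NONE
};
EGLConfig cfg;
EGLint n;
eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &n);

/* Create a 512x512 pbuffer whose colour buffer can back a GL_TEXTURE_2D. */
EGLint pb_attrs[] = {
    EGL_WIDTH, 512, EGL_HEIGHT, 512,
    EGL_TEXTURE_FORMAT, EGL_TEXTURE_RGB,
    EGL_TEXTURE_TARGET, EGL_TEXTURE_2D,
    EGL_NONE
};
EGLSurface pbuf = eglCreatePbufferSurface(dpy, cfg, pb_attrs);

/* Bind the pbuffer's colour buffer to the currently bound texture object;
   subsequent uploads to this texture are the part that reportedly got faster. */
glBindTexture(GL_TEXTURE_2D, tex);
eglBindTexImage(dpy, pbuf, EGL_BACK_BUFFER);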
 
darkblu said:
generally, re the ability of the hw to use linear textures: every GPU that supports render-targets (i.e. FBOs) should have provisions for handling such layouts. that is, it should be able to internally 're-route' linear updates into its native swizzled format. for FBOs in particular, that could be implemented at various stages - at the ROPs stage, as an intermediate step before tex-sourcing an image buffer, etc. but that should happen internally, lest the whole concept of FBOs goes out the window.

Good point. Then I wonder if it's possible to upload an FBO texture in OGL ES2? Of course that's semantically nonsense but who knows.

Maybe PBOs operate non-swizzled too?

darkblu said:
FWIW, during my musings with the iphone (read: ipod touch) i noticed the following:

i was doing lazy (read: sloppy) on-demand texture uploads, where i'd take a 256x256 24bit png source image, pass it through the system codecs to obtain raw image data, and subsequently upload that as a 32bit texture. all that would occur once per N frames, at the beginning of the frame, before i draw whatever uses the texture.

on the original ipod touch (412MHz cpu, guessed ~50MHz mbx) that approach costs the loss of a frame (from the original 60 fps). on the ipod touch 2G, though (533MHz cpu, guessed 66MHz mbx), that passes smoothly, without a hiccup.

i think on the pandora we may still be able to do a per-frame tex upload @ 60fps, given the emu leaves some spare cpu time, and our source images are in the 256x256x24 vicinity.

Do you think you gained any speed from uploading textures without alpha over uploading textures with alpha? That the poster I referenced was getting such slow uploads makes me question the quality of the OGL ES2 drivers for the OMAP3530. The operation should be entirely CPU-driven on a shared-memory architecture, and the OMAP3530's CPU should be much faster than the iPhone 2G's. It'd especially be faster if they used NEON. In fact, if the swizzling is just tiling then it could use the DMA engine, since it has 2D blits, and the entire process could be made asynchronous, taking out any real CPU load at all. Do you know anything about the SGX's swizzle format?

I still think we should look into other options where possible, though. I still think the DSP could scale up an arbitrarily sized (smaller) image to 800x480 with bilinear filtering at 60Hz. It'd take some very tight coding, though. 640x480 would gain a bit more as well. Zero copying would be necessary; the virtual framebuffer would just have to be flushed out of cache, which would happen anyway if the cache region were set as non-write-allocating or at least write-through. This is assuming that the DSP bridge protocol allows for sharing memory directly.

Or we could hack around the SGX driver. Actually, I think this could work and it wouldn't be very hard either, if you:

- Make a render to texture FBO.
- Get the SGX to render something that has a very recognizable pattern in a way such that you can easily determine where it starts.
- Scan physical memory until you find the pattern.
- mmap it to the user's address space.
- Never actually use that render target again, and when you want to have it draw the texture do something to tell the SGX to flush its texture cache (I don't have the faintest idea what would cause this). This last stage is really moot if you're not having the SGX do anything but update your 2D surfaces; so long as they're all bigger than the texture cache they'll perfectly thrash it anyway.

What do you think? Hackish enough? It'd require basically no reverse engineering of the SGX. It could even be done from userspace if /dev/mem or /dev/ram is available and you're root.

Oh yeah, and you'd want two of them to double buffer and swap the texture you're using. That'd solve any synchronization issues.
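To make the scanning step concrete, here's a very rough sketch of what it could look like, assuming root access and a kernel that exposes /dev/mem (the pattern, window and sizes are placeholders):

Code:
#define _GNU_SOURCE            /* for memmem() */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Scan a window of physical memory for the pattern we rendered into the
   render target and return a userspace mapping of it, or NULL if not found.
   phys_start is assumed to be page-aligned. */
static void *map_render_target(off_t phys_start, size_t window,
                               const void *pattern, size_t pat_len,
                               size_t buf_len)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) return NULL;

    void *win = mmap(NULL, window, PROT_READ, MAP_SHARED, fd, phys_start);
    if (win == MAP_FAILED) { close(fd); return NULL; }

    void *hit = memmem(win, window, pattern, pat_len);
    void *buf = NULL;
    if (hit) {
        off_t phys = phys_start + ((char *)hit - (char *)win);
        phys &= ~((off_t)sysconf(_SC_PAGESIZE) - 1);      /* page-align */
        buf = mmap(NULL, buf_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, phys);
        if (buf == MAP_FAILED) buf = NULL;
    }
    munmap(win, window);
    close(fd);
    return buf;
}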
 
Exophase said:
Good point. Then I wonder if it's possible to upload an FBO texture in OGL ES2? Of course that's semantically nonsense but who knows.
actually, it's a viable scenario.

as the API does not forbid textures from being client-updated once they've been associated with an FBO, one can bind a texture to an FBO colour attachment, potentially render to it once (and use it, just to make sure it's what we think it is), and then measure a bunch of glTex*Image calls onto it.
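something along these lines, in GLES2 terms (now_ms() is just a hypothetical timing helper, and the 256x256 RGBA size is arbitrary):

Code:
#include <GLES2/gl2.h>

double now_ms(void);                         /* hypothetical timer helper */

static GLubyte pixels[256 * 256 * 4];        /* client-side source image  */

double measure_upload_to_fbo_texture(void)
{
    GLuint tex, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);        /* allocate storage */

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);        /* tex is now a render target */
    /* ... render into it once, then draw a quad sourcing it, just to be sure ... */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    /* now measure client-side uploads to the very same texture object */
    double t0 = now_ms();
    for (int i = 0; i < 100; i++)
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glFinish();                                           /* wait for the driver */
    return (now_ms() - t0) / 100.0;
}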

Exophase said:
Maybe PBOs operate non-swizzled too?
i've never used PBOs before, but i just skimmed through the specification. it's strictly a buffer-control mechanism of the type the other GL buffer objects are, and its major promise is one of DMA friendliness. unfortunately, i don't see what extras PBOs would give you on a shared-mem platform over bog-standard tex objects.

Exophase said:
Do you think you gained any speed from uploading textures without alpha over uploading textures with alpha? That the poster I referenced was getting such slow uploads makes me question the quality of the OGL ES2 drivers for the OMAP3530. The operation should be entirely CPU-driven on a shared-memory architecture, and the OMAP3530's CPU should be much faster than the iPhone 2G's. It'd especially be faster if they used NEON. In fact, if the swizzling is just tiling then it could use the DMA engine, since it has 2D blits, and the entire process could be made asynchronous, taking out any real CPU load at all. Do you know anything about the SGX's swizzle format?

I still think we should look into other options where possible, though. I still think the DSP could scale up an arbitrarily sized (smaller) image to 800x480 with bilinear filtering at 60Hz. It'd take some very tight coding, though. 640x480 would gain a bit more as well. Zero copying would be necessary; the virtual framebuffer would just have to be flushed out of cache, which would happen anyway if the cache region were set as non-write-allocating or at least write-through. This is assuming that the DSP bridge protocol allows for sharing memory directly.

Or we could hack around the SGX driver. Actually, I think this could work and it wouldn't be very hard either, if you:

- Make a render to texture FBO.
- Get the SGX to render something that has a very recognizable pattern in a way such that you can easily determine where it starts.
- Scan physical memory until you find the pattern.
- mmap it to the user's address space.
- Never actually use that render target again, and when you want to have it draw the texture do something to tell the SGX to flush its texture cache (I don't have the faintest idea what would cause this). This last stage is really moot if you're not having the SGX do anything but update your 2D surfaces; so long as they're all bigger than the texture cache they'll perfectly thrash it anyway.

What do you think? Hackish enough? It'd require basically no reverse engineering of the SGX. It could even be done from userspace if /dev/mem or /dev/ram is available and you're root.

Oh yeah, and you'd want two of them to double buffer and swap the texture you're using. That'd solve any synchronization issues.
sounds like a violent plan, so it could just as well work ; )

one remark, though: as the texture may still be kept in a swizzled image buffer (as FBOs may still take advantage of some on-the-fly swizzle mechanism), the spotting of the buffer may not be as easy as it sounds. i know Maciek used similar techniques for spotting other types of storage buffers, but those were more trivial, from what i recall.

but first and foremost, if i were doing that, i'd try to come up with a more-or-less precise performance chart of standard texture upload speeds on the platform (per driver version). before we decide something needs deep optimisation, we need to know what our current standing is. like you said, saving on the source pixel format could give you the needed headroom. and nope, i haven't investigated that; i needed those textures as 32bit uncompressed for normal map + gloss mask.
 
Exophase said:
Do you know anything about the SGX's swizzle format?

from what I understood, (but I didn't have time to test it), the swizzling is not tile-based like the usual NxN tiles
(4x4 tile here for my example, 16x16 texture)
Code:
  xxxx xxxx xxxx xxxx
  +0   +10  +20  +30
y 0123 0123 ...
y 4567 4567
y 89AB 89AB
y CDEF CDEF

  +40  +50  +60  +70
y ...
  +80  +90  +A0  +B0
y ...
  +C0  +D0  +E0  +F0
y ...

that most GPUs I know use, but instead it goes like this:

Code:
  +0   +10  +40  +50
  0145
  2367
  89CD
  ABEF
  +20  +30  +60  +70
  +80  +90  +C0  +D0
  +A0  +B0  +E0  +F0

where the address bits are interleaved

instead of "tile" addressing with AxB tiles of NxN

Code:
     /--- tile index part  ----/  /--inside tile part-/
(msb) y5 y4 y3 y2 x5 x4 x3 x2      y1 y0 x1 x0 (lsb)

you have texels addressed as:

Code:
(msb) y5 x5 y4 x4 y3 x3 y2 x2 y1 x1 y0 x0 (lsb)

so the address generation is not really CPU-friendly
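for reference, that interleaved addressing is a Morton/Z-order index; a sketch of generating it on the CPU with the usual bit-spreading trick (assuming square power-of-two textures):

Code:
#include <stdint.h>

/* Spread the low 16 bits of v so they occupy the even bit positions. */
static uint32_t part1by1(uint32_t v)
{
    v &= 0x0000ffff;
    v = (v | (v << 8)) & 0x00ff00ff;
    v = (v | (v << 4)) & 0x0f0f0f0f;
    v = (v | (v << 2)) & 0x33333333;
    v = (v | (v << 1)) & 0x55555555;
    return v;
}

/* Texel index for the layout above: ... y2 x2 y1 x1 y0 x0 (lsb). */
static uint32_t morton_index(uint32_t x, uint32_t y)
{
    return (part1by1(y) << 1) | part1by1(x);
}

/* Reorder a linear WxW 16bpp image into this swizzled layout. */
static void swizzle_16bpp(const uint16_t *linear, uint16_t *twiddled, uint32_t w)
{
    for (uint32_t y = 0; y < w; y++)
        for (uint32_t x = 0; x < w; x++)
            twiddled[morton_index(x, y)] = linear[y * w + x];
}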
 
Exophase said:
Or we could hack around the SGX driver. Actually, I think this could work and it wouldn't be very hard either, if you:

- Make a render to texture FBO.
- Get the SGX to render something that has a very recognizable pattern in a way such that you can easily determine where it starts.
- Scan physical memory until you find the pattern.
- mmap it to the user's address space.
- Never actually use that render target again, and when you want to have it draw the texture do something to tell the SGX to flush its texture cache (I don't have the faintest idea what would cause this). This last stage is really moot if you're not having the SGX do anything but update your 2D surfaces; so long as they're all bigger than the texture cache they'll perfectly thrash it anyway.

it's a recipe for disaster.

you would also get a few CPU cache and texture cache incoherencies.
it's surprising how long a single cache line can stay dirty while all the surrounding ones have been recycled.

plus the PowerVR usually does deferred rendering, meaning it first grabs all the geometry information and processes it while you're doing the next frame.

this is needed as the GPU is a tile-based deferred renderer and needs to build a list of all the triangles matching each tile first.

so you would be updating the texture for frame #2 while the GPU is still in the middle of rendering frame #1.

the OpenGL driver actually hides this by creating a 2nd texture when you update a texture in-frame so it can render the previously captured frame correctly.
if you update too many textures you might run out of texture memory, or the OpenGL driver will have to flush and wait to run non-deferred at a huge performance penalty.

this also means that in OpenGL ES on PowerVR, updating only part of a texture creates a duplicate of the whole texture.

it's recommended to create and manage 2 textures yourself if you're going to update only parts of them on each frame; it might even be faster to re-upload the whole thing than just part of it.
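a rough sketch of that manual double-buffering (the sizes, names and 16bpp format are just placeholders):

Code:
#include <GLES2/gl2.h>

static GLuint tex[2];      /* two backing textures for the 2D frame */
static int    cur = 0;

static void init_frame_textures(void)
{
    glGenTextures(2, tex);
    for (int i = 0; i < 2; i++) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 256, 0,
                     GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);   /* allocate once */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }
}

/* Per frame: upload into the texture the GPU is NOT still reading from. */
static void upload_frame(const void *pixels)
{
    cur ^= 1;
    glBindTexture(GL_TEXTURE_2D, tex[cur]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 256,
                    GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pixels);
    /* ... then draw the fullscreen quad sourcing tex[cur] ... */
}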
 
Tom Cooksey said:
There's a neat little trick to get round this on the MBX which I suspect will also work on the SGX: Bind the texture to a pbuffer. :)

You never have to make a context current on the pbuffer, so there's no context-switching overhead or anything. It just seems to force the pixel data to be stored as linear RGB. Of course it will be slower to render, but if you're uploading a new texture each frame it's well worth it. I was seeing glTexImage2D speed up by ~600% if I bound it to a pbuffer.

If anyone's got a pandora prototype, it might be worth just checking you get the same improvements on the SGX.

this makes a lot of sense, I wouldn't be surprised if it behaved the same way on the Pandora.

however, one should only use this when the texture is updated on every frame and/or is used upright on screen (not rotated and not flipped: 3D billboards (sprites)), as any other orientation will be unfriendly to memory access / caching.

textures that are uploaded/updated rarely and used often should still be uploaded using glTexImage2D without pbuffer.

(Tom: I know you know this, I'm mentioning it for others so they don't bind EVERYTHING to pbuffers :rolleyes: )
 
Stephane Hockenhull said:
Tom Cooksey said:
There's a neat little trick to get round this on the MBX which I suspect will also work on the SGX: Bind the texture to a pbuffer. :)

You never have to make a context current on the pbuffer, so there's no context-switching overhead or anything. It just seems to force the pixel data to be stored as linear RGB. Of course it will be slower to render, but if you're uploading a new texture each frame it's well worth it. I was seeing glTexImage2D speed up by ~600% if I bound it to a pbuffer.

If anyone's got a pandora prototype, it might be worth just checking you get the same improvements on the SGX.

this makes a lot of sense, I wouldn't be surprised if it behaved the same way on the Pandora.
ok, i was originally sceptical about this as i did not think we'd be actually getting pbuffers on the pandora, but i've just changed my mind. sounds like a solution.

*note to self: i should really start paying more attention to EGL*
 
Stephane Hockenhull said:
it's a recipe for disaster.

you would also get a few CPU cache and texture cache incoherencies.

Read what I said about the GPU's texture cache. You'd have to flush CPU cache no matter what you did, if updating textures.

Stephane Hockenhull said:
it's surprising how long a single cache line can stay dirty while all the surrounding ones have been recycled.

Nope, that's not how caches work. If you load something through the cache that's larger than the cache itself, you will flush everything in it, guaranteed.

I have sincere doubts that the SGX has a texture cache greater than the size of the framebuffer.

Stephane Hockenhull said:
plus the PowerVR usually does deferred rendering, meaning it first grabs all the geometry information and processes it while you're doing the next frame.

this is needed as the GPU is a tile-based deferred renderer and needs to build a list of all the triangles matching each tile first.

so you would be updating the texture for frame #2 while the GPU is still in the middle of rendering frame #1.

the OpenGL driver actually hides this by creating a 2nd texture when you update a texture in-frame so it can render the previously captured frame correctly.
if you update too many textures you might run out of texture memory, or the OpenGL driver will have to flush and wait to run non-deferred at a huge performance penalty.

this also means that in OpenGL ES on PowerVR, updating only part of a texture creates a duplicate of the whole texture.

it's recommended to create and manage 2 textures yourself if you're going to update only parts of them on each frame; it might even be faster to re-upload the whole thing than just part of it.

Did you bold enough? This is why I said to double buffer the textures. If necessary, triple buffer instead. What is the problem exactly?

I'm sure pbuffers will offer some performance benefit over nothing, but I'm almost 100% certain it won't be zero cost. At the very least it'll probably involve a copy.
 
Exophase said:
Stephane Hockenhull said:
it's surprising how long a single cache line can stay dirty while all the surrounding ones have been recycled.

Nope, that's not how caches work. If you load something through the cache that's larger than the cache itself, you will flush everything in it, guaranteed.

no, it's not guaranteed.

it depends on the associativity of the cache and its cache line recycling algorithm.
it could be least recently used, sequential, or pseudo-random.

I doubt directly updating a texture, then reading an amount of memory X times larger than the cache to somehow force it to flush regardless of recycling algo will be faster than the OpenGL call using the correct cache flush instructions.

reading memory often requires a refresh cycle, writing doesn't.
on some ARM systems writing is 2.5x faster than reading.
the OpenGL call reading the texture (which is likely still in the cache) and writing it to video ram + correct flush will be faster than dummy-reading an equal amount of memory without absolute guarantee of coherency.
 
Stephane Hockenhull said:
no, it's not guaranteed.

it depends on the associativity of the cache and its cache line recycling algorithm.
it could be least recently used, sequential, or pseudo-random.

Okay, I thought about it more and will admit that it does depend on the replacement policy, but only with pseudo-random replacement can it fail. And it'd need some fairly bad luck if what you're thrashing with is enough times larger than the cache.

Stephane Hockenhull said:
I doubt directly updating a texture, then reading an amount of memory X times larger than the cache to somehow force it to flush regardless of recycling algo will be faster than the OpenGL call using the correct cache flush instructions.

No, you don't understand... The very act of rendering it will have to touch every byte in the texture, which will most likely be much larger than the cache - 320x240 at 2 bytes per pixel would be 150KB - so long as the reading is roughly sequential. Say the cache is 32KB and the replacement strategy is not random. By the time it finishes rendering, the last 32KB of the texture is what will be in the cache. When it begins rendering again everything will be fresh.

Stephane Hockenhull said:
reading memory often requires a refresh cycle, writing doesn't.

It depends on the cache, some are configured as write allocate, some aren't.. but I don't see the relevance...

Stephane Hockenhull said:
on some ARM systems writing is 2.5x faster than reading.

What does that have to do with this? On the ARM you write to the cache, and you configure the L2 to be non-write-allocating for those pages (something I meant to say in the original post) - however, I believe it is write-through by default, so as long as there's a certain amount of time and the bus isn't starved it should be alright.

Stephane Hockenhull said:
the OpenGL call reading the texture (which is likely still in the cache) and writing it to video ram + correct flush will be faster than dummy-reading an equal amount of memory without absolute guarantee of coherency.

I never said anything about performing a dummy reading. At the very least, your framebuffer texture shouldn't be in L1 cache unless you've been reading from it, and if you configure the L2 correctly it won't be there either. If you have a kernel module to do this then it can take care of the latter part, the same way /dev/fb0 would when returning you a pointer to the framebuffer.

By the way, that OpenGL call would probably need a way to tell the video card to flush its cache too (not like it really matters), and if it WERE necessary I'm sure there's a less expensive way to force it to flush cache, by performing some other state change.

And if the texture cache has a random replacement strategy then that'd be pretty weird.
 
Stephane Hockenhull said:
Exophase said:
Do you know anything about the SGX's swizzle format?
from what I understood, (but I didn't have time to test it), the swizzling is not tile-based like the usual NxN tiles
(4x4 tile here for my example, 16x16 texture)

[...trimmed by Jason...]

instead of "tile" addressing with AxB tiles of NxN
Code:
     /--- tile index part  ----/  /--inside tile part-/
(msb) y5 y4 y3 y2 x5 x4 x3 x2      y1 y0 x1 x0 (lsb)
you have texels addressed as:
Code:
(msb) y5 x5 y4 x4 y3 x3 y2 x2 y1 x1 y0 x0 (lsb)
so the address generation is not really CPU-friendly

FWIW, this style of tiling looks like a Z-order space filling curve. Googling for that might turn up some good algorithms for transforming data from linear layout into this representation.
 
I was looking at the functional block diagram for the OMAP3530

it might be faster to scale the emulator's image in software as it is generated.

for 2D emulation, if the image is generated scanline by scanline it'll easily fit in the CPU cache (possibly along with the previous scanline for smoothing) and can be scaled and written directly to the FB. rather than going

CPU(render) -> RAM -> CPU(opengl glTexImage2D) -> RAM -> SGX(scale) -> RAM -> display

you hit only

CPU(render 1 line)-> CACHE -> CPU(scale 1 line) -> RAM -> display
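roughly, as a sketch (render_source_line() stands in for the emulator's own renderer, scale_line_2x_16bpp() for a line-scaling routine like the NEON one sketched earlier, and the sizes are placeholders):

Code:
#include <stdint.h>

#define SRC_W 320
#define SRC_H 240
#define FB_W  800                       /* Pandora LCD width, 16bpp assumed */

/* emulator's own per-line renderer (hypothetical) */
void render_source_line(int y, uint16_t *line);
/* 2x horizontal upscale of one 16bpp line */
void scale_line_2x_16bpp(const uint16_t *src, uint16_t *dst, int width);

void render_frame(uint16_t *fb)         /* fb = pointer to the mmap'd /dev/fb0 */
{
    uint16_t line[SRC_W];               /* one source scanline, stays in L1 */

    for (int y = 0; y < SRC_H; y++) {
        render_source_line(y, line);
        /* scale and write straight to the framebuffer, duplicating each
           output line vertically for the 2x case */
        scale_line_2x_16bpp(line, fb + (2 * y) * FB_W,     SRC_W);
        scale_line_2x_16bpp(line, fb + (2 * y + 1) * FB_W, SRC_W);
    }
}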
 
Main memory bandwidth is not everything, and it's pretty much expected that the CPU will be off the bus for a significant amount of the frame. Spending CPU time on scaling, on the other hand, is going to get in the way of CPU time that could be used on other things. This is especially true with filtering, more so if it's an arbitrary non-integer resize with bilinear filtering applied.

That aside, not every emulator is going to render a scanline at a time - PS1, for instance...

But like I already said, the best method (where applicable) is to let one of the scalers on the display controller handle it.
 