glshim


@ces

I also believe glshim's OpenGL 1.x support is more complete than Regal's, please correct me if I'm wrong
Regal tries to do everything with shaders I think, which means it simply cannot do some things like glLogicOp on Android. glshim is explicitly for "I have an OpenGL 1.x compatible game and want to run it on an OpenGL ES 1.x device" and I'm fairly certain there's no real competition to glshim anywhere (the closest things are stubs like jwzgles, which are like 1% of the functionality of glshim).

I also think Regal builds to somewhere around 200MB, which is insane. glshim is in the <=1MB range.

1. The three branches to try are lunixbochs/master, lunixbochs/unstable, and ptitseb/master.

My unstable has a couple of fixes not in master and should work fine. ptitseb's master has vastly diverged from mine, and has some features mine doesn't (and vice versa), but I would consider the ptitseb branch much less stable. My branch has a big spreadsheet of "how much of the GL spec do I conform to" as well as automated regression tests.

2. glReadPixels and glDrawPixels work fine for GL_RGBA. glCopyPixels isn't implemented; it would be very easy to build on top of the first two (see the sketch after this list), *but* it would also be very slow.

3. Mainline is missing FBO support right now. I'll add it at some point.

4. What do you want from an ES 2.x backend? It's a huge task.
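For what it's worth, here is roughly what that would look like. This is only a sketch (the helper name is made up, not glshim code), and the GPU-to-CPU readback is exactly what makes it slow:

#include <stdlib.h>
#include <GL/gl.h>

/* Hypothetical emulation of glCopyPixels(x, y, w, h, GL_COLOR):
 * read the rectangle back and re-draw it at the current raster
 * position, which is where the real glCopyPixels writes too. */
void copy_pixels_color(GLint x, GLint y, GLsizei width, GLsizei height)
{
    GLubyte *buf = malloc((size_t)width * (size_t)height * 4);
    if (!buf)
        return;
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buf);
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, buf);
    free(buf);
}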

Can we get this into the official OS release at some point, and if so, can we determine which of all the branches/forks is the best for the job yet?
glshim will be part of the Pyra OS. Historically it was extremely unstable (with very different output and performance between versions), so it made sense to package it with games to ensure compatibility. I don't see why we would need to include it with Angstrom on the Pandora at this point.
 
@ces

I also believe glshim's OpenGL 1.x support is more complete than Regal's, please correct me if I'm wrong
Regal tries to do everything with shaders I think, which means it simply cannot do some things like glLogicOp on Android. glshim is explicitly for "I have an OpenGL 1.x compatible game and want to run it on an OpenGL ES 1.x device" and I'm fairly certain there's no real competition to glshim anywhere (the closest things are stubs like jwzgles, which are like 1% of the functionality of glshim).

I also think Regal builds to somewhere around 200MB, which is insane. glshim is in the <=1MB range. [...]

Yes, it really surprised me that Regal doesn't support glLogicOp(). It doesn't support display lists either. Facts like these disappointed me, because I expected more completeness given where it comes from.


3. Mainline is missing FBO support right now. I'll add it at some point.

Is there any quick'n'easy way of plugging ptitseb's FBO support into your unstable branch? (maybe ptitseb can better answer this, as he'll know the details of his FBO implementation).

4. What do you want from an ES 2.x backend? It's a huge task.
I'd basically want the same features as in the ES 1.x backend (i.e., OpenGL 1.x support). Well, I'd be glad to have cube mapping and depth textures with shadow mapping, but nothing really "advanced", just basic OpenGL 1.x support. My interest in ES 2.x (or even 3.x) is mainly about future-proofing (in case either iOS or Android decides to drop ES 1.x at some point).
 
Yes, it really surprised me that Regal doesn't support glLogicOp(). It doesn't support display lists either. Facts like these disappointed me, because I expected more completeness given where it comes from.
My support for display lists is insanity (I do a ton of code generation), but it's mostly complete, accurate, and fast.
glLogicOp isn't very easy on ES 2.0 or WebGL: https://github.com/kripken/emscripten/issues/1416 so I mostly have support for it due to my backend choice.

Is there any quick'n'easy ™ way of plugging ptitseb's FBO support into your unstable branch? (maybe ptitseb can better answer this, as he'll know the details of his FBO implementation).
I could probably do it in a couple of days. I'm not very familiar with FBO internals so I'd need to do some research. ( http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES )
 
Is there any quick'n'easy ™ way of plugging ptitseb's FBO support into your unstable branch? (maybe ptitseb can better answer this, as he'll know the details of his FBO implementation).
I could probably do it in a couple of days. I'm not very familiar with FBO internals so I'd need to do some research. ( http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES )
And this is what Apple says about offscreen rendering with FBOs on iOS: https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithEAGLContexts/WorkingwithEAGLContexts.html

Unfortunately, Apple actively fights against backwards compatibility, so they tend to remove documentation relevant to previous versions. I say this because a previous version of the iOS GLES guide contains this sentence:

On iOS, all framebuffers are implemented using framebuffer objects, which are built-in to OpenGL ES 2.0, and provided on all iOS implementations of OpenGL ES 1.1 by the GL_OES_framebuffer_object extension.

Unfortunately, this backwards-compatibility-friendly wording has been removed from the current version of the guide linked above. It's the Apple way of "moving forward". I keep an older version of the guide as a PDF, which IMHO is a much better document because it still keeps the GLES 1.x-related wording.

As a side note, I just found that none of the iOS devices implements GLES 1.x natively. The Apple GLES 1.x implementation runs as shaders on top of a native GLES 2.x implementation. But, from what I've read, such emulation is complete (I think they support the complete GLES 1.x feature set).

Back to the FBO subject, my suggestion is that you implement the GL_ARB_framebuffer_object extension in glshim. You can't advertise it as a core feature, because it was only moved into core in OpenGL 3.0, but this extension is a special one in the sense that its function names are the same as their core counterparts (i.e., they don't have an "ARB" suffix; they are named just as if they were core functions). If there are any games that were developed against the older EXT version of this functionality (which was actually split across three extensions: GL_EXT_framebuffer_object, GL_EXT_framebuffer_multisample, and GL_EXT_framebuffer_blit, all later merged into GL_ARB_framebuffer_object), then supporting that EXT flavour would only require adding an EXT suffix to the function names.

The best part is that the ARB and EXT flavours match almost exactly. IIRC, the set of functions is a perfect match, even with the same prototypes, so, for example, you could safely make glBindRenderbuffer() and glBindRenderbufferEXT() point to the same function if you wish.
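To illustrate (just a sketch, not actual glshim code): since the prototypes match, the EXT name can simply forward to the ARB/core entry point, or be declared as an alias of the same symbol:

#include <GL/gl.h>
#include <GL/glext.h>

/* Core/ARB-style entry point exported by the shim (body omitted). */
void glBindRenderbuffer(GLenum target, GLuint renderbuffer)
{
    /* ... translate onto GL_OES_framebuffer_object here ... */
}

/* The EXT flavour has an identical prototype, so it can just forward. */
void glBindRenderbufferEXT(GLenum target, GLuint renderbuffer)
{
    glBindRenderbuffer(target, renderbuffer);
}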

Regarding enumerants, it isn't a perfect match, because the ARB flavour has a few more enumerants, but I don't think they're important ones.

What I said in my first post still applies: a full implementation of OpenGL FBOs would be a daunting task (the GL_ARB_framebuffer_object extension specification is practically a book), but I don't think that's necessary. I'd begin with the minimum features needed to run basic FBO demos, and only implement more when running into a game or app that actually uses them.
 
As a side note, I just found that none of the iOS devices implements GLES 1.x natively. The Apple GLES 1.x implementation runs as shaders on top of a native GLES 2.x implementation. But, from what I've read, such emulation is complete (I think they support the complete GLES 1.x feature set).
Do you have a source on this? Some 1.x operations are extremely hard on 2.x, like glLogicOp. I could see them targeting Metal or the native PowerVR shaders, however.
 
As a side note, I just found that none of the iOS devices implements GLES 1.x natively. The Apple GLES 1.x implementation runs as shaders on top of a native GLES 2.x implementation. But, from what I've read, such emulation is complete (I think they support the complete GLES 1.x feature set).
Do you have a source on this? Some 1.x operations are extremely hard on 2.x, like glLogicOp. I could see them targeting Metal or the native PowerVR shaders, however.
I didn't find an "official" source about the completeness of the emulation, but there's a book about 3D on the iPhone that suggests using glLogicOp() for some effects, so I assume it works reasonably well. Anyway, if they say it's a GLES 1.x implementation, I believe they're obliged to fulfill the complete 1.x specification.

I hope to be able to do some glLogicOp() tests on iOS this Easter (with an iPad 1 running iOS 4.3, an iPad Air running iOS 8.x, and an iPhone 5 running iOS 6.x). If I do, I'll tell you how it looks performance-wise.
 
I added experimental FBO support in my fbo branch, though I don't have any test cases.
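A minimal render-to-texture smoke test along these lines could serve as a first check (just a sketch using the ARB/core-style names discussed above, not code from the branch; it only verifies completeness and clears the attachment):

#include <stddef.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Create a w x h texture, attach it to an FBO, and clear it to red.
 * Returns the texture (0 on failure) so it can be drawn on a quad
 * afterwards to verify the result visually. */
GLuint fbo_smoke_test(GLsizei w, GLsizei h)
{
    GLuint tex, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return 0;
    }

    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);   /* back to the window framebuffer */
    glDeleteFramebuffers(1, &fbo);
    return tex;
}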
 
I added experimental FBO support in my fbo branch, though I don't have any test cases.
Thanks a lot! That was fast!! I downloaded it, and I'll be testing it by the end of the week. Btw, what does "ERROR_IN_BLOCK" mean? It appears in your spreadsheet, but I don't know the meaning; I've never heard of such an error in OpenGL. And what does "ton of errors" mean for glDraw/ReadPixels? Is it about some games not displaying properly, or about glError management?

If I fix and/or add features when using glshim, I'll tell you, so that you can get my modifications.
 
The errors column is "which errors do I return as per the spec?"

For glDrawPixels, it's pretty much "none" :) , but that hasn't been a problem yet.

ERROR_IN_BLOCK is my macro for this issue https://github.com/lunixbochs/glshim/issues/42

I'm curious to know what your use case for glDrawPixels or glBitmap is, as they're actually kinda hard to implement (my current solution is semi-fast but has no Z order).
 
The errors column is "which errors do I return as per the spec?"

For glDrawPixels, it's pretty much "none"  :) , but that hasn't been a problem yet.

ERROR_IN_BLOCK is my macro for this issue https://github.com/lunixbochs/glshim/issues/42
Great!! No problem about the missing error checking. If I have some spare time, I may add more error checking.

 


I'm curious to know what your use case for glDrawPixels or glBitmap is, as they're actually kinda hard to implement (my current solution is semi-fast but has no Z order).
Well, I use them from time to time, although quite sparingly. My main use is text output: the old trick of glBitmap calls inside display lists, one display list per character, with the same ID as its ASCII code so that you can pass a C string directly to glCallLists. This was a pretty common way to output text in the early days of OpenGL 1.x, so I guess you must have hit this use case in some old games. I also use glDrawPixels to draw images at exactly 1:1 pixel size in situations where I really want to avoid texturing (I sometimes do this for previewing image files such as TIFF, JPEG, etc.). Anyway, these are less important scenarios that can be reworked to use texturing if necessary.
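For context, here is that classic trick in sketch form (the 8x13 font table is a made-up placeholder):

#include <string.h>
#include <GL/gl.h>

/* Hypothetical 8x13 monochrome font: 13 rows of one byte per glyph. */
extern const GLubyte font8x13[128][13];

void build_font_lists(GLuint base)
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* each bitmap row is a single byte */
    for (int c = 32; c < 127; c++) {
        /* List ID = base + ASCII code, so a C string can index the lists directly. */
        glNewList(base + (GLuint)c, GL_COMPILE);
        glBitmap(8, 13, 0.0f, 0.0f, 9.0f, 0.0f, font8x13[c]);
        glEndList();
    }
}

void draw_label(GLuint base, GLfloat x, GLfloat y, GLfloat z, const char *text)
{
    glRasterPos3f(x, y, z);   /* sets where (and at what depth) the glyphs land */
    glListBase(base);
    glCallLists((GLsizei)strlen(text), GL_UNSIGNED_BYTE, (const GLubyte *)text);
}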

Btw, is it really impossible to make your glBitmap/glDrawPixels emulation use the depth test? How are you doing it, with some texture? If you use texturing, it should be possible to draw a primitive that is affected by the depth test when needed (I say this because it's nice to print text labels at a 3D object's vertices and see them obscured when some geometry hides the label).
 
Btw, is it really impossible to make your glBitmap/glDrawPixels emulation use the depth test?
https://github.com/lunixbochs/glshim/issues/21
It's extremely slow to do bitmaps with depth testing on the Pandora, because the only way I've managed to get good bitmap performance was to batch the entire frame worth of draws into one texture upload. This will probably be better with texture streaming, but that's hardware-specific.
 
Btw, is it really impossible to make your glBitmap/glDrawPixels emulation use the depth test?
https://github.com/lunixbochs/glshim/issues/21

It's extremely slow to do bitmaps with depth testing on the Pandora, because the only way I've managed to get good bitmap performance was to batch the entire frame worth of draws into one texture upload. This will probably be better with texture streaming, but that's hardware-specific.
There is no alpha with streaming textures, unfortunately, only RGB565 and some YUV formats...
 
There is no alpha with streaming textures, unfortunately, only RGB565 and some YUV formats...
Should still be possible. You can use GL tricks to render a different color as alpha, even in the fixed pipeline.
 
Btw, is it really impossible to make your glBitmap/glDrawPixels emulation use the depth test?
https://github.com/lunixbochs/glshim/issues/21

It's extremely slow to do bitmaps with depth testing on the Pandora, because the only way I've managed to get good bitmap performance was to batch the entire frame worth of draws into one texture upload. This will probably be better with texture streaming, but that's hardware-specific.
I think the best way to implement this with hardware acceleration is to render glBitmap/glDrawPixels as textured quads. Of course this means you have to temporarily modify the view transformations, so that the lower left corner of the image follows the correctly projected coordinates from the last call to glRasterPos, and build the quad so that it's screen-aligned and one texel maps exactly to one pixel.

This is the fastest way I can imagine to emulate glBitmap/glDrawPixels with hardware acceleration.

Of course it has some drawbacks/problems:

-It might be difficult to guarantee that the rest of the OpenGL state doesn't introduce side effects on this trick (although if you do it with care, you can guarantee correct behavior).

-Except with some extensions, textures are limited to power of two dimensions, and have a maximum size. This means that you'd need to use the smallest power-of-two size not smaller than the bitmap size, and waste some unused texels. Also, it means that very large images may not fit as a texture without resampling.

Anyway, this would allow correct depth buffer behavior while drawing bitmaps/images, and with hardware acceleration.
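Here is a sketch of that approach, written at the desktop GL 1.x level for clarity and under some simplifying assumptions: the bitmap is already in tex, raster_x/raster_y/raster_z are the window-space coordinates computed from the last glRasterPos, and the default glDepthRange(0, 1) is in effect:

#include <GL/gl.h>

static void draw_raster_quad(GLuint tex, GLfloat raster_x, GLfloat raster_y,
                             GLfloat raster_z, GLsizei w, GLsizei h,
                             GLsizei vp_w, GLsizei vp_h)
{
    /* Temporarily switch to a pixel-exact orthographic projection. */
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, vp_w, 0, vp_h, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    /* glOrtho(..., -1, 1) with an identity modelview maps eye z to NDC as -z,
     * so a window depth d in [0,1] needs vertex z = 1 - 2*d. This keeps the
     * quad depth-tested against the rest of the scene. */
    GLfloat z = 1.0f - 2.0f * raster_z;

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);

    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex3f(raster_x,     raster_y,     z);
    glTexCoord2f(1, 0); glVertex3f(raster_x + w, raster_y,     z);
    glTexCoord2f(1, 1); glVertex3f(raster_x + w, raster_y + h, z);
    glTexCoord2f(0, 1); glVertex3f(raster_x,     raster_y + h, z);
    glEnd();

    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}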
 
I do it like that on my fork. It's fine for simple lists used to display some characters, but it is clearly slower than lunixbochs' batching method when using glBitmap intensively...
 
I do it like that on my fork. It's fine for simple lists used to display some characters, but it is clearly slower than lunixbochs' batching method when using glBitmap intensively...
For intensive glBitmap usage with small bitmaps, maybe you could get better performance by queuing the calls, building a big texture in RAM from the small bitmaps of each call, and then making a single call to glTexImage2D when there's no more space left in the current texture. You still have to render a separate quad for each queued glBitmap call, but you'd make very few calls to glTexImage2D, so this might be faster.

But of course, the emulation becomes even more complicated: you have to keep an internal queue of glBitmap calls (with the value of glRasterPos at the moment of each call, as well as the texture coordinates where the bitmap has been stored).

And if the bottleneck isn't at glTexImage2D, then you won't get any performance gain, of course.
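A data-structure sketch of that queuing idea (all the names here are invented; the shelf packing and the per-call quad emission are only hinted at):

#include <string.h>
#include <GL/gl.h>

#define ATLAS_SIZE 1024
#define MAX_QUEUED 4096

/* One queued glBitmap call: where it lands on screen and where its bits
 * were packed in the shared atlas. */
typedef struct {
    GLfloat raster_x, raster_y, raster_z;
    int     atlas_x, atlas_y, w, h;
} queued_bitmap;

typedef struct {
    GLubyte       pixels[ATLAS_SIZE * ATLAS_SIZE];  /* GL_ALPHA atlas kept in RAM */
    queued_bitmap queue[MAX_QUEUED];
    int           count;
    int           pen_x, pen_y, row_h;              /* trivial shelf-packer state */
} bitmap_atlas;

/* Called when the atlas fills up or the frame ends: one glTexImage2D,
 * then one textured quad per queued call (quad emission omitted). */
static void flush_atlas(bitmap_atlas *a, GLuint tex)
{
    if (a->count == 0)
        return;
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, ATLAS_SIZE, ATLAS_SIZE, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, a->pixels);
    /* ...emit a->count quads using each entry's raster position and atlas UVs... */
    a->count = a->pen_x = a->pen_y = a->row_h = 0;
    memset(a->pixels, 0, sizeof a->pixels);
}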
 
I think the best way to implement this with hardware acceleration is to render glBitmap/glDrawPixels as textured quads. Of course this means you have to temporarily modify the view transformations, so that the lower left corner of the image follows the correctly projected coordinates from the last call to glRasterPos, and build the quad so that it's screen-aligned and one texel maps exactly to one pixel.
This is how it works right now, but with one big quad.
These are all things I've considered. So far I've only had one game where it mattered, and I wrote a software renderer for that game instead (Uplink).

I don't see a point in implementing something that will cause more than one texture upload per frame unless I have texture streaming, as the current fully batched solution is already bottlenecked to ~20fps.

The most promising way to me was to automatically atlas textures and use a hash table to prevent uploading the same glBitmap/glDrawPixels call more than once, but that also has its tradeoffs and isn't exactly trivial.

For now, rendering text with no Z order is actually usable for most games and looks nice and crisp.
 
I think the best way to implement this with hardware acceleration is to render glBitmap/glDrawPixels as textured quads. Of course this means you have to temporarily modify the view transformations, so that the lower left corner of the image follows the correctly projected coordinates from the last call to glRasterPos, and build the quad so that it's screen-aligned and one texel maps exactly to one pixel.
This is how it works right now, but with one big quad.

These are all things I've considered. So far I've only had one game where it mattered, and I wrote a software renderer for that game instead (Uplink).


I don't see a point in implementing something that will cause more than one texture upload per frame unless I have texture streaming, as the current fully batched solution is already bottlenecked to ~20fps.


The most promising way to me was to automatically atlas textures and use a hash table to prevent uploading the same glBitmap/glDrawPixels call more than once, but that also has its tradeoffs and isn't exactly trivial.


For now, rendering text with no Z order is actually usable for most games and looks nice and crisp.
Yes, please don't consider my comment a feature request. The current implementation of glBitmap/glDrawPixels is more than enough for me. It's nothing like FBOs, which I really needed (thanks a lot for adding them so quickly!!).

In the case of glBitmap(), I have some applications that use it intensively, but I was already aware that their way of displaying text would have to be reconsidered when porting to mobile devices, so don't worry: I already had in mind that I'd need to write an alternate render path for it.

The use of glBitmap() in these apps is for printing "data labels" at the vertices of 3D meshes. Imagine, for example, that you have a 3D mesh whose Z coordinate represents temperature, and you wish to print the temperature at each vertex: you need the depth test enabled so that you don't see numbers which should be hidden. Or a poly editor with an option to show the vertex IDs. But, as I said, I already had in mind that their mobile port would need an alternative display method.
 
I can't seem to build this on the Raspberry Pi 2:


pi@raspberrypi ~/glshim $ cmake . -DBCMHOST=1; make GL
CMake Error at src/CMakeLists.txt:18 (target_link_libraries):
Cannot specify link libraries for target "GL2" which is not built by this
project.


-- Configuring incomplete, errors occurred!
make: *** No rule to make target 'GL'. Stop.

This is ptitseb's fork.
 
GL2? It shouldn't be trying to build that; that's odd. But I have never tried building the RPi path, so I may have broken it during some cleanup.
 