GPU Board with Vulkan Support


i think we definitely need a sarcasm meter in real life.

not exactly sure how to get the "sarcasm tone" correctly, but it's pretty noticeable (machine learning i suppose). of course, to get the full sarcasm level, you multiply the sarcasm tone by the preposterous-ness of the sentence, the latter of which would require speech recognition and a fancier algorithm.

bonus points for spitting out the geiger counter clicks when the sarcastic tone is meant in a radioactive, destructive, mean-spirited sort of way.
 
Yeah, but all the extra GPU efficiency is for nothing if the CPU spends more time sending crap to the GPU than it does on game logic and audio combined...

Nah, I'm sure it's great, it really is.

The thing is, the OpenGL driver is doing all of this (and more! MUCH MORE) behind the scenes, so all of this is actually likely a *lot* less work than calling glDrawMeAGoddamnTriangle().
 
Yeah, but OpenGL runs on the GPU, not the CPU. On games and whatnot which are more GPU-bound than CPU-bound, that might actually not be a bad thing, but I dunno how many things that applies to.
 
Not really - the code that 'sets up the GPU to run the OpenGL rendering' still runs on the CPU, and that's what Vulkan is doing here. It's not actually writing out pixels, but building the GPU command buffer so the GPU knows which pixels to write where, then kicking it off. In some use cases this can have a high CPU cost and actually be the bottleneck (when setting up the commands costs more than running them on the GPU - often caused by 'lots of little commands' rather than 'few big ones').

An equivalent of that code (though much more complicated, as it has many more options and weird corner cases to deal with) runs on the CPU as the 'OpenGL driver'.
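To make that concrete, here's a rough sketch of what 'setting up the GPU command buffer' looks like in Vulkan. Every line runs on the CPU; the GPU only starts work at the submit. This assumes the instance, device, queue, render pass, framebuffer and pipeline have already been created elsewhere - names like `cmd`, `pipeline` and `extent` are placeholders, not anything from this thread:

```c
/* Sketch only: instance/device/swapchain/render pass/pipeline setup is
 * assumed to exist already. Everything below is pure CPU work; the GPU
 * does nothing until vkQueueSubmit() hands it the finished command list. */
VkCommandBufferBeginInfo begin_info = {
    .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
};
vkBeginCommandBuffer(cmd, &begin_info);

VkClearValue clear = { .color = { .float32 = { 0.0f, 0.0f, 0.0f, 1.0f } } };
VkRenderPassBeginInfo rp_info = {
    .sType           = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
    .renderPass      = render_pass,
    .framebuffer     = framebuffer,
    .renderArea      = { .offset = { 0, 0 }, .extent = extent },
    .clearValueCount = 1,
    .pClearValues    = &clear,
};
vkCmdBeginRenderPass(cmd, &rp_info, VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
vkCmdDraw(cmd, 3, 1, 0, 0);               /* the proverbial single triangle */
vkCmdEndRenderPass(cmd);
vkEndCommandBuffer(cmd);

VkSubmitInfo submit = {
    .sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO,
    .commandBufferCount = 1,
    .pCommandBuffers    = &cmd,
};
vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);  /* GPU takes over here */
```

An OpenGL driver records something very much like this internally every time you call glDrawArrays(); Vulkan just makes the cost visible and lets you batch or reuse the recording.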
 
This is borne out by my experiments. I have, as many of you know, a pet project that is a BASIC interpreter. What I do is declare a region of memory as the display and the interpreter writes to that as it goes. Once per frame, I blit it to the display.

Currently I do that with SDL - declare a surface and write to the memory pointed to by the address it's loaded at. I tried using OpenGL to see if that might be any faster than SDL - no, as it turns out: the cost of uploading the surface as a texture each frame makes it possibly a little slower than SDL. What would be nice is if I could get a pointer to the actual graphics memory on the GPU. Now that would be awesome - except that writing to it in a random-access fashion is a lot slower than uploading a texture each frame.

You'd have thought by now we'd have faster access, or a concept of true shared memory with no bottlenecks, but no. We haven't had that since the 8-bit days.
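For anyone curious what that pattern looks like, here's a rough SDL2 sketch of the 'software framebuffer, blit once per frame' approach described above (my own toy names and sizes, not SpecBAS code - the real interpreter obviously does far more):

```c
#include <SDL2/SDL.h>
#include <stdint.h>
#include <stdlib.h>

#define W 800
#define H 600

int main(void) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("fb", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, W, H, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
    /* Streaming texture: the GPU-side copy of the software framebuffer. */
    SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                         SDL_TEXTUREACCESS_STREAMING, W, H);
    /* The interpreter writes into this plain CPU-side buffer as it runs. */
    uint32_t *framebuffer = malloc(W * H * sizeof(uint32_t));

    for (int frame = 0; frame < 300; frame++) {
        /* ... interpreter plots pixels into framebuffer here ... */
        framebuffer[(frame % H) * W + (frame % W)] = 0xFFFFFFFF;

        /* Once per frame: upload the whole buffer and present it. */
        SDL_UpdateTexture(tex, NULL, framebuffer, W * sizeof(uint32_t));
        SDL_RenderCopy(ren, tex, NULL, NULL);
        SDL_RenderPresent(ren);
    }

    free(framebuffer);
    SDL_DestroyTexture(tex);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```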
 
Like glMapBuffer? https://www.khronos.org/registry/gles/extensions/OES/OES_mapbuffer.txt

To do it 'fast' you'll probably want to locally double-buffer anyway, so you're modifying one buffer while the GPU is using the other; otherwise the cost of synchronizing the two would probably be much higher than any possible gain.

And because of the static cost of setting up a GPU command list, it'll never be 'faster' to draw individual pixels using OpenGL; only polygons will be a win. OpenGL isn't a magic *make graphics go faster* button, but a way of accessing a hardware block optimised for rasterising polygons and running shaders on large numbers of pixels at a time.
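Roughly what that double-buffered map-and-upload approach could look like on desktop GL, where glMapBuffer is core since 1.5 and pixel buffer objects since 2.1 (the link above is the GLES extension variant). Sketch only - `tex`, `W`, `H` and the surrounding GL context setup are assumed to already exist:

```c
/* Two pixel-buffer objects: the CPU writes into one while the GPU is still
 * reading from the other, avoiding a hard sync every frame. */
GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; i++) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, W * H * 4, NULL, GL_STREAM_DRAW);
}

int write_idx = 0;

/* Per frame: */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[write_idx]);
void *ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
if (ptr) {
    /* ... interpreter writes this frame's pixels into ptr here ... */
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
}

/* Upload from the *other* PBO (filled last frame) into the texture.
 * With a PBO bound, the last argument is an offset, not a pointer. */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[1 - write_idx]);
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                GL_BGRA, GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

write_idx = 1 - write_idx;
```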
 
Recent benchmarks show Vulkan actually using quite a bit less CPU than OpenGL. It's not the amount of code that matters, it's what it does and what it's synchronized with.

@ZXDunny: Plotting pixels using the CPU and sending them to the GPU to render as a full-screen texture is not really a good use case unless you want some shader effects or offloaded scaling. Send your sprites and stuff to the GPU beforehand and do all the blitting there. Don't send textures at runtime, only location and scene composition data. Sorry if this is obvious, but thinking of using OpenGL or Vulkan like SDL sounds to me like a misunderstanding of the concept.
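As a sketch of what 'send textures beforehand, only composition data at runtime' means in GL terms - the VBO, shader program and `sheet_pixels` here are assumed to be set up elsewhere, this is just the shape of the idea:

```c
/* At load time: upload each sprite sheet once; it then lives in GPU memory. */
GLuint sprite_tex;
glGenTextures(1, &sprite_tex);
glBindTexture(GL_TEXTURE_2D, sprite_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sheet_w, sheet_h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, sheet_pixels);

/* Per frame: only positions and texture coordinates go to the GPU,
 * never pixel data - the blitting itself happens on the GPU. */
struct Vertex { float x, y, u, v; };
struct Vertex quad[6];
/* ... fill quad[] from this frame's sprite position in the scene ... */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof quad, quad);
glBindTexture(GL_TEXTURE_2D, sprite_tex);
glDrawArrays(GL_TRIANGLES, 0, 6);
```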
 
Indeed - it's that I had no choice. The OSX/SDL combination, unlike Linux and Windows, results in a maximum of about 12 fps of window updates, which is clearly not acceptable, so I had to go to an OpenGL solution to hit the magic 50 fps that SpecBAS demands. Of course, by that point my renderer and window manager were pretty much done - it's 8bpp so there's really no need for hardware at all, and I'd have liked to avoid it, but... well, Apple.

I had considered using OpenGL when I eventually get around to adding 3D extensions to the BASIC, but I've decided it will be better to go through software for that as there's a particular style I want to do, and hardware would be overkill.
 