Possibly daft question - will hardware help?


ZXDunny

Long post alert :)

Ok, as you all probably know by now my pet project is SpecBAS, the BASIC interpreter that exists on the repo as PandaBAS.

Due to the rather poor performance of the ARM CPU regarding floats, it's quite slow. However, while updating the code I've run into a rather interesting (and previously unconsidered) slow-down - the display rendering.

I recently updated SpecBAS with the aim of providing 32bit graphics handling. To this end, the original 8bit rendering surface (the part the user sees) had to be converted to 32bit, and my compositor was altered to render the current graphics system to that. A little background, then:

SpecBAS provides a drawing surface (the screen) upon which the user can render text and graphics. It also provides windows, which are themselves plain graphical surfaces. Whenever a frame is rendered, the screen and all its windows are blitted to the SDL surface that is declared when SpecBAS is run. I've limited drawing to only the rectangles of the changes that occur, and obviously only to windows that are visible. Previously I copied the display data to the SDL surface 4 pixels at a time (using longword pointers to transfer) but now of course with a 32bit display target I have 4 times the amount of data to move....
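For reference, the old 8bit path was essentially a straight copy - something along these lines (simplified, with made-up pointer and counter names; both surfaces were the same 8bpp format, so four pixels moved per transfer):

While Count > 0 Do Begin
  // Source and destination share the same 8bpp format, so just move LongWords
  pLongWord(dPtr)^ := pLongWord(sPtr)^;
  Inc(pLongWord(sPtr));
  Inc(pLongWord(dPtr));
  Dec(Count, 4);   // four pixels per LongWord
End;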

As a test, I ran a small program which plots pixels to the display - a plasma cloud function called recursively in a diamond-square algorithm, drawing to a 512x400 image. The results were interesting:

1. With the old renderer to an 8bit target, blitting longwords - 102 seconds

2. New renderer blitting one pixel at a time - 148 seconds.

Clearly unacceptable and the whole thing feels quite sluggish. I messed around with instruction ordering, unrolled some loops and stored common expressions in variables and got it down to:

3. Newer renderer - 128 seconds.

Not too far behind the old renderer - only about 25% slower (128s vs 102s). For a laugh I disabled the compositor and ran the test again, to gauge how much time PandaBAS spends in there... and got 57 seconds.

So almost half of the time PandaBAS spends running is drawing the display. This isn't a problem on x86 CPUs these days, as the compositor runs in a separate thread and therefore (because I'm careful to set affinity properly) gets a core to itself, and doesn't interfere with the interpreter.

The Pandora only has one core - so time spent rendering the display is time that is unavailable to execute BASIC code.

How do I get around this?

Can I use GLES to do the heavy lifting? All my windows are stored as a 256-entry palette of RGBA values followed by a plain memory region that represents their surfaces, 8bpp. Can GLES be used in FPC/Lazarus?
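For reference, the window storage described above is shaped roughly like this (not the actual declaration, just an illustration of the layout):

Type
  TWindowData = Packed Record
    Palette: Array[0..255] Of LongWord;   // 256 RGBA entries
    Pixels: Array[0..0] Of Byte;          // Width*Height bytes of 8bpp indices follow here
  End;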

Can the DSP help? Is it even possible to program the DSP in Lazarus?

Both of the above options would be a fire-and-forget strategy - I tell it to render the display and then go off and continue interpreting; it doesn't matter if the memory regions that hold the display and windows get altered during a render.

Given that I also target the Raspberry PI, would either of the above be of any use at all?

Or do I need to optimise my compositor further to try and get performance up as far as possible?

Any thoughts, anyone?

Cheers,

D.
 
OpenGL|ES can use 8-bit-per-pixel paletted textures, afaik. Texture uploads cause some overhead, but compared to the minute-plus you're seeing? I'd say that overhead is irrelevant.

EDIT: But do keep in mind that OpenGL on the Pandora can only render to a 16-bpp framebuffer. You can use 32-bpp textures, but half the bit depth will be wasted.
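If you do go the paletted-texture route, the upload would look roughly like this - untested, and it assumes a GLES 1.1 binding that exposes glCompressedTexImage2D plus the GL_PALETTE8_RGBA8_OES format from OES_compressed_paletted_texture:

Const
  GL_PALETTE8_RGBA8_OES = $8B96;   // OES_compressed_paletted_texture token

Procedure UploadPalettedWindow(WindowData: Pointer; W, H: Integer);
Begin
  // The extension expects 256 RGBA palette entries (1024 bytes) immediately
  // followed by W*H bytes of 8bpp indices - which matches how your windows
  // are already stored in memory.
  glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_PALETTE8_RGBA8_OES,
                         W, H, 0, 1024 + W * H, WindowData);
End;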
 
NEON blitters like those in notaz' SDL can make a big difference. You could try bsp's DSP blitters too.
 
Cheers guys. I'll investigate Notaz's blitters - I'm not sure that the DSP can be programmed in Lazarus (unless it's just a C header conversion).

OpenGL losing half the bitdepth sounds pretty bad - I might have to give that a miss.

D.
 
As it's just copies (from what I understood), you should somehow call the C library's memcpy() instead of doing this with interpreted code. My blitters won't do any better than memcpy() in this case, as memcpy() already does all the necessary preloads.

And yes, if the surfaces can be copied in parallel, the DSP would help. I suppose FPC can interface with system libraries somehow, otherwise it would not be able to use SDL at all? That means the DSP code could be put into a separate library written in C and called from FPC/Lazarus. For the R-Pi that library could use DMA, which can also run in parallel (I saw sample code for that somewhere).
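Interfacing would just be the usual external declaration on the FPC side - something like this, with entirely made-up names for a hypothetical C wrapper library:

// Hypothetical binding; the C side would contain the actual DSP/DMA code.
Procedure dsp_blit_8to32(Src, Dst, Palette: Pointer;
                         Width, Height, SrcStride, DstStride: Integer);
  cdecl; external 'dspblit';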
 
Memcpy() - and its Pascal equivalent - copies chunks of bytes, I would assume. Would the calling overhead be worth it for the number of times it would be called? Also, does it do pixel format conversion? :)

The graphics I need to blit to the destination are 8bpp, and so need to be converted to 32bpp. Currently, my code does this:

While SrcH > 0 Do Begin
  W := SrcW;
  While W > 0 Do Begin
    // Read an 8bpp index, look up its 32bit RGBA entry and write it out
    dPtr^ := LongWord(Window^.Palette[pByte(sPtr)^]);
    Dec(W);
    Inc(pByte(sPtr));
    Inc(dPtr);
  End;
  // Advance both pointers by the remainder of their strides
  Inc(pByte(sPtr), tWidth1);
  Inc(pByte(dPtr), tWidth2);
  Dec(SrcH);
End;
I.e., it reads a byte and uses it as an index into the palette, which stores the pixel colour as a longword (RGBA); that longword is then written to the destination. When a line has been drawn, the source and destination pointers are advanced to account for the stride of the surfaces they work on (this routine also does clipping).
I've also tried this:

{$IFDEF PANDORA}
If SP_ScreenZero = nil Then Begin
  Window := pSP_Window_Info(@Bank^.Info[WindowIdx]);
  SP_ScreenZero := SDL_CreateRGBSurfaceFrom(@Bank^.Memory[Window^.Offset], SCREENWIDTH, SCREENHEIGHT, 8, SCREENWIDTH, 0, 0, 0, 0);
  SP_ScreenZero^.Format^.Palette := @Window^.nColours;
End;
SDL_BlitSurface(Screen, nil, SP_ScreenZero, nil);
Inc(WindowCnt);
Inc(WindowIdx, SizeOf(SP_Window_Info));
{$ENDIF}
Basically, for the largest (and most common) surface, the main display, an SDL_Surface is declared and pointed at the bytes used to represent the surface internally. The palette is assigned (same format as SDL_Palette) by changing the pointer (I'm not worried about memory leaks at this point - it's only done once, and I'll clean it up later). Then I blit the new surface to the previously created display surface, and continue with the next window. Note that this is just for the main window; the rest are rendered by my own routine.
All I get, however, is a black screen. My other windows (the editor windows) display fine :(

I'm investigating whether or not the alpha component of the palette is significant - it's zero atm. It shouldn't be significant, as the surface is 8bpp.

Edit: As I suspected, the alpha value has no effect - still black.

D.
 
Ok, so you need to convert from a paletted format. I don't have much idea how that palette lookup code translates to ARM, so it's difficult to comment. If it all goes through some interpreter, it's probably several times slower than doing the same in C.

I almost never use SDL so I'm not an expert, but I doubt just setting the pointer is enough to set up the palette. There is an SDL_SetPalette() function, so you're probably supposed to use that.
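Something along these lines, I'd guess - untested, and it assumes your Pascal SDL headers declare SDL_SetPalette like the C one and that you have the window's colours available as an array of SDL_Color (Colours here is made up):

// Install the 256 colours as the surface's logical and physical palette,
// rather than poking Format^.Palette directly.
SDL_SetPalette(SP_ScreenZero, SDL_LOGPAL Or SDL_PHYSPAL, @Colours[0], 0, 256);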

Another thing that could cause a black screen is using double buffering without calling SDL_Flip(), but if everything works by updating the screen surface directly, that shouldn't be the problem.
 
Are you sure the packing of the data in a pSP_Window_Info starting at nColours is 100% the same as SDL_Palette? In SDL_Palette, the colours after nColours are reached through a pointer - it's not just an array of bytes (but I haven't looked at the code). Can you copy/paste the SP_Window_Info definition?

For Pascal code, it looks pretty optimal :) The issue with FPC is that it doesn't vectorize (cannot use NEON as it's not supported).
 
Are you sure the packing of the data in a pSP_Window_Info starting at nColours is 100% the same as SDL_Palette? In SDL_Palette, the colours after nColours are reached through a pointer - it's not just an array of bytes (but I haven't looked at the code). Can you copy/paste the SP_Window_Info definition?
Way ahead of you :)

That was exactly the problem - I assumed that the colour palette was an array, and didn't spot the pointer! I've modified it now, but also found that I'd got the blit's parameters the wrong way round! No wonder I just got black :)

For Pascal code, it looks pretty optimal :) The issue with FPC is that it doesn't vectorize (cannot use NEON as it's not supported).
Yeah, I was afraid of that. If I can't make some major savings in CPU usage elsewhere, I may have to just accept that this app isn't going to run very well on the Pandora and abandon it as a target. Either that or abandon 32bit graphics and go back to 8bit.

D.
 
Well, you can still redo that While loop in ARM assembly. I suppose it's doable in FPC (I did that a lot in TPX; I assume a similar "asm" keyword exists in FPC too). Hand-optimised assembly should be 2 to 3 times faster than compiled Pascal, even if you cannot use NEON for the palette lookup...
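Something in this direction, maybe - a rough, untested sketch of just the inner loop as a standalone routine, assuming FPC's "assembler" procedures and the usual ARM calling convention (Dst/Src/Pal/Count arriving in r0-r3):

// Converts Count pixels of one row from 8bpp indices to 32bpp RGBA.
Procedure PaletteBlitRow(Dst, Src, Pal: Pointer; Count: LongWord); assembler; nostackframe;
Asm
.LPixel:
  ldrb  r12, [r1], #1            // read one 8bpp palette index, advance source
  ldr   r12, [r2, r12, lsl #2]   // fetch the 32bit RGBA entry for that index
  str   r12, [r0], #4            // write it out, advance destination
  subs  r3, r3, #1
  bne   .LPixel
End;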
 
Well, you can still redo that While loop in ARM assembly. I suppose it's doable in FPC (I did that a lot in TPX; I assume a similar "asm" keyword exists in FPC too). Hand-optimised assembly should be 2 to 3 times faster than compiled Pascal, even if you cannot use NEON for the palette lookup...
I was afraid you were going to say that :)

I'll have to start learning ARM assembly then - this should be fun!

D.
 
Yeah, ARM is fun, and soooo different from x86!!!
 
Ok, so ASM is my last, best hope for speed. Delphi does a pretty good job of that loop on x86 - about 5 instructions all told per iteration. I have no idea what FPC does with it in ARM assembly.

Thing is, I don't want to have to give up on 32bit graphics. I know that on the Pyra this thing will fly just like it does on the desktop PC, and the Pandora will be irrelevant then. But it's still a shame I can't get decent performance out of it. Then again, I'm throwing around more than a megabyte of data every 20ms. It's just too much for the Pandora to handle, I suppose.

D.
 
Ok, finally tracked down the slowdown. I built PandaBAS for Windows/SDL and worked from that - and it turns out that updating the screen with SDL_UpdateRect() is really slow.

With some faffing about, I've got it to double the previous version's performance - I'm quite chuffed about that. I'm still calling SDL_UpdateRects(), but on examining the readme for Notaz's accelerated SDL driver, I noticed this:

SDL_OMAP_FORCE_DIRECTBUF:
When double buffering is not used, this option forces all blits to go directly to the framebuffer (SDL_UpdateRect() has no effect), which will give speed but may cause flickering. Otherwise all blits will go to offscreen buffer and SDL_UpdateRect() is needed to update the screen (this is how standard SDL works too).
When double buffering is used, this option has no effect (all blits always go to back buffer that's displayed after flip).
Now *this* looks interesting. The original idea of the compositor was that it would run in the background (much like the ZX Spectrum's ULA does for its display) and update constantly, once per frame. Now, if I can eliminate calls to SDL_UpdateRects() and point my compositor at the screen directly, that removes the speed penalty (my blits really are quite fast even without NEON) and the whole thing behaves as I'd originally intended.

Here is my code to create the surface that I draw to:

 

Screen := SDL_SetVideoMode(Width, Height, 32, SDL_FULLSCREEN or SDL_HWSURFACE);
Where "Screen" is of type pSDL_Surface (pointer to a surface). That pointer is used to render to that surface - how do I get a pointer to the actual screen (frame buffer?)
Cheers,

D.
 
If you set the SDL_OMAP_FORCE_DIRECTBUF environment variable to 1, the returned surface will automatically point directly at video RAM (the surface memory will be the same RAM location the display controller copies from); otherwise it will be an offscreen one.
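From FPC, something like this before the video mode is created should do it (assuming the Pascal headers declare SDL_putenv - exporting the variable in your launch script works too):

// Has to be set before SDL_SetVideoMode() so the driver picks it up.
SDL_putenv('SDL_OMAP_FORCE_DIRECTBUF=1');
Screen := SDL_SetVideoMode(Width, Height, 32, SDL_FULLSCREEN or SDL_HWSURFACE);
// Screen^.pixels now points straight at the framebuffer.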
 
If you set the SDL_OMAP_FORCE_DIRECTBUF environment variable to 1, the returned surface will automatically point directly at video RAM (the surface memory will be the same RAM location the display controller copies from); otherwise it will be an offscreen one.
Interesting, thanks :)

It works, though it isn't much faster in the current test - it probably is in others, though... It's useful for seeing what flickers and where - I'm drawing far too much each frame, in the editor at least.

D.
 
Ok, I think that in order to reduce flickering I should add exclusion rectangles to my blitter. If anyone has any suggestions as to the best algorithm for that, I'm all ears - my best guess is to exclude one rect from another by dividing the remainder up into smaller rects, something like the sketch below :)
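Rough and untested, splitting the part of rect A that isn't covered by rect B into up to four strips (assumes B has already been clipped so it lies entirely inside A):

Type
  TRect = Record x, y, w, h: Integer; End;

// Returns the number of rects written to Res (0..4) covering A minus B.
Function SubtractRect(Const A, B: TRect; Var Res: Array Of TRect): Integer;
Begin
  Result := 0;
  If B.y > A.y Then Begin                              // strip above B
    Res[Result].x := A.x; Res[Result].y := A.y;
    Res[Result].w := A.w; Res[Result].h := B.y - A.y;
    Inc(Result);
  End;
  If B.y + B.h < A.y + A.h Then Begin                  // strip below B
    Res[Result].x := A.x; Res[Result].y := B.y + B.h;
    Res[Result].w := A.w; Res[Result].h := (A.y + A.h) - (B.y + B.h);
    Inc(Result);
  End;
  If B.x > A.x Then Begin                              // strip to the left of B
    Res[Result].x := A.x; Res[Result].y := B.y;
    Res[Result].w := B.x - A.x; Res[Result].h := B.h;
    Inc(Result);
  End;
  If B.x + B.w < A.x + A.w Then Begin                  // strip to the right of B
    Res[Result].x := B.x + B.w; Res[Result].y := B.y;
    Res[Result].w := (A.x + A.w) - (B.x + B.w); Res[Result].h := B.h;
    Inc(Result);
  End;
End;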

D.
 
Ok, so now the speed of PandaBAS with the renderer enabled is the same as the speed with it disabled - I get 57 seconds in my test, which you'll see from the first post is what I got when I removed graphical output!

Thanks for the help guys :)

D.
 