Killed


rodolforg

Member
Joined
Apr 9, 2008
Messages
72
Age
41
Location
Brazil
I'm coding an application that uses SDL. Sometimes, when I try to allocate a surface (with SDL_CreateRGBSurface) - and yes, it's a bit big - the program crashes with a "Killed" message, instead of just returning a NULL pointer for the surface.

Any idea how I can prevent that crash? Or do I have to restrict the surfaces to an arbitrary size? I read the SDL_surface.c* code and it already limits the dimensions in order to prevent overflow of the size variables (int, long, etc.).

Edit: *I got this file from the 1.2.14 release available on the libSDL.org site.
 
SDL_CreateRGBSurface() in SDL_surface.c appears to call SDL_OutOfMemory() before returning NULL. If you're running that close to the line on memory, though, then it could be anything that's killing your app.
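That NULL is the case you can actually handle, by the way. A minimal sketch of the recovery path, assuming a 16 bpp surface like yours (not taken from your code):

Code:
#include <stdio.h>
#include "SDL.h"

/* w and h come from the image the user picked */
SDL_Surface *try_create_surface(int w, int h)
{
    SDL_Surface *surf = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 16, 0, 0, 0, 0);
    if (surf == NULL)
        fprintf(stderr, "couldn't create %dx%d surface: %s\n", w, h, SDL_GetError());
    return surf;   /* NULL means: refuse the file or try a smaller size */
}

The "Killed" case never reaches that branch, though - the kernel terminates the process before the function can return.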
 
Rodolfo said:
I'm coding an application that uses SDL. Sometimes, when I try to allocate a surface (with SDL_CreateRGBSurface) - and yes, it's a bit big - the program crashes with a "Killed" message, instead of just returning a NULL pointer for the surface.

Any idea how I can prevent that crash? Or do I have to restrict the surfaces to an arbitrary size? I read the SDL_surface.c* code and it already limits the dimensions in order to prevent overflow of the size variables (int, long, etc.).

Edit: *I got this file from the 1.2.14 release available on the libSDL.org site.

Are you certain that is the line? You can use gdb and step through the code to pinpoint it.
If you are sure about that line, just try making the size smaller and see if it gets past it.
 
The Linux kernel can overcommit, meaning a memory allocation will succeed even if there isn't enough room, in the hope that you won't actually use all of the memory. If too much memory really does get used, then the kernel has to pick something to kill. This is a configurable option - I'm not sure what GPH went with here - but I do know I've gotten stuff killed on launch for having big static arrays that couldn't fit in RAM.

I'm sure you could track memory usage yourself, but unless your program can cope with the allocation failing in some way that's better than just closing, I don't know what the point is. Obviously you're going to want to do something to reduce the memory usage.
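For what it's worth, the knob is /proc/sys/vm/overcommit_memory; mode 2 tells the kernel not to overcommit at all, so allocations are more likely to fail up front instead of the process being killed later. A rough sketch (needs root and affects every process on the system, so treat it as illustration only):

Code:
#include <stdio.h>

/* Mode 2 = "don't overcommit": the commit limit is enforced at allocation time */
static int disable_overcommit(void)
{
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");
    if (!f)
        return -1;
    fputs("2\n", f);
    fclose(f);
    return 0;
}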
 
Pickle: Yes, I'm pretty sure - I put fprintf(stderr, ...) before and after the call. And when the surface is smaller, it runs through without problems. I can't use gdb because the battery is empty and I can't find my cable... and I never carry it when I leave home. One member here is testing it for me on the Wiz.

Exophase: Hmm... interesting. The surface is generated from a file the user chooses, so it can be that big and I wouldn't know until... well, it crashes before I could know.

I test my apps on my OS (Debian/Linux) on my notebook and use gdb/valgrind/gmon to check everything I can (memory leaks, invalid reads/writes, etc.). That error doesn't happen here - just on the Wiz.

Finally, I wouldn't mind if an image that big couldn't be loaded or created at runtime, but I wish I could prevent the crash.
 
You should read here for more information:

http://linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memory.html?page=1

One option is to turn down overcommitting in an attempt to force the allocation to fail. This is pretty obtrusive to the operation of everything else on the system, and I imagine it needs the user to be root, but on the Wiz your app is usually the only important thing running and it'll probably be run as root anyway. Another option is to read /proc/meminfo to try to determine how much memory is available ("reclaimable") before attempting to load the image. That's pretty cumbersome, and there may be some way to check it more directly through a syscall or something, but I don't know how.
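The /proc/meminfo idea looks roughly like this - MemFree plus Buffers plus Cached as a crude "reclaimable" figure (just a sketch, and keep in mind another process can still grab the memory between this check and your allocation):

Code:
#include <stdio.h>
#include <string.h>

/* Returns an estimate of reclaimable memory in kB, or -1 on error */
static long estimate_free_kb(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128], key[64];
    long total = 0, value;

    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "%63[^:]: %ld", key, &value) == 2) {
            if (!strcmp(key, "MemFree") || !strcmp(key, "Buffers") ||
                !strcmp(key, "Cached"))
                total += value;
        }
    }
    fclose(f);
    return total;
}

Then, before loading, compare (width * height * bytes_per_pixel) / 1024 against that number, with a decent safety margin.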
 
Pickle: it would be something like 3000x3000. But it's a user-supplied file; it's not that I want it to be that big.

Exophase: good link! I read just the first page, though - too long ;) I'll continue next week, as I'll be travelling Saturday morning through Thursday.
I think I'll use the /proc/meminfo approach. It seems easy and it gives me a good estimate.

Many thanks, both of you.
 
3000 x 3000 x 3 bytes/pixel = 25.7 megabytes!! - if you are using 24 bpp images, and more if you use 32 bpp. You can use 16 bpp to reduce this. The memory free for apps on the Wiz is about 20 MB or less, if I remember correctly. You could consider accessing the file by blocks, that is, reading only certain square areas; this is done in geographic software. You can load part or all of the file into a normal array and extract an area into a smaller screen surface to paint. If you must process all the pixels, you can read small blocks too and not retain the data in memory. You can set a limit on the size of the image file as well.
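Expressed as code, that last suggestion (a size limit) is just the same arithmetic turned into a check. The 16 MB cap here is an arbitrary example, not a measured Wiz figure, and with it even 3000x3000 at 16 bpp (about 17 MB) would be refused:

Code:
/* Arbitrary cap for illustration -- tune it to the real free memory on the Wiz */
#define MAX_SURFACE_BYTES (16UL * 1024 * 1024)

/* Rough cost of a w x h surface at the given depth (SDL may add a little
 * per-row padding on top of this) */
static int surface_fits(int w, int h, int bpp)
{
    unsigned long bytes = (unsigned long)w * (unsigned long)h * (unsigned long)(bpp / 8);
    return bytes <= MAX_SURFACE_BYTES;
}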
 
Hardyx said:
3000 x 3000 x 3 bytes/pixel = 25.7 megabytes!!

Lol, nice edit Hardyx ;-)

Rodolfo, like Hardyx said, you're going to be bumping up against the available memory on the Wiz; if this were the Caanoo you could open that image.
One other thing that might be worth knowing: the Wiz has about 48 MB of its 64 MB assigned to Linux. The upper memory area is usually used for the framebuffer and some other things that you wouldn't be using, so you could allocate some of that memory yourself, but you would have to handle the image data manually - SDL will never be able to use it.
But you might have to go the route of managing the image yourself anyway in order to load sections.
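If you ever do try the upper-memory route, it basically means mapping physical RAM yourself through /dev/mem, something like the sketch below. The base address and size here are made-up placeholders - the real range, and which part of it the framebuffer already occupies, has to come from the Wiz hardware docs:

Code:
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define UPPER_MEM_BASE 0x03000000UL          /* placeholder physical address */
#define UPPER_MEM_SIZE (8UL * 1024 * 1024)   /* placeholder size */

/* Map a chunk of RAM the kernel doesn't manage; returns NULL on failure */
void *map_upper_memory(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    void *p;

    if (fd < 0)
        return NULL;
    p = mmap(NULL, UPPER_MEM_SIZE, PROT_READ | PROT_WRITE,
             MAP_SHARED, fd, UPPER_MEM_BASE);
    close(fd);
    return (p == MAP_FAILED) ? NULL : p;
}

And as said, SDL won't know about that memory, so anything you put there has to be copied into a normal surface before blitting.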
 
Hardyx said:
3000 x 3000 x 3 bytes/pixel = 25.7 megabytes!! - if you are using 24 bpp images, and more if you use 32 bpp. You can use 16 bpp to reduce this. The memory free for apps on the Wiz is about 20 MB or less, if I remember correctly.
Yeah, I know it's really, really big ;) I use 16 bpp and no alpha channel, so it's a little bit smaller. The GPH developer guide says that, with SDL, there's about 47 MB of free memory available...
Hardyx said:
You could consider accessing the file by blocks, that is, reading only certain square areas; this is done in geographic software.
Hmm... I thought about this once. But I need the image to be scrollable - wherever the user wants to go... And, as SD card access is so slow on the Wiz, I couldn't see a way to load the next block in the background while scrolling without little freezes. =/
I could do something with threads, as I did in Wiz File Archive, but I can't test it until I get my cable back. I mean, I don't want to disturb LTStone every time I make a change just to check whether it still freezes temporarily. I already do that for big image loads/scales.

And I'd first need to study and handle the image libraries myself (JPG, PNG, BMP, GIF at least) to discover the dimensions without loading the image itself.


Pickle said:
So you could allocate some of that memory yourself, but you would have to handle the image data manually.
Oh, that damn SDL_image ;) libjpeg lets you, out of the box, read a scaled version of the image - SDL_image could have that kind of function... cropped loads and scaled loads.
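The libjpeg feature I mean is the scale_num/scale_denom pair - roughly like this (a bare sketch: no setjmp error handling, and the copy of each scanline into a surface is left out):

Code:
#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

/* Decode a JPEG at 1/8 of its size, so the pixel buffer is 64x smaller */
int read_jpeg_scaled(const char *path)
{
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE *f = fopen(path, "rb");
    JSAMPROW row;

    if (!f)
        return -1;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);

    cinfo.scale_num = 1;      /* request 1/8 scaling */
    cinfo.scale_denom = 8;
    jpeg_start_decompress(&cinfo);   /* output_width/output_height now hold the reduced size */

    row = malloc(cinfo.output_width * cinfo.output_components);
    while (cinfo.output_scanline < cinfo.output_height)
        jpeg_read_scanlines(&cinfo, &row, 1);   /* copy the row into your surface here */

    free(row);
    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(f);
    return 0;
}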
 
Rodolfo said:
Yeah, I know it's really, really big ;) I use 16 bpp and no alpha channel, so it's a little bit smaller. The GPH developer guide says that, with SDL, there's about 47 MB of free memory available...

That 47 MB number is misleading; that's the total amount that can be accessed by Linux, not the total amount available for user applications. The kernel, of course, is going to use part of that memory.
 
Hmm... I see.
So, for now, I will get the image dimensions without using the SDL_image library, to avoid the sudden and unannounced "Killed" message, and restrict opening the image if it is too large. [/shame]
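For PNG at least, the dimensions sit at fixed offsets in the header (the IHDR chunk always comes first), so they can be read without decoding anything. A sketch - BMP and GIF also keep the size near the start of the file, while JPEG needs a scan for the SOF0 marker:

Code:
#include <stdio.h>

/* Read width/height from a PNG file header; returns 0 on success */
int png_dimensions(const char *path, unsigned long *w, unsigned long *h)
{
    unsigned char buf[24];
    FILE *f = fopen(path, "rb");

    if (!f)
        return -1;
    if (fread(buf, 1, 24, f) != 24) { fclose(f); return -1; }
    fclose(f);

    /* 8-byte PNG signature, then chunk length + "IHDR", then big-endian width/height */
    if (buf[0] != 0x89 || buf[1] != 'P' || buf[2] != 'N' || buf[3] != 'G')
        return -1;
    *w = ((unsigned long)buf[16] << 24) | (buf[17] << 16) | (buf[18] << 8) | buf[19];
    *h = ((unsigned long)buf[20] << 24) | (buf[21] << 16) | (buf[22] << 8) | buf[23];
    return 0;
}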
 