
Thread: Why proper error handling should ALWAYS be done

  1. #41
    Join Date
    Aug 2006
    Beans
    198

    Re: Why proper error handling should ALWAYS be done

    Quote Originally Posted by j_g View Post
    Yep.

    But Wy, this likely has to do with the differences between the size of your swap file, and total RAM. You're probably running under a more constrained system, so you really are hitting the "low memory handling" of Linux whereas some of these other guys may have systems where their test cases don't trigger the behavior (so they assume it doesn't exist). I haven't looked at the algorithm Linux uses for figuring out how much it will over-commit. It may be that this is relative to how much swap space and RAM you have. I have not heard of a way of asking Linux what this amount would be. And even if you could, it still doesn't change the fact that malloc can return 0. (Can we please put this rumor to bed?)
    There is no algorithm for that.
    As I said in my post that had the code, it returns 0 if you ask for more than you have contiguous address space.

    Linux will overcommit memory to infinity, but malloc returns 0 when (per process) address space runs out.
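
    A minimal sketch of the kind of check being argued for in this thread; the 64 MB request size here is made up:
    Code:
    #include <stdio.h>
    #include <stdlib.h>

    // Sketch: check malloc's return value and recover gracefully instead of
    // dereferencing a null pointer later on.
    int main(void)
    {
        size_t n = (size_t)64 * 1024 * 1024;   // made-up 64 MB request
        char *buf = malloc(n);

        if (buf == NULL) {
            fprintf(stderr, "malloc(%zu bytes) failed, degrading gracefully\n", n);
            return 1;   // or free caches, retry with a smaller request, ...
        }

        buf[0] = 'x';   // safe: the pointer is known to be good
        free(buf);
        return 0;
    }
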
    Last edited by Tuna-Fish; November 20th, 2007 at 05:31 AM.

  2. #42
    Join Date
    May 2007
    Beans
    245
    Distro
    Ubuntu 10.04 Lucid Lynx

    Re: Why proper error handling should ALWAYS be done

    Quote Originally Posted by Tuna-Fish View Post
    If you had that much experience, and/or had the decency of spending the whole 2 minutes testing it, you'd know that in "real world applications" malloc doesn't return 0.

    Behold, proof:
    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    #define SIZE   20
    #define AMOUNT 1073741824 // 1024*1024*1024 = 1 GB per allocation

    int main(void)
    {
        int i;
        void *a[SIZE];

        // Ask for 20 x 1 GB; the pages are never written to.
        for (i = 0; i < SIZE; i++) {
            a[i] = malloc(AMOUNT);
        }

        printf("pointers:\n");
        for (i = 0; i < SIZE; i++) {
            // %x matches the 32-bit output below; %p is the portable form.
            printf("%x\n", (unsigned int)(uintptr_t)a[i]);
            free(a[i]);
        }
        return 0;
    }
    Result:
    Code:
    pointers:
    ff8b2010
    3f8b3010
    7f8b4010
    bf8b5010
    ff8b6010
    3f8b7010
    7f8b8010
    bf8b9010
    ff8ba010
    3f8bb010
    7f8bc010
    bf8bd010
    ff8be010
    3f8bf010
    7f8c0010
    bf8c1010
    ff8c2010
    3f8c3010
    7f8c4010
    bf8c5010
    My machine has 1 GB RAM and 2 GB swap.

    I just allocated 20 GB.

    malloc returns 0 when the amount it is asked to allocate is larger than the biggest chunk of contiguous address space left in the process. Not a big issue on a 64-bit machine...

    If you need to be sure you actually got the memory, use calloc. It nulls the pages as it allocates them, so if I repeated the above with calloc, it would return 0 on the third allocation. I don't, because I don't want to wait for it to zero 2 GB of swap.
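
    A minimal sketch of that calloc variant, using the same made-up 1 GB request; whether calloc really touches every page depends on the allocator, so treat this as an illustration rather than a guarantee:
    Code:
    #include <stdio.h>
    #include <stdlib.h>

    // Sketch of the calloc suggestion above: the zero fill is what is supposed
    // to force the pages to be backed, so a request that cannot be satisfied
    // tends to fail up front with a 0 return.
    int main(void)
    {
        size_t amount = (size_t)1024 * 1024 * 1024;   // made-up 1 GB request
        void *p = calloc(1, amount);

        if (p == NULL) {
            fprintf(stderr, "calloc refused the request\n");
            return 1;
        }
        // ... use the zero-filled block ...
        free(p);
        return 0;
    }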

    Close, but you still haven't demonstrated a real world application. If a real world application ever required direct access to more than a gigabyte of memory, it would not use malloc -- it would simply open a memory-mapped file. Using a memory-mapped file, even on a 32-bit machine with 64 MB of RAM, you can directly access up to 18 TERABYTES of storage. This avoids the paging file (swap space), so you also get the added benefit of not causing any thread blockage due to serialization (calls that access the paging file are serialized).
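
    On Linux, the rough analogue of the memory-mapped-file approach described above is mmap over a file offset. A minimal POSIX sketch with a hypothetical file name and window size (real code would also check the file length and keep the offset page-aligned):
    Code:
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char  *path   = "bigdata.bin";            // hypothetical file
        const size_t window = (size_t)64 * 1024 * 1024; // map 64 MB at a time
        const off_t  offset = 0;                        // must be page-aligned

        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        // Map one window of the (possibly huge) file into the address space.
        void *view = mmap(NULL, window, PROT_READ, MAP_SHARED, fd, offset);
        if (view == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        // ... read through 'view', then slide the window with munmap/mmap ...

        munmap(view, window);
        close(fd);
        return 0;
    }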

  3. #43
    Join Date
    Aug 2006
    Beans
    198

    Re: Why proper error handling should ALWAYS be done

    Quote Originally Posted by j_g View Post
    You got it.

    What Linux needs is an equivalent to Win32's VirtualAlloc so that the app can literally ask the operating system if there is actually the ability to make particular pages available right now, and if not, the OS can tell the app "No, there isn't, so don't read/write to/from the memory". How does the OS tell the app whether the requested page is actually available or not? Much like malloc(), VirtualAlloc returns non-zero if so, or 0 if not. In other words, there are separate functions to reserve memory, versus committing memory, ie, actually ask for some virtual memory (whether over-committed or not) to be physically made available right now. (Actually, VirtualAlloc serves both purposes. But you can think of it as having two, distinct functions). The OS should expect the app to be sensibly written and properly handle being told "no". (And the app should be sensibly written. It shouldn't do something like "Well I'm just going to loop around this call to VirtualAlloc, constantly asking you to commit").

    The OS shouldn't wait until the app actually accesses memory before it does the commit. It should first give the app an opportunity to say "I really do intend to access this memory now, so make sure it is backed up by either RAM or the swap file. Don't just write me out an IOU that you can't cash". Plus, by an app calling this new commit function, then the OS can assume that the app is written not to do things that made Linux resort to over-committing to begin with (ie, forking to run code that shouldn't need copies of its global resources, etc. See my other thread, which is now locked so you can't discuss these things there anymore, for more details).
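
    For anyone who hasn't used the Win32 call being described, a rough sketch of the reserve-then-commit pattern (the 64 MB size is made up):
    Code:
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T size = 64 * 1024 * 1024;

        // Step 1: reserve address space only -- no RAM or pagefile backing yet.
        void *base = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
        if (base == NULL) { fprintf(stderr, "reserve failed\n"); return 1; }

        // Step 2: ask the OS to actually back the pages; this is where it can say "no".
        if (VirtualAlloc(base, size, MEM_COMMIT, PAGE_READWRITE) == NULL) {
            fprintf(stderr, "commit refused, recovering gracefully\n");
            VirtualFree(base, 0, MEM_RELEASE);
            return 1;
        }

        ((char *)base)[0] = 1;   // safe: the commit succeeded
        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }
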
    calloc returns a pointer to memory that has been allocated and set to 0 beforehand. Seems to do what you want. The big point here is that the OS doesn't actually know how much memory it could free at any given moment if it had to. Windows does this by always going with the worst case, meaning most of the memory is actually wasted. When it asks the user to reduce memory use, I bet there are still several hundred megabytes free or freeable. Linux tries to soldier on until it really has to kill something. By the way, at that point the user will know he's low on memory, because almost no userspace code is in memory; it is executed mainly from disk, so the system is ridiculously slow.

    That's exactly where we need to start doing it (right after the malloc/free stuff gets fixed). Shared libs are where it impacts the majority of apps running on a system. If the shared libs are fixed, the Linux apps I've written are pretty much good to go. (I check for a 0 malloc return, and gracefully recover.)
    Unless your apps routinely ask for more memory than the architecture's address space can hold (the actual amount of memory on the machine, and how much of it is in use, are completely irrelevant), that is a pointless waste of time. If you want to be sure you got the memory, use calloc.

    I write lots and lots and lots of shared libs, and mine are definitely all ready to go (insofar as checking malloc for a 0 return, and gracefully handling it are concerned). I frankly would have little problem updating my code to also account for a new function to handle committing mem. It's something that I have to do on other platforms anyway, so my code already accounts for it. If other programmers did the same right from the start, they'd be well along the way too. (I'm told that Gnome has been retooling code to really improve error handling issues such as this. They even have a function that attempts to check if mem can be committed, to the extent that it can be implemented outside of the kernel, and they use that).
    As said before, such a function already exists, and its name is calloc.

  4. #44
    Join Date
    Feb 2007
    Beans
    236

    Re: Why proper error handling should ALWAYS be done

    Quote Originally Posted by Tuna-Fish View Post
    it returns 0 if you ask for more than you have contiguous address space.
    malloc returns 0 when (per process) address space runs out.
    Wait. Are you saying that his test failed because of memory fragmentation (ie, no "contiguous" address space), or because he's using a 32-bit edition of Ubuntu (and is therefore limited to about 3 GB of virtual, "per process" address space)?

  5. #45
    Join Date
    Feb 2007
    Beans
    236

    Re: Why proper error handling should ALWAYS be done

    Quote Originally Posted by Tuna-Fish View Post
    calloc returns a pointer to memory that has been allocated and set to 0 beforehand. Seems to do what you want.
    Nope. calloc() simply serves the purpose of initializing the memory to 0. It doesn't ask the OS whether the pages can actually be committed and then gracefully return 0 (to the app) if not. It has no way to do that. (But it could, if Linux had an equivalent to VirtualAlloc.) All calloc() does is perhaps make the OOM Killer kick in sooner than it might if you had just used malloc(). That could actually be even worse.

    I haven't looked over that Gnome function that checks whether memory can actually be committed, but I suspect it just does what calloc does. The Gnome developers probably figure that if Linux ever gets a function to ask the OS about page commitment, they'll substitute that, and then they'll be good to go.
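
    The GLib/Gnome function isn't named here; assuming it behaves like g_try_malloc -- which returns NULL on failure instead of aborting the way g_malloc does -- the usage is roughly:
    Code:
    #include <glib.h>
    #include <stdio.h>

    int main(void)
    {
        gsize n = 64 * 1024 * 1024;        // made-up 64 MB request
        gpointer p = g_try_malloc(n);      // returns NULL instead of aborting

        if (p == NULL) {
            fprintf(stderr, "g_try_malloc could not satisfy the request\n");
            return 1;
        }
        g_free(p);
        return 0;
    }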

    the OS doesn't actually know how much memory it could free at any given moment if it had to. Windows does this by always going with the worst case, meaning most of the memory is actually wasted.
    Oh, I wouldn't say that the memory is wasted. I want that security. In fact, I hadn't realized until recently that Linux's over-committing behavior can be effectively disabled. I'm going to investigate that, and also see if I can tune the amount of virtual address space per app, with the swap space turned off, to get it so that the OOM Killer is itself "killed". Oh, the irony.
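
    For reference, the overcommit policy being referred to lives in the vm.overcommit_memory sysctl (0 = heuristic, 1 = always overcommit, 2 = strict accounting, ie no overcommit). A tiny sketch that just reads the current setting:
    Code:
    #include <stdio.h>

    int main(void)
    {
        int mode = -1;
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");

        if (f == NULL) { perror("fopen"); return 1; }
        if (fscanf(f, "%d", &mode) == 1)
            printf("vm.overcommit_memory = %d\n", mode);
        fclose(f);
        return 0;
    }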

    When it asks the user to reduce memory use, I bet there are still several hundred megabytes free or freeable.
    Of virtual address (ie, swap) space? Undoubtedly. Windows just will not let a swap file fill up without telling the user before it happens. And I'm sure it also keeps tabs on how much unswappable memory is committed versus how much RAM you have, and will report when things get too close for comfort.

    But that can be a good thing when there's an enduser sitting there.

    Linux tries to soldier on until it actually really has to kill something.
    Well, that's because it's a Unix clone, and Unix was made to be a server OS. If there's no enduser sitting at a computer, why would the computer bother to post an error message and wait for some enduser to intervene and help free RAM? Why would it even prepare the means to do that ahead of time? That's why Linux likes to write error messages to log files: so that later on, when a human shows up well after the deed has already happened, he can go back and read about what happened. Windows log files, by contrast, exist mostly to provide MS phone/internet tech support with something to look at when the enduser calls and says "The system did this when I was using it".

    You know, there was some guy who recently suggested that it may be a good idea to make two separate Linux kernels -- one for server use, and one for desktop use. (He apparently got totally flamed for it). It would probably be at least a good idea if Linux APIs were introduced that helped in a desktop situation (and providing an app with the means to ask if particular mem can be committed right then and there, and being told "yes" or "no" would probably be a good start).

    user will know he's low on memory, because almost no userspace code is in memory; it is executed mainly from disk.
    Yeah, I read about that. It's why I want to experiment with turning my swap space off.

    If you want to be sure you got the memory, use calloc.
    Can't agree with that for reasons above. That could potentially be like waving a red flag in front of a bull, as far as keeping the OOM Killer at bay is concerned.
    Last edited by j_g; November 20th, 2007 at 06:29 AM.
