Last edited by Tuna-Fish; November 20th, 2007 at 05:31 AM.
Close, but you still haven't demonstrated a real-world application. If a real-world application ever required direct access to more than a gigabyte of memory, it would not use malloc -- it would simply open a memory-mapped file. Using a memory-mapped file, even on a 32-bit machine with 64 MB of RAM, you can directly access up to 18 terabytes of storage (by mapping a window of the file at a time, since a 32-bit process can only map so much at once). This avoids the paging file (swap space), so you also get the added benefit of not causing any thread blockage due to serialization (calls that access the paging file are serialized).
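For illustration, here's a minimal sketch of that approach on a POSIX system (the file name and window size are made up, and error handling is trimmed to the essentials):

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a scratch file into the address space and access it through a plain
 * pointer, with no read()/write() calls. A real app would map successive
 * windows of a huge file rather than the whole thing at once. */
int mapped_file_demo(void)
{
    const char *path = "/tmp/mmap_demo.bin";   /* hypothetical scratch file */
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) return -1;

    size_t len = 1 << 20;                      /* a 1 MiB window */
    if (ftruncate(fd, (off_t)len) != 0) { close(fd); return -1; }

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { close(fd); return -1; }

    memcpy(p, "hello", 6);                     /* direct pointer access */
    int ok = (strcmp(p, "hello") == 0);

    munmap(p, len);
    close(fd);
    unlink(path);
    return ok ? 0 : -1;
}
```
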
calloc returns a pointer to memory that has been allocated and set to 0 beforehand. Seems to do what you want. The big point here is that the OS doesn't actually know how much memory it could free at any given moment if it had to. Windows handles this by always assuming the worst case, meaning much of the memory is actually wasted. When it asks the user to reduce memory use, I bet there are still several hundred megabytes free or freeable. Linux tries to soldier on until it actually, really has to kill something. By the way, at that point the user will know he's low on memory, because almost no userspace code is resident in memory; it is executed mainly from disk -- as in, the system is ridiculously slow.
> unless your apps routinely ask for more memory than the address space of the architecture can handle (the actual amount of memory on the machine and how much of it is used are completely irrelevant), pointless waste of time. If you want to be sure you got the memory, use calloc.

That's exactly where we need to start doing it (right after the malloc/free stuff gets fixed). Shared libs are where it impacts the majority of apps running on a system. If the shared libs are fixed, the Linux apps I've written are fairly good to go. (I check for a 0 malloc return, and gracefully recover.)
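The "check for a 0 malloc return, and gracefully recover" pattern is simple enough to sketch (the helper name here is hypothetical, not from any particular library):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Instead of assuming malloc succeeded, report failure upward so the
 * caller can degrade gracefully (flush caches, refuse the operation,
 * show an error) rather than crash on a NULL dereference. */
char *dup_buffer(const char *src, size_t n)
{
    char *dst = malloc(n);
    if (dst == NULL)
        return NULL;          /* caller decides how to recover */
    memcpy(dst, src, n);
    return dst;
}
```
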
> As said before, such a function exists and its name is calloc.

I write lots and lots and lots of shared libs, and mine are definitely all ready to go (insofar as checking malloc for a 0 return, and gracefully handling it, are concerned). I frankly would have little problem updating my code to also account for a new function that handles committing memory. It's something I have to do on other platforms anyway, so my code already accounts for it. If other programmers had done the same right from the start, they'd be well along the way too. (I'm told that Gnome has been retooling code to really improve error handling issues such as this. They even have a function that attempts to check whether memory can be committed, to the extent that that can be implemented outside of the kernel, and they use it.)
Nope. calloc() simply serves the purpose of initializing the memory to 0. It doesn't ask the OS whether the pages can actually be committed and then gracefully return 0 (to the app) if not; it has no way to do that. (But it could, if Linux had an equivalent to VirtualAlloc.) All calloc() does is perhaps make the OOM Killer kick in sooner than it would if you had just used malloc(). That could actually be even worse.
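To be concrete about what calloc() does and doesn't promise: it guarantees zero-filled memory, nothing more. On an overcommitting kernel the physical pages behind it may still not exist until first touch. A tiny demo of the part it does guarantee:

```c
#include <assert.h>
#include <stdlib.h>

/* calloc guarantees the returned bytes read as 0. It does NOT guarantee
 * that the OS has committed physical pages for them; on an overcommitting
 * kernel the fault can still arrive later, on first write. */
int calloc_is_zeroed(size_t n)
{
    unsigned char *p = calloc(n, 1);
    if (p == NULL)
        return 0;
    int all_zero = 1;
    for (size_t i = 0; i < n; i++)
        if (p[i] != 0) { all_zero = 0; break; }
    free(p);
    return all_zero;
}
```
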
I haven't looked over that Gnome function that checks whether memory can actually be committed, but I have a suspicion it may just do what calloc does. The Gnome developers probably figure that if Linux ever gets a function to ask the OS about page commitment, they'll substitute that, and then they'll be good to go.
Oh, I wouldn't say that the memory is wasted. I want that security. In fact, I hadn't realized until recently that Linux's over-committing behavior can be effectively disabled. I'm going to investigate that, and also see if I can tune the amount of virtual address space per app, with the swap space turned off, to get it so that the OOM Killer is itself "killed". Oh, the irony.

> the os doesn't actually know how much memory it could free any given moment if it had to. Windows does this by always going the worst case, meaning most of the memory is actually wasted.

Of virtual address (i.e., swap) space? Undoubtedly. Windows just will not let a swap file fill up without telling the user before it happens. And I'm sure it also keeps tabs on how much unswappable memory is committed versus how much RAM you have, and will report when things get too close for comfort.

> When it asks the user to reduce memory use, I bet there is still several hundred megabytes free or freeable.

But that can be a good thing when there's an enduser sitting there.

> Linux tries to soldier on until it actually really has to kill something.

Well, that's because it's a Unix clone, and Unix was made to be a server OS. If there's no enduser sitting at the computer, why would the computer bother to post an error message and wait for some enduser to intervene in freeing RAM? Why would it even prepare the means to do that ahead of time? That's why Linux likes to write error messages to log files: so that later on, when a human shows up well after the deed has already happened, he can go back and read about what happened. Windows log files, on the contrary, exist mostly to give MS phone/internet tech support something to look at when the enduser calls and says "the system did this when I was using it".

You know, there was some guy who recently suggested that it might be a good idea to make two separate Linux kernels -- one for server use, and one for desktop use. (He apparently got totally flamed for it.) It would probably at least be a good idea to introduce Linux APIs that help in a desktop situation (and giving an app the means to ask whether particular memory can be committed right then and there, and being told "yes" or "no", would probably be a good start).

> user will know he's low on memory, because almost no userspace code is in memory, but is executed mainly from disk.

Yeah, I read about that. It's why I want to experiment with turning my swap space off.

> If you want to be sure you got the memory, use calloc.

Can't agree with that for the reasons above. That could potentially be like waving a red flag in front of a bull, as far as keeping the OOM Killer at bay is concerned.
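Regarding effectively disabling the over-committing behavior: the kernel exposes this through sysctls. A config sketch (run as root; exact behavior depends on kernel version):

```shell
# Strict commit accounting: the kernel refuses address-space requests it
# could not back with swap + overcommit_ratio% of RAM, so malloc()/mmap()
# fail up front instead of the OOM Killer firing later.
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80   # percent of RAM counted toward the commit limit
```

Note that with swap turned off entirely, strict accounting makes the commit limit quite small, so expect more up-front allocation failures.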
Last edited by j_g; November 20th, 2007 at 06:29 AM.