How does memory/commit charge work in Windows 10?
This question is prompted by the following regularly observed phenomena I'd like to find an explanation for:
- Current commit is regularly higher than Physical usage + Pagefile size. What's up with that? Shouldn't that be impossible? [This, it seems, might be because of compression, which transforms the question to: why doesn't the commit limit then go up or something? I.e. what's the point of compression if it doesn't help with memory usage?]
- Sometimes this reaches extreme levels where Current commit is more than double physical memory usage!
- When commit charge fills up and Windows starts asking me to close things, most of the time physical memory is at around 60%. This seems horribly inefficient.
This is on Windows 10, as reported by Process Explorer.
The ultimate question I'd like to answer is: Can I forego artificially inflating my page file to levels my starved-for-space SSD is ill-equipped to handle, just so I can actually effectively utilize my physical memory? (Or even if it wasn't as full. That is, I'd like to avoid suggestions like "Do X/Y/Z to your page file".)
windows-10 memory memory-management
asked Feb 1 '17 at 23:42
martixy
The commit charge has nothing to do with RAM usage, pagefile usage, or any combination of the two. It is essentially the total of potential storage space required, which could be in either RAM or the pagefile. The commit limit is RAM size + pagefile size, minus a small overhead. Thus, the only way to increase the commit limit is to increase the pagefile size or add RAM. Usually the former is the easier.
– LMiller7
Feb 2 '17 at 1:03
You already said as much, but also said you have no time to elaborate further. This is why I decided to ask here. The way I understand it, commit charge is "something, at some point, has asked for this much memory and the OS has said done", and commit charge reflects this whether the memory is used or not. However, that does not answer most of the questions I have. At the very least I'd like an answer to the last question, and ideally I'd like to gain a deeper and clearer picture of how memory management works in Windows.
– martixy
Feb 2 '17 at 1:19
1 Answer
This is actually pretty straightforward once you understand that commit charge represents only potential - yet "guaranteed available if you want it" - use of virtual memory, while the "private working set" - which is essentially the RAM used by "committed" memory - is actual use, as is pagefile space. (But this is not all of the use of RAM, because there are other things that use RAM).
Let's assume we're talking about 32-bit systems, so the maximum virtual address space available to each process is normally 2 GiB. (There is no substantial difference in any of the following for 64-bit systems, except that the addresses and sizes can be larger - much larger.)
Now suppose a program running in a process uses VirtualAlloc (a Win32 API) to "commit" 2 MiB of virtual memory. As you'd expect, this will show up as an additional 2 MiB of commit charge, and there are 2 MiB fewer bytes of virtual address space available in the process for future allocations.
But it will not actually use any physical memory (RAM) yet!
The VirtualAlloc call will return to the caller the start address of the allocated region; the region will be somewhere in the range 0x10000 through 0x7FFEFFFF, i.e. about 2 GiB. (The first and last 64KiB, or 0x10000 in hex, of v.a.s. in each process are never assigned.)
But again - there is no actual physical use of 2 MiB of storage yet! Not in RAM, not even in the pagefile. (There is a tiny structure called a "Virtual Address Descriptor" that describes the start v.a. and length of the private committed region.)
So there you have it! Commit charge has increased, but physical memory usage has not.
This is easy to demonstrate with the Sysinternals tool Testlimit.
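To make that concrete, here is a minimal Win32 C sketch of my own (an illustration, not the Testlimit tool itself) that commits 2 MiB and then touches a single page. If you watch the process's "Commit size" and "Private working set" columns in Process Explorer at each pause, the first rises at the VirtualAlloc call and the second only once a page is actually written:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T size = 2 * 1024 * 1024;   /* 2 MiB */

        /* Reserve and commit 2 MiB of private virtual address space.
           Commit charge goes up now; no physical page is assigned yet. */
        char *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (p == NULL) {
            /* The commitment could not be made, e.g. commit limit reached. */
            fprintf(stderr, "VirtualAlloc failed, error %lu\n", GetLastError());
            return 1;
        }

        printf("Committed 2 MiB at %p - check Commit size, then press Enter\n", (void *)p);
        getchar();

        /* Touch one byte: this demand-faults exactly one page (4 KiB) into the
           private working set. Commit charge does not change. */
        p[0] = 1;

        printf("Touched one page - check Private working set, then press Enter\n");
        getchar();

        VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }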
Sometime later, let's say the program stores something (i.e. performs a memory write) somewhere in that region. There is not yet any physical memory behind any of the region, so such an access will incur a page fault. In response, the OS's memory manager - specifically the page fault handler routine (the "pager" for short; it's called MiAccessFault) - will:
- allocate a previously-"available" physical page
- set up the page table entry for the virtual page that was accessed to associate the virtual page number with the newly-assigned physical page number
- add the physical page to the process private working set
- and dismiss the page fault, causing the instruction that raised the fault to be retried.
You have now "faulted" one page (4 KiB) into the process. And physical memory usage will increment accordingly, and "available" RAM will decrease. Commit charge does not change.
Sometime later, if that page has not been referenced for a while and demand for RAM is high, this might happen:
- the OS removes the page from the process working set.
- because it was written to since it was brought into the working set, it is put on the modified page list (otherwise it would go on the standby page list). The page table entry still reflects the physical page number of the page of RAM, but now has its "valid" bit clear, so the next time it's referenced a page fault will occur.
- when the modified page list hits a small threshold, a modified page writer thread in the "System" process wakes up and saves the contents of modified pages to the pagefile (assuming that you have one), and...
- takes those pages off of the modified list and puts them on the standby list. They are now considered part of "available" RAM; but for now they still have their original contents from when they were in their respective processes. Again, commit charge doesn't change, but RAM usage and the process private working set will go down.
- Pages on the standby list can now be repurposed, which is to say used for something else - like resolve page faults from any process on the system, or used by SuperFetch. However...
- If a process that's lost a page to the modified or standby list tries to access it again before the physical page has been repurposed (i.e. it still has its original content), the page fault is resolved without reading from disk. The page is simply put back in the process working set and the page table entry is made "valid". This is an example of a "soft" or "cheap" page fault. We say that the standby and modified lists form a system-wide cache of pages that are likely to be needed again soon.
If you don't have a pagefile, then steps 3 through 5 are changed to:
The pages sit on the modified list, since there's nowhere to write their contents.
Step 6 remains the same, since pages on the modified list can be faulted back into the process that lost them as a "soft" page fault. But if that doesn't happen the pages sit on the modified list until the process deallocates the corresponding virtual memory (maybe because the process ends).
There is other use of virtual address space, and of RAM, besides private committed memory. There is mapped virtual address space, for which the backing store is some specified file rather than the pagefile. The pages of mapped v.a.s. that are paged in are reflected in RAM usage, but mapped memory does not contribute to commit charge because the mapped file provides the backing store: Any part of the mapped region that isn't in RAM is simply kept in the mapped file. Another difference is that most file mappings can be shared between processes; a shared page that's already in memory for one process can be added to another process without going to disk for it again (another soft page fault).
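As a sketch of that difference (the file path here is just an arbitrary example), mapping an existing file read-only consumes address space, and RAM once the view is touched, but adds nothing to commit charge because the file itself is the backing store:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open and map an existing file read-only. The mapped pages are backed
           by the file, not by the pagefile, so they add no commit charge. */
        HANDLE file = CreateFileA("C:\\Windows\\notepad.exe", GENERIC_READ,
                                  FILE_SHARE_READ, NULL, OPEN_EXISTING,
                                  FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (mapping == NULL) return 1;

        const unsigned char *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (view == NULL) return 1;

        /* Reading through the view demand-faults file pages into the working set;
           if another process already has them resident, that's a soft fault. */
        printf("First bytes of the mapped file: %02x %02x\n", view[0], view[1]);

        UnmapViewOfFile(view);
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }

(A copy-on-write view, by contrast, is charged against commit for the pages that might end up privately modified.)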
And there is nonpageable v.a.s., for which there is no backing store because it's always resident in RAM. This contributes both to the reported RAM usage and to the "commit charge" as well.
This, it seems, might be because of compression, which transforms the question to: why doesn't the commit limit then go up or something? I.e. what's the point of compression if it doesn't help with memory usage?
No. It has nothing to do with compression. Memory compression in Windows is done as an intermediate step, on pages that otherwise would be written to the pagefile. In effect it allows the modified page list to use less RAM to contain more stuff, at some cost in CPU time but with far greater speed than pagefile I/O (even to an SSD). Since commit limit is calculated from total RAM + pagefile size, not RAM usage + pagefile usage, this doesn't affect commit limit. Commit limit doesn't change with how much RAM is in use or what it's in use for.
When commit charge fills up and Windows starts asking me to close things, most of the time physical memory is at around 60%. This seems horribly inefficient.
It isn't that Windows is being inefficient. It's the apps you're running. They're committing a lot more v.a.s. than they're actually using.
The reason for the entire "commit charge" and "commit limit" mechanism is this: When I call VirtualAlloc, I am supposed to check the return value to see if it's non-zero. If it's zero, it means that my alloc attempt failed, likely because it would have caused commit charge to exceed commit limit. I'm supposed to do something reasonable like try committing less, or exiting the program cleanly.
If VirtualAlloc returned nonzero, i.e. an address, that tells me that the system has made a guarantee - a commitment, if you will - that however many bytes I asked for, starting at that address, will be available if I choose to access them; that there is someplace to put it all - either RAM or the pagefile. i.e. there is no reason to expect any sort of failure in accessing anything within that region. That's good, because it would not be reasonable to expect me to check for "did it work?" on every access to the allocated region.
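A quick way to see that guarantee being refused is to commit in a loop, without touching anything, until VirtualAlloc returns NULL - roughly what the Testlimit tool does when told to leak committed memory. This is only a sketch: it deliberately drives commit charge up to the commit limit (in a 32-bit build the 2 GiB address space may run out first), so only run it somewhere that's acceptable:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Keep committing 64 MiB chunks without touching them. Each success
           raises commit charge while RAM usage barely moves; the loop ends
           when granting another chunk would exceed the commit limit (or the
           process runs out of address space). */
        const SIZE_T chunk = 64 * 1024 * 1024;
        SIZE_T committed = 0;

        for (;;) {
            void *p = VirtualAlloc(NULL, chunk, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
            if (p == NULL)
                break;   /* the OS declined to make the commitment */
            committed += chunk;
        }

        printf("Commit refused after %zu MiB; compare commit charge with RAM usage now\n",
               committed / (1024 * 1024));
        getchar();

        /* Everything is released when the process exits. */
        return 0;
    }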
The "cash lending bank" analogy
It's a little like a bank offering credit, but strictly on a cash-on-hand basis. (This is not, of course, how real banks work.)
Suppose the bank starts with a million dollars cash on hand. People go to the bank and ask for lines of credit in varying amounts. Say the bank approves me for a $100,000 line of credit (I create a private committed region); that doesn't mean that any cash has actually left the vault. If I later actually take out a loan for, say, $20,000 (I access a subset of the region), that does remove cash from the bank.
But whether I take out any loans or not, the fact that I've been approved for a maximum of $100K means the bank can subsequently only approve another $900,000 worth of lines of credit, total, for all of its customers. The bank won't approve credit in excess of its cash reserves (i.e. it won't overcommit them), since that would mean the bank might have to turn a previously-approved borrower away when they later show up intending to take out their loan. That would be very bad because the bank already committed to allowing those loans, and the bank's reputation would plummet.
Yes, this is "inefficient" in terms of the bank's use of that cash. And the greater the disparity between the lines of credit the customers are approved for and the amounts they actually borrow, the less efficient it is. But that inefficiency is not the bank's fault; it's the customers' "fault" for asking for such high lines of credit while only taking out small loans.
The bank's business model is that it simply cannot turn down a previously-approved borrower when they show up to get their loan - to do so would be "fatal" to the customer. That's why the bank keeps careful track of how much of the loan fund has been "committed".
I suppose that expanding the pagefile, or adding another one, would be like the bank going out and getting more cash and adding it to the loan fund.
If you want to model mapped and nonpageable memory in this analogy... nonpageable is like a small loan that you are required to take out and keep out when you open your account. (The nonpageable structures that define each new process.) Mapped memory is like bringing your own cash along (the file that's being mapped) and depositing it in the bank, then taking out only parts of it at a time (paging it in). Why not page it all in at once? I don't know, maybe you don't have room in your wallet for all that cash. :) This doesn't affect others' ability to borrow money because the cash you deposited is in your own account, not the general loan fund. This analogy starts breaking down about there, especially when we start thinking about shared memory, so don't push it too far.
Back to the Windows OS: The fact that you have much of your RAM "available" has nothing to do with commit charge and commit limit. If you're near the commit limit that means the OS has already committed - i.e. promised to make available when asked for - that much storage. It doesn't have to be all in use yet for the limit to be enforced.
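You can watch the two diverge with a few lines of code. This sketch uses GetPerformanceInfo (declared in psapi.h; older toolchains may need psapi.lib at link time) to print the system-wide commit charge and commit limit next to the physical-memory counters:

    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    int main(void)
    {
        PERFORMANCE_INFORMATION pi;
        pi.cb = sizeof(pi);
        if (!GetPerformanceInfo(&pi, sizeof(pi)))
            return 1;

        /* All counts are in pages; PageSize converts them to bytes. */
        SIZE_T page = pi.PageSize;
        printf("Commit charge     : %zu MiB\n", pi.CommitTotal * page >> 20);
        printf("Commit limit      : %zu MiB\n", pi.CommitLimit * page >> 20);
        printf("Physical total    : %zu MiB\n", pi.PhysicalTotal * page >> 20);
        printf("Physical available: %zu MiB\n", pi.PhysicalAvailable * page >> 20);
        return 0;
    }

On a machine in the state described in the question, this would show commit charge near the commit limit while a large fraction of physical memory is still available.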
Can I forego artificially inflating my page file to levels my starved-for-space SSD is ill-equipped to handle just so I can actually effectively utilize my physical memory? (Or even if it wasn't as full. That is, I'd like to avoid suggestions like "Do X/Y/Z to your page file".)
Well, I'm sorry, but if you're running into commit limit, there are just three things you can do:
- Increase your RAM.
- Increase your pagefile size.
- Run less stuff at one time.
Re option 2: You could put a second pagefile on a hard drive. If the apps are not actually using all that committed memory - which apparently they're not, since you're seeing so much free RAM - you won't actually be accessing that pagefile much, so putting it on a hard drive won't hurt performance. If the slowness of a hard drive would still bother you, another option is to get a small and therefore cheap second SSD and put your second pagefile on that. The one "showstopper" would be a laptop with no way to add a second "non-removable" drive. (Windows will not let you put pagefiles on removable drives, like anything connected with USB.)
Here is another answer I wrote that explains things from a different direction.
p.s.: You asked about Windows 10, but I should tell you that it works the same way in every version of the NT family, back to NT 3.1, and prerelease versions too. What has likely changed is Windows' default setting for pagefile size, from 1.5x or 1x RAM size to much smaller. I believe this was a mistake.
+1 This is the answer I wish I had written. This is just how a modern OS works. It wasn't a problem before SSDs because we didn't have much RAM and we had lots of hard drive space. Now that we have lots of RAM and not as much mass storage space on some machines, having sufficient paging file space is becoming an issue again. Make it a priority so your machine can make efficient use of RAM.
– David Schwartz
Feb 2 '17 at 6:28
@DavidSchwartz: I've seen many of your answers on MM issues, and I have to say, coming from you that is high praise. Thank you.
– Jamie Hanrahan
Feb 2 '17 at 9:10
The light has shone down. It all (well, most) finally makes sense. I read this and your other answer; each offered new insights. I'm even tempted to try and track down the book. Notably, I did ask this question first in none other than the internals forum (as hinted by the comment exchange under the question), but it seems slightly dead. What David said is also true. This question is a little backdated, in a sense, because I got a new SSD these days and can afford the extra pagefile, but it was a real problem with my previous extra-small drive. ...Continued below...
– martixy
Feb 2 '17 at 16:18
...continued from above. Incidentally, in my own research I discovered that this is not the only way to do things, as Linux and many VM hypervisors have an option called "overcommit". In fact it seems Windows is in the minority when it comes to its approach to memory allocation. Oh, and when thinking about it I came up with more or less the same banking analogy. The coincidence is uncanny.
– martixy
Feb 2 '17 at 16:20
But those "overcommit" options don't always refer to this specific concept. Virtual memory in a general-purpose OS, once the OS has told you "ok, allocation succeeded," is supposed to act to the programmer just like physical memory... except for the slight delay that might occur now and then when a page fault happens. The trouble with allowing overcommit of the physical storage that backs virtual memory is that simple memory refs like i = *j; might raise fatal errors, even if i is on your stack and j was previously returned as a supposedly valid pointer. (contd...)
– Jamie Hanrahan
Feb 2 '17 at 18:00
|
show 3 more comments
Your Answer
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "3"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsuperuser.com%2fquestions%2f1174229%2fhow-does-memory-commit-charge-work-in-windows-10%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
1 Answer
1
active
oldest
votes
1 Answer
1
active
oldest
votes
active
oldest
votes
active
oldest
votes
This is actually pretty straightforward once you understand that commit charge represents only potential - yet "guaranteed available if you want it" - use of virtual memory, while the "private working set" - which is essentially the RAM used by "committed" memory - is actual use, as is pagefile space. (But this is not all of the use of RAM, because there are other things that use RAM).
Let's assume we're talking about 32-bit systems, so the maximum virtual address space available to each process is normally 2 GiB. (There is no substantial difference in any of the following for 64-bit systems, except that the addresses and sizes can be larger - much larger.)
Now suppose a program running in a process uses VirtualAlloc (a Win32 API) to "commit" 2 MiB of virtual memory. As you'd expect, this will show up as an additional 2 MiB of commit charge, and there are 2 MiB fewer bytes of virtual address space available in the process for future allocations.
But it will not actually use any physical memory (RAM) yet!
The VirtualAlloc call will return to the caller the start address of the allocated region; the region will be somewhere in the range 0x10000 through 0x7FFEFFFF, i.e. about 2 GiB. (The first and last 64KiB, or 0x10000 in hex, of v.a.s. in each process are never assigned.)
But again - there is no actual physical use of 2 MiB of storage yet! Not in RAM, not even in the pagefile. (There is a tiny structure called a "Virtual Address Descriptor" that describes the start v.a. and length of the private committed region.)
So there you have it! Commit charge has increased, but physical memory usage has not.
This is easy to demonstrate with the sysinternals tool testlimit
.
Sometime later, let's say the program stores something (ie a memory write operation) in that region (doesn't matter where). There is not yet any physical memory underneath any of the region, so such an access will incur a page fault. In response to which the OS's memory manager, specifically the page fault handler routine (the "pager" for short... it's called MiAccessFault), will:
- allocate a previously-"available" physical page
- set up the page table entry for the virtual page that was accessed to associate the virtual page number with the newly-assigned physical page number
- add the physical page to the process private working set
- and dismiss the page fault, causing the instruction that raised the fault to be retried.
You have now "faulted" one page (4 KiB) into the process. And physical memory usage will increment accordingly, and "available" RAM will decrease. Commit charge does not change.
Sometime later, if that page has not been referenced for a while and demand for RAM is high, this might happen:
- the OS removes the page from the process working set.
- because it was written to since it was brought into the working set, it is put on the modified page list (otherwise it would go on the standby page list). The page table entry still reflects the physical page number of the page of RAM, but now has its "valid" bit clear, so the next time it's referenced a page fault will occur
- when the modified page list hits a small threshold, a modified page writer thread in the "System" process wakes up and saves the contents of modified pages to the pagefile (assuming that you have one), and...
- takes those pages off of the modified list and puts them on the standby list. They are now considered part of "available" RAM; but for now they still have their original contents from when they were in their respective processes. Again, commit charge doesn't change, but RAM usage and the process private working set will go down.
- Pages on the standby list can now be repurposed, which is to say used for something else - like resolve page faults from any process on the system, or used by SuperFetch. However...
- If a process that's lost a page to the modified or standby list tries to access it again before the physical page has been repurposed (i.e. it still has its original content), the page fault is resolved without reading from disk. The page is simply put back in the process working set and the page table entry is made "valid". This is an example of a "soft" or "cheap" page fault. We say that the standby and modified lists form a system-wide cache of pages that are likely to be needed again soon.
If you don't have a pagefile, then steps 3 through 5 are changed to:
The pages sit on the modified list, since there's nowhere to write their contents.
The pages sit on the modified list, since there's nowhere to write their contents.
The pages sit on the modified list, since there's nowhere to write their contents.
Step 6 remains the same, since pages on the modified list can be faulted back into the process that lost them as a "soft" page fault. But if that doesn't happen the pages sit on the modified list until the process deallocates the corresponding virtual memory (maybe because the process ends).
There is other use of virtual address space, and of RAM, besides private committed memory. There is mapped virtual address space, for which the backing store is some specified file rather than the pagefile. The pages of mapped v.a.s. that are paged in are reflected in RAM usage, but mapped memory does not contribute to commit charge because the mapped file provides the backing store: Any part of the mapped region that isn't in RAM is simply kept in the mapped file. Another difference is that most file mappings can be shared between processes; a shared page that's already in memory for one process can be added to another process without going to disk for it again (another soft page fault).
And there is nonpageable v.a.s., for which there is no backing store because it's always resident in RAM. This contributes both to the reported RAM usage and to the "commit charge" as well.
This it seems might be because of compression. Which transforms the question to: Why doesn't commit limit then go up or something? I.e. what's the point of compression if it doesn't help with memory usage?
No. It has nothing to do with compression. Memory compression in Windows is done as an intermediate step, on pages that otherwise would be written to the pagefile. In effect it allows the modified page list to use less RAM to contain more stuff, at some cost in CPU time but with far greater speed than pagefile I/O (even to an SSD). Since commit limit is calculated from total RAM + pagefile size, not RAM usage + pagefile usage, this doesn't affect commit limit. Commit limit doesn't change with how much RAM is in use or what it's in use for.
When commit charge fills up and windows starts asking me to close things, most of the time physical memory is at around 60%. This seems horribly inefficient.
It isn't that Windows is being inefficient. It's the apps you're running. They're committing a lot more v.a.s. than they're actually using.
The reason for the entire "commit charge" and "commit limit" mechanism is this: When I call VirtualAlloc, I am supposed to check the return value to see if it's non-zero. If it's zero, it means that my alloc attempt failed, likely because it would have caused commit charge to exceed commit limit. I'm supposed to do something reasonable like try committing less, or exiting the program cleanly.
If VirtualAlloc returned nonzero, i.e. an address, that tells me that the system has made a guarantee - a commitment, if you will - that however many bytes I asked for, starting at that address, will be available if I choose to access them; that there is someplace to put it all - either RAM or the pagefile. i.e. there is no reason to expect any sort of failure in accessing anything within that region. That's good, because it would not be reasonable to expect me to check for "did it work?" on every access to the allocated region.
The "cash lending bank" analogy
It's a little like a bank offering credit, but strictly on a cash-on-hand basis. (This is not, of course, how real banks work.)
Suppose the bank starts with a million dollars cash on hand. People go to the bank and ask for lines of credit in varying amounts. Say the bank approves me for a $100,000 line of credit (I create a private committed region); that doesn't mean that any cash has actually left the vault. If I later actually take out a loan for, say, $20,000 (I access a subset of the region), that does remove cash from the bank.
But whether I take out any loans or not, the fact that I've been approved for a maximum of $100K means the bank can subsequently only approve another $900,000 worth of lines of credit, total, for all of its customers. The bank won't approve credit in excess of its cash reserves (ie it won't overcommit them), since that would mean the bank might have to turn a previously-approved borrower away when they later show up intending to take out their loan. That would be very bad because the bank already committed to allowing those loans, and the bank's reputation would plummet.
Yes, this is "inefficient" in terms of the bank's use of that cash. And the greater the disparity between the lines of credit the customers are approved for and the amounts they actually loan, the less efficient it is. But that inefficiency is not the bank's fault; it's the customers' "fault" for asking for such high lines of credit but only taking out small loans.
The bank's business model is that it simply cannot turn down a previously-approved borrower when they show up to get their loan - to do so would be "fatal" to the customer. That's why the bank keeps careful track of how much of the loan fund has been "committed".
I suppose that expanding the pagefile, or adding another one, would be like the bank going out and getting more cash and adding it to the loan fund.
If you want to model mapped and nonpageable memory in this analogy... nonpageable is like a small loan that you are required to take out and keep out when you open your account. (The nonpageable structures that define each new process.) Mapped memory is like bringing your own cash along (the file that's being mapped) and depositing it in the bank, then taking out only parts of it at a time (paging it in). Why not page it all in at once? I don't know, maybe you don't have room in your wallet for all that cash. :) This doesn't affect others' ability to borrow money because the cash you deposited is in your own account, not the general loan fund. This analogy starts breaking down about there, especially when we start thinking about shared memory, so don't push it too far.
Back to the Windows OS: The fact that you have much of your RAM "available" has nothing to do with commit charge and commit limit. If you're near the commit limit that means the OS has already committed - i.e. promised to make available when asked for - that much storage. It doesn't have to be all in use yet for the limit to be enforced.
Can I forego artificially inflating my page file to levels my starved-for-space SSD is ill-equipped to handle just so I can actually effectively utilize my physical memory? (Or even if it wasn't as full. That is, I'd like to avoid suggestions like "Do X/Y/Z to your page file".)
Well, I'm sorry, but if you're running into commit limit, there are just three things you can do:
- Increase your RAM.
- Increase your pagefile size.
- Run less stuff at one time.
Re option 2: You could put a second pagefile on a hard drive. If the apps are not actually using all that committed memory - which apparently they're not, since you're seeing so much free RAM - you won't actually be accessing that pagefile much, so putting it on a hard drive won't hurt performance. If the slowness of a hard drive would still bother you, another option is to get a small and therefore cheap second SSD and put your second pagefile on that. The one "showstopper" would be a laptop with no way to add a second "non-removable" drive. (Windows will not let you put pagefiles on removeable drives, like anything connected with USB.)
Here is another answer I wrote that explains things from a different direction.
p.s.: You asked about Windows 10, but I should tell you that it works the same way in every version of the NT family, back to NT 3.1, and prerelease versions too. What has likely changed is Windows' default setting for pagefile size, from 1.5x or 1x RAM size to much smaller. I believe this was a mistake.
1
+1 This is the answer I wish I had written. This is just how a modern OS works. It wasn't a problem before SSDs because we didn't have much RAM and we had lots of hard drive space. Now that we have lots of RAM and not as much mass storage space on some machines, having sufficient paging file space is becoming an issue again. Make it a priority so your machine can make efficient use of RAM.
– David Schwartz
Feb 2 '17 at 6:28
@DavidSchwartz: I've seen many of your answers on MM issues, and I have to say, coming from you that is high praise. Thank you.
– Jamie Hanrahan
Feb 2 '17 at 9:10
The light has shone down. It all(well, most) finally makes sense. Read this and your other answer, each offered new insights. I'm even tempted try and track down the book. Notably, I did ask this question first in none other than the internals forum(as hinted by the comments exchange under the question), but it seems slightly dead. What David said is also true. This question is a little backdated, in a sense, because I got a new SSD these days and can afford the extra pagefile, but it was a real problem with my previous extra-small drive. ...Continued below...
– martixy
Feb 2 '17 at 16:18
...continued from above. Incidentally, in my own research I discovered that this is not the only way to do things, as linux and many VM hypervisors have an option called "overcommit". In fact it seems Windows is in the minority when it comes to its approach of memory allocation. Oh, and when thinking about it I came up with more or less the same banking analogy. The coincidence is uncanny.
– martixy
Feb 2 '17 at 16:20
But those "overcommit" options don't always refer to this specific concept. Virtual memory in a general purpose OS, once the OS has told you "ok, allocation succeeded," is supposed to act to the programmer just like physical memory... except for the slight delay that might occur now and then when a pagefault happens. The trouble with allowing overcommit of all of the physical storage that can realize virtual is that simple memory refs like i = *j; might raise fatal errors, even if i is on your stack and j was previously returned as a supposedly valid pointer. (contd...)
– Jamie Hanrahan
Feb 2 '17 at 18:00
|
show 3 more comments
This is actually pretty straightforward once you understand that commit charge represents only potential - yet "guaranteed available if you want it" - use of virtual memory, while the "private working set" - which is essentially the RAM used by "committed" memory - is actual use, as is pagefile space. (But this is not all of the use of RAM, because there are other things that use RAM).
Let's assume we're talking about 32-bit systems, so the maximum virtual address space available to each process is normally 2 GiB. (There is no substantial difference in any of the following for 64-bit systems, except that the addresses and sizes can be larger - much larger.)
Now suppose a program running in a process uses VirtualAlloc (a Win32 API) to "commit" 2 MiB of virtual memory. As you'd expect, this will show up as an additional 2 MiB of commit charge, and there are 2 MiB fewer bytes of virtual address space available in the process for future allocations.
But it will not actually use any physical memory (RAM) yet!
The VirtualAlloc call will return to the caller the start address of the allocated region; the region will be somewhere in the range 0x10000 through 0x7FFEFFFF, i.e. about 2 GiB. (The first and last 64KiB, or 0x10000 in hex, of v.a.s. in each process are never assigned.)
But again - there is no actual physical use of 2 MiB of storage yet! Not in RAM, not even in the pagefile. (There is a tiny structure called a "Virtual Address Descriptor" that describes the start v.a. and length of the private committed region.)
So there you have it! Commit charge has increased, but physical memory usage has not.
This is easy to demonstrate with the sysinternals tool testlimit
.
Sometime later, let's say the program stores something (ie a memory write operation) in that region (doesn't matter where). There is not yet any physical memory underneath any of the region, so such an access will incur a page fault. In response to which the OS's memory manager, specifically the page fault handler routine (the "pager" for short... it's called MiAccessFault), will:
- allocate a previously-"available" physical page
- set up the page table entry for the virtual page that was accessed to associate the virtual page number with the newly-assigned physical page number
- add the physical page to the process private working set
- and dismiss the page fault, causing the instruction that raised the fault to be retried.
You have now "faulted" one page (4 KiB) into the process. And physical memory usage will increment accordingly, and "available" RAM will decrease. Commit charge does not change.
Sometime later, if that page has not been referenced for a while and demand for RAM is high, this might happen:
- the OS removes the page from the process working set.
- because it was written to since it was brought into the working set, it is put on the modified page list (otherwise it would go on the standby page list). The page table entry still reflects the physical page number of the page of RAM, but now has its "valid" bit clear, so the next time it's referenced a page fault will occur
- when the modified page list hits a small threshold, a modified page writer thread in the "System" process wakes up and saves the contents of modified pages to the pagefile (assuming that you have one), and...
- takes those pages off of the modified list and puts them on the standby list. They are now considered part of "available" RAM; but for now they still have their original contents from when they were in their respective processes. Again, commit charge doesn't change, but RAM usage and the process private working set will go down.
- Pages on the standby list can now be repurposed, which is to say used for something else - like resolve page faults from any process on the system, or used by SuperFetch. However...
- If a process that's lost a page to the modified or standby list tries to access it again before the physical page has been repurposed (i.e. it still has its original content), the page fault is resolved without reading from disk. The page is simply put back in the process working set and the page table entry is made "valid". This is an example of a "soft" or "cheap" page fault. We say that the standby and modified lists form a system-wide cache of pages that are likely to be needed again soon.
If you don't have a pagefile, then steps 3 through 5 are changed to:
The pages sit on the modified list, since there's nowhere to write their contents.
The pages sit on the modified list, since there's nowhere to write their contents.
The pages sit on the modified list, since there's nowhere to write their contents.
Step 6 remains the same, since pages on the modified list can be faulted back into the process that lost them as a "soft" page fault. But if that doesn't happen the pages sit on the modified list until the process deallocates the corresponding virtual memory (maybe because the process ends).
There is other use of virtual address space, and of RAM, besides private committed memory. There is mapped virtual address space, for which the backing store is some specified file rather than the pagefile. The pages of mapped v.a.s. that are paged in are reflected in RAM usage, but mapped memory does not contribute to commit charge because the mapped file provides the backing store: Any part of the mapped region that isn't in RAM is simply kept in the mapped file. Another difference is that most file mappings can be shared between processes; a shared page that's already in memory for one process can be added to another process without going to disk for it again (another soft page fault).
And there is nonpageable v.a.s., for which there is no backing store because it's always resident in RAM. This contributes both to the reported RAM usage and to the "commit charge" as well.
This it seems might be because of compression. Which transforms the question to: Why doesn't commit limit then go up or something? I.e. what's the point of compression if it doesn't help with memory usage?
No. It has nothing to do with compression. Memory compression in Windows is done as an intermediate step, on pages that otherwise would be written to the pagefile. In effect it allows the modified page list to use less RAM to contain more stuff, at some cost in CPU time but with far greater speed than pagefile I/O (even to an SSD). Since commit limit is calculated from total RAM + pagefile size, not RAM usage + pagefile usage, this doesn't affect commit limit. Commit limit doesn't change with how much RAM is in use or what it's in use for.
When commit charge fills up and windows starts asking me to close things, most of the time physical memory is at around 60%. This seems horribly inefficient.
It isn't that Windows is being inefficient. It's the apps you're running. They're committing a lot more v.a.s. than they're actually using.
The reason for the entire "commit charge" and "commit limit" mechanism is this: When I call VirtualAlloc, I am supposed to check the return value to see if it's non-zero. If it's zero, it means that my alloc attempt failed, likely because it would have caused commit charge to exceed commit limit. I'm supposed to do something reasonable like try committing less, or exiting the program cleanly.
If VirtualAlloc returned nonzero, i.e. an address, that tells me that the system has made a guarantee - a commitment, if you will - that however many bytes I asked for, starting at that address, will be available if I choose to access them; that there is someplace to put it all - either RAM or the pagefile. i.e. there is no reason to expect any sort of failure in accessing anything within that region. That's good, because it would not be reasonable to expect me to check for "did it work?" on every access to the allocated region.
The "cash lending bank" analogy
It's a little like a bank offering credit, but strictly on a cash-on-hand basis. (This is not, of course, how real banks work.)
Suppose the bank starts with a million dollars cash on hand. People go to the bank and ask for lines of credit in varying amounts. Say the bank approves me for a $100,000 line of credit (I create a private committed region); that doesn't mean that any cash has actually left the vault. If I later actually take out a loan for, say, $20,000 (I access a subset of the region), that does remove cash from the bank.
But whether I take out any loans or not, the fact that I've been approved for a maximum of $100K means the bank can subsequently only approve another $900,000 worth of lines of credit, total, for all of its customers. The bank won't approve credit in excess of its cash reserves (ie it won't overcommit them), since that would mean the bank might have to turn a previously-approved borrower away when they later show up intending to take out their loan. That would be very bad because the bank already committed to allowing those loans, and the bank's reputation would plummet.
Yes, this is "inefficient" in terms of the bank's use of that cash. And the greater the disparity between the lines of credit the customers are approved for and the amounts they actually loan, the less efficient it is. But that inefficiency is not the bank's fault; it's the customers' "fault" for asking for such high lines of credit but only taking out small loans.
The bank's business model is that it simply cannot turn down a previously-approved borrower when they show up to get their loan - to do so would be "fatal" to the customer. That's why the bank keeps careful track of how much of the loan fund has been "committed".
I suppose that expanding the pagefile, or adding another one, would be like the bank going out and getting more cash and adding it to the loan fund.
If you want to model mapped and nonpageable memory in this analogy... nonpageable is like a small loan that you are required to take out and keep out when you open your account. (The nonpageable structures that define each new process.) Mapped memory is like bringing your own cash along (the file that's being mapped) and depositing it in the bank, then taking out only parts of it at a time (paging it in). Why not page it all in at once? I don't know, maybe you don't have room in your wallet for all that cash. :) This doesn't affect others' ability to borrow money because the cash you deposited is in your own account, not the general loan fund. This analogy starts breaking down about there, especially when we start thinking about shared memory, so don't push it too far.
Back to the Windows OS: The fact that you have much of your RAM "available" has nothing to do with commit charge and commit limit. If you're near the commit limit that means the OS has already committed - i.e. promised to make available when asked for - that much storage. It doesn't have to be all in use yet for the limit to be enforced.
Can I forego artificially inflating my page file to levels my starved-for-space SSD is ill-equipped to handle just so I can actually effectively utilize my physical memory? (Or even if it wasn't as full. That is, I'd like to avoid suggestions like "Do X/Y/Z to your page file".)
Well, I'm sorry, but if you're running into commit limit, there are just three things you can do:
- Increase your RAM.
- Increase your pagefile size.
- Run less stuff at one time.
Re option 2: You could put a second pagefile on a hard drive. If the apps are not actually using all that committed memory - which apparently they're not, since you're seeing so much free RAM - you won't actually be accessing that pagefile much, so putting it on a hard drive won't hurt performance. If the slowness of a hard drive would still bother you, another option is to get a small and therefore cheap second SSD and put your second pagefile on that. The one "showstopper" would be a laptop with no way to add a second "non-removable" drive. (Windows will not let you put pagefiles on removeable drives, like anything connected with USB.)
Here is another answer I wrote that explains things from a different direction.
p.s.: You asked about Windows 10, but I should tell you that it works the same way in every version of the NT family, back to NT 3.1, and prerelease versions too. What has likely changed is Windows' default setting for pagefile size, from 1.5x or 1x RAM size to much smaller. I believe this was a mistake.
1
+1 This is the answer I wish I had written. This is just how a modern OS works. It wasn't a problem before SSDs because we didn't have much RAM and we had lots of hard drive space. Now that we have lots of RAM and not as much mass storage space on some machines, having sufficient paging file space is becoming an issue again. Make it a priority so your machine can make efficient use of RAM.
– David Schwartz
Feb 2 '17 at 6:28
@DavidSchwartz: I've seen many of your answers on MM issues, and I have to say, coming from you that is high praise. Thank you.
– Jamie Hanrahan
Feb 2 '17 at 9:10
The light has shone down. It all(well, most) finally makes sense. Read this and your other answer, each offered new insights. I'm even tempted try and track down the book. Notably, I did ask this question first in none other than the internals forum(as hinted by the comments exchange under the question), but it seems slightly dead. What David said is also true. This question is a little backdated, in a sense, because I got a new SSD these days and can afford the extra pagefile, but it was a real problem with my previous extra-small drive. ...Continued below...
– martixy
Feb 2 '17 at 16:18
...continued from above. Incidentally, in my own research I discovered that this is not the only way to do things, as linux and many VM hypervisors have an option called "overcommit". In fact it seems Windows is in the minority when it comes to its approach of memory allocation. Oh, and when thinking about it I came up with more or less the same banking analogy. The coincidence is uncanny.
– martixy
Feb 2 '17 at 16:20
But those "overcommit" options don't always refer to this specific concept. Virtual memory in a general purpose OS, once the OS has told you "ok, allocation succeeded," is supposed to act to the programmer just like physical memory... except for the slight delay that might occur now and then when a pagefault happens. The trouble with allowing overcommit of all of the physical storage that can realize virtual is that simple memory refs like i = *j; might raise fatal errors, even if i is on your stack and j was previously returned as a supposedly valid pointer. (contd...)
– Jamie Hanrahan
Feb 2 '17 at 18:00
|
show 3 more comments
This is actually pretty straightforward once you understand that commit charge represents only potential - yet "guaranteed available if you want it" - use of virtual memory, while the "private working set" - which is essentially the RAM used by "committed" memory - is actual use, as is pagefile space. (But this is not all of the use of RAM, because there are other things that use RAM).
Let's assume we're talking about 32-bit systems, so the maximum virtual address space available to each process is normally 2 GiB. (There is no substantial difference in any of the following for 64-bit systems, except that the addresses and sizes can be larger - much larger.)
Now suppose a program running in a process uses VirtualAlloc (a Win32 API) to "commit" 2 MiB of virtual memory. As you'd expect, this will show up as an additional 2 MiB of commit charge, and there are 2 MiB fewer bytes of virtual address space available in the process for future allocations.
But it will not actually use any physical memory (RAM) yet!
The VirtualAlloc call will return to the caller the start address of the allocated region; the region will be somewhere in the range 0x10000 through 0x7FFEFFFF, i.e. about 2 GiB. (The first and last 64KiB, or 0x10000 in hex, of v.a.s. in each process are never assigned.)
But again - there is no actual physical use of 2 MiB of storage yet! Not in RAM, not even in the pagefile. (There is a tiny structure called a "Virtual Address Descriptor" that describes the start v.a. and length of the private committed region.)
So there you have it! Commit charge has increased, but physical memory usage has not.
This is easy to demonstrate with the Sysinternals tool testlimit.
Sometime later, let's say the program stores something (i.e. a memory write operation) somewhere in that region (it doesn't matter where). There is not yet any physical memory underneath any part of the region, so the access will incur a page fault. In response, the OS's memory manager - specifically the page fault handler routine (the "pager" for short; it's called MiAccessFault) - will:
- allocate a previously-"available" physical page
- set up the page table entry for the virtual page that was accessed to associate the virtual page number with the newly-assigned physical page number
- add the physical page to the process private working set
- and dismiss the page fault, causing the instruction that raised the fault to be retried.
You have now "faulted" one page (4 KiB) into the process. And physical memory usage will increment accordingly, and "available" RAM will decrease. Commit charge does not change.
Sometime later, if that page has not been referenced for a while and demand for RAM is high, this might happen:
1. The OS removes the page from the process working set.
2. Because it was written to since it was brought into the working set, it is put on the modified page list (otherwise it would go on the standby page list). The page table entry still holds the physical page number of that page of RAM, but its "valid" bit is now clear, so the next reference to it will raise a page fault.
3. When the modified page list hits a small threshold, a modified page writer thread in the "System" process wakes up and saves the contents of the modified pages to the pagefile (assuming you have one), and...
4. ...takes those pages off the modified list and puts them on the standby list. They are now considered part of "available" RAM, but for now they still hold their original contents from when they were in their respective processes. Again, commit charge doesn't change, but RAM usage and the process private working set go down.
5. Pages on the standby list can now be repurposed - that is, used for something else, such as resolving page faults from any process on the system, or by SuperFetch. However...
6. If a process that has lost a page to the modified or standby list accesses it again before the physical page has been repurposed (i.e. it still holds its original contents), the page fault is resolved without reading from disk. The page is simply put back in the process working set and the page table entry is made "valid" again. This is an example of a "soft" or "cheap" page fault. We say that the standby and modified lists form a system-wide cache of pages that are likely to be needed again soon.
If you don't have a pagefile, then steps 3 through 5 are changed to:
- The pages sit on the modified list, since there's nowhere to write their contents.
Step 6 remains the same, since pages on the modified list can be faulted back into the process that lost them as a "soft" page fault. But if that doesn't happen the pages sit on the modified list until the process deallocates the corresponding virtual memory (maybe because the process ends).
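You can approximate this sequence from user mode, too. The sketch below is illustrative only: EmptyWorkingSet is a real psapi call that asks the OS to trim the calling process's working set (roughly standing in for step 1), after which the touched page moves to the modified/standby lists; on a lightly loaded machine the re-access is then resolved as a soft fault, with no disk read.

    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    int main(void)
    {
        /* Commit one page and touch it so it sits in our working set. */
        volatile char *p = VirtualAlloc(NULL, 4096,
                                        MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (!p) return 1;
        p[0] = 42;

        /* Ask the OS to trim our working set. The modified page leaves the
           working set for the modified list (and later the standby list),
           but its contents stay in RAM until the page is repurposed. */
        EmptyWorkingSet(GetCurrentProcess());

        PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
        GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
        DWORD before = pmc.PageFaultCount;

        /* Touch the page again. If it hasn't been repurposed, the fault is
           resolved from the modified/standby lists - a soft fault, no disk I/O. */
        char c = p[0];

        GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
        printf("value=%d, page faults since trim=%lu\n", c, pmc.PageFaultCount - before);

        VirtualFree((void *)p, 0, MEM_RELEASE);
        return 0;
    }

The printed fault count includes any other faults taken between the two snapshots, so treat it as a rough indicator, not an exact measurement.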
There is other use of virtual address space, and of RAM, besides private committed memory. There is mapped virtual address space, for which the backing store is some specified file rather than the pagefile. The pages of mapped v.a.s. that are paged in are reflected in RAM usage, but mapped memory does not contribute to commit charge because the mapped file provides the backing store: Any part of the mapped region that isn't in RAM is simply kept in the mapped file. Another difference is that most file mappings can be shared between processes; a shared page that's already in memory for one process can be added to another process without going to disk for it again (another soft page fault).
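As a small illustration of mapped memory (a sketch only; the notepad.exe path is just a convenient example, and any existing file will do): mapping a file consumes virtual address space and, once touched, working-set RAM, but it adds nothing to commit charge, because the file itself is the backing store.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE file = CreateFileW(L"C:\\Windows\\notepad.exe", GENERIC_READ,
                                  FILE_SHARE_READ, NULL, OPEN_EXISTING,
                                  FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        /* A file-backed section object: no commit charge is taken for its data pages. */
        HANDLE section = CreateFileMappingW(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (!section) { CloseHandle(file); return 1; }

        /* Map the whole file. Pages are faulted in from the file on first access,
           and can be shared with any other process that maps the same file. */
        const unsigned char *view = MapViewOfFile(section, FILE_MAP_READ, 0, 0, 0);
        if (view) {
            printf("first bytes: %02X %02X\n", view[0], view[1]); /* 'MZ' for a PE file */
            UnmapViewOfFile(view);
        }

        CloseHandle(section);
        CloseHandle(file);
        return 0;
    }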
And there is nonpageable v.a.s., for which there is no backing store because it's always resident in RAM. This contributes both to the reported RAM usage and to the "commit charge" as well.
"This it seems might be because of compression. Which transforms the question to: Why doesn't commit limit then go up or something? I.e. what's the point of compression if it doesn't help with memory usage?"
No. It has nothing to do with compression. Memory compression in Windows is done as an intermediate step, on pages that otherwise would be written to the pagefile. In effect it allows the modified page list to use less RAM to contain more stuff, at some cost in CPU time but with far greater speed than pagefile I/O (even to an SSD). Since commit limit is calculated from total RAM + pagefile size, not RAM usage + pagefile usage, this doesn't affect commit limit. Commit limit doesn't change with how much RAM is in use or what it's in use for.
"When commit charge fills up and windows starts asking me to close things, most of the time physical memory is at around 60%. This seems horribly inefficient."
It isn't that Windows is being inefficient. It's the apps you're running. They're committing a lot more v.a.s. than they're actually using.
The reason for the entire "commit charge" and "commit limit" mechanism is this: When I call VirtualAlloc, I am supposed to check the return value to see if it's non-zero. If it's zero, it means that my alloc attempt failed, likely because it would have caused commit charge to exceed the commit limit. I'm supposed to do something reasonable, such as trying to commit less, or exiting the program cleanly.
If VirtualAlloc returned nonzero, i.e. an address, that tells me that the system has made a guarantee - a commitment, if you will - that however many bytes I asked for, starting at that address, will be available if I choose to access them; that there is someplace to put it all - either RAM or the pagefile. i.e. there is no reason to expect any sort of failure in accessing anything within that region. That's good, because it would not be reasonable to expect me to check for "did it work?" on every access to the allocated region.
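In code, that contract looks roughly like this (a sketch only; alloc_with_fallback and the sizes are made up for illustration): check VirtualAlloc's return value, and if the commit is refused, ask for less or fail cleanly rather than assuming the memory exists.

    #include <windows.h>
    #include <stdio.h>

    /* Try to commit 'want' bytes, halving the request if the commit is refused
       (VirtualAlloc returns NULL, typically because commit charge would exceed
       the commit limit). Returns NULL if even 'minimum' bytes can't be committed. */
    static void *alloc_with_fallback(SIZE_T want, SIZE_T minimum, SIZE_T *granted)
    {
        while (want >= minimum) {
            void *p = VirtualAlloc(NULL, want, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
            if (p) {             /* success: the OS has guaranteed backing store */
                *granted = want;
                return p;
            }
            want /= 2;           /* commit refused - ask for less and try again */
        }
        *granted = 0;
        return NULL;
    }

    int main(void)
    {
        SIZE_T got = 0;
        void *buf = alloc_with_fallback((SIZE_T)512 * 1024 * 1024,
                                        (SIZE_T)16 * 1024 * 1024, &got);
        if (!buf) {
            fprintf(stderr, "not enough commit available for the working buffer\n");
            return 1;            /* exit cleanly rather than crash on access later */
        }
        printf("committed %zu MiB\n", (size_t)(got / (1024 * 1024)));
        VirtualFree(buf, 0, MEM_RELEASE);
        return 0;
    }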
The "cash lending bank" analogy
It's a little like a bank offering credit, but strictly on a cash-on-hand basis. (This is not, of course, how real banks work.)
Suppose the bank starts with a million dollars cash on hand. People go to the bank and ask for lines of credit in varying amounts. Say the bank approves me for a $100,000 line of credit (I create a private committed region); that doesn't mean that any cash has actually left the vault. If I later actually take out a loan for, say, $20,000 (I access a subset of the region), that does remove cash from the bank.
But whether I take out any loans or not, the fact that I've been approved for a maximum of $100K means the bank can subsequently only approve another $900,000 worth of lines of credit, total, for all of its customers. The bank won't approve credit in excess of its cash reserves (ie it won't overcommit them), since that would mean the bank might have to turn a previously-approved borrower away when they later show up intending to take out their loan. That would be very bad because the bank already committed to allowing those loans, and the bank's reputation would plummet.
Yes, this is "inefficient" in terms of the bank's use of that cash. And the greater the disparity between the lines of credit the customers are approved for and the amounts they actually loan, the less efficient it is. But that inefficiency is not the bank's fault; it's the customers' "fault" for asking for such high lines of credit but only taking out small loans.
The bank's business model is that it simply cannot turn down a previously-approved borrower when they show up to get their loan - to do so would be "fatal" to the customer. That's why the bank keeps careful track of how much of the loan fund has been "committed".
I suppose that expanding the pagefile, or adding another one, would be like the bank going out and getting more cash and adding it to the loan fund.
If you want to model mapped and nonpageable memory in this analogy... nonpageable is like a small loan that you are required to take out and keep out when you open your account. (The nonpageable structures that define each new process.) Mapped memory is like bringing your own cash along (the file that's being mapped) and depositing it in the bank, then taking out only parts of it at a time (paging it in). Why not page it all in at once? I don't know, maybe you don't have room in your wallet for all that cash. :) This doesn't affect others' ability to borrow money because the cash you deposited is in your own account, not the general loan fund. This analogy starts breaking down about there, especially when we start thinking about shared memory, so don't push it too far.
Back to the Windows OS: The fact that you have much of your RAM "available" has nothing to do with commit charge and commit limit. If you're near the commit limit that means the OS has already committed - i.e. promised to make available when asked for - that much storage. It doesn't have to be all in use yet for the limit to be enforced.
"Can I forego artificially inflating my page file to levels my starved-for-space SSD is ill-equipped to handle just so I can actually effectively utilize my physical memory? (Or even if it wasn't as full. That is, I'd like to avoid suggestions like "Do X/Y/Z to your page file".)"
Well, I'm sorry, but if you're running into the commit limit, there are just three things you can do:
1. Increase your RAM.
2. Increase your pagefile size.
3. Run less stuff at one time.
Re option 2: You could put a second pagefile on a hard drive. If the apps are not actually using all that committed memory - which apparently they're not, since you're seeing so much free RAM - you won't actually be accessing that pagefile much, so putting it on a hard drive won't hurt performance. If the slowness of a hard drive would still bother you, another option is to get a small and therefore cheap second SSD and put your second pagefile on that. The one "showstopper" would be a laptop with no way to add a second "non-removable" drive. (Windows will not let you put pagefiles on removable drives, like anything connected over USB.)
Here is another answer I wrote that explains things from a different direction.
p.s.: You asked about Windows 10, but I should tell you that it works the same way in every version of the NT family, back to NT 3.1, and prerelease versions too. What has likely changed is Windows' default setting for pagefile size, from 1.5x or 1x RAM size to much smaller. I believe this was a mistake.
edited Dec 8 at 5:22
answered Feb 2 '17 at 3:13
Jamie Hanrahan
+1 This is the answer I wish I had written. This is just how a modern OS works. It wasn't a problem before SSDs because we didn't have much RAM and we had lots of hard drive space. Now that we have lots of RAM and not as much mass storage space on some machines, having sufficient paging file space is becoming an issue again. Make it a priority so your machine can make efficient use of RAM.
– David Schwartz
Feb 2 '17 at 6:28
@DavidSchwartz: I've seen many of your answers on MM issues, and I have to say, coming from you that is high praise. Thank you.
– Jamie Hanrahan
Feb 2 '17 at 9:10
The light has shone down. It all (well, most of it) finally makes sense. I read this and your other answer; each offered new insights. I'm even tempted to try and track down the book. Notably, I did ask this question first in none other than the internals forum (as hinted by the comment exchange under the question), but it seems slightly dead. What David said is also true. This question is a little backdated, in a sense, because I got a new SSD these days and can afford the extra pagefile, but it was a real problem with my previous extra-small drive. ...Continued below...
– martixy
Feb 2 '17 at 16:18
...continued from above. Incidentally, in my own research I discovered that this is not the only way to do things, as Linux and many VM hypervisors have an option called "overcommit". In fact it seems Windows is in the minority when it comes to its approach to memory allocation. Oh, and when thinking about it I came up with more or less the same banking analogy. The coincidence is uncanny.
– martixy
Feb 2 '17 at 16:20
But those "overcommit" options don't always refer to this specific concept. Virtual memory in a general purpose OS, once the OS has told you "ok, allocation succeeded," is supposed to act to the programmer just like physical memory... except for the slight delay that might occur now and then when a pagefault happens. The trouble with allowing overcommit of all of the physical storage that can realize virtual is that simple memory refs like i = *j; might raise fatal errors, even if i is on your stack and j was previously returned as a supposedly valid pointer. (contd...)
– Jamie Hanrahan
Feb 2 '17 at 18:00