~kevintjuh93~ MoopleDEV 2.0? You better get yourself another name. Thank you.
It's better.
~kevintjuh93~ KonaDev -* So original *-
It's better.
~Kona~
Thank you very much and good luck. Though you do know that a lot will be changed in Rev121, right?
Ah, another uninformed developer using winapi functions to pretend that they can somehow improve performance.
The only thing you're doing is telling Windows to invalidate as many pages as possible from the process's working set. It does not actually reduce the total memory usage of the program; the pages themselves are still in memory, just no longer associated with the program. This means the program now incurs soft page faults whenever it accesses those pages. In effect, your program does nothing more than look good on paper by making Task Manager report a much lower value, while not actually doing anything of substance.
There is no reason to ever set the working set size of a process, because Windows is already intelligent enough to page unneeded memory out to disk when that space is needed for something else.
Maybe when you're learning C++ someday you can also learn something about how memory in a modern computer works.
retep998 said: "There are several measures to the memory usage of a process."
I am very interested in this.
Let's say a program is eating up your RAM, so your computer starts using virtual memory. If you call SetProcessWorkingSetSize(proc, -1, -1), isn't that reducing the RAM used by the program, so that both your virtual memory and your RAM get freed? And if the program is unstable, wouldn't its RAM usage just increase again afterwards?
Don't see this as an "I am smarter" response, because I am not. I have never bothered to learn how memory is handled; what I posted above is just the way I thought it works. (If that is the wrong way of thinking, I would appreciate it if you could explain the right way.)
~kona~
There are several measures to the memory usage of a process.
The first is the working set, which indicates how much physical RAM the process has access to at the moment. The private working set is the pages of physical RAM that only that process can access, while the shared working set is pages that other processes can access too.
Then there is commit size, which is the number of pages the process can access in total, across both the swap file and physical RAM, excluding some pages it merely has shared access to. Generally speaking, commit size indicates the actual amount of space the program has allocated in one way or another.
Now, the concept of paging works like this. Whenever a process tries to access any address in its virtual memory (measured by commit size, not working set), the CPU uses page tables to translate the virtual address into a physical address. If the page is not part of the process's working set, a page fault is triggered and the OS has to bring the page into the process's working set before execution can continue. Soft page faults are when the page already exists in RAM but just wasn't attached to the process's working set yet. Hard page faults are when the page can only be found on the hard drive.
What SetProcessWorkingSetSize(proc, -1, -1) does is remove all pages from the process's working set, but it doesn't remove them from RAM. This means the process now has to go through soft page faults before it can use those pages again. In addition, that space in RAM is still being consumed by those pages, so it doesn't actually free up any RAM, and if the OS really needs more space it has to go through the same page-to-swap process it would have had to do anyway. Furthermore, the commit size of the process is entirely unaffected. The only way to actually reduce the commit size is to code the process in the first place so it doesn't bleed memory like a stuck pig, aka don't use Java, or don't be such a terrible programmer.
Why wouldn't you release KonaMS? I would be very interested in it. Thanks.