Having done a bit more on this, I now find that my WD system is running at some 650 page faults per second while showing over 200 MB of available memory. However, this is far from the whole story.
Firstly, let me say that what I have found applies to W2K and probably XP; W98 and WME use very different memory models.
There are two main types of page fault: the 'hard' page fault, where the data has to be read back in from the page file on disk, and the 'transition' page fault. The transition list is where a page marked for page-out to disk is put while it waits for the I/O to complete; the page is marked invalid in the page table but has not yet been re-allocated, so its data is still intact. When the memory manager gets a page fault it checks whether the page is on the transition list; if it is, no I/O is needed: the page is simply marked valid again and the faulted code resumes. So a transition page fault is resolved very quickly, taking only a few CPU cycles and no I/O.
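The difference between the two fault types can be sketched as a toy model (purely illustrative, nothing like the real W2K memory manager internals): a page still sitting on the transition list is reclaimed in place for free, while any other invalid page costs a simulated disk read.

```python
# Toy model of hard vs transition page faults (illustration only,
# not real Windows internals).

class MemoryManager:
    def __init__(self):
        self.page_table = {}          # page -> "valid" or "invalid"
        self.transition_list = set()  # pages queued for page-out, data intact
        self.disk_reads = 0           # count of simulated page-file I/Os

    def trim(self, page):
        """Mark a page for page-out: invalid in the page table, but its
        data stays in RAM until the write to the page file completes."""
        self.page_table[page] = "invalid"
        self.transition_list.add(page)

    def fault(self, page):
        """Handle a page fault. Returns 'transition' (cheap, no I/O)
        or 'hard' (the page had to be read back from disk)."""
        if page in self.transition_list:
            self.transition_list.remove(page)  # reclaim in place, no I/O
            self.page_table[page] = "valid"
            return "transition"
        self.disk_reads += 1                   # simulated page-file read
        self.page_table[page] = "valid"
        return "hard"

mm = MemoryManager()
mm.trim("A")
print(mm.fault("A"))  # transition fault: resolved with no I/O
print(mm.fault("B"))  # hard fault: needed a disk read
```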
My WD system runs at about 450 transition faults per second, so if I read the figures correctly I am only getting some 200 'hard' page faults a second. That is not so bad, but still bad enough in my view.
The 'available memory' figure can lead people to think it equates to free memory. I believe this is not the case: available memory is the total of any free (unallocated) memory AND any memory which is allocated but is also pageable, and therefore available for other applications to 'steal'. So looking at available memory does not tell you how much unallocated memory you have, and it seems to me that under the Windows model you need an awful lot of memory to reduce paging (both hard and transition).

The W2K Task Manager memory graph gives a better idea of actual allocation. On my WD system it sits at 313604 KB (this is also shown under Commit Charge), which means that since I only have 384 MB of physical memory, the maximum unallocated memory I can have is about 78 MB, an awful lot less than the 247000 KB shown as available. It doesn't take much for things to go awry: once all the physical memory is eaten up, paging takes off even more.
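As a quick sanity check on those figures (a rough calculation only; Task Manager's numbers move constantly and its rounding makes the exact figure fuzzy):

```python
# Back-of-envelope check using the figures quoted above.
physical_kb = 384 * 1024   # 384 MB of physical RAM, in KB
commit_kb = 313604         # Commit Charge from Task Manager
available_kb = 247000      # "available memory" as reported

# Upper bound on truly unallocated RAM: physical minus committed.
unallocated_kb = physical_kb - commit_kb
print(round(unallocated_kb / 1024), "MB unallocated at most")  # about 78 MB

# So most of the reported "available" memory is actually
# allocated-but-pageable, not free:
print(round((available_kb - unallocated_kb) / 1024),
      "MB of 'available' is allocated but pageable")
```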
Even adding shedloads of RAM will not prevent paging. The machine I use for most of my PC work has 1 GB of physical memory, and even with not a whole lot running it still pages stuff in and out (although nowhere near as much as the WD machine does). It currently shows over 600 MB available with the Commit Charge sitting at roughly 539 MB, so on that machine most of the available 600 MB really is unallocated!
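The same back-of-envelope sum for the 1 GB machine (figures rounded as quoted above, so these are only approximate):

```python
# Rough check for the 1 GB machine, using the approximate figures quoted.
physical_mb = 1024    # 1 GB of physical RAM
commit_mb = 539       # Commit Charge, "roughly 539 MB"
available_mb = 600    # "over 600 MB" available

unallocated_mb = physical_mb - commit_mb   # upper bound on truly free RAM
print(unallocated_mb, "MB unallocated at most")

# Fraction of "available" that is genuinely unallocated on this machine:
print(round(100 * unallocated_mb / available_mb), "% of available is free")
```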
So to sum up… hmmm… nothing conclusive, I guess, except to repeat what most people have always said about Windows: add as much RAM as you can when building a system. If you have a slowish processor, the more RAM the better.
My WD machine has only two RAM slots, currently holding a 256 MB and a 128 MB module. I will probably replace the 128 MB with a 512 MB, giving 768 MB in total. I hope this will reduce the paging somewhat, and if memory usage does peak above 384 MB it will certainly help.
Having read quite a lot about Windows memory management, it does appear that memory can be allocated which is not pageable. I am wondering if Brian could take a look at the memory used for the data shared between WD and, say, WDWatch, to see if anything could be changed to ensure the shared data stays in memory. That might help with the huge number of page faults which WDWatch experiences.
Stuart