Compatible Support Forums
Maasneotek

a question for you pros...

Recommended Posts

Is a page file really necessary if you have enough RAM? I mean, I have a LARGE amount of RAM on my system, and virtual memory is just using the hard drive as temporary RAM, right? So could I just DISABLE the page file (virtual memory) altogether?

 

 

Just a thought... I'm new at this...

 

 

-maas


As far as I know, under NT/2000/XP you need a paging file no matter how much RAM you have. 'Course, I could be totally wrong. :)


You do still need it, though you can set it pretty low. The reason, I believe, is for memory dumps and system recovery after crashes.


From what I have learned and read about the pagefile and its usage by a Windows OS, it's beneficial to get rid of it when you can. Unfortunately, MS hasn't had much in the way of documentation on it since the days of NT4, though they seem interested in giving users the option to can the pagefile, since XP can now be set to "disable pagefile". I don't care about memory dumps on my workstations, and my main station at home currently has 1GB of RAM, so I have been running without it just fine. However, last night I went to launch Photoshop and, surprise!, it won't launch because it perceives the lack of virtual memory as an incredibly low amount of virtual memory free space. I have heard of this issue but never actually encountered it, so I don't know if there is a fix for it yet. If I have to keep that stupid thing around *just* for PS6, then I will probably hold to my old rule for pagefiles: set the min and max equal to each other, with the total equal to 150% of the system RAM, up to a max of 250MB (which in this case would be 250MB).
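Just to spell out that old rule of mine, here's a quick sketch in Python (the function name is my own; nothing here is from any Windows API):

```python
def pagefile_size_mb(ram_mb, cap_mb=250):
    """Sizing rule from above: min = max, 150% of system RAM, capped at 250MB."""
    return min(int(ram_mb * 1.5), cap_mb)

# 1GB of RAM -> 1536MB uncapped, so the 250MB cap kicks in
print(pagefile_size_mb(1024))   # 250
# 128MB of RAM -> 192MB, still under the cap
print(pagefile_size_mb(128))    # 192
```

Set both the initial and maximum pagefile size to that one number so Windows never resizes it.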


It has always been recommended to leave the pagefile managed by Windows, no matter how much RAM you have.

 

Mike


I would agree; disabling the page file on this box does three things:

1) Lowers framerate in games

2) Increases heat produced by the CPU and drives {always a BAD thing}

3) Doesn't allow some programs that need it to work properly, because of their design {Photoshop, but this can be adjusted IN PS, so it's not really an issue with this particular program}

 

I use it at a windows managed setting for best results.

Quote:

I would agree; disabling the page file on this box does three things:
1) Lowers framerate in games

I haven't seen any loss, and everything has been cherry (not to mention all apps that have been cached launch even faster).
Quote:

2) Increases heat produced by the CPU and drives {always a BAD thing}

Again, I haven't seen this, nor do I see how it could be the case. Also, my main box is a 1.6GHz Northwood running at 2.4GHz (150x4 FSB), so I think I would *definitely* see a weakness in this case. Where have you seen this?
Quote:

3) Doesn't allow some programs that need it to work properly, because of their design {Photoshop, but this can be adjusted IN PS, so it's not really an issue with this particular program}

I haven't checked as of yet, but I will be fiddling with that when I get home tonight.

Quote:

but this can be adjusted IN PS, so it's not really an issue with this particular program


So are you saying that there is a way to run Photoshop when Windows has no page file?

How?


Hey clutch! How are your pagefiles set? I tried to set an 8MB pagefile, and in response Windows XP silently forces me back to the recommended pagefile size. The pagefile settings don't indicate that I have a 1536MB pagefile (1.5 * 1024), but the "currently allocated" figure stays at 1536 in spite of this.

 

It looks like XP is defaulting to a pagefile on my C: drive because it doesn't like the size I specified. I have the following physically separate IDE disks: C (13GB, boot), D (40GB, system), E (40GB), and F (20GB). I previously specified a system-managed pagefile on the F: drive and nothing else. When I changed to 8MB on my E: drive, the C: drive suddenly got a 1536MB pagefile. I can't get rid of it. Any ideas?


I have set it to 128MB, to 256MB, and left it alone at the Windows default settings.

 

My machine seems to run best when Windows manages it.


All of my systems have 256MB of RAM or more, so I don't bother using a pagefile any larger than 250MB (and if I *have* to have one, it's no less than 100MB; I also keep the min and max locked to avoid the "burping" from Windows when it increases the size of the file). I would recommend that you get it onto one of the following, ordered by preference (assuming IDE):

 

1. Another disk on another separate IDE channel

2. Another disk on the same IDE channel

3. Another partition other than the system partition

4. System partition (default)

 

If you can use a RAMDISK for it, then give it a shot; I haven't used one, so I can't tell you how well they work. As for splitting the pagefile, it's a worthless act if you have a large amount of memory and/or a single disk that still has at least 35% free space AFTER the total pagefile sitting on it is maxed out. All you are doing is creating more disk/channel traffic by splitting it up, and IDE doesn't even support simultaneous R/W operations, so you will slow disk access even further there.
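To put that rule of thumb in one place, a rough sketch in Python (the function name and the 512MB "large amount of RAM" cutoff are my own guesses; the 35% figure is from above):

```python
def splitting_pointless(disk_size_mb, free_mb, pagefile_max_mb, ram_mb,
                        large_ram_mb=512, free_threshold=0.35):
    """Rule of thumb from above: splitting the pagefile across disks is a
    worthless act if you have a large amount of RAM and/or a single disk
    that keeps at least 35% free space after the pagefile is maxed out."""
    lots_of_ram = ram_mb >= large_ram_mb                    # "large amount of memory"
    free_after = (free_mb - pagefile_max_mb) / disk_size_mb # free fraction post-growth
    return lots_of_ram or free_after >= free_threshold

# 40GB disk, 20GB free, 250MB pagefile, 1GB RAM -> no point splitting
print(splitting_pointless(40960, 20480, 250, 1024))   # True
```

If it returns False (little RAM and a cramped disk), that's when a second disk on its own channel starts to earn its keep.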


If you have your pagefile on another partition on the same disk as the system partition, it can actually slow things down a bit.

If the partition is on the outside edge of the disk, it will take longer for the read/write heads to move out there to access the pagefile. It's definitely best to have the pagefile on its own small partition at the beginning of a different disk.


Most current hardware will not show a huge difference between running from the inside and the outside of the platter, not to mention that a second hard disk (which might as well be on a separate channel, as suggested before) is an option. And, as most documentation on the subject (such as this) speaks of moving the primary (heavily used) pagefile to another partition and keeping a secondary (lesser-used) one for crashdumps, it would seem that moving the pagefile to another partition isn't such a bad idea. I don't debug from the crashdump file on workstations, so having one on the system partition is of no concern to me. Also, wouldn't the read speed actually be faster on the outside edge of the disk, or were you basing this on the idea that the heads are at rest most of the time?


It's based on the idea that typically, on average (insert exceptions here), the system partition is half full or less. Therefore the heads are accessing data in the first half of the first partition. If the pagefile is managed by Windows, regardless of size constraints, it will also be in the same general vicinity as the other data, so the heads will not have to move very far. However, if the pagefile is located at the outer edge of the disk, this extra distance increases seek time during accesses to the pagefile: the heads have to move to the outer edge and back again.

 

A benefit of moving the pagefile to its own partition is less fragmentation of data around the pagefile. But you need to remember to account for the extra seek time. I agree that in modern drives this is fairly negligible, but it still exists, and imho makes it unnecessary to have a pagefile set up in this manner. The performance increase from a pagefile on a second drive is definitely noticeable, because you are eliminating seek time as well as enabling simultaneous access to the system files and the pagefile. If you want to take advantage of the higher velocity at the outer edge of the disk, you could locate your pagefile there, but the seek-time issue will cancel out some or all of the benefit.

Quote:

A benefit of moving the pagefile to its own partition is less fragmentation of data around the pagefile. But you need to remember to account for the extra seek time. I agree that in modern drives this is fairly negligible, but it still exists, and imho makes it unnecessary to have a pagefile set up in this manner.


Interesting point, as this would reinforce the idea of moving the pagefile (an easily controlled system file) to another partition, rather than having it sit on the same partition as the system files and the MFT, further subjecting that partition to the effects of fragmentation.

Quote:

It has always been recommended to leave the pagefile managed by Windows, no matter how much RAM you have.

Mike


Only if you want a fragmented pagefile...


It seems that the question here is which impacts performance more: increased seek time or decreased fragmentation? I'm sure it would vary greatly between systems.


Where to begin?

 

Do you need virtual memory if you have more than enough RAM? Theoretically, no. In actuality, even if it is disabled, pagefile.sys will be created. It is not only a physical file; it seems also to function as a place that other programs call on to store info, to preserve RAM for a program's more compelling tasks. Is it a scratch pad? It seems that it can function that way, apart from the amount of RAM available.

 

Letting Windows Manage or Doing it yourself?

 

This used to be called dynamic versus fixed virtual memory. In theory, Windows dynamically managed the file, and it would grow or shrink depending on how the computer was used. Early on, this supposedly had the advantage of allowing Windows to find the space from which a contiguous file could be run; today, the dynamic file does not seem to need to be contiguous. A fixed setting allowed a user to limit the size of the file to be created. The problem was not one of speed but of space: while the dynamic pagefile was supposed to increase and decrease, often it only grew, and in the days of smaller hard disks this became a problem. Secondarily, it was noted that a fixed pagefile, because it was limited in size, seemed to be faster in some respects, since the cache stored more recent "data", overwriting earlier data to preserve the fixed length of the cache. The dynamic cache often retained both newer and older "data" and so had to search more.

 

Putting the pagefile on the outer track

 

This was a trick introduced by Norton's Speed Disk. The outer track of a hard drive moves at a greater velocity, so the theory is that read and write operations are faster there. As far as the heads moving around, Davros, I am not sure that I agree. Heads being in the vicinity of data, with the data near the cache, would mean essentially nothing: the computer keeps track of data through a file allocation table, whose location can be nowhere near the "data". The heads would still have to move to get to the file index to know where and what to do and how to modify a file, so head movement may be moot. Just to give an example: several years ago, people noticed that Microsoft Office components loaded so much more quickly than others. It turned out that on installation, Office components were loaded at a specific place on the platter in a contiguous fashion, so that the controller did not have to query the file allocation table; it already had the specific zone to go to in order to load the components. This works for loading Office programs: the time saved was a function of the controller not having to go first to the file index, and of being able to read the program contiguously. Otherwise, when you load or write something, the heads have to consult the file index. Loading the pagefile onto the outer track helps the speed of reads. But, what do I know?

 

Loading the cache on another disk or partition

 

I am not an engineer. Having more than one disk on a controller helps with volume, but all disks are not read simultaneously, and info going in and out works much like a busy intersection. I've never seen any statistics that actually show faster throughput. I would say put the cache on the fastest hard disk, which is usually the one you boot from and where you put most of your most-used programs. Put your data elsewhere for space reasons.

 

Fragmented caches

 

Norton's Speed Disk used to "defrag" the cache. I am not sure that it does this under NTFS, or that it still puts the cache on the outer track; I believe clutch told me that W2K/XP cut the heart out of it and it doesn't anymore. There may be other software that can defrag the cache.

 

On the other hand, defragging the hard disk, no matter what the pagefile setting is, has seemed to me to do more for speed than most other things. I work with databases mainly, and I defrag by last access. For whatever reason, this speeds things along better than sorting alphabetically, by creation date, or by space allotment.

 

Microsoft gives a formula for sizing the pagefile if you want to handle it yourself, and clutch seems to have a formula that works for him. Trial and error is perhaps still the best way to tweak it, after you've read what some folks say about it. While this article is about 95 and 98, it is a long but interesting read: http://www.rojakpot.com/Speed_Demonz/Swapfile_Optimization/Swapfile_Optimization_01.htm


It's been a while since I have seen that link... :)

 

In any case, I am curious as to how you are getting a pagefile generated even though you explicitly declared it disabled. When I disabled it before (and had to re-enable it because PS6 had issues), it never came up at all. I am trying it again, and I have Outlook and Word XP, IE6, SQL 2K Dev Ed, MS ActiveSync, McAfee AV 4.5.1, SolidWorks 2001 Plus (a CAD app), and various other things taking up about 310MB of RAM according to Task Manager, while I have 512MB of RAM in this box. While I have been opening, closing, and creating documents, I have yet to see a pagefile pop up. Where would it be? I have "Show hidden files" enabled and "Hide protected OS files" disabled (and I could see the pagefile before I rebooted, when it was still set), so I know that I would see it. Am I missing something here? Is there some sort of percentage threshold that I need to cross before the file comes back?


As a physical file, it is generally found as a hidden file in your root directory: \pagefile.sys. I have never seen a task called "pagefile" in the Task Manager. When I disabled mine, the file stayed there, though there were very few bytes in it. It has been a long time since I did it, but I recall there also being a second file working with it. To get it off the hard disk completely, I had to go through the console and delete both files. Since my system started acting peculiar after that, I went back and let Windows manage it.

What I meant to say is that a pagefile.sys is created by default from the original install of Windows. In my case, when I disabled it later, the file remained, and I was able to erase it. I didn't mean to say that it will be recreated on the fly if needed. Sorry if what I wrote gave that impression.


Ahh, and it would seem that I gave the wrong impression as well. I didn't mean that I had a task named "pagefile", but rather that my PC was digging rather heavily into its RAM and still not recreating the file. Also, I never had to manually delete the pagefile on either of the systems I have done this with; WinXP just canned it on its own.

