Compatible Support Forums
Ron_Jeremy

Clearing page file at shutdown option


Messing around in the registry, I noticed the option to "ClearPageFileAtShutdown". It is set by default to "0" (no). What is the purpose of enabling this feature? Are there any performance advantages/disadvantages in doing so?


Security.

 

The pagefile may contain the data that you were working on. Clearing it at shutdown makes it harder to find the data. I would not enable it. The pagefile will be remade on bootup, re-fragmenting your files.


That registry entry is controlled by the Local Security Policy named "Shutdown: Clear virtual memory pagefile". It doesn't remove the pagefile, but merely wipes it, so the pagefile doesn't have to be re-created at boot if the option is turned on. It might be a worthwhile option to enable if you keep encrypted data on the system and don't want anyone to be able to snoop the pagefile. If you don't use encrypted data, I'm not sure why anyone would bother to use it. If the pagefile is big, this option can make shutdown take quite a while.
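
For reference, the value itself lives under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management as the REG_DWORD ClearPageFileAtShutdown. If you'd rather flip it from a command prompt than through secpol.msc, something like the following should do it, assuming reg.exe is on hand (it's built into XP; on NT4/2000 I believe it came with the Support Tools / Resource Kit). The change takes effect after a reboot:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown /t REG_DWORD /d 1 /f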

 

Regards,

Jim


I'm not sure I see the point you're making. The pagefile is NOT deleted by this security setting. The contents are wiped. Fragmentation will not result from the use of this setting. That's all I was saying.

 

The pagefile and registry hives are defragged by the Sysinternals utility, Pagedefrag. However it doesn't touch the MFT or metadata. O&O makes a decent defragger that performs a defragging operation of all of this stuff at boot time, and in very little more time than it takes for Pagedefrag to run. However, the versions that do boot time defragging are not freeware.

 

Regards,

Jim

Quote:
Did I say it was deleted by clearing it above?

It can be deleted, by filesystem corruptions!


That wasn't the topic under discussion. I was merely trying to be certain that it was understood that the security setting being discussed would NOT delete the pagefile itself, and therefore would not result in a file system fragmentation issue, in and of itself.

Quote:
(I gain speed by housing the pagefile/swapfile onto another disk... on EIDE a second one on another EIDE I/O channel, & on ScSi on another drive device on the chain. So, when one drive is seeking/reading/writing for me? The swapfile & temp. operations take place on another.. simultaneously! Makes for good performance sense!)

* Understand now?

APK

P.S.=> You are bringing in the possibility of MFT$ defrags now? Diskeeper from Executive Software also does the same as well... not a freeware one, & not in their LITE versions either! I told folks about a FREEBIE they can use for PageFile & Reg file defrags above! apk


I pointed out the differences in cost in my own post. For the information of anyone who's interested in the differences, the Executive Software product has to be set each time to perform the boot time defrag, whereas the O&O product can be set to perform it automatically at each boot.

As for you, APK, you might want to have that ego checked. Your voluminous posts speak volumes about you but more, I think, about a presumptuous nature than about knowledge.


I don't really need confirmation of my opinions from others, but I've seen the comments about your "contributions", and they are far from unanimously slanted in your favor. Take a hint. You are presumptuous, and no one needs a doctorate in psychology to see that.

 

I guess you're at least relatively safe with your puffery online. Hard to get away with it in real life, isn't it?


O.K. you two, let it go. Besides, you're both talking over my head now anyway. Thanks for the initial replies - they're most appreciated. If you're gonna continue the mud slinging, I'm gonna delete this thread.


"The pagefile and registry hives are defragged by the Sysinternals utility, Pagedefrag. However it doesn't touch the MFT or metadata. O&O makes a decent defragger that performs a defragging operation of all of this stuff at boot time, and in very little more time than it takes for Pagedefrag to run. However, the versions that do boot time defragging are not freeware."

 

Part of this statement is correct and part is incorrect. The correct part is that Sysinternals doesn't provide a mechanism to defragment the Master File Table ($MFT) or related metadata.

 

The incorrect part is that O&O's defragger will defragment the MFT and metadata. O&O defragments the $MFT only - it doesn't defragment the $Logfile, $Bitmap, $Upcase, etc... There is only 1 defragger available that will defragment these metadata files - PerfectDisk - it is also the only defragger that tells you how badly fragmented these metadata files are. Defraggers like O&O Defrag only tell you how badly fragmented the $MFT is.

 

- Greg/Raxco Software

 

Disclaimer: I work for Raxco Software, the maker of PerfectDisk - a competitor to O&O Defrag, as a systems engineer in the support department.

Quote:
The incorrect part is that O&O's defragger will defragment the MFT and metadata. O&O defragments the $MFT only - it doesn't defragment the $Logfile, $Bitmap, $Upcase, etc... There is only 1 defragger available that will defragment these metadata files - PerfectDisk - it is also the only defragger that tells you how badly fragmented these metadata files are. Defraggers like O&O Defrag only tell you how badly fragmented the $MFT is.

- Greg/Raxco Software

Disclaimer: I work for Raxco Software, the maker of PerfectDisk - a competitor to O&O Defrag, as a systems engineer in the support department.


Sorry, I should have been more careful / precise. Have you examined the "Select Additional Files" feature on the Boot Time Defragmentation dialog in O&O? Once you have performed one full defragmentation of a drive, you have the option to add the files that couldn't be defragged with the GUI online by using the Add Exclusive feature. I won't pretend to know whether or not that comprises all the metadata, but that is some or most of it, isn't it? I mentioned it because it's a feature that I've seen many users / evaluators of O&O overlook. Anyway, once you add the exclusively locked files, they also get defragged at boot time.

In addition to the manual Action | Boot-Time Defragmentation settings, the Executive Software product does have FragGuard, which can be set to run when fragmentation exceeds certain levels on the MFT or registry hives (but without mention of any other items), but I didn't see evidence that it could defrag the "unmovable" files on an NTFS partition.

BTW, I tried out Perfect Disk about a year-and-a-half ago when I was evaluating defraggers for use with Win2K. (I've been using Windows only since a couple of months before the advent of Win2K.) I thought it was generally a good product, but I had some problems with the user interface on a notebook with an ATI graphics subsystem that I couldn't resolve with tech support and had to resort to O&O.

Regards,
Jim

Edit: I asked you if the "additional files" comprised any significant portion of the metadata but didn't tell you what they were. DOH! I'd be glad to PM or e-mail the list to you.


Jim,

 

"Sorry, I should have been more careful / precise. Have you examined the "Select Additional Files" feature on the Boot Time Defragmentation dialog in O&O? Once you have performed one full defragmentation of a drive, you have the option to add the files that couldn't be defragged with the GUI online by using the Add Exclusive feature. I won't pretend to know whether or not that comprises all the metadata, but that is some or most of it, isn't it? I mentioned it because it's a feature that I've seen many users / evaluaters of O&O overlook. Anyway, once you add the exclusively locked files, they also get defragged at boot time."

 

I can state with utmost certainty that O&O Defrag does NOT do any of the metadata besides the $MFT. Even if you go into the Boot Time defrag options and select Additional Files, you are not presented with a way to select any of these other metadata files from their interface (do you see a file called $MFTMirr or $Logfile or $Upcase?).

 

 

AlecStaar:

 

Diskeeper also doesn't defragment these other metadata files. The interesting thing about Diskeeper is that even if the $MFT is actually in 1 piece, Diskeeper will always show it as being in 2 pieces. Why? Because they count the $MFTMirr - one of the metadata files - as a fragment of the $MFT - even though it is a separate file.

 

This is easier to see on an NT4 NTFS partition.

 

- Go to an MS-DOS prompt and go to the top level of an NTFS partition.

- Issue the following commands:

Attrib $MFT

Attrib $MFTMirr

Attrib $Logfile

 

These are just 3 of the NTFS metadata files.

 

If you look for non-$MFT fragmentation information in any other defrag product, you won't find it.

 

The reason SpeedDisk can sometimes only get the $MFT down to 2 pieces is that SpeedDisk can't move the 1st records of the $MFT. This means that if the beginning of the $MFT is not at the top of the logical partition, then SpeedDisk has to leave it where it is - but may put the remainder of the $MFT at the top of the logical partition.

 

Even though I work for a competitor, I do know quite a lot about other defrag products and what they can and cannot do :-)

 

- Greg/Raxco Software


A little bit of info about NTFS metadata...

 

NTFS is a self-describing file system. This means that all of the information needed to "describe" the file system is contained within the file system itself - in the form of metadata.

 

The $MFT is where all of the information about files is stored - in the form of file IDs. A file ID is a 64-bit number - the low 48 bits are the actual file record number and the upper 16 bits are a sequence number. When files are deleted from an NTFS partition, the file ID isn't immediately re-used. Only after hundreds of thousands of files are created is the sequence number incremented and the "empty" file ID re-used. That is why the $MFT continues to grow and grow and grow. It is also why the $MFT Reserved Zone exists - to allow the $MFT to grow "into" it - hopefully in a contiguous fashion. Very small files can be stored "resident" in the $MFT. As much of the $MFT as can fit into memory is loaded when the partition is mounted.

 

The $MFTMirr is an exact copy of the first 16 records of the $MFT. The first 16 records of the $MFT contain files 0 - 15. File 0 is the $MFT. Files 1 - 15 are the remainder of the metadata (not all used btw...). The $MFTMirr is NTFS's "fallback" mechanism in case it can't read the first 16 records of the $MFT.

 

The $Bitmap is exactly that - a file containing a bit for each logical cluster on the partition, with the bit either set or clear depending on whether that logical cluster is free or in use.

 

The $Logfile is NTFS's transaction log - all updates to disk first go through the transaction log. This transaction log is what provides for NTFS's recovery (rolling transactions back/forward) when the operating system crashes or is shut down abnormally, and provides for enhanced file system integrity.

 

$Upcase is used for Unicode information (foreign language support, etc...).

 

These are just a few of the NTFS metadata files and what they are used for. Windows 2000 introduced new metadata files (i.e. $UsnJrnl and $Reparse).

 

Regarding SpeedDisk:

 

SpeedDisk is the only commercial defragger that does NOT use the defrag APIs provided by Microsoft as part of the NT/2000/XP operating system. These APIs are tightly integrated with the Windows Memory Manager, caching system and file system and take care of all of the low level I/O synchronization that has to occur to allow safe moving of files online - even if the files are in use by other users/processes. The APIs impose some restrictions, however. Pagefiles can't be defragmented online (nor the hibernate file under Win2k), directories can't be defragmented online under NT4 (FAT and NTFS) and Win2k (FATx), and the $MFT and related metadata can't be defragmented online either. In order to get around these restrictions, SpeedDisk "wrote their own" stuff to move files - it has a filter driver that gets installed/run. This is why SpeedDisk can be service pack/hotfix dependent. Depending on the changes that MS makes to the Memory Manager and file system, SpeedDisk may have to be updated to safely run. That is why (for example), if you have Windows 2000/SP2 installed and run SpeedDisk, it displays a warning message about not being compatible with that version of the operating system and proceeding at your own risk...

 

I know HOW SpeedDisk is doing what they are doing. However, knowing what can happen if they calculate things incorrectly makes me a bit wary. That said, SpeedDisk is a lot better product - in terms of actually being able to defragment normal data files - than some of the other defrag products out there.

 

- Greg/Raxco Software

Quote:

Messing around in the registry, I noticed the option to "ClearPageFileAtShutdown". It is set by default to "0" (no). What is the purpose of enabling this feature? Are there any performance advantages/disadvantages in doing so?

The purpose is to permit NT to gain C2 security evaluation.

It has no notable impact, other than slowing shutdown times and, of course, clearing the pagefile.

It would be theoretically possible for sensitive information to be left in the pagefile, and hence recoverable. C2 has strict rules on the re-use of resources, and so to prevent this kind of behaviour NT makes this option available to you.

Quote:

Security.

The pagefile may contain the data that you were working on. Clearing it at shutdown makes it harder to find the data. I would not enable it. The pagefile will be remade on bootup, re-fragmenting your files.


I don't believe this is true; it clears the pagefile, not deletes it (as I understand it; this was the NT 3.51 behaviour, and I don't see any reason for it to be different).

Quote:

Diskeeper 7 can now defrag 4K+ clusters. smile (Took 'em long enough)


It wasn't their fault.

The defrag FSCTLs provided by the NTFS driver didn't work for clusters greater than 4 kbytes.

Quote:
Originally posted by AlecStaar
(jwallen brought that up and another defragger PerfectDisk AND O&O defrag...)

So, I mentioned Executive Software's Diskeeper! AND Diskeeper DOES DO MFT$ work at boottime, here is a quote from their products features page on it:

"Frag Guard ® (Windows NT and 2000 only). Online prevention of fragmentation in your most critical NT/2000 system files: the Master File Table (MFT) and Paging Files.

Fragmentation of the MFT can seriously impact performance as the operating system has to go through the MFT to retrieve any file on the disk. If the MFT is already fragmented when Diskeeper is installed, Diskeeper can defragment it with a boot-time defragmentation feature, then maintain this consolidated state.

Frag Guard works much the same way with the Paging File. This is a specialized NT file on the disk which acts as an extension of the computer’s memory. When memory fills up, the system can utilize this file as virtual memory. A fragmented Paging file impacts performance when it reads data back into system memory. The greater the fragmentation, the slower vital computer operations will perform."

http://www.execsoft.com/diskeeper/about/diskeeper.asp


Now here's the problem.

Executive Software's most famous product is Diskeeper. Diskeeper is a disk defragger.

As such, it's in Executive Software's interest to make a really big deal about the performance degradation caused by fragmented files.

The thing is... it's not that big a deal.

To determine the number of file reads that are being hindered by fragmentation, go into PerfMon, and add the counter PhysicalDisk\Split I/O/sec, for the disk or disks that you're interested in.

This counter measures the number of reads/writes made to the disk that have to be split into multiple operations (there are two reasons for this: large I/O operations, and I/O operations that are split by fragmented files).

If that counter stays on zero (or close to it) then your level of fragmentation is irrelevant -- because it's not fragmenting I/Os.

(OK, I'm glossing over some details; for instance, whenever a piece of information is read from a file the list of clusters that make up that file has to be read. With FAT 12/16/32, it doesn't matter if the clusters are contiguous or not, it still has to read the whole cluster chain. With NTFS it is *very* slightly quicker to read this information when the file is in a contiguous block. The reason for this is that NTFS is extent-based (the file entries say, "This file starts at this cluster and continues for the next X clusters", rather than "this cluster, then this cluster, then this cluster, then this cluster ... then this cluster, that was the last cluster"). And it is *very* slightly faster to read one extent than it is several. Given that the extents themselves are listed contiguously, the overhead is likely unmeasurable except for the most extreme cases (like, a 1 Gbyte file made of 250,000 single cluster extents))

The other thing to look at is the counter PhysicalDisk\Avg. Disk Bytes/transfer. This gives a rough indication of how small a fragment has to be in order to cause a problem. At the moment, mine is at about 10 kbytes/transfer. Let's round it up to about 16 kbytes. Now, let's imagine that my pagefile is split up into fragments each of 1 Mbyte each.

The way Windows' VM works on x86 is to use 4 kbyte pages; each read into (or out of) the pagefile will be done in 4 kbyte chunks (I say "on x86" because the page size is platform-dependent; for instance, IA64 uses 8 kbyte pages). My 1 Mbyte fragment of pagefile thus contains 256 individual pages within it. Let's say a running program makes a request that causes a hard pagefault (i.e. it has to read some information back in from the pagefile), and it requires 16 kbytes of information (the average transfer size) and they're located within my 1 Mbyte fragment. Assuming each location in the fragment is equally likely, then as long as the read starts at one of the first 253 page boundaries, it won't require a split I/O. If it begins on one of the last 3 boundaries, it'll require a split I/O (because the end of the request will be in a different fragment). That's only a 1.2% probability that fragmentation of my pagefile will require a split I/O. And a 1 Mbyte pagefile fragment is pretty small.

That's only a rough calculation, but it's fairly representative of the truth. Split I/Os are rare, single transfers of more than about 64 kbytes are rare. If your fragments are all much larger than this (say, nothing smaller than a megabyte or two) then fragmentation is highly unlikely to be the cause of any measurable performance degradation.
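
If you want to keep an eye on those two counters without sitting in the PerfMon GUI, and assuming typeperf.exe is available (it ships with XP; I don't think Win2K has it out of the box), something like this will log them to the console while you work normally:

typeperf "\PhysicalDisk(_Total)\Split IO/Sec" "\PhysicalDisk(_Total)\Avg. Disk Bytes/Transfer"

If the Split IO column stays at or near zero, fragmentation isn't costing you anything you can measure.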

Executive Software won't tell you such a thing, of course -- there'd be no money in pointing this out. But that doesn't make it untrue.

Quote:
P.S.=> Norton Speedisk by Symantec? It does pagefile defrags DURING Win32 GUI Operations, only one I know that does! BUT, it has a nasty habit of snapping the MFT$ into 2 parts... always! This is why I keep Diskeeper around additionally, to take care of that! apk

The chance of a two-piece MFT mattering a damn is tiny, and would require extremely bad luck.

I wouldn't touch Speed Disk with a bargepole. It has the (unique) ability to be broken even by the tiniest change to the underlying disk format or internal mechanisms. Simply put, I have no trust in it, and no faith in what it does. The Windows documentation explicitly warns against making certain assumptions, but Speed Disk makes them anyway.

Quote:
Originally posted by AlecStaar
Yup, sounds JUST LIKE HPFS for Os/2, ext2 for Linux & previous versions of NTFS as well... Extended attributes data stored for files, like last access time & date stamps for example, as well as NTFSCompression & NTFSEncryption attributes as well as STREAMS data!

Actually, the only FS of those that I know works in this way is "previous versions of NTFS". I believe those other FSes have structures "outside" the filesystem (rather than the NTFS way of files *within* the filesystem, despite the NTFS driver hiding their existence) -- I know that HPFS certainly does (it has bitmaps interspersed throughout the disk to describe the 8 Mbyte bands in which data can be stored).

Quote:
PLUS HardLinks (2 files have the same name (another scary one, potentially!) HardLinks are when the same file has two names (some directory entries point to the same MFT record). Say a same file has the names A.txt and B.txt: The user deletes file A, file B still is online. Say the user goes and deletes file B, file A remains STILL. Means both names are completely equal in all aspects at the time of creation onwards. Means the file is physically deleted ONLY when the last name pointing to it gets deleted.)!

Reference counted filenames are a POSIX requirement, and are quite useful. Though their behaviour can be initially disconcerting.
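
If you want to see it in action, XP's fsutil will make a hard link from the command line (on Win2K I think you'd have to go through the CreateHardLink API); a rough illustration:

echo hello > A.txt
fsutil hardlink create B.txt A.txt
del A.txt
type B.txt

B.txt still reads "hello" -- both names point at the same MFT record, and the data only disappears when the last name is deleted.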

Quote:
Yup, as I heard & mention above, $MFT can store files in its contents...

That's because all features of a file on NTFS are stored as attributes of that file, be they data or metadata. Any attribute can reside within the MFT entry, and any attribute (I *think* including the name) can be made non-resident (i.e. stored as an extent on the non-MFT portion of the disk) if it grows beyond a certain size. The data streams of a file are no exception to this.

Quote:
I think a very dangerous 'bug' exists, because of this & zero-byte files creation: potential for disaster, you may wish to run this by your programmers:
A quote from another developer:
"Each file on NTFS has a rather abstract constitution - it has no data, it has streams. One of the streams has the habitual for us sense - file data. But the majority of file attributes are also streams! Thus we have that the base file nature is only the number in MFT and the rest is optional. The given abstraction can be used for the creation of rather convenient things - for example it is possible to "stick" one more stream to a file, having recorded any data in it - for example information about the author and the file content as it was made in Windows 2000 (the most right bookmark in file properties which is accessible from the explorer). It is interesting that these additional streams are not visible by standard means: the observed file size is only the size of the main stream contains the traditional data. It is possible for example to have a file with a zero length and at its deleting 1 GByte of space is freed just because some program or technology has sticked anadditional stream (alternative data) of gigabyte size on it. But actually at the moment the streams are practically not used, so we might not be afraid of such situations though they are hypothetically possible. Just keep in mind that the file on NTFS is much deeper and more global concept than it is possible to imagine just observing the disk directories. Well and at last: the file name can consist of any characters including the full set of national alphabets as the data is represented in Unicode - 16-bit representation which gives 65535 different characters. The maximum file name length is 255 characters."

I think that the file name length is an API limitation, not a filesystem limitation, though I could be wrong. AFAIK, file names aren't "special" attributes, though they're the ones that the system defaults to sorting by (it would in principle be possible to have a directory whose contents were listed according to, say, size, or some user-defined attribute, by altering the directory information so that it listed the other attribute as the one to sort by).

Quote:
type nul > Drive:\Folder\Filename.Extension can create zero byte files, take up no room,right? WRONG!

$MFT knows they're there & creates metadata surrounding them & forces itself to grow, small growth for each one, but growth! Do that long enough?? TROUBLE!

E.G.-> A program creates zero byte files with diff. names on them (1.txt, 2.txt... n.txt) in an endless LOOP? Watch what happens to the $MFT: grows until there is NO MORE ROOM left for anything else! Reservation zone might stop that & disk quotas, but I am not sure! PERSONALLY, I think it'd keep growing & growing until the disk is full... I do not believe the OS enforces Quotas on the $MFT nor does the NTFS drivers!
Disk quotas can be enforced on USERS in Explorer.exe security tab... I don't know if they can be imposed on SYSTEM user or NTFS driver itself!

Yep, there is the potential for causing a problem here. The few hundred bytes occupied by MFT entries aren't counted towards my disk quota (and I'm not sure named streams or custom file attributes are, either), and it'd be possible to use all disk space in this way. Similarly, it'd be possible to use all the disk space with zero length files on an inode-based filesystem, by simply using all the inodes. This class of problem isn't restricted to NTFS.
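
If anyone wants to watch the MFT grow, a throwaway loop at a command prompt on a scratch NTFS partition will do it -- every zero-byte file still costs an MFT record even though dir reports it as taking no space (don't run this anywhere you care about):

for /L %i in (1,1,100000) do @type nul > %i.txt

(Double the % signs if you put that in a batch file.)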

Quote:
To change the amount of space NTFS reserves for the MFT:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\FileSystem

Add NtfsMftZoneReservation as a REG_DWORD value & of a range from 1 - 4.

1 is min percentage, 4 is max % used.

It's exceedingly rare to be worth bothering with this.
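
If you do want to experiment with it anyway, a reg add along these lines sets it; as far as I know the value is read as each volume is mounted, so reboot afterwards:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsMftZoneReservation /t REG_DWORD /d 2 /f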

Quote:
* Scary eh? Top that off w/ each of those files possessing a hidden filestream... & you compound the danger! Food for thought for your companies next upgrade... Watch for alternate datastreams in zero byte files & alternatedatastreams period!

There are a number of tools for listing streams in a file (the "normal" Win32 file APIs can't, but the Win32 backup APIs can), so it's not an unsolvable problem.
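
It's easy to demonstrate at a command prompt on an NTFS drive -- the visible file stays at zero bytes while the named stream holds the data:

type nul > empty.txt
echo this text lives in a hidden stream > empty.txt:hidden.txt
more < empty.txt:hidden.txt
dir empty.txt

dir (and Explorer) report 0 bytes; Sysinternals' streams.exe is one of the tools that will show you the extra stream.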

Quote:
Yes, it is a bitmapped filesystem that MS uses in any of them they use! Ext2 on Linux is same... bitmapped filesystem, most defraggers & people call it "Volume Bitmap".

It isn't a "bitmapped filesystem". It uses a bitmap of the disk to speed the process of locating free space on the disk, but if anything, it's an extent-based filesystem. The use of some kind of bitmap is almost mandatory, as it's too expensive (though perfectly possible) to build the information from file entries each time a free cluster needs to be found.

Quote:
I am not aware of those currently! I have read about "reparse points" but not about $UsnJrnl... is it the 'hidden' folder named "System Volume Information"? I can see it in Explorer.exe but cannot access its contents (must change Explorer's properties to see it)!

No, they aren't. I don't remember where they are (I think in a directory $Extend, but I don't remember). System Volume Information contains the files made by the Content Indexing service.

Quote:
Works fine on SP2 2k, & previous ones... & that IS how they skate around patches! They do it independent of the MS defrag API calls.

This is why I wouldn't ever trust the product.

Quote:
Good read on that @ sysinternals.com also! The API's MS uses?? Came from Executive Software code! In NT 3.5x you had to patch the kernel to use Diskeeper... MS licensed that code & integrated it into their kernel, & the native defrag in 2k/XP is a VERY BASIC watered-down Diskeeper.

That makes little sense, as the mechanisms that the native defragging uses are not part of the kernel.

Quote:
Note, they both run from .msc console shortcuts extensions to Computer Management as well? First time a Symantec Product was NOT the native defrag in a Win32 based OS!

The defragger in Windows 98/98SE/ME is as much an Intel product as it is a Symantec one.

Quote:
It's good stuff, has merits others don't... mainly? System Uptime should appeal to Network Admins!

Uptime is for pissing contests. Availability is all that matters, and if you demand high availability, you have a cluster, and so can afford to take a machine down for maintenance.

Quote:
No taking down a server for maintenance when Speedisk works... uptime is assured, defrags can take place & users still access their data!

A non-issue, IMO. I'd sooner have a product that is guaranteed not to be broken by minor FS updates or kernel changes (i.e. one that uses the built-in FSCTLs) than one that doesn't require me to reboot occasionally.

Quote:
Originally posted by AlecStaar
Why?

They add up!

No. They do not.

If an I/O is not split on a fragmented file then defragmenting that file will not make that I/O any faster.

Quote:
Tweaking's cumulative... the more 'tiny' savings you make, the faster you will run overall & at many times!

(That's how I look at it at least... seems to work for me!)

There is no saving to be made. If the I/O isn't split, then it can't be made any faster.

Quote:
An example:

DosFreak & my systems for instance (basically the same) beat the snot out of Dual Athlons of 1.4ghz & Dual Palominos of 1.2ghz not too long ago... and, also many kinds of 'High-End' Athlons of 1.4ghz & up to 1.7ghz overclocked in fact!

AND? We only run Dual Pentium III's, overclocked to 1121ghz & 1127ghz!

There is no /way/ that a PIII at 1100-odd MHz can match an Athlon at 1.4 GHz, with a few exceptions (such as using SSE on the PIII but using a pre-SSE AMD processor).

Quote:
In a forum FULL of VERY VERY FAST single cpu rigs, also, we basically dusted them across the boards!

This was tested on 3 separate benchmark tests: WinTune97, Dr. Hardware 2001, & SiSoft Sandra 2001. It made NO sense we should win nearly across the boards on most all categories, but we did!

Tell me how defragging makes a non-split I/O on a fragmented file any faster.

Quote:
I think taking 'little bits' of speed in tweaks like defragging well, system memory tuning & registry hacks for all kinds of things (as well as using the most current drivers etc.), adds up!

Not if there's nothing to add.

Quote:
Originally posted by AlecStaar
Your whole premise rests on that grounds... now, if I do not defrag my disk for a year, and have say... my Quake III Arena data files all fragmented all over the disk? You are telling me that this does not slow the system down??

That the disk head has to make alot of swings/passes to assemble that file does not take place???

If it's only reading 64 kbytes at a time, then it doesn't matter -- because it would make those multiple head movements anyway.

If you attempt to do a large contiguous read, it gets split up anyway, regardless of whether the file is fragmented. That's the nature of disk controllers. They can't read an arbitrarily large amount of information at once; instead, they have to split large requests into multiple small ones. This is even the case for large reads of a contiguous file. It's unavoidable.

Yes, the extra seeks have to happen -- but if they happen between reads anyway, they make no difference.

Quote:
It depends on the degree of fragmentation! Your "IF" cuts both ways...

The "degree" of fragmentation isn't important. The size of the fragments is somewhat important. The average transfer size is even more important. And the number of Split I/Os is the true statistic that demonstrates if fragmentation is having any effect at all.

Quote:
There's your entire big "IF" again... what if it is? What if a large database is in pieces all over a disk from constant inserts to it via Insert queries? Same idea as a Quake III game data file from above! It will take ALOT longer to load! No questions there!

If the database is only being read 64 kbytes at a time, it won't make a blind bit of difference. Similarly, if it's only being written 64 kbytes at a time. And most I/Os are less than this size.

Quote:
Again, this depends on the degree of fragmentation, pretty common sense! But, I see your point also... but, try to see mine! I've seen databases SO torn apart by deletes & inserts, that defrags & internal compacts/reorgs to them inside of them? Made them speedup, bigtime!

I see the point you're trying to make, but I know from experience that split I/Os are rare, even on highly fragmented files on highly fragmented disks. And if I'm not getting split I/Os then the fragmentation does not matter.

Quote:
Don't ask me then! Ask DosFreak! He saw & participated in tests I ran & that others ran as well! 3 different testing softwares, 3 different testers conducted the tests. I was amazed when my Dual CPU Pentium III 1121ghz beat a Dual Athlon @ 1.4ghz, & also a Dual Palomino @ 1.2ghz! No reason to lie here believe me... ask DosFreak! His machine is a HAIR faster than mine!

It's quite simple, actually. A 1.4 GHz Athlon is faster than a 1100 MHz PIII, ceteris paribus. Give the Athlon an old RLL drive and then run a disk benchmark and obviously, it'll suck. But the processor and its memory subsystem are both faster on the Athlon (no question about this).

Quote:
Tell me how a fragmented large database reads slower (touche)... your argument depends on that single premise. It falls apart in the light of heavily fragmented disks!

No, it doesn't. Heavily fragmented disks don't suddenly start needing to do larger I/Os. They still only do small (<64 kbyte) I/Os, and those still don't get split.

Quote:
I never said anything of the kind that "defragging a non-split I/O on a fragmented file" would be faster... don't try put words in my mouth! A whole file is just healthier for the system,

I can guarantee that my OS cares not whether files are contiguous or fragmented.

Quote:
and the drive itself. Less head motion used to read it, and only 1 pass used.

Except that it doesn't work like that, except for very small files (and they would be read contiguously even with fragmentation). Large files can't be read in a single I/O transfer. It's always "read a bit, wait a bit for the OS to move the buffered data somewhere else, read a bit more, etc..". The nature of disk controllers.

Quote:
On your points above:

1.) Speedisk from Norton/Symantec, is probably NEVER going to break the filesystem as you state!

Yes, it could, and it would be quite easy. It relies on the FS working in a particular way, but the FS is not guaranteed to work in a particular way.

Quote:
They are in tight with Microsoft & always really have been! I am sure they are appraised of ANY changes coming from MS regarding this WELL beforehand!

Actually, they *aren't*, and this is one of the problems I have.

Quote:
Unless Microsoft wants to get rid of them etc. as a business ally! Fat chance!

Of course they do. MS are in bed with Executive Software.

Quote:
2.) Not everyone can afford a cluster of boxes like MS can do thru the old Wolfpack clustering 2 at a time or newer ones... so uptime? IS a plus!

If you require high availability, then you make sure you can afford a cluster. If you do not require high availability, then you can manage with a single machine, and downtime doesn't matter.

Quote:
3.) On defraggers? The original subject?? I keep Norton Speedisk & Execsoft Diskeeper around! They both have merits. I would like to try PerfectDisk by Raxco one day, just for the sake of trying it though!

It has a horrendous interface, and, like Diskeeper and O&O (Speed Disk isn't going anywhere near my computer, so I can't comment on it), is a bit buggy.

Quote:
PeterB/DrPizza, you hate Speedisk? Then, you might not like the new Diskeeper 7 then...

If it doesn't use the defrag FSCTLs then it has no place on any production machine. If it can only defrag partitions with >4kbyte clusters on XP/Win2K2, then that's fine, because XP extends the FSCTLs so that they can defrag the MFT (and other bits of metadata, I think), and so that they work on clusters larger than 4 kbytes.

Quote:
It must be patching the OS again (as old Diskeeper 1.0-1.09 I believe, had to for NT 3.5x to use them).

I don't think that it is, not least because WFP won't let it.

Quote:
Why do I say that? Well, I cannot defrag a volume here that is using 64k clusters using Diskeeper 6.0 Se... but I can with Speedisk!

That doesn't require patching the kernel, it merely means not using the FSCTLs provided by the FS drivers.

Quote:
Diskeeper 7? It can now do over 4k NTFS clusters! It MUST be patching the OS again like old ones did! Unless, they completely blew off using that functionality in the API they sold to MS in current models of Diskeeper! I am guessing, because PerfectDisk is also one that uses native API calls for defragmentations? It too, is limited on NTFS defrags on volumes with more than 4k NTFS cluster sizes!

My guess is that this ability is restricted to XP/2K2, because those expand the capabilities of the defrag FSCTLs to work with >4 kbyte clusters. To abandon the built-in FSCTLs would be a strange move indeed.

I can't find any mention of this new ability on Executive Software's web page, and I'm not running a WinXP machine to test. Where can I find more information about it?

One thing to note is that MS might update the NTFS driver in Win2K to match the one in WinXP (as they did in NT 4; I think that SP 4 shipped what was in effect the Win2K driver, so that it could cope with the updated NTFS version that Win2K uses). This might serve to retrofit the ability to defrag partitions with large clusters.

Quote:
Originally posted by AlecStaar
Consider the head movements are NOT all over the drive, but in the same general area, correct, on a contiguous file, right?

This assumes, normally incorrectly, that the heads don't move in-between.

Quote:
Less time than swinging say, from the middle of the disk (where the original file is striped out) to the nearer the end for the next part until it is all read!

Thus, a contiguous file is read faster, correct?

Generally, no, it isn't. Rapid sequential reads/writes of large chunks of file are rare. The only things I can think of off the top of my head where this happens are playing DVD movies, and hibernating/unhibernating. It's a remarkably rare action (PerfMon does not lie, though software companies often do).

Quote:
That happens on heavy fragmented files man, ones that are scattered allover the drive, usually on nearly full disks or ones over 70% full using NTFS!

Except that it really doesn't.

Quote:
You see they do occur, & you concede this! Those tiny things add up! In terms of detrimental effects & positive ones! They make a difference, I beg to differ here! Especially on near full or over 70% full disks that are fragged already.

The fullness is not a major concern (the number of files is more important). And if the seeks happen between reads/writes, then no, they do not matter.

Quote:
Not true! On a heavily fragged nearly full disk? The degree of that fragmentation can cause MORE of it & why I used the examples of slowdowns I have seen on HUGE databases because of fragmentations! You bust that file up all over a disk? ESPECIALLY ON A DISK THAT IS PAST 70% or so full already? You see it get slower.

In artificial benchmarks (which tend to do lots of rapid sequential reading/writing), sure. In real life (which doesn't), no.

Quote:
If the file IS scattered allover a drive? Fragmentation INTERNALLY w/ alot of slack in it & EXTERNALLY on disk from record deletions & inserts. Inserts are what cause the fragmentation externally if you ask me! They cause it to grow & fragment on disk, slowing it down & busting it into non-contiguous segments all over the drive! Not in the same contiguous area as the filesystem works to place that data down & mark it as part of the original file w/ a pointer.

I don't know about the database that you use, but the ones I use (mostly SQL Server and DB2) aren't so simplistic as to work like that. They won't enlarge the file a row at a time (as it were). They'll enlarge it by a sizable chunk at a time. Even if they have to enlarge the file often, they do so in a way that does not greatly increase the number of split I/Os.

Quote:
On a nearing full disk? This REALLY gets bad! The system has to struggle to place files down & does fragment them!

It doesn't have to "struggle" to place files down. It doesn't actually *care*.

Quote:
&, Microsoft said NTFS was frag-proof initially. Well, so much for that I guess! The proof's in the pudding now!

Actually, they said that NTFS didn't have its performance damaged by fragmentation, except in the most extreme cases (where average fragment size is around the same as average I/O transfer size).

Quote:
You're saying fragmentation does not hurt system performance? I remember Microsoft saying NTFS would be 'frag proof', this is not the case!

No, they said its performance wouldn't be damaged. This isn't the same statement as saying it doesn't get fragmented -- it does.

Quote:
I want to know something: Did you get your information about this from an old Microsoft Tech-Ed article? As good as MS is, they are not perfect. Nobody is.

Which information?

Quote:
Again: When a disk is over 70% or so full & you have for instance, a growing database due to insertions? & alot of the data is fragmented from other things already? You WILL fragment your file & then the disk will be slowed down reading it!

Only if the fragment size is around the same size as the average I/O transfer size, and that is an extremely rare situation.

Quote:
Heavy frags on near full drives with fragmentation slows a disk down & it is pretty much, common sensical anyhow, ESPECIALLY ON DISKS NEARING OVER 70% full capacity w/ fragmented files on them!

No, it doesn't. You can't speed up an unsplit I/O by making the rest of the file contiguous.

Quote:
Ever play the card game "52 pickup"? Think of it in those terms. A nicely stacked decked is alot simpler to manage than a 52 pickup game!

The only part of the system that even knows the file is fragmented is the NTFS driver. Nothing else has a clue. And the NTFS driver doesn't care if a file is in one extent or a hundred.

Quote:
I have not seen that to date yet! It won't happen. Not a wise business move to lose an ally! Microsoft helps Symantec make money & vice-a-versa via license of Symantec technology & royalties no doubt paid for it! MS still uses WinFax technology in Office 2000 for Outlook if you need it! A revenue source for both parties, & a featureset boost!

Office is developed by a different group to the OSes (and the 9x group was a different group to the NT group). Some collusion in one area does not suggest collusion in another area. Hence the Office people making software for platforms that compete with the OS people (Office for Mac, etc.).

Quote:
Yes, & Symantec (note my winfax lite technology licensed example in Office 2000 above). Business: the arena of usurious relationships & fair-weather friends!

Except that helping Symantec with their defragger yields no benefits, because they're already relying on Executive Software to do the work there.

Quote:
Everyone requires it, but not ALL can afford it!

No, not everyone requires it. If the fileservers in our office were turned off at 1700 on a Friday and not turned on until 0800 on a Monday, it would make not a bit of difference to us. We don't need 100% availability, or even close.

Quote:
Pretty simple matter of economics really! Not everyone can afford to build a cluster of Dell PowerEdge rigs you know! Downtime DOES matter, to me at least!

If downtime matters to you/your business, you get a cluster. It's that simple.

Quote:
That's relative & a matter of opinion! Some guys I know are married to some REAL DOGS, but to them? They're beautiful... beauty is in the eye of the beholder, don't you think?? I found no bugs in it to date & lucky I guess! I have the latest patch for it via LiveUpdate!

No, I didn't say ugly -- I said horrendous. It uses hard-coded colours that render portions of the UI unusable with non-standard colour schemes (certain icons are rendered near-invisible, for instance). That is to say, the UI is *broken*.

Quote:
Hmmm, why omit the feature then? Bad move... it limits their own defragger as well apparently, Diskeeper up to 6se for sure I know of, & most likely? PerfectDisk by Raxco since it uses those system calls that Diskeeper does (& that Execsoft created & was licensed by MS).

What feature is being "omitted"? And I'm not sure the Win2K defrag APIs are quite that simple. Not least because the FSCTLs are also available for FAT32, which NT4 didn't support.

Quote:
It patched a critical file PeterB/DrPizza. I am almost CERTAIN it was ntoskrnl.exe in fact, it was many years ago! You can research that if you like. I'd have to dig up old Cd's of Diskeeper 1.01-1.09 for NT 3.5x around here still to tell you which file exactly! Why they did not use it at MS or Diskeeper of all folks that API's inventor, is a STRANGE move!

I'm not sure what you're talking about. Why who didn't use what, when?

Quote:
Agreed, a strange move! BUT, one w/ benefits like not having to take down the OS to defrag directories, pagefiles, MFT$, etc. Uptime is assured, & not every company can afford failover clustering setups man.

Then that company does not, ipso facto, require high availability. Without specialized hardware, you can't get high availability from a single machine. If you want it, you *need* redundancy. This is unavoidable.

Quote:
Just a financial fact unfortunately & how life is! Heck, maybe they can, but every try getting a raise? Or asking your IT/IS mgr. for money for things you don't REALLY need?

If you REALLY need high availability, then you REALLY need more than one machine. Having a single machine means that it can't ever have hardware swapped out and it can't ever have software installed/updated. Neither of these constraints is workable.

Quote:
Was wondering that myself & I looked as well! I heard it from Dosfreak in this post above I believe! Take a look, he is generally pretty spot-on on things, & I take his word alot on stuff!

See, if it's XP-only, then it's no surprise. The FSCTLs in XP are more fully-featured. They work on certain metadata files, they work with large clusters.

Quote:
That's a VERY good "could be", depends on the mechanics of defraggers & how dependent this is on the calls that do the defrags! I don't know about that part you just stated I am not that heavy into keeping up on "will be's" here only more into the "now" stuff until the new stuff appears & is tested thoroughly usually!

Well, allowing the FSCTLs to work with larger clusters won't break anything (older defraggers might not be able to defrag partitions with large clusters all the same, but they won't break). This, you see, is why the FSCTL approach is the best (and why I won't touch Speed Disk). It ensures that you won't be broken by changes in the FS, and by sticking to a published API, you aren't relying on the OS's internals working in a particular way. And you're ensuring that your software will continue to work in new OS versions.


Well, it's nice to see such a wealth of info from you guys on this topic. As for me, I only have a few things to say about it.

 

1. A scattered file on a harddrive takes longer to load than a contiguous one, period. MS states it here, and it's a pretty widely accepted by-product of the destructive and scattered behavior of hard disk systems. The only effect this has on the OS is that it will only slow down on reads (and writes if there's a small amount of freespace to write files to), and that's about it. However, I have heard of NT boxes that would grind to a halt due to extreme fragmentation and wound up being formatted and reinstalled. Yeah, it seems odd, and I would think it had more to do with corruption than fragmentation, but my friend and his crew determined this on a few workstations at his office.

 

This behavior is the same thing that is exhibited in any database that has had numerous reads and writes to it, and then had many rows/columns (or entire objects) removed. Once you defragment it, performance will increase and return to normal. Many database systems exhibit their own defragmentation schemes (such as Exchange 5.5/2K) that run auto-magically, while others can perform "offline" and/or manual defragmentation/compression runs (MS Access). The hard disk works in much the same way as a database, with the main R/W portion being the tables/rows/columns and the $MFT (for NTFS) as the index. The index will keep up with the location of data regardless of its location on the disk, but moving all the empty space to one area, and moving all your data to another, helps a great deal.
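
(For instance, on an Exchange 5.5/2000 box the manual/offline defragmentation is done with eseutil, with the information store service stopped first -- your paths and database names may differ from this sketch:

net stop msexchangeis
eseutil /d c:\exchsrvr\mdbdata\priv.edb
net start msexchangeis

Access does the equivalent through its built-in compact-and-repair option.)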

 

Funny thing about databases though, is that they always tell you not to run defragmenters and virus scanners on the same partitions as the DBs themselves (unless they are specifically designed for them, like McAfee Groupshield and such). I can only imagine that the R/W behavior of each product tends to clash with most DB software.

 

2. Symantec is evil. They just HAVE to write their "software" (read: crap) SO specifically for an OS version (I would imagine by bypassing the MS APIs) that they break easily with OS patches/service packs/new versions. This happened with WinFax Pro 8 (the damn thing wouldn't let my PC reboot when I performed my brand-new fresh install of Win98 when it came out; of course, it worked great in Win95 and needed patches to work again in Win98). Then, PCAnywhere v9.0/9.1 had this WONDERFUL ability to keep Win2K boxes from rebooting again upon installation due to a NETLOGON (awgina.dll in this case) filter to allow the software to interact with NT/Domain accounts. I saw this on CDs that indicated that they were "Windows 2000 compliant" as well. After a nice visit to the Symantec site, I found out about this file and they had a fix for it provided you either had a FAT 16/32 disk, or an OS/Utility that could provide access to the system partition if it was NTFS. Besides that, Speedisk killed my dad in 'Nam. Bastards...

 

smile

