Compatible Support Forums
Ron_Jeremy

Clearing page file at shutdown option


Quote:
Originally posted by AlecStaar
I've seen that before man... <snip> I must be missing your point!

I suspect the main reason is that certain companies saw this as a way of making money, and so spread sufficient FUD that people believed defragging to be important. They constructed some benchmarks using atypical disk access patterns to demonstrate their point (even though real-life usage showed no such problems) and started raking in money.

They were aided somewhat by the free cluster algorithm used by MS DOS and Windows 95.

Those OSes wrote new data to the first free cluster, even if the subsequent cluster was occupied. Under those OSes, even with a relatively empty hard disk, it was easy to split a file into blocks smaller than the average transfer size.

NT doesn't do this, and Windows 98 doesn't do this (unless it happens that the only free space is a single cluster, natch).
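To make the contrast concrete, here's a toy simulation (purely illustrative -- nothing like real file-system code, and the cluster counts are made up) of the two allocation policies just described: take the lowest free cluster every time, versus look for the first free run big enough to hold the whole file. On a half-full, checkerboarded volume the first policy shreds a new file into one fragment per cluster; the second keeps it in one piece.

/* Illustrative simulation only -- not how any real driver is written.
 * Compares a DOS/Win95-style "first free cluster" allocator with an
 * NT-style "first run that fits" allocator on the same cluster bitmap,
 * and counts how many fragments a new 8-cluster file ends up in. */
#include <stdio.h>

#define CLUSTERS 64

/* Allocate n clusters one at a time, always taking the lowest free cluster. */
static int alloc_first_free(int *bitmap, int n, int *out)
{
    int got = 0;
    for (int c = 0; c < CLUSTERS && got < n; c++)
        if (!bitmap[c]) { bitmap[c] = 1; out[got++] = c; }
    return got;
}

/* Allocate n clusters from the first contiguous free run of length >= n. */
static int alloc_first_fit_run(int *bitmap, int n, int *out)
{
    for (int start = 0; start + n <= CLUSTERS; start++) {
        int len = 0;
        while (len < n && !bitmap[start + len]) len++;
        if (len == n) {
            for (int i = 0; i < n; i++) { bitmap[start + i] = 1; out[i] = start + i; }
            return n;
        }
        start += len;   /* skip past the occupied cluster we just hit */
    }
    return 0;
}

/* Count runs of consecutive cluster numbers in the allocation. */
static int count_fragments(const int *clusters, int n)
{
    int frags = n > 0;
    for (int i = 1; i < n; i++)
        if (clusters[i] != clusters[i - 1] + 1) frags++;
    return frags;
}

int main(void)
{
    /* A half-full volume: every other cluster in the first half is occupied. */
    int dos_map[CLUSTERS], nt_map[CLUSTERS], file[16];
    for (int c = 0; c < CLUSTERS; c++)
        dos_map[c] = nt_map[c] = (c < 32) && (c % 2 == 0);

    int n = alloc_first_free(dos_map, 8, file);
    printf("first-free-cluster: %d fragments\n", count_fragments(file, n));

    n = alloc_first_fit_run(nt_map, 8, file);
    printf("first-fit-run:      %d fragments\n", count_fragments(file, n));
    return 0;
}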

Fragmentation isn't the problem. It's split I/Os that are the problem.

Quote:
What about burst writes when the system is pressuring the drives to do those?

The OS lazy-writes anyway, so it doesn't matter. It's even less of a problem with properly designed applications: if your application uses overlapped (non-blocking) I/O, it isn't slowed down at all, because it doesn't have to wait for disk writes to finish and can do something else whilst waiting for disk reads.
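If you want to see what overlapped I/O looks like in practice, here's a minimal Win32 sketch (the file name and buffer size are placeholders, and error handling is trimmed): start the read, do something useful, and only collect the result when the data is actually needed.

/* Minimal sketch of overlapped (non-blocking) file I/O on Win32.
 * "data.bin" and the 64 KB buffer are placeholders. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL | FILE_FLAG_OVERLAPPED, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    static char buf[64 * 1024];
    OVERLAPPED ov = {0};
    ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);   /* manual-reset event */

    /* Start the read; with FILE_FLAG_OVERLAPPED this returns immediately
     * (FALSE + ERROR_IO_PENDING) while the transfer runs in the background. */
    if (!ReadFile(h, buf, sizeof buf, NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING) return 1;

    /* ... do useful work here instead of blocking on the disk ... */

    /* Pick up the result only once the data is actually needed. */
    DWORD bytes = 0;
    GetOverlappedResult(h, &ov, &bytes, TRUE);   /* TRUE = wait if still pending */
    printf("read %lu bytes\n", (unsigned long)bytes);

    CloseHandle(ov.hEvent);
    CloseHandle(h);
    return 0;
}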

Quote:
Test it yourself, tell me you don't see an increase in speed...

I have tested it myself. Quite extensively. Real world tests, not synthetic.

I could only get performance to be noticeably damaged by inflicting truly horrendous (and completely unrealistic) fragmentation on the drive.

That is, I filled the drive with 4 kbyte (single cluster) files. I deleted alternate 4 kbyte files (so the largest free space was a single cluster). Then I stuck a database onto the disk. Then I deleted the rest of the 4 kbyte files, and stuck data into the database so that it used the remaining disk space (this gave the interesting situation of occupying the entire disk but being as discontiguous as possible).

And, yeah, performance went down the toilet. Split I/Os went through the roof, because virtually every I/O on the disk was split.

A disk will never get that bad in real life, even a really full disk.

(I did similar tests with larger files (8, 16, 32, 256, 1024, 4096, 8192 kbyte), with similar deletion patterns, and also with mixed sizes. Above 256 kbytes, performance was mostly normal, above 4096 kbytes, almost completely normal).
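The fill/delete pattern itself is trivial to reproduce if anyone wants to try it on a scratch volume they don't care about. A rough sketch -- the directory name and file count are placeholders, and the real test filled the entire disk rather than stopping at a fixed count:

/* Rough sketch of the fill/delete pattern described above -- NOT something
 * to run on a disk you care about. It fills a scratch directory with
 * single-cluster (4 KB) files, then deletes every other one, so any large
 * file written afterwards can only be built out of 4 KB fragments. */
#include <stdio.h>
#include <string.h>

#define NFILES  10000
#define CLUSTER 4096

int main(void)
{
    char name[64], block[CLUSTER];
    memset(block, 'x', sizeof block);

    /* Phase 1: fill the volume with single-cluster files. */
    for (int i = 0; i < NFILES; i++) {
        snprintf(name, sizeof name, "scratch/fill_%06d.bin", i);
        FILE *f = fopen(name, "wb");
        if (!f) break;                      /* stop when the disk is full */
        fwrite(block, 1, sizeof block, f);
        fclose(f);
    }

    /* Phase 2: delete alternate files, leaving 4 KB holes everywhere. */
    for (int i = 0; i < NFILES; i += 2) {
        snprintf(name, sizeof name, "scratch/fill_%06d.bin", i);
        remove(name);
    }

    /* Phase 3 (manual): copy the database onto the volume now. */
    return 0;
}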

I also tested other things (not just the database); for instance, installing the OS to the drive (with half the files removed). Again, similar story. As long as each fragment was more than 256 kbytes in size, performance was not noticeably different (timing with a stopwatch).

Quote:
if you can fragment up a drive, <snip> slower man!

Not IME. And I've tested it a *lot*.

Quote:
I guess what <snip> use your box.

I know where the problems lie -- split I/Os -- and I also know that the vast majority of transfers on my disk are tiny in comparison to the size of disk fragments -- of the order of 64 kbytes or so.
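Anyone can check those two numbers on their own box. Here's a quick sketch using the Windows performance counters (available on Win2k and later, as far as I know); the "C:" instance, the 5-second sample window, and the English counter names are assumptions -- adjust to taste. Link against pdh.lib.

/* Sample the split-I/O rate and the average transfer size for one volume. */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void)
{
    PDH_HQUERY q;
    PDH_HCOUNTER split, xfer;
    PDH_FMT_COUNTERVALUE v;

    PdhOpenQueryA(NULL, 0, &q);
    PdhAddCounterA(q, "\\LogicalDisk(C:)\\Split IO/Sec", 0, &split);
    PdhAddCounterA(q, "\\LogicalDisk(C:)\\Avg. Disk Bytes/Transfer", 0, &xfer);

    /* Rate counters need two samples some interval apart. */
    PdhCollectQueryData(q);
    Sleep(5000);
    PdhCollectQueryData(q);

    PdhGetFormattedCounterValue(split, PDH_FMT_DOUBLE, NULL, &v);
    printf("Split IO/Sec:             %.2f\n", v.doubleValue);
    PdhGetFormattedCounterValue(xfer, PDH_FMT_DOUBLE, NULL, &v);
    printf("Avg. Disk Bytes/Transfer: %.0f\n", v.doubleValue);

    PdhCloseQuery(q);
    return 0;
}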

Quote:
Ok, let's go with what you said... What if seeks don't happen between those reads/writes? On a dedicated box that performs one task with only 1 database on it?

I haven't ever used a database that's big enough to have a dedicated server but that also only services one query at a time. There's a lot of head movement because there's a number of things going on at once.

Quote:
What about burst writes, that capability & capacity is built into most modern disks!

Bursting normally occurs to and from the cache anyway.

Quote:
Really? What <snip> more time!

A "massive commit" involves simply telling the transaction log, "Yep, that's done". Not I/O intensive.

Quote:
I used Access <snip> to them as well.

I haven't ever seen an Access database where the bottleneck was caused by something other than it being an Access database. I can't *wait* until MS ditches Jet and uses the SQL Server engine across the board.

Quote:
Consider the Access .mdb example (you don't need SQL Server for alot of smaller clients & applications you know, lol) & not everyone can even afford the licensing SQL Server or Oracle full models they need that anyhow!

For a lot of smaller applications, we use MSDE. It's based on SQL Server (with restrictions on database size and concurrent users), and can be distributed royalty-free with applications developed in Visual Studio and/or MS Office Developer Edition. Even the low-end doesn't need Access.

Quote:
You admit fragmentation DOES affect performance then, as does MS! It does!

Not practically, no.

Quote:
Just plain physics of the heads having to move all over the drive.

Which they would do anyway.

Quote:
If the heads of the disk have to move all over more than one pass to pickup a file (BECAUSE OF THE FILE BEING FRAGMENTED & ALLOVER THE DRIVE, not because of other programs I/O requests), you are telling me it does not affect it?

Yes, because that happens so rarely.

Quote:
If I have a deck of cards to read in my hand, nicely organized, & this is physical world, like the disk deals in!

For me to look at them is a matter of picking up the deck & looking thru it. It's in ONE CHUNK!

If they are scattered ALL OVER MY ROOM? I have to pick them up first, then read them. More time, simple! Can't you see this?

Yes, I can.

What you've ignored is that the disk controller physically isn't capable of picking up the entire deck at once, no matter how it's organized. It can pick up a few cards, then it has to wait, then a few more, then it has to wait, and so on. And it only rarely has to pick up the entire deck anyway. Most of the time, it only wants a couple of cards from the middle, and if that's the case, the only issue is, "are they in the MFT, or are they somewhere else"? As long as they're "somewhere else" it doesn't matter if they're contiguous or not.

Quote:
PeterB, it was pretty much acknowledged they stated it would be fragment resistant/immune by MS. It was big news when NT first released & considered a selling point... you said it above, not me, so using our OWN words here:

"Software Companies Lie"

They erred. Not the first time marketers did that.

Except that in this case, in real-world usage, they didn't lie.

Quote:
I never said you could!

Please, don't put words in my mouth! Where did I EVER SAY THAT?

(Quote it verbatim if you would, thanks)

You said it implicitly, by suggesting that fragmentation mattered a damn.

Quote:
You look at this TOO software centric, & not as an entire system w/ BOTH hardware & software!

The performance hit of fragmentation is disk head movements, or extra ones that in unfragmented files, is not present!

No, that's just it. I've tested it considerably (speccing up servers for a client; we needed to know whether defrag tools were worth including, and if so, which ones). After considerable testing (of the sort mentioned earlier) the conclusion was fairly clear -- fragmentation was a non-issue (if the database was such that it was really being harmed by fragmentation, it had probably outgrown the disks it lived on).

Quote:
Microsoft is not divided into SEPARATE companies & will not be. They are STILL MS, Office 2k still uses WinFax SYMANTEC/NORTON technology.

They are not a cohesive company. There are divisions within MS, and the aims of different parts of the company are quite separate.

Quote:
Uptime to many companies IS the crucial factor... especially when EVERY SECOND OF IT IS MILLIONS!

No. Not uptime. Availability. Uptime is a dick size contest. Availability is what makes you money.

Quote:
You SHOULD if you can afford it! Like getting a raise, most IS dept's world wide don't get the monies say, Marketing does for example!

If you *NEED* something, you make sure you can afford it.

Quote:
There you go again, YOUR opinion. Pete, it's not the only one man!

This isn't "opinion". A UI that puts black text on a black background is objectively a bad UI.

Quote:
The ability or presence of the ability in Diskeeper to defrag NTFS disks with over 4k clusters in versions 1-6se.

It's not important; the only version that made such partitions with any regularity was NT 3.1.

Quote:
Really? I kept an NT 4.0 box running 1.5 years, without ANY maintenance, running a WildCat 5.0 NT based GUI BBS on it in Atlanta Ga. when I lived there... it can be done, hence 'Microsoft's 'insistence' on running servers with dedicated BackOffice apps one at a time per each server on them.

Yeah, it can be, but you can't guarantee it. That machine [obviously] needed service-packing, for instance.

Quote:
And, PeterB? Even IBM System 390's are not perfect! They can do the 4-5 9's ratings & go without fail for as long as 20 years, but they DO crash!

*Exactly*. This is precisely why a company that *NEEDS* uptime *cannot* afford to have only a single computer.

Quote:
If it fits more conditions for you? Then, naturally, for your use patterns?? It's better FOR YOU! Pete, there you go again man: Your ways & tastes are NOT the only ones! I value system uptime here, I hate reboots!

So do I. But ya know what? If I had to reboot the machine each time I had lunch, it wouldn't be the end of the world.

Quote:
(Want to know why? I have a pwd that is over 25 characters long & strong complexity characteristic of pwd is engaged here mixed case & alphanumeric! Try that sometime tell me you don't dislike uptime then!)

Type faster. :P

Welp, DrPizza seems like a pretty sharp guy, and this part:

 

"Yes, I can.

 

What you've ignored is that the disk controller physically isn't capable of picking up the entire deck at once, no matter how it's organized. It can pick up a few cards, then it has to wait, then a few more, then it has to wait, and so on. And it only rarely has to pick up the entire deck anyway. Most of the time, it only wants a couple of cards from the middle, and if that's the case, the only issue is, "are they in the MFT, or are they somewhere else"? As long as they're "somewhere else" it doesn't matter if they're contiguous or not."

 

seems like a brilliant point that he is making. I can understand his perspective, and it is indeed a good one. I have just seen fragmentation take performance into the toilet on many systems, and I know the importance of staying on top of these issues. In many cases, I rarely have to defragment my primary workstation as I don't add and remove large files and/or large numbers of files. Therefore, there are not as many "holes" scattered about being annoying. Using his logic, there would not be any perceivable decrease in performance for the average "power user" that might upgrade or simply reformat his/her system once every 1 or 2 years. I have had workstations that I have defragged after 18 months of operation that don't get a huge performance boost, since there wasn't a large amount of fragmentation to begin with. On partitions with large amounts of file activity (like testing/development workstations and servers), steady control of fragmentation does help out with performance greatly on my systems. But it is nice to see a well-written argument to this, and I hope to see more of this activity from DP in the future.

Quote:
The difference? I don't see it at work in something I can use. He does not do GUI development as he stated above, showing me he is limiting himself based on his principles alone. GUI is where the money is, GUI is what users want!

ha ha ha.

It's funny that I make more money than the guys doing the work on the front-ends, then, isn't it?

Quote:
We've seen it, evidently, he has not. He is as you say, probably working with workstations, exclusively, & not large transaction processing based systems is why & ones that create ALOT of temp tables etc. & scratch areas. Creating & Deleting files is a killer, & causes this in conjunction with append or insert queries in the database realm at least from my experience.

No, 'fraid not. I gave rough details of the testing I've done; doing real queries on real databases (amongst other things).

Large sequential transfers are incredibly rare, pure and simple.


Welp, this has been interesting (and long). The only thing I was going to reply to is from APK:

 

"I know you are also a member at Ars Clutch."

 

I am not. I haven't been there in a couple of years or so. The last time I was at Ars Technica was to read a hardware review.


Sorry folks. Missed putting my two cents worth into this thread - I was out of town on business...

 

Couple of confirmations and clarifications:

 

PerfectDisk - because it uses the Microsoft defrag APIs - is limited on NT4 and Win2k to defragmenting NTFS partitions with a cluster size of 4k or less. This is enforced by the defrag APIs. With Windows XP, this restriction goes away. Any defragmenter that can online defragment partitions with a cluster size >4k is NOT using the MS defrag APIs. If Diskeeper 7.0 is able to defragment partitions on non-Windows XP systems where the cluster size is >4k, then Diskeeper is no longer using the MS defrag APIs - something that they (and to be honest - we as well) have been vocal about in positioning the product.

 

In all of the "discussions" about fragmentation, what most people have lost sight of is that defragmenters work at the LOGICAL cluster level - NOT at the PHYSICAL cluster level! The file system works at the LOGICAL cluster level. The hard drive controller works at the PHYSICAL cluster level and does all of the LOGICAL-to-PHYSICAL mapping. All any defragmenter can do is ensure that a file is LOGICALLY contiguous - which means that only 1 LOGICAL request has to be made to access the file. That is where the performance benefit comes into play. The fact that a file is LOGICALLY contiguous is no guarantee that it is PHYSICALLY contiguous on the actual hard drive.
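For anyone who wants to see that LOGICAL view directly, here is a small sketch (the path is a placeholder and error handling is trimmed) that asks the file system for a file's extent list -- the virtual-cluster to logical-cluster mapping -- via FSCTL_GET_RETRIEVAL_POINTERS, one of the FSCTLs the defrag APIs are built on. A file that comes back as a single extent is logically contiguous, whatever the drive has done with it physically.

/* Dump a file's logical extent list (VCN -> LCN mapping). The path is a
 * placeholder. A very small file resident in the MFT, or one with more
 * extents than fit in the buffer, would need extra handling. */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("C:\\somefile.dat", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    STARTING_VCN_INPUT_BUFFER in = {0};         /* start at VCN 0 */
    union {
        RETRIEVAL_POINTERS_BUFFER rp;
        BYTE raw[64 * 1024];
    } out;
    DWORD got = 0;

    if (DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS, &in, sizeof in,
                        &out, sizeof out, &got, NULL)) {
        LONGLONG vcn = out.rp.StartingVcn.QuadPart;
        printf("%lu logical extent(s):\n", (unsigned long)out.rp.ExtentCount);
        for (DWORD i = 0; i < out.rp.ExtentCount; i++) {
            printf("  VCN %lld -> LCN %lld, %lld clusters\n",
                   vcn, out.rp.Extents[i].Lcn.QuadPart,
                   out.rp.Extents[i].NextVcn.QuadPart - vcn);
            vcn = out.rp.Extents[i].NextVcn.QuadPart;
        }
    }
    CloseHandle(h);
    return 0;
}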

 

Fragmentation is only an issue when you go to access a fragmented file. If a file is fragmented and is never accessed, then who cares!! However, it is easy to prove that fragmentation causes slower file access times.

 

For those that are interested, defragmentation has been identified as a KEY issue in terms of performance for Windows XP. Microsoft is recommending frequent defragmentation in order to keep WinXP running at peak speed.

 

- Greg/Raxco Software - maker of PerfectDisk. I work as a systems engineer for Raxco Software

