Would Write Performance Improve if a Reformatted Hard-Drive was Filled with Zeroes?


If you are going to reformat a hard-drive, is there anything that would ‘improve’ write performance afterward, or is it something you should not even worry about? Today’s SuperUser Q&A post has the answers to a curious reader’s questions.

Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

Photo courtesy of Chris Bannister (Flickr).

The Question

SuperUser reader Brettetete wants to know if filling a hard-drive with zeroes would improve write performance:

I have a 2TB hard-drive that was 99 percent full. I deleted the partitions with fdisk and formatted the drive as ext4. As far as I know, the actual data is still on the hard-drive, although the partition table has been rewritten.

My question is: Would it improve the write performance for further write actions if the hard-drive was clean? By ‘clean’, I mean filling the hard-drive with zeroes, something like:

  • dd if=/dev/zero of=/dev/sdx bs=1 count=4503599627370496

Would filling the hard-drive with zeroes improve write performance?
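As an aside on the command itself: the invocation quoted above uses bs=1, which copies one byte per read/write call and would take an extremely long time on a 2TB drive. A more practical sketch uses a large block size and lets dd run to the end of the device. The snippet below demonstrates the idea against a small scratch file; /dev/sdX in the comment is a placeholder device name, and writing to a real device destroys its contents.

```shell
# Demonstration against a small file instead of a real drive.
# For an actual drive you would write to the device itself, e.g.:
#   dd if=/dev/zero of=/dev/sdX bs=4M status=progress
# (double-check the device name first -- this is irreversible).
dd if=/dev/zero of=/tmp/demo.img bs=4M count=2 status=none

# Confirm the result: 2 blocks of 4 MiB = 8388608 bytes of zeroes.
stat -c %s /tmp/demo.img   # 8388608
```

With a large block size, dd issues far fewer system calls, so throughput is limited by the drive rather than by per-call overhead.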

The Answer

SuperUser contributor Michael Kjörling has the answer for us:

No, it would not improve performance. HDDs do not work like that.

First, when you write any given data to a rotational drive, it gets transformed into magnetic domains that may actually look very different from the bit pattern you are writing. This is done in part because it is much easier to maintain synchronization when the pattern read back from the platter has a certain amount of variability. For example, a long string of ‘zero’ or ‘one’ values would make it very hard to maintain synchronization. Have you read 26,393 bits or 26,394 bits? How do you recognize the boundary between bits?

The techniques for doing this have evolved over time. For example, look up Modified Frequency Modulation (MFM), Modified MFM (MMFM), Group Code Recording (GCR), and the more general family of run-length limited (RLL) encodings.

Second, when you write new data to a sector, the magnetic domains of the relevant portions of the platter are simply set to the desired value, regardless of what the previous magnetic domain ‘was’ at that particular physical location. The platter is already spinning under the write head; first reading the current value and then writing the new value only if it differed would require two revolutions per write (or an extra head for each platter), either doubling write latency or greatly increasing the complexity of the drive, in turn increasing cost.

Since the limiting factor in hard-drive sequential I/O performance is how quickly each bit passes under the read/write head, pre-zeroing the drive would not offer any benefit to the user. As an aside, the limiting factor in random I/O performance is how fast the read/write head can be positioned at the desired cylinder, plus how long it takes for the desired sector to arrive under the head. The major reason why SSDs can be so fast in random I/O workloads is that they completely eliminate both of these factors.
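If you want to see that sequential throughput ceiling for yourself, a quick way is to time a large sequential read with dd. The sketch below times a read of a scratch file so it is safe to run anywhere; against a real drive you would read the raw device instead (e.g. sudo hdparm -t /dev/sda, or dd from the device), and the device name is whatever your system assigns.

```shell
# Create a 32 MiB scratch file, then time a sequential read of it.
dd if=/dev/zero of=/tmp/seq.img bs=1M count=32 status=none

# dd reports bytes copied, elapsed time, and throughput on stderr;
# the final line is the transfer-rate summary.
dd if=/tmp/seq.img of=/dev/null bs=1M 2>&1 | tail -n 1
```

A file in the page cache will report unrealistically high numbers; reading the raw device (or dropping caches first) gives figures closer to the platter’s actual bit-rate under the head.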

As pointed out by JakeGould, one reason why you might want to overwrite the drive with some fixed pattern (such as all zeroes) would be to ensure that no remnants of previously stored data can be recovered, either deliberately or accidentally. But doing so will not have any effect on the hard-drive’s performance going forward, for the reasons stated above.
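For that sanitization use case, GNU coreutils ships shred, which handles the overwriting for you. A minimal sketch, demonstrated here on a scratch file (for a real drive you would point it at the device, e.g. sudo shred -n 0 -z /dev/sdX, which is destructive):

```shell
# Fill a 1 MiB scratch file with random data, standing in for old drive contents.
head -c 1048576 /dev/urandom > /tmp/wipe.img

# -n 0 skips the random-data passes; -z adds a single final pass of zeroes.
shred -n 0 -z /tmp/wipe.img
```

After this runs, the file is the same size but contains only zero bytes, which is exactly the ‘fill with zeroes’ operation from the question, done for privacy rather than performance.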


Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

Akemi Iwaya is a devoted Mozilla Firefox user who enjoys working with multiple browsers and occasionally dabbling with Linux. She also loves reading fantasy and sci-fi stories as well as playing "old school" role-playing games. You can visit her on Twitter.