How to Stress Test the Hard Drives in Your PC or Server

Which of your hard drives is the fastest, and is it really as fast as the manufacturer promised? Whether you have a desktop PC or a server, Microsoft’s free Diskspd utility will stress test and benchmark your hard drives.

NOTE: A previous version of this guide explained using Microsoft’s old “SQLIO” utility. However, Microsoft now only offers the “Diskspd” utility, which replaces SQLIO, so we’ve updated this guide with brand new instructions.

Why Use Diskspd?

If you want to know the IO capability of your drives, Diskspd makes an excellent tool. Diskspd will tell you the maximum IO load a server’s hard drives can handle, or point you at the fastest hard drive you should use for heavy workloads (or just demanding PC gaming) on a desktop PC.

As an example, let’s suppose that we have three drives on a server: an F drive, G drive and C drive. If we have our MDF on the F drive, the LDF on the G drive and our OS on our C drive, we can evaluate whether our setup is effective. For example, if the MDF file is the busiest file with the most reads and writes, we’d want it to be on the fastest drive.

In the example above, if we graphed the writes and reads for the OS, LDF, and MDF files, we would place our MDF file on the fastest drive, since the MDF is the busiest. If our Diskspd analysis showed that F was our fastest drive, we would place our MDF file on drive F.

Diskspd has been tested to work on desktop versions of Windows 7, 8, 8.1, 10, as well as Windows Server 2012, 2012 R2, and 2016 Technical Preview 5.

Download Diskspd from Microsoft, then extract the contents of the .zip file to a folder on your computer. Note that the archive contains three different “diskspd.exe” files. The one in the “amd64fre” folder is for 64-bit Windows PCs, while the one in the “x86fre” folder is for 32-bit Windows PCs. If you’re using a 64-bit version of Windows, and you probably are, you’ll want to use the 64-bit version.
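If you’re not sure which version of Windows you’re running, one quick check from the Command Prompt uses the standard PROCESSOR_ARCHITECTURE environment variable (note that a 32-bit Command Prompt on 64-bit Windows will report x86, so run this from the normal Command Prompt):

```shell
:: Reports AMD64 on a 64-bit install of Windows, x86 on a 32-bit install
echo %PROCESSOR_ARCHITECTURE%
```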

How Do I Perform a Stress Test?

To perform a single test, you can simply invoke a Diskspd command from an Administrator-enabled Command Prompt. On Windows 10 or 8.1, right-click the Start button and select “Command Prompt (Admin)”. On Windows 7, locate the “Command Prompt” shortcut in the Start menu, right-click it, and select “Run as Administrator”.

First, use cd to switch to the directory containing the Diskspd.exe you want to use:

cd c:\path\to\diskspd\amd64fre

Now, run the Diskspd command with the options you want to use. You’ll find a complete list of command line options and usage information in the 30-page DiskSpd_Documentation.pdf file included in the Diskspd archive you downloaded.

However, if you want to get up and running quickly, here’s an example command. The following command sets the block size to 16K (-b16K), runs a 90 second test (-d90), disables hardware and software caching (-Sh), measures latency statistics (-L), uses two IO requests per thread (-o2) and four threads per target (-t4), uses random access rather than sequential writing (-r), and performs 30% write operations and 70% read operations (-w30).

It creates a file at c:\testfile.dat of 50 MB in size (-c50M). If you wanted to benchmark your D: drive instead, for example, you’d specify d:\testfile.dat instead.

Diskspd.exe -b16K -d90 -Sh -L -o2 -t4 -r -w30 -c50M c:\testfile.dat

After the duration you specify (90 seconds in the above test), the test results will be printed to the Command Prompt and you can view them.

Consult the results and you’ll see the average MB/s the drive reached during the test: how many write operations were performed per second, how many read operations were performed per second, and the total number of input/output (IO) operations per second. These statistics are most useful when comparing multiple drives to see which is faster for certain operations, but they’ll also tell you exactly how much IO a hard drive can handle.

You can also dump the results to a text file you can view later with the > operator. For example, the below command runs the same command as above and places the results in the C:\testresults.txt file.

Diskspd.exe -b16K -d90 -Sh -L -o2 -t4 -r -w30 -c50M c:\testfile.dat > c:\testresults.txt

Repeat this process for your other drives, and compare.
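If you have several drives to test, a small batch file can run the same test against each one and save each set of results. This is just a sketch: it assumes diskspd.exe is in the current folder, that each drive letter listed is valid and writable, and that the drive letters C, F, and G are placeholders for your own.

```shell
:: Run the same 30%-write random-access test against each drive
:: and save each drive's results to its own text file.
:: (In a batch file, use %%D; at an interactive prompt, use %D instead.)
for %%D in (C F G) do (
    Diskspd.exe -b16K -d90 -Sh -L -o2 -t4 -r -w30 -c50M %%D:\testfile.dat > %%D-results.txt
)
```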

If you’re trying to figure out which is the fastest hard drive for a certain workload, you should create a command that best matches that workload. For example, if it’s a server that only reads data and doesn’t write, you should perform a test of 100% reads that doesn’t measure any write performance. Run that stress test across multiple drives and compare the results to see which is faster for that type of work.
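As a sketch of what such a workload-matched command might look like, the line below approximates a read-only, sequential workload (think of a server that streams large files). The -w0 and -s switches are documented Diskspd options for 100% reads and sequential access; the block size, queue depth, and 1 GB test file size here are illustrative choices, not recommendations from the Diskspd documentation.

```shell
:: 100% sequential reads (-w0 -s), 64K blocks, 60 second run, 1 GB test file
Diskspd.exe -b64K -d60 -Sh -L -o8 -t4 -s -w0 -c1G c:\testfile.dat
```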

Note that there are many, many other command line options you can specify for Diskspd.exe. You’ll find the most complete, up-to-date list in the documentation that comes with the downloaded Diskspd.exe file itself, but here are some important options:

• -w denotes percentage of write and read operations. For example, entering -w40 will perform 40% write operations and thus 60% read operations. Entering -w100 will perform 100% write operations. Omitting the -w switch or entering -w0 will perform 0% write operations and thus 100% read operations.
• -r or -s determines whether the test uses random access or sequential operations. Specify -r for random access or -s for sequential. This helps you test for either random file access (often a bunch of small files) or sequential file access (often one large file that’s read or written all at once).
• -t denotes number of threads that will be run at the same time, such as -t2 for two threads or -t6 for six threads.
• -o denotes number of outstanding requests per thread, such as -o4 for four requests or -o2 for two requests.
• -d is the duration of the tests in seconds, such as -d90 for 90 seconds or -d120 for 120 seconds.
• -b is the block size of the reads or writes, such as -b16K for a 16K block size or -b64K for a 64K block size.

Using these options, you can tweak the benchmark command to see how your disk performs under varying loads. Once you’ve written a command that you feel approximates the type of workload you perform on your PC, you can stress test several drives and see which offers the best performance.
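For instance, you could run the same test twice, once lightly loaded and once with a deep queue, to see how the drive scales under pressure. The thread and queue-depth values below are arbitrary examples, not recommended settings:

```shell
:: Light load: one thread, one outstanding request per thread
Diskspd.exe -b16K -d60 -Sh -L -o1 -t1 -r -w30 -c50M c:\testfile.dat

:: Heavy load: eight threads, sixteen outstanding requests per thread
Diskspd.exe -b16K -d60 -Sh -L -o16 -t8 -r -w30 -c50M c:\testfile.dat
```

If the heavy run sustains far more total IO per second than the light run, the drive (an SSD, typically) benefits from queued, parallel requests; if throughput barely changes, you’ve likely found the drive’s ceiling.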

Chris Hoffman is a technology writer and all-around computer geek. He's as at home using the Linux terminal as he is digging into the Windows registry.

• Published 07/23/16
• Jamie

A great start to an article about drives, and then it mentions partition letters. If someone is claiming to be a technician and writing about drives, I would expect them to know the difference, and to reflect that in their article. It's rather like asking which is the fastest plane and then measuring how fast each can go on the ground: it does not compute!

Now, about determining the maximum capacity of a server's hard drives, and the fastest of them: as a 'user' you are not likely to have any means to determine which of the server's drives contain which part of your data, and it will probably be spread over a RAID set of drives anyhow! And if you are trying to manage a server rather than a PC, you are probably looking at a 'hive' of hundreds of drives, so C, D, and E are not drives but 'file-store volume sets' if the system is using Windows, since Windows is limited to 2 'floppy' and 24 other storage device letters.

Now, I accept the article is basically about stress testing the processing of reads and writes on a partition of a drive, and if that is what you want to do, then OK.

However, consider that reads and writes to a drive take different times depending on the data density at the part of the drive being used, and the definition of a partition will limit which parts of the drive are used. So, with the OS partition at the 'start' of the drive, expect its I/O to be faster than data processed at the 'end' of the drive, simply because there will probably be more data on an outer track, and a track takes the same time to pass under a head regardless of where on the rotating drive it is.

Then consider that the OS will be reading and writing data from the application into the area of memory allocated as cache for the device/partition, and that will then be passed to, or fetched from, the drive's cache, from which it will have been read, or will be written (and verified), onto the drive platter surface.

So, three partitions on one drive: (C:) the OS with the pagefile, maybe 5,000 files; (D:) the main data store, hundreds of thousands of files, with entries in the MFT on that partition; and a final partition (E:) with a couple of thousand large files, so a small MFT.

Want to access the data of a file on E:? First find the MFT entry, a scan of MFT data that's already in memory; then get the drive to move the heads all the way to that part of the drive, read the data into its cache, and pass that data to the cache managed by the OS. Ah! We need to get some memory for allocation as that cache area, so let's expand the pagefile. Hmm, a couple of thousand reads and writes of the MFT on the OS partition area later, the pagefile is expanded and some memory content written to it. NOW the data can be taken in from the drive's cache. Oh, that drive cache was re-used for the OS's work to expand and then write to the pagefile, so let's go re-read the data from the partition E area. Net processing: maybe 2,000 accesses on the partition C area to get in a 200 MB block of data from the partition E area. Real-time performance of E: pitiful!

Now consider renaming a file on partition D: not actually accessing the data at all, just working through maybe 6 GB of MFT to find the entry, and then tweaking the entries in that MFT to have the file details 'arranged' appropriately. That would have used the main memory cache area allocated for the partition D MFT, so probably no expansion of the pagefile, but lots of reading of the partition D MFT. Renaming a file on D: can mean more I/O than writing a new 10 MB file.

The moral of the above: while you can determine the relative speed of access to partitions, the interaction of the OS and its usage of entries on the other partitions can have far more effect on throughput than the actual data being processed would lead you to expect.

Then consider that cache management is usually on the basis that the block that has not been referenced for the longest time is the prime candidate for re-use. So with 1,000 blocks and a 900-block MFT to process, it all fits: read once and just re-use. Add enough files to double the number of entries in the MFT (OK, just 15% more blocks) and the cache isn't big enough to hold all the entries, so the longest-unused block is re-used for the new one. Ah! We want that discarded block, so discard another block of data and reacquire the first. Repeat, lots of times...

Yes, just adding the 1,000 or so small file entries associated with browsing a site can add seconds to the time needed to access every file.

That's why good (as in usually accurate) performance analysts are so valued, especially those who know that making more partitions on a drive may affect throughput, but only for specific workload mixes.
