Return to Solid State

Kyle Rankin

Issue #210, October 2011

Are modern SSDs worth the price if you use Linux?

Three years ago, I first reviewed an SSD (solid-state drive) under Linux (www.linuxjournal.com/article/10094). At the time, I had an ultra-portable laptop with a 4200rpm hard drive that really bogged down the performance on what was otherwise a pretty snappy little machine. Although there definitely were SSD reviews around, I didn't notice many comprehensive reviews for Linux. Instead of relying strictly on benchmarks, I decided to focus more on real-world tests. In the end, I saw dramatic increases in speed with the SSD compared to my 4200rpm drive.

That may have been true back then, but what about today? For example, one thing that always bothered me about my first comparison was the fact that at the time, I had only a 4200rpm 1.8" drive available to me, and I was limited by my ATA/66 bus speed. My new laptop, a Lenovo ThinkPad X200s, came with a 7200rpm 2.5" SATA drive, and ever since I got the laptop, I've been curious to repeat my experiment with modern equipment. How would a modern SSD hold up to a modern 7200rpm SATA drive in real-world Linux use? Recently, Intel was kind enough to provide me with a review unit of its new 320 SSD line, a follow-up to the X25 SSD line, so I decided to repeat my experiments.


My Testing Methodology

As in the previous review, I focus mostly on real-world performance tests, but I still throw in some raw benchmark numbers for those of you in the crowd who are curious. Where it made sense, I ran multiple tests to confirm I got consistent results, and I report the best performance for each drive here. Also, when I was concerned about file caching skewing results, I booted the machine from scratch before a test. The 7200rpm drive is a 160GB Fujitsu MHZ2160B, and after its tests, I transferred an identical filesystem to the 160GB Intel 320 SSD.
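As a side note for anyone repeating tests like these: a full reboot is the surest way to defeat the page cache, but on 2.6.16 and newer kernels, you also can drop the caches between runs without rebooting. This is only a suggestion on my part; the timings in this article came from fresh boots wherever caching was a concern:

$ sync
$ echo 3 | sudo tee /proc/sys/vm/drop_caches

Writing 3 to drop_caches flushes the page cache along with the dentry and inode caches, so the next run reads from the disk itself.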

Test 1: GRUB to Log In

I'll be honest, I actually don't boot my laptop all that much. My battery life is good enough that I usually just close the laptop lid when I'm not using it; it suspends to RAM, and I resume my session later. That said, distributions such as Ubuntu have focused on boot times in the past couple of releases, and my 7200rpm drive seemed to boot Ubuntu 10.04 pretty fast, so I was curious whether I even would see an improvement with the SSD. I used a stopwatch to measure the time from pressing Enter at the GRUB prompt to when I saw the login screen. The boot process is both processor- and disk-intensive, and although the 7200rpm drive was fast, it turns out there still was room for improvement:

  • 7200rpm: 27 seconds

  • SSD: 16 seconds
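If you want more detail than a stopwatch can give, Ubuntu also packages bootchart, which profiles the boot process and renders a chart of where the time goes. I mention it only as an option; all the boot numbers here are stopwatch-based:

$ sudo apt-get install bootchart

After the next reboot, look for the rendered chart, typically under /var/log/bootchart.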

Test 2: Log In to Desktop

The next logical test was to time how long it takes from the login screen until reaching a full, functioning desktop. In my case, that meant I started the stopwatch after I typed my password and pressed Enter, and I stopped the stopwatch once my desktop loaded and my terminals and Firefox appeared on the screen. In this case, the SSD really stood out by loading my desktop in less than half the time:

  • 7200rpm: 22 seconds

  • SSD: 9 seconds

Test 3: hdparm

For the next test, I ran hdparm with its traditional -Tt benchmarking options on both drives. The -T option measures cached reads (essentially memory and bus throughput), and -t measures buffered reads from the disk itself. Although hdparm isn't as full-featured as Bonnie++, the fact that it outputs only those few metrics definitely makes it easier to do a comparison:

7200rpm:

$ sudo hdparm -Tt /dev/sda6
/dev/sda6:
 Timing cached reads:   6748 MB in  1.99 seconds = 3383.99 MB/sec
 Timing buffered disk reads:  220 MB in  3.01 seconds =  73.08 MB/sec

SSD:

$ sudo hdparm -Tt /dev/sda6
/dev/sda6:
 Timing cached reads:   7168 MB in  1.99 seconds = 3595.46 MB/sec
 Timing buffered disk reads:  500 MB in  3.00 seconds = 166.48 MB/sec

As you can see in these results, the SSD was a bit faster with cached reads but not by as much of a margin as in the past tests. That said, the buffered disk reads, again, were about twice as fast. This far along in the tests, I started to notice a pattern: the SSD seemed to be about two times as fast as my 7200rpm drive for most of the tests. The real question for me was whether it would maintain that kind of performance through the rest of the tests.

Test 4: Bonnie++

Although hdparm can be useful for getting basic hard drive performance metrics, when you want detailed information and a larger range of tests, Bonnie++ is the program to use. The downside, of course, is that you get a huge amount of benchmark data to wade through. I won't go through it metric by metric. Instead, I show the output from the commands below and talk about only a few highlights.
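For reference, a typical Bonnie++ run looks something like this; the target directory and user name below are placeholders you would substitute with your own:

$ sudo bonnie++ -d /mnt/test -u kyle

The -d option points Bonnie++ at a directory on the filesystem under test, and -u tells it which user to run as when it's invoked as root (Bonnie++ refuses to run as root otherwise).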

7200rpm (output modified to fit):

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
 186  99 71090  17 33138  19  1251  97 83941  17 175.2   4
Latency
 49232us    1079ms     759ms   54918us     188ms     294ms

------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
 /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 258   2 +++++ +++   369   3   345   3 +++++ +++   323   3
Latency
  314ms     633us     150ms     151ms     486us     398ms

SSD (output modified to fit):

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
 187  99 164010  40 85149  32  1325  99 297390 60  4636 124
Latency
 52047us     423ms     336ms   21716us   12432us    9048us

------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
 /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 9079  56 +++++ +++ 10451  63  8292  60 +++++ +++  7043  61
Latency
 5177us     283us    5218us   10723us      30us   16179us

Okay, that is a lot of data to go through, so let's just pick out a few metrics to highlight. What I found interesting was that except for the tests where the CPU was a limiting factor (like the Per-Character tests), the SSD seemed to have dramatically better performance than the 7200rpm drive. The SSD is around twice as fast as the 7200rpm drive in the Sequential Output Block and Rewrite tests (71,090 K/sec vs. 164,010 K/sec and 33,138 K/sec vs. 85,149 K/sec, respectively). Switching to reads and random seeks, though, there's a much wider gap, such as in the Sequential Input Block test (83,941 K/sec vs. 297,390 K/sec), and with random seeks (a known strength for SSDs), there's almost no comparison: 175.2 seeks per second versus 4,636.

All of that said, it's with the file-creation tests that the SSD performance really shines. In sequential file creation, where the 7200rpm drive can create and delete 258 and 369 files per second, respectively, the SSD goes through 9,079 and 10,451 files per second. The same goes for random file creation, with the 7200rpm drive's create and delete scores of 345 and 323 files per second compared to the SSD's 8,292 and 7,043 files per second. This is one good reason why I have included so many real-world tests in this review. After all, you could look at some of these numbers and conclude that with the SSD your system might be 20 times faster, yet it's how these numbers apply in real applications that ultimately matters.
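One tip for wading through that wall of numbers: Bonnie++ also can emit its results as a single CSV line (the -q flag sends only the CSV to standard output), and the bon_csv2html tool that ships with it turns that into a readable HTML table. A sketch, using the same placeholder paths as before:

$ sudo bonnie++ -d /mnt/test -u kyle -q | bon_csv2html > results.html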

Test 5: VirtualBox Torture Test

The next test I decided to perform was one that I felt might highlight some of the performance advantages of the SSD: virtualization. These days, it's more and more reasonable to use an ordinary laptop as a platform to run multiple virtual machines, and although I knew that having multiple VMs on at the same time had the tendency to slow down my system greatly, I wondered whether the SSD might show an improvement. To try to quantify this, I invented the following test. I set up two identical VirtualBox VMs pointed at the same Ubuntu Server .iso file. Both systems were given their own dynamically allocated virtual hard drives. Then, I booted the VMs, and at the boot screen, I pointed them to a kickstart file I created to automate the Ubuntu Server installation process. I started the stopwatch the moment I pressed Enter on one of the VMs, then quickly started the second at the same time. I didn't stop the stopwatch until the last system rebooted and showed my login screen. What I figured was that having both VMs read from the same .iso file and write to separate virtual disks on the same physical hard drive would be a good torture test for the drive. To avoid any issues with file caches, when it came to the SSD, I created all new VMs with new disks. Here are the results:

  • 7200rpm: 11 minutes, 7 seconds

  • SSD: 10 minutes, 32 seconds

I admit these results really surprised me. I was expecting a much more dramatic difference, but it turned out that although the test really sounded like one that would work out the hard drives, in both cases, it was my dual-core CPU that got the workout. It goes to show that disk I/O isn't always the bottleneck, and sometimes a faster drive doesn't make everything faster.
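If you want to reproduce a setup like this from the command line rather than through the VirtualBox GUI, something along these lines should do it. This is only a sketch; the VM name, memory size, disk size and .iso filename are all placeholders:

$ VBoxManage createvm --name bench1 --ostype Ubuntu --register
$ VBoxManage modifyvm bench1 --memory 512
$ VBoxManage createhd --filename bench1.vdi --size 8192
$ VBoxManage storagectl bench1 --name SATA --add sata
$ VBoxManage storageattach bench1 --storagectl SATA --port 0 \
 --device 0 --type hdd --medium bench1.vdi
$ VBoxManage storageattach bench1 --storagectl SATA --port 1 \
 --device 0 --type dvddrive --medium ubuntu-server.iso
$ VBoxManage startvm bench1

Repeat with a second name (bench2, say) for the other VM, attaching the same .iso file, so both installs read from one image while writing to separate virtual disks.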

Test 6: Filesystem Traversal

For the final test, I wanted to compare how fast both drives allowed me to traverse the filesystem. In my February 2008 column (www.linuxjournal.com/article/9944), I talked about how to clean up space on your hard drive using the “duck command” or:

$ cd /
$ du -ck | sort -n

This command traverses the entire filesystem and calculates how much space is being taken up by the files in each directory recursively. When it's done, it creates a tally that shows you not only which overall directories take up the most space, but also which directories within them are the real culprits. Because the sort command wouldn't exercise the disks at all, I skipped it and just timed how long it took to run du -ck starting from /.
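A minimal way to capture those timings looks like this (the redirect is my addition, so time spent printing to the terminal doesn't pollute the numbers):

$ cd /
$ time du -ck > /dev/null

Because time is a shell keyword in bash, its report still lands on your terminal even with du's output redirected. The results: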

7200rpm:

real	5m25.551s
user	0m2.700s
sys	0m18.589s

SSD:

real	0m41.663s
user	0m2.040s
sys	0m13.453s

This result also really surprised me. Based on some of my other tests, I thought the SSD would be faster, but only by around a factor of two. I definitely didn't expect the SSD to complete the test in an eighth of the time. I think much of this must come down to the dramatically better random-seek performance, as du spends its time statting files scattered all over the disk rather than reading large files sequentially.

Conclusion

When it comes to SSDs, most people these days realize that the performance comes at a price. The question is, is it worth it? Honestly, that's a difficult question to answer. After all, in a good deal of my tests, I did see two times or better performance out of the SSD, and in some cases, dramatically better performance. I also must admit that I saw other, more subtle improvements. For example, when my network backup job ran on my 7200rpm drive, I definitely would notice as my system slowed to a crawl. In fact, sometimes I'd even kill the rsync and run the backup later, because I could barely browse the Web. On the SSD, I didn't even notice when the backup ran. While I'm on the subject of performance, I should note that the Intel 320 series of drives not only has a wide range of prices depending on what size drive you get, but according to Intel's product page, the lower-capacity drives actually tout reduced speeds.

This particular 160GB drive currently retails for around $300 on-line. Although it's true that's a lot of money for 160GB of storage, honestly, I tend to store bulk files on a file server at home. Back when 32GB SSDs were the affordable norm, you had to make some storage sacrifices for speed, but at this point, I am comfortable with 160GB in my laptop. It really does just come down to price. You will have to make your own judgment based on the price versus the performance; however, one factor that makes me lean toward the SSD is how much extra life it gives to my current laptop. The processor and RAM in this laptop are more than adequate for my daily needs, and really, any performance issues I've ever had with the machine could be traced back to the storage. I expect that with the increased performance of an SSD, I could get another few years out of a laptop I've already had for quite some time, a laptop that would cost much more than the price of an SSD to replace.

Kyle Rankin is a Sr. Systems Administrator in the San Francisco Bay Area and the author of a number of books, including The Official Ubuntu Server Book, Knoppix Hacks and Ubuntu Hacks. He is currently the president of the North Bay Linux Users' Group.