SSDs More Reliable In Long-Term Test

By John Lister

Newly-published tests suggest solid state drives (SSDs) are much more reliable in the long term than traditional drives. The tests only cover a specific use case, but it's one that could let users find a good mix of cost and performance.

SSDs work in a similar way to USB memory sticks, with data accessed through a flow of electricity. It means they don't have any moving parts, unlike a hard drive that spins round and has a moving arm to access data, a little like a vinyl record player.

That brings several key advantages - most notably that SSDs are quicker to access data. They are also significantly less susceptible to physical damage and parts wearing out. And they aren't as reliant on the physical arrangement of the data to boost efficiency, hence not needing defragmentation.

The downside is that SSDs are still considerably more expensive than traditional drives of the same capacity. That means any comparison between the two may depend on how significant the greater resilience is in reality.

Real World Testing

Most testing of hard drives involves reading and writing to them continuously for a short period, meaning a much faster pace of use than is normal in the real world.

Researchers then usually take the failure rates and figure out how long it would take average users to read or write to the disk that number of times and then extrapolate the expected lifespan.

However, an online storage and backup company called BackBlaze has been tracking its use of both SSDs and hard drives to find out how long they last in the real world. While its use pattern isn't exactly the same as that of ordinary business or consumer users, it's a lot closer than what happens in lab testing.

BackBlaze says that across the first four years of use, SSDs were consistently more reliable than traditional drives, though both showed a similar trend of increasing failure rates in the second year and then little change through to the fourth year.

Year Five Brings Surprise

The big surprise came in the fifth year, when the failure rate for traditional drives almost doubled while the failure rate for SSDs actually fell. At this point the chances of an SSD failing during the year were under one percent, compared with almost four percent for traditional drives.

BackBlaze will continue to monitor the results in future years. That should make for interesting findings, as in previous tests traditional drives have become increasingly unreliable between five and eight years of use.

One important caveat is that the test specifically looked at drives used as boot disks, meaning they housed the operating system and were used when the computer started up. That does mean the results could be good news for users who've chosen to solve the cost vs performance dilemma by having a smaller SSD as a boot disk alongside a larger, cheaper traditional drive to house data and documents.

What's Your Opinion?

How reliable have you found hard drives? Do you look at reliability test results when buying a drive? How long do you expect a hard drive to last?



olds97_lss writes:

I've had 1 SSD fail and 3 HDDs fail in the past 8 years. The SSD was an Intel brand 140GB, I believe. I do a full drive backup pretty frequently as an image to an HDD on another computer so I can easily rebuild the drive if necessary. I use SSDs as the primary drive on all 4 of my computers (3 Windows, one Ubuntu) and I create a full image of the Windows drives once a month or so. Haven't figured out a way to do that with my Ubuntu laptop... but I just tinker on it.

The HDDs were primarily just storage throughout that time. One was an external drive, the other 2 internal. All relatively large, 6TB-8TB. I use them as my media server's storage drives, so they don't get a ton of writing other than the initial full copy. I keep a full external clone of those drives as well in case one dies. One WD (internal), one Seagate (external) and one (internal) I can't recall the brand...

I use my primary computer to rip/convert any Blu-ray/DVD that I buy, then use a script to copy the file to the media server's internal 10TB drive and an external 8TB drive, so I always have 2 copies in case one or the other dies on me. Way too much time and work invested to risk losing it by not keeping a current backup.

Dennis Faas writes:

If you have 3 drives (minimum) of the same size, you can build a RAID 5 array; if one drive dies, the other two are used to rebuild the dead drive using parity bits, which are striped across all volumes. Just insert a new drive and voila, the data is once again fault tolerant, with no loss of data on the remaining two. You could save a lot of storage doing it this way, though all drives need to be internal and attached to a RAID controller.

Note that with 3 x 8TB in RAID 5, you would only have 16TB of available space because the remaining space is used to calculate parity bits. Using this method, it doesn't matter which drive dies. I have 8 x 2TBs set up this way, and 1 x 512GB SSD set up as a cache. The controller I use is LSI 9265-8i with cachecade purchased separately for the SSD cache, though it's not necessary but drastically improves performance.
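The parity idea above comes down to XOR arithmetic, and can be sketched in a few lines of Python. This is only an illustration, assuming three equal-size "drives" held as byte strings; a real RAID 5 controller works on fixed-size stripes and rotates the parity block across all drives, and the names here are invented for the example:

```python
# Minimal sketch of RAID 5-style parity, assuming three equal-size
# "drives" represented as byte strings. Names are illustrative only.

def make_parity(block_a: bytes, block_b: bytes) -> bytes:
    """XOR two equal-length data blocks into a parity block."""
    return bytes(a ^ b for a, b in zip(block_a, block_b))

def rebuild(surviving: bytes, parity: bytes) -> bytes:
    """Recover a lost block by XORing the survivor with the parity."""
    return bytes(s ^ p for s, p in zip(surviving, parity))

drive1 = b"data-block"                 # data drive 1
drive2 = b"more-bytes"                 # data drive 2
parity = make_parity(drive1, drive2)   # what the parity drive holds

# Simulate drive1 dying: rebuild its contents from the other two.
recovered = rebuild(drive2, parity)
assert recovered == drive1
```

Because XOR is its own inverse, it doesn't matter which single drive is lost: XORing the two survivors always reproduces the missing one, which is why only one drive's worth of capacity goes to parity.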

In the last 20 years I've done it like this, I've never lost any data - so long as you keep the RAID fully redundant (i.e. all drives present).