SSD NAND Endurance

idata
Esteemed Contributor III

Can this be true?

http://translate.google.com/translate?hl=en&sl=zh-CN&u=http://www.pceva.com.cn/html/2011/storgetest_... This site ran an experiment to see how much data could be written to an X25-V before the NAND expired. They wrote to the drive 24/7 (it seems they paused once every 12 hours to run TRIM from the Toolbox).

Translation makes it hard to understand exactly how they ran the experiment, but they state:

Class A (sequential?):

  • 10~100 KB: 80% (system thumbnails and the like)
  • 100~500 KB: 10% (JPG images and the like)
  • 1~5 MB: 5% (large pictures, MP3s and the like)
  • 5~45 MB: 5% (video clips and the like)

Class B (random?):

  • 1~100 bytes: 100% (system logs and the like)

In total they were able to achieve 0.73 PB of writes in 6,185 hours! That is a phenomenal amount, which appears to be way over the spec of 20 GB of host writes per day for a minimum of 5 years.

Here is another one: http://botchyworld.iinaa.net/ssd_x25v.htm achieved 0.86 PB in 7,224 hours!
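
For a sense of scale, here is a rough back-of-the-envelope sketch of those numbers (my own calculation, assuming decimal units, i.e. 1 PB = 1,000,000 GB; the sites may have counted in binary units, so treat it as an estimate only):

    # Compare the reported totals against Intel's stated endurance spec
    # (20 GB of host writes per day for 5 years). Decimal units assumed.
    spec_total_gb = 20 * 365 * 5                  # ~36,500 GB over 5 years

    tests = {
        "pceva.com.cn":          (0.73e6, 6185),  # (GB written, hours of writing)
        "botchyworld.iinaa.net": (0.86e6, 7224),
    }

    for site, (written_gb, hours) in tests.items():
        gb_per_day = written_gb / (hours / 24)
        print(f"{site}: ~{gb_per_day:,.0f} GB/day sustained, "
              f"~{written_gb / spec_total_gb:.0f}x the total 5-year spec")

Even with conservative assumptions, both runs wrote on the order of 2,800 GB per day, and 20x or more the total amount of host writes the 5-year spec works out to.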

Does that come down to the workload, or are Intel's specs extremely conservative?


idata
Esteemed Contributor III

The first link (Mandarin) indicates that after their test they had a total of 0x5D (93) reallocated blocks. The second link (Japanese) indicates that after their test they had a total of 0xAA (170) reallocated blocks.

I wish I could get some clarification from actual Intel engineers on whether SMART attribute 0x05 on the X25-V, X25-M, 320-series, and 510-series represents actual 512-byte LBA reallocations/remappings (like on a mechanical HDD), or whether it represents reallocated/remapped NAND flash pages or erase blocks (I'd need to know the size; erase blocks are usually 256 KByte or 512 KByte). I'd really love it if one of the Intel engineers chimed in here with something concrete. It would benefit the smartmontools project as well.
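
To show why the unit matters, here is a quick, hypothetical comparison (my own sketch; the 512-byte LBA and 512 KByte block sizes are just the two interpretations guessed at above, not anything confirmed by Intel):

    # The same raw counts from the two tests (0x5D and 0xAA) imply very
    # different amounts of remapped NAND depending on what attribute 0x05
    # actually counts. Unit sizes below are assumptions, not Intel specs.
    counts = {"pceva test": 0x5D, "botchyworld test": 0xAA}

    for name, count in counts.items():
        as_lbas   = count * 512              # one 512-byte LBA per reallocation
        as_blocks = count * 512 * 1024       # one 512 KByte NAND block per reallocation
        print(f"{name}: {count} reallocations -> "
              f"{as_lbas / 1024:.1f} KiB if LBAs, "
              f"{as_blocks / 1024 ** 2:.1f} MiB if 512 KByte blocks")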

Regardless, this proves that wear levelling does in fact work quite well -- but what also matters is how much data they're writing to the drive. Wear levelling becomes less and less effective the fewer free (unused or erased -- yes, there is a difference) NAND flash pages there are. E.g., a drive with a filesystem that's 95% full is going to perform horribly given the lack of free pages to apply wear levelling to.
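
As a toy illustration of that last point (hypothetical numbers, not Intel specs; I'm assuming a few GB of spare area purely for the sake of the example, and glossing over the fact that without TRIM the drive can't even know which LBAs the filesystem considers free):

    # Toy sketch: how much NAND the controller has left to rotate writes
    # through as the filesystem fills up. All capacities are illustrative.
    user_capacity_gb = 40     # X25-V user capacity
    spare_area_gb    = 3      # assumed over-provisioning, for illustration only

    for pct_full in (50, 80, 95):
        rotatable_gb = user_capacity_gb * (1 - pct_full / 100) + spare_area_gb
        print(f"filesystem {pct_full}% full -> ~{rotatable_gb:.0f} GB left to "
              f"spread writes across")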

I would have been more impressed had these guys not used TRIM at all. Many present-day OSes do not implement TRIM on some filesystem layers (for example, ZFS on major server operating systems such as Solaris, OpenSolaris, and FreeBSD), and in other cases (UFS on FreeBSD) every single LBA is TRIM'd during a delete operation (which is inefficient, we know). Therefore, I'd be truly impressed if these numbers were from systems not using TRIM at all.

Still a neat couple of sites, but like I said..... 🙂