05-11-2011 03:23 AM
Can this be true?
http://translate.google.com/translate?hl=en&sl=zh-CN&u=http://www.pceva.com.cn/html/2011/storgetest_... This site ran an experiment to see how much data could be written to an X25-V before the NAND wore out. They wrote to the device 24/7. (It seems they paused once every 12 hours to run TRIM from the toolbox.)
Translation makes it hard to understand exactly how they ran the experiment, but they state:
Class A (sequential writes?)
Class B (random writes?)
In total they were able to achieve 0.73PB in 6,185 hours! That is a phenomenal amount of writes, which appears to be way over the spec of 20GB of host writes per day for a minimum of 5 years.
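Doing the math myself (rough, not from either article, and using decimal TB/PB), that works out to roughly 20 times the total writes the datasheet promises:

# Back-of-the-envelope check of the first test's numbers (my own arithmetic,
# decimal units: 1 PB = 1000 TB, 1 TB = 1000 GB).
total_written_tb = 0.73 * 1000          # 0.73 PB expressed in TB
days = 6185 / 24.0                      # ~257.7 days of continuous writing
print(total_written_tb / days)          # ~2.8 TB of host writes per day

spec_total_tb = 20 * 365 * 5 / 1000.0   # 20 GB/day for 5 years = 36.5 TB
print(total_written_tb / spec_total_tb) # ~20x the rated lifetime total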
Here is another one: http://botchyworld.iinaa.net/ssd_x25v.htm -- 0.86PB in 7,224 hours!
Does that come down to the workload, or are Intel's specs extremely conservative?
05-12-2011 05:23 AM
The first link (Mandarin) indicates that after their test, they had a total of 0x5D (93) reallocated blocks. The second link (Japanese) indicates that after their test, they had a total of 0xAA (170) reallocated blocks.
I wish I could get some clarification from actual Intel engineers on whether SMART attribute 0x05 on the X25-V, X25-M, 320-series, and 510-series represents actual 512-byte LBA reallocations/remappings (like on a mechanical HDD), or whether it represents reallocated/remapped NAND flash erase blocks (I'd need to know the block size; usually 256KByte or 512KByte). I'd really love it if one of the Intel engineers chimed in here with something concrete. It would benefit the smartmontools project as well.
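For what it's worth, here's how I pull the raw value of attribute 5 on a Linux box with smartmontools installed. Just a quick sketch of mine -- the device node /dev/sda is an assumption, and it simply takes the last column of attribute 5's row in smartctl -A output as the raw value:

# Sketch: read the raw value of SMART attribute 5 (Reallocated_Sector_Ct)
# using smartmontools. Assumes smartctl is in PATH and the SSD is /dev/sda.
import subprocess

def reallocated_count(device="/dev/sda"):
    out = subprocess.check_output(["smartctl", "-A", device]).decode()
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with the attribute ID; the raw value is the
        # last whitespace-separated column on that row.
        if fields and fields[0] == "5":
            return int(fields[-1])
    return None

print(reallocated_count())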
Regardless, this proves that wear levelling does in fact work quite well -- but what also matters is how much data they're writing to the drive. Wear levelling becomes less and less effective the fewer free (unused or erased -- yes, there is a difference) NAND flash pages there are. E.g. a drive with a filesystem that's 95% full is going to perform horribly given the lack of free pages to apply wear levelling across.
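To illustrate, here's a crude toy model I hacked up (entirely my own simplification -- it is NOT how Intel's controller actually works): a greedy garbage collector that, once it runs out of free blocks, erases the full block with the fewest still-valid pages and rewrites the survivors. The extra NAND writes (write amplification) it reports climb sharply as the drive fills up:

# Toy flash-translation-layer model (my own gross simplification, NOT how
# Intel's controller works) showing why wear levelling / garbage collection
# gets expensive as free space disappears. Greedy GC: erase the full block
# with the fewest still-valid pages and rewrite the survivors into it.
import random

PAGES_PER_BLOCK = 128

def write_amplification(num_blocks, fill_fraction, host_writes=200000):
    used = [0] * num_blocks                      # slots consumed since last erase
    valid = [dict() for _ in range(num_blocks)]  # slot -> logical page
    mapping = {}                                 # logical page -> (block, slot)
    free_blocks = list(range(1, num_blocks))
    open_block = 0
    nand_writes = 0

    def next_open_block():
        nonlocal nand_writes
        if free_blocks:
            return free_blocks.pop()
        # Garbage collection: pick the full block with the fewest valid pages,
        # erase it, and copy its surviving pages back in. These copies are the
        # "extra" NAND writes that make up write amplification.
        full = [b for b in range(num_blocks) if used[b] == PAGES_PER_BLOCK]
        victim = min(full, key=lambda b: len(valid[b]))
        survivors = list(valid[victim].values())
        valid[victim] = {}
        used[victim] = len(survivors)
        for slot, lp in enumerate(survivors):
            valid[victim][slot] = lp
            mapping[lp] = (victim, slot)
            nand_writes += 1
        return victim

    def write_page(lp):
        nonlocal open_block, nand_writes
        if lp in mapping:                        # invalidate the old copy
            old_b, old_s = mapping[lp]
            del valid[old_b][old_s]
        if used[open_block] == PAGES_PER_BLOCK:
            open_block = next_open_block()
        slot = used[open_block]
        used[open_block] += 1
        valid[open_block][slot] = lp
        mapping[lp] = (open_block, slot)
        nand_writes += 1

    logical_pages = int(num_blocks * PAGES_PER_BLOCK * fill_fraction)
    for lp in range(logical_pages):              # fill the "filesystem" once
        write_page(lp)
    nand_writes = 0                              # then measure random overwrites
    for _ in range(host_writes):
        write_page(random.randrange(logical_pages))
    return nand_writes / host_writes

for fill in (0.50, 0.80, 0.95):
    print("%2d%% full -> write amplification ~%.1f"
          % (fill * 100, write_amplification(64, fill)))

Real drives also keep spare area the host never sees, which helps for exactly this reason.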
I would have been more impressed had these guys not used TRIM at all. Many present-day OSes do not implement TRIM on some filesystem layers -- major server operating systems like Solaris, OpenSolaris, and FreeBSD don't issue TRIM for ZFS, for example -- and in other cases (UFS on FreeBSD) every single LBA is TRIM'd during a delete operation, which we know is inefficient. So I'd be truly impressed if these numbers came from systems not using TRIM at all.
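As an aside, for anyone on Linux who wants to check whether their device even advertises TRIM/discard before worrying about whether the filesystem uses it, the block layer exposes this in sysfs. Quick sketch of mine; "sda" is just a placeholder and the attribute only exists on reasonably recent kernels:

# Sketch: does the block device advertise TRIM/discard support? On Linux
# (roughly kernel 2.6.33 and later) the queue exposes discard_max_bytes;
# a value of 0 means no discard support.
def supports_discard(dev="sda"):                 # "sda" is just a placeholder
    path = "/sys/block/%s/queue/discard_max_bytes" % dev
    try:
        with open(path) as f:
            return int(f.read().strip()) > 0
    except IOError:                              # attribute absent on old kernels
        return False

print(supports_discard("sda"))

Whether the filesystem actually sends TRIM down to the device is a separate question, of course, which is exactly my point above.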
Still a neat couple of sites, but like I said..... 🙂