05-11-2011 03:23 AM
Can this be true?
http://translate.google.com/translate?hl=en&sl=zh-CN&u=http://www.pceva.com.cn/html/2011/storgetest_... This site ran an experiment to see how much data could be written to an X25-V before the NAND expired. They wrote to the device 24/7. (It seems they paused once every 12 hours to run TRIM from the Toolbox.)
Translation makes it hard to understand exactly how they ran the experiment, but they state:
Class A (sequential writes?)
Class B, the random write category (random writes?)
In total they were able to achieve 0.73 PB in 6,185 hours! That is a phenomenal amount of writes, which appears to be way over the spec of 20 GB of host writes per day for a minimum of 5 years.
Here is another one: http://botchyworld.iinaa.net/ssd_x25v.htm 0.86 PB in 7,224 hours!
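To put numbers on "way over spec", here's a quick sanity check in Python (the 20 GB/day-for-5-years figure is Intel's stated minimum mentioned above, and the 0.73 PB figure is from the pceva.com.cn test):

# Back-of-envelope comparison of observed writes vs. the endurance spec.
spec_total_tb = 20 / 1000 * 365 * 5    # 20 GB/day for 5 years ~= 36.5 TB
observed_tb = 0.73 * 1000              # 0.73 PB ~= 730 TB actually written
print(observed_tb / spec_total_tb)     # -> ~20x the minimum endurance spec

So the drive absorbed roughly twenty times its minimum rated host writes before the NAND expired.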
Does that come down to the workload, or are Intel's specs extremely conservative?
05-17-2011 03:55 AM
Re: XS: Something is up, and I'm not sure I can trust his results. Here's why:
SMART attributes 0xe2 (226; Workload Media Wear Indicator), 0xe3 (227; Workload Host Reads Percentage), and 0xe4 (228; Workload Minutes) are all a raw value of 0xffff (65535). This doesn't mean anything to most people, but it does mean something to those of us familiar with the X25 series of drives and the 320 series: there's a special SMART test vendor code (0x40) you can submit to the X25 and 320 series which will clear some (but not all) SMART stats. Specifically, it resets the 3 above attributes to value 0xffff. After a few minutes of the drive having these attributes reset, they start incrementing/behaving normally again. The X25-V behaves identically in this regard to the X25-M, the X25-E, and the 320 series.
In every screen shot that fellow has posted so far, those attributes remain at 0xffff.
I don't know if something seriously wonky is going on because he's using a Kingston drive (firmware may differ? Unsure; I do see he's running the 02HD firmware rather than the 02M3 firmware), if he's intentionally clearing them every time, or if he's editing the screenshots. I really don't know. If someone else has a Kingston model with the exact same firmware and can confirm the drive always keeps those 3 attributes at value 0xffff, that would be enough for me to believe I'm wrong.
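If someone wants to script that check, here's a rough sketch that polls the three workload attributes through smartmontools and reports whether they stay pinned at 0xffff. It assumes smartctl is installed and on the PATH, that the drive is /dev/sda, and that the raw value is the last column of "smartctl -A" output; adjust as needed. (For reference, smartctl can also issue vendor-specific self-test subcommands such as the 0x40 reset via its "-t vendor,N" option.)

# Minimal sketch: watch attributes 226/227/228 and report whether they
# remain at the reset value 0xffff (65535).
import subprocess
import time

DEVICE = "/dev/sda"                    # adjust for your system
WORKLOAD_ATTRS = {226, 227, 228}
RESET_VALUE = 0xFFFF

def read_workload_attrs(device):
    """Return {attribute_id: raw_value} parsed from 'smartctl -A'."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    values = {}
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit() and int(fields[0]) in WORKLOAD_ATTRS:
            values[int(fields[0])] = int(fields[-1])
    return values

for _ in range(3):                     # take a few samples over time
    attrs = read_workload_attrs(DEVICE)
    pinned = attrs and all(v == RESET_VALUE for v in attrs.values())
    print(attrs, "<- pinned at 0xffff" if pinned else "<- behaving normally")
    time.sleep(300)                    # the attributes update over minutes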
It would really help if he wouldn't use silly SMART monitoring software like that CrystalDiskInfo crap and instead used smartmontools. I guess if he's running Vista or Windows 7 he doesn't have much of a choice though.
EDIT: Ah, I see he's keeping 12GB of free space (presumably at the end of the SSD) for wear levelling. Okay, that's at least reasonable, and I'm less inclined to refuse acknowledgement of his tests. I'd still like some validation of the Kingston drive behaviour with the above 3 attributes though. Here's an example from a 320 series drive that's been used normally (nothing stressful):
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE    UPDATED WHEN_FAILED RAW_VALUE
226 Workld_Media_Wear_Indic 0x0032 100   100   000    Old_age Always  -           5
227 Workld_Host_Reads_Perc  0x0032 100   100   000    Old_age Always  -           70
228 Workload_Minutes        0x0032 100   100   000    Old_age Always  -           24638

05-17-2011 06:34 AM
The experiment is being run by two different people using the exact same test file. Win 7 is being used so that the drive can benefit from TRIM.
I very much doubt Anvil is clearing SMART info and for sure he will not be editing screenshots to skew results. Both XS members running this experiment are above reproach in this regard.
I agree it would have been better to check the workload for an hour using SMART tools, calculate the projected wear rate, and then periodically monitor progress to see how accurate the SMART info was, but I guess the objective of the test was simply to find out how long the NAND lasted.
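For what I mean by projecting the wear rate, a rough sketch (both readings below are invented for illustration; the real numbers would come from sampling the drive's Media_Wearout_Indicator, attribute 233, over an interval):

# Hypothetical projection: sample the normalized Media_Wearout_Indicator
# (counts down from 100 toward 1) at the start and end of an hour, then
# extrapolate linearly. Both readings here are made-up examples.
wearout_start = 96.0      # normalized value at the start of the hour
wearout_end = 95.0        # normalized value one hour later
wear_per_hour = wearout_start - wearout_end
hours_remaining = (wearout_end - 1.0) / wear_per_hour
print(f"projected ~{hours_remaining:.0f} hours until the indicator bottoms out")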
It would be interesting to know how SMART wear-out values are calculated: from the actual condition of the NAND, or from a predetermined P/E value? If it's the latter, SMART is not that helpful in the context of seeing how long the NAND will last.
EDIT:
It would also be interesting to know how P/E values are calculated. Is it the minimum? The average?
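If the indicator really is driven by a predetermined P/E budget rather than by measured NAND health, the arithmetic would look something like the sketch below. Every constant here is an assumption for illustration (34 nm MLC is commonly quoted at around 5,000 rated cycles; the write amplification figure is a guess for a mostly-sequential torture workload), not Intel's actual algorithm:

# Hypothetical P/E-budget wear model (illustrative, not Intel's method).
capacity_gb = 40            # X25-V usable capacity
rated_cycles = 5000         # assumed rated P/E cycles for 34 nm MLC
write_amp = 1.1             # assumed write amplification for this workload
host_writes_tb = 730        # ~0.73 PB of host writes from the test

nand_writes_gb = host_writes_tb * 1000 * write_amp
avg_pe_cycles = nand_writes_gb / capacity_gb
print(f"~{avg_pe_cycles:.0f} average P/E cycles, "
      f"{100 * avg_pe_cycles / rated_cycles:.0f}% of the assumed budget")

On those assumptions the test burned through roughly four times a 5,000-cycle budget, which would mean a budget-based indicator hits its floor long before the NAND actually fails.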
05-17-2011 08:42 AM
koitsu wrote:
In every screen shot that fellow has posted so far, those attributes remain at 0xffff.
I don't know if something seriously wonky is going on because he's using a Kingston drive (firmware may differ? Unsure; I do see he's running the 02HD firmware rather than the 02M3 firmware), if he's intentionally clearing them every time, or if he's editing the screenshots. I really don't know. If someone else has a Kingston model with the exact same firmware and can confirm the drive always keeps those 3 attributes at value 0xffff, that would be enough for me to believe I'm wrong.
It would really help if he wouldn't use silly SMART monitoring software like that CrystalDiskInfo crap and instead used smartmontools. I guess if he's running Vista or Windows 7 he doesn't have much of a choice though.
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE    UPDATED WHEN_FAILED RAW_VALUE
226 Workld_Media_Wear_Indic 0x0032 100   100   000    Old_age Always  -           5
227 Workld_Host_Reads_Perc  0x0032 100   100   000    Old_age Always  -           70
228 Workload_Minutes        0x0032 100   100   000    Old_age Always  -           24638

Hi, I'm Anvil at the XS forum.
I noticed the 0xFFFF values but didn't really care; the important thing here is host writes, and that has been reported correctly.
The Kingston drive did not support TRIM initially, but let's not get into that on the Intel forum; ask DuckieHo if you need more info.
I downloaded smartmontools from SourceForge and ran it with the -a SDA parameters, and now it has actually cleared/reset those attributes, so here we go.
SMART Attributes Data Structure revision number: 5
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0020 100   100   000    Old_age  Offline -           0
  4 Start_Stop_Count        0x0030 100   100   000    Old_age  Offline -           0
  5 Reallocated_Sector_Ct   0x0032 100   100   000    Old_age  Always  -           2
  9 Power_On_Hours          0x0032 100   100   000    Old_age  Always  -           174
 12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           93
192 Power-Off_Retract_Count 0x0032 100   100   000    Old_age  Always  -           33
225 Load_Cycle_Count        0x0030 200   200   000    Old_age  Offline -           148963
226 Load-in_Time            0x0032 100   100   000    Old_age  Always  -           9856
227 Torq-amp_Count          0x0032 100   100   000    Old_age  Always  -           0
228 Power-off_Retract_Count 0x0032 100   100   000    Old_age  Always  -           1010375343
232 Available_Reservd_Space 0x0033 099   099   010    Pre-fail Always  -           0
233 Media_Wearout_Indicator 0x0032 096   096   000    Old_age  Always  -           0
184 End-to-End_Error        0x0033 100   100   099    Pre-fail Always  -           0

SMART Error Log Version: 1
No Errors Logged

I'll check and post those attributes later at XS.
05-17-2011 04:06 PM
1) smartmontools *absolutely* did not change anything or reset anything. The SMART monitoring utility you've been using, simply put, is buggy or broken. OSes can sometimes cache SMART attribute data -- yes, you read that correctly -- and smartmontools deals with that situation properly. Guaranteed CrystalDiskInfo is broken in this regard, and as such please don't use it for SMART attribute monitoring.
2) I believe you meant to run "smartctl -a /dev/sda" (smartctl lets you omit the /dev/ part, and I'm a little surprised it let you use capitals for the device name). This command, again, does not perform any modification operations on a drive.
3) Please make sure you're using the latest development build of smartmontools, not the stable build. There are attribute decoding improvements for Intel SSDs -- specifically the Intel models -- in the development builds:
http://smartmontools-win32.dyndns.org/smartmontools/
Furthermore -- and this is important for the 320-series drive you plan on testing -- please see this bug report (opened by me) and use the drivedb-add.h file the developer provided (you can place it in the same directory as your smartctl.exe binary).
http://sourceforge.net/apps/trac/smartmontools/ticket/168
You'll know it's working if you see the string "Device is: In smartctl database [for details use: -P show]" near the top of the output.
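If you want to script that check, a minimal sketch (assumes smartctl is on the PATH and the drive is /dev/sda, as above):

# Minimal check that the drive was matched against the smartctl drive
# database (and hence that attribute names will decode properly).
import subprocess

out = subprocess.run(["smartctl", "-i", "/dev/sda"],
                     capture_output=True, text=True).stdout
if "In smartctl database" in out:
    print("drive matched in database; attribute decoding should be correct")
else:
    print("not in database; check that drivedb-add.h is in place")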
Good luck.
05-26-2011 06:57 PM
There are a couple of points to keep in mind in these analyses:
1. Obviously these experiments are emphasizing raw endurance. Part of the endurance spec limitation relates to data retention, which isn't being measured in these experiments.
2. Believe it or not, the measurement method is likely undercounting the raw endurance of the NAND. In these experiments, the users are basically cycling the SSD (and NAND) as fast as possible. However, the faster the NAND cycles, the less time it has to "anneal".
Here is a link to a paper discussing cycling and distributed cycling:
http://www.usenix.org/event/hotstorage10/tech/full_papers/Mohan.pdf
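As a purely illustrative toy model of that annealing/recovery effect (the functional form and constants below are invented for the sake of the example; see the paper above for measured models), the idea is that endurance gain grows with the idle time between P/E cycles:

import math

# Invented toy model: oxide damage partially heals during idle time
# between program/erase cycles, so slowly cycled NAND survives more
# cycles than NAND hammered 24/7 -- which is why a continuous torture
# test tends to undercount real-world endurance.
def endurance_multiplier(idle_seconds, k=0.12, t0=1.0):
    return 1.0 + k * math.log(1.0 + idle_seconds / t0)

for idle in (1, 60, 3600, 86400):   # torture test -> light desktop use
    print(f"{idle:>6d} s between cycles -> {endurance_multiplier(idle):.2f}x")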