12-02-2013 04:13 AM
Hello, everyone:
I bought an Intel SSD 530 120GB for my laptop several days ago, and it has been working well under Windows 8.1 Pro x64.
When I looked at the NAND writes, however, something confused me.
The situation is as follows:
The SSD holding the OS is the first (primary) disk, and the HDD is the second. I moved the caches of IE, Chrome and Firefox to the hard drive, using IE's own setting or the mklink command, and verified that this works. So while a browser is running, its cache write traffic should in theory land on the HDD partition, and I confirmed this with the system's Resource Monitor and with Diskmon from the Microsoft website. When I streamed several movies in each browser separately, with no other activity, there was plenty of write traffic on the HDD partition and only a little data written to the system disk (SSD), as expected. Yet while each test (one per browser) increased Total Host Writes by less than 200 MB, which is normal for a running system, it also consumed about 3 GB of the SSD's Total NAND Writes according to CrystalDiskInfo 6.0.1. I got the same result with the latest Intel SSD Toolbox, AIDA64 3.20 and CrystalDiskInfo 6.0.1. In effect, the write traffic that the browser caches produce on the HDD appears to be counted into the SSD's Total NAND Writes.
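For anyone who wants to reproduce the setup, the mklink step looks roughly like the sketch below. The Chrome cache path and the D:\BrowserCache target are only illustrative examples for my configuration; close the browser first and adjust the paths to yours.
rem the cache contents can be regenerated, so drop the old folder and leave a junction behind
rmdir /S /Q "%LOCALAPPDATA%\Google\Chrome\User Data\Default\Cache"
mkdir "D:\BrowserCache\Chrome"
mklink /J "%LOCALAPPDATA%\Google\Chrome\User Data\Default\Cache" "D:\BrowserCache\Chrome"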
Actually I'm not worried about the SSD's wear; I'm sure it won't reach its rated lifespan under normal use before the next generation of products arrives. But this accidental discovery confuses me, and the result above makes me doubt the common advice that moving the IE/Chrome or system cache to another drive saves SSD wear.
Q: What causes this strange behavior: the drivers, a system bug, poor support for an old mainboard, the system's settings/configuration, or some special system logging?
Test configuration:
ThinkPad R400 (GM45 chipset) / P8700 / 8 GB RAM / Intel 530 SSD + Hitachi 7K500 / Intel 5300 AGN / Windows 8.1 Pro x64 with the default configuration and drivers, except that the Superfetch service was turned off manually.
I can confirm, all at the same time, that the browser caches (IE, Chrome, Firefox) are located on the HDD, that the write traffic goes to the HDD, and that the SSD's NAND writes still increase massively.
Thanks for your help.
12-10-2015 05:09 PM
Hello ales.fila,
The difference between Total Host Writes and Total NAND Writes depends on the type of data and the workload of each system. In this last case the difference is large, so you might want to consider the recommendations mentioned in previous posts and also run TRIM manually on a schedule using the Intel® SSD Optimizer in the Intel® SSD Toolbox.
For better assistance, please obtain the SSD log from the Toolbox, then attach the resulting file to this thread using the Advanced Editor. Also, let us know the type of tasks/workload for these drives.
12-11-2015 01:58 AM
Rule of thumb for the 530 SSD with Windows 10: multiply the "Power-On Hours Count" by 10 GB, add that to the "Host Writes", and you get a rough estimate of the "Total NAND Writes". This holds even on an idle system with no workload, which is NOT acceptable. Either that, or there is some sort of bug in the "Total NAND Writes" counter.
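To make the formula concrete, here is the arithmetic with made-up example values (not readings from an actual drive):
hours=2000       # "Power-On Hours Count" (example value)
host_gb=1500     # "Host Writes" in GB (example value)
echo $(( host_gb + 10*hours ))   # ~21500 GB estimated "Total NAND Writes"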
Again: please, please, Intel, explain why we see this on the 530 series and not on the 520 series.
12-11-2015 03:10 AM
Yeah, how do you explain that the NAND Writes keep increasing by at least 1 GB/hr even when there is less than 1 GB of Host Writes over at least 4 hours?
Honestly, has this ever been passed on to your engineers? Do you still care about it? Or do you consider the 530 EOL'd and will not fix it no matter what defect is in the firmware? If that's the case I will just stop wasting my time.
I don't believe your guys could fail to notice the unusual NAND Writes if they had ever tested the drive even briefly in Windows.
EDIT:
Before doing a fresh install of Windows 10 1511 from ISO for this test, I even fully TRIM'd the drive in Linux with blkdiscard.
I am also aware that the drive won't be truly fully TRIM'd unless the range(s) in the ATA commands are 4k-aligned, since the minimum erasable block is apparently 4 KiB / 8 sectors (you can try to TRIM 7 or 65535 sectors and check by hexdump'ing the drive), so I have been using the following to do the full TRIM:
step=$(( (65535 - 65535 % 8) * 512 )); blkdiscard -p $step /dev/sdX
so that blkdiscard TRIMs 65528 sectors per range, one range per ATA command. (With hdparm and some scripting one can do 65528 sectors per range and 64 ranges per ATA command, which fully uses the maximum 512-byte payload allowed by the drive, but not with blkdiscard and the current kernel implementation.)
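In case anyone is curious, the hdparm variant I'm alluding to would be something along these lines. This is only an untested sketch: /dev/sdX is a placeholder, it wipes the entire drive, and whether hdparm packs all 64 ranges into a single TRIM command may depend on the hdparm version.
dev=/dev/sdX                         # placeholder for the drive to wipe
total=$(blockdev --getsz "$dev")     # drive size in 512-byte sectors
step=65528                           # largest 8-sector-aligned range length
lba=0
while [ "$lba" -lt "$total" ]; do
    ranges=""
    for i in $(seq 64); do           # up to 64 ranges = one full 512-byte payload
        [ "$lba" -ge "$total" ] && break
        n=$step
        [ $((lba + n)) -gt "$total" ] && n=$((total - lba))
        ranges="$ranges $lba:$n"
        lba=$((lba + n))
    done
    hdparm --please-destroy-my-drive --trim-sector-ranges $ranges "$dev"
done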
12-11-2015 07:44 AM
So I switched back to Linux on this drive:
https://ptpb.pw/RXcj
And here is a recent 4-hr idling test in Linux:
https://gist.github.com/tomty89/c918a7d2b45c106a4435/revisions?diff=split
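(For anyone who wants to collect a similar idle log, a loop like the one below is enough, assuming smartctl is installed. The attribute IDs 9, 241 and 249 for power-on hours, host writes and NAND writes are an assumption on my part; check the smartctl -A output for what your firmware actually reports, and replace /dev/sdX with your drive.)
# poll the relevant SMART attributes once an hour while the machine idles
while true; do
    date
    smartctl -A /dev/sdX | awk '$1 == 9 || $1 == 241 || $1 == 249'
    sleep 3600
done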
12-14-2015 09:10 AM
Another twenty-hour idling test with Windows on the drive:
https://gist.github.com/tomty89/e2d9920bed6699c171e7/revisions?diff=split
Host Writes is (220453-220447)*32 = 192 MiB, NAND Writes is 6 GiB, so the write amplification is 32x. Are you really going to keep on acting like you didn't see any of our posts?
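To spell the arithmetic out (per the figures above, Host Writes is reported in 32 MiB units and NAND Writes in GiB):
echo $(( (220453 - 220447) * 32 ))                 # 192 MiB written by the host
echo $(( 6 * 1024 / ((220453 - 220447) * 32) ))    # 32x NAND-to-host write amplification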