03-08-2010 08:36 PM
http://cafe.naver.com/intelssd.cafe
The link above is for the semi-official Intel SSD online community in Korea; it is managed by a Korean employee at Intel Korea.
This person had a meeting with an engineer from Intel USA.
During the presentation, he saw a table confirming that when a 160GB G2 is partitioned to 160GB/144GB/128GB of usable space, leaving the rest unpartitioned,
there was a performance gain in 4K: 15/42/53. I personally assume this is for 4K reads.
The engineer also mentioned that RST 9.6 or above supports TRIM.
I will update this thread once I confirm these things with him.
03-11-2010 03:11 AM
Those are two very big assumptions. Writes are likely to be random small files, and consequently write amplification is an issue. What is the erase count on the G2's? 5K?
03-11-2010 03:53 AM
NandFlashGuy:
If an X25-M G2 160GB SSD has a 5-year lifetime with a 20GB/day workstation workload,
is it correct to assume an X25-M G2 80GB SSD has a 2.5-year lifetime with the same 20GB/day workstation workload?
The 160GB SSD has 15TB of random write lifetime and 370TB of sequential write lifetime (from the endurance presentation).
The same 160GB SSD has a 5-year lifetime with a 20GB/day workstation workload (from its specs).
5 years at 20GB/day is 5 * 365 * 20 = 36,500GB ≈ 35TB of lifetime writes.
Is it correct to assume (for the 160GB X25-M) that if I set the max size to 144GB capacity (42TB of random write lifetime, 2.8 * 15TB = 42TB), I'll increase the 160GB SSD's lifetime to:
2.8 * 5 = 14 years at 20GB/day
2.8 * 35TB = 98TB of lifetime writes
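The arithmetic above can be sketched in a few lines. The 2.8x factor is my reading of the presentation's random-write figures (42TB at 144GB usable vs. 15TB at 160GB usable); the assumption that the datasheet lifetime scales by the same factor is mine, not Intel's:

```python
# Sketch of the endurance scaling argument above.
# Assumption: total lifetime writes scale linearly with the presentation's
# random-write lifetime figures (15TB at 160GB usable, 42TB at 144GB usable).

GB_PER_DAY = 20
SPEC_YEARS = 5

# Datasheet-derived lifetime for the 160GB drive at full capacity
spec_lifetime_gb = SPEC_YEARS * 365 * GB_PER_DAY   # 36,500 GB, roughly 35 TB

# Scaling factor from the endurance presentation: 42 TB / 15 TB
scale = 42 / 15                                    # = 2.8

print(spec_lifetime_gb)                 # 36500
print(round(scale * SPEC_YEARS, 1))     # 14.0 years at 20 GB/day
print(round(scale * 35))                # 98 TB of lifetime writes
```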
03-11-2010 05:17 PM
@Mr Wolf:
Intel's datasheet says both the 80GB and the 160GB have a 5-year, 20GB-per-day spec. This conflicts with the basic idea that the 160GB drive has twice the NAND, hence you should be able to write to it roughly twice as fast or for twice as long. You can assume that Intel is sandbagging on the 160GB specs.
The difference between the random write and sequential write numbers that you cite from the presentation is a reflection of the internal write amplification under different usage conditions. A "real" consumer workload is a mix of random and sequential writes, which is why the datasheet spec is between the "ideal" values for sequential and random.
The "lifetime" benefit you will see from changing capacity is a function of your usage model. Hence, I don't think you can accurately predict write amplification vs. capacity without in-depth knowledge of the workload and the firmware. But for a typical consumer usage model (i.e. not a server), the write amplification is already quite low, so there will be negligible endurance benefit from reducing your capacity.
Keep in mind the presentation was written with enterprise endurance in mind, where the write amplification can be much higher than normal.
03-12-2010 04:21 AM
Thanks for the advice, I agree with your arguments.
My workstation profile is below 20GB/day (the 5-year lifetime spec), plus I have a 160GB drive with probably twice that lifetime.
03-12-2010 11:41 AM