05-11-2009 04:38 AM
What is the best RAID stripe size for the X25-E and X25-M?
Will the existing line of Intel SSDs be compatible with the Windows 7 TRIM command? If so, when is this likely to happen?
If TRIM is going to be supported, will it also be supported in RAID configurations?
Is there any update on the new 34nm SSD product line? When will specs be available?
Will the 34nm technology use the same controller or will it be a new controller?
EDIT:
I can now answer part of my own question. After extensive testing with IOmeter I have concluded that a 128k stripe size works best for RAID 0 in the vast majority of cases, and certainly for normal OS use. That is based on test results from two different controllers and stripe sizes ranging between 16k and 1024k. (Tests on one controller were limited to 256k due to a limitation of that controller.) This involved a lot of work for something that could easily have been explained.
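To see why stripe size matters here, the mapping below is a minimal sketch (not the original IOmeter test) of how a RAID-0 request maps onto member disks; the two-disk array and the specific offsets are assumptions for illustration. A typical 4 KB OS request stays on one disk at a 128k stripe, while small stripes chop larger transfers into many per-disk pieces.

```python
# Sketch: which RAID-0 member disks a request touches, given a stripe size.
# Disk count, offsets, and sizes are illustrative assumptions, not measured data.
def raid0_disks_touched(offset, length, stripe_size, n_disks):
    """Return the set of member disks serving the byte range [offset, offset+length)."""
    first_stripe = offset // stripe_size
    last_stripe = (offset + length - 1) // stripe_size
    return {s % n_disks for s in range(first_stripe, last_stripe + 1)}

KB = 1024

# A 4 KB OS-style read lands on a single disk with a 128 KB stripe:
print(raid0_disks_touched(offset=40 * KB, length=4 * KB,
                          stripe_size=128 * KB, n_disks=2))  # {0}

# A 256 KB sequential read spans both members at 128 KB stripes,
# getting the bandwidth benefit without splitting small I/Os:
print(raid0_disks_touched(offset=0, length=256 * KB,
                          stripe_size=128 * KB, n_disks=2))  # {0, 1}
```

The intuition matches the test result above: a larger stripe keeps small random I/Os whole on one drive while still striping big transfers, which is why 128k tends to win for normal OS use.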
Intel....
You are currently dealing with enthusiasts in your new SSD market, people who are interested in the technology and want to know as much as possible about it. Why take the anonymous corporate attitude with your new and exciting product line? Why the party line of saying nothing about such an exciting product? (Even something as basic as letting people know they should be using AHCI and not IDE mode.)

MS seems to have learnt that this is not the way to go, judging by all the fantastic work they have done with Windows 7. Enthusiasts are raving about Windows 7, and that is going to help it launch to the mainstream with maximum impact.
Every now and then it is good to throw your (loyal) dog a bone 😉
06-25-2009 01:02 AM
You can't find information about the 'move' because there is no 'move'. What's the point of this adapter? Fine, you get full duplex to the adapter, but from there on you are using SATA's half duplex exactly as if you were connected directly, except now you're adding a few nanoseconds of delay to the whole process of caching, translating SAS to SATA commands, and writing. And as a bonus, you'll be creating more opportunities for operational and compatibility problems.
SATA drives should work as SATA drives. If you want SAS SSDs, all you have to do is wait. http://www.toshiba.com/taec/news/press_releases/2009/memy_09_552.jsp
OJ
06-25-2009 08:07 AM
Hi Jeff_rys, of course I knew about the SAS and FC interface SSDs, but those belong to the truly industrial products, and their cost is not justified unless what you need is a super-duper durable SSD. Are you working in space, where you might collide with some UFOs, so that your storage systems require 10-20 G shock resistance? Granted, EMC chose them for their quarter-million-dollar SSD SAN, where you could build something similar or faster for less than 10% of what they charge. STEC is the name, but they can't be bothered to sell their stuff on the street like the mass-production drives from Intel/OCZ.
As for PCIe SSDs... I just realised another sad story. Instead of a truly parallel design, they just grab an XScale IOP348 chip at 1200 MHz or 800 MHz, solder on the SSDs, bypass the SAS connection and force it all into a particular form factor. So underneath it is still a bunch of SSDs (mostly 4x MLC) connected to a RAID controller and channelled through PCIe. A very disappointing and lazy design. My advice is to buy the smallest one first to try it out. There are very cheap products like Photofast from Taiwan (USD 4.5K for 1 TB), and there are also very expensive products from TexasMemorySystems (USD 18K for 450 GB).
But how much are we talking about to build a decent, durable SSD array with an Adaptec RAID card plus 16x X25-M? About USD 6K for 1.1 TB of effective storage. You can easily resell that setup in any combination on the market; try reselling a PCIe SSD card.
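The cost argument above can be made explicit with a quick cost-per-GB comparison. The prices and capacities below are the poster's own figures quoted in this thread, not official list prices:

```python
# Cost per GB for the three options mentioned above.
# All prices/capacities are the poster's rough 2009 estimates.
options = {
    "Adaptec RAID + 16x X25-M":  (6000, 1100),   # USD, effective GB
    "Photofast PCIe":            (4500, 1000),
    "TexasMemorySystems PCIe":   (18000, 450),
}

for name, (usd, gb) in options.items():
    print(f"{name}: ${usd / gb:.2f}/GB")
# Roughly $5.45/GB, $4.50/GB, and $40.00/GB respectively.
```

On these numbers the DIY array is in the same price band as the cheap PCIe card while staying resellable part by part, and both are an order of magnitude cheaper per GB than the high-end PCIe product.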
All I want to know is when Intel will issue the "dual port dongle". Whether it really works or not, we'll only know once we try it.
SLC is not the ONE; it's just too small to be used in any server environment, and you will end up buying many, many drives because of the shortage in capacity. You might as well take half of the investment and use Intel MLC to do the trick, run it for three years, and then sell it on the market to get back some of your investment.
06-26-2009 02:47 AM
hi Redux,
I must add here: was it because a 64 KB stripe was used that the Adaptec failed to perform? After all, they claim that 256 KB is best for performance, although for the Intel drives maybe 128 KB would be best.
Jeff
07-07-2009 03:54 PM
I've just dumped my 5405 and switched to onboard RAID. Why? The last firmware update from Adaptec failed to solve the latency issue or deal with the fast ramp-up speed of these drives. It slows the drives down, and the only reason it's not that noticeable is that the cache on the 5405 covers it up. Don't believe me? http://www.alex-is.de/PHP/fusion/news.php Run this benchmark and check the access time and 4K-64Thrd scores. On the 5405 with two X25-Es I get 100 MB/s reads and 77 MB/s writes, with access times of 0.203 and 0.105 ms. Using onboard RAID I get around 280 MB/s read/write and an access time of 0.09 ms. The 5405 is optimised for HDDs, not SSDs, and the latest firmware update from Adaptec failed to change that.
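A back-of-envelope check on those access times: at queue depth 1, each I/O must wait for the previous one, so the measured latency caps IOPS at roughly 1/latency. This is a rough sketch using the read access times reported above, not an exact model of the benchmark:

```python
# Queue-depth-1 IOPS ceiling implied by a measured access time.
# Latencies are the AS SSD read access times quoted in the post above (ms).
def qd1_iops(latency_ms):
    """Upper bound on serial (QD1) IOPS for a given per-I/O latency."""
    return 1000.0 / latency_ms

print(f"5405 controller read:  {qd1_iops(0.203):,.0f} IOPS")  # ~4,926
print(f"onboard RAID read:     {qd1_iops(0.09):,.0f} IOPS")   # ~11,111
```

So the extra latency the controller adds costs more than half the drives' small-I/O responsiveness before throughput even enters the picture, which is consistent with the 4K scores described.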
07-07-2009 10:18 PM
Redux, don't "dump" it!!! Please give it to me!!!!
By the way, would you mind sharing your system setup? The new firmware was supposed to iron out some bugs with Nehalem chips. Have you tried older firmware versions? Did you report it to Adaptec?