Two dead 750 SSDs (2 hours & 2 days)

pbrow3
New Contributor

Hi,

I bricked one 750 SSD after about 2 hours of on-time. The second SSD, its replacement, died after a few days.

I'm kinda wondering what's happening here.

The 1st drive died before I could really use it and has been replaced. The second was working fine as my root file system and then failed on system power-on. I'm waiting to return this device; I'm not getting a replacement.

I'm running a Linux system, kernel 4.2.0, on a P9X79 motherboard, which has 3 PCIe 3.0 slots. In both cases the SSD was running fine and then failed to respond on power-up.

Are there any Linux s/w tools to query what the problem is?

8 REPLIES

ASouz7
Valued Contributor

pbrow3,

We are going to check this further. Meanwhile, you can take a look at these links:

http://www.intel.com/support/ssdc/hpssd/ssd-750/sb/CS-035484.htm Intel® SSD 750 Series — Tested Motherboards

http://www.intel.com/support/ssdc/hpssd/ssd-750/sb/CS-035483.htm?wapkw=intel+ssd+750+series Intel® SSD 750 Series — Frequently Asked Questions (FAQ)

Regarding Linux*, you can check the contact methods and recommended forums from the team that designs and maintains the NVMe driver for Linux:

https://www.kernel.org/category/contact-us.html
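Also, as a general (not Intel-specific) check: if the drive still enumerates, the open-source nvme-cli utility can query the controller's health and error logs. A minimal sketch, assuming the package is available in your distribution and the controller shows up as /dev/nvme0:

sudo nvme list                    # list the NVMe controllers the kernel can see
sudo nvme smart-log /dev/nvme0    # controller SMART/health information
sudo nvme error-log /dev/nvme0    # recent controller error log entries

A drive that no longer responds at power-up will usually not appear in the list at all, in which case there is nothing for these commands to query.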

pbrow3
New Contributor

Hi,

Thanks for the response

The Asus P9X79 is not on the list of tested motherboards, but it has the required PCIe 3.0 slots.

I did not enable the UEFI BIOS. I did not expect to boot from the drive, and I did not want to add more complications to the install.

I was in contact with a very helpful Intel software person on the NVMe forum. I had a problem with the second card while it was working (the first card too, but I never got a chance to run performance tests on it): I could not achieve the published performance benchmarks.

Keith Busch figured it out: I had somehow turned off MSI-X interrupt support in the kernel, and it had been turned off for some time.
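For anyone who hits the same symptom, this is roughly how the missing MSI-X support can be spotted (a sketch; 03:00.0 is just an example PCI address, and /proc/config.gz only exists if the kernel was built with IKCONFIG):

zgrep CONFIG_PCI_MSI /proc/config.gz         # is MSI/MSI-X support compiled in?
sudo lspci -vv -s 03:00.0 | grep -i msi-x    # did the controller get MSI-X enabled?
grep -c nvme /proc/interrupts                # how many nvme interrupt vectors were registered

With MSI-X unavailable the driver falls back to a single legacy interrupt shared by all queues, which is enough to explain the missing throughput.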

Sadly, after Keith found the fix, the 750 died. Alas, Keith has no software tools to wake a bricked SSD. But it did work, and well, until it just stopped.

I bought both cards from Newegg.ca. I can give you the order number, the RMA number, etc., if you want to track down the first card. When Newegg responds to my last e-mails I can let you know.

ASouz7
Valued Contributor

Hello pbrow3,

We have a few questions in order to understand this issue better.

1- When you mentioned the drive disappeared, did it do so during power-on after a shutdown, or when waking from sleep/hibernation?

2- How did the 1st drive die?

3- Was the 1st drive recognized in the BIOS? The 2nd drive?

4- Which Linux distro and version were used?

pbrow3
New Contributor

Hi,

1. The drive failed during power-up. There is no sleep/hibernation on my Linux computer.

When the computer was powered up there was a big delay before the BIOS message was displayed, and another big delay when the OS booted. The logs:

Sep 07 15:38:45 [kernel] [ 20.821369] nvme 0000:03:00.0: Device not ready; aborting initialisation

Sep 07 15:38:45 [kernel] [ 20.821423] nvme 0000:03:00.0: Device failed to resume
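For what it's worth, a generic way to see whether the controller still enumerates on the bus when the driver gives up like this (a sketch, nothing Intel-specific):

lspci -nn | grep -i 'non-volatile'   # NVMe controllers show up in this PCI class
dmesg | grep -i nvme                 # full driver messages around the failed probe

If the card does not appear in lspci either, the failure is below the driver level.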

Now, the first drive was never used much; I was working on getting it set up as the root file system. The second drive was in use as the root file system. I shut the system down, and when it was powered up again the NVMe SSD could not be mounted.

I did not see any abnormal shutdown logs.

2. The first drive failed on power-on. It was returned.

3. The BIOS does not support NVMe, but both failed drives (the 1st and the 2nd) caused a big delay in the boot process. The BIOS message (version ID, etc.) took an unusually long time to appear.

4. I'm using a Gentoo distribution. The kernel version was 4.2.0.

Additional info: There are 3 PCIe 3.0 slots in the Asus P9X79. When the first drive failed I moved the card to the second slot (it was in the second slot). This did not fix the problem.
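For reference, on a working drive the negotiated link per slot can be confirmed like this (a sketch, with 03:00.0 again standing in for the card's address):

sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'   # negotiated PCIe speed and lane width

The 750 is a PCIe 3.0 x4 card, so LnkSta should report 8GT/s and Width x4 in a full slot.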

When the second drive failed I removed it from the system. Its LEDs are off since the failure.

There are two other PCIe boards in the system: a 680 video card and a PCIe x1 sound card. Both the sound and video cards are working fine.

The second card was working fine when it was powered down.

I do not have the logs because they are on the drive. But here are the hdparm performance results:

hdparm -tT --direct /dev/nvme0n1

/dev/nvme0n1:

Timing O_DIRECT cached reads: 3200 MB in 2.00 seconds = 1599.90 MB/sec

Timing O_DIRECT disk reads: 4572 MB in 3.00 seconds = 1523.42 MB/sec
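hdparm -tT is a fairly coarse benchmark, and the published sequential numbers assume a deep I/O queue, which hdparm does not generate. A sketch of a closer approximation with fio, assuming it is installed (read-only, but double-check the device name first):

sudo fio --name=seqread --filename=/dev/nvme0n1 --rw=read \
    --bs=128k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting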

Also, this might be useful:

Firmware: 8EV10135

Model: SSDPEDMW400G4
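Those two identifiers can be read straight from the controller with nvme-cli while the drive is alive; a sketch, /dev/nvme0 again being an assumption:

sudo nvme id-ctrl /dev/nvme0 | grep -E '^(mn|fr) '   # mn = model number, fr = firmware revision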