
Sector size affects SSD performance

llpyh
New Contributor

I use an Intel P3608, which presents two NVMe disks. I changed the sector size of nvme0n1 from 512 to 4096 and left nvme1n1 unchanged. However, I find that the bandwidth of nvme0n1 has dropped sharply. Why?
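For completeness, this is one way to verify what each namespace reports (using sysfs and the stock nvme-cli tool; the expected values in the comments just restate my setup above):

# Logical block size as seen by the kernel (expected: 4096 for nvme0n1, 512 for nvme1n1)
cat /sys/block/nvme0n1/queue/logical_block_size
cat /sys/block/nvme1n1/queue/logical_block_size

# List the LBA formats the namespace supports; the active one is marked "in use"
nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"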

The fio script and results are shown below:

Note: nvme0n1 has a sector size of 4096; nvme1n1 has a sector size of 512.

----------------------------------------------------------------------------------

fio -filename=/dev/nvme1n1 -cpumask=0xffff -direct=1 -rw=randrw -refill_buffers -norandommap -randrepeat=0 -ioengine=libaio -bs=$block_size -rwmixread=100 -iodepth=$io_depth -numjobs=$num_jobs -runtime=20 -ramp_time=10 -group_reporting -name=nvme_randr_${block_size}_iodepth${io_depth}_libaio_numjobs${num_jobs}

nvme_randr_4k_iodepth1_libaio_numjobs1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1

fio-3.0

Starting 1 process

Jobs: 1 (f=1): [r(1)][100.0%][r=297MiB/s,w=0KiB/s][r=76.1k,w=0 IOPS][eta 00m:00s]

nvme_randr_4k_iodepth1_libaio_numjobs1: (groupid=0, jobs=1): err= 0: pid=8966: Tue Mar 6 19:44:27 2018

read: IOPS=76.5k, BW=299MiB/s (313MB/s)(5977MiB/20001msec)

slat (nsec): min=1750, max=31293, avg=2147.38, stdev=327.48

clat (nsec): min=356, max=167857, avg=10301.18, stdev=3725.49

lat (usec): min=9, max=172, avg=12.51, stdev= 3.76

clat percentiles (nsec):

| 1.00th=[ 9408], 5.00th=[ 9536], 10.00th=[ 9792], 20.00th=[ 9920],

| 30.00th=[ 9920], 40.00th=[ 9920], 50.00th=[10048], 60.00th=[10048],

| 70.00th=[10048], 80.00th=[10176], 90.00th=[10304], 95.00th=[10688],

| 99.00th=[13376], 99.50th=[15936], 99.90th=[66048], 99.95th=[69120],

| 99.99th=[91648]

bw ( KiB/s): min=302856, max=311623, per=100.00%, avg=306151.45, stdev=1719.88, samples=40

iops : min=75714, max=77905, avg=76537.82, stdev=429.87, samples=40

lat (nsec) : 500=0.01%, 750=0.01%

lat (usec) : 4=0.01%, 10=53.43%, 20=46.13%, 50=0.01%, 100=0.41%

lat (usec) : 250=0.01%

cpu : usr=14.68%, sys=14.80%, ctx=1530025, majf=0, minf=4

IO depths : 1=150.5%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%

submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

issued rwt: total=1530228,0,0, short=0,0,0, dropped=0,0,0

latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):

READ: bw=299MiB/s (313MB/s), 299MiB/s-299MiB/s (313MB/s-313MB/s), io=5977MiB (6268MB), run=20001-20001msec

Disk stats (read/write):

nvme1n1: ios=2302993/0, merge=0/0, ticks=20493/0, in_queue=20437, util=67.92%

[root@localhost ~]# fio -filename=/dev/nvme0n1 -cpumask=0xffff -direct=1 -rw=randrw -refill_buffers -norandommap -randrepeat=0 -ioengine=libaio -bs=$block_size -rwmixread=100 -iodepth=$io_depth -numjobs=$num_jobs -runtime=20 -ramp_time=10 -group_reporting -name=nvme_randr_${block_size}_iodepth${io_depth}_libaio_numjobs${num_jobs}

nvme_randr_4k_iodepth1_libaio_numjobs1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1

fio-3.0

Starting 1 process

Jobs: 1 (f=1): [r(1)][100.0%][r=39.5MiB/s,w=0KiB/s][r=10.1k,w=0 IOPS][eta 00m:00s]

nvme_randr_4k_iodepth1_libaio_numjobs1: (groupid=0, jobs=1): err= 0: pid=9083: Tue Mar 6 19:45:04 2018

read: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(789MiB/20001msec)

slat (nsec): min=1506, max=37487, avg=3800.48, stdev=1066.33

clat (usec): min=41, max=719, avg=94.17, stdev= 9.47

lat (usec): min=45, max=723, avg=98.08, stdev= 9.53

clat percentiles (usec):

| 1.00th=[ 76], 5.00th=[ 81], 10.00th=[ 85], 20.00th=[ 87],

| 30.00th=[ 87], 40.00th=[ 88], 50.00th=[ 92], 60.00th=[ 100],

| 70.00th=[ 103], 80.00th=[ 104], 90.00th=[ 105], 95.00th=[ 106],

| 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 119], 99.95th=[ 122],

| 99.99th=[ 165]

bw ( KiB/s): min=40184, max=40592, per=100.00%, avg=40382.30, stdev=100.65, samples=40

iops : min=10046, max=10148, avg...


idata
Esteemed Contributor III

Hi llpyh,

Thank you for contacting our support community. We understand your situation regarding the Intel® SSD DC P3608 Series sector size. To understand it better, could you please confirm the following information:

*Could you please confirm the operating system you are using? If Linux*, please let us know the kernel version as well.
*Could you please let us know what command you used to change the sector size?
*Please provide us the Intel® SSD DC P3608 Series SMART details; we recommend using the Intel® SSD Data Center Tool. Please follow the steps below:

1-If you are running Windows*, please download http://downloadcenter.intel.com/downloads/eula/27497/Intel-SSD-Data-Center-Tool?httpDown=https://dow... ; if you are running Linux*, please use http://downloadcenter.intel.com/downloads/eula/27497/Intel-SSD-Data-Center-Tool?httpDown=https://dow...
2-To install the tool, please check the user guide included in both folders (both contain the same information); the document name is Intel_SSD_Data_Center_Tool_Install_Guide_3_x_330713-004US.pdf
3-Once the tool is installed, please issue the following command: isdct show -smart -intelssd (Index/SerialNumber)

We look forward to hearing back from you.

Regards,
Junior M.
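For reference, a typical isdct session for the request above might look like this (index 0 is only an example; the first command shows the actual index and serial number to use):

# List the Intel SSDs the tool detects, with their index and serial number
isdct show -intelssd

# Dump the SMART details for the drive at index 0
isdct show -smart -intelssd 0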

idata
Esteemed Contributor III

Hi llpyh,

We would like to know whether you have read our previous post. If you have any other questions, we'll be waiting for your response.

Regards,
Junior M.

llpyh
New Contributor

Thank you for your reply. I think the problem has been solved. The cause was that one disk had been formatted using "isdct start -intelssd 0 -nvmeformat LBAFormat=0" while the other had not.

When I formatted both disks, the performance was almost identical.
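Concretely, the fix was along these lines (index 1 for the second controller is my assumption from how the two devices enumerate here; note that an NVMe format erases all data on the namespace):

# Format both controllers of the P3608 to the same LBA format (destroys all data!)
isdct start -intelssd 0 -nvmeformat LBAFormat=0
isdct start -intelssd 1 -nvmeformat LBAFormat=0

# Confirm both namespaces now report the same logical/physical sector sizes
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/nvme0n1 /dev/nvme1n1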

idata
Esteemed Contributor III

Hello llpyh,

Thanks for updating the thread with the solution. It is good to know that both parts of the drive are now showing similar performance. Please let us know if there's anything else we can do for you.

Best regards,
Eugenio F.