<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Critical performance drop on newly created large file in Solid State Drives (NAND)</title>
    <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19274#M7601</link>
    <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;After reviewing the settings, we would like to verify the following. For the read test, could you please try: fio --output=test_result.txt --name=myjob --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --norandommap --randrepeat=0 --runtime=600 --blocksize=4K --rw=randread --iodepth=32 --numjobs=4 --group_reporting&lt;/P&gt;&lt;P&gt;It is important to note that we normally run these tests with 4 threads and iodepth=32 for blocksize=4K.&lt;/P&gt;&lt;P&gt;Please let us know, as we may need to keep researching this.&lt;/P&gt;&lt;P&gt;NC&lt;/P&gt;</description>
    <pubDate>Wed, 20 Jul 2016 13:37:23 GMT</pubDate>
    <dc:creator>idata</dc:creator>
    <dc:date>2016-07-20T13:37:23Z</dc:date>
    <item>
      <title>Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19271#M7598</link>
      <description>&lt;UL&gt;&lt;LI&gt;NVMe drive model: Intel SSD DC P3700 U.2 NVMe SSD&lt;/LI&gt;&lt;LI&gt;Capacity: 764G&lt;/LI&gt;&lt;LI&gt;FS: XFS&lt;/LI&gt;&lt;LI&gt;Other HW:&lt;UL&gt;&lt;LI&gt;AIC SB122A-PH&lt;/LI&gt;&lt;LI&gt;8 Intel NVMe DC P3700 drives: 2 on CPU 0, 6 on CPU 1&lt;/LI&gt;&lt;LI&gt;128 GiB RAM (8 x 16 DDR4 2400MHz DIMMs)&lt;/LI&gt;&lt;LI&gt;2 x Intel E5-2620v3 2.4GHz CPUs&lt;/LI&gt;&lt;LI&gt;2 x Intel DC S2510 SATA SSDs (one is used as a system drive)&lt;/LI&gt;&lt;LI&gt;Note that these are engineering samples provided by Intel NSG, but all have had the latest firmware applied using isdct 3.0.0.&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;LI&gt;OS: CentOS Linux release 7.2.1511 (Core)&lt;/LI&gt;&lt;LI&gt;Kernel: Linux fs00 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We have been testing two Intel DC P3700 U.2 800GB NVMe SSDs to see the impact of the emulated sector size (512 vs 4096) on throughput. Using fio 2.12, we observed a puzzling collapse of performance. The steps are given below.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Steps&lt;/B&gt;:&lt;/P&gt;&lt;P&gt;1. Copy or sequentially write a single large file (300G or larger)&lt;/P&gt;&lt;P&gt;2.
Start fio test with the following config:&lt;/P&gt;&lt;P&gt;&lt;I&gt;[readtest]&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;thread=1&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;blocksize=2m&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;filename=/export/beegfs/data0/file_000000&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;rw=randread&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;direct=1&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;buffered=0&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;ioengine=libaio&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;nrfiles=1&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;gtod_reduce=0&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;numjobs=32&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;iodepth=128&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;runtime=360&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;group_reporting=1&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;percentage_random=90&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;3. Observe extremely slow performance:&lt;/P&gt;&lt;P&gt;&lt;I&gt;fio-2.12&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;Starting 32 threads&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;readtest: (groupid=0, jobs=32): err= 0: pid=5097: Thu Jul 14 13:00:25 2016&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;&lt;I&gt;  read : io=65536KB, bw=137028B/s, iops=0, runt=489743msec&lt;/I&gt;&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    slat (usec): min=4079, max=7668, avg=5279.19, stdev=662.80&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    clat (msec): min=3, max=25, avg=18.97, stdev= 6.16&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     lat (msec): min=8, max=31, avg=24.25, stdev= 6.24&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    clat percentiles (usec):&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     |  1.00th=[ 3280],  5.00th=[ 4320], 10.00th=[ 9664], 20.00th=[17536],&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     | 30.00th=[18816], 40.00th=[20352], 50.00th=[20608], 60.00th=[21632],&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     | 70.00th=[21632], 80.00th=[22912], 90.00th=[25472], 95.00th=[25472],&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     | 99.00th=[25472], 99.50th=[25472], 99.90th=[25472], 
99.95th=[25472],&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     | 99.99th=[25472]&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    lat (msec) : 4=3.12%, 10=9.38%, 20=25.00%, 50=62.50%&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;  cpu          : usr=0.00%, sys=74.84%, ctx=792583, majf=0, minf=16427&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, &amp;gt;=64=0.0%&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &amp;gt;=64=0.0%&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &amp;gt;=64=0.0%&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     issued    : total=r=32/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     latency   : target=0, window=0, percentile=100.00%, depth=128&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;Run status group 0 (all jobs):&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;   READ: io=65536KB, aggrb=133KB/s, minb=133KB/s, maxb=133KB/s, mint=489743msec, maxt=489743msec&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;Disk stats (read/write):&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;  nvme0n1: ios=0/64317, merge=0/0, ticks=0/1777871, in_queue=925406, util=0.19%&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;4. Repeat the test&lt;/P&gt;&lt;P&gt;5. 
Performance is much higher:&lt;/P&gt;&lt;P&gt;&lt;I&gt;fio-2.12&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;Starting 32 threads&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;readtest: (groupid=0, jobs=32): err= 0: pid=5224: Thu Jul 14 13:11:58 2016&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt; &lt;B&gt; read : io=861484MB, bw=2389.3MB/s, iops=1194, runt=360564msec&lt;/B&gt;&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    slat (usec): min=111, max=203593, avg=26742.15, stdev=21321.98&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    clat (msec): min=414, max=5176, avg=3391.05, stdev=522.29&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     lat (msec): min=414, max=5247, avg=3417.79, stdev=524.75&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    clat percentiles (msec):&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     |  1.00th=[ 1614],  5.00th=[ 2376], 10.00th=[ 2802], 20.00th=[ 3097],&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     | 30.00th=[ 3228], 40.00th=[ 3359], 50.00th=[ 3458], 60.00th=[ 3556],&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     | 70.00th=[ 3654], 80.00th=[ 3785], 90.00th=[ 3949], 95.00th=[ 4080],&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     | 99.00th=[ 4359], 99.50th=[ 4424], 99.90th=[ 4752], 99.95th=[ 4883],&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&amp;lt;e...&lt;/P&gt;</description>
      <pubDate>Thu, 14 Jul 2016 20:40:06 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19271#M7598</guid>
      <dc:creator>ANaza</dc:creator>
      <dc:date>2016-07-14T20:40:06Z</dc:date>
    </item>
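    For readability, the job options quoted in the post above reassemble into this fio job file (values exactly as posted; the filename path is the poster's BeeGFS mount):

    ```ini
    ; fio job file reconstructed from the options quoted in the post above
    [readtest]
    thread=1
    blocksize=2m
    filename=/export/beegfs/data0/file_000000
    rw=randread
    direct=1
    buffered=0
    ioengine=libaio
    nrfiles=1
    gtod_reduce=0
    numjobs=32
    iodepth=128
    runtime=360
    group_reporting=1
    percentage_random=90
    ```

    Saved as readtest.fio, it runs with "fio readtest.fio". Note that with numjobs=32 and iodepth=128 each job keeps its own queue, so up to 32 x 128 = 4096 requests can be outstanding against a single file.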
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19272#M7599</link>
      <description>&lt;P&gt;Since we deal with a similar situation, I tried the steps above and confirmed the issue on our machine. In fact, I also tried it with both XFS and EXT4; the symptom showed up regardless.&lt;/P&gt;</description>
      <pubDate>Fri, 15 Jul 2016 15:11:36 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19272#M7599</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-15T15:11:36Z</dc:date>
    </item>
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19273#M7600</link>
      <description>&lt;P&gt;AlexNZ,&lt;/P&gt;&lt;P&gt;Thanks for bringing this situation to our attention. We would like to verify this and provide a solution as fast as possible. Please allow us some time to check on this, and we will keep you all posted.&lt;/P&gt;&lt;P&gt;NC&lt;/P&gt;</description>
      <pubDate>Fri, 15 Jul 2016 15:54:34 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19273#M7600</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-15T15:54:34Z</dc:date>
    </item>
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19274#M7601</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;After reviewing the settings, we would like to verify the following. For the read test, could you please try: fio --output=test_result.txt --name=myjob --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --norandommap --randrepeat=0 --runtime=600 --blocksize=4K --rw=randread --iodepth=32 --numjobs=4 --group_reporting&lt;/P&gt;&lt;P&gt;It is important to note that we normally run these tests with 4 threads and iodepth=32 for blocksize=4K.&lt;/P&gt;&lt;P&gt;Please let us know, as we may need to keep researching this.&lt;/P&gt;&lt;P&gt;NC&lt;/P&gt;</description>
      <pubDate>Wed, 20 Jul 2016 13:37:23 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19274#M7601</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-20T13:37:23Z</dc:date>
    </item>
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19275#M7602</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;With proposed settings I received the following result:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;myjob: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;...&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;fio-2.12&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;Starting 4 processes&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;myjob: (groupid=0, jobs=4): err= 0: pid=23560: Wed Jul 20 07:06:08 2016&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;  read : io=1092.2GB, bw=1863.1MB/s, iops=477156, runt=600001msec&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    slat (usec): min=1, max=63, avg= 2.76, stdev= 1.57&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    clat (usec): min=14, max=3423, avg=260.81, stdev=90.86&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     lat (usec): min=18, max=3426, avg=263.68, stdev=90.84&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    clat percentiles (usec):&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     |  1.00th=[  114],  5.00th=[  139], 10.00th=[  157], 20.00th=[  185],&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     | 30.00th=[  207], 40.00th=[  229], 50.00th=[  251], 60.00th=[  274],&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     | 70.00th=[  298], 80.00th=[  326], 90.00th=[  374], 95.00th=[  422],&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     | 99.00th=[  532], 99.50th=[  588], 99.90th=[  716], 99.95th=[  788],&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     | 99.99th=[ 1048]&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    bw (KB  /s): min= 5400, max=494216, per=25.36%, avg=484036.11, stdev=14017.77&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    lat (usec) : 20=0.01%, 50=0.01%, 100=0.23%, 250=49.61%, 500=48.54%&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    lat (usec) : 750=1.55%, 1000=0.06%&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;    lat (msec) : 2=0.01%, 4=0.01%&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;  cpu          : usr=15.00%, sys=41.78%, ctx=77056567, majf=0, 
minf=264&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, &amp;gt;=64=0.0%&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &amp;gt;=64=0.0%&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, &amp;gt;=64=0.0%&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     issued    : total=r=286294132/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;     latency   : target=0, window=0, percentile=100.00%, depth=32&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;Run status group 0 (all jobs):&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;   READ: io=1092.2GB, aggrb=1863.1MB/s, minb=1863.1MB/s, maxb=1863.1MB/s, mint=600001msec, maxt=600001msec&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;Disk stats (read/write):&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;  nvme0n1: ios=286276788/29109, merge=0/0, ticks=72929877/10859607, in_queue=84848144, util=99.33%&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;But in this case the test ran against the raw device (/dev/nvme0n1), whereas in our case it was a file on XFS on an NVMe drive.&lt;/P&gt;&lt;P&gt;Also, during the latest tests we determined that flushing the page cache (echo 1 &amp;gt; /proc/sys/vm/drop_caches) resolves the problem.&lt;/P&gt;&lt;P&gt;Why the page cache affects direct I/O is still an open question.&lt;/P&gt;&lt;P&gt;Could it be something specific to the NVMe driver?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;AlexNZ&lt;/P&gt;</description>
      <pubDate>Wed, 20 Jul 2016 14:32:36 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19275#M7602</guid>
      <dc:creator>ANaza</dc:creator>
      <dc:date>2016-07-20T14:32:36Z</dc:date>
    </item>
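    The drop_caches workaround AlexNZ mentions can be scripted; a minimal sketch (writing the sysctl needs root, so the script degrades to a notice otherwise):

    ```shell
    #!/bin/sh
    # Flush dirty pages, then drop the clean page cache before re-running a
    # direct-I/O fio test, per the workaround reported above.
    sync
    if [ -w /proc/sys/vm/drop_caches ]; then
        # 1 = drop page cache only; 2 = dentries and inodes; 3 = both
        echo 1 > /proc/sys/vm/drop_caches
        echo "page cache dropped"
    else
        echo "page cache not dropped (need root for /proc/sys/vm/drop_caches)"
    fi
    ```

    The sync first is deliberate: drop_caches only discards clean pages, so dirty pages from the just-written file must be flushed before they can be dropped.
    
    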
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19276#M7603</link>
      <description>&lt;P&gt;I read this thread with strong interest. I concur with AlexNZ: testing files residing on a file system is far more relevant to production situations. We do so to figure out the overhead of&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;local file systems (XFS, EXT4, etc.)&lt;/LI&gt;&lt;LI&gt;distributed file systems (Lustre, GPFS, etc.)&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;over raw devices (individual and aggregated).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The following suggestion from NC is only for testing raw devices:&lt;/P&gt;&lt;P&gt;fio --output=test_result.txt --name=myjob --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --norandommap --randrepeat=0 --runtime=600 --blocksize=4K --rw=randread --iodepth=32 --numjobs=4 --group_reporting&lt;/P&gt;&lt;P&gt;On our end, we have done many hundreds of raw-device tests. Results are always in line with what Intel has published. But this particular file-testing result, as I posted on July 15, is a "shocker"!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;It would be great to know why fio reading a regular file from an NVMe SSD with direct=1 is still affected by data in the page cache.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Another point: we understand why numjobs=4 and iodepth=32 are usually used for Intel NVMe SSDs. But such settings are &lt;I&gt;only&lt;/I&gt; optimal for raw devices, right? When it comes to reading/writing regular files, IMHO we should configure fio with parameter values that match the actual workloads as closely as possible. NC, your view?&lt;/P&gt;</description>
      <pubDate>Wed, 20 Jul 2016 15:52:37 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19276#M7603</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-20T15:52:37Z</dc:date>
    </item>
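    As a back-of-envelope check on the comparison being drawn here: the two recipes keep very different amounts of I/O in flight (numbers taken from the posts above).

    ```shell
    # Outstanding-I/O budget of the two fio recipes discussed in this thread:
    # each fio job keeps its own queue, so in-flight requests = numjobs * iodepth.
    nc_jobs=4;  nc_depth=32    # NC's raw-device recipe
    op_jobs=32; op_depth=128   # original poster's file test
    echo "NC recipe:     $((nc_jobs * nc_depth)) requests in flight"
    echo "original test: $((op_jobs * op_depth)) requests in flight"
    ```

    That is 128 versus 4096 outstanding requests, which supports the point that settings tuned for raw-device benchmarking are not automatically representative of file workloads.
    
    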
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19277#M7604</link>
      <description>&lt;P&gt;Hello all,&lt;/P&gt;&lt;P&gt;Having checked all the information provided, we will be escalating this situation and will post updates here. Please expect a response soon.&lt;/P&gt;&lt;P&gt;NC&lt;/P&gt;</description>
      <pubDate>Wed, 20 Jul 2016 21:29:55 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19277#M7604</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-20T21:29:55Z</dc:date>
    </item>
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19278#M7605</link>
      <description>&lt;P&gt;Hello all,&lt;/P&gt;&lt;P&gt;We would like you to try the test, but before that, could you please TRIM the drives first? Once you do that, please share the results back with us.&lt;/P&gt;&lt;P&gt;Also, please make sure you are using the correct driver from this link: &lt;A href="https://downloadcenter.intel.com/download/23929/Intel-SSD-Data-Center-Family-for-NVMe-Drivers" rel="nofollow noopener noreferrer"&gt;https://downloadcenter.intel.com/download/23929/Intel-SSD-Data-Center-Family-for-NVMe-Drivers&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Something important to mention is that the performance tools we use are synthetic benchmarking tools, as explained in the Intel® Solid-State Drive DC P3700 evaluation guide. These are intended to measure the behavior of the SSD without taking into consideration other components in the system that would add "bottlenecks"; synthetic benchmarks measure raw drive I/O transfer rates.&lt;/P&gt;&lt;P&gt;Here is the evaluation guide: &lt;A href="http://manuals.ts.fujitsu.com/file/12176/fujitsu_intel-ssd-dc-pcie-eg-en.pdf" rel="nofollow noopener noreferrer"&gt;http://manuals.ts.fujitsu.com/file/12176/fujitsu_intel-ssd-dc-pcie-eg-en.pdf&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Please let us know.&lt;/P&gt;&lt;P&gt;NC&lt;/P&gt;</description>
      <pubDate>Fri, 22 Jul 2016 21:04:54 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19278#M7605</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-22T21:04:54Z</dc:date>
    </item>
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19279#M7606</link>
      <description>&lt;P&gt;Thanks for your follow-up. I did try fstrim on a DC P3700 NVMe SSD here.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;First of all, let's get the driver and firmware issue out of the way. The server runs CentOS 7.2:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  uname -a&lt;/P&gt;&lt;P&gt;Linux fs11 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  cat /etc/redhat-release&lt;/P&gt;&lt;P&gt;CentOS Linux release 7.2.1511 (Core)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We also use the latest isdct:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  isdct version&lt;/P&gt;&lt;P&gt;- Version Information -&lt;/P&gt;&lt;P&gt;Name: Intel(R) Data Center Tool&lt;/P&gt;&lt;P&gt;Version: 3.0.0&lt;/P&gt;&lt;P&gt;Description: Interact and configure Intel SSDs.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;And, according to the tool, the drive is healthy:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  isdct show -intelssd 2&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;- Intel SSD DC P3700 Series CVFT515400401P6JGN -&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Bootloader : 8B1B0131&lt;/P&gt;&lt;P&gt;DevicePath : /dev/nvme2n1&lt;/P&gt;&lt;P&gt;DeviceStatus : Healthy&lt;/P&gt;&lt;P&gt;Firmware : 8DV10171&lt;/P&gt;&lt;P&gt;FirmwareUpdateAvailable : The selected Intel SSD contains current firmware as of this tool release.&lt;/P&gt;&lt;P&gt;Index : 2&lt;/P&gt;&lt;P&gt;ModelNumber : INTEL SSDPE2MD016T4&lt;/P&gt;&lt;P&gt;ProductFamily : Intel SSD DC P3700 Series&lt;/P&gt;&lt;P&gt;SerialNumber : CVFT515400401P6JGN&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;While the drive had a file system (XFS), with data, I ran fstrim:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  fstrim -v /export/beegfs/data2&lt;/P&gt;&lt;P&gt;&lt;B&gt;fstrim: /export/beegfs/data2: FITRIM ioctl failed: Input/output error&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;So I unmounted the XFS, used
isdct delete to remove all data, recreated the XFS, mounted it again, and then ran fstrim:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Same outcome. Please see the session log below:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  df -h&lt;/P&gt;&lt;P&gt;Filesystem      Size  Used Avail Use% Mounted on&lt;/P&gt;&lt;P&gt;/dev/sda3       192G  2.4G  190G   2% /&lt;/P&gt;&lt;P&gt;devtmpfs         63G     0   63G   0% /dev&lt;/P&gt;&lt;P&gt;tmpfs            63G     0   63G   0% /dev/shm&lt;/P&gt;&lt;P&gt;tmpfs            63G   26M   63G   1% /run&lt;/P&gt;&lt;P&gt;tmpfs            63G     0   63G   0% /sys/fs/cgroup&lt;/P&gt;&lt;P&gt;/dev/sda1       506M  166M  340M  33% /boot&lt;/P&gt;&lt;P&gt;/dev/sdb        168G   73M  157G   1% /export/beegfs/meta&lt;/P&gt;&lt;P&gt;tmpfs            13G     0   13G   0% /run/user/99&lt;/P&gt;&lt;P&gt;/dev/nvme2n1    1.5T  241G  1.3T  17% /export/beegfs/data2&lt;/P&gt;&lt;P&gt;tmpfs            13G     0   13G   0% /run/user/0&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  umount /export/beegfs/data2&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  df -h&lt;/P&gt;&lt;P&gt;Filesystem      Size  Used Avail Use% Mounted on&lt;/P&gt;&lt;P&gt;/dev/sda3       192G  2.4G  190G   2% /&lt;/P&gt;&lt;P&gt;devtmpfs         63G     0   63G   0% /dev&lt;/P&gt;&lt;P&gt;tmpfs            63G     0   63G   0% /dev/shm&lt;/P&gt;&lt;P&gt;tmpfs            63G   26M   63G   1% /run&lt;/P&gt;&lt;P&gt;tmpfs            63G     0   63G   0% /sys/fs/cgroup&lt;/P&gt;&lt;P&gt;/dev/sda1       506M  166M  340M  33% /boot&lt;/P&gt;&lt;P&gt;/dev/sdb    ...&lt;/P&gt;</description>
      <pubDate>Sun, 24 Jul 2016 17:38:22 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19279#M7606</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-24T17:38:22Z</dc:date>
    </item>
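    Before chasing FITRIM failures like the one above, it can help to confirm that the kernel even advertises discard support for the device. This is a sketch over standard Linux sysfs paths, not a step from the thread:

    ```shell
    # fstrim only works if the kernel reports a nonzero discard granularity
    # for the underlying device; lsblk --discard summarizes the same fields.
    for q in /sys/block/*/queue; do
        dev=${q%/queue}
        if [ -r "$q/discard_granularity" ]; then
            printf '%s discard_granularity=%s\n' "${dev##*/}" "$(cat "$q/discard_granularity")"
        fi
    done
    ```

    A granularity of 0 means the device (or its driver) does not support discard at all, which makes the FITRIM ioctl fail before any hardware fault comes into play.
    
    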
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19280#M7607</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I can confirm that after TRIM the result is still poor.&lt;/P&gt;&lt;P&gt;Actually, after a quick look at the Linux kernel code, including the XFS implementation, I found that even during direct reads the page cache is still involved.&lt;/P&gt;&lt;P&gt;But such poor performance still looks weird.&lt;/P&gt;</description>
      <pubDate>Sun, 24 Jul 2016 17:46:44 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19280#M7607</guid>
      <dc:creator>ANaza</dc:creator>
      <dc:date>2016-07-24T17:46:44Z</dc:date>
    </item>
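    fio's direct=1 maps to O_DIRECT, and the buffered-versus-direct comparison can be sketched outside fio with plain coreutils dd (iflag=direct). This is an illustration, not the poster's procedure, and some filesystems reject O_DIRECT outright:

    ```shell
    # Create a small test file, then read it back twice: once through the
    # page cache and once with O_DIRECT (iflag=direct), mirroring fio's
    # direct=1 setting. The fallback covers filesystems without O_DIRECT.
    f=$(mktemp)
    dd if=/dev/zero of="$f" bs=4096 count=256 status=none
    dd if="$f" of=/dev/null bs=4096 status=none                 # buffered read
    dd if="$f" of=/dev/null bs=4096 iflag=direct status=none \
        || echo "O_DIRECT not supported on this filesystem"
    rm -f "$f"
    echo "done"
    ```

    On a freshly written file, timing the two reads (e.g. with time) shows whether the direct path is genuinely bypassing the cache, which is the behavior under question in this thread.
    
    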
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19281#M7608</link>
      <description>&lt;P&gt;Just a quick supplement regarding the I/O errors that I reported in my last reply: I even tried to do the following:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Unmount the drive&lt;/LI&gt;&lt;LI&gt;Run an nvmeformat: isdct start -intelssd 2 -nvmeformat LBAformat=3 SecureEraseSetting=0 ProtectionInformation=0 MetaDataSettings=0&lt;/LI&gt;&lt;LI&gt;Recreate the XFS&lt;/LI&gt;&lt;LI&gt;Mount the XFS&lt;/LI&gt;&lt;LI&gt;Run fstrim -v on the mount point&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I still got:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  dmesg |tail -11&lt;/P&gt;&lt;P&gt;[987891.677911]  nvme2n1: unknown partition table&lt;/P&gt;&lt;P&gt;[987898.749260] XFS (nvme2n1): Mounting V4 Filesystem&lt;/P&gt;&lt;P&gt;[987898.752844] XFS (nvme2n1): Ending clean mount&lt;/P&gt;&lt;P&gt;[987948.612051] blk_update_request: I/O error, dev nvme2n1, sector 3070890712&lt;/P&gt;&lt;P&gt;[987948.612088] blk_update_request: I/O error, dev nvme2n1, sector 3087667912&lt;/P&gt;&lt;P&gt;[987948.612151] blk_update_request: I/O error, dev nvme2n1, sector 3121222312&lt;/P&gt;&lt;P&gt;[987948.612193] blk_update_request: I/O error, dev nvme2n1, sector 3062502112&lt;/P&gt;&lt;P&gt;[987948.612211] blk_update_request: I/O error, dev nvme2n1, sector 3104445112&lt;/P&gt;&lt;P&gt;[987948.612228] blk_update_request: I/O error, dev nvme2n1, sector 3079279312&lt;/P&gt;&lt;P&gt;[987948.612296] blk_update_request: I/O error, dev nvme2n1, sector 3096056512&lt;/P&gt;&lt;P&gt;[987948.612314] blk_update_request: I/O error, dev nvme2n1, sector 3112833712&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;So, unlike the SCSI drives I used years ago, formatting didn't remap the "bad" sectors. I would appreciate a hint on how to get this issue resolved too.&lt;/P&gt;</description>
      <pubDate>Sun, 24 Jul 2016 17:57:56 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19281#M7608</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-24T17:57:56Z</dc:date>
    </item>
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19282#M7609</link>
      <description>&lt;P&gt;I tried to narrow down the cause of the fstrim issue further. It seems to me the hardware (i.e. the NVMe SSD itself) is responsible, rather than the software layer on top of it (XFS). So I decided to add a partition table first and create XFS on the partition. As is evident below, adding the partition didn't help.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Is the drive faulty? If so, why does isdct still deem its DeviceStatus Healthy?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  isdct delete -f -intelssd 2&lt;/P&gt;&lt;P&gt;Deleting...&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;- Intel SSD DC P3700 Series CVFT515400401P6JGN -&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Status : Delete successful.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  parteed -a optimal /dev/nvme2n1 mklabel gpt&lt;/P&gt;&lt;P&gt;-bash: parteed: command not found&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  parted -a optimal /dev/nvme2n1 mklabel gpt&lt;/P&gt;&lt;P&gt;Information: You may need to update /etc/fstab.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  parted /dev/nvme2n1 mkpart primary 1048576B 100%&lt;/P&gt;&lt;P&gt;Information: You may need to update /etc/fstab.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  parted /dev/nvme2n1&lt;/P&gt;&lt;P&gt;GNU Parted 3.1&lt;/P&gt;&lt;P&gt;Using /dev/nvme2n1&lt;/P&gt;&lt;P&gt;Welcome to GNU Parted!
Type 'help' to view a list of commands.&lt;/P&gt;&lt;P&gt;(parted) print&lt;/P&gt;&lt;P&gt;Model: Unknown (unknown)&lt;/P&gt;&lt;P&gt;Disk /dev/nvme2n1: 1600GB&lt;/P&gt;&lt;P&gt;Sector size (logical/physical): 4096B/4096B&lt;/P&gt;&lt;P&gt;Partition Table: gpt&lt;/P&gt;&lt;P&gt;Disk Flags: &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Number  Start  End    Size    File system  Name    Flags&lt;/P&gt;&lt;P&gt;&lt;B&gt; 1      1049kB  1600GB  1600GB              primary&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;(parted) quit&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  mkfs.xfs -K -f -d agcount=24 -l size=128m,version=2 -i size=512 -s size=4096 /dev/nvme2n1&lt;/P&gt;&lt;P&gt;meta-data=/dev/nvme2n1          isize=512    agcount=24, agsize=16279311 blks&lt;/P&gt;&lt;P&gt;        =                      sectsz=4096  attr=2, projid32bit=1&lt;/P&gt;&lt;P&gt;        =                      crc=0        finobt=0&lt;/P&gt;&lt;P&gt;data    =                      bsize=4096  blocks=390703446, imaxpct=5&lt;/P&gt;&lt;P&gt;        =                      sunit=0      swidth=0 blks&lt;/P&gt;&lt;P&gt;naming  =version 2              bsize=4096  ascii-ci=0 ftype=0&lt;/P&gt;&lt;P&gt;...&lt;/P&gt;</description>
      <pubDate>Sun, 24 Jul 2016 19:08:18 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19282#M7609</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-24T19:08:18Z</dc:date>
    </item>
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19283#M7610</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;Thanks everyone for trying the suggestion. We would like to gather all these inputs and research here with our department in order to work in a resolution for all of you.Please allow us some time to do the research, we will keep you posted.NC</description>
      <pubDate>Mon, 25 Jul 2016 16:35:31 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19283#M7610</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-25T16:35:31Z</dc:date>
    </item>
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19284#M7611</link>
      <description>&lt;P&gt;Thanks for following up. I reviewed what I had done regarding fstrim, and the tests I have run, and came up with two &lt;I&gt;additional&lt;/I&gt; plausible causes:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;I always use the -K option when I run mkfs.xfs; what if I don't use it?&lt;/LI&gt;&lt;LI&gt;I would like to take advantage of the variable sector support provided by the DC P3700, so we are evaluating the performance benefits of using a large SectorSize these days. Thus, the NVMe SSDs that I tested fstrim on have a 4096 sector size. What happens if I retain the default 512?&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;My tests indicate that the Intel DC P3700 firmware, the Linux NVMe driver, or both may have a bug&lt;/I&gt;. The following is my evidence. Please review.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We use a lot of Intel DC P3700 SSDs of various capacities - 800GB and 1.6TB are two common ones - and have done hundreds of tests on them.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We also understand that with Intel NVMe DC P3700 SSDs there is no need to run trim at all; the firmware's garbage collection takes care of such needs transparently and behind the scenes. But still, IMHO, when the sector size is changed it is a good idea for well-known Linux utilities to keep working as anticipated. We ran into this issue by serendipity, and got a "nice" surprise along the way.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Case 1.
mkfs.xfs without -K&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We will pick one /dev/nvme2n1, umount it, isdct delete all data on it, mkfs.xfs without the -K flag, and then run fstrim.&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  man mkfs.xfs&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  lsblk &lt;/P&gt;&lt;P&gt;NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT&lt;/P&gt;&lt;P&gt;sda       8:0    0 223.6G  0 disk &lt;/P&gt;&lt;P&gt;├─sda1    8:1    0   512M  0 part /boot&lt;/P&gt;&lt;P&gt;├─sda2    8:2    0  31.5G  0 part [SWAP]&lt;/P&gt;&lt;P&gt;└─sda3    8:3    0 191.6G  0 part /&lt;/P&gt;&lt;P&gt;sdb       8:16   0 223.6G  0 disk /export/beegfs/meta&lt;/P&gt;&lt;P&gt;sdc       8:32   0  59.6G  0 disk &lt;/P&gt;&lt;P&gt;sdd       8:48   0  59.6G  0 disk &lt;/P&gt;&lt;P&gt;sr0      11:0    1  1024M  0 rom  &lt;/P&gt;&lt;P&gt;nvme0n1 259:2    0   1.5T  0 disk /export/beegfs/data0&lt;/P&gt;&lt;P&gt;nvme1n1 259:6    0   1.5T  0 disk /export/beegfs/data1&lt;/P&gt;&lt;P&gt;nvme2n1 259:7    0   1.5T  0 disk /export/beegfs/data2&lt;/P&gt;&lt;P&gt;nvme3n1 259:5    0   1.5T  0 disk /export/beegfs/data3&lt;/P&gt;&lt;P&gt;nvme4n1 259:0    0   1.5T  0 disk /export/beegfs/data4&lt;/P&gt;&lt;P&gt;nvme5n1 259:3    0   1.5T  0 disk &lt;/P&gt;&lt;P&gt;nvme6n1 259:1    0   1.5T  0 disk &lt;/P&gt;&lt;P&gt;nvme7n1 259:4    0   1.5T  0 disk &lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  umount /export/beegfs/data2&lt;/P&gt;&lt;P&gt;[root@fs11 ~]#  lsblk &lt;/P&gt;&lt;P&gt;NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT&lt;/P&gt;&lt;P&gt;sda       8:0    0 223.6G  0 disk &lt;/P&gt;&lt;P&gt;├─sda1    8:1    0   512M  0 part /boot&lt;/P&gt;&lt;P&gt;├─sda2    8:2    0  31.5G  0 part [SWAP]&lt;/P&gt;&lt;P&gt;└─sda3    8:3    0 191.6G  0 part /&lt;/P&gt;&lt;P&gt;sdb       8:16   0 223.6G  0 disk /export/beegfs/meta&lt;/P&gt;&lt;P&gt;sdc       8:32   0  59.6G  0 disk &lt;/P&gt;&lt;P&gt;sdd       8:48   0  59.6G  0 disk &lt;/P&gt;&lt;P&gt;sr0      11:0    1  1024M  0 rom  &lt;/P&gt;&lt;P&gt;nvme0n1 259:2    0   1.5T  0 disk 
/export/beegfs/data0&lt;/P&gt;&lt;P&gt;nvme1n1 259:6    0   1.5T  0 disk /export/beegfs/data1&lt;/P&gt;&lt;P&gt;nvme2n1 259:7    0   1.5T  0 disk &lt;/P&gt;&lt;P&gt;nvme3n1 259:5    0   1.5T  0 disk /export/beegfs/data3&lt;/P&gt;&lt;P&gt;nvme4n1 259:0    0   1.5T  0 disk /export/beegfs/data4&lt;/P&gt;...</description>
      <pubDate>Mon, 25 Jul 2016 21:54:45 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19284#M7611</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-25T21:54:45Z</dc:date>
    </item>
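The mkfs.xfs / fstrim procedure walked through in the post above can be sketched as a shell sequence. This is a hedged reconstruction: the device path, mount point, and isdct drive index are taken from the post's transcript and are examples only, and `isdct delete` erases the drive.

```shell
# Sketch of the test sequence from the post above. DESTRUCTIVE: `isdct delete`
# secure-erases the drive. The device path, mount point, and isdct drive
# index (2) are assumptions based on the post's transcript; verify yours
# with `isdct show -intelssd` before running anything.
set -euo pipefail

DEV=/dev/nvme2n1
MNT=/export/beegfs/data2

umount "$MNT"             # release the filesystem
isdct delete -intelssd 2  # wipe the drive (index 2 is an assumption)
mkfs.xfs "$DEV"           # note: no -K, so mkfs issues discards itself
mount "$DEV" "$MNT"
fstrim -v "$MNT"          # trim free space; -v reports bytes trimmed
```

Omitting -K is the variable under test here: with -K, mkfs.xfs skips the discard pass, so the drive never sees the blocks released at format time.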
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19285#M7612</link>
      <description>&lt;P&gt;Hello everyone,&lt;/P&gt;We would like to address the performance drop questions first so we don't mix the situations.Can you please confirm if this was the process you followed:-Create large file-Flush page cache-Run FIONow, at which step are you flushing the page cache to avoid performance drop?Please let us know.NC</description>
      <pubDate>Thu, 28 Jul 2016 15:20:12 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19285#M7612</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-28T15:20:12Z</dc:date>
    </item>
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19286#M7613</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;First we skipped flushing of page cache and performed fio testing right after large file creation. And with such approach results of direct reading tests were very poor.&lt;/P&gt;&lt;P&gt;But as I mentioned above, we found that flushing page cache after file creation improves situation. Which is confusing because O_DIRECT mode has to skip any operations with page cache.&lt;/P&gt;&lt;P&gt;Later I reviewed Linux kernel code and found that it performs some operations with page cache even in direct mode. So now I suspect that this issue is rather related to Linux kernel.&lt;/P&gt;</description>
      <pubDate>Thu, 28 Jul 2016 15:54:03 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19286#M7613</guid>
      <dc:creator>ANaza</dc:creator>
      <dc:date>2016-07-28T15:54:03Z</dc:date>
    </item>
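The workaround ANaza describes (flush the page cache between creating the large file and running the O_DIRECT read test) can be sketched as follows. The file path and size are illustrative assumptions, and the fio parameters are the ones quoted earlier in the thread (4K random read, iodepth 32, 4 jobs, direct I/O).

```shell
# Sketch of the workaround: create the large file, flush and drop the page
# cache, then run the O_DIRECT random-read test. Path and size are examples,
# not the poster's exact values. Requires root for drop_caches.
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=307200  # ~300 GiB sequential write
sync                                 # write dirty pages back to the drive
echo 3 > /proc/sys/vm/drop_caches    # drop page cache plus dentries/inodes
fio --name=readtest --filename=/mnt/test/bigfile --ioengine=libaio \
    --direct=1 --rw=randread --blocksize=4k --iodepth=32 --numjobs=4 \
    --runtime=600 --group_reporting
```

The `sync` matters: `drop_caches` only evicts clean pages, so without it the freshly written file's dirty pages stay cached and the "flush" has little effect.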
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19287#M7614</link>
      <description>&lt;P&gt;AlexNZ,&lt;/P&gt;Thanks for the information provided, we will continue with our testing here and we will let you know soon.NC</description>
      <pubDate>Thu, 28 Jul 2016 21:58:30 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19287#M7614</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-07-28T21:58:30Z</dc:date>
    </item>
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19288#M7615</link>
      <description>&lt;P&gt;Hello Everyone,&lt;/P&gt;Our engineering team is running some investigation on this report and we will share any results once we get them.Thanks.NC</description>
      <pubDate>Thu, 04 Aug 2016 22:28:00 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19288#M7615</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-08-04T22:28:00Z</dc:date>
    </item>
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19289#M7616</link>
      <description>&lt;P&gt;Hi AlexNZ,&lt;/P&gt; Chances are that your findings with the Kernel are the reason for this drop. We understand that Linux users can submit Kernel questions, findings and bugs here:  &lt;A href="https://bugzilla.kernel.org/" rel="nofollow noopener noreferrer"&gt;https://bugzilla.kernel.org/&lt;/A&gt;. Here are some instructions that we found:  &lt;A href="https://www.kernel.org/pub/linux/docs/lkml/reporting-bugs.html" rel="nofollow noopener noreferrer"&gt;https://www.kernel.org/pub/linux/docs/lkml/reporting-bugs.html&lt;/A&gt; It is very important to bear in mind that the benchmarking we provide using FIO (or even IOMeter for Windows), as per the evaluation guide, are not done the same way you've reported to be doing it, since the evaluation guide (shared on previous post) states that those are synthetic tools, which are used for raw disk, and you all seem to be getting the actual numbers we've shared on the SSD's spec's when measuring raw disk… the drop you see is once the file system is created and, as you may know, even different type of file systems may cause different SSD's performance numbers, some interesting articles on this (that you may even be aware of, but still worth to share):  &lt;A href="http://www.linux-magazine.com/Issues/2015/172/Tuning-Your-SSD" rel="nofollow noopener noreferrer"&gt;http://www.linux-magazine.com/Issues/2015/172/Tuning-Your-SSD&lt;/A&gt;  &lt;A href="https://wiki.archlinux.org/index.php/Solid_State_Drives" rel="nofollow noopener noreferrer"&gt;https://wiki.archlinux.org/index.php/Solid_State_Drives&lt;/A&gt;  &lt;A href="http://www.phoronix.com/scan.php?page=article&amp;amp;item=linux-43-ssd&amp;amp;num=1" rel="nofollow noopener noreferrer"&gt;http://www.phoronix.com/scan.php?page=article&amp;amp;item=linux-43-ssd&amp;amp;num=1&lt;/A&gt;Please let us know if you have any questions.NC</description>
      <pubDate>Mon, 08 Aug 2016 14:54:19 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19289#M7616</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2016-08-08T14:54:19Z</dc:date>
    </item>
    <item>
      <title>Re: Critical performance drop on newly created large file</title>
      <link>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19290#M7617</link>
      <description>&lt;P&gt;Hello NC,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for your reply.&lt;/P&gt;&lt;P&gt;I'll consider asking Kernel community about it. But since I know how to avoid this effect and know that kernel actually manipulates page cache in direct mode, I think it's not longer so important.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Alex&lt;/P&gt;</description>
      <pubDate>Mon, 08 Aug 2016 21:14:58 GMT</pubDate>
      <guid>https://community.solidigm.com/t5/solid-state-drives-nand/critical-performance-drop-on-newly-created-large-file/m-p/19290#M7617</guid>
      <dc:creator>ANaza</dc:creator>
      <dc:date>2016-08-08T21:14:58Z</dc:date>
    </item>
  </channel>
</rss>

