09-02-2019 04:00 PM
Random write to Intel SSDPEDKE040T7.
After ~1.4 TB of random writes, the SSD's write performance drops to half.
The test script:
#!/bin/sh
nvme="$1"
if [ -z "$nvme" ]; then
    echo "Usage: $0 <nvme>"
    exit 1
fi
ls "$nvme" || exit 1

# Device size in KiB, matched exactly against /proc/partitions so a
# whole device (e.g. nvme0n1) is not confused with its partitions.
size_k=$(awk -v dev="$(basename "$nvme")" '$4 == dev {print $3}' /proc/partitions)
echo "$nvme: size_k $size_k"

size_128k=$((size_k / 128))   # device size in 128 KiB blocks
len_128k=$((4096 + 1))        # each write: 4097 x 128 KiB (~512 MiB)

while :; do
    # Random 128 KiB block offset, aligned down to a 4096-block boundary.
    pos=$(perl -e "print int(rand() * ($size_128k - $len_128k))")
    pos=$((pos - pos % 4096))
    echo -n "$pos: "
    /data/misc/dd if=/dev/zero bs=128k count="$len_128k" of="$nvme" \
        oflag=direct seek="$pos" 2>&1 | grep copied
done
The attached file is its output.
The output shows the write performance drops from 1.9 GB/s to 1.0 GB/s and then stays at 1.0 GB/s.
A sequential dd over the whole SSD brings the performance back.
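For reference, the sequential write I use to bring it back is roughly this (a sketch; substitute the actual device path):

# Sequential full-drive write that restores performance (sketch).
# WARNING: destroys all data; /dev/nvme0n1 is a placeholder path.
dd if=/dev/zero of=/dev/nvme0n1 bs=128k oflag=direct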
Can you explain what causes it?
Can you provide a spec that explains it?
Best regards,
Guofeng
09-04-2019 08:46 PM
Hi GLi29;
Greetings from Intel® SSD communities.
As per your request, we will need to perform additional research before proceeding with further details.
While we run the tests, we will work on the reported behavior and get back to you with updates as soon as possible.
Don’t hesitate to contact us in case you have any additional inquiries.
Have a nice day.
Santiago A.
Intel® Customer Support Technician
A Contingent Worker at Intel
09-12-2019 10:50 PM
Hi GLi29;
Greetings from Intel® SSD communities.
The SSD read and write speeds found in the product brief and on http://ark.intel.com are the results of tests run in a laboratory under "ideal conditions," meaning a brand-new, empty SSD, in a temperature-controlled room, on very powerful systems.
If there is data on the drive, or it is running an operating system, the SSD will never reach these speeds in a real-world environment. It is important to acknowledge this before starting any troubleshooting of this sort.
For benchmarking, FIO is the recommended tool for Linux environments. We recommend that you download and install the FIO benchmarking tool; please download the FIO master zip from GitHub at https://github.com/axboe/fio (on that page you can find additional information from the creators of this tool; please check the README notes).
You will also need the libaio-devel rpm in order to install the necessary .h files to build the libaio ioengine. The gcc and zlib-devel rpms are also needed for proper FIO operation. These rpms should be in the installation source for your Linux distribution.
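For example, on an RPM-based distribution the prerequisites can be installed along these lines (a sketch; yum and the exact package names assume a RHEL/CentOS-style system):

# Install the FIO build prerequisites (assumes a yum-based distro).
yum install -y gcc zlib-devel libaio-devel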
unzip fio-master.zip
cd fio-master
./configure
make
make install
# verify fio installed by running the below command
fio --version
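Once FIO is working, a job along these lines could be a starting point (a sketch only; the device path, queue depth, and runtime are assumptions to adjust for your setup, and note it issues plain 128k random writes rather than the ~512 MiB chunks your script writes):

# Random-write benchmark sketch. WARNING: overwrites the raw device.
# /dev/nvme0n1, iodepth, and runtime are placeholders -- adjust them.
fio --name=randwrite-test --filename=/dev/nvme0n1 \
    --rw=randwrite --bs=128k --direct=1 \
    --ioengine=libaio --iodepth=32 \
    --time_based --runtime=600 --group_reporting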
Don’t hesitate to contact us in case you might have any additional inquiries.
Have a nice day.
Santiago A.
Intel® Customer Support Technician
A Contingent Worker at Intel
09-13-2019 09:08 AM
Hi Santiago,
Thanks for your reply.
I am okay with the performance figures in the product brief.
We will use the SSD to store a large volume of high-rate data, not a Linux system or any known file system.
If you can provide the technical details about what I see in my test, it would be easier for us to design for it.
For example, at the moment we use a block list to track which blocks are clean/dirty.
After a long period of allocation/free, this list produces the random-write pattern.
We have a cleanup routine, but it does not seem efficient.
That is why I am asking for the technical details,
such as: what triggers it? Why does an all-drive sequential write bring the performance back?
Why does the slow performance appear after ~1.x TB of random writes?
Is this related to wear leveling or anything else?
Any firmware revision you would suggest for our case?
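For reference, we read the current firmware revision with nvme-cli as follows (a sketch; assumes nvme-cli is installed and the controller is /dev/nvme0):

# Print the firmware revision (fr) field from the controller identify data.
nvme id-ctrl /dev/nvme0 | grep '^fr '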
Thanks, Guofeng
09-13-2019 09:46 PM
Hi GLi29;
Thank you for your reply to Intel® SSD communities.
Regarding your request, please let us explain that this goes beyond our support scope, since FIO is a third-party product.
We might be able to help and address some of your concerns; however, in order to do that, we need you to run the benchmark tests using FIO and not the script you are currently using; the reason is that we cannot validate that script.
After getting new results from FIO we can try to help you; however, please let us insist that this is beyond our support scope.
Don’t hesitate to contact us in case you might have any additional inquiries.
Have a nice day.
Santiago A.
Intel® Customer Support Technician
A Contingent Worker at Intel