Testing Tools
Use the fio tool; testing with the libaio I/O engine is recommended.
Installation Method
Linux (CentOS/RHEL): yum install fio.x86_64
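To verify the installation, you can print the version and check that the libaio engine is available (a quick sanity check; the exact engine list depends on how fio was built):
fio --version
fio --enginehelp | grep -w libaio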
fio Parameter Description
Parameter | Description |
---|---|
-direct=1 | Bypass the page cache and access the disk directly (O_DIRECT) |
-iodepth=128 | I/O queue depth, i.e. the number of in-flight requests per job |
-rw=write | I/O pattern; valid values include randread (random read), randwrite (random write), read (sequential read), write (sequential write), and randrw (mixed random read/write) |
-ioengine=libaio | I/O engine; libaio is recommended |
-bs=4k | Block size of each I/O, e.g. 4k, 8k, 16k |
-size=200G | Total size of the test file |
-numjobs=1 | Number of concurrent jobs |
-runtime=1000 | Duration of the test run, in seconds |
-group_reporting | Aggregate the results of all jobs into a single summary |
-name=test | Name of the test job |
-filename=/data/test | Path of the test file or device |
Common test examples are as follows:
- Latency performance test:
Read Latency:
fio -direct=1 -iodepth=1 -rw=read -ioengine=libaio -bs=4k -size=200G -numjobs=1 -runtime=1000 -group_reporting -name=test -filename=/data/test
Write Latency:
fio -direct=1 -iodepth=1 -rw=write -ioengine=libaio -bs=4k -size=200G -numjobs=1 -runtime=1000 -group_reporting -name=test -filename=/data/test
- Throughput performance test:
Read Bandwidth:
fio -direct=1 -iodepth=32 -rw=read -ioengine=libaio -bs=256k -size=200G -numjobs=4 -runtime=1000 -group_reporting -name=test -filename=/data/test
Write Bandwidth:
fio -direct=1 -iodepth=32 -rw=write -ioengine=libaio -bs=256k -size=200G -numjobs=4 -runtime=1000 -group_reporting -name=test -filename=/data/test
- IOPS performance test (4k block size, 4 jobs × 32 queue depth, random read/write):
Read IOPS:
fio -direct=1 -iodepth=32 -rw=randread -ioengine=libaio -bs=4k -size=200G -numjobs=4 -runtime=1000 -group_reporting -name=test -filename=/data/test
Write IOPS:
fio -direct=1 -iodepth=32 -rw=randwrite -ioengine=libaio -bs=4k -size=200G -numjobs=4 -runtime=1000 -group_reporting -name=test -filename=/data/test
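If you want to post-process the results of the examples above, fio can also emit machine-readable output. The sketch below runs a short random-read test with JSON output and extracts the headline numbers with jq (jq must be installed separately, and the JSON field names can vary slightly across fio versions; /data/test is a placeholder path):
fio -direct=1 -iodepth=32 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=60 -group_reporting -name=test -filename=/data/test --output-format=json > result.json
jq '.jobs[0].read.iops' result.json   # average read IOPS
jq '.jobs[0].read.bw' result.json     # average read bandwidth, in KiB/s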
RSSD Performance Test
Because the test load plays a key role in whether a cloud disk reaches its rated performance, the test must fully exploit the system's multi-core, multi-threaded capacity to drive the RSSD cloud disk to its rated 1.2 million IOPS. You can refer to the following rssd_test.sh script:
#!/bin/bash
numjobs=16 # Test thread number, do not exceed the number of CPU cores, default 16
iodepth=32 # IO queue depth per thread, default 32
bs=4k # Size per I/O, default 4k
rw=randread # Read and write strategy, default random read
dev_name=vdb # Test block device name, default vdb
if [[ $# == 0 ]]; then
echo "Default test: `basename $0` $numjobs $iodepth $bs $rw $dev_name"
echo "Or you can specify parameter:"
echo "`basename $0` numjobs iodepth bs rw dev_name"
elif [[ $# == 5 ]]; then
numjobs=$1
iodepth=$2
bs=$3
rw=$4
dev_name=$5
else
echo "Parameter number error!"
echo "`basename $0` numjobs iodepth bs rw dev_name"
exit 1
fi
nr_cpus=`cat /proc/cpuinfo |grep "processor" |wc -l`
if [ $nr_cpus -lt $numjobs ];then
echo "Numjobs is more than cpu cores, exit!"
exit 1
fi
nu=$((numjobs+1))
cpulist=""
for ((i=1;i<10;i++))
do
list=`cat /sys/block/${dev_name}/mq/*/cpu_list | awk '{if(i<=NF) print $i;}' i="$i" | tr -d ',' | tr '\n' ','`
if [ -z "$list" ];then
break
fi
cpulist=${cpulist}${list}
done
spincpu=`echo $cpulist | cut -d ',' -f 2-${nu}` # Do not use core 0
echo $spincpu
echo $numjobs
echo 2 > /sys/block/${dev_name}/queue/rq_affinity
sleep 5
# Execute fio command
fio --ioengine=libaio --runtime=30s --numjobs=${numjobs} --iodepth=${iodepth} --bs=${bs} --rw=${rw} --filename=/dev/${dev_name} --time_based=1 --direct=1 --name=test --group_reporting --cpus_allowed=$spincpu --cpus_allowed_policy=split
Testing Description
- Depending on your test environment, you can specify the script's input parameters; if none are specified, the default test is executed (see the usage example after this list).
- Testing a raw block device directly destroys its file system structure. If there is data on the cloud disk, set filename to a specific file path, such as /mnt/test.image; if the disk holds no data, you can set filename directly to the device name, such as /dev/vdb in this example.
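For example, assuming the script above has been saved as rssd_test.sh, it can be invoked with the defaults or with all five parameters spelled out (the values below are only illustrative):
bash rssd_test.sh                        # defaults: 16 jobs, depth 32, 4k random read on vdb
bash rssd_test.sh 8 64 16k randread vdc  # 8 jobs, depth 64, 16k blocks, random read on vdc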
Script Explanation
Block Device Parameters
- When testing an instance, the command echo 2 > /sys/block/vdb/queue/rq_affinity in the script sets the rq_affinity parameter of the cloud host instance's block device to 2.
- When rq_affinity is 1: when the block device receives an I/O completion event, the completion is sent back for handling to the group containing the vCPU that issued the I/O. In multi-threaded concurrent scenarios, I/O completions may then concentrate on a single vCPU, creating a bottleneck that limits performance.
- When rq_affinity is 2: when the block device receives an I/O completion event, the completion is executed on the vCPU that originally issued the I/O. In multi-threaded concurrent scenarios, this fully utilizes the performance of every vCPU (see the example below).
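As a quick illustration, the current value can be inspected and changed from a root shell (vdb is the example device used throughout; note that this sysfs setting does not persist across reboots):
cat /sys/block/vdb/queue/rq_affinity
echo 2 > /sys/block/vdb/queue/rq_affinity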
Binding to Corresponding vCPUs
- In normal mode, a device has only one Request-Queue; under multi-threaded concurrent I/O, this single Request-Queue becomes a performance bottleneck.
- In the newer Multi-Queue mode, a device can have multiple Request-Queues for processing I/O, which fully leverages the performance of the back-end storage. If you have 4 I/O threads, bind each of the 4 threads to the CPU core corresponding to a different Request-Queue; this way, Multi-Queue is fully utilized to improve performance.
- fio provides the cpus_allowed and cpus_allowed_policy parameters for binding vCPUs. Taking the vdb cloud disk as an example, run ls /sys/block/vdb/mq/ to view the QueueIds of the device, and run cat /sys/block/vdb/mq/$QueueId/cpu_list to view the CPU core IDs bound to each QueueId (a combined sketch follows this list).
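Putting this together, a minimal sketch for inspecting the queues and pinning fio jobs by hand (the core IDs passed to --cpus_allowed are placeholders; substitute the ones reported by cpu_list on your machine):
ls /sys/block/vdb/mq/
for q in /sys/block/vdb/mq/*/cpu_list; do echo "$q: $(cat $q)"; done
fio --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=30s --time_based=1 --name=test --group_reporting --filename=/dev/vdb --cpus_allowed=1,2,3,4 --cpus_allowed_policy=split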