# Testing Disk I/O Performance on Linux with fio

fio is a widely used tool for benchmarking disk I/O performance. It supports 19 different I/O engines, including sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more. fio is under active development; at the time of writing the latest release is v3.19, and the project is hosted at https://github.com/axboe/fio.
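fio can report its own version and enumerate the I/O engines compiled into a given build. A quick check (this assumes fio is already installed; installation is covered in the next section):

```bash
# Print the installed fio version
$ fio --version
# List the I/O engines available in this build
$ fio --enghelp
```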
fio can stress a disk in two ways: by passing parameters on the command line, or by reading them from a job file. The two are largely equivalent, but the latter combines well with sh, screen, and similar tools to keep a test running for a long time, as sketched below.
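A minimal sketch of such a long run, assuming a job file named fio.conf like the one shown later in this article; the screen session name and output file name are placeholders:

```bash
# Start fio in a detached screen session, writing results to a file
$ screen -dmS fio-bench fio fio.conf --output=fio-result.txt
# Reattach later to check progress
$ screen -r fio-bench
```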
## Installation

```bash
$ sudo apt install fio
```

## Job file parameters

The job file is in ini format: it is divided into sections, and each section sets key-value pairs with `=`. A quick illustration follows; the most important parameters are described after it.
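A minimal sketch of the layout: a `[global]` section holding shared defaults, then one section per job. The job name, path, and values below are placeholders, not part of the benchmark used later in this article:

```bash
# Write a tiny throwaway job file and run it
$ cat > demo.fio <<'EOF'
[global]
ioengine=libaio
direct=1
time_based=1
runtime=10

[randread-demo]
rw=randread
bs=4k
size=256m
directory=/var/tmp
EOF
$ fio demo.fio
```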
- filename: the file (or device) to test. Multiple files can be specified, separated by colons, e.g. filename=/dev/sda:/dev/sdb.
- directory: a path prefix prepended to filename. The benchmark below uses this to point at the target device.
- name: the name of the job; on the command line it also marks the start of a new job.
- direct: boolean, default 0. Set to 1 to bypass the I/O buffer.
- ioengine: the I/O engine; fio currently supports 19 of them. The default is sync, synchronous blocking I/O; libaio is Linux's native asynchronous I/O. For background on synchronous vs. asynchronous and blocking vs. non-blocking models, see the article "Boost application performance using asynchronous I/O".
- iodepth: when the ioengine is asynchronous, the number of I/O units to keep in flight per batch. See the article "The fio benchmarking tool and I/O queue depth: understanding and misconceptions".
- rw: the I/O pattern: random or sequential reads/writes. Options: read, write, randread, randwrite, rw, randrw.
- bs: the block size of each I/O, default 4k. It can be increased for sequential read/write tests.
- size: the total amount of data the job reads/writes.
- numjobs: the number of clones (threads) of the job.
- time_based: if the file has been completely read or written before runtime expires, keep repeating the workload until runtime is up.
- runtime: stop the job after this many seconds. If unset, fio runs until the specified amount of data has been fully read or written.
- group_reporting: when numjobs is used, report results per group rather than per job.

## Command-line usage

```bash
# Sequential read
$ fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest

# Sequential write
$ fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest

# Random read
$ fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest

# Random write
$ fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest

# Mixed random read/write (70% reads)
$ fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=100 -group_reporting -name=mytest -ioscheduler=noop
```

## Running from a job file

```ini
# fio.conf
[global]
ioengine=libaio
iodepth=128
direct=0
thread=1
numjobs=16
norandommap=1
randrepeat=0
runtime=60
ramp_time=6
size=1g
directory=/your/path

[read4k-rand]
stonewall
group_reporting
bs=4k
rw=randread

[read64k-seq]
stonewall
group_reporting
bs=64k
rw=read

[write4k-rand]
stonewall
group_reporting
bs=4k
rw=randwrite

[write64k-seq]
stonewall
group_reporting
bs=64k
rw=write
```

```
$ fio fio.conf
read4k-rand: (groupid=0, jobs=16): err= 0: pid=2571: Tue May 12 15:28:36 2020
  read: IOPS=33.4k, BW=131MiB/s (137MB/s)(7834MiB/60002msec)
    slat (nsec): min=1703, max=3754.6k, avg=476047.81, stdev=421903.44
    clat (usec): min=4, max=93701, avg=60830.41, stdev=9669.21
     lat (usec): min=171, max=94062, avg=61307.14, stdev=9735.47
    clat percentiles (usec):
     |  1.00th=[41681],  5.00th=[45876], 10.00th=[48497], 20.00th=[51643],
     | 30.00th=[54789], 40.00th=[57410], 50.00th=[60556], 60.00th=[63701],
     | 70.00th=[66847], 80.00th=[69731], 90.00th=[73925], 95.00th=[77071],
     | 99.00th=[81265], 99.50th=[82314], 99.90th=[85459], 99.95th=[86508],
     | 99.99th=[88605]
   bw (  KiB/s): min= 6240, max=11219, per=6.27%, avg=8377.40, stdev=1197.43, samples=1920
   iops        : min= 1560, max= 2804, avg=2094.01, stdev=299.31, samples=1920
  lat (usec)   : 10=0.01%, 250=0.01%, 500=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.02%, 50=14.27%
  lat (msec)   : 100=85.80%
  cpu          : usr=0.59%, sys=1.94%, ctx=1545619, majf=0, minf=0
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=107.7%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwt: total=2003580,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128
read64k-seq: (groupid=1, jobs=16): err= 0: pid=2590: Tue May 12 15:28:36 2020
  read: IOPS=10.7k, BW=674MiB/s (707MB/s)(12.3GiB/18607msec)
    slat (usec): min=8, max=12592, avg=1495.75, stdev=3792.74
    clat (usec): min=2, max=193705, avg=189323.53, stdev=12000.95
     lat (usec): min=21, max=193723, avg=190819.12, stdev=11397.67
    clat percentiles (msec):
     |  1.00th=[  180],  5.00th=[  180], 10.00th=[  182], 20.00th=[  190],
     | 30.00th=[  192], 40.00th=[  192], 50.00th=[  192], 60.00th=[  192],
     | 70.00th=[  192], 80.00th=[  192], 90.00th=[  192], 95.00th=[  192],
     | 99.00th=[  194], 99.50th=[  194], 99.90th=[  194], 99.95th=[  194],
     | 99.99th=[  194]
   bw (  KiB/s): min=41000, max=43682, per=6.12%, avg=42274.07, stdev=551.21, samples=592
   iops        : min=  640, max=  682, avg=660.07, stdev= 8.64, samples=592
  lat (usec)   : 4=0.01%, 50=0.01%, 100=0.01%, 250=0.02%, 500=0.01%
  lat (usec)   : 750=0.01%
  lat (msec)   : 10=0.01%, 20=0.05%, 50=0.19%, 100=0.26%, 250=100.46%
  cpu          : usr=0.13%, sys=2.13%, ctx=49722, majf=0, minf=16
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=131.4%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwt: total=198676,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128
write4k-rand: (groupid=2, jobs=16): err= 0: pid=2607: Tue May 12 15:28:36 2020
  write: IOPS=2409k, BW=9411MiB/s (9868MB/s)(16.0GiB/1741msec)
    clat percentiles (nsec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    0], 40.00th=[    0], 50.00th=[    0], 60.00th=[    0],
     | 70.00th=[    0], 80.00th=[    0], 90.00th=[    0], 95.00th=[    0],
     | 99.00th=[    0], 99.50th=[    0], 99.90th=[    0], 99.95th=[    0],
     | 99.99th=[    0]
  cpu          : usr=14.12%, sys=85.68%, ctx=2337, majf=0, minf=16
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwt: total=0,4194304,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128
write64k-seq: (groupid=3, jobs=16): err= 0: pid=2623: Tue May 12 15:28:36 2020
  write: IOPS=221k, BW=13.5GiB/s (14.5GB/s)(16.0GiB/1188msec)
    clat percentiles (nsec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    0], 40.00th=[    0], 50.00th=[    0], 60.00th=[    0],
     | 70.00th=[    0], 80.00th=[    0], 90.00th=[    0], 95.00th=[    0],
     | 99.00th=[    0], 99.50th=[    0], 99.90th=[    0], 99.95th=[    0],
     | 99.99th=[    0]
  cpu          : usr=3.63%, sys=96.11%, ctx=1862, majf=0, minf=16
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwt: total=0,262144,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
   READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=7834MiB (8215MB), run=60002-60002msec

Run status group 1 (all jobs):
   READ: bw=674MiB/s (707MB/s), 674MiB/s-674MiB/s (707MB/s-707MB/s), io=12.3GiB (13.2GB), run=18607-18607msec

Run status group 2 (all jobs):
  WRITE: bw=9411MiB/s (9868MB/s), 9411MiB/s-9411MiB/s (9868MB/s-9868MB/s), io=16.0GiB (17.2GB), run=1741-1741msec

Run status group 3 (all jobs):
  WRITE: bw=13.5GiB/s (14.5GB/s), 13.5GiB/s-13.5GiB/s (14.5GB/s-14.5GB/s), io=16.0GiB (17.2GB), run=1188-1188msec

Disk stats (read/write):
  nvme0n1: ios=1819561/61, merge=0/580, ticks=2577676/16, in_queue=2501764, util=96.22%
```

Judging from the bw and IOPS figures above, this is an SSD on a PCIe 3.0 x2 link, most likely in an M.2 form factor. (Note that the job file sets direct=0, so writes are absorbed by the page cache first; that is why the two WRITE bandwidth figures are far higher than any SSD can sustain, and the READ numbers are the meaningful ones here.)
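As a quick sanity check, bandwidth, IOPS, and block size should satisfy bw ≈ IOPS × bs, and the read results above are consistent with that:

```bash
# read4k-rand: 33.4k IOPS at 4 KiB blocks
$ echo "$((33400 * 4 / 1024)) MiB/s"    # ≈ 130 MiB/s, matches BW=131MiB/s
# read64k-seq: 10.7k IOPS at 64 KiB blocks
$ echo "$((10700 * 64 / 1024)) MiB/s"   # ≈ 668 MiB/s, matches BW=674MiB/s
```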
...