Code:
mike@beastie:/plexserv$ sudo fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=5g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.28
Starting 1 process
TEST: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [W(1)][58.3%][w=536MiB/s][w=536 IOPS][eta 00m:05s]
Jobs: 1 (f=1): [W(1)][86.7%][w=536MiB/s][w=535 IOPS][eta 00m:02s]
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=34992: Sun Sep 29 19:05:42 2024
write: IOPS=471, BW=472MiB/s (495MB/s)(10.0GiB/21713msec); 0 zone resets
slat (usec): min=201, max=3831, avg=1504.26, stdev=692.03
clat (usec): min=2, max=6321.0k, avg=65618.34, stdev=344197.00
lat (usec): min=330, max=6322.8k, avg=67123.13, stdev=344263.32
clat percentiles (msec):
| 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 16],
| 30.00th=[ 41], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59],
| 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 65], 95.00th=[ 71],
| 99.00th=[ 82], 99.50th=[ 87], 99.90th=[ 6275], 99.95th=[ 6342],
| 99.99th=[ 6342]
bw ( KiB/s): min=354304, max=2258944, per=100.00%, avg=658597.16, stdev=443965.52, samples=31
iops : min= 346, max= 2206, avg=643.16, stdev=433.56, samples=31
lat (usec) : 4=0.02%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.04%, 4=0.06%, 10=8.30%, 20=14.21%, 50=10.73%
lat (msec) : 100=66.31%, >=2000=0.30%
fsync/fdatasync/sync_file_range:
sync (nsec): min=1190, max=1190, avg=1190.00, stdev= 0.00
sync percentiles (nsec):
| 1.00th=[ 1192], 5.00th=[ 1192], 10.00th=[ 1192], 20.00th=[ 1192],
| 30.00th=[ 1192], 40.00th=[ 1192], 50.00th=[ 1192], 60.00th=[ 1192],
| 70.00th=[ 1192], 80.00th=[ 1192], 90.00th=[ 1192], 95.00th=[ 1192],
| 99.00th=[ 1192], 99.50th=[ 1192], 99.90th=[ 1192], 99.95th=[ 1192],
| 99.99th=[ 1192]
cpu : usr=2.19%, sys=14.37%, ctx=61743, majf=1, minf=14
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=99.4%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=472MiB/s (495MB/s), 472MiB/s-472MiB/s (495MB/s-495MB/s), io=10.0GiB (10.7GB), run=21713-21713msec
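
A quick sanity check on that summary line: fio pushed io=10.0GiB in 21,713 ms, which works out to the 472 MiB/s it reports (back-of-the-envelope with bc, using the figures from the run status above):

Code:
$ echo "scale=1; 10 * 1024 / 21.713" | bc
471.6
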
mike@beastie:/plexserv$ sudo fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=5g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.28
Starting 1 process
Jobs: 1 (f=1)
TEST: (groupid=0, jobs=1): err= 0: pid=59840: Sun Sep 29 19:06:15 2024
read: IOPS=3633, BW=3634MiB/s (3810MB/s)(10.0GiB/2818msec)
slat (usec): min=229, max=977, avg=273.12, stdev=42.13
clat (usec): min=2, max=25607, avg=8477.49, stdev=1126.32
lat (usec): min=256, max=26581, avg=8750.89, stdev=1164.08
clat percentiles (usec):
| 1.00th=[ 8094], 5.00th=[ 8094], 10.00th=[ 8094], 20.00th=[ 8094],
| 30.00th=[ 8094], 40.00th=[ 8160], 50.00th=[ 8160], 60.00th=[ 8160],
| 70.00th=[ 8160], 80.00th=[ 8160], 90.00th=[ 9896], 95.00th=[10028],
| 99.00th=[12911], 99.50th=[13435], 99.90th=[21890], 99.95th=[23725],
| 99.99th=[25297]
bw ( MiB/s): min= 2850, max= 3816, per=99.10%, avg=3601.20, stdev=422.17, samples=5
iops : min= 2850, max= 3816, avg=3601.20, stdev=422.17, samples=5
lat (usec) : 4=0.02%, 500=0.02%, 750=0.02%, 1000=0.02%
lat (msec) : 2=0.08%, 4=0.16%, 10=95.22%, 20=4.32%, 50=0.15%
cpu : usr=1.38%, sys=98.58%, ctx=9, majf=0, minf=8203
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=99.4%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=3634MiB/s (3810MB/s), 3634MiB/s-3634MiB/s (3810MB/s-3810MB/s), io=10.0GiB (10.7GB), run=2818-2818msec
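
Worth flagging: 3634 MiB/s of sustained reads from a three-disk pool, with sys at 98.58% CPU, almost certainly means the 5 GiB test file was served out of the ARC rather than off the disks, since it fits comfortably in RAM. Assuming ZFS on Linux, the ARC hit/miss counters can be compared before and after a read run:

Code:
$ grep -E '^(hits|misses)' /proc/spl/kstat/zfs/arcstats
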
--------------- now to test pool plexdata ---------------
mike@beastie:/plexdata$ sudo fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=5g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.28
Starting 1 process
TEST: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [W(1)][50.0%][w=343MiB/s][w=343 IOPS][eta 00m:07s]
Jobs: 1 (f=1): [W(1)][68.4%][w=316MiB/s][w=316 IOPS][eta 00m:06s]
Jobs: 1 (f=1): [W(1)][90.5%][w=345MiB/s][w=345 IOPS][eta 00m:02s]
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=59884: Sun Sep 29 19:08:54 2024
write: IOPS=317, BW=318MiB/s (333MB/s)(10.0GiB/32234msec); 0 zone resets
slat (usec): min=208, max=4409, avg=2156.49, stdev=1124.99
clat (usec): min=2, max=10200k, avg=97425.75, stdev=555468.28
lat (usec): min=283, max=10203k, avg=99582.82, stdev=555597.61
clat percentiles (msec):
| 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 18],
| 30.00th=[ 49], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 89],
| 70.00th=[ 91], 80.00th=[ 92], 90.00th=[ 95], 95.00th=[ 102],
| 99.00th=[ 112], 99.50th=[ 116], 99.90th=[10134], 99.95th=[10134],
| 99.99th=[10134]
bw ( KiB/s): min=30720, max=2840576, per=100.00%, avg=453700.27, stdev=434427.67, samples=45
iops : min= 30, max= 2774, avg=443.07, stdev=424.25, samples=45
lat (usec) : 4=0.02%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.04%, 4=0.08%, 10=14.39%, 20=5.71%, 50=9.92%
lat (msec) : 100=64.39%, 250=5.11%, >=2000=0.30%
fsync/fdatasync/sync_file_range:
sync (nsec): min=972, max=972, avg=972.00, stdev= 0.00
sync percentiles (nsec):
| 1.00th=[ 972], 5.00th=[ 972], 10.00th=[ 972], 20.00th=[ 972],
| 30.00th=[ 972], 40.00th=[ 972], 50.00th=[ 972], 60.00th=[ 972],
| 70.00th=[ 972], 80.00th=[ 972], 90.00th=[ 972], 95.00th=[ 972],
| 99.00th=[ 972], 99.50th=[ 972], 99.90th=[ 972], 99.95th=[ 972],
| 99.99th=[ 972]
cpu : usr=1.62%, sys=9.34%, ctx=62597, majf=0, minf=15
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=99.4%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=318MiB/s (333MB/s), 318MiB/s-318MiB/s (333MB/s-333MB/s), io=10.0GiB (10.7GB), run=32234-32234msec
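
Comparing the two write runs: the three-vdev stripe (plexserv) hit 472 MiB/s against 318 MiB/s for the raidz1 pool (plexdata), roughly a 1.5x gap, which lines up with raidz1 spending one drive's worth of bandwidth on parity (a rough check with bc):

Code:
$ echo "scale=2; 472 / 318" | bc    # stripe vs raidz1 write ratio
1.48
$ echo "472 * 2 / 3" | bc           # expected raidz1 rate at 2/3 of the stripe
314
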
mike@beastie:/plexdata$ sudo fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=5g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.28
Starting 1 process
Jobs: 1 (f=1)
TEST: (groupid=0, jobs=1): err= 0: pid=109236: Sun Sep 29 19:09:21 2024
read: IOPS=3629, BW=3630MiB/s (3806MB/s)(10.0GiB/2821msec)
slat (usec): min=225, max=947, avg=273.48, stdev=40.52
clat (usec): min=2, max=24871, avg=8487.41, stdev=1089.44
lat (usec): min=260, max=25820, avg=8761.18, stdev=1125.83
clat percentiles (usec):
| 1.00th=[ 8094], 5.00th=[ 8094], 10.00th=[ 8094], 20.00th=[ 8160],
| 30.00th=[ 8160], 40.00th=[ 8160], 50.00th=[ 8160], 60.00th=[ 8160],
| 70.00th=[ 8160], 80.00th=[ 8225], 90.00th=[10028], 95.00th=[10028],
| 99.00th=[12518], 99.50th=[13304], 99.90th=[21103], 99.95th=[22938],
| 99.99th=[24511]
bw ( MiB/s): min= 2864, max= 3806, per=99.11%, avg=3597.60, stdev=412.03, samples=5
iops : min= 2864, max= 3806, avg=3597.60, stdev=412.03, samples=5
lat (usec) : 4=0.02%, 500=0.02%, 750=0.02%, 1000=0.02%
lat (msec) : 2=0.08%, 4=0.16%, 10=93.69%, 20=5.86%, 50=0.14%
cpu : usr=0.67%, sys=99.29%, ctx=5, majf=0, minf=8203
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=99.4%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=3630MiB/s (3806MB/s), 3630MiB/s-3630MiB/s (3806MB/s-3806MB/s), io=10.0GiB (10.7GB), run=2821-2821msec
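
The plexdata read result (3630 MiB/s) is virtually identical to plexserv's, which again points at the ARC rather than the disks. To get genuine on-disk read numbers, the test file needs to be larger than RAM so it can't be fully cached; a sketch of such a run (the 64g size is a placeholder, to be set above the machine's actual memory):

Code:
$ sudo fio --name TEST --filename=temp.file --rw=read --size=64g --io_size=64g --blocksize=1024k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=1 --runtime=120 --group_reporting
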
mike@beastie:/plexserv$ sudo zpool iostat -v 30 5
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
plexdata     340K  10.9T      0     18  2.30K  6.23M
  raidz1-0   340K  10.9T      0     18  2.30K  6.23M
    int1d1      -      -      0      6    783  2.08M
    int1d2      -      -      0      6    783  2.08M
    int1d3      -      -      0      6    783  2.08M
----------  -----  -----  -----  -----  -----  -----
plexserv     202K  10.9T      0     11     65  9.15M
  int1d4      71K  3.62T      0      3     21  3.07M
  int1d5      68K  3.62T      0      4     21  3.03M
  ext1d1      63K  3.62T      0      3     21  3.06M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
plexdata     340K  10.9T      0      0      0      0
  raidz1-0   340K  10.9T      0      0      0      0
    int1d1      -      -      0      0      0      0
    int1d2      -      -      0      0      0      0
    int1d3      -      -      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
plexserv     202K  10.9T      0      0      0      0
  int1d4      71K  3.62T      0      0      0      0
  int1d5      68K  3.62T      0      0      0      0
  ext1d1      63K  3.62T      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
plexdata     340K  10.9T      0      0      0      0
  raidz1-0   340K  10.9T      0      0      0      0
    int1d1      -      -      0      0      0      0
    int1d2      -      -      0      0      0      0
    int1d3      -      -      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
plexserv     202K  10.9T      0      0      0      0
  int1d4      71K  3.62T      0      0      0      0
  int1d5      68K  3.62T      0      0      0      0
  ext1d1      63K  3.62T      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
plexdata     340K  10.9T      0      0      0      0
  raidz1-0   340K  10.9T      0      0      0      0
    int1d1      -      -      0      0      0      0
    int1d2      -      -      0      0      0      0
    int1d3      -      -      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
plexserv     202K  10.9T      0      0      0      0
  int1d4      71K  3.62T      0      0      0      0
  int1d5      68K  3.62T      0      0      0      0
  ext1d1      63K  3.62T      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
plexdata     340K  10.9T      0      0      0      0
  raidz1-0   340K  10.9T      0      0      0      0
    int1d1      -      -      0      0      0      0
    int1d2      -      -      0      0      0      0
    int1d3      -      -      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
plexserv     202K  10.9T      0      0      0      0
  int1d4      71K  3.62T      0      0      0      0
  int1d5      68K  3.62T      0      0      0      0
  ext1d1      63K  3.62T      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
mike@beastie:/plexserv$
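
The all-zero intervals above are expected: the first zpool iostat sample is the average since pool import, and the rest show idle pools because the command was started after both fio runs had finished. To catch per-vdev activity, iostat has to run in a second shell while fio is going, e.g.:

Code:
$ sudo zpool iostat -v plexdata 5
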
I suspect that pool plexdata will gain throughput once I rebuild it with 6 to 11 matching drives, plus a spare drive or two for insurance on that pool.
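
For what it's worth, the spare-drive part doesn't need to wait for the rebuild; a hot spare can be attached to the existing pool at any time (device name below is hypothetical):

Code:
$ sudo zpool add plexdata spare int1d6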