Code:
mike@beastie:/metapool$ sudo zpool history
History for 'metapool':
2024-09-18.18:59:07 zpool create metapool raidz2 sda1 sdb1 sdc1 sdd1 sde1 -f
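Worth noting: the create command above never set ashift, so the pool is using whatever OpenZFS auto-detected from the drives' reported sector size. Before rebuilding anything, it's worth checking what the pool actually got; assuming a reasonably current OpenZFS, something like this should show it:
Code:
# The pool-level property may read 0 if ashift was never set explicitly
# (0 means auto-detect); the per-vdev value that zdb prints is the one
# the pool is really using. zdb reads the cached pool config, so this
# assumes the default cachefile is in use.
sudo zpool get ashift metapool
sudo zdb -C metapool | grep ashift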
mike@beastie:/metapool$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.28
Starting 1 process
TEST: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=1): [W(1)][43.8%][w=407MiB/s][w=407 IOPS][eta 00m:09s]
Jobs: 1 (f=1): [W(1)][56.5%][w=200MiB/s][w=200 IOPS][eta 00m:10s]
Jobs: 1 (f=1): [W(1)][76.0%][w=297MiB/s][w=297 IOPS][eta 00m:06s]
Jobs: 1 (f=1): [W(1)][89.3%][w=199MiB/s][w=199 IOPS][eta 00m:03s]
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
Jobs: 1 (f=1): [W(1)][97.4%][eta 00m:01s]
TEST: (groupid=0, jobs=1): err= 0: pid=63250: Wed Sep 18 19:01:40 2024
write: IOPS=274, BW=275MiB/s (288MB/s)(10.0GiB/37276msec); 0 zone resets
slat (usec): min=273, max=5649, avg=2693.44, stdev=1534.30
clat (usec): min=3, max=9683.3k, avg=112275.40, stdev=528669.95
lat (usec): min=296, max=9684.9k, avg=114969.53, stdev=528747.76
clat percentiles (msec):
| 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 17], 20.00th=[ 27],
| 30.00th=[ 54], 40.00th=[ 67], 50.00th=[ 81], 60.00th=[ 97],
| 70.00th=[ 115], 80.00th=[ 140], 90.00th=[ 153], 95.00th=[ 157],
| 99.00th=[ 159], 99.50th=[ 161], 99.90th=[ 9731], 99.95th=[ 9731],
| 99.99th=[ 9731]
bw ( KiB/s): min=43008, max=1923072, per=100.00%, avg=364580.57, stdev=296478.87, samples=56
iops : min= 42, max= 1878, avg=356.04, stdev=289.53, samples=56
lat (usec) : 4=0.01%, 10=0.03%, 20=0.01%, 500=0.01%, 750=0.01%
lat (usec) : 1000=0.01%
lat (msec) : 2=0.04%, 4=0.10%, 10=0.59%, 20=13.05%, 50=12.72%
lat (msec) : 100=35.31%, 250=37.81%, >=2000=0.30%
fsync/fdatasync/sync_file_range:
sync (nsec): min=1218, max=1218, avg=1218.00, stdev= 0.00
sync percentiles (nsec):
| 1.00th=[ 1224], 5.00th=[ 1224], 10.00th=[ 1224], 20.00th=[ 1224],
| 30.00th=[ 1224], 40.00th=[ 1224], 50.00th=[ 1224], 60.00th=[ 1224],
| 70.00th=[ 1224], 80.00th=[ 1224], 90.00th=[ 1224], 95.00th=[ 1224],
| 99.00th=[ 1224], 99.50th=[ 1224], 99.90th=[ 1224], 99.95th=[ 1224],
| 99.99th=[ 1224]
cpu : usr=1.47%, sys=10.71%, ctx=67787, majf=0, minf=15
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=275MiB/s (288MB/s), 275MiB/s-275MiB/s (288MB/s-288MB/s), io=10.0GiB (10.7GB), run=37276-37276msec
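One thing to keep in mind when reading those write numbers: with --fsync=10000 and only 10240 total 1MiB writes, fio issued a single fsync for the whole run (which is why the sync stats above show exactly one sample), so this is essentially an async throughput test. If the goal were to measure sync write behavior, a variant like the following should do it (same job, just fsyncing after every write; expect much lower numbers on RAIDZ2 without a SLOG):
Code:
# Sync-heavy variant of the same job: fsync after every write instead
# of every 10000 writes.
fio --name TEST-SYNC --filename=temp.file --rw=write --size=2g \
    --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=1 \
    --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting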
mike@beastie:/metapool$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.28
Starting 1 process
Jobs: 1 (f=0): [f(1)][-.-%][r=3668MiB/s][r=3668 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=109995: Wed Sep 18 19:02:33 2024
read: IOPS=3571, BW=3572MiB/s (3745MB/s)(10.0GiB/2867msec)
slat (usec): min=230, max=925, avg=277.80, stdev=38.00
clat (usec): min=2, max=24505, avg=8587.61, stdev=1108.96
lat (usec): min=261, max=25431, avg=8865.67, stdev=1138.93
clat percentiles (usec):
| 1.00th=[ 5407], 5.00th=[ 8291], 10.00th=[ 8291], 20.00th=[ 8291],
| 30.00th=[ 8291], 40.00th=[ 8291], 50.00th=[ 8356], 60.00th=[ 8356],
| 70.00th=[ 8356], 80.00th=[ 8455], 90.00th=[10028], 95.00th=[10028],
| 99.00th=[11600], 99.50th=[11863], 99.90th=[20317], 99.95th=[22676],
| 99.99th=[24249]
bw ( MiB/s): min= 2874, max= 3730, per=99.09%, avg=3539.20, stdev=373.67, samples=5
iops : min= 2874, max= 3730, avg=3539.20, stdev=373.67, samples=5
lat (usec) : 4=0.05%, 500=0.05%, 750=0.05%, 1000=0.05%
lat (msec) : 2=0.20%, 4=0.34%, 10=84.79%, 20=14.37%, 50=0.12%
cpu : usr=0.94%, sys=99.02%, ctx=7, majf=0, minf=8206
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=3572MiB/s (3745MB/s), 3572MiB/s-3572MiB/s (3745MB/s-3745MB/s), io=10.0GiB (10.7GB), run=2867-2867msec
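Those read numbers (3.5GiB/s from five spinning disks, sys at 99%, only 7 context switches) almost certainly mean the 2GiB test file was served straight out of ARC rather than from the pool; ZFS on this version doesn't honor O_DIRECT the way fio's direct=1 expects, so the cache bypass isn't happening. One way (hedged, there are others, e.g. an export/import to flush the ARC) to get a number closer to the raw pool is to stop caching file data for the re-run and restore it afterwards:
Code:
# Cache metadata only while the read test runs, then restore the
# default. primarycache is a standard ZFS dataset property. Consider
# also exporting/importing the pool first so blocks already sitting
# in the ARC are gone.
sudo zfs set primarycache=metadata metapool
fio --name TEST --filename=temp.file --rw=read --size=2g --io_size=10g \
    --blocksize=1024k --ioengine=libaio --iodepth=32 --numjobs=1 \
    --runtime=60 --group_reporting
sudo zfs set primarycache=all metapool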
Huh, so the zpool create -o ashift=12 <poolname> uuid/sd*1 -f form of the command (setting ashift=12 when creating the pool with the drives) is supposed to make the ZFS pool faster, according to a website OTHER THAN this forum.
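For what it's worth, ashift=12 just tells ZFS to align I/O to 4KiB sectors, which matters when drives report 512-byte logical sectors but are 4KiB underneath; it cannot be changed after a vdev is created, so applying it means rebuilding the pool. A sketch of what that recreate would look like (the by-id names below are placeholders, and zpool destroy wipes the pool, so back up first):
Code:
# DESTRUCTIVE: destroys the existing pool and its data. The disk IDs
# are placeholders; substitute the real /dev/disk/by-id/ entries.
sudo zpool destroy metapool
sudo zpool create -f -o ashift=12 metapool raidz2 \
    /dev/disk/by-id/DISK1-part1 /dev/disk/by-id/DISK2-part1 \
    /dev/disk/by-id/DISK3-part1 /dev/disk/by-id/DISK4-part1 \
    /dev/disk/by-id/DISK5-part1
# Confirm the setting took:
sudo zpool get ashift metapool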