Page 4 of 14 · Results 31 to 40 of 134

Thread: Seemingly sporadic slow ZFS IO since 22.04

  1. #31
    Join Date
    Nov 2023
    Beans
    75

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by MAFoElffen View Post
    Yes. 2 days.
    I'm not patient once I've hit the button on something. For the NVMe, 2 days, yes; but the RAM is from two different eBay sellers, not Amazon, so I'm at the mercy of their dispatch attentiveness and Royal Mail.

    Quote Originally Posted by MAFoElffen View Post
    I think your calc on your max is off. Isn't 16GiB = 16000000000 bytes? I think you only set it to around 7GB...
    You're absolutely right, good spot! I was reading about it in a blog and it listed the values for various configs. So this morning when I set 16GB (or thought I was) I was referring to:

    Code:
    16GB=17179869184, 8GB=8589934592, 4GB=4294967296, 2GB=2147483648, 1GB=1073741824
    And as you have noticed:

    Code:
    options zfs zfs_arc_min=1073741824
    options zfs zfs_arc_max=7179869184
    I missed the leading 1 when I copy/pasted!

    Corrected now!

    Code:
    options zfs zfs_arc_min=1073741824
    options zfs zfs_arc_max=17179869184
    I did wonder why memory usage was lower than usual. Specifying even 7179869184 has given me the best day I've had in weeks though, so we're heading in the right direction.
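For what it's worth, those byte values are just GiB multiplied by 1024³; a tiny shell helper (mine, not from the thread) makes the conversion hard to fat-finger:

```shell
# Hypothetical helper: turn GiB into the byte value zfs_arc_max expects,
# so a dropped leading digit can't sneak into /etc/modprobe.d/zfs.conf.
gib_to_bytes() {
    echo $(( $1 * 1024 * 1024 * 1024 ))
}

echo "options zfs zfs_arc_min=$(gib_to_bytes 1)"    # 1073741824
echo "options zfs zfs_arc_max=$(gib_to_bytes 16)"   # 17179869184
```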

  2. #32
    Join Date
    Aug 2016
    Location
    Wandering
    Beans
    Hidden!
    Distro
    Xubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    OK, let's talk drives again....
    Code:
    Run status group 0 (all jobs):
      WRITE: bw=2318MiB/s (2431MB/s), 2318MiB/s-2318MiB/s (2431MB/s-2431MB/s), io=10.0GiB (10.7GB), run=4417-4417msec
    Big Difference
    Code:
    Drives:
      Local Storage: total: raw: 5.06 TiB usable: 5.02 TiB used: 13.84 GiB (0.3%)
      ID-1: /dev/nvme0n1 vendor: Samsung model: SSD 980 PRO 1TB size: 931.51 GiB
    I hope yours has a satisfactory ending as well.
    With realization of one's own potential and self-confidence in one's ability, one can build a better world.
    Dalai Lama
    Code Tags | System-info | Forum Guidelines | Arch Linux, Debian Unstable, FreeBSD

  3. #33
    Join Date
    Nov 2023
    Beans
    75

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by 1fallen View Post
    Ok lets talk Drive's again....
    Sorry, you may have mentioned this a way back in the thread, but what are your other drives? You mentioned the WD Blue for L2ARC and SLOG; are your others SSDs or HDDs? Just wondering how you're getting that result.

    Edit: LOL, I thought this was your array performance. Having re-read this morning, I've just realised it's the NVMe bench, which you did post a while back. I get sleep issues so am often up into the early hours, but that's probably not the best time to be looking at numbers.

    Gotta turn in, it's past 3am. Will pick up again tomorrow.

    Edit2: As soon as mine is here I will share the results!
    Last edited by tkae-lp; December 3rd, 2023 at 02:34 PM.

  4. #34
    Join Date
    Nov 2023
    Beans
    75

    Re: Seemingly sporadic slow ZFS IO since 22.04

    It arrived early! It's not all rainbows and sunshine, though.

    So NVMe is in:

    Code:
    zpool status
      pool: Tank
     state: ONLINE
      scan: scrub repaired 0B in 10:00:01 with 0 errors on Mon Nov 13 23:01:24 2023
    config:
    
        NAME                                                                STATE     READ WRITE CKSUM
        Tank                                                                ONLINE       0     0     0
          raidz2-0                                                          ONLINE       0     0     0
            ata-ST4000DM000-1F2168_S300XXXX                                 ONLINE       0     0     0
            ata-ST4000DM004-2CV104_ZTT4XXXX                                 ONLINE       0     0     0
            ata-ST4000DM004-2CV104_ZTT4XXXX                                 ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX                                 ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX                                 ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX                                 ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX                                 ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX                                 ONLINE       0     0     0
        logs    
          nvme-Samsung_SSD_980_PRO_with_Heatsink_2TB_S6WRNS0W53XXXX-part2  ONLINE       0     0     0
        cache
          nvme-Samsung_SSD_980_PRO_with_Heatsink_2TB_S6WRNS0W53XXXX-part1  ONLINE       0     0     0
    
    errors: No known data errors
    But there still seems to be quite a lot of fluctuation, and performance is not as good as I would have expected. Movies are snappier, though:

    Code:
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][35.0%][w=498MiB/s][w=498 IOPS][eta 00m:13s]
    Jobs: 1 (f=1): [W(1)][65.0%][w=345MiB/s][w=345 IOPS][eta 00m:07s] 
    Jobs: 1 (f=1): [W(1)][79.2%][w=492MiB/s][w=492 IOPS][eta 00m:05s] 
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                        
    Jobs: 1 (f=0): [f(1)][100.0%][w=271MiB/s][w=271 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=266993: Mon Dec  4 02:49:09 2023
      write: IOPS=397, BW=397MiB/s (416MB/s)(10.0GiB/25784msec); 0 zone resets
        slat (usec): min=248, max=17985, avg=2188.59, stdev=1462.81
        clat (usec): min=4, max=3376.2k, avg=77616.88, stdev=187185.64
         lat (usec): min=1640, max=3377.8k, avg=79806.00, stdev=187507.94
        clat percentiles (msec):
         |  1.00th=[   10],  5.00th=[   15], 10.00th=[   18], 20.00th=[   48],
         | 30.00th=[   49], 40.00th=[   52], 50.00th=[   52], 60.00th=[   56],
         | 70.00th=[   72], 80.00th=[   96], 90.00th=[  136], 95.00th=[  169],
         | 99.00th=[  220], 99.50th=[  236], 99.90th=[ 3373], 99.95th=[ 3373],
         | 99.99th=[ 3373]
       bw (  KiB/s): min=137216, max=2140160, per=100.00%, avg=460390.40, stdev=350853.49, samples=45
       iops        : min=  134, max= 2090, avg=449.60, stdev=342.63, samples=45
      lat (usec)   : 10=0.05%
      lat (msec)   : 2=0.02%, 4=0.05%, 10=3.47%, 20=12.74%, 50=18.92%
      lat (msec)   : 100=45.32%, 250=19.13%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1080, max=1080, avg=1080.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1080],  5.00th=[ 1080], 10.00th=[ 1080], 20.00th=[ 1080],
         | 30.00th=[ 1080], 40.00th=[ 1080], 50.00th=[ 1080], 60.00th=[ 1080],
         | 70.00th=[ 1080], 80.00th=[ 1080], 90.00th=[ 1080], 95.00th=[ 1080],
         | 99.00th=[ 1080], 99.50th=[ 1080], 99.90th=[ 1080], 99.95th=[ 1080],
         | 99.99th=[ 1080]
      cpu          : usr=2.18%, sys=26.02%, ctx=71826, majf=0, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=397MiB/s (416MB/s), 397MiB/s-397MiB/s (416MB/s-416MB/s), io=10.0GiB (10.7GB), run=25784-25784msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=0): [f(1)][-.-%][r=3937MiB/s][r=3936 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=307629: Mon Dec  4 02:49:30 2023
      read: IOPS=3826, BW=3827MiB/s (4012MB/s)(10.0GiB/2676msec)
        slat (usec): min=174, max=1009, avg=258.50, stdev=145.75
        clat (usec): min=2, max=26948, avg=7982.63, stdev=4280.07
         lat (usec): min=181, max=27951, avg=8241.47, stdev=4416.80
        clat percentiles (usec):
         |  1.00th=[ 3818],  5.00th=[ 5800], 10.00th=[ 5800], 20.00th=[ 5866],
         | 30.00th=[ 5997], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194],
         | 70.00th=[ 6259], 80.00th=[10814], 90.00th=[12256], 95.00th=[19530],
         | 99.00th=[22414], 99.50th=[22676], 99.90th=[22938], 99.95th=[24773],
         | 99.99th=[26608]
       bw (  MiB/s): min= 1754, max= 4748, per=97.17%, avg=3718.40, stdev=1353.35, samples=5
       iops        : min= 1754, max= 4748, avg=3718.40, stdev=1353.35, samples=5
      lat (usec)   : 4=0.05%, 250=0.05%, 500=0.05%, 750=0.08%, 1000=0.06%
      lat (msec)   : 2=0.24%, 4=0.52%, 10=77.15%, 20=17.30%, 50=4.50%
      cpu          : usr=1.31%, sys=88.07%, ctx=1538, majf=0, minf=8203
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=3827MiB/s (4012MB/s), 3827MiB/s-3827MiB/s (4012MB/s-4012MB/s), io=10.0GiB (10.7GB), run=2676-2676msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][53.8%][w=475MiB/s][w=475 IOPS][eta 00m:06s]
    Jobs: 1 (f=1): [W(1)][76.5%][w=137MiB/s][w=137 IOPS][eta 00m:04s] 
    Jobs: 1 (f=1): [W(1)][90.5%][w=410MiB/s][w=410 IOPS][eta 00m:02s] 
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                        
    TEST: (groupid=0, jobs=1): err= 0: pid=334563: Mon Dec  4 02:50:04 2023
      write: IOPS=456, BW=457MiB/s (479MB/s)(10.0GiB/22429msec); 0 zone resets
        slat (usec): min=227, max=111811, avg=1994.58, stdev=2098.70
        clat (usec): min=3, max=2008.2k, avg=67553.89, stdev=118485.07
         lat (usec): min=284, max=2009.9k, avg=69548.96, stdev=119234.16
        clat percentiles (msec):
         |  1.00th=[    9],  5.00th=[    9], 10.00th=[   12], 20.00th=[   16],
         | 30.00th=[   47], 40.00th=[   48], 50.00th=[   53], 60.00th=[   58],
         | 70.00th=[   63], 80.00th=[   75], 90.00th=[  113], 95.00th=[  184],
         | 99.00th=[  309], 99.50th=[  338], 99.90th=[ 1989], 99.95th=[ 2005],
         | 99.99th=[ 2005]
       bw (  KiB/s): min=92160, max=2594816, per=100.00%, avg=497963.71, stdev=447542.73, samples=41
       iops        : min=   90, max= 2534, avg=486.29, stdev=437.05, samples=41
      lat (usec)   : 4=0.02%, 10=0.03%, 500=0.01%, 750=0.02%, 1000=0.01%
      lat (msec)   : 2=0.07%, 4=0.16%, 10=7.78%, 20=14.80%, 50=20.62%
      lat (msec)   : 100=44.93%, 250=9.29%, 500=1.96%, 2000=0.23%, >=2000=0.07%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1320, max=1320, avg=1320.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1320],  5.00th=[ 1320], 10.00th=[ 1320], 20.00th=[ 1320],
         | 30.00th=[ 1320], 40.00th=[ 1320], 50.00th=[ 1320], 60.00th=[ 1320],
         | 70.00th=[ 1320], 80.00th=[ 1320], 90.00th=[ 1320], 95.00th=[ 1320],
         | 99.00th=[ 1320], 99.50th=[ 1320], 99.90th=[ 1320], 99.95th=[ 1320],
         | 99.99th=[ 1320]
      cpu          : usr=2.56%, sys=23.95%, ctx=66043, majf=0, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=457MiB/s (479MB/s), 457MiB/s-457MiB/s (479MB/s-479MB/s), io=10.0GiB (10.7GB), run=22429-22429msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [R(1)][-.-%][r=2752MiB/s][r=2752 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=417205: Mon Dec  4 02:50:13 2023
      read: IOPS=2721, BW=2722MiB/s (2854MB/s)(10.0GiB/3762msec)
        slat (usec): min=308, max=1025, avg=364.47, stdev=32.71
        clat (usec): min=3, max=27352, avg=11278.54, stdev=1011.09
         lat (usec): min=360, max=28208, avg=11643.38, stdev=1026.37
        clat percentiles (usec):
         |  1.00th=[ 7242],  5.00th=[11207], 10.00th=[11207], 20.00th=[11207],
         | 30.00th=[11207], 40.00th=[11207], 50.00th=[11207], 60.00th=[11338],
         | 70.00th=[11338], 80.00th=[11338], 90.00th=[11731], 95.00th=[11731],
         | 99.00th=[11994], 99.50th=[12780], 99.90th=[22938], 99.95th=[25035],
         | 99.99th=[26870]
       bw (  MiB/s): min= 2484, max= 2754, per=99.69%, avg=2713.43, stdev=101.18, samples=7
       iops        : min= 2484, max= 2754, avg=2713.43, stdev=101.18, samples=7
      lat (usec)   : 4=0.05%, 500=0.05%, 750=0.04%, 1000=0.01%
      lat (msec)   : 2=0.15%, 4=0.26%, 10=0.81%, 20=98.47%, 50=0.17%
      cpu          : usr=1.25%, sys=98.64%, ctx=11, majf=0, minf=8203
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=2722MiB/s (2854MB/s), 2722MiB/s-2722MiB/s (2854MB/s-2854MB/s), io=10.0GiB (10.7GB), run=3762-3762msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][50.0%][w=344MiB/s][w=344 IOPS][eta 00m:07s]
    Jobs: 1 (f=1): [W(1)][81.2%][w=329MiB/s][w=329 IOPS][eta 00m:03s] 
    Jobs: 1 (f=1): [W(1)][100.0%][w=408MiB/s][w=408 IOPS][eta 00m:00s]
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s] 
    TEST: (groupid=0, jobs=1): err= 0: pid=417553: Mon Dec  4 02:50:40 2023
      write: IOPS=477, BW=478MiB/s (501MB/s)(10.0GiB/21442msec); 0 zone resets
        slat (usec): min=228, max=5360, avg=1811.29, stdev=960.33
        clat (usec): min=3, max=2891.3k, avg=64485.53, stdev=157705.98
         lat (usec): min=396, max=2892.8k, avg=66297.42, stdev=157867.23
        clat percentiles (msec):
         |  1.00th=[    9],  5.00th=[   12], 10.00th=[   15], 20.00th=[   20],
         | 30.00th=[   48], 40.00th=[   52], 50.00th=[   58], 60.00th=[   66],
         | 70.00th=[   71], 80.00th=[   81], 90.00th=[   91], 95.00th=[  103],
         | 99.00th=[  140], 99.50th=[  148], 99.90th=[ 2869], 99.95th=[ 2903],
         | 99.99th=[ 2903]
       bw (  KiB/s): min=16384, max=2220032, per=100.00%, avg=537276.63, stdev=392435.49, samples=38
       iops        : min=   16, max= 2168, avg=524.68, stdev=383.24, samples=38
      lat (usec)   : 4=0.01%, 10=0.04%, 500=0.01%, 750=0.02%, 1000=0.02%
      lat (msec)   : 2=0.05%, 4=0.14%, 10=2.79%, 20=18.78%, 50=16.38%
      lat (msec)   : 100=56.35%, 250=5.12%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=791, max=791, avg=791.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[  788],  5.00th=[  788], 10.00th=[  788], 20.00th=[  788],
         | 30.00th=[  788], 40.00th=[  788], 50.00th=[  788], 60.00th=[  788],
         | 70.00th=[  788], 80.00th=[  788], 90.00th=[  788], 95.00th=[  788],
         | 99.00th=[  788], 99.50th=[  788], 99.90th=[  788], 99.95th=[  788],
         | 99.99th=[  788]
      cpu          : usr=2.61%, sys=28.97%, ctx=65408, majf=0, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=478MiB/s (501MB/s), 478MiB/s-478MiB/s (501MB/s-501MB/s), io=10.0GiB (10.7GB), run=21442-21442msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=0): [f(1)][-.-%][r=2236MiB/s][r=2236 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=504277: Mon Dec  4 02:50:52 2023
      read: IOPS=2809, BW=2809MiB/s (2946MB/s)(10.0GiB/3645msec)
        slat (usec): min=301, max=826, avg=353.16, stdev=27.27
        clat (usec): min=3, max=24458, avg=10931.36, stdev=948.03
         lat (usec): min=347, max=25285, avg=11284.82, stdev=960.59
        clat percentiles (usec):
         |  1.00th=[ 7046],  5.00th=[10814], 10.00th=[10814], 20.00th=[10945],
         | 30.00th=[10945], 40.00th=[10945], 50.00th=[10945], 60.00th=[10945],
         | 70.00th=[10945], 80.00th=[10945], 90.00th=[11338], 95.00th=[11469],
         | 99.00th=[12125], 99.50th=[12649], 99.90th=[20579], 99.95th=[22676],
         | 99.99th=[23987]
       bw (  MiB/s): min= 2588, max= 2842, per=99.70%, avg=2800.86, stdev=94.09, samples=7
       iops        : min= 2588, max= 2842, avg=2800.86, stdev=94.09, samples=7
      lat (usec)   : 4=0.05%, 500=0.05%, 750=0.05%
      lat (msec)   : 2=0.15%, 4=0.29%, 10=0.83%, 20=98.47%, 50=0.12%
      cpu          : usr=0.85%, sys=99.12%, ctx=13, majf=0, minf=8204
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=2809MiB/s (2946MB/s), 2809MiB/s-2809MiB/s (2946MB/s-2946MB/s), io=10.0GiB (10.7GB), run=3645-3645msec
    --------------------------------------------------------------------------------
    /mnt/Tank #
    I think I'm going to replace the SATA cables next. A friend has some premium Corsair ones he isn't using from an 8-bay. I can use those for the array.

    It still doesn't look like the underlying problem has gone away.

    I shouldn't be getting a quarter of the write speed 1fallen gets with a 980 Pro when he's using a WD Blue. Unless I have missed something here?

    Did all 10 SATA cables suddenly fail when I moved to Jammy? Nah. This is just naff. I'm so tempted to move on to Focal or back to Bionic just to test, when I get a couple of hours.
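Before swapping the cables, it's worth checking what the kernel actually negotiated per link; a marginal cable usually renegotiates down to 3.0 or 1.5 Gbps rather than failing outright. A sketch (the `link_speed` helper is mine, for illustration):

```shell
# What did each SATA link come up at? (run on the affected box)
#   dmesg | grep -i 'SATA link up'
#   sudo smartctl -i /dev/sda | grep 'SATA Version'
# Tiny helper to pull the figure out of a dmesg line:
link_speed() {
    echo "$1" | grep -o '[0-9.]\+ Gbps'
}

link_speed "ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)"   # 6.0 Gbps
```

Any drive reporting less than 6.0 Gbps here would point straight at a cable or port, no OS reinstall needed.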

    Edit: you guys are both on Noble, even Noble testing sounds good at this point.

    Edit2: Sorry, I'm just totally frustrated at this point. I've never had a single issue with ZFS on Ubuntu until Jammy. I feel like the NVMe has only helped mask the problem.
    Last edited by tkae-lp; December 4th, 2023 at 04:17 AM.

  5. #35
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Sorry, have been frustrated myself with Launchpad, related to reported ZFS Issues, and bug reports.

    I felt sorry for other users affected and created this, so I can start documenting things for supporting users:
    https://github.com/Mafoelffen1/OpenZFS-Ubuntu-Admin

    You readjusted your ARC max setting to half your new RAM now, and redid your update-initramfs again, right? Just making sure that isn't still at 16GB...
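For anyone following the same steps, the usual sequence is roughly this (a sketch; the 64 GiB value assumes the planned 128 GB of RAM and the half-of-RAM rule of thumb used in this thread):

```shell
# /etc/modprobe.d/zfs.conf -- 1 GiB min, 64 GiB max (half of 128 GB installed)
options zfs zfs_arc_min=1073741824
options zfs zfs_arc_max=68719476736

# Then, to apply:
#   sudo update-initramfs -u     # bake the new options into the initramfs
#   sudo reboot
#   arc_summary | head -n 30     # confirm the limits the ARC actually took
```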

    Well heck. Dumping the caches and freeing up memory had helped, so that sort of pointed to increasing memory and adding caches.

    I think what may help, and what this is pointing to now, is the need to audit what else is going on when that slowdown occurs. If the caches and memory are not the answer, then something else is dragging it down intermittently.

    That is the "key", and what is making that hard to pin down. It's an intermittent problem.
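One low-effort way to audit an intermittent stall is to leave per-vdev latency logging running with timestamps, so a slowdown can later be matched to a specific drive and to whatever else was running. A sketch (pool name from this thread; the `stamp` helper and log path are mine):

```shell
# Prefix each line of a long-running command with a timestamp.
stamp() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%F %T')" "$line"
    done
}

# Per-vdev latency every 5 s, timestamped, into a log to review after a stall:
#   zpool iostat -vly Tank 5 | stamp >> /var/log/tank-iostat.log &
# In parallel, watch for competing writers:  pidstat -d 5  or  iotop -oPa

echo "demo line" | stamp
```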

    EDIT: Diving deeper to confirm whether the SATA ports on that board are SATA III, and the drive specs. Something... I don't see what is going on there.
    Last edited by MAFoElffen; December 4th, 2023 at 05:44 AM.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  6. #36
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Dang. LMFAO!!! I could swear you posted a link to your MB's manual. I looked for what seemed like forever... I found it in the greyed-out text in that one post you edited...

    Then... My head was swimming trying to figure out all the variations and conditions of that manual. I feel your pain, and what you meant by the memory caveats. The storage section is the same!
    Storage • 10 x SATA3 6.0 Gb/s Connectors, support RAID (RAID
    0, RAID 1, RAID 5, RAID 10 and Intel Rapid Storage 13),
    NCQ, AHCI, Hot Plug and ASRock HDD Saver Technology
    (S_SATA3_3 connector is shared with the eSATA port)
    (S_SATA3_2 connector is shared with Ultra M.2 Socket)
    * RAID is supported on SATA3_0 ~ SATA3_5 ports only.
    • 1 x eSATA Connector, supports NCQ, AHCI and Hot Plug
    • 1 x Ultra M.2 Socket, supports M.2 SATA3 6.0 Gb/s module
    and M.2 PCI Express module up to Gen3 x4 (32 Gb/s)
    Your HDDs are all SATA III 6 Gb/s. Your SATA ports are all SATA III 6 Gb/s.

    It has 10 SATA ports, but... S_SATA3_3 is shared with the eSATA port, and S_SATA3_2 is shared with the M.2 Ultra port... Being shared on that bus usually means the shared M.2 port only supports 6 Gb/s SATA, not an M.2 PCIe card. BUT elsewhere:
    The Ultra M.2 Socket (M2_1) can accommodate either a M.2 SATA3 6.0 Gb/s module or a M.2 PCI
    Express module up to Gen3 x4 (32 Gb/s).

    Please be noted that the Ultra M.2 Socket (M2_1) is shared with the S_SATA3_2 connector; you can only choose either the Ultra M.2 Socket
    (M2_1) or the S_SATA3_2 connector to use.

    * If M.2 PCI Express module is installed, PCIE3 will be disabled.
    So if I am reading that right... If you use M.2 slot one, it disables both the SATA port S_SATA3_2 & PCIe slot 3?

    Then out of 6 PCIe x16 slots, 2 are x16 lane, 3 are x8 lane, and 1 is x4 lane?

    Wow. I typed that all out and my head is still spinning. LOL


  7. #37
    Join Date
    Nov 2023
    Beans
    75

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by MAFoElffen View Post
    My head was swimming trying to figure out all the variations and conditions of that manual. I feel your pain, and what you meant by the memory caveats. The storage section is the same!
    Yeah, it's not been fun getting through that manual, lol. Good spot on that SATA port getting disabled; I didn't even notice it! I only noticed that PCIe slot 3 would be disabled. On page 22 it also says that if I use the eSATA at the rear, I lose S_SATA3_3:

    Code:
    These ten SATA3 connectors support SATA data cables for internal storage devices with up to 6.0 Gb/s data transfer rate. If the eSATA port on the rear I/O has been connected, the internal S_SATA3_3 will not function. If the Ultra M.2 Socket has been occupied, the internal S_SATA3_2 will not function.

    I suppose there are a LOT of ports... they can only accommodate so much for a £200 mobo ;D What I don't understand is: with the NVMe in place, S_SATA3_2 doesn't appear to be disabled?! I mean, my entire array is still there:


    Code:
      pool: Tank
     state: ONLINE
      scan: scrub repaired 0B in 10:00:01 with 0 errors on Mon Nov 13 23:01:24 2023
    config:
    
        NAME                                                                STATE     READ WRITE CKSUM
        Tank                                                                ONLINE       0     0     0
          raidz2-0                                                          ONLINE       0     0     0
            ata-ST4000DM000-1F2168_S300XXXX                                 ONLINE       0     0     0
            ata-ST4000DM004-2CV104_ZTT4XXXX                                 ONLINE       0     0     0
            ata-ST4000DM004-2CV104_ZTT4XXXX                                 ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX                                 ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX                                 ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX                                 ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX                                 ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX                                 ONLINE       0     0     0
        logs    
          nvme-Samsung_SSD_980_PRO_with_Heatsink_2TB_S6WRNS0W538XXXX-part2  ONLINE       0     0     0
        cache
          nvme-Samsung_SSD_980_PRO_with_Heatsink_2TB_S6WRNS0W538XXXX-part1  ONLINE       0     0     0
    I show 11 drives connected:



    Perhaps a firmware update since the manual was printed allows them to run at a reduced rate? Regardless, let's not leave the NVMe in that slot.

    The manual says:

    Code:
    PCIE1 (PCIe 3.0 x16 slot) is used for PCI Express x16 lane width graphics cards.
    PCIE2 (PCIe 3.0 x16 slot) is used for PCI Express x8 lane width graphics cards.
    PCIE3 (PCIe 3.0 x16 slot) is used for PCI Express x8 lane width graphics cards.
    PCIE4 (PCIe 3.0 x16 slot) is used for PCI Express x16 lane width graphics cards.
    PCIE5 (PCIe 2.0 x16 slot) is used for PCI Express x4 lane width graphics cards.
    PCIE6 (PCIe 3.0 x16 slot) is used for PCI Express x8 lane width graphics cards.
    This slot is free: PCIE4 (PCIe 3.0 x16 slot) is used for PCI Express x16 lane width graphics cards. (See Roadmap below)


    No, I haven't adjusted my ARC yet, because I don't actually have any of the RAM; it's all been dispatched and I'm just waiting for it. I'm still on 32GB. Hopefully by the end of this week I'll be on 128GB.

    So here's my sort of roadmap thinking before we dig deeper:

    1. Grab those SATA cables from my mate; eliminate aged copper wiring
    2. Wait for the RAM to arrive, max it out, set ARC to 64GB
    3. Grab this: https://amzn.eu/d/he4RABR and move the NVMe to Slot 4 - I know x16 is way overkill, but let's eliminate as much as we can.

    The NVMe-to-PCIe adapter will arrive tomorrow. I'll remove the L2ARC and SLOG from the array and test just the speed of the Samsung 980 Pro in the M.2 Ultra slot where it is now, then test again once it's moved to PCIe slot 4.
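Removing cache and log vdevs is non-destructive and can be done live; a sketch using the device names from the zpool status above (serials redacted with XXXX as in the thread):

```shell
# Detach the L2ARC (cache) and SLOG (log) so the bench hits only the raidz2 vdev:
zpool remove Tank nvme-Samsung_SSD_980_PRO_with_Heatsink_2TB_S6WRNS0W538XXXX-part1   # cache
zpool remove Tank nvme-Samsung_SSD_980_PRO_with_Heatsink_2TB_S6WRNS0W538XXXX-part2   # log
zpool status Tank    # the "logs" and "cache" sections should be gone

# They can be re-added afterwards with:
#   zpool add Tank log   <log-partition>
#   zpool add Tank cache <cache-partition>
```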

    Quote Originally Posted by MAFoElffen View Post
    Wow. I typed that all out and my head is still spinning. LOL
    Mine too, this whole thing has been a bit of a 'deep end' learning curve for me, but it's been fun (except that manual! lol) and I have learned a lot.
    Last edited by tkae-lp; December 4th, 2023 at 09:10 PM. Reason: Missing Amazon link, doh! Typos.

  8. #38
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Both my servers have PCIe quad M.2 adapters with onboard bifurcation... This is the card I use for that (and am very happy with):
    M.2 Key SSD Expansion Card ANM24PE16 4-Port PCIe3.0 X16 with PLX8748 Controller. On my server, I have a 4x NVMe RAIDZ2 array on that card.

    On one of them, my server, I also have two of these... https://www.amazon.com/ACTIMED-Power...09FLGR1X9?th=1 That one is only x1, because I had two x1 PCIe slots open on it. That is actually where I have my disk caches for my 5x SSD RAIDZ2 array. There are x4-lane cards that would have been faster, but I liked that both came with heat sinks.


  9. #39
    Join Date
    Nov 2023
    Beans
    75

    Re: Seemingly sporadic slow ZFS IO since 22.04

    I've saved that first one you linked in my Ali cart, but the second link doesn't work.

    If I understand it correctly, the main advantage of that ANM24PE16 card is that it allows multiple NVMes even if your motherboard doesn't support PCIe bifurcation?

    Do you think what I grabbed on Amazon is total rubbish? (Genuinely interested in your opinion!) My thinking was that for a single NVMe it's a straight-through connection, so it wouldn't matter: there's nothing to go wrong and no traffic from more than one NVMe to manage. There are no components to get hot either; the components that appear to get hot are on the cards that don't require your mobo to have bifurcation, because they're doing the work on board. E.g. if I look at something like this: https://amzn.eu/d/gP2jhjR it clearly states bifurcation support is needed but doesn't have a heat sink, because there's not much to it. Even on something more expensive like this: https://amzn.eu/d/isaN3WK the cooling seems to be for the NVMes themselves, not the device circuitry (presumably because it's not dealing with the bifurcation).

    In any case, I checked my mobo manual and I do not have support for bifurcation (at least, a search shows the word doesn't appear anywhere in it). So unless you think what I've ordered is utter crap, I'll see how it goes for now and maybe grab one of those cards you linked on Ali in the new year for future expansions.
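    Once the passive adapter arrives, one way to sanity-check it is to confirm the drive negotiated its full link width rather than falling back to x1. A sketch (assumes `lspci` from pciutils; the device address lookup is an example and `link_width` is my own helper, not a standard command):

    ```shell
    #!/usr/bin/env bash
    # Extract the negotiated PCIe width (the N in "Width xN") from an
    # lspci "LnkSta:" line read on stdin.
    link_width() {
      grep -oP 'Width x\K[0-9]+'
    }

    # Example usage (needs root to see LnkSta; awk picks the first NVMe):
    #   sudo lspci -vvs "$(lspci | awk '/Non-Volatile memory/{print $1; exit}')" \
    #       | grep 'LnkSta:' | link_width
    # A straight-through x4 adapter should report 4, not 1.
    ```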
    Last edited by tkae-lp; December 4th, 2023 at 09:13 PM.

  10. #40
    Join Date
    Nov 2023
    Beans
    75

    Re: Seemingly sporadic slow ZFS IO since 22.04

    UPGRADE TIME!




    Here are the benchmarks for just the NVMe in the mobo's M.2 slot, ext4 filesystem:

    Code:
    /mnt/Tank # cd /mnt/speedtest && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting 
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    TEST: Laying out IO file (1 file / 2048MiB)
    Jobs: 1 (f=1): [W(1)][-.-%][w=3236MiB/s][w=3236 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=3475938: Tue Dec  5 17:10:43 2023
      write: IOPS=3217, BW=3217MiB/s (3373MB/s)(10.0GiB/3183msec); 0 zone resets
        slat (usec): min=32, max=229, avg=83.12, stdev=18.04
        clat (usec): min=2023, max=25077, avg=9835.94, stdev=1441.38
         lat (usec): min=2090, max=25185, avg=9919.39, stdev=1441.26
        clat percentiles (usec):
         |  1.00th=[ 4146],  5.00th=[ 9503], 10.00th=[ 9503], 20.00th=[ 9503],
         | 30.00th=[ 9503], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9634],
         | 70.00th=[10028], 80.00th=[10159], 90.00th=[10159], 95.00th=[10421],
         | 99.00th=[14484], 99.50th=[17695], 99.90th=[23462], 99.95th=[24249],
         | 99.99th=[24773]
       bw (  MiB/s): min= 3204, max= 3242, per=100.00%, avg=3223.67, stdev=16.17, samples=6
       iops        : min= 3204, max= 3242, avg=3223.67, stdev=16.17, samples=6
      lat (msec)   : 4=0.97%, 10=66.02%, 20=32.73%, 50=0.28%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=754, max=754, avg=754.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[  756],  5.00th=[  756], 10.00th=[  756], 20.00th=[  756],
         | 30.00th=[  756], 40.00th=[  756], 50.00th=[  756], 60.00th=[  756],
         | 70.00th=[  756], 80.00th=[  756], 90.00th=[  756], 95.00th=[  756],
         | 99.00th=[  756], 99.50th=[  756], 99.90th=[  756], 99.95th=[  756],
         | 99.99th=[  756]
      cpu          : usr=15.24%, sys=13.89%, ctx=10212, majf=0, minf=11
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=3217MiB/s (3373MB/s), 3217MiB/s-3217MiB/s (3373MB/s-3373MB/s), io=10.0GiB (10.7GB), run=3183-3183msec
    
    Disk stats (read/write):
      nvme0n1: ios=0/24092, merge=0/50, ticks=0/232564, in_queue=232578, util=96.88%
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    /mnt/speedtest # cd /mnt/speedtest && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][-.-%][w=3226MiB/s][w=3226 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=3476169: Tue Dec  5 17:10:51 2023
      write: IOPS=3217, BW=3217MiB/s (3373MB/s)(10.0GiB/3183msec); 0 zone resets
        slat (usec): min=28, max=6077, avg=85.47, stdev=63.53
        clat (usec): min=1909, max=29194, avg=9831.99, stdev=1622.54
         lat (usec): min=1963, max=29289, avg=9917.81, stdev=1621.38
        clat percentiles (usec):
         |  1.00th=[ 4047],  5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9503],
         | 30.00th=[ 9503], 40.00th=[ 9503], 50.00th=[ 9503], 60.00th=[ 9634],
         | 70.00th=[10028], 80.00th=[10028], 90.00th=[10159], 95.00th=[10290],
         | 99.00th=[15401], 99.50th=[17957], 99.90th=[27395], 99.95th=[28181],
         | 99.99th=[28967]
       bw (  MiB/s): min= 3170, max= 3254, per=100.00%, avg=3227.67, stdev=34.00, samples=6
       iops        : min= 3170, max= 3254, avg=3227.67, stdev=34.00, samples=6
      lat (msec)   : 2=0.10%, 4=0.89%, 10=66.72%, 20=31.99%, 50=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=794, max=794, avg=794.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[  796],  5.00th=[  796], 10.00th=[  796], 20.00th=[  796],
         | 30.00th=[  796], 40.00th=[  796], 50.00th=[  796], 60.00th=[  796],
         | 70.00th=[  796], 80.00th=[  796], 90.00th=[  796], 95.00th=[  796],
         | 99.00th=[  796], 99.50th=[  796], 99.90th=[  796], 99.95th=[  796],
         | 99.99th=[  796]
      cpu          : usr=15.24%, sys=14.36%, ctx=10215, majf=0, minf=13
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=3217MiB/s (3373MB/s), 3217MiB/s-3217MiB/s (3373MB/s-3373MB/s), io=10.0GiB (10.7GB), run=3183-3183msec
    
    Disk stats (read/write):
      nvme0n1: ios=0/21901, merge=0/8, ticks=0/210619, in_queue=210632, util=96.82%
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    /mnt/speedtest # cd /mnt/speedtest && fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [R(1)][-.-%][r=3399MiB/s][r=3399 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=3476732: Tue Dec  5 17:11:09 2023
      read: IOPS=3388, BW=3388MiB/s (3553MB/s)(10.0GiB/3022msec)
        slat (usec): min=27, max=437, avg=67.48, stdev=22.48
        clat (usec): min=1016, max=15803, avg=9337.20, stdev=1130.69
         lat (usec): min=1057, max=15871, avg=9405.05, stdev=1128.62
        clat percentiles (usec):
         |  1.00th=[ 6587],  5.00th=[ 7701], 10.00th=[ 8029], 20.00th=[ 8586],
         | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634],
         | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10683], 95.00th=[11076],
         | 99.00th=[11994], 99.50th=[12649], 99.90th=[14484], 99.95th=[15270],
         | 99.99th=[15664]
       bw (  MiB/s): min= 3340, max= 3408, per=100.00%, avg=3389.33, stdev=25.13, samples=6
       iops        : min= 3340, max= 3408, avg=3389.33, stdev=25.13, samples=6
      lat (msec)   : 2=0.11%, 4=0.21%, 10=75.05%, 20=24.64%
      cpu          : usr=2.25%, sys=27.38%, ctx=8986, majf=0, minf=8202
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=3388MiB/s (3553MB/s), 3388MiB/s-3388MiB/s (3553MB/s-3553MB/s), io=10.0GiB (10.7GB), run=3022-3022msec
    
    Disk stats (read/write):
      nvme0n1: ios=19615/746, merge=0/5, ticks=143388/147, in_queue=143546, util=97.39%
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    /mnt/speedtest # cd /mnt/speedtest && fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [R(1)][-.-%][r=3401MiB/s][r=3401 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=3476925: Tue Dec  5 17:11:15 2023
      read: IOPS=3384, BW=3384MiB/s (3548MB/s)(10.0GiB/3026msec)
        slat (usec): min=29, max=462, avg=67.94, stdev=24.57
        clat (usec): min=1565, max=15506, avg=9349.91, stdev=1144.48
         lat (usec): min=1631, max=15563, avg=9418.21, stdev=1142.15
        clat percentiles (usec):
         |  1.00th=[ 6390],  5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8586],
         | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634],
         | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10683], 95.00th=[11076],
         | 99.00th=[12125], 99.50th=[13435], 99.90th=[15008], 99.95th=[15270],
         | 99.99th=[15270]
       bw (  MiB/s): min= 3344, max= 3416, per=100.00%, avg=3384.67, stdev=29.25, samples=6
       iops        : min= 3344, max= 3416, avg=3384.67, stdev=29.25, samples=6
      lat (msec)   : 2=0.06%, 4=0.13%, 10=75.18%, 20=24.64%
      cpu          : usr=2.18%, sys=27.50%, ctx=8954, majf=0, minf=8203
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=3384MiB/s (3548MB/s), 3384MiB/s-3384MiB/s (3548MB/s-3548MB/s), io=10.0GiB (10.7GB), run=3026-3026msec
    
    Disk stats (read/write):
      nvme0n1: ios=20818/738, merge=0/0, ticks=150824/124, in_queue=150949, util=97.39%
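    For anyone wanting to repeat these runs: the write and read passes above differ only in `--rw`, so they can be wrapped in one helper (a sketch; assumes fio is installed and you run it from the mount under test, `/mnt/speedtest` in my case):

    ```shell
    #!/usr/bin/env bash
    # Build the fio command line used in the runs above for a given mode.
    # Only --rw changes between the write and read passes.
    fio_cmd() {
      local mode=$1   # "write" or "read"
      printf '%s' "fio --name TEST --eta-newline=5s --filename=temp.file \
    --rw=$mode --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio \
    --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 \
    --group_reporting"
    }

    # To repeat the benchmark (from the filesystem being tested):
    #   cd /mnt/speedtest && eval "$(fio_cmd write)" && eval "$(fio_cmd read)"
    #   rm -f temp.file   # clean up the 2 GiB test file afterwards
    ```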
    See you in a bit.....
