
Thread: Seemingly sporadic slow ZFS IO since 22.04

  1. #71
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    I don't see anything jumping out at me.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  2. #72
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by MAFoElffen View Post
    I don't see anything jumping out at me.
    Thanks, that's reassuring. I'll report back once I get this new SSD installed.

  3. #73
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

OK, re-installation and setup are complete. The only thing I haven't done is Ksplice. I'm going to see if the problem comes back, and if it doesn't, install Ksplice so I can rule that in or out at the same time.

Two reads and two writes, run one after another.

    Code:
    cd /mnt/Tank/ && fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=0): [f(1)][100.0%][r=2986MiB/s][r=2986 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=510970: Fri Dec 15 19:19:08 2023
      read: IOPS=1490, BW=1491MiB/s (1563MB/s)(10.0GiB/6869msec)
        slat (usec): min=281, max=166664, avg=667.62, stdev=3297.33
        clat (usec): min=2, max=284276, avg=20265.78, stdev=25931.44
         lat (usec): min=326, max=285159, avg=20933.79, stdev=26638.64
        clat percentiles (msec):
         |  1.00th=[    8],  5.00th=[   11], 10.00th=[   11], 20.00th=[   11],
         | 30.00th=[   11], 40.00th=[   11], 50.00th=[   11], 60.00th=[   11],
         | 70.00th=[   11], 80.00th=[   12], 90.00th=[   52], 95.00th=[   65],
         | 99.00th=[  129], 99.50th=[  180], 99.90th=[  232], 99.95th=[  279],
         | 99.99th=[  284]
       bw (  MiB/s): min=  266, max= 2960, per=94.21%, avg=1404.46, stdev=1213.29, samples=13
       iops        : min=  266, max= 2960, avg=1404.46, stdev=1213.29, samples=13
      lat (usec)   : 4=0.05%, 500=0.05%, 750=0.05%, 1000=0.02%
      lat (msec)   : 2=0.13%, 4=0.23%, 10=0.75%, 20=79.90%, 50=7.38%
      lat (msec)   : 100=9.35%, 250=2.00%, 500=0.09%
      cpu          : usr=0.57%, sys=60.91%, ctx=531, majf=0, minf=8205
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=1491MiB/s (1563MB/s), 1491MiB/s-1491MiB/s (1563MB/s-1563MB/s), io=10.0GiB (10.7GB), run=6869-6869msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank/ && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                          
    Jobs: 1 (f=0): [f(1)][100.0%][w=271MiB/s][w=271 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=526631: Fri Dec 15 19:19:34 2023
      write: IOPS=1182, BW=1182MiB/s (1240MB/s)(10.0GiB/8662msec); 0 zone resets
        slat (usec): min=199, max=2888, avg=554.19, stdev=474.42
        clat (usec): min=3, max=3014.4k, avg=26122.23, stdev=163675.32
         lat (usec): min=244, max=3016.6k, avg=26676.90, stdev=163815.68
        clat percentiles (msec):
         |  1.00th=[    8],  5.00th=[    8], 10.00th=[    8], 20.00th=[    8],
         | 30.00th=[    8], 40.00th=[    8], 50.00th=[    9], 60.00th=[   14],
         | 70.00th=[   19], 80.00th=[   27], 90.00th=[   42], 95.00th=[   53],
         | 99.00th=[   68], 99.50th=[   73], 99.90th=[ 3004], 99.95th=[ 3004],
         | 99.99th=[ 3004]
       bw (  MiB/s): min=   96, max= 4012, per=100.00%, avg=1661.50, stdev=1333.62, samples=12
       iops        : min=   96, max= 4012, avg=1661.50, stdev=1333.62, samples=12
      lat (usec)   : 4=0.01%, 10=0.04%, 250=0.01%, 500=0.03%, 750=0.04%
      lat (usec)   : 1000=0.03%
      lat (msec)   : 2=0.13%, 4=0.24%, 10=51.28%, 20=22.40%, 50=18.48%
      lat (msec)   : 100=7.01%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1273, max=1273, avg=1273.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1272],  5.00th=[ 1272], 10.00th=[ 1272], 20.00th=[ 1272],
         | 30.00th=[ 1272], 40.00th=[ 1272], 50.00th=[ 1272], 60.00th=[ 1272],
         | 70.00th=[ 1272], 80.00th=[ 1272], 90.00th=[ 1272], 95.00th=[ 1272],
         | 99.00th=[ 1272], 99.50th=[ 1272], 99.90th=[ 1272], 99.95th=[ 1272],
         | 99.99th=[ 1272]
      cpu          : usr=4.93%, sys=70.80%, ctx=12994, majf=0, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=1182MiB/s (1240MB/s), 1182MiB/s-1182MiB/s (1240MB/s-1240MB/s), io=10.0GiB (10.7GB), run=8662-8662msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank/ && fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [R(1)][-.-%][r=2940MiB/s][r=2940 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=591376: Fri Dec 15 19:19:44 2023
      read: IOPS=2850, BW=2851MiB/s (2989MB/s)(10.0GiB/3592msec)
        slat (usec): min=279, max=1192, avg=347.79, stdev=46.49
        clat (usec): min=3, max=31039, avg=10762.67, stdev=1306.38
         lat (usec): min=325, max=32233, avg=11110.82, stdev=1340.39
        clat percentiles (usec):
         |  1.00th=[ 6915],  5.00th=[10421], 10.00th=[10421], 20.00th=[10421],
         | 30.00th=[10552], 40.00th=[10552], 50.00th=[10552], 60.00th=[10552],
         | 70.00th=[10552], 80.00th=[10683], 90.00th=[12387], 95.00th=[12387],
         | 99.00th=[13698], 99.50th=[15401], 99.90th=[26346], 99.95th=[28705],
         | 99.99th=[30540]
       bw (  MiB/s): min= 2292, max= 3006, per=99.68%, avg=2841.71, stdev=245.85, samples=7
       iops        : min= 2292, max= 3006, avg=2841.71, stdev=245.85, samples=7
      lat (usec)   : 4=0.04%, 10=0.01%, 500=0.05%, 750=0.05%
      lat (msec)   : 2=0.15%, 4=0.29%, 10=0.88%, 20=98.30%, 50=0.23%
      cpu          : usr=0.86%, sys=99.03%, ctx=64, majf=0, minf=8203
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=2851MiB/s (2989MB/s), 2851MiB/s-2851MiB/s (2989MB/s-2989MB/s), io=10.0GiB (10.7GB), run=3592-3592msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank/ && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                          
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=591662: Fri Dec 15 19:20:00 2023
      write: IOPS=1262, BW=1263MiB/s (1324MB/s)(10.0GiB/8108msec); 0 zone resets
        slat (usec): min=183, max=2929, avg=470.22, stdev=407.95
        clat (usec): min=3, max=3305.8k, avg=24472.97, stdev=179936.26
         lat (usec): min=254, max=3307.7k, avg=24943.64, stdev=180040.18
        clat percentiles (msec):
         |  1.00th=[    6],  5.00th=[    8], 10.00th=[    8], 20.00th=[    8],
         | 30.00th=[    8], 40.00th=[    8], 50.00th=[    9], 60.00th=[   10],
         | 70.00th=[   14], 80.00th=[   19], 90.00th=[   31], 95.00th=[   51],
         | 99.00th=[   62], 99.50th=[   65], 99.90th=[ 3306], 99.95th=[ 3306],
         | 99.99th=[ 3306]
       bw (  MiB/s): min=  308, max= 4056, per=100.00%, avg=1993.80, stdev=1391.49, samples=10
       iops        : min=  308, max= 4056, avg=1993.80, stdev=1391.49, samples=10
      lat (usec)   : 4=0.04%, 10=0.01%, 500=0.04%, 750=0.05%, 1000=0.06%
      lat (msec)   : 2=0.17%, 4=0.36%, 10=60.14%, 20=20.16%, 50=14.06%
      lat (msec)   : 100=4.62%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=956, max=956, avg=956.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[  956],  5.00th=[  956], 10.00th=[  956], 20.00th=[  956],
         | 30.00th=[  956], 40.00th=[  956], 50.00th=[  956], 60.00th=[  956],
         | 70.00th=[  956], 80.00th=[  956], 90.00th=[  956], 95.00th=[  956],
         | 99.00th=[  956], 99.50th=[  956], 99.90th=[  956], 99.95th=[  956],
         | 99.99th=[  956]
      cpu          : usr=5.22%, sys=76.59%, ctx=9165, majf=0, minf=14
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=1263MiB/s (1324MB/s), 1263MiB/s-1263MiB/s (1324MB/s-1324MB/s), io=10.0GiB (10.7GB), run=8108-8108msec
Have I done something wrong with the ARC settings?

    Code:
        time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  size     c  avail
    19:42:31     0     0      0     0    0     0    0     0    0  2.1G  8.0G   115G
    Code:
    ~ » cat /etc/modprobe.d/zfs.conf                                                                             
    options zfs zfs_arc_min=8589934592
    options zfs zfs_arc_max=68719476736
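For anyone reading along: `zfs_arc_min` and `zfs_arc_max` take raw byte values, so the numbers above are 8 GiB and 64 GiB. A quick sketch of the conversion (the `gib_to_bytes` helper name is just for illustration):

```shell
# Convert GiB to the byte values zfs_arc_min/zfs_arc_max expect.
gib_to_bytes() {
  echo $(( $1 * 1024 * 1024 * 1024 ))
}

gib_to_bytes 8    # 8589934592  (zfs_arc_min above)
gib_to_bytes 64   # 68719476736 (zfs_arc_max above)
```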
    It seems like a lot less memory is being used now:

    Code:
    ~ » free -m  | grep ^Mem | tr -s ' ' | cut -d ' ' -f 3              
    5432
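Rather than inferring ARC usage from `free`, the current ARC size (in bytes) can also be read straight from the kernel stats on OpenZFS-on-Linux; a minimal sketch, assuming the usual `/proc/spl/kstat/zfs/arcstats` path (the `arc_size_bytes` helper name is hypothetical):

```shell
# Print the ARC "size" row (third column, bytes) from arcstats.
# Takes an optional path argument so it can be pointed at a sample file.
arc_size_bytes() {
  awk '$1 == "size" { print $3 }' "${1:-/proc/spl/kstat/zfs/arcstats}"
}

# Usage on a live system, e.g.:
#   arc_size_bytes | numfmt --to=iec
```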

  4. #74
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Forget that, I've caned the array on several devices and it's ramping up:

    Code:
    ~ » free -m  | grep ^Mem | tr -s ' ' | cut -d ' ' -f 3 
    14626
    Code:
     time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  size     c  avail
    20:25:23     7     1     14     0    0     1  100     0    0   15G   15G   101G
    --------------------------------------------------------------------------------
    I guess now we just wait...

    Edit: should that miss% be that high?

    Code:
        time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  size     c  avail
    20:26:35    13     2     15     0    0     2  100     0    0   20G   20G    96G
    --------------------------------------------------------------------------------
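On the miss% question: arcstat's miss% is just misses as a share of reads in the sampling interval, so with only 13 reads even 2 misses show up as a double-digit percentage. It is noise at this sample size, not a cache problem:

```shell
# miss% = misses / reads, as an integer percentage (mirroring the
# arcstat line above: 13 reads, 2 misses).
reads=13
miss=2
miss_pct=$(( miss * 100 / reads ))
echo "$miss_pct"   # 15
```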
    Code:
    cd /mnt/Tank/ && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                          
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s] 
    TEST: (groupid=0, jobs=1): err= 0: pid=262705: Fri Dec 15 20:28:00 2023
      write: IOPS=1101, BW=1101MiB/s (1155MB/s)(10.0GiB/9297msec); 0 zone resets
        slat (usec): min=187, max=453027, avg=529.84, stdev=6312.16
        clat (usec): min=3, max=3904.5k, avg=28037.27, stdev=214980.73
         lat (usec): min=246, max=3906.7k, avg=28567.56, stdev=215184.64
        clat percentiles (msec):
         |  1.00th=[    6],  5.00th=[    8], 10.00th=[    8], 20.00th=[    8],
         | 30.00th=[    8], 40.00th=[    8], 50.00th=[    8], 60.00th=[    9],
         | 70.00th=[   12], 80.00th=[   18], 90.00th=[   21], 95.00th=[   54],
         | 99.00th=[  271], 99.50th=[  464], 99.90th=[ 3876], 99.95th=[ 3910],
         | 99.99th=[ 3910]
       bw (  MiB/s): min=  374, max= 3962, per=100.00%, avg=1812.55, stdev=1421.10, samples=11
       iops        : min=  374, max= 3962, avg=1812.55, stdev=1421.10, samples=11
      lat (usec)   : 4=0.04%, 10=0.01%, 250=0.01%, 500=0.03%, 750=0.05%
      lat (usec)   : 1000=0.04%
      lat (msec)   : 2=0.17%, 4=0.34%, 10=66.80%, 20=19.43%, 50=7.63%
      lat (msec)   : 100=3.95%, 250=0.30%, 500=0.91%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=289, max=289, avg=289.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[  290],  5.00th=[  290], 10.00th=[  290], 20.00th=[  290],
         | 30.00th=[  290], 40.00th=[  290], 50.00th=[  290], 60.00th=[  290],
         | 70.00th=[  290], 80.00th=[  290], 90.00th=[  290], 95.00th=[  290],
         | 99.00th=[  290], 99.50th=[  290], 99.90th=[  290], 99.95th=[  290],
         | 99.99th=[  290]
      cpu          : usr=4.54%, sys=66.77%, ctx=7293, majf=1, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=1101MiB/s (1155MB/s), 1101MiB/s-1101MiB/s (1155MB/s-1155MB/s), io=10.0GiB (10.7GB), run=9297-9297msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank/ && fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [R(1)][-.-%][r=2932MiB/s][r=2931 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=302568: Fri Dec 15 20:28:19 2023
      read: IOPS=2875, BW=2876MiB/s (3015MB/s)(10.0GiB/3561msec)
        slat (usec): min=285, max=1555, avg=344.94, stdev=38.58
        clat (usec): min=2, max=24739, avg=10677.57, stdev=1039.32
         lat (usec): min=329, max=25648, avg=11022.85, stdev=1056.27
        clat percentiles (usec):
         |  1.00th=[ 6849],  5.00th=[10421], 10.00th=[10421], 20.00th=[10552],
         | 30.00th=[10552], 40.00th=[10552], 50.00th=[10552], 60.00th=[10683],
         | 70.00th=[10683], 80.00th=[10945], 90.00th=[10945], 95.00th=[11207],
         | 99.00th=[14484], 99.50th=[14877], 99.90th=[20841], 99.95th=[22676],
         | 99.99th=[24249]
       bw (  MiB/s): min= 2680, max= 2936, per=99.74%, avg=2868.00, stdev=88.69, samples=7
       iops        : min= 2680, max= 2936, avg=2868.00, stdev=88.69, samples=7
      lat (usec)   : 4=0.05%, 500=0.05%, 750=0.05%, 1000=0.01%
      lat (msec)   : 2=0.13%, 4=0.29%, 10=0.85%, 20=98.45%, 50=0.13%
      cpu          : usr=1.10%, sys=98.51%, ctx=57, majf=0, minf=8204
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=2876MiB/s (3015MB/s), 2876MiB/s-2876MiB/s (3015MB/s-3015MB/s), io=10.0GiB (10.7GB), run=3561-3561msec
    Edit2: Watching Avatar in 4K with Dolby has used up some memory!

    Code:
     time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  size     c  avail
    20:31:01     0     0      0     0    0     0    0     0    0   35G   35G    81G
    Code:
    free -m  | grep ^Mem | tr -s ' ' | cut -d ' ' -f 3
    40319
    Code:
    cd /mnt/Tank/ && fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting 
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [R(1)][-.-%][r=2873MiB/s][r=2872 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=351649: Fri Dec 15 20:33:16 2023
      read: IOPS=2781, BW=2782MiB/s (2917MB/s)(10.0GiB/3681msec)
        slat (usec): min=274, max=1177, avg=356.52, stdev=46.53
        clat (usec): min=3, max=31076, avg=11029.95, stdev=1331.64
         lat (usec): min=336, max=32255, avg=11386.82, stdev=1365.82
        clat percentiles (usec):
         |  1.00th=[ 6980],  5.00th=[10683], 10.00th=[10683], 20.00th=[10683],
         | 30.00th=[10814], 40.00th=[10814], 50.00th=[10814], 60.00th=[10814],
         | 70.00th=[10945], 80.00th=[10945], 90.00th=[12780], 95.00th=[12911],
         | 99.00th=[14091], 99.50th=[15795], 99.90th=[26346], 99.95th=[28705],
         | 99.99th=[30540]
       bw (  MiB/s): min= 2216, max= 2874, per=99.59%, avg=2770.57, stdev=245.38, samples=7
       iops        : min= 2216, max= 2874, avg=2770.57, stdev=245.38, samples=7
      lat (usec)   : 4=0.04%, 10=0.01%, 500=0.05%, 750=0.05%
      lat (msec)   : 2=0.15%, 4=0.29%, 10=0.84%, 20=98.34%, 50=0.23%
      cpu          : usr=1.17%, sys=98.72%, ctx=5, majf=0, minf=8205
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=2782MiB/s (2917MB/s), 2782MiB/s-2782MiB/s (2917MB/s-2917MB/s), io=10.0GiB (10.7GB), run=3681-3681msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank/ && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][-.-%][eta 00m:00s]                          
    TEST: (groupid=0, jobs=1): err= 0: pid=351921: Fri Dec 15 20:33:29 2023
      write: IOPS=1393, BW=1394MiB/s (1461MB/s)(10.0GiB/7348msec); 0 zone resets
        slat (usec): min=198, max=2094, avg=328.80, stdev=166.25
        clat (usec): min=3, max=3952.5k, avg=22179.16, stdev=216630.55
         lat (usec): min=261, max=3952.8k, avg=22508.45, stdev=216629.88
        clat percentiles (msec):
         |  1.00th=[    6],  5.00th=[    8], 10.00th=[    8], 20.00th=[    8],
         | 30.00th=[    8], 40.00th=[    9], 50.00th=[    9], 60.00th=[    9],
         | 70.00th=[    9], 80.00th=[   11], 90.00th=[   19], 95.00th=[   24],
         | 99.00th=[   27], 99.50th=[   27], 99.90th=[ 3943], 99.95th=[ 3943],
         | 99.99th=[ 3943]
       bw (  MiB/s): min= 1330, max= 3926, per=100.00%, avg=2848.29, stdev=1008.95, samples=7
       iops        : min= 1330, max= 3926, avg=2848.29, stdev=1008.95, samples=7
      lat (usec)   : 4=0.04%, 10=0.01%, 500=0.05%, 750=0.05%, 1000=0.05%
      lat (msec)   : 2=0.20%, 4=0.36%, 10=77.43%, 20=13.69%, 50=7.82%
      lat (msec)   : >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1089, max=1089, avg=1089.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1096],  5.00th=[ 1096], 10.00th=[ 1096], 20.00th=[ 1096],
         | 30.00th=[ 1096], 40.00th=[ 1096], 50.00th=[ 1096], 60.00th=[ 1096],
         | 70.00th=[ 1096], 80.00th=[ 1096], 90.00th=[ 1096], 95.00th=[ 1096],
         | 99.00th=[ 1096], 99.50th=[ 1096], 99.90th=[ 1096], 99.95th=[ 1096],
         | 99.99th=[ 1096]
      cpu          : usr=6.29%, sys=85.31%, ctx=3121, majf=0, minf=14
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=1394MiB/s (1461MB/s), 1394MiB/s-1394MiB/s (1461MB/s-1461MB/s), io=10.0GiB (10.7GB), run=7348-7348msec
    Last edited by tkae-lp; December 15th, 2023 at 09:34 PM.

  5. #75
    Join Date
    Aug 2016
    Location
    Wandering
    Beans
    Hidden!
    Distro
    Xubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Looking pretty good so far.
    With realization of one's own potential and self-confidence in one's ability, one can build a better world.
    Dalai Lama
    Code Tags | System-info | Forum Guide lines | Arch Linux, Debian Unstable, FreeBSD

  6. #76
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Let's see how this pans out over a few days...


  7. #77
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by 1fallen View Post
    Looking pretty good so far.
    Seemingly so... Interestingly, the Samsung SSD is much heavier than the Sandisk; there's clearly a lot more heat dissipation built into it. The temperature is almost half that of the Sandisk, IIRC: Disk is OK (27° C / 81° F), and I'm pretty sure the Sandisk ran in the high 40s. It's been powered on long enough by now to get well up to temp:

    Code:
    "Description","Value","Flags","Page, Offset"
    "    Power-on Hours","11","---","0x01, 0x010"
    Quote Originally Posted by MAFoElffen View Post
    Let's see how this pans out over a few days...
    Yes, I won't be so quick to do a victory dance this time! As of tomorrow I'll have a lot of spare time and will be able to keep thrashing it (if you can call basic reading of media files thrashing!).

    Right now:

    Code:
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank/ && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                          
    Jobs: 1 (f=0): [f(1)][100.0%][w=271MiB/s][w=271 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=520167: Fri Dec 15 23:21:52 2023
      write: IOPS=1160, BW=1161MiB/s (1217MB/s)(10.0GiB/8820msec); 0 zone resets
        slat (usec): min=195, max=2127, avg=446.67, stdev=345.36
        clat (usec): min=3, max=4211.4k, avg=26614.95, stdev=230854.66
         lat (usec): min=253, max=4211.6k, avg=27062.14, stdev=230860.39
        clat percentiles (msec):
         |  1.00th=[    6],  5.00th=[    9], 10.00th=[    9], 20.00th=[    9],
         | 30.00th=[    9], 40.00th=[    9], 50.00th=[    9], 60.00th=[    9],
         | 70.00th=[   14], 80.00th=[   18], 90.00th=[   31], 95.00th=[   42],
         | 99.00th=[   55], 99.50th=[   62], 99.90th=[ 4212], 99.95th=[ 4212],
         | 99.99th=[ 4212]
       bw (  MiB/s): min=  384, max= 3832, per=100.00%, avg=1993.80, stdev=1330.94, samples=10
       iops        : min=  384, max= 3832, avg=1993.80, stdev=1330.94, samples=10
      lat (usec)   : 4=0.04%, 10=0.01%, 500=0.05%, 750=0.04%, 1000=0.05%
      lat (msec)   : 2=0.18%, 4=0.32%, 10=66.43%, 20=15.73%, 50=15.77%
      lat (msec)   : 100=1.08%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=786, max=786, avg=786.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[  788],  5.00th=[  788], 10.00th=[  788], 20.00th=[  788],
         | 30.00th=[  788], 40.00th=[  788], 50.00th=[  788], 60.00th=[  788],
         | 70.00th=[  788], 80.00th=[  788], 90.00th=[  788], 95.00th=[  788],
         | 99.00th=[  788], 99.50th=[  788], 99.90th=[  788], 99.95th=[  788],
         | 99.99th=[  788]
      cpu          : usr=5.45%, sys=75.48%, ctx=10726, majf=0, minf=16
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=1161MiB/s (1217MB/s), 1161MiB/s-1161MiB/s (1217MB/s-1217MB/s), io=10.0GiB (10.7GB), run=8820-8820msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank/ && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][-.-%][w=177MiB/s][w=177 IOPS][eta 00m:00s]  
    TEST: (groupid=0, jobs=1): err= 0: pid=567469: Fri Dec 15 23:22:02 2023
      write: IOPS=1492, BW=1493MiB/s (1565MB/s)(10.0GiB/6859msec); 0 zone resets
        slat (usec): min=188, max=1782, avg=287.70, stdev=114.15
        clat (usec): min=3, max=3888.7k, avg=20698.96, stdev=213157.67
         lat (usec): min=233, max=3889.0k, avg=20987.04, stdev=213156.72
        clat percentiles (msec):
         |  1.00th=[    6],  5.00th=[    8], 10.00th=[    8], 20.00th=[    8],
         | 30.00th=[    8], 40.00th=[    8], 50.00th=[    8], 60.00th=[    8],
         | 70.00th=[    9], 80.00th=[   10], 90.00th=[   13], 95.00th=[   17],
         | 99.00th=[   24], 99.50th=[   42], 99.90th=[ 3876], 99.95th=[ 3876],
         | 99.99th=[ 3876]
       bw (  MiB/s): min= 2684, max= 4042, per=100.00%, avg=3323.00, stdev=602.93, samples=6
       iops        : min= 2684, max= 4042, avg=3323.00, stdev=602.93, samples=6
      lat (usec)   : 4=0.04%, 10=0.01%, 250=0.02%, 500=0.04%, 750=0.05%
      lat (usec)   : 1000=0.06%
      lat (msec)   : 2=0.20%, 4=0.38%, 10=80.29%, 20=17.81%, 50=0.80%
      lat (msec)   : >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=939, max=939, avg=939.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[  940],  5.00th=[  940], 10.00th=[  940], 20.00th=[  940],
         | 30.00th=[  940], 40.00th=[  940], 50.00th=[  940], 60.00th=[  940],
         | 70.00th=[  940], 80.00th=[  940], 90.00th=[  940], 95.00th=[  940],
         | 99.00th=[  940], 99.50th=[  940], 99.90th=[  940], 99.95th=[  940],
         | 99.99th=[  940]
      cpu          : usr=6.61%, sys=88.48%, ctx=2231, majf=0, minf=16
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=1493MiB/s (1565MB/s), 1493MiB/s-1493MiB/s (1565MB/s-1565MB/s), io=10.0GiB (10.7GB), run=6859-6859msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank/ && fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [R(1)][-.-%][r=3016MiB/s][r=3016 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=613737: Fri Dec 15 23:22:11 2023
      read: IOPS=2926, BW=2927MiB/s (3069MB/s)(10.0GiB/3499msec)
        slat (usec): min=275, max=1055, avg=339.06, stdev=46.04
        clat (usec): min=2, max=31307, avg=10481.74, stdev=1296.16
         lat (usec): min=320, max=32361, avg=10821.15, stdev=1330.75
        clat percentiles (usec):
         |  1.00th=[ 6587],  5.00th=[10159], 10.00th=[10159], 20.00th=[10159],
         | 30.00th=[10290], 40.00th=[10290], 50.00th=[10290], 60.00th=[10290],
         | 70.00th=[10421], 80.00th=[10421], 90.00th=[11994], 95.00th=[12125],
         | 99.00th=[13435], 99.50th=[15139], 99.90th=[26346], 99.95th=[28967],
         | 99.99th=[30802]
       bw (  MiB/s): min= 2356, max= 3022, per=99.24%, avg=2904.33, stdev=268.84, samples=6
       iops        : min= 2356, max= 3022, avg=2904.33, stdev=268.84, samples=6
      lat (usec)   : 4=0.05%, 500=0.05%, 750=0.05%, 1000=0.03%
      lat (msec)   : 2=0.15%, 4=0.30%, 10=0.95%, 20=98.20%, 50=0.22%
      cpu          : usr=0.89%, sys=99.06%, ctx=16, majf=0, minf=8204
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=2927MiB/s (3069MB/s), 2927MiB/s-2927MiB/s (3069MB/s-3069MB/s), io=10.0GiB (10.7GB), run=3499-3499msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank/ && fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=0): [f(1)][100.0%][r=2713MiB/s][r=2712 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=613942: Fri Dec 15 23:22:19 2023
      read: IOPS=2661, BW=2662MiB/s (2791MB/s)(10.0GiB/3847msec)
        slat (usec): min=296, max=1148, avg=372.74, stdev=48.82
        clat (usec): min=2, max=33466, avg=11525.97, stdev=1407.46
         lat (usec): min=356, max=34588, avg=11899.09, stdev=1444.27
        clat percentiles (usec):
         |  1.00th=[ 7308],  5.00th=[11076], 10.00th=[11207], 20.00th=[11207],
         | 30.00th=[11338], 40.00th=[11338], 50.00th=[11338], 60.00th=[11338],
         | 70.00th=[11338], 80.00th=[11469], 90.00th=[13304], 95.00th=[13435],
         | 99.00th=[14746], 99.50th=[16581], 99.90th=[28443], 99.95th=[31065],
         | 99.99th=[32900]
       bw (  MiB/s): min= 2104, max= 2752, per=99.41%, avg=2646.00, stdev=239.23, samples=7
       iops        : min= 2104, max= 2752, avg=2646.00, stdev=239.23, samples=7
      lat (usec)   : 4=0.05%, 500=0.05%, 750=0.04%, 1000=0.01%
      lat (msec)   : 2=0.15%, 4=0.27%, 10=0.80%, 20=98.37%, 50=0.26%
      cpu          : usr=1.01%, sys=98.91%, ctx=7, majf=0, minf=8205
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=2662MiB/s (2791MB/s), 2662MiB/s-2662MiB/s (2791MB/s-2791MB/s), io=10.0GiB (10.7GB), run=3847-3847msec
    Code:
        time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  size     c  avail
    23:22:41     0     0      0     0    0     0    0     0    0   21G   37G    94G
    Code:
    free -m  | grep ^Mem | tr -s ' ' | cut -d ' ' -f 3
    26356
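If I'm reading the two outputs together correctly, the ARC (21G in the arcstat `size` column) accounts for most of that `free -m` used figure, since on Linux the ZFS ARC is allocated from kernel memory and shows up as "used" rather than as cache:

```shell
# free -m reports used memory in MiB; convert to GiB for comparison
# with the 21 GiB ARC size from arcstat above
awk 'BEGIN { printf "%.1f GiB used\n", 26356 / 1024 }'
# → 25.7 GiB used, of which ~21 GiB would be ARC
```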
    See you on the other side of the rabbit hole

  8. #78
    Join Date
    Aug 2016
    Location
    Wandering
    Beans
    Hidden!
    Distro
    Xubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by tkae-lp View Post

    See you on the other side of the rabbit hole

    Sometimes that's Soo True!
    With realization of one's own potential and self-confidence in one's ability, one can build a better world.
Dalai Lama
    Code Tags | System-info | Forum Guide lines | Arch Linux, Debian Unstable, FreeBSD

  9. #79
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    While I traverse the warren, do you guys know how to hide a ZFS 'element' in the XFCE4 sidebar? I did it on Bionic, and I can't remember how....

    I have both the Tank and the L2ARC/SLOG in the side panel:



When I accidentally click on one, it prompts for the root password, which is annoying. I can't remember how I hid these before.

    Edit: Sorry, I realise this is off-topic
    Last edited by tkae-lp; December 16th, 2023 at 03:40 AM.

  10. #80
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    • Open Settings.
    • Select Appearance in the sidebar.
    • Click 'configure dock behaviour' in the Dock section.
    • Slide the 'Show Volumes and Devices' toggle to off.
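For Thunar/XFCE4 specifically, the side-pane entries usually come from udisks2, so another option (a sketch, not tested on the OP's setup) is a udev rule telling udisks2 to ignore ZFS member partitions, which removes them from GVfs-backed side panes; the filename here is arbitrary:

```shell
# /etc/udev/rules.d/99-hide-zfs-members.rules (hypothetical filename)
# Mark partitions whose filesystem signature is zfs_member so that
# udisks2 ignores them and file managers stop listing them.
ENV{ID_FS_TYPE}=="zfs_member", ENV{UDISKS_IGNORE}="1"
```

After adding the rule, `sudo udevadm control --reload && sudo udevadm trigger` (or a reboot) should make it take effect.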

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

