
Thread: Seemingly sporadic slow ZFS IO since 22.04

  1. #21
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by MAFoElffen View Post
    Ready to try a test on the cheap, without spending any money yet?
    Yes, I've already bought the memory and the NVMe though. The NVMe isn't here yet; it should be here Monday. It's the 2TB module.

    Quote Originally Posted by MAFoElffen View Post
    These are the entries I want you to change, noted from my /etc/modprobe.d/zfs.conf file:
    Code:
    mafoelffen@Mikes-B460M:~$ grep zfs_arc_ /etc/modprobe.d/zfs.conf
    # This is the default for mine, which is half the total memory (set to 64GiB). Yours will be about 16GB... 
    options zfs zfs_arc_max=68719476736
    # The default for minimum is about 1GB
    options zfs zfs_arc_min=1073741824
    Try editing yours and set it to 4GB. It is in bytes, so 4x1024x1024x1024=4294967296...
    Is zfs.conf definitely meant to always be present? Because I do not have it on my installation....

    Code:
    /etc/modprobe.d # ls zfs.conf
    ls: cannot access 'zfs.conf': No such file or directory
    Or did you mean just create it? If it's meant to always be there, we may have discovered the problem!
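
    For anyone following along, here is the byte arithmetic spelt out and one way to create the file from scratch. This is only an illustrative sketch using the 4GB max / 1GB min suggested above (a GiB is 1024^3 bytes); adjust the sizes to your own RAM:

    Code:
    # compute the limits in bytes
    echo $((4 * 1024**3))   # 4294967296 -> zfs_arc_max for a 4GiB cap
    echo $((1 * 1024**3))   # 1073741824 -> zfs_arc_min for a 1GiB floor

    # write them to the (normally absent) modprobe conf, then rebuild the initramfs and reboot
    printf 'options zfs zfs_arc_max=%s\noptions zfs zfs_arc_min=%s\n' \
        $((4 * 1024**3)) $((1 * 1024**3)) | sudo tee /etc/modprobe.d/zfs.conf
    sudo update-initramfs -u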

    Edit: I think the memory is a worthwhile upgrade. I could cancel the NVMe, but it can only improve things, so I'm not opposed to spending a bit on it. We'll call it my server's Christmas present, and I'll provide benchmarks for you guys
    Last edited by tkae-lp; December 2nd, 2023 at 03:06 AM.

  2. #22
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    If this file is indeed meant to always be present, we could knock this one on the head for the future with something added to your sys info script:

    Code:
    #!/bin/bash
    if [ -e /etc/modprobe.d/zfs.conf ]
    then
        echo "ZFS: Modprobe conf file is present."
    else
        echo "ZFS: Modprobe conf file is missing!"
    fi
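
    A possible extension (just a sketch, not part of the actual system-info script) would be to also print the configured values whenever the file is present:

    Code:
    #!/bin/bash
    # Sketch: report the ZFS modprobe conf and, if present, the options it sets
    conf=/etc/modprobe.d/zfs.conf
    if [ -e "$conf" ]
    then
        echo "ZFS: Modprobe conf file is present:"
        grep '^options zfs' "$conf"
    else
        echo "ZFS: Modprobe conf file is missing (module defaults in use)."
    fi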
    Edit: Still... it makes me wonder why on earth it's not there?! I certainly didn't remove it!
    Last edited by tkae-lp; December 2nd, 2023 at 03:26 AM.

  3. #23
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    I've got to hit the sack; it's nearly 3am. I've created the conf file with just the min and max values, updated the initramfs and rebooted. I'll report back tomorrow (today!)

  4. #24
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    OK, so curiosity got me... it hasn't helped. Here's the fio write test performed again and again (a note on watching the pool during these runs follows the output below). Memory usage fluctuated between 7GB and 9.5GB used. Throughput degrades with each subsequent run, then suddenly jumps back up to about 505MB/s :-S

    Really hitting the sack now. Until tomorrow....

    Code:
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][50.0%][w=485MiB/s][w=485 IOPS][eta 00m:07s]
    Jobs: 1 (f=1): [W(1)][81.2%][w=565MiB/s][w=565 IOPS][eta 00m:03s] 
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                        
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=218196: Sat Dec  2 03:39:44 2023
      write: IOPS=518, BW=519MiB/s (544MB/s)(10.0GiB/19739msec); 0 zone resets
        slat (usec): min=223, max=5441, avg=1577.67, stdev=657.74
        clat (usec): min=3, max=3579.9k, avg=59445.30, stdev=193945.51
         lat (usec): min=325, max=3581.7k, avg=61023.68, stdev=194025.35
        clat percentiles (msec):
         |  1.00th=[   11],  5.00th=[   13], 10.00th=[   14], 20.00th=[   22],
         | 30.00th=[   50], 40.00th=[   51], 50.00th=[   56], 60.00th=[   57],
         | 70.00th=[   59], 80.00th=[   64], 90.00th=[   70], 95.00th=[   78],
         | 99.00th=[   88], 99.50th=[  101], 99.90th=[ 3574], 99.95th=[ 3574],
         | 99.99th=[ 3574]
       bw (  KiB/s): min=131072, max=2045952, per=100.00%, avg=618682.18, stdev=346402.86, samples=33
       iops        : min=  128, max= 1998, avg=604.18, stdev=338.28, samples=33
      lat (usec)   : 4=0.02%, 10=0.02%, 20=0.01%, 500=0.02%, 750=0.01%
      lat (usec)   : 1000=0.01%
      lat (msec)   : 2=0.07%, 4=0.15%, 10=0.46%, 20=18.67%, 50=18.09%
      lat (msec)   : 100=61.96%, 250=0.21%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1326, max=1326, avg=1326.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1320],  5.00th=[ 1320], 10.00th=[ 1320], 20.00th=[ 1320],
         | 30.00th=[ 1320], 40.00th=[ 1320], 50.00th=[ 1320], 60.00th=[ 1320],
         | 70.00th=[ 1320], 80.00th=[ 1320], 90.00th=[ 1320], 95.00th=[ 1320],
         | 99.00th=[ 1320], 99.50th=[ 1320], 99.90th=[ 1320], 99.95th=[ 1320],
         | 99.99th=[ 1320]
      cpu          : usr=3.83%, sys=20.61%, ctx=68366, majf=0, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=519MiB/s (544MB/s), 519MiB/s-519MiB/s (544MB/s-544MB/s), io=10.0GiB (10.7GB), run=19739-19739msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][50.0%][w=422MiB/s][w=422 IOPS][eta 00m:07s]
    Jobs: 1 (f=1): [W(1)][76.5%][w=237MiB/s][w=237 IOPS][eta 00m:04s] 
    Jobs: 1 (f=1): [W(1)][95.0%][w=479MiB/s][w=479 IOPS][eta 00m:01s]
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                         
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=282657: Sat Dec  2 03:40:25 2023
      write: IOPS=389, BW=389MiB/s (408MB/s)(10.0GiB/26298msec); 0 zone resets
        slat (usec): min=235, max=2469.9k, avg=1863.58, stdev=28206.96
        clat (usec): min=3, max=7211.0k, avg=79337.13, stdev=422662.74
         lat (usec): min=286, max=7212.5k, avg=81201.28, stdev=423608.56
        clat percentiles (msec):
         |  1.00th=[   11],  5.00th=[   12], 10.00th=[   13], 20.00th=[   15],
         | 30.00th=[   28], 40.00th=[   50], 50.00th=[   52], 60.00th=[   55],
         | 70.00th=[   59], 80.00th=[   68], 90.00th=[   79], 95.00th=[   85],
         | 99.00th=[   95], 99.50th=[ 2500], 99.90th=[ 7215], 99.95th=[ 7215],
         | 99.99th=[ 7215]
       bw (  KiB/s): min=51200, max=2254848, per=100.00%, avg=618546.73, stdev=427097.25, samples=33
       iops        : min=   50, max= 2202, avg=604.03, stdev=417.09, samples=33
      lat (usec)   : 4=0.03%, 10=0.02%, 500=0.02%, 750=0.01%, 1000=0.02%
      lat (msec)   : 2=0.08%, 4=0.15%, 10=0.51%, 20=26.19%, 50=18.09%
      lat (msec)   : 100=53.98%, 2000=0.30%, >=2000=0.61%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1210, max=1210, avg=1210.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1208],  5.00th=[ 1208], 10.00th=[ 1208], 20.00th=[ 1208],
         | 30.00th=[ 1208], 40.00th=[ 1208], 50.00th=[ 1208], 60.00th=[ 1208],
         | 70.00th=[ 1208], 80.00th=[ 1208], 90.00th=[ 1208], 95.00th=[ 1208],
         | 99.00th=[ 1208], 99.50th=[ 1208], 99.90th=[ 1208], 99.95th=[ 1208],
         | 99.99th=[ 1208]
      cpu          : usr=2.10%, sys=16.31%, ctx=60150, majf=0, minf=14
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=389MiB/s (408MB/s), 389MiB/s-389MiB/s (408MB/s-408MB/s), io=10.0GiB (10.7GB), run=26298-26298msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][58.3%][w=573MiB/s][w=573 IOPS][eta 00m:05s]
    Jobs: 1 (f=1): [W(1)][81.2%][eta 00m:03s]                         
    Jobs: 1 (f=1): [W(1)][90.5%][eta 00m:02s]                         
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                         
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s] 
    TEST: (groupid=0, jobs=1): err= 0: pid=386756: Sat Dec  2 03:41:04 2023
      write: IOPS=362, BW=362MiB/s (380MB/s)(10.0GiB/28270msec); 0 zone resets
        slat (usec): min=209, max=3544.7k, avg=1909.22, stdev=37972.15
        clat (usec): min=3, max=8709.7k, avg=85298.83, stdev=519900.81
         lat (usec): min=298, max=8710.8k, avg=87208.59, stdev=521282.88
        clat percentiles (msec):
         |  1.00th=[   10],  5.00th=[   12], 10.00th=[   12], 20.00th=[   14],
         | 30.00th=[   20], 40.00th=[   49], 50.00th=[   52], 60.00th=[   56],
         | 70.00th=[   60], 80.00th=[   63], 90.00th=[   71], 95.00th=[   77],
         | 99.00th=[   89], 99.50th=[ 3574], 99.90th=[ 8658], 99.95th=[ 8658],
         | 99.99th=[ 8658]
       bw (  KiB/s): min=12288, max=2347008, per=100.00%, avg=638016.00, stdev=482903.71, samples=32
       iops        : min=   12, max= 2292, avg=623.06, stdev=471.59, samples=32
      lat (usec)   : 4=0.03%, 10=0.02%, 500=0.02%, 750=0.01%, 1000=0.02%
      lat (msec)   : 2=0.07%, 4=0.12%, 10=0.76%, 20=29.00%, 50=14.66%
      lat (msec)   : 100=54.38%, 2000=0.30%, >=2000=0.61%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1240, max=1240, avg=1240.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1240],  5.00th=[ 1240], 10.00th=[ 1240], 20.00th=[ 1240],
         | 30.00th=[ 1240], 40.00th=[ 1240], 50.00th=[ 1240], 60.00th=[ 1240],
         | 70.00th=[ 1240], 80.00th=[ 1240], 90.00th=[ 1240], 95.00th=[ 1240],
         | 99.00th=[ 1240], 99.50th=[ 1240], 99.90th=[ 1240], 99.95th=[ 1240],
         | 99.99th=[ 1240]
      cpu          : usr=2.14%, sys=14.95%, ctx=58856, majf=0, minf=16
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=362MiB/s (380MB/s), 362MiB/s-362MiB/s (380MB/s-380MB/s), io=10.0GiB (10.7GB), run=28270-28270msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][38.9%][w=310MiB/s][w=310 IOPS][eta 00m:11s]
    Jobs: 1 (f=1): [W(1)][65.0%][w=409MiB/s][w=409 IOPS][eta 00m:07s] 
    Jobs: 1 (f=1): [W(1)][95.0%][w=497MiB/s][w=497 IOPS][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                         
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s] 
    Jobs: 1 (f=1): [W(1)][100.0%][w=221MiB/s][w=221 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=493848: Sat Dec  2 03:41:39 2023
      write: IOPS=321, BW=321MiB/s (337MB/s)(10.0GiB/31858msec); 0 zone resets
        slat (usec): min=240, max=6105, avg=1925.09, stdev=995.47
        clat (usec): min=4, max=12154k, avg=96014.61, stdev=663816.93
         lat (usec): min=378, max=12156k, avg=97940.44, stdev=663882.41
        clat percentiles (msec):
         |  1.00th=[   10],  5.00th=[   11], 10.00th=[   19], 20.00th=[   31],
         | 30.00th=[   51], 40.00th=[   52], 50.00th=[   57], 60.00th=[   61],
         | 70.00th=[   68], 80.00th=[   79], 90.00th=[  101], 95.00th=[  117],
         | 99.00th=[  174], 99.50th=[  178], 99.90th=[12147], 99.95th=[12147],
         | 99.99th=[12147]
       bw (  KiB/s): min=126976, max=2115584, per=100.00%, avg=510384.47, stdev=320831.75, samples=40
       iops        : min=  124, max= 2066, avg=498.40, stdev=313.31, samples=40
      lat (usec)   : 10=0.05%, 500=0.01%, 1000=0.01%
      lat (msec)   : 2=0.06%, 4=0.09%, 10=4.80%, 20=5.86%, 50=18.38%
      lat (msec)   : 100=60.69%, 250=9.75%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1304, max=1304, avg=1304.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1304],  5.00th=[ 1304], 10.00th=[ 1304], 20.00th=[ 1304],
         | 30.00th=[ 1304], 40.00th=[ 1304], 50.00th=[ 1304], 60.00th=[ 1304],
         | 70.00th=[ 1304], 80.00th=[ 1304], 90.00th=[ 1304], 95.00th=[ 1304],
         | 99.00th=[ 1304], 99.50th=[ 1304], 99.90th=[ 1304], 99.95th=[ 1304],
         | 99.99th=[ 1304]
      cpu          : usr=1.69%, sys=16.58%, ctx=70160, majf=0, minf=14
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=321MiB/s (337MB/s), 321MiB/s-321MiB/s (337MB/s-337MB/s), io=10.0GiB (10.7GB), run=31858-31858msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][53.8%][w=425MiB/s][w=425 IOPS][eta 00m:06s]
    Jobs: 1 (f=1): [W(1)][81.2%][w=313MiB/s][w=313 IOPS][eta 00m:03s] 
    Jobs: 1 (f=1): [W(1)][90.5%][w=119MiB/s][w=119 IOPS][eta 00m:02s] 
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                        
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s] 
    Jobs: 1 (f=1): [W(1)][100.0%][w=112MiB/s][w=112 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=578729: Sat Dec  2 03:42:19 2023
      write: IOPS=277, BW=278MiB/s (291MB/s)(10.0GiB/36890msec); 0 zone resets
        slat (usec): min=273, max=22967, avg=2274.51, stdev=2123.06
        clat (usec): min=3, max=13585k, avg=111322.53, stdev=744421.43
         lat (usec): min=333, max=13587k, avg=113597.80, stdev=744580.40
        clat percentiles (msec):
         |  1.00th=[   11],  5.00th=[   13], 10.00th=[   14], 20.00th=[   17],
         | 30.00th=[   50], 40.00th=[   54], 50.00th=[   56], 60.00th=[   58],
         | 70.00th=[   66], 80.00th=[   75], 90.00th=[  174], 95.00th=[  243],
         | 99.00th=[  321], 99.50th=[  330], 99.90th=[13624], 99.95th=[13624],
         | 99.99th=[13624]
       bw (  KiB/s): min=96256, max=2224128, per=100.00%, avg=434393.87, stdev=425821.34, samples=47
       iops        : min=   94, max= 2172, avg=424.21, stdev=415.84, samples=47
      lat (usec)   : 4=0.01%, 10=0.04%, 500=0.01%, 750=0.02%
      lat (msec)   : 2=0.07%, 4=0.12%, 10=0.35%, 20=23.37%, 50=7.84%
      lat (msec)   : 100=52.45%, 250=11.36%, 500=4.06%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1621, max=1621, avg=1621.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1624],  5.00th=[ 1624], 10.00th=[ 1624], 20.00th=[ 1624],
         | 30.00th=[ 1624], 40.00th=[ 1624], 50.00th=[ 1624], 60.00th=[ 1624],
         | 70.00th=[ 1624], 80.00th=[ 1624], 90.00th=[ 1624], 95.00th=[ 1624],
         | 99.00th=[ 1624], 99.50th=[ 1624], 99.90th=[ 1624], 99.95th=[ 1624],
         | 99.99th=[ 1624]
      cpu          : usr=1.62%, sys=14.02%, ctx=79482, majf=0, minf=14
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=278MiB/s (291MB/s), 278MiB/s-278MiB/s (291MB/s-291MB/s), io=10.0GiB (10.7GB), run=36890-36890msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][50.0%][w=166MiB/s][w=166 IOPS][eta 00m:07s]
    Jobs: 1 (f=1): [W(1)][54.2%][w=50.0MiB/s][w=50 IOPS][eta 00m:11s] 
    Jobs: 1 (f=1): [W(1)][63.3%][w=303MiB/s][w=303 IOPS][eta 00m:11s] 
    Jobs: 1 (f=1): [W(1)][89.3%][w=493MiB/s][w=493 IOPS][eta 00m:03s] 
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                        
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s] 
    Jobs: 1 (f=1): [W(1)][97.7%][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][98.0%][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][98.2%][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
    Jobs: 1 (f=1): [W(1)][98.5%][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][98.6%][eta 00m:01s] 
    TEST: (groupid=0, jobs=1): err= 0: pid=654785: Sat Dec  2 03:43:30 2023
      write: IOPS=146, BW=147MiB/s (154MB/s)(9.77GiB/68196msec); 0 zone resets
        slat (usec): min=278, max=33423, avg=2604.53, stdev=3357.37
        clat (usec): min=3, max=42142k, avg=211033.60, stdev=2339285.11
         lat (usec): min=670, max=42144k, avg=213638.72, stdev=2339378.36
        clat percentiles (msec):
         |  1.00th=[   11],  5.00th=[   13], 10.00th=[   14], 20.00th=[   21],
         | 30.00th=[   50], 40.00th=[   51], 50.00th=[   53], 60.00th=[   61],
         | 70.00th=[   65], 80.00th=[   89], 90.00th=[  153], 95.00th=[  279],
         | 99.00th=[  667], 99.50th=[  760], 99.90th=[17113], 99.95th=[17113],
         | 99.99th=[17113]
       bw (  KiB/s): min=40960, max=2269184, per=100.00%, avg=385217.21, stdev=411038.22, samples=53
       iops        : min=   40, max= 2216, avg=376.19, stdev=401.40, samples=53
      lat (usec)   : 4=0.01%, 10=0.02%, 20=0.01%, 750=0.01%
      lat (msec)   : 2=0.03%, 4=0.06%, 10=0.19%, 20=19.65%, 50=22.10%
      lat (msec)   : 100=40.69%, 250=11.16%, 500=3.97%, 750=1.55%, 1000=0.24%
      lat (msec)   : >=2000=0.31%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=2011, max=2011, avg=2011.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 2008],  5.00th=[ 2008], 10.00th=[ 2008], 20.00th=[ 2008],
         | 30.00th=[ 2008], 40.00th=[ 2008], 50.00th=[ 2008], 60.00th=[ 2008],
         | 70.00th=[ 2008], 80.00th=[ 2008], 90.00th=[ 2008], 95.00th=[ 2008],
         | 99.00th=[ 2008], 99.50th=[ 2008], 99.90th=[ 2008], 99.95th=[ 2008],
         | 99.99th=[ 2008]
      cpu          : usr=0.85%, sys=6.88%, ctx=79480, majf=0, minf=16
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10000,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=9.77GiB (10.5GB), run=68196-68196msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][50.0%][w=254MiB/s][w=254 IOPS][eta 00m:07s]
    Jobs: 1 (f=1): [W(1)][76.5%][w=562MiB/s][w=561 IOPS][eta 00m:04s] 
    Jobs: 1 (f=1): [W(1)][100.0%][w=260MiB/s][w=260 IOPS][eta 00m:00s]
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s] 
    TEST: (groupid=0, jobs=1): err= 0: pid=742479: Sat Dec  2 03:43:55 2023
      write: IOPS=481, BW=482MiB/s (505MB/s)(10.0GiB/21261msec); 0 zone resets
        slat (usec): min=263, max=27187, avg=1687.74, stdev=993.41
        clat (usec): min=4, max=3965.2k, avg=64055.65, stdev=215970.10
         lat (usec): min=346, max=3966.7k, avg=65744.08, stdev=216092.80
        clat percentiles (msec):
         |  1.00th=[   10],  5.00th=[   11], 10.00th=[   12], 20.00th=[   16],
         | 30.00th=[   47], 40.00th=[   54], 50.00th=[   57], 60.00th=[   59],
         | 70.00th=[   65], 80.00th=[   69], 90.00th=[   80], 95.00th=[  109],
         | 99.00th=[  146], 99.50th=[  153], 99.90th=[ 3943], 99.95th=[ 3943],
         | 99.99th=[ 3977]
       bw (  KiB/s): min=233472, max=2215936, per=100.00%, avg=583203.51, stdev=433588.73, samples=35
       iops        : min=  228, max= 2164, avg=569.51, stdev=423.43, samples=35
      lat (usec)   : 10=0.05%, 500=0.02%, 750=0.01%, 1000=0.02%
      lat (msec)   : 2=0.06%, 4=0.13%, 10=1.00%, 20=24.64%, 50=8.89%
      lat (msec)   : 100=59.00%, 250=5.89%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1371, max=1371, avg=1371.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1368],  5.00th=[ 1368], 10.00th=[ 1368], 20.00th=[ 1368],
         | 30.00th=[ 1368], 40.00th=[ 1368], 50.00th=[ 1368], 60.00th=[ 1368],
         | 70.00th=[ 1368], 80.00th=[ 1368], 90.00th=[ 1368], 95.00th=[ 1368],
         | 99.00th=[ 1368], 99.50th=[ 1368], 99.90th=[ 1368], 99.95th=[ 1368],
         | 99.99th=[ 1368]
      cpu          : usr=2.66%, sys=21.11%, ctx=66517, majf=0, minf=16
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=482MiB/s (505MB/s), 482MiB/s-482MiB/s (505MB/s-505MB/s), io=10.0GiB (10.7GB), run=21261-21261msec
    --------------------------------------------------------------------------------
    /mnt/Tank # cd /mnt/Tank && fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][53.8%][w=400MiB/s][w=400 IOPS][eta 00m:06s]
    Jobs: 1 (f=1): [W(1)][81.2%][w=442MiB/s][w=442 IOPS][eta 00m:03s] 
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                        
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s] 
    Jobs: 1 (f=1): [W(1)][100.0%][w=126MiB/s][w=126 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=803014: Sat Dec  2 03:44:38 2023
      write: IOPS=380, BW=381MiB/s (399MB/s)(10.0GiB/26884msec); 0 zone resets
        slat (usec): min=243, max=3653, avg=1620.77, stdev=745.03
        clat (usec): min=3, max=10294k, avg=81065.23, stdev=561997.91
         lat (usec): min=350, max=10295k, avg=82686.66, stdev=562052.66
        clat percentiles (msec):
         |  1.00th=[   11],  5.00th=[   12], 10.00th=[   13], 20.00th=[   16],
         | 30.00th=[   50], 40.00th=[   55], 50.00th=[   57], 60.00th=[   59],
         | 70.00th=[   65], 80.00th=[   69], 90.00th=[   74], 95.00th=[   82],
         | 99.00th=[   95], 99.50th=[  100], 99.90th=[10268], 99.95th=[10268],
         | 99.99th=[10268]
       bw (  KiB/s): min=40960, max=2252800, per=100.00%, avg=600485.65, stdev=402828.38, samples=34
       iops        : min=   40, max= 2200, avg=586.41, stdev=393.39, samples=34
      lat (usec)   : 4=0.02%, 10=0.03%, 500=0.02%, 750=0.01%, 1000=0.01%
      lat (msec)   : 2=0.06%, 4=0.14%, 10=0.66%, 20=21.55%, 50=8.54%
      lat (msec)   : 100=68.49%, 250=0.17%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1287, max=1287, avg=1287.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1288],  5.00th=[ 1288], 10.00th=[ 1288], 20.00th=[ 1288],
         | 30.00th=[ 1288], 40.00th=[ 1288], 50.00th=[ 1288], 60.00th=[ 1288],
         | 70.00th=[ 1288], 80.00th=[ 1288], 90.00th=[ 1288], 95.00th=[ 1288],
         | 99.00th=[ 1288], 99.50th=[ 1288], 99.90th=[ 1288], 99.95th=[ 1288],
         | 99.99th=[ 1288]
      cpu          : usr=2.48%, sys=16.26%, ctx=64841, majf=0, minf=16
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=381MiB/s (399MB/s), 381MiB/s-381MiB/s (399MB/s-399MB/s), io=10.0GiB (10.7GB), run=26884-26884msec
    --------------------------------------------------------------------------------
    /mnt/Tank #
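
    As noted above, one thing worth doing while these runs repeat is to watch the pool and the ARC from other terminals, to see what the slowdown correlates with. A rough sketch (the pool name Tank is assumed from the /mnt/Tank mountpoint):

    Code:
    zpool iostat -v Tank 1                      # per-vdev bandwidth and IOPS, every second
    arcstat 1                                   # ARC size and hit/miss rates, every second
    tail -n 20 /proc/spl/kstat/zfs/Tank/txgs    # recent transaction-group (txg) sync activity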
    Last edited by tkae-lp; December 2nd, 2023 at 04:51 AM.

  5. #25
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by tkae-lp View Post
    If this file is indeed meant to always be present, we could knock this one on the head for the future with something added to your sys info script:

    Code:
    #!/bin/bash
    if [ -e /etc/modprobe.d/zfs.conf ]
    then
        echo "ZFS: Modprobe conf file is present."
    else
        echo "ZFS: Modprobe conf file is missing!"
    fi
    Edit: Still... it makes me wonder why on earth it's not there?! I certainly didn't remove it!
    It isn't there by default; you add it to make adjustments to ARC, or other adjustments to the kernel 'zfs' module. Here is mine:
    Code:
    mafoelffen@Mikes-B460M:~$ grep . /etc/modprobe.d/zfs.conf
    # ARC tuning
    # Setting up ZFS ARC size on Ubuntu as per my needs
    # Balanced with my system needs for KVM
    # Set Max ARC size => 32GB == 34359738368 Bytes
    # Set Max ARC size => 64GB == 68719476736 Bytes
    options zfs zfs_arc_max=68719476736
     
    # Set Min ARC size => 2GB == 2147483648 Bytes
    options zfs zfs_arc_min=1073741824
    I remember it from OpenSolaris & Solaris, pre-Oracle.
    RE: https://docs.oracle.com/cd/E26505_01...pterzfs-3.html
    Yes, I've been around a while. If you read through the old docs, it says it could possibly take all the available memory, and you should cap it at about 80%... When you talk to the FreeBSD people, they think it stops at 50%, LOL. You saw that with those commands I gave you to drop the caches; they showed you just how much memory it can chew up. (And how much can be freed up.)
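
    To put that 80% figure into practice, the arithmetic against /proc/meminfo would look something like this (a sketch only; the percentage is the one assumption to adjust):

    Code:
    # derive an ARC cap of ~80% of installed RAM, in bytes
    total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    echo "options zfs zfs_arc_max=$(( total_kb * 1024 * 80 / 100 ))"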

    Back in 2005, memory was more costly and came in smaller sizes, so we "had" to make adjustments, which was sometimes a balancing act. The old Solaris administrator docs still have a lot of information. (1fallen tells me they make his head swim. LOL) So do the docs at OpenZFS: https://openzfs.github.io/openzfs-docs/man/index.html

    Then I came across it again with this: https://www.cyberciti.biz/faq/how-to...-debian-linux/

    L2ARC (2nd Level Adaptive Replacement Cache) is a hardware read cache... Adding one increases your read cache while reducing the need to hold it all in memory, so you can use less RAM. RE: https://docs.oracle.com/cd/E27998_01..._accesses.html

    Going to add it to the script tomorrow... but differently: if it exists, print out the values, with a note reminding people that if it exists and they later change the amount of RAM, they should re-evaluate their ARC settings... ARC is a read cache, sort of like what Intel RST was aiming at (but failed). I think about 512GB is a good size for those. At least in our tests.
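
    For reference, the live values can be read even without the conf file, which is roughly what such a script addition would print. A sketch:

    Code:
    # current module settings (0 means the built-in default is in use)
    cat /sys/module/zfs/parameters/zfs_arc_max /sys/module/zfs/parameters/zfs_arc_min
    # what the ARC is actually doing right now
    grep -E '^(size|c|c_min|c_max) ' /proc/spl/kstat/zfs/arcstats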

    SLOG is the write cache. Most people say that a hardware SLOG should be about 10-16GB and not over... No one seemed to know why that number, or what would happen if it got bigger. Some say that if it is bigger, ZFS does not and will not use it. So I ignored that in our benchmarks to see what happens... Sometimes I have to see things for myself to confirm them. Go big or go broke, right? LOL. The performance kept climbing until about 1TB. It definitely can use over 16GB and be happy with it. If I made the SLOG bigger than 1TB, though, the performance started dropping off again. So 1TB was the sweet spot in our tests.
    Last edited by MAFoElffen; December 2nd, 2023 at 06:33 AM.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  6. #26
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by MAFoElffen View Post
    It isn't there by default; you add it to make adjustments to ARC, or other adjustments to the kernel 'zfs' module.
    Gotcha, makes sense! Would have been good if that was the solution, but never mind

    Quote Originally Posted by MAFoElffen View Post
    I remember it from OpenSolaris & Solaris, pre-Oracle.
    RE: https://docs.oracle.com/cd/E26505_01...pterzfs-3.html
    Yes, I've been around a while. If you read through the old docs, it says it could possibly take all the available memory, and you should cap it at about 80%...
    I only had a brief flicker with OpenSolaris because someone had mentioned ZFS and it intrigued me, but I had only recently been playing with YellowDog and Ximian Gnome (IIRC, could be wrong about that), and I was a total *nix noob, so it was all very alien to me. I ended up back with Win. To be fair, I still use Windows, I just love my Ubuntu servers. I started using Debian probably about the time of the first RasPi (that might literally have been because of the Pi), then moved to Open Media Vault on a different ARM box, but wanted more power and better storage, so ZFS and Xeons came into play. Then, after a time, because Canonical make it SO DAMN EASY (thanks guys!) to use ZFS, it became the no-brainer. I never have to worry about jumping a kernel version and backports not having caught up. Love it.

    Quote Originally Posted by MAFoElffen View Post
    When you talk to the FreeBSD people, they think it stops at 50%, LOL. You saw that with those commands I gave you to drop the caches; they showed you just how much memory it can chew up. (And how much can be freed up.)
    Well, no disrespect to different communities, but I toyed with FreeNAS/TrueNAS for a time when looking at ZFS, and I read some very bizarre things on those forums. I don't remember which forum it was, but one subject that always perplexed me was the use of ECC. Was it essential or even needed? There were obviously louder voices than others, but I'm not entirely sure that all the advice was totally sound. I think it boiled down to confusion between "Will it run without it?" vs "But should you?". I just felt a company that sells products using that sort of tech shouldn't really be saying it's OK not to use ECC, even if it does work without it (albeit with fewer benefits). I certainly haven't come across anyone in a production setting running it without ECC <--- this could be a political firestorm ;D

    Quote Originally Posted by MAFoElffen View Post
    Back in 2005, memory was more costly and came in smaller sizes, so we "had" to make adjustments, which was sometimes a balancing act. The old Solaris administrator docs still have a lot of information. (1fallen tells me they make his head swim. LOL) So do the docs at OpenZFS: https://openzfs.github.io/openzfs-docs/man/index.html
    Mine too, since you mention it, hence going back to Windows. I haven't looked much at the OpenZFS docs, but I don't care too much for Oracle's docs either - probably because they're copy/pasted from Solaris. I actually find a quick Google and some well-known tutorial blog way more informative! Such as the link below.

    Quote Originally Posted by MAFoElffen View Post
    Then I came across it again with this: https://www.cyberciti.biz/faq/how-to...-debian-linux/

    L2ARC (2nd Level Adaptive Replacement Cache) is a hardware read cache... Adding one increases your read cache while reducing the need to hold it all in memory, so you can use less RAM. RE: https://docs.oracle.com/cd/E27998_01..._accesses.html
    I found that article very interesting. I have been playing with the ARC size this morning and am currently trialling 16GB as per your setup (half of RAM). So far it seems quite good. No buffering watching movies yet! I thought it was originally 16GB anyway? But then I noticed something:

    Code:
    arcstat
        time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  size     c  avail
    05:10:50     0     0      0     0    0     0    0     0    0  6.7G  6.7G    12G
    I may have messed up my bytes calculation though. I thought I had done 16GB, or maybe I've worked out 16GiB... so it's actually 12GB

    Code:
    options zfs zfs_arc_min=1073741824
    options zfs zfs_arc_max=7179869184
    Regardless, this does seem better.
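
    A quick check of the arithmetic (sketch only): 16GiB should come out noticeably larger than the max set above, which looks like it may simply be missing a leading digit.

    Code:
    echo $((16 * 1024**3))                              # 17179869184 -> zfs_arc_max for 16GiB
    awk 'BEGIN { printf "%.2f\n", 7179869184 / 2^30 }'  # ~6.69 GiB, matching the 6.7G "c" column above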

    Quote Originally Posted by MAFoElffen View Post
    Going to add it to the script tomorrow... but differently: if it exists, print out the values, with a note reminding people that if it exists and they later change the amount of RAM, they should re-evaluate their ARC settings... ARC is a read cache, sort of like what Intel RST was aiming at (but failed).
    Nice one

    Quote Originally Posted by MAFoElffen View Post
    I think about 512GB is a good size for those. At least in our tests.

    SLOG is the write cache. Most people say that a hardware SLOG should be about 10-16GB and not over... No one seemed to know why that number, or what would happen if it got bigger. Some say that if it is bigger, ZFS does not and will not use it. So I ignored that in our benchmarks to see what happens... Sometimes I have to see things for myself to confirm them. Go big or go broke, right? LOL. The performance kept climbing until about 1TB. It definitely can use over 16GB and be happy with it. If I made the SLOG bigger than 1TB, though, the performance started dropping off again. So 1TB was the sweet spot in our tests.
    Do you think I've overshot it a bit getting the 2TB then? To be fair, the 1TB wasn't much cheaper - but it hasn't dispatched yet, so I can change the order. But if you think the 2TB is beneficial I will keep it. The other option is... I do also have a 1TB in a tiny enclosure that I carry with my laptop. It would be minimal effort to copy its contents to the 2TB that's coming, and I could use that instead.

    So what are your thoughts at this point? Why has this happened? Do we still not know, or is it that I probably should have had L2ARC and SLOG on an NVMe from day 1 and I just 'somehow' never noticed a problem on Bionic? I'm totally perplexed how I've never had a problem until now. If each drive is rated at 180MB/sec, isn't that sufficient? This is what I don't understand! I have no doubt that the NVMe will improve things, but if there's still an underlying issue there, it'd be great to know. It would be a huge PITA, but I had considered just throwing Bionic on a spare SSD and seeing if the problems disappeared.
    Last edited by tkae-lp; December 2nd, 2023 at 04:36 PM.

  7. #27
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    This is what I would do... Like I instructed, allocate 2 partitions: 512GB to 1TB for the L2ARC cache, 1TB for the SLOG. The thing is, since one is read and the other is write, and it is SSD tech, they do pretty well even though they're on the same physical disk. (A rough sketch of the commands is at the end of this post.)

    I don't always allocate all my storage. Since mine is almost all ZFS pools (the same applies if you have LVM somewhere), having some unallocated space is always there for an emergency where you are tight on space. You could then add it wherever you needed it, or just use a partition as an rsync target for things...

    About the highest I would expect with yours being HDDs is 550 MB/s to 750 MB/s. I think I remember getting my old workstation up to 850 MB/s with an HDD pool and NVMe disk caches.
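
    A rough sketch of what that allocation could look like once the drive arrives. The device name, partition sizes and pool name here are assumptions (check lsblk first and adjust):

    Code:
    # assuming the 2TB NVMe appears as /dev/nvme0n1 and the pool is Tank
    sudo sgdisk -n 1:0:+1T /dev/nvme0n1          # partition 1: ~1TB for L2ARC
    sudo sgdisk -n 2:0:0   /dev/nvme0n1          # partition 2: remainder for SLOG
    sudo zpool add Tank cache /dev/nvme0n1p1     # attach the read cache (L2ARC)
    sudo zpool add Tank log   /dev/nvme0n1p2     # attach the separate intent log (SLOG)
    zpool status Tank                            # both should now show under the pool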
    Last edited by MAFoElffen; December 2nd, 2023 at 05:50 PM. Reason: Needed coffee.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  8. #28
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Brilliant, thanks. The NVMe arrives Monday and I do have some free time that day, so we shall see soon enough what magic that creates. I'll probably just split it 50/50 between L2ARC and SLOG. This machine backs up any important stuff that's not on the array (basically Docker config) to a MicroServer, so I can't actually think of anything I would use the spare space in that 2TB for.

    I've also heard back from the seller with about 700 of those MTA36ASF2G72PZ-2G1A2IJ modules, and they are telling me that they are compatible with the MTA36ASF2G72PZ-2G1A2IG modules; they say they are the same and it's just a different part number (reminds me of what my friend said about his dodgy boss!). This is great, because I could max out the board for another £72 and have the full 128GB of memory. We'll see what the CC balance looks like when the Christmas shopping is finished!

  9. #29
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    I got an itchy trigger finger and offered £60 for 4 of the MTA36ASF2G72PZ-2G1A2IJ's and they accepted.

    So at some point next week I'll be maxed out on RAM and have the NVMe. If that doesn't do it....!

    I know I said I wasn't going to, but wth. I'll just have to get my bum in gear and get some stuff on Gumtree! I've spent £230 on a server that hasn't had anything spent on it for over 5 years.

  10. #30
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Yes. 2 days.

    I think your calc on your max is off. Isn't 16GiB = 17179869184 bytes? I think you only set it to around 7GB...

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags
