
Thread: Seemingly sporadic slow ZFS IO since 22.04

  1. #1
    Join Date
    Nov 2023
    Beans
    76

    Seemingly sporadic slow ZFS IO since 22.04

    Hi,

    I'm a long-time Linux user but still consider myself a total amateur, so please excuse any blatant schoolboy errors and gaps in knowledge; I'm basically a hobbyist tinkerer.

    I've recently moved my main media server from Bionic to Jammy. Skipping Focal seemed sensible enough, but I'm starting to wish I hadn't: a bit of googling suggests there could be some ZFS issues with Jammy, although a lot of the posts I found were vague and/or unhelpful for my situation. I have no idea what is causing the issues I'm seeing, so I would appreciate some help diagnosing it.

    I have a ZFS array of 8 disks in RAIDZ2. The disks are a few years old but show no SMART errors, and they are aligned properly (as far as I can tell) with ashift=12. The main point is: on Bionic I never had an issue; since moving to Jammy, I have experienced daily slowdowns.

    What is confusing me is that I have checked iotop and sysstat and cannot find a process or application that is causing this, i.e. it's not something hammering the array, or a scrub running, when the slowdowns occur.
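
    A rough sketch of watching the pool directly rather than per-process, in case that helps narrow things down (my pool is called Tank; the latency columns need a reasonably recent zfs):

    Code:
    zpool iostat -v Tank 2     # per-vdev ops and bandwidth, refreshed every 2 seconds
    zpool iostat -vl Tank 2    # same, plus per-device latency columns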

    For example:

    Code:
    dd if=/dev/zero of=/mnt/Tank/testfile bs=1G count=6 oflag=dsync
    6+0 records in
    6+0 records out
    6442450944 bytes (6.4 GB, 6.0 GiB) copied, 477.699 s, 13.5 MB/s
    YUK! What the?!

    I would normally expect speeds many times higher, even under heavy use; that's certainly what I was seeing from this particular box on Bionic.

    Yet iostat -d 2 shows this whilst writing the testfile:

    Code:
    Device             tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd
    loop0             0.00         0.00         0.00         0.00          0          0          0
    loop1             0.00         0.00         0.00         0.00          0          0          0
    loop10            0.00         0.00         0.00         0.00          0          0          0
    loop11            0.00         0.00         0.00         0.00          0          0          0
    loop12            0.00         0.00         0.00         0.00          0          0          0
    loop13            0.00         0.00         0.00         0.00          0          0          0
    loop14            0.00         0.00         0.00         0.00          0          0          0
    loop2             0.00         0.00         0.00         0.00          0          0          0
    loop3             0.00         0.00         0.00         0.00          0          0          0
    loop4             0.00         0.00         0.00         0.00          0          0          0
    loop5             0.00         0.00         0.00         0.00          0          0          0
    loop6             0.00         0.00         0.00         0.00          0          0          0
    loop7             0.00         0.00         0.00         0.00          0          0          0
    loop8             0.00         0.00         0.00         0.00          0          0          0
    loop9             0.00         0.00         0.00         0.00          0          0          0
    sda              10.50        22.00        46.00         0.00         44         92          0
    sdb              14.50        10.00       114.00         0.00         20        228          0
    sdc              10.00        10.00        46.00         0.00         20         92          0
    sdd               9.00        22.00        42.00         0.00         44         84          0
    sde               0.00         0.00         0.00         0.00          0          0          0
    sdf               8.50        10.00        42.00         0.00         20         84          0
    sdg              91.50         0.00       792.00         0.00          0       1584          0
    sdh               0.00         0.00         0.00         0.00          0          0          0
    sdi              12.00        10.00        56.00         0.00         20        112          0
    sdj              11.50        22.00        52.00         0.00         44        104          0
    Note: sde and sdg are NOT part of the array.

    And iotop -o shows only some Docker apps and the occasional journald, all of which were present on Bionic and didn't cause a problem. I have stopped the Docker containers to confirm, and the problem doesn't go away.
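
    Roughly what I mean by stopping them - a sketch, assuming everything is managed by plain docker rather than compose:

    Code:
    RUNNING=$(docker ps -q)    # remember which containers were running
    docker stop $RUNNING       # stop them, then re-run the dd test
    docker start $RUNNING      # bring the same set back afterwards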

    Here it is repeated 10 mins later:

    Code:
    rm  /mnt/Tank/testfile && dd if=/dev/zero  of=/mnt/Tank/testfile bs=1G count=6 oflag=dsync
    6+0 records in
    6+0 records out
    6442450944 bytes (6.4 GB, 6.0 GiB) copied, 39.5305 s, 163 MB/s
    and again 10 mins later:

    Code:
    rm /mnt/Tank/testfile && dd if=/dev/zero of=/mnt/Tank/testfile bs=1G count=6 oflag=dsync
    6+0 records in
    6+0 records out
    6442450944 bytes (6.4 GB, 6.0 GiB) copied, 28.5787 s, 225 MB/s
    So here's the interesting bit: if I reboot, there's no change. If I export the pool and re-import it, I get "up to a day" of good performance again.
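
    By export and re-import I mean roughly this - a sketch, and it only works if nothing still has files open on the pool:

    Code:
    sudo zpool export Tank    # refuses if the pool is busy
    sudo zpool import Tank    # re-import; datasets remount automatically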

    The tank is fairly well used, but not critically full, fragmented, unhealthy, etc.:

    Code:
    zpool get capacity,size,health,fragmentation
    NAME       PROPERTY       VALUE     SOURCE
    Tank  capacity       67%       -
    Tank  size           29T       -
    Tank  health         ONLINE    -
    Tank  fragmentation  2%        -
    There are no snaps. The box has plenty of free memory and an E5 Xeon. I'm really scratching my head here. I'm hoping this is something really silly I've done, because I'm considering testing the array on Focal if I can't get this sorted. Any help greatly appreciated!
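
    For what it's worth, the "no snaps / plenty of memory" bit is based on nothing more exotic than this - a rough sketch, pool name Tank:

    Code:
    zfs list -t snapshot -r Tank     # "no datasets available" if there really are no snapshots
    free -h                          # memory headroom
    zpool status Tank | grep scan    # confirm no scrub or resilver is running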
    Last edited by tkae-lp; December 11th, 2023 at 05:47 AM. Reason: Corrected release code name error

  2. #2
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    I have 4 systems here as Ubuntu 22.04 and ZFS-On-Root.

    Please show the output of these within CODE Tags:
    Code:
    sudo zpool status -v <your_raidz_poolname> # Post the output of this within CODE Tags
    arc_summary > ./arc_summary.txt # Attach this to a post
    Then, in a terminal, change directory to somewhere inside that pool's filesystem, then do
    Code:
    sudo apt update
    sudo apt install fio
    # navigate to where you want the benchmark to test. Make sure you have at least 2GB free space.
    fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    Here are the results of that on one of mine, a 5-disk RAIDZ2...
    Code:
    mafoelffen@Mikes-B460M:/data$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                          
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=3678515: Tue Nov 28 09:46:11 2023
      write: IOPS=1032, BW=1033MiB/s (1083MB/s)(10.0GiB/9917msec); 0 zone resets
        slat (usec): min=115, max=9460, avg=486.33, stdev=526.56
        clat (nsec): min=1329, max=4938.7M, avg=29869302.58, stdev=270328002.16
         lat (usec): min=150, max=4940.0k, avg=30355.88, stdev=270371.59
        clat percentiles (msec):
         |  1.00th=[    5],  5.00th=[    6], 10.00th=[    6], 20.00th=[    6],
         | 30.00th=[    6], 40.00th=[    6], 50.00th=[    8], 60.00th=[   10],
         | 70.00th=[   12], 80.00th=[   32], 90.00th=[   35], 95.00th=[   53],
         | 99.00th=[   74], 99.50th=[   78], 99.90th=[ 4933], 99.95th=[ 4933],
         | 99.99th=[ 4933]
       bw (  MiB/s): min=  452, max= 5082, per=100.00%, avg=1993.80, stdev=1665.68, samples=10
       iops        : min=  452, max= 5082, avg=1993.80, stdev=1665.68, samples=10
      lat (usec)   : 2=0.02%, 4=0.02%, 50=0.01%, 250=0.02%, 500=0.05%
      lat (usec)   : 750=0.04%, 1000=0.06%
      lat (msec)   : 2=0.17%, 4=0.35%, 10=62.03%, 20=12.71%, 50=18.79%
      lat (msec)   : 100=5.43%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=523, max=523, avg=523.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[  524],  5.00th=[  524], 10.00th=[  524], 20.00th=[  524],
         | 30.00th=[  524], 40.00th=[  524], 50.00th=[  524], 60.00th=[  524],
         | 70.00th=[  524], 80.00th=[  524], 90.00th=[  524], 95.00th=[  524],
         | 99.00th=[  524], 99.50th=[  524], 99.90th=[  524], 99.95th=[  524],
         | 99.99th=[  524]
      cpu          : usr=4.03%, sys=28.00%, ctx=25070, majf=14, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=1033MiB/s (1083MB/s), 1033MiB/s-1033MiB/s (1083MB/s-1083MB/s), io=10.0GiB (10.7GB), run=9917-9917msec
    
    mafoelffen@Mikes-B460M:/data$ sudo zpool status -v datapool
    [sudo] password for mafoelffen:
      pool: datapool
     state: ONLINE
    status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
    action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
       see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
      scan: scrub repaired 0B in 00:20:08 with 0 errors on Wed Nov 22 08:44:43 2023
    config:
    
        NAME                                                   STATE     READ WRITE CKSUM
        datapool                                               ONLINE       0     0     0
          raidz2-0                                             ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_2TB_S6PNNM0TA09560A-part1  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_2TB_S6PNNM0TA11601H-part1  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_2TB_S6PNNM0TA47393M-part1  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W330507J-part1  ONLINE       0     6     0
            ata-Samsung_SSD_870_EVO_2TB_S6PNNM0TB08933B-part1  ONLINE       0     0     0
        logs    
          nvme-Samsung_SSD_970_EVO_2TB_S464NB0KB10521K-part2   ONLINE       0     0     0
        cache
          nvme-Samsung_SSD_970_EVO_2TB_S464NB0KB10521K-part1   ONLINE       0     0     0
    
    errors: No known data errors
    Dang. (Doing a Scrub on that pool now...)

    EDIT: Scrub done. Reran benchmark. Same test results (same speed).
    Last edited by MAFoElffen; November 28th, 2023 at 09:22 PM. Reason: Update

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  3. #3
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Hi,

    Thanks for the comprehensive reply. Yeah this is an odd one. I've been using ZFS on Ubuntu for years and have never had any issues until now, not even the slightest blip.

    As requested:

    Code:
    sudo zpool status -v Tank
      pool: Tank
     state: ONLINE
      scan: scrub repaired 0B in 10:00:01 with 0 errors on Mon Nov 13 23:01:24 2023
    config:
    
        NAME                                 STATE     READ WRITE CKSUM
        Tank                                  ONLINE       0     0     0
          raidz2-0                           ONLINE       0     0     0
            ata-ST4000DM000-1F2168_S300XXXX  ONLINE       0     0     0
            ata-ST4000DM004-2CV104_ZTT4XXXX  ONLINE       0     0     0
            ata-ST4000DM004-2CV104_ZTT4XXXX  ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX  ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX  ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX  ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX  ONLINE       0     0     0
            ata-ST4000DM000-1F2168_W300XXXX  ONLINE       0     0     0
    
    errors: No known data errors
    arc_summary: https://pastebin.ubuntu.com/p/5ZDtsNzX7v/
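
    If it saves anyone opening the pastebin, the headline ARC numbers can also be pulled straight from arcstats - a rough sketch:

    Code:
    grep -E '^(size|c_min|c_max|memory_throttle_count) ' /proc/spl/kstat/zfs/arcstats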

    FIO:

    Code:
    fio --name TEST --eta-newline=5s --filename=temp.file --rw=write  --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio  --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60  --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    TEST: Laying out IO file (1 file / 2048MiB)
    Jobs: 1 (f=1): [W(1)][46.7%][w=274MiB/s][w=274 IOPS][eta 00m:08s]
    Jobs: 1 (f=1): [W(1)][65.0%][w=392MiB/s][w=392 IOPS][eta 00m:07s] 
    Jobs: 1 (f=1): [W(1)][73.1%][w=40.0MiB/s][w=40 IOPS][eta 00m:07s] 
    Jobs: 1 (f=1): [W(1)][73.5%][w=63.0MiB/s][w=63 IOPS][eta 00m:09s] 
    Jobs: 1 (f=1): [W(1)][83.8%][w=289MiB/s][w=289 IOPS][eta 00m:06s] 
    Jobs: 1 (f=1): [W(1)][97.4%][w=232MiB/s][w=232 IOPS][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][97.7%][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][97.8%][eta 00m:01s]
    TEST: (groupid=0, jobs=1): err= 0: pid=1241014: Tue Nov 28 23:44:29 2023
      write: IOPS=235, BW=235MiB/s (247MB/s)(10.0GiB/43556msec); 0 zone resets
        slat (usec): min=252, max=53915, avg=3569.49, stdev=4839.84
        clat (usec): min=3, max=7052.4k, avg=131421.17, stdev=405561.87
         lat (usec): min=313, max=7056.0k, avg=134991.41, stdev=407262.28
        clat percentiles (msec):
         |  1.00th=[   11],  5.00th=[   13], 10.00th=[   16], 20.00th=[   21],
         | 30.00th=[   63], 40.00th=[   72], 50.00th=[   81], 60.00th=[   88],
         | 70.00th=[   99], 80.00th=[  120], 90.00th=[  207], 95.00th=[  426],
         | 99.00th=[  944], 99.50th=[ 1028], 99.90th=[ 7013], 99.95th=[ 7013],
         | 99.99th=[ 7080]
       bw (  KiB/s): min=28672, max=2134016, per=100.00%, avg=280756.99, stdev=332876.74, samples=74
       iops        : min=   28, max= 2084, avg=274.18, stdev=325.08, samples=74
      lat (usec)   : 4=0.02%, 10=0.03%, 500=0.02%, 1000=0.01%
      lat (msec)   : 2=0.05%, 4=0.13%, 10=0.63%, 20=18.89%, 50=7.08%
      lat (msec)   : 100=43.91%, 250=21.15%, 500=4.10%, 750=2.07%, 1000=1.32%
      lat (msec)   : 2000=0.29%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1223, max=1223, avg=1223.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1224],  5.00th=[ 1224], 10.00th=[ 1224], 20.00th=[ 1224],
         | 30.00th=[ 1224], 40.00th=[ 1224], 50.00th=[ 1224], 60.00th=[ 1224],
         | 70.00th=[ 1224], 80.00th=[ 1224], 90.00th=[ 1224], 95.00th=[ 1224],
         | 99.00th=[ 1224], 99.50th=[ 1224], 99.90th=[ 1224], 99.95th=[ 1224],
         | 99.99th=[ 1224]
      cpu          : usr=1.68%, sys=10.99%, ctx=76215, majf=0, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=235MiB/s (247MB/s), 235MiB/s-235MiB/s (247MB/s-247MB/s), io=10.0GiB (10.7GB), run=43556-43556msec
    and again 10 mins later:

    Code:
    fio --name TEST  --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g  --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32  --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    TEST: Laying out IO file (1 file / 2048MiB)
    Jobs: 1 (f=1): [W(1)][11.7%][eta 00m:53s]
    Jobs: 1 (f=1): [W(1)][20.0%][w=3072KiB/s][w=3 IOPS][eta 00m:48s]
    Jobs: 1 (f=1): [W(1)][30.0%][w=3075KiB/s][w=3 IOPS][eta 00m:42s] 
    Jobs: 1 (f=1): [W(1)][40.0%][w=4096KiB/s][w=4 IOPS][eta 00m:36s] 
    Jobs: 1 (f=1): [W(1)][50.0%][w=4100KiB/s][w=4 IOPS][eta 00m:30s] 
    Jobs: 1 (f=1): [W(1)][60.0%][w=3072KiB/s][w=3 IOPS][eta 00m:24s] 
    Jobs: 1 (f=1): [W(1)][70.0%][w=2050KiB/s][w=2 IOPS][eta 00m:18s] 
    Jobs: 1 (f=1): [W(1)][80.0%][w=2048KiB/s][w=2 IOPS][eta 00m:12s] 
    Jobs: 1 (f=1): [W(1)][90.0%][w=3072KiB/s][w=3 IOPS][eta 00m:06s] 
    Jobs: 1 (f=1): [W(1)][100.0%][w=3075KiB/s][w=3 IOPS][eta 00m:00s]
    Jobs: 1 (f=1): [W(1)][1.4%][w=3072KiB/s][w=3 IOPS][eta 01h:10m:46s]
    TEST: (groupid=0, jobs=1): err= 0: pid=1368279: Tue Nov 28 23:53:39 2023
      write: IOPS=2, BW=3003KiB/s (3075kB/s)(177MiB/60356msec); 0 zone resets
        slat (msec): min=221, max=660, avg=340.97, stdev=83.41
        clat (usec): min=13, max=14102k, avg=9758855.47, stdev=3033929.82
         lat (msec): min=363, max=14413, avg=10099.83, stdev=3058.23
        clat percentiles (msec):
         |  1.00th=[  363],  5.00th=[ 2567], 10.00th=[ 5470], 20.00th=[ 8221],
         | 30.00th=[ 8792], 40.00th=[ 9597], 50.00th=[10268], 60.00th=[10671],
         | 70.00th=[11073], 80.00th=[12281], 90.00th=[13355], 95.00th=[13758],
         | 99.00th=[14026], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160],
         | 99.99th=[14160]
       bw (  KiB/s): min= 2048, max= 4104, per=99.90%, avg=3000.53, stdev=1026.58, samples=99
       iops        : min=    2, max=    4, avg= 2.93, stdev= 1.00, samples=99
      lat (usec)   : 20=0.56%
      lat (msec)   : 500=0.56%, 750=0.56%, 1000=0.56%, 2000=1.69%, >=2000=96.05%
      cpu          : usr=0.03%, sys=0.28%, ctx=1432, majf=0, minf=10
      IO depths    : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.5%, 16=9.0%, 32=82.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.7%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,177,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=3003KiB/s (3075kB/s), 3003KiB/s-3003KiB/s (3075kB/s-3075kB/s), io=177MiB (186MB), run=60356-60356msec
    But if I use my previous testing method as something to measure against, I don't appear to be in a massive slowdown right now:

    Code:
    dd if=/dev/zero of=/mnt/Tank/testfile bs=1G count=6 oflag=dsync 
    6+0 records in
    6+0 records out
    6442450944 bytes (6.4 GB, 6.0 GiB) copied, 56.0559 s, 115 MB/s
    I may have to monitor this for a couple of days and re-run fio when I see things drop to the abysmal 35MB/s... unless anything stands out to you? Nothing on a physical level has changed since Bionic. This has to be some sort of config issue?
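
    To catch the bad periods in the meantime, I'm thinking of logging a small timed write every 10 minutes - a rough sketch, and the path, size and interval are just what I'd pick:

    Code:
    while true; do
        printf '%s ' "$(date -Is)" >> ~/tank_write_log.txt
        dd if=/dev/zero of=/mnt/Tank/testfile bs=1G count=2 oflag=dsync 2>&1 | tail -n 1 >> ~/tank_write_log.txt
        rm -f /mnt/Tank/testfile
        sleep 600
    done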


    Edit: here's another one:

    Code:
    fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    TEST: Laying out IO file (1 file / 2048MiB)
    Jobs: 1 (f=1): [W(1)][23.3%][w=134MiB/s][w=134 IOPS][eta 00m:23s]
    Jobs: 1 (f=1): [W(1)][35.1%][w=188MiB/s][w=188 IOPS][eta 00m:24s] 
    Jobs: 1 (f=1): [W(1)][44.2%][w=167MiB/s][w=167 IOPS][eta 00m:24s] 
    Jobs: 1 (f=1): [W(1)][49.0%][eta 00m:26s]                         
    Jobs: 1 (f=1): [W(1)][51.7%][eta 00m:29s]                       
    Jobs: 1 (f=1): [W(1)][61.7%][w=4100KiB/s][w=4 IOPS][eta 00m:23s] 
    Jobs: 1 (f=1): [W(1)][71.7%][eta 00m:17s]                        
    Jobs: 1 (f=1): [W(1)][81.7%][w=180MiB/s][w=180 IOPS][eta 00m:11s]
    Jobs: 1 (f=1): [W(1)][91.7%][eta 00m:05s]                        
    Jobs: 1 (f=1): [W(1)][100.0%][w=5120KiB/s][w=5 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=1759323: Wed Nov 29 00:20:56 2023
      write: IOPS=100, BW=101MiB/s (106MB/s)(6076MiB/60205msec); 0 zone resets
        slat (usec): min=49, max=6932.0k, avg=9330.20, stdev=147502.33
        clat (msec): min=2, max=11969, avg=307.67, stdev=1279.78
         lat (msec): min=2, max=11969, avg=317.00, stdev=1303.51
        clat percentiles (msec):
         |  1.00th=[   39],  5.00th=[   57], 10.00th=[   65], 20.00th=[   65],
         | 30.00th=[   69], 40.00th=[   77], 50.00th=[   80], 60.00th=[  124],
         | 70.00th=[  167], 80.00th=[  186], 90.00th=[  262], 95.00th=[  472],
         | 99.00th=[10000], 99.50th=[11208], 99.90th=[12013], 99.95th=[12013],
         | 99.99th=[12013]
       bw (  KiB/s): min= 2048, max=466944, per=100.00%, avg=179392.93, stdev=151093.97, samples=69
       iops        : min=    2, max=  456, avg=175.19, stdev=147.55, samples=69
      lat (msec)   : 4=0.03%, 10=0.07%, 20=0.28%, 50=0.87%, 100=53.83%
      lat (msec)   : 250=34.53%, 500=5.74%, 750=1.10%, 1000=1.04%, 2000=0.64%
      lat (msec)   : >=2000=1.86%
      cpu          : usr=0.60%, sys=0.72%, ctx=2228, majf=0, minf=12
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,6076,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=6076MiB (6371MB), run=60205-60205msec
    
    Disk stats (read/write):
      sdg: ios=153/12344, merge=59/268, ticks=133897/2984697, in_queue=3133595, util=98.20%
    Last edited by tkae-lp; November 29th, 2023 at 01:24 AM. Reason: Another fio output

  4. #4
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    This one could be of interest. I was trying to watch a 4K film and it kept buffering:

    Code:
    6+0 records in
    6+0 records out
    6442450944 bytes (6.4 GB, 6.0 GiB) copied, 166.54 s, 38.7 MB/s
    FIO:

    Code:
    fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    TEST: Laying out IO file (1 file / 2048MiB)
    Jobs: 1 (f=1): [W(1)][46.7%][w=544MiB/s][w=543 IOPS][eta 00m:08s]
    Jobs: 1 (f=1): [W(1)][81.2%][w=539MiB/s][w=538 IOPS][eta 00m:03s] 
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]                        
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s] 
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s] 
    Jobs: 1 (f=1): [W(1)][97.4%][eta 00m:01s]  
    Jobs: 1 (f=1): [W(1)][97.7%][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][98.0%][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][98.2%][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][98.4%][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][98.5%][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][98.6%][eta 00m:01s] 
    Jobs: 1 (f=1): [W(1)][97.5%][eta 00m:02s] 
    Jobs: 1 (f=1): [W(1)][97.6%][eta 00m:02s] 
    TEST: (groupid=0, jobs=1): err= 0: pid=2088223: Wed Nov 29 03:39:02 2023
      write: IOPS=123, BW=123MiB/s (129MB/s)(9.77GiB/81154msec); 0 zone resets
        slat (usec): min=330, max=2347, avg=1662.85, stdev=375.10
        clat (usec): min=8, max=64494k, avg=251199.55, stdev=3581236.17
         lat (usec): min=1764, max=64496k, avg=252863.29, stdev=3581238.17
        clat percentiles (msec):
         |  1.00th=[   16],  5.00th=[   29], 10.00th=[   30], 20.00th=[   44],
         | 30.00th=[   55], 40.00th=[   57], 50.00th=[   57], 60.00th=[   59],
         | 70.00th=[   59], 80.00th=[   59], 90.00th=[   59], 95.00th=[   59],
         | 99.00th=[   64], 99.50th=[   66], 99.90th=[17113], 99.95th=[17113],
         | 99.99th=[17113]
       bw (  KiB/s): min=241664, max=1162986, per=100.00%, avg=600417.24, stdev=172620.70, samples=34
       iops        : min=  236, max= 1135, avg=586.32, stdev=168.50, samples=34
      lat (usec)   : 10=0.04%
      lat (msec)   : 2=0.03%, 4=0.04%, 10=0.12%, 20=1.00%, 50=18.98%
      lat (msec)   : 100=79.48%, >=2000=0.31%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=2055, max=2055, avg=2055.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 2064],  5.00th=[ 2064], 10.00th=[ 2064], 20.00th=[ 2064],
         | 30.00th=[ 2064], 40.00th=[ 2064], 50.00th=[ 2064], 60.00th=[ 2064],
         | 70.00th=[ 2064], 80.00th=[ 2064], 90.00th=[ 2064], 95.00th=[ 2064],
         | 99.00th=[ 2064], 99.50th=[ 2064], 99.90th=[ 2064], 99.95th=[ 2064],
         | 99.99th=[ 2064]
      cpu          : usr=0.79%, sys=7.03%, ctx=66585, majf=0, minf=16
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10000,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=9.77GiB (10.5GB), run=81154-81154msec
    The job percentages are very different this time.

  5. #5
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    That is crazy how much difference there is in this:
    Code:
    WRITE: bw=235MiB/s (247MB/s), 235MiB/s-235MiB/s (247MB/s-247MB/s), io=10.0GiB (10.7GB), run=43556-43556msec
    WRITE: bw=3003KiB/s (3075kB/s), 3003KiB/s-3003KiB/s (3075kB/s-3075kB/s), io=177MiB (186MB), run=60356-60356msec
    WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=6076MiB (6371MB), run=60205-60205msec
    The drives are rated at 180MB/s each. You have my interest and curiosity.
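
    As a side check, the raw sequential read speed of each member disk can be compared against that rating - a sketch; substitute your actual array member devices for the glob:

    Code:
    for d in /dev/sd[a-j]; do
        sudo hdparm -t "$d"    # buffered, uncached sequential read timing per disk
    done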

    Please post the output of these:
    Code:
    sudo zpool list -o name,free,cap,frag,ashift tank
    sudo zfs list -r -o name,used,usedbydataset,avail,compressratio,recordsize,mounted tank
    Then run the 'system-info' script in my signature line... but before you run it, start it with this option:
    Code:
    ./system-info --details
    That will turn on more details about the storage controllers and the ZFS filesystems for the report. Please choose to upload it to a pastebin and post the URL to it.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  6. #6
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    I had considered faulty hardware such as SATA cables, but the issue occurred exactly at the point of moving to Jammy.

    Code:
    sudo zpool list -o name,free,cap,frag,ashift Tank 
    NAME        FREE    CAP   FRAG  ASHIFT
    Tank       9.29T    67%     2%      12

    Code:
    sudo zfs list -r -o name,used,usedbydataset,avail,compressratio,recordsize,mounted Tank 
    NAME                USED  USEDDS  AVAIL  RATIO  RECSIZE  MOUNTED
    Tank               14.0T   14.0T  6.36T  1.00x     128K  yes
    Tank/Docker        14.2G   14.2G  6.36T  1.10x     128K  yes
    Tank/Backups       15.5G   15.5G  6.36T  1.05x     128K  yes
    Script output: https://paste.ubuntu.com/p/96zC2YWRCG/

  7. #7
    Join Date
    Aug 2016
    Beans
    Hidden!

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by MAFoElffen View Post
    That is crazy how much difference there is in this:
    My Interest as well:
    Code:
    Run status group 0 (all jobs):
      WRITE: bw=1788MiB/s (1875MB/s), 1788MiB/s-1788MiB/s (1875MB/s-1875MB/s), io=10.0GiB (10.7GB), run=5728-5728msec
    Your cap may come into play; mine:
    Code:
    └─> sudo zpool list -o name,free,cap,frag,ashift rpool
    NAME    FREE    CAP   FRAG  ASHIFT
    rpool   222G     4%     1%      12
    ┌───────────────────>
    │~ 
    └─> sudo zpool list -o name,free,cap,frag,ashift bpool
    NAME    FREE    CAP   FRAG  ASHIFT
    bpool  1.75G     6%     0%      12

  8. #8
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    I see you have 11 SATA drives, with the OS on an SSD.

    I'm seeing an NVidia GPU, which is probably in Slot 6.

    I see 6 SATA ports on the mainboard, with another SATA HBA possibly in slot #5(?)

    Which of those disks are on the mainboard and which are on the HBA?
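
    A quick way to see that mapping, if it isn't obvious from the cabling - just a sketch:

    Code:
    lsblk -S -o NAME,HCTL,TRAN,MODEL    # HCTL/transport shows which controller each disk hangs off
    ls -l /dev/disk/by-path/            # by-path names encode the PCI address of the controller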

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  9. #9
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Dang. I was writing this and accidentally closed the tab. Ouch. Writing it again. LOL

    I see 32GB of memory and 21TB of zpool. 8GB + (1GB/TB x 21TB = 21GB) = 29GB just for ZFS... leaving 3GB. ARC is going to take up to 10.5GB...

    If you are watching movies when this happens, then it is doing reads, and using ARC...

    Next time it does that, try this:
    Code:
    # save the current value so it can be restored afterwards
    RESET=$(cat /sys/module/zfs/parameters/zfs_arc_shrinker_limit)
    # plain 'sudo echo 0 > ...' fails because the redirection runs as your user, so use tee
    echo 0 | sudo tee /sys/module/zfs/parameters/zfs_arc_shrinker_limit
    echo 3 | sudo tee /proc/sys/vm/drop_caches
    See if that improves things; it will clear the ARC caches as if you had exported and imported the zpool... That would free up about 8GB of memory...

    Then later do
    Code:
    echo "$RESET" | sudo tee /sys/module/zfs/parameters/zfs_arc_shrinker_limit
    That will reset it back to its default of 10000. If that solves it, then I'll describe how to set the ARC size a bit lower. Or, do you have any free SATA ports left?
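
    For reference, the usual knob for lowering the ARC is the zfs_arc_max module parameter - a sketch only; the 8GiB figure is just an example, and a real value would be picked after seeing how the test above goes:

    Code:
    # cap the ARC at 8 GiB (8 * 1024^3 bytes), persistent across reboots
    echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf
    sudo update-initramfs -u
    # or apply immediately without rebooting:
    echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max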

    Another thing I saw is that you have an NVidia card and 11 SATA drives. There are 6 SATA ports on the mainboard. I'm assuming that you have the NVidia card in Slot 6, and the other SATA HBA in Slot 5?

    Slot 6 is Gen 3.0 and Slot 5 is Gen 2....
    Code:
    PCI Express: Unidirectional Bandwidth in x1 and x16 Configurations
    
    Generation     Year of Release     Data Transfer Rate     Bandwidth x1     Bandwidth x16
    PCIe 1.0       2003                2.5 GT/s               250 MB/s         4.0 GB/s
    PCIe 2.0       2007                5.0 GT/s               500 MB/s         8.0 GB/s
    PCIe 3.0       2010                8.0 GT/s               1 GB/s           16 GB/s
    PCIe 4.0       2017                16 GT/s                2 GB/s           32 GB/s
    PCIe 5.0       2019                32 GT/s                4 GB/s           64 GB/s
    PCIe 6.0       2021                64 GT/s                8 GB/s           128 GB/s
    You could try swapping the GPU card and the HBA...

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  10. #10
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by 1fallen View Post
    Your cap may come into play
    I had wondered about this, but my limited understanding of ZFS is that performance shouldn't degrade by any meaningful amount until about 80% capacity? And why didn't it happen on Bionic?

    Quote Originally Posted by MAFoElffen View Post
    Or, do you have any free SATA ports left?
    There is an unused NVMe slot; are you thinking I should throw something in there for ARC and logs? (Edit: and, to be honest, along with the NVMe, it's probably time to drop some more memory in?)

    Quote Originally Posted by MAFoElffen View Post
    Another thing I saw is that you have an NVidia card and 11 SATA drives. There are 6 SATA ports on the mainboard. I'm assuming that you have the NVidia card in Slot 6, and the other SATA HBA in Slot 5?
    No, there are 10 onboard SATA ports; the board is a beast (or was at the time!) and I don't have an HBA card. 8 x 4TB SATA disks for the array, 1 x 500GB SATA for downloads (torrents/nzb, so they don't hammer the array), and 1 x SATA SSD for the OS... all on the mainboard.

    The GPU is in slot 5, I think? IIRC this is because it touches the Noctua CPU heatsink if it's in slot 6.

    Here is the board: https://www.asrock.com/mb/Intel/X99%20WS/ it's from the workstation line.

    The only thing I miss is IPMI, but we got it cheap and it supports ECC so that's what we ended up with.

    Quote Originally Posted by MAFoElffen View Post
    Next time it does that, try this
    Thanks - watch this space... I'll report back ASAP.

    Edit: If the ARC is the issue and more memory is required, what I don't get is how significantly different Jammy is from Bionic. I always start with a minimal server install, and then, because I do like a GUI, add a bare-minimum Xfce install without using group packages wherever possible. Is Jammy really using that much more memory than Bionic?!
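
    A sketch of what I'll compare between the two installs (as I understand it, a zfs_arc_max of 0 just means the default of roughly half of RAM):

    Code:
    cat /sys/module/zfs/parameters/zfs_arc_max    # 0 = default (about half of RAM on Linux)
    awk '/^(size|c_max) / {printf "%s %.1f GiB\n", $1, $3/1024/1024/1024}' /proc/spl/kstat/zfs/arcstats
    free -h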
    Last edited by tkae-lp; December 23rd, 2023 at 07:20 PM. Reason: Additional info about GPU placement. Comment about more memory. Comment about Bionic/Jammy
