
Thread: Seemingly sporadic slow ZFS IO since 22.04

  1. #101
    Join Date
    Aug 2016
    Location
    Wandering
    Beans
    Hidden!
    Distro
    Xubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by tkae-lp
    I'm sorry you guys have been afflicted with this, but in some ways this may be a good thing, as you will be able to dig around and diagnose on your boxes more easily, and you both know much more about ZFS than I do.

    Please let me know if there is anything I can do/try/test.
    No, the speed is not up to snuff, but we have nowhere to start, so I prod and prod until something reveals the problem.

    And don't be sorry; this is how we help fix new or existing bugs... it takes a community.

    Just my notes; disregard anything below:
    Code:
    $zdb
    
    tank:
        version: 5000
        name: 'tank'
        state: 0
        txg: 19707
        pool_guid: 10926758658995060061
        errata: 0
        hostid: 320048488
        hostname: 'parrot'
        com.delphix:has_per_vdev_zaps
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 10926758658995060061
            create_txg: 4
            com.klarasystems:vdev_zap_root: 129
            children[0]:
                type: 'disk'
                id: 0
                guid: 10795441740398861942
                path: '/dev/sdb2'
                whole_disk: 0
                metaslab_array: 256
                metaslab_shift: 32
                ashift: 11
                asize: 499564150784
                is_log: 0
                DTL: 1548
                create_txg: 4
                com.delphix:vdev_zap_leaf: 130
                com.delphix:vdev_zap_top: 131
        features_for_read:
            com.delphix:hole_birth
            com.delphix:embedded_data
            com.klarasystems:vdev_zaps_v2
    ┌─[me@parrot]─[~]
    └──╼ $zpool get all
    NAME  PROPERTY                       VALUE                          SOURCE
    tank  size                           464G                           -
    tank  capacity                       27%                            -
    tank  altroot                        -                              default
    tank  health                         ONLINE                         -
    tank  guid                           10926758658995060061           -
    tank  version                        -                              default
    tank  bootfs                         -                              default
    tank  delegation                     on                             default
    tank  autoreplace                    off                            default
    tank  cachefile                      -                              default
    tank  failmode                       wait                           default
    tank  listsnapshots                  off                            default
    tank  autoexpand                     off                            default
    tank  dedupratio                     1.00x                          -
    tank  free                           338G                           -
    tank  allocated                      126G                           -
    tank  readonly                       off                            -
    tank  ashift                         0                              default
    tank  comment                        -                              default
    tank  expandsize                     -                              -
    tank  freeing                        0                              -
    tank  fragmentation                  0%                             -
    tank  leaked                         0                              -
    tank  multihost                      off                            default
    tank  checkpoint                     -                              -
    tank  load_guid                      2699787098362106378            -
    tank  autotrim                       off                            default
    tank  compatibility                  off                            default
    tank  bcloneused                     0                              -
    tank  bclonesaved                    0                              -
    tank  bcloneratio                    1.00x                          -
    tank  feature@async_destroy          enabled                        local
    tank  feature@empty_bpobj            active                         local
    tank  feature@lz4_compress           active                         local
    tank  feature@multi_vdev_crash_dump  enabled                        local
    tank  feature@spacemap_histogram     active                         local
    tank  feature@enabled_txg            active                         local
    tank  feature@hole_birth             active                         local
    tank  feature@extensible_dataset     active                         local
    tank  feature@embedded_data          active                         local
    tank  feature@bookmarks              enabled                        local
    tank  feature@filesystem_limits      enabled                        local
    tank  feature@large_blocks           enabled                        local
    tank  feature@large_dnode            enabled                        local
    tank  feature@sha512                 enabled                        local
    tank  feature@skein                  enabled                        local
    tank  feature@edonr                  enabled                        local
    tank  feature@userobj_accounting     active                         local
    tank  feature@encryption             enabled                        local
    tank  feature@project_quota          active                         local
    tank  feature@device_removal         enabled                        local
    tank  feature@obsolete_counts        enabled                        local
    tank  feature@zpool_checkpoint       enabled                        local
    tank  feature@spacemap_v2            active                         local
    tank  feature@allocation_classes     enabled                        local
    tank  feature@resilver_defer         enabled                        local
    tank  feature@bookmark_v2            enabled                        local
    tank  feature@redaction_bookmarks    enabled                        local
    tank  feature@redacted_datasets      enabled                        local
    tank  feature@bookmark_written       enabled                        local
    tank  feature@log_spacemap           active                         local
    tank  feature@livelist               enabled                        local
    tank  feature@device_rebuild         enabled                        local
    tank  feature@zstd_compress          enabled                        local
    tank  feature@draid                  enabled                        local
    tank  feature@zilsaxattr             active                         local
    tank  feature@head_errlog            active                         local
    tank  feature@blake3                 enabled                        local
    tank  feature@block_cloning          enabled                        local
    tank  feature@vdev_zaps_v2           active                         local
    This is one problem on mine:
    Code:
    tank  ashift                         0
    This will cause very slow write speeds.
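    (Note to self: the property showing 0 just means ashift was never set explicitly and ZFS auto-detected it; the zdb output above shows the vdev was actually created with ashift=11, i.e. 2048-byte alignment, which can hurt writes on a drive whose physical sectors are really 4096 bytes.) If I rebuild this pool I would check the sector sizes first and pin ashift at creation time, along these lines (a sketch; ashift=12 is the usual safe choice for modern drives, and zpool create wipes whatever is on the device):
    Code:
    # Check what the drive reports for logical/physical sector sizes
    lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdb

    # Recreate the pool with an explicit ashift (2^12 = 4096-byte alignment)
    sudo zpool create -f -o ashift=12 tank /dev/sdb2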
    I'm not done yet, so still disregard (I need these notes in case I blow my system up, LOL).
    My history:
    Code:
    sudo zpool history tank
    History for 'tank':
    2023-11-29.13:28:07 zpool create -f tank /dev/sda2
    2023-11-29.14:47:10 zpool export tank
    2023-11-29.20:28:10 zpool import tank
    2023-11-30.07:35:53 zpool import tank
    2023-11-30.08:54:09 zpool import -c /etc/zfs/zpool.cache -aN
    2023-11-30.12:43:22 zpool import -c /etc/zfs/zpool.cache -aN
    2023-11-30.14:46:25 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-01.07:49:14 zpool import tank
    2023-12-01.11:29:03 zfs create tank/filesystem
    2023-12-01.11:30:27 zfs snapshot tank/filesystem@friday
    2023-12-01.13:22:25 zpool export rpool/ROOT/ubuntu_2wtpxc@friday tank
    2023-12-03.08:53:48 zpool import tank
    2023-12-03.13:41:24 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-04.09:49:21 zpool import -f tank
    2023-12-04.09:51:07 zpool export tank
    2023-12-04.11:05:45 zpool import tank
    2023-12-04.11:06:00 zfs create -o canmount=on -o mountpoint=/test tank/test
    2023-12-04.11:28:16 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-04.13:41:38 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-05.07:23:49 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-05.07:35:28 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-05.09:05:30 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-07.10:26:53 zpool import tank
    2023-12-07.12:15:42 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-07.14:22:40 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-08.12:51:41 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-08.12:59:27 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-08.16:00:25 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-09.11:39:00 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-09.11:52:22 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-09.14:38:28 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-09.14:42:25 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-09.15:24:27 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-09.15:33:06 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-09.16:00:56 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-10.06:49:46 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-10.17:32:37 zpool import tank
    2023-12-11.09:31:43 zpool import tank
    2023-12-11.11:17:00 zpool import tank
    2023-12-12.18:55:23 zpool import tank
    2023-12-12.19:48:11 zpool export tank
    2023-12-19.12:28:24 zpool import tank
    2023-12-19.12:53:32 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-20.15:24:26 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-21.07:05:43 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-21.13:07:05 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-21.13:10:20 zpool import -c /etc/zfs/zpool.cache -aN
    2023-12-25.07:44:20 zpool import tank
    2024-01-09.13:06:55 zpool import -f tank
    2024-01-09.14:35:03 zfs destroy tank/filesystem@friday
    2024-01-12.11:37:35 zpool import -f tank
    2024-01-13.06:43:44 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-13.06:54:25 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-13.09:53:01 zpool import -f tank
    2024-01-13.13:19:31 zpool import -f tank
    2024-01-13.14:57:26 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-13.15:17:35 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-13.18:23:07 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-16.08:28:28 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-16.08:37:42 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-17.07:53:42 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-17.08:00:46 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-18.08:10:53 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-19.02:37:59 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-19.02:51:02 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-19.17:13:38 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-20.06:29:26 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-20.13:52:46 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-21.08:19:48 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-21.08:32:50 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-21.15:23:27 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-21.15:36:43 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-21.17:16:40 zpool import -f tank
    2024-01-22.16:34:27 zpool import -f tank
    2024-01-22.16:46:25 zpool scrub tank
    2024-01-22.16:49:55 zpool export tank
    2024-01-23.11:12:14 zpool import -f tank
    2024-01-24.05:48:48 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-24.08:48:19 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-24.08:54:41 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-24.09:03:09 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-24.09:08:42 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-24.09:17:39 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-24.09:21:14 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-24.09:26:17 zpool import -c /etc/zfs/zpool.cache -aN
    2024-01-24.13:27:06 zpool import -c /etc/zfs/zpool.cache -aN
    Temp Check:
    Code:
    === START OF SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    
    SMART/Health Information (NVMe Log 0x02)
    Critical Warning:                   0x00
    Temperature:                        28 Celsius
    Available Spare:                    100%
    Available Spare Threshold:          10%
    Percentage Used:                    0%
    Data Units Read:                    3,723,733 [1.90 TB]
    Data Units Written:                 6,534,694 [3.34 TB]
    Host Read Commands:                 124,029,758
    Host Write Commands:                202,144,712
    Controller Busy Time:               422
    Power Cycles:                       3,277
    Power On Hours:                     670
    Unsafe Shutdowns:                   326
    Media and Data Integrity Errors:    0
    Error Information Log Entries:      1
    Warning  Comp. Temperature Time:    0
    Critical Comp. Temperature Time:    0
    
    Warning: NVMe Get Log truncated to 0x200 bytes, 0x200 bytes zero filled
    Error Information (NVMe Log 0x01, 16 of 256 entries)
    No Errors Logged
    Last edited by 1fallen; January 25th, 2024 at 01:20 AM. Reason: add notes for future advice
    With realization of one's own potential and self-confidence in one's ability, one can build a better world.
    Dalai Lama
    Code Tags | System-info | Forum Guide lines | Arch Linux, Debian Unstable, FreeBSD

  2. #102
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Quote Originally Posted by 1fallen
    No, the speed is not up to snuff, but we have nowhere to start, so I prod and prod until something reveals the problem.

    And don't be sorry; this is how we help fix new or existing bugs... it takes a community.
    Being an amateur tinkerer in many things, I often come across issues with whatever it is I have decided to try my hand at. Usually ~80% of them are solved with a quick search, ~15% with an extended search, and almost all of the remaining 5% with a quick post - but this is the first time an issue with one of my hobbies has required such a long-running thread, thanks to such an elusive problem. I guess what I am trying to say is that I thoroughly appreciate your continued help with this. The community vibe is strong with this one.

  3. #103
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    "The Force is strong in this one..."

    Still here. I'm not having that problem (yet), but I did have some other strangeness, which was odd and I can't really explain why.

    It lost track of the device name for one of my L2ARCs. (???) Easy to fix, and nothing was really wrong with it after I re-added it. But I cannot explain why, or even how it could have done that. It wasn't even something that was valid for a vdev. That is the first time I have seen that in 18 years.
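    For anyone who hits the same thing: re-adding a cache vdev is quick, and importing by persistent names sidesteps the shifting /dev/sdX problem entirely. Roughly like this (device and pool names are placeholders, not mine):
    Code:
    # Drop the stale cache device, then re-add it under a stable name
    sudo zpool remove mypool sdX1
    sudo zpool add mypool cache /dev/disk/by-id/nvme-EXAMPLE-part1

    # Or re-import the whole pool so every vdev uses by-id paths
    sudo zpool export mypool
    sudo zpool import -d /dev/disk/by-id mypool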
    Last edited by MAFoElffen; January 26th, 2024 at 03:44 AM.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  4. #104
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    That's odd... not sure what to make of that! I have one of my NVMe drives rename itself on import; it still works, it's just annoying.

    Today has been horrible for speed. Can't even watch an SD file without it freezing every 10 seconds.

    Code:
    WRITE: bw=96.6MiB/s (101MB/s), 96.6MiB/s-96.6MiB/s (101MB/s-101MB/s), io=5800MiB (6082MB), run=60061-60061msec
    Code:
    READ: bw=3195MiB/s (3350MB/s), 3195MiB/s-3195MiB/s (3350MB/s-3350MB/s), io=10.0GiB (10.7GB), run=3205-3205msec
    So odd that it's the writes that are affected. I can only imagine it's something to do with the Docker config files, like writing to the Emby DB.
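    (For context, those summary lines are fio output; a sequential job shaped roughly like this produces them, though the exact options I used may differ, and the test path is illustrative:)
    Code:
    # 60-second timed sequential write, then a sequential read, on the pool
    fio --name=seqwrite --directory=/tank/test --rw=write --bs=1M --size=6G --runtime=60 --time_based
    fio --name=seqread --directory=/tank/test --rw=read --bs=1M --size=10G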

    I've resorted to using VLC today and browsing a samba share :-S

  5. #105
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    The thing I have a problem grasping is that what you are doing (watching something from your media server) is a read process. Why is the read speed high and the write speed so low?

    I must be missing "something" that is going on there.

    I think we need to capture some sampling data to look at what might be going on...

    Look at this and use something similar. I will explain what is going on, so you can modify the command to match what is on your system:
    Code:
    mafoelffen@Mikes-ThinkPad-T520:~/Scripts$ iostat -x --pretty sda2 sda3 sda4 sda5 sda6 2 | tee ~/Scripts/iostat.log.txt
    Linux 6.2.0-39-generic (Mikes-ThinkPad-T520)     01/27/2024     _x86_64_    (8 CPU)
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               3.31    0.16    1.05    0.06    0.00   95.42
    
         r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz Device
        0.00      0.11     0.00   0.00    0.42    38.35 sda2
        0.00      0.12     0.00   0.00    0.43    44.00 sda3
        0.00      0.11     0.00   0.00    0.41    38.35 sda4
        0.00      0.11     0.00   0.00    0.49    38.35 sda5
        0.11      0.94     0.09  43.42    1.01     8.23 sda6
    
         w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.29      5.31     1.04  78.35    0.25    18.48 sda6
    
         d/s     dkB/s   drqm/s  %drqm d_await dareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         f/s f_await  aqu-sz  %util Device
        0.00    0.00    0.00   0.00 sda2
        0.00    0.00    0.00   0.00 sda3
        0.00    0.00    0.00   0.00 sda4
        0.00    0.00    0.00   0.00 sda5
        0.00    0.00    0.00   0.02 sda6
    
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               0.81    0.00    0.25    0.00    0.00   98.94
    
         r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         d/s     dkB/s   drqm/s  %drqm d_await dareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         f/s f_await  aqu-sz  %util Device
        0.00    0.00    0.00   0.00 sda2
        0.00    0.00    0.00   0.00 sda3
        0.00    0.00    0.00   0.00 sda4
        0.00    0.00    0.00   0.00 sda5
        0.00    0.00    0.00   0.00 sda6
    
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               0.56    0.00    0.13    0.00    0.00   99.31
    
         r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         d/s     dkB/s   drqm/s  %drqm d_await dareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         f/s f_await  aqu-sz  %util Device
        0.00    0.00    0.00   0.00 sda2
        0.00    0.00    0.00   0.00 sda3
        0.00    0.00    0.00   0.00 sda4
        0.00    0.00    0.00   0.00 sda5
        0.00    0.00    0.00   0.00 sda6
    
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               0.75    0.00    0.19    0.00    0.00   99.06
    
         r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         d/s     dkB/s   drqm/s  %drqm d_await dareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         f/s f_await  aqu-sz  %util Device
        0.00    0.00    0.00   0.00 sda2
        0.00    0.00    0.00   0.00 sda3
        0.00    0.00    0.00   0.00 sda4
        0.00    0.00    0.00   0.00 sda5
        0.00    0.00    0.00   0.00 sda6
    
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               1.13    0.00    0.38    0.00    0.00   98.50
    
         r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         d/s     dkB/s   drqm/s  %drqm d_await dareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         f/s f_await  aqu-sz  %util Device
        0.00    0.00    0.00   0.00 sda2
        0.00    0.00    0.00   0.00 sda3
        0.00    0.00    0.00   0.00 sda4
        0.00    0.00    0.00   0.00 sda5
        0.00    0.00    0.00   0.00 sda6
    
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               0.75    0.00    0.50    0.25    0.00   98.50
    
         r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         d/s     dkB/s   drqm/s  %drqm d_await dareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         f/s f_await  aqu-sz  %util Device
        0.00    0.00    0.00   0.00 sda2
        0.00    0.00    0.00   0.00 sda3
        0.00    0.00    0.00   0.00 sda4
        0.00    0.00    0.00   0.00 sda5
        0.00    0.00    0.00   0.00 sda6
    
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               1.19    0.00    0.50    0.00    0.00   98.31
    
         r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         d/s     dkB/s   drqm/s  %drqm d_await dareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         f/s f_await  aqu-sz  %util Device
        0.00    0.00    0.00   0.00 sda2
        0.00    0.00    0.00   0.00 sda3
        0.00    0.00    0.00   0.00 sda4
        0.00    0.00    0.00   0.00 sda5
        0.00    0.00    0.00   0.00 sda6
    
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               1.38    0.00    0.88    0.00    0.00   97.75
    
         r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         d/s     dkB/s   drqm/s  %drqm d_await dareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         f/s f_await  aqu-sz  %util Device
        0.00    0.00    0.00   0.00 sda2
        0.00    0.00    0.00   0.00 sda3
        0.00    0.00    0.00   0.00 sda4
        0.00    0.00    0.00   0.00 sda5
        0.00    0.00    0.00   0.00 sda6
    
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               1.12    0.00    0.69    0.25    0.00   97.94
    
         r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         d/s     dkB/s   drqm/s  %drqm d_await dareq-sz Device
        0.00      0.00     0.00   0.00    0.00     0.00 sda2
        0.00      0.00     0.00   0.00    0.00     0.00 sda3
        0.00      0.00     0.00   0.00    0.00     0.00 sda4
        0.00      0.00     0.00   0.00    0.00     0.00 sda5
        0.00      0.00     0.00   0.00    0.00     0.00 sda6
    
         f/s f_await  aqu-sz  %util Device
        0.00    0.00    0.00   0.00 sda2
        0.00    0.00    0.00   0.00 sda3
        0.00    0.00    0.00   0.00 sda4
        0.00    0.00    0.00   0.00 sda5
        0.00    0.00    0.00   0.00 sda6
    
    ^C
    I'm using iostat to display I/O statistics for devices. It does not recognize ZFS pool names, so I give it the disk/partition names I want to watch, which correspond to the partitions the pools are using. It takes samples at 2-second intervals, displays them on screen, and writes them to a log file so I can review them. Use <Ctrl><C> to stop it after about 2 minutes or so.
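    If you want to pull the right names out of the pool itself rather than guessing, something like this works (a sketch; the awk pattern, device names, and log path are just examples):
    Code:
    # List the leaf devices backing the pool...
    zpool status tank | awk '$1 ~ /^(sd|nvme)/ {print $1}'

    # ...then feed those names to iostat, 2-second samples, logged to a file
    iostat -x --pretty sda1 sdb1 nvme0n1p1 2 | tee ~/iostat.log.txt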

    Run this when it is slow, so maybe we can see what is going on.
    Last edited by MAFoElffen; January 28th, 2024 at 01:15 AM.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  6. #106
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    I agree, it's very puzzling.

    I'll monitor over the next few days and give you some iostat logs!

  7. #107
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Thank you. Very curious to see those samplings. They should be small enough to upload as text-file attachments. But if not, then upload them to pastebins at paste.ubuntu.com with an expiration of 1 year or less. I'm not a fan of taking up space forever for something like that.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  8. #108
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Will do. 1 year is very generous; I was going to do a month! LOL

    Whatever this thing is, it must have ears and they must be burning... I've had perfect performance today! Skipping forwards and back in 4K no problem at the moment...

  9. #109
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Dang! It's trying to hide from us... LOL.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  10. #110
    Join Date
    Nov 2023
    Beans
    76

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Wait! I have something! Maybe...

    Here is the log: https://paste.ubuntu.com/p/ZW3PT9qdCN/

    I've monitored every connected disk.

    Disk key:

    Code:
    nvme0n1p1 = zfs raidz2 cache
    nvme0n1p2 = zfs raidz2 slog
    sda1 = zfs raidz2 data disk
    sdb1 = zfs raidz2 data disk
    sdc1 = zfs raidz2 data disk
    sdd1 = zfs raidz2 data disk
    sde1 = zfs raidz2 data disk
    sdf1 = zfs raidz2 data disk
    sdg1 = zfs raidz2 data disk
    sdh1 = zfs raidz2 data disk
    sdi1 = 500GB OS SSD
    sdj1 = 500GB temp downloads disk
    Continuing to monitor....
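    Since the cache and SLOG share that one NVMe, it might also be worth capturing latency from ZFS's side while it's slow. zpool iostat can break out per-vdev wait times (a sketch; replace the pool name and log path with yours):
    Code:
    # Per-vdev bandwidth plus average wait times, sampled every 2 seconds
    zpool iostat -v -l mypool 2 | tee ~/zpool-iostat.log.txt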

