
Thread: Ubuntu 14 vs 20x File Copy Performance Slow


  1. #1
    Join Date
    Feb 2024
    Beans
    15

    Ubuntu 14 vs 20x File Copy Performance Slow

    ISSUE: File Copy on Ubuntu 23 virtual server is slow
    vCenter, ESXi Host, and Virtual Server Details:
    Code:
    Name                  Version                                FileSystem
    VCenter               6.7.0 Build 18485185
    ESXi Host/Hypervisor  6.7.0, 17499825 - PowerEdge R630
    Datastore             TINTRI 5.4.0.2-11870.57000.25421       NFS 3
    Virtual Server1       Ubuntu 14.04.1                         Ext4
    Virtual Server2       Ubuntu 23.10                           Ext4

    Server 1 and Server 2 were created in vCenter with exactly the same resources: CPU, memory, and HD settings.
    Server 1 and Server 2 Ubuntu installation: used the default options and chose SSH remote; no other extra applications.
    Basically, I created these two servers to be as basic as possible.
    Post Install on Both Servers: Ran:
    apt-get update, apt-get install open-vm-tools

    File Copy Test Case Output from: time for ((i = 0 ; i < 10 ; i++ )) ; do cp ubuntu-20.04.6-live-server-amd64.iso test.iso ; rm test.iso ; done
    Server1 Ubuntu 14:
    Code:
    real    0m25.784s
    user    0m0.072s
    sys     0m16.468s
    Server2 Ubuntu 23:
    Code:
    real    0m45.702s
    user    0m0.002s
    sys     0m21.520s
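
    Note: cp returns as soon as the data lands in the page cache, so these timings depend heavily on writeback behaviour. A variant that forces the data to disk on every pass makes the two guests more directly comparable (a sketch; same ISO and working directory assumed):
    Code:
    time for ((i = 0 ; i < 10 ; i++ )) ; do cp ubuntu-20.04.6-live-server-amd64.iso test.iso ; sync ; rm test.iso ; done
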
    iostat output during the file copy:
    Server1 Ubuntu 14:
    Code:
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00     0.50    0.00   49.00     0.00 33792.00  1379.27     3.27   37.96    0.00   37.96   1.02   5.00
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00     0.00    0.00  514.43     0.00 526774.13  2048.00    41.65   83.71    0.00   83.71   1.66  85.37
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00     0.00    0.54  465.41     2.16 458862.70  1969.61   106.03  206.17  268.00  206.10   1.94  90.59
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00    20.73    0.00  485.49     0.00 475486.01  1958.78   106.08  221.61    0.00  221.61   1.76  85.60
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00     0.53    0.53  500.00     2.12 495204.23  1978.73    98.51  197.59  256.00  197.53   1.65  82.75
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00    21.03    0.00  536.41     0.00 530477.95  1977.88    87.97  167.59    0.00  167.59   1.59  85.54
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00     1.04    0.52  510.88     2.07 506694.30  1981.61   102.89  187.20  240.00  187.15   1.75  89.33
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00     0.54    0.00  490.32     0.00 481169.89  1962.67   114.77  242.60    0.00  242.60   1.82  89.46
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00    26.88    0.54  505.91     2.15 514324.73  2031.10   108.56  232.52   44.00  232.72   1.89  95.48
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00     0.52    0.00  562.69     0.00 560281.87  1991.43   104.56  185.82    0.00  185.82   1.50  84.35
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00    18.48    0.54  541.85     2.17 534339.13  1970.32   112.70  207.80  268.00  207.74   1.91 103.70
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00     8.19    0.00  536.26     0.00 528205.85  1969.97   143.83  256.77    0.00  256.77   1.94 104.09
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sdb               0.00     0.00    0.00   39.20     0.00 40136.68  2048.00     2.86  207.64    0.00  207.64   1.69   6.63
    Server2 Ubuntu 23:
    Code:
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.50      2.00     0.00   0.00    2.00     4.00  100.50 109696.00     2.00   1.95    9.53  1091.50    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.96  38.20
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  231.84 240509.45     7.96   3.32   10.60  1037.39    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    2.46  92.54
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  203.00 222116.00     4.50   2.17   12.00  1094.17    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    2.44  74.00
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  283.00 315136.00     8.00   2.75   11.26  1113.55    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.19  98.40
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.50      1.99     0.00   0.00    2.00     4.00  211.94 232601.00     5.97   2.74   12.50  1097.48    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    2.65  73.43
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  262.00 290816.00     7.50   2.78   11.84  1109.98    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.10  91.60
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  245.00 249154.00     9.50   3.73   10.53  1016.96    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    2.58  79.80
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  222.50 240326.00    10.50   4.51   11.88  1080.12    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    2.64  75.60
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              1.00      4.00     0.00   0.00    3.50     4.00  288.50 310692.00    10.50   3.51   10.51  1076.92    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.04  94.20
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  240.80 243130.35     6.47   2.62    9.74  1009.69    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    2.34  72.64
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  291.00 316326.00     9.50   3.16   11.03  1087.03    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.21  97.00
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  206.00 223794.00     7.50   3.51   12.11  1086.38    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    2.49  73.80
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.50      2.00     0.00   0.00    2.00     4.00  297.00 300024.00     7.50   2.46   10.32  1010.18    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.07  90.40
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  256.22 268149.25     5.97   2.28   10.23  1046.56    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    2.62  80.00
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  243.00 262212.00     8.50   3.38    9.97  1079.06    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    2.42  75.20
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.50      2.00     0.00   0.00    1.00     4.00  298.50 331776.00     7.50   2.45   10.19  1111.48    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.04  96.40
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  232.00 247362.00     5.50   2.32    9.19  1066.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    2.13  71.80
    Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
    sdb              0.00      0.00     0.00   0.00    0.00     0.00  236.50 249270.00    10.00   4.06   11.98  1054.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    2.83  87.20
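
    (For reference, per-device extended statistics like the above can be captured with something along these lines; the exact invocation used for these captures is not shown, so the flags are an assumption.)
    Code:
    iostat -xd sdb 1
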
    hdparm -Tt /dev/sdb Output:
    Server 1 Ubuntu 14:
    Code:
     Timing cached reads:   17156 MB in  2.00 seconds = 8583.76 MB/sec
     Timing buffered disk reads: 588 MB in  3.00 seconds = 195.90 MB/sec
    Server 2 Ubuntu 23:
    Code:
     Timing cached reads:   16750 MB in  2.00 seconds = 8383.89 MB/sec
     Timing buffered disk reads: 1172 MB in  3.00 seconds = 390.25 MB/sec
    Verify virtio support is enabled in the guest kernels on both servers:
    Server1 Ubuntu14: /boot# grep CONFIG_VIRTIO config-$(uname -r)
    Code:
    CONFIG_VIRTIO_BLK=y
    CONFIG_VIRTIO_NET=y
    CONFIG_VIRTIO_CONSOLE=y
    CONFIG_VIRTIO=y
    CONFIG_VIRTIO_PCI=y
    CONFIG_VIRTIO_PCI_LEGACY=y
    CONFIG_VIRTIO_BALLOON=y
    CONFIG_VIRTIO_INPUT=m
    CONFIG_VIRTIO_MMIO=y
    CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y

    Server2 Ubuntu23: /boot# grep CONFIG_VIRTIO config-$(uname -r)
    Code:
    CONFIG_VIRTIO_VSOCKETS=m
    CONFIG_VIRTIO_VSOCKETS_COMMON=m
    CONFIG_VIRTIO_BLK=y
    CONFIG_VIRTIO_NET=y
    CONFIG_VIRTIO_CONSOLE=y
    CONFIG_VIRTIO_ANCHOR=y
    CONFIG_VIRTIO=y
    CONFIG_VIRTIO_PCI_LIB=y
    CONFIG_VIRTIO_PCI_LIB_LEGACY=y
    CONFIG_VIRTIO_MENU=y
    CONFIG_VIRTIO_PCI=y
    CONFIG_VIRTIO_PCI_LEGACY=y
    CONFIG_VIRTIO_VDPA=m
    CONFIG_VIRTIO_PMEM=m
    CONFIG_VIRTIO_BALLOON=y
    CONFIG_VIRTIO_MEM=m
    CONFIG_VIRTIO_INPUT=m
    CONFIG_VIRTIO_MMIO=y
    CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
    CONFIG_VIRTIO_DMA_SHARED_BUFFER=m
    CONFIG_VIRTIO_IOMMU=y
    CONFIG_VIRTIO_FS=m

    #Ran apt-get update on Both Servers
    sudo apt-get update


    # Installed open-vm-tools on both servers: sudo apt-get install open-vm-tools
    Server1 Ubuntu 14:/boot# vmware-toolbox-cmd -v
    9.4.0.25793 (build-1280544)


    Server2 Ubuntu 23:/tmp# vmware-toolbox-cmd -v
    12.3.0.44994 (build-22234872)


    #Verified NTP is NOT installed, to avoid a potential conflict with vCenter's "Synchronize guest time with host" option:
    Server1 Ubuntu 14:/boot# sudo service ntp status
    ntp: unrecognized service


    Server2 Ubuntu 23:/boot# sudo systemctl status ntp
    Failed to stop ntp.service: Unit ntp.service not loaded.


    #GRUB Boot Modifications on Server2 Ubuntu 23:
    #Set disk I/O scheduler to noop (no-op)
    Server2 Ubuntu 23: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=noop"
    update-grub
    reboot
    cat /sys/block/sdb/queue/scheduler
    [noop] [mq-deadline] cfq
    time for ((i = 0 ; i < 10 ; i++ )) ; do cp ubuntu-20.04.6-live-server-amd64.iso test.iso ; rm test.iso ; done
    No file copy performance improvement on Ubuntu 23 Server.
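
    One note on this step: as far as I know, the legacy elevator= boot parameter is ignored by the 6.x kernel in 23.10, and the old single-queue schedulers (noop/cfq) no longer exist; the multi-queue choices are none, mq-deadline, and (if loaded) bfq/kyber. A sketch of selecting the scheduler at runtime and persisting it with a udev rule instead (sdb and the rule file name are only examples):
    Code:
    # one-off change, lost at reboot
    echo none | sudo tee /sys/block/sdb/queue/scheduler
    cat /sys/block/sdb/queue/scheduler

    # persistent variant; takes effect on reboot or udev re-trigger
    echo 'ACTION=="add|change", KERNEL=="sdb", ATTR{queue/scheduler}="none"' | sudo tee /etc/udev/rules.d/60-ioscheduler.rules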


    #Set mitigations=off
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"
    update-grub
    reboot
    cat /sys/block/sdb/queue/scheduler
    none [mq-deadline]
    time for ((i = 0 ; i < 10 ; i++ )) ; do cp ubuntu-20.04.6-live-server-amd64.iso test.iso ; rm test.iso ; done
    No file copy performance improvement on Ubuntu 23 Server.




    #Mount Point Options
    mount | grep '^/'
    /dev/sda1 on / type ext4 (rw,errors=remount-ro)
    /dev/sdb1 on /d01 type ext4 (rw)
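
    If atime updates are a suspect, a temporary remount of the test filesystem with noatime is a cheap way to rule them out (non-persistent; reverts at reboot):
    Code:
    sudo mount -o remount,noatime /d01
    mount | grep /d01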




    #Cross-Compare fs vm settings:
    Server1 Ubuntu 14: sysctl -a | grep 'vm\|fs'
    Code:
    fs.aio-max-nr = 65536
    fs.aio-nr = 0
    fs.dentry-state = 33815 22349   45      0       0       0
    fs.dir-notify-enable = 1
    fs.epoll.max_user_watches = 824217
    fs.file-max = 401361
    fs.file-nr = 576        0       401361
    fs.inode-nr = 28005     329
    fs.inode-state = 28005  329     0       0       0       0       0
    fs.inotify.max_queued_events = 16384
    fs.inotify.max_user_instances = 128
    fs.inotify.max_user_watches = 8192
    fs.lease-break-time = 45
    fs.leases-enable = 1
    fs.mqueue.msg_default = 10
    fs.mqueue.msg_max = 10
    fs.mqueue.msgsize_default = 8192
    fs.mqueue.msgsize_max = 8192
    fs.mqueue.queues_max = 256
    fs.nr_open = 1048576
    fs.overflowgid = 65534
    fs.overflowuid = 65534
    fs.pipe-max-size = 1048576
    fs.pipe-user-pages-hard = 0
    fs.pipe-user-pages-soft = 16384
    fs.protected_hardlinks = 1
    fs.protected_symlinks = 1
    fs.quota.allocated_dquots = 0
    fs.quota.cache_hits = 0
    fs.quota.drops = 0
    fs.quota.free_dquots = 0
    fs.quota.lookups = 0
    fs.quota.reads = 0
    fs.quota.syncs = 88
    fs.quota.writes = 0
    fs.suid_dumpable = 2
    kernel.sched_cfs_bandwidth_slice_us = 5000
    sysctl: reading key "net.ipv6.conf.all.stable_secret"
    sysctl: reading key "net.ipv6.conf.default.stable_secret"
    sysctl: reading key "net.ipv6.conf.eth0.stable_secret"
    sysctl: reading key "net.ipv6.conf.lo.stable_secret"
    vm.admin_reserve_kbytes = 8192
    vm.block_dump = 0
    vm.compact_unevictable_allowed = 1
    vm.dirty_background_bytes = 0
    vm.dirty_background_ratio = 10
    vm.dirty_bytes = 0
    vm.dirty_expire_centisecs = 3000
    vm.dirty_ratio = 20
    vm.dirty_writeback_centisecs = 500
    vm.dirtytime_expire_seconds = 43200
    vm.drop_caches = 0
    vm.extfrag_threshold = 500
    vm.hugepages_treat_as_movable = 0
    vm.hugetlb_shm_group = 0
    vm.laptop_mode = 0
    vm.legacy_va_layout = 0
    vm.lowmem_reserve_ratio = 256   256     32      1
    vm.max_map_count = 65530
    vm.memory_failure_early_kill = 0
    vm.memory_failure_recovery = 1
    vm.min_free_kbytes = 67584
    vm.min_slab_ratio = 5
    vm.min_unmapped_ratio = 1
    vm.mmap_min_addr = 65536
    vm.nr_hugepages = 0
    vm.nr_hugepages_mempolicy = 0
    vm.nr_overcommit_hugepages = 0
    vm.nr_pdflush_threads = 0
    vm.numa_zonelist_order = default
    vm.oom_dump_tasks = 1
    vm.oom_kill_allocating_task = 0
    vm.overcommit_kbytes = 0
    vm.overcommit_memory = 0
    vm.overcommit_ratio = 50
    vm.page-cluster = 3
    vm.panic_on_oom = 0
    vm.percpu_pagelist_fraction = 0
    vm.stat_interval = 1
    vm.swappiness = 60
    vm.user_reserve_kbytes = 125478
    vm.vfs_cache_pressure = 100
    vm.zone_reclaim_mode = 0

    Server2 Ubuntu 23: sysctl -a | grep 'vm\|fs'
    Code:
    fs.aio-max-nr = 65536
    fs.aio-nr = 0
    fs.binfmt_misc.python3/11 = enabled
    fs.binfmt_misc.python3/11 = interpreter /usr/bin/python3.11
    fs.binfmt_misc.python3/11 = flags:
    fs.binfmt_misc.python3/11 = offset 0
    fs.binfmt_misc.python3/11 = magic a70d0d0a
    fs.binfmt_misc.status = enabled
    fs.dentry-state = 121588        99325   45      0       45257   0
    fs.dir-notify-enable = 1
    fs.epoll.max_user_watches = 867105
    fs.fanotify.max_queued_events = 16384
    fs.fanotify.max_user_groups = 128
    fs.fanotify.max_user_marks = 31556
    fs.file-max = 9223372036854775807
    fs.file-nr = 1216       0       9223372036854775807
    fs.inode-nr = 76293     658
    fs.inode-state = 76293  658     0       0       0       0       0
    fs.inotify.max_queued_events = 16384
    fs.inotify.max_user_instances = 128
    fs.inotify.max_user_watches = 29677
    fs.lease-break-time = 45
    fs.leases-enable = 1
    fs.mount-max = 100000
    fs.mqueue.msg_default = 10
    fs.mqueue.msg_max = 10
    fs.mqueue.msgsize_default = 8192
    fs.mqueue.msgsize_max = 8192
    fs.mqueue.queues_max = 256
    fs.nr_open = 1048576
    fs.overflowgid = 65534
    fs.overflowuid = 65534
    fs.pipe-max-size = 1048576
    fs.pipe-user-pages-hard = 0
    fs.pipe-user-pages-soft = 16384
    fs.protected_fifos = 1
    fs.protected_hardlinks = 1
    fs.protected_regular = 2
    fs.protected_symlinks = 1
    fs.quota.allocated_dquots = 0
    fs.quota.cache_hits = 0
    fs.quota.drops = 0
    fs.quota.free_dquots = 0
    fs.quota.lookups = 0
    fs.quota.reads = 0
    fs.quota.syncs = 142
    fs.quota.writes = 0
    fs.suid_dumpable = 2
    fs.verity.require_signatures = 0
    kernel.firmware_config.force_sysfs_fallback = 0
    kernel.firmware_config.ignore_sysfs_fallback = 0
    kernel.sched_cfs_bandwidth_slice_us = 5000
    vm.admin_reserve_kbytes = 8192
    vm.compact_unevictable_allowed = 1
    vm.compaction_proactiveness = 20
    vm.dirty_background_bytes = 0
    vm.dirty_background_ratio = 10
    vm.dirty_bytes = 0
    vm.dirty_expire_centisecs = 3000
    vm.dirty_ratio = 20
    vm.dirty_writeback_centisecs = 500
    vm.dirtytime_expire_seconds = 43200
    vm.extfrag_threshold = 500
    vm.hugetlb_optimize_vmemmap = 0
    vm.hugetlb_shm_group = 0
    vm.laptop_mode = 0
    vm.legacy_va_layout = 0
    vm.lowmem_reserve_ratio = 256   256     32      0       0
    vm.max_map_count = 65530
    vm.memfd_noexec = 0
    vm.memory_failure_early_kill = 0
    vm.memory_failure_recovery = 1
    vm.min_free_kbytes = 67584
    vm.min_slab_ratio = 5
    vm.min_unmapped_ratio = 1
    vm.mmap_min_addr = 65536
    vm.mmap_rnd_bits = 28
    vm.mmap_rnd_compat_bits = 8
    vm.nr_hugepages = 0
    vm.nr_hugepages_mempolicy = 0
    vm.nr_overcommit_hugepages = 0
    vm.numa_stat = 1
    vm.numa_zonelist_order = Node
    vm.oom_dump_tasks = 1
    vm.oom_kill_allocating_task = 0
    vm.overcommit_kbytes = 0
    vm.overcommit_memory = 0
    vm.overcommit_ratio = 50
    vm.page-cluster = 3
    vm.page_lock_unfairness = 5
    vm.panic_on_oom = 0
    vm.percpu_pagelist_high_fraction = 0
    vm.stat_interval = 1
    vm.swappiness = 60
    vm.unprivileged_userfaultfd = 0
    vm.user_reserve_kbytes = 121460
    vm.vfs_cache_pressure = 100
    vm.watermark_boost_factor = 15000
    vm.watermark_scale_factor = 10
    vm.zone_reclaim_mode = 0
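
    Rather than eyeballing the two dumps above, diffing them catches the differences quickly; a sketch (file names are only examples, and one file has to be copied over to the other host first):
    Code:
    # run on each server
    sysctl -a 2>/dev/null | grep -E '^(vm|fs)\.' | sort > /tmp/sysctl-$(hostname).txt
    # then on either one
    diff /tmp/sysctl-Server1.txt /tmp/sysctl-Server2.txt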

    #Compare Reserved block count
    Server1 Ubuntu 14: tune2fs -l /dev/sdb1 | grep 'Reserved block count'
    Reserved block count: 1310694


    Server2 Ubuntu 23: tune2fs -l /dev/sdb1 | grep 'Reserved block count'
    Reserved block count: 1310694


    #Disabled and purged ALL snapd and snapd.socket units on Server2:
    systemctl stop snapd
    systemctl disable snapd
    systemctl stop snapd.socket
    systemctl disable snapd.socket
    systemctl mask snapd
    systemctl mask snapd.socket
    apt purge snapd
    apt autoremove
    Reboot
    No file copy performance improvement on Ubuntu 23 Server.


    #Reverted to Snapshot Prior to SNAP removal from Server2.
    #Removed VCenter Snapshot of Server2.


    #Compare hugepage settings
    Server1 Ubuntu 14: cat /sys/kernel/mm/transparent_hugepage/enabled
    [always] madvise never


    Server2 Ubuntu 23:/d01/user# cat /sys/kernel/mm/transparent_hugepage/enabled
    always [madvise] never
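
    This is one setting that genuinely differs between the guests (always on 14.04 vs madvise on 23.10). It is unlikely to matter for a streaming cp, but it can be matched at runtime for a test (non-persistent):
    Code:
    echo always | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
    cat /sys/kernel/mm/transparent_hugepage/enabled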


    #Modify Dirty Parameters on Server2 Ubuntu23:
    echo 10 > /proc/sys/vm/dirty_background_ratio
    echo 20 > /proc/sys/vm/dirty_ratio
    echo 3000 > /proc/sys/vm/dirty_expire_centisecs
    echo 500 > /proc/sys/vm/dirty_writeback_centisecs
    echo $((64*1024*1024)) > /proc/sys/vm/dirty_background_bytes
    echo $((512*1024*1024)) > /proc/sys/vm/dirty_bytes
    sync; echo 3 > /proc/sys/vm/drop_caches
    reboot
    No file copy performance improvement on Ubuntu 23 Server.


    #Verify metadata_csum is enabled
    tune2fs -l /dev/sdb1 | grep "metadata_csum"
    metadata_csum


    #Defrag Disk sdb1 on Server2 Ubuntu 23 Server
    e2fsck /dev/sdb1
    e2fsck 1.46.5 (30-Dec-2021)
    DataVol: clean, 442605/201326592 files, 588558952/805305856 blocks
    reboot
    No file copy performance improvement on Ubuntu 23 Server.


    #Verify no snap loop devices on Server2 Ubuntu 23:
    losetup -a
    (no output)


    #Verify loop on Server2 Ubuntu 23
    journalctl | grep loop
    Code:
    Feb 07 20:21:26 Server2 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=9199992)
    Feb 07 20:21:27 Server2 kernel: loop: module loaded
    Feb 07 20:21:27 Server2 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
    Feb 07 20:21:27 Server2 systemd[1]: modprobe@loop.service: Deactivated successfully.
    Feb 07 20:21:27 Server2 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
    Feb 07 20:37:44 Server2 kernel: Calibrating delay loop (skipped) preset value.. 4399.99 BogoMIPS (lpj=8799992)
    Feb 07 20:37:44 Server2 kernel: loop: module loaded
    Feb 07 20:37:44 Server2 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
    Feb 07 20:37:44 Server2 systemd[1]: modprobe@loop.service: Deactivated successfully.
    Feb 07 20:37:44 Server2 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.

    #Show Current cgroup2 Settings on Server2 Ubuntu 23:
    mount | grep cgroup2
    cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
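
    Since cgroup2 is mounted, the pressure-stall (PSI) files are available; watching them while a copy runs shows whether tasks are actually stalling on I/O (assumes PSI is enabled, which is the Ubuntu default):
    Code:
    cat /proc/pressure/io
    cat /sys/fs/cgroup/io.pressure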


    #Show Current Mounts on Server2 Ubuntu 23:
    cat /proc/mounts
    Code:
    sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
    proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
    udev /dev devtmpfs rw,nosuid,relatime,size=1947604k,nr_inodes=486901,mode=755,inode64 0 0
    devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
    tmpfs /run tmpfs rw,nosuid,nodev,noexec,relatime,size=400088k,mode=755,inode64 0 0
    /dev/sda2 / ext4 rw,relatime 0 0
    securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
    tmpfs /dev/shm tmpfs rw,nosuid,nodev,inode64 0 0
    tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k,inode64 0 0
    cgroup2 /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot 0 0
    pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
    bpf /sys/fs/bpf bpf rw,nosuid,nodev,noexec,relatime,mode=700 0 0
    systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=29,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=19240 0 0
    debugfs /sys/kernel/debug debugfs rw,nosuid,nodev,noexec,relatime 0 0
    mqueue /dev/mqueue mqueue rw,nosuid,nodev,noexec,relatime 0 0
    hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
    tracefs /sys/kernel/tracing tracefs rw,nosuid,nodev,noexec,relatime 0 0
    fusectl /sys/fs/fuse/connections fusectl rw,nosuid,nodev,noexec,relatime 0 0
    configfs /sys/kernel/config configfs rw,nosuid,nodev,noexec,relatime 0 0
    ramfs /run/credentials/systemd-sysusers.service ramfs ro,nosuid,nodev,noexec,relatime,mode=700 0 0
    ramfs /run/credentials/systemd-sysctl.service ramfs ro,nosuid,nodev,noexec,relatime,mode=700 0 0
    ramfs /run/credentials/systemd-tmpfiles-setup-dev.service ramfs ro,nosuid,nodev,noexec,relatime,mode=700 0 0
    vmware-vmblock /run/vmblock-fuse fuse.vmware-vmblock rw,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
    /dev/sdb1 /d01 ext4 rw,relatime 0 0
    ramfs /run/credentials/systemd-tmpfiles-setup.service ramfs ro,nosuid,nodev,noexec,relatime,mode=700 0 0
    binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0
    ramfs /run/credentials/systemd-resolved.service ramfs ro,nosuid,nodev,noexec,relatime,mode=700 0 0
    /dev/sdc1 /d02 ext4 rw,relatime 0 0
    tmpfs /run/user/1000 tmpfs rw,nosuid,nodev,relatime,size=400084k,nr_inodes=100021,mode=700,uid=1000,gid=1000,inode64 0 0
    Last edited by sthoreson; February 9th, 2024 at 05:54 PM.

  2. #2
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: Ubuntu 14 vs 20x File Copy Performance Slow

    The post formatting is too hard to read. Please learn to use forum code-tags to maintain columns.

    Multiple kernel security fixes which impacted performance have been included in newer kernels.

    NFSv3 has been deprecated for a long time - even back in 2014. Please use NFSv4.

  3. #3
    Join Date
    Feb 2024
    Beans
    15

    Re: Ubuntu 14 vs 20x File Copy Performance Slow

    Thank you for your response. I reformatted the initial post to be much more readable, as recommended.
    Can you confirm if the reason for the slow file copies is distinctly related to NFSv3 on the ESXi Host and the later version of Ubuntu 23?
    If so, is there any Ubuntu-related documentation that can be referenced for the specifics of this situation?
    Again, thank you; I appreciate your time.

  4. #4
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: Ubuntu 14 vs 20x File Copy Performance Slow

    Quote Originally Posted by sthoreson View Post
    Can you confirm if the reason for the slow file copies is distinctly related to NFSv3 on the ESXi Host and the later version of Ubuntu 23?
    Nope. I confirm nothing.

    You skipped right over all the kernel security improvements.

    We've been using NFSv4 nearly 15 yrs. It was released in 2002. What are you waiting for?

    If you don't provide details for everything attempted, nobody will try to reproduce it, assuming they could. We dumped ESXi around 2011 due to license costs, so there's no way to try anything even close to your setup.

    With all testing, simplify each component and test. How fast is the networking without any storage involved? How fast is the storage without any networking involved? Which is the bottleneck? Then add complexity and try different, but similar, tests. Test NFS3, NFS4, scp, rsync, nc, sftp between the systems. Is it only these 2 systems in the same box? What about systems in different boxes? Test without the hypervisor involved at all. How fast is that? Could it be the hypervisor's file system?
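
    For example, a rough way to separate network from storage (the tool choices here are only suggestions):
    Code:
    # raw network throughput between the two guests, no disk involved
    iperf3 -s                      # on one guest
    iperf3 -c <other-guest-ip>     # on the other guest

    # raw write throughput to the data disk, bypassing the page cache
    dd if=/dev/zero of=/d01/ddtest bs=1M count=4096 oflag=direct conv=fsync
    rm /d01/ddtest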

    For testing storage, most people would use fio these days. https://arstechnica.com/gadgets/2020...-way-with-fio/


    But my gut is telling me that kernel security fixes since 2014 are the real issue. You have to use those security improvements. You can disable some with kernel boot options, but I don't. I decided that security is more important than performance.
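
    If you want to see which of those mitigations are active on each guest, the kernel lists them (on kernels new enough to expose this sysfs directory):
    Code:
    grep . /sys/devices/system/cpu/vulnerabilities/*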

  5. #5
    Join Date
    Feb 2024
    Beans
    15

    Re: Ubuntu 14 vs 20x File Copy Performance Slow

    Thank you again for your response. I skipped over the details of everything attempted and chose to summarize, as I didn't want to oversaturate the thread with pages and pages of output.
    However, I will detail these out for review. As for your emphasis on upgrading from NFSv3 to NFSv4, this has already been presented to the company IT directors for priority consideration.
    Thank you for your patience as I work through detailing the attempted modifications.

  6. #6
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Ubuntu 14 vs 20x File Copy Performance Slow

    As per what you said in post #1... You are comparing apples to oranges. You said you are running the same resource allocations for both, but the two releases do not have the same requirements.

    In 2014, the Ubuntu Server Edition minimum requirements were:
    Code:
    300 MHz x86 processor
    192 MiB of system memory (RAM)
    1 GB of disk space
    Graphics card and monitor capable of 640x480
    CD drive
    For 22.04 Server Edition it says:
    Code:
    CPU: 1 gigahertz or better
    RAM: 1 gigabyte or more
    Disk: a minimum of 2.5 gigabytes
    But I can tell you that it will peg 2GB in the install.

    There is a lot more going on than there was with 14.04. You need to adjust the allocations to what it needs.

    One of the things I do is beta test vSphere/vCenter updates and versions for VMware. From my notes:

    Use the LSI Logic virtual SCSI adapter instead of the BusLogic virtual SCSI adapter. You'll encounter fewer problems with Linux OSes doing that.

    The vdisk performance is a lot better if you set your VMDK files to static (thick provisioned) instead of dynamic (thin provisioned).

    There are some quirks with vSphere and many Linux distros, where both fight each other over who is in charge of what. This often slows things down. Here are a few examples--

    You can get clock drift if both NTP and VMware Tools are fighting over who is in charge. Disable the time synchronization in VMware Tools by opening the virtual machine configuration file and setting these options to FALSE (the .vmx syntax is sketched after the list):
    tools.syncTime
    tools.synchronize.restore
    time.synchronize.resume.disk
    time.synchronize.continue
    time.synchronize.shrink
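
    In the VM's .vmx file those entries look roughly like this (syntax from memory, so treat it as a sketch and verify against VMware's documentation):
    Code:
    tools.syncTime = "FALSE"
    tools.synchronize.restore = "FALSE"
    time.synchronize.resume.disk = "FALSE"
    time.synchronize.continue = "FALSE"
    time.synchronize.shrink = "FALSE"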

    Let vSphere handle the disk I/O scheduling. You can stop Linux from scheduling disk I/O (so they don't fight over that) by editing /etc/default/grub and adding "elevator=noop" as a boot parameter to the GRUB_CMDLINE_LINUX_DEFAULT line.

    For Ubuntu server VMs, I usually allocate 2-4 vCPUs and 4 GB RAM at the start and adjust from there.

    I hope those help.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  7. #7
    Join Date
    Feb 2024
    Beans
    15

    Re: Ubuntu 14 vs 20x File Copy Performance Slow

    #Recommended Changes from Ubuntu Forum Post:
    Modified CPU to 4.
    Modified RAM to 8G.
    Modified SCSI controller from VMware Paravirtual to LSI Logic Parallel
    Added new HD, Thick Provision Lazy Zeroed, ext4 filesystem


    Added these to Adv. Config on VM Server:
    tools.syncTime FALSE
    tools.synchronize.restore FALSE
    time.synchronize.resume.disk FALSE
    time.synchronize.continue FALSE
    time.synchronize.shrink FALSE


    Turned off Ubuntu disk I/O scheduling:
    Added "elevator=noop" to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
    update-grub
    reboot


    Ran same copy test on both Server1 Ubuntu 14 (Unmodified) and Server2 Ubuntu 23 (Modified with recommended changes).
    Server1 Ubuntu 14:
    time for ((i = 0 ; i < 10 ; i++ )) ; do cp ubuntu-20.04.6-live-server-amd64.iso test.iso ; rm test.iso ; done
    Code:
    real    0m23.759s
    user    0m0.100s
    sys     0m15.676s

    On Server2 Ubuntu 23:
    time for ((i = 0 ; i < 10 ; i++ )) ; do cp ubuntu-20.04.6-live-server-amd64.iso test.iso ; rm test.iso ; done
    Code:
    real    0m44.284s
    user    0m0.015s
    sys     0m43.078s

    Sadly, the slow file copy performance on Ubuntu 23 remains.

  8. #8
    Join Date
    Feb 2024
    Beans
    15

    Re: Ubuntu 14 vs 20x File Copy Performance Slow

    I may have found the solution. Currently working through tests for confirmation.
    Basically, I boosted these and will post the final results.

    #Modify Dirty Parameters on Server2 Ubuntu23:
    echo 10 > /proc/sys/vm/dirty_background_ratio
    echo 20 > /proc/sys/vm/dirty_ratio
    echo 3000 > /proc/sys/vm/dirty_expire_centisecs
    echo 500 > /proc/sys/vm/dirty_writeback_centisecs
    echo $((64*1024*1024)) | sudo tee /proc/sys/vm/dirty_background_bytes
    echo $((512*1024*1024)) | sudo tee /proc/sys/vm/dirty_bytes
    sync; echo 3 > /proc/sys/vm/drop_caches
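
    One detail about these knobs: dirty_ratio/dirty_bytes (and the corresponding background pair) are counterparts, so setting one zeroes the other; with the commands above, the byte-based limits end up in effect. To make whatever combination works survive a reboot, a sysctl drop-in is the usual route (file name is only an example):
    Code:
    # /etc/sysctl.d/90-dirty-writeback.conf   (example file name)
    vm.dirty_background_bytes = 67108864
    vm.dirty_bytes = 536870912
    vm.dirty_expire_centisecs = 3000
    vm.dirty_writeback_centisecs = 500

    # apply without rebooting
    sudo sysctl --system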

  9. #9
    Join Date
    Feb 2024
    Beans
    15

    Re: Ubuntu 14 vs 20x File Copy Performance Slow

    The issue is not resolved.
    I am now working through using your latest suggestion for a clearer benchmark.
    Thank you.

  10. #10
    Join Date
    Feb 2024
    Beans
    15

    Re: Ubuntu 14 vs 20x File Copy Performance Slow

    Using fio to get a more detailed Benchmark:
    fio --name TEST_WRITE --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting && fio --name TEST_READ --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    Code:
    TEST_WRITE: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.35
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][21.9%][w=283MiB/s][w=283 IOPS][eta 00m:25s]
    Jobs: 1 (f=1): [W(1)][39.4%][w=280MiB/s][w=280 IOPS][eta 00m:20s]
    Jobs: 1 (f=1): [W(1)][55.9%][w=274MiB/s][w=274 IOPS][eta 00m:15s]
    Jobs: 1 (f=1): [W(1)][73.5%][w=302MiB/s][w=302 IOPS][eta 00m:09s]
    Jobs: 1 (f=1): [W(1)][91.2%][w=220MiB/s][w=220 IOPS][eta 00m:03s]
    Jobs: 1 (f=1): [W(1)][100.0%][w=287MiB/s][w=287 IOPS][eta 00m:00s]
    TEST_WRITE: (groupid=0, jobs=1): err= 0: pid=1780: Tue Feb 13 17:10:47 2024
      write: IOPS=295, BW=296MiB/s (310MB/s)(10.0GiB/34622msec); 0 zone resets
        slat (usec): min=85, max=130670, avg=217.52, stdev=2591.22
        clat (msec): min=8, max=333, avg=107.82, stdev=32.53
         lat (msec): min=8, max=333, avg=108.03, stdev=32.43
        clat percentiles (msec):
         |  1.00th=[   23],  5.00th=[   56], 10.00th=[   78], 20.00th=[   89],
         | 30.00th=[   94], 40.00th=[   99], 50.00th=[  103], 60.00th=[  109],
         | 70.00th=[  118], 80.00th=[  131], 90.00th=[  148], 95.00th=[  167],
         | 99.00th=[  199], 99.50th=[  215], 99.90th=[  288], 99.95th=[  317],
         | 99.99th=[  330]
       bw (  KiB/s): min=219136, max=584558, per=100.00%, avg=302971.26, stdev=51309.14, samples=69
       iops        : min=  214, max=  570, avg=295.84, stdev=50.04, samples=69
      lat (msec)   : 10=0.19%, 20=0.72%, 50=2.46%, 100=42.12%, 250=54.36%
      lat (msec)   : 500=0.16%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=233773k, max=233773k, avg=233773249.00, stdev= 0.00
        sync percentiles (msec):
         |  1.00th=[  234],  5.00th=[  234], 10.00th=[  234], 20.00th=[  234],
         | 30.00th=[  234], 40.00th=[  234], 50.00th=[  234], 60.00th=[  234],
         | 70.00th=[  234], 80.00th=[  234], 90.00th=[  234], 95.00th=[  234],
         | 99.00th=[  234], 99.50th=[  234], 99.90th=[  234], 99.95th=[  234],
         | 99.99th=[  234]
      cpu          : usr=2.32%, sys=2.75%, ctx=9498, majf=0, minf=13
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    
    Run status group 0 (all jobs):
      WRITE: bw=296MiB/s (310MB/s), 296MiB/s-296MiB/s (310MB/s-310MB/s), io=10.0GiB (10.7GB), run=34622-34622msec
    
    
    Disk stats (read/write):
      sdc: ios=0/17646, merge=0/592, ticks=0/1852003, in_queue=1852002, util=99.71%
    TEST_READ: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.35
    Starting 1 process
    Jobs: 1 (f=1): [R(1)][70.0%][r=889MiB/s][r=889 IOPS][eta 00m:03s]
    Jobs: 1 (f=1): [R(1)][100.0%][r=922MiB/s][r=921 IOPS][eta 00m:00s]
    TEST_READ: (groupid=0, jobs=1): err= 0: pid=1787: Tue Feb 13 17:10:58 2024
      read: IOPS=959, BW=960MiB/s (1006MB/s)(10.0GiB/10669msec)
        slat (usec): min=78, max=5005, avg=143.03, stdev=213.65
        clat (usec): min=10039, max=78712, avg=33013.74, stdev=5933.09
         lat (usec): min=10569, max=78793, avg=33156.77, stdev=5898.63
        clat percentiles (usec):
         |  1.00th=[16581],  5.00th=[27395], 10.00th=[28443], 20.00th=[28705],
         | 30.00th=[29230], 40.00th=[30016], 50.00th=[31327], 60.00th=[33162],
         | 70.00th=[35914], 80.00th=[38011], 90.00th=[40109], 95.00th=[41157],
         | 99.00th=[52167], 99.50th=[61080], 99.90th=[67634], 99.95th=[71828],
         | 99.99th=[76022]
       bw (  KiB/s): min=821248, max=1134592, per=99.99%, avg=982734.14, stdev=101143.49, samples=21
       iops        : min=  802, max= 1108, avg=959.62, stdev=98.68, samples=21
      lat (msec)   : 20=1.68%, 50=96.81%, 100=1.51%
      cpu          : usr=0.39%, sys=13.31%, ctx=7255, majf=0, minf=8204
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    
    Run status group 0 (all jobs):
       READ: bw=960MiB/s (1006MB/s), 960MiB/s-960MiB/s (1006MB/s-1006MB/s), io=10.0GiB (10.7GB), run=10669-10669msec
    
    
    Disk stats (read/write):
      sdc: ios=12573/3, merge=3058/1, ticks=399820/169, in_queue=399989, util=99.23%
