
Thread: ZFS create

  1. #1
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    ZFS create

    Just starting out on ZFS.
    I did a test run with some scrub laptop drives in a six-drive enclosure that fits a single 5.25" bay (SATA only, thin 2.5" drives, I think 9 mm or less per bay) to create a pool, which I will destroy, as many here have warned against using laptop drives in any type of RAID configuration.
    Like I said, this was a test before the actual enterprise-class drives and supporting HBA/cabling arrive within the next 2 weeks.
    Code:
    mike@beastie:~$ zpool status
      pool: teststore
     state: ONLINE
      scan: scrub repaired 0B in 00:00:26 with 0 errors on Sun Sep 15 07:19:36 2024
    config:
    
    
            NAME        STATE     READ WRITE CKSUM
            teststore   ONLINE       0     0     0
              raidz1-0  ONLINE       0     0     0
                sda1    ONLINE       0     0     0
                sdb1    ONLINE       0     0     0
                sdc1    ONLINE       0     0     0
                sdd1    ONLINE       0     0     0
                sde1    ONLINE       0     0     0
            spares
              sdf1      AVAIL
    
    
    errors: No known data errors
    mike@beastie:~$ zpool list
    NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    teststore  2.27T   504G  1.77T        -         -     0%    21%  1.00x    ONLINE  -
    mike@beastie:~$ zpool list
    NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    teststore  2.27T   566G  1.71T        -         -     0%    24%  1.00x    ONLINE  -
    mike@beastie:~$ zpool list
    NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    teststore  2.27T   858G  1.43T        -         -     0%    36%  1.00x    ONLINE  -
    mike@beastie:~$ zpool list
    NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    teststore  2.27T  1018G  1.27T        -         -     0%    43%  1.00x    ONLINE  -
    mike@beastie:~$ zpool list
    NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    teststore  2.27T  1.02T  1.24T        -         -     0%    45%  1.00x    ONLINE  -
    mike@beastie:~$ zpool status
      pool: teststore
     state: ONLINE
      scan: scrub repaired 0B in 00:00:26 with 0 errors on Sun Sep 15 07:19:36 2024
    config:
    
    
            NAME        STATE     READ WRITE CKSUM
            teststore   ONLINE       0     0     0
              raidz1-0  ONLINE       0     0     0
                sda1    ONLINE       0     0     0
                sdb1    ONLINE       0     0     0
                sdc1    ONLINE       0     0     0
                sdd1    ONLINE       0     0     0
                sde1    ONLINE       0     0     0
            spares
              sdf1      AVAIL
    
    
    errors: No known data errors
    mike@beastie:~$ zpool list
    NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    teststore  2.27T  1.22T  1.05T        -         -     0%    53%  1.00x    ONLINE  -
    mike@beastie:~$ zpool list
    NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    teststore  2.27T  1.23T  1.04T        -         -     0%    54%  1.00x    ONLINE  -
    mike@beastie:~$ zpool list
    NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    teststore  2.27T  1.34T   947G        -         -     0%    59%  1.00x    ONLINE  -
    mike@beastie:~$ zpool iostat -vl teststore
                  capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
    pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
    ----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
    teststore   1.34T   947G      0    239  2.84K  39.9M   30ms    7ms   30ms    3ms    1us    1us    4us    3ms   52ms      -
      raidz1-0  1.34T   947G      0    239  2.84K  39.9M   30ms    7ms   30ms    3ms    1us    1us    4us    3ms   52ms      -
        sda1        -      -      0     49    564  7.98M   30ms    6ms   29ms    3ms    1us    2us    1us    2ms   28ms      -
        sdb1        -      -      0     49    623  7.98M   30ms    7ms   30ms    3ms    2us    1us    1us    3ms   62ms      -
        sdc1        -      -      0     49    575  7.98M   30ms    6ms   29ms    3ms    1us    1us   16us    2ms   50ms      -
        sdd1        -      -      0     45    573  7.98M   32ms    8ms   31ms    4ms    1us    1us    1us    4ms   60ms      -
        sde1        -      -      0     46    572  7.98M   30ms    7ms   29ms    4ms    1us    1us    1us    3ms   63ms      -
    ----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
    mike@beastie:~$ zpool list
    NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    teststore  2.27T  1.34T   947G        -         -     0%    59%  1.00x    ONLINE  -
    mike@beastie:~$ zpool status
      pool: teststore
     state: ONLINE
      scan: scrub repaired 0B in 00:00:26 with 0 errors on Sun Sep 15 07:19:36 2024
    config:
    
    
            NAME        STATE     READ WRITE CKSUM
            teststore   ONLINE       0     0     0
              raidz1-0  ONLINE       0     0     0
                sda1    ONLINE       0     0     0
                sdb1    ONLINE       0     0     0
                sdc1    ONLINE       0     0     0
                sdd1    ONLINE       0     0     0
                sde1    ONLINE       0     0     0
            spares
              sdf1      AVAIL
    
    
    errors: No known data errors
    mike@beastie:~$ lsblk
    NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
    loop0         7:0    0  63.9M  1 loop /snap/core20/2105
    loop1         7:1    0  63.9M  1 loop /snap/core20/2318
    loop2         7:2    0    87M  1 loop /snap/lxd/27037
    loop3         7:3    0    87M  1 loop /snap/lxd/29351
    loop4         7:4    0  40.4M  1 loop /snap/snapd/20671
    loop5         7:5    0  38.8M  1 loop /snap/snapd/21759
    sda           8:0    0 465.8G  0 disk
    └─sda1        8:1    0 465.8G  0 part
    sdb           8:16   0 465.8G  0 disk
    └─sdb1        8:17   0 465.8G  0 part
    sdc           8:32   0 465.8G  0 disk
    └─sdc1        8:33   0 465.8G  0 part
    sdd           8:48   0 465.8G  0 disk
    └─sdd1        8:49   0 465.8G  0 part
    sde           8:64   0 465.8G  0 disk
    └─sde1        8:65   0 465.8G  0 part
    sdf           8:80   0 465.8G  0 disk
    └─sdf1        8:81   0 465.8G  0 part
    nvme0n1     259:0    0 238.5G  0 disk
    ├─nvme0n1p1 259:1    0     1G  0 part /boot/efi
    └─nvme0n1p2 259:2    0 237.4G  0 part /
    It has gone well so far, and I'm impressed with how easy it is, although I'm still learning.

    But it got me thinking, which leads to an actual question.

    I had seen a post/article elsewhere where a person assigned an alias to each drive rather than just using the standard /dev/sd*# method, which seems to make sense as a way to cleanly track down a faulty drive.
    (I'll have to reread the article/post; I don't recall whether he assembled the pool using the assigned aliases or the drive IDs, but I think he used the aliases.)

    I've also seen others use the drive ID, which makes a lot more sense than the method I used.

    When creating a pool with the incoming SAS drives (6 x 4 TB), what is the best method for assembling the drives into a single zpool?
    I know that is a subjective question.

    (This will be a NAS/NFS machine to back up media from the Plex server, plus some documents such as ATF tax stamp paperwork and DD214s. I'm planning on a wake-on-LAN configuration so it's not on 24x7.)
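    In the meantime, here is a minimal sketch of the two naming approaches I have been reading about. Everything below is illustrative only: the pool name "tank", the wwn-0x5000000000000001... IDs, and the bay1/bay2 aliases are placeholders, not my actual drives; the alias method relies on the /etc/zfs/vdev_id.conf helper that ships with the ZFS-on-Linux tools.
    Code:
    # Option 1: build the pool from persistent /dev/disk/by-id links instead of sda/sdb/...
    sudo zpool create -o ashift=12 tank raidz1 \
        /dev/disk/by-id/wwn-0x5000000000000001 \
        /dev/disk/by-id/wwn-0x5000000000000002 \
        /dev/disk/by-id/wwn-0x5000000000000003 \
        /dev/disk/by-id/wwn-0x5000000000000004 \
        /dev/disk/by-id/wwn-0x5000000000000005
    
    # Option 2: give each bay a human-readable alias in /etc/zfs/vdev_id.conf, e.g.
    #   alias bay1 /dev/disk/by-id/wwn-0x5000000000000001
    #   alias bay2 /dev/disk/by-id/wwn-0x5000000000000002
    # then refresh the udev links and create the pool from the aliases:
    sudo udevadm trigger
    ls -l /dev/disk/by-vdev/    # the aliases should appear here
    sudo zpool create -o ashift=12 tank raidz1 bay1 bay2 bay3 bay4 bay5
    # (if the short names don't resolve, use the full /dev/disk/by-vdev/bay1 ... paths)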
    Last edited by sgt-mike; September 16th, 2024 at 08:14 PM.

  2. #2
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    Re: ZFS create

    While I have been waiting on a response to the post above, I have played with striped pools and raidz1 in differing scenarios/configurations, meanwhile reading various write-ups.
    I ran across one that discussed 512-byte vs. 4096-byte alignment of the pool, so I decided to try it.

    Code:
    mike@beastie:~$ sudo zpool history
    History for 'testpool':
    2024-09-17.23:04:14 zpool create -o ashift=12 testpool raidz1 sda1 sdb1 sdc1 sdd1 sde1 -f
    2024-09-17.23:15:13 zfs set compress=lz4 testpool
    
    
    # checking that the pool's sectors are aligned to 4K, i.e. ashift=12; if memory serves me correctly, for 512-byte sectors the ashift value is 9
    
    
    mike@beastie:~$  sudo zdb -C testpool | grep ashift
                    ashift: 12
    
    
    # ok great all the drives are aligned 
    
    
    mike@beastie:~$ sudo zpool status
      pool: testpool
     state: ONLINE
    config:
    
    
            NAME        STATE     READ WRITE CKSUM
            testpool    ONLINE       0     0     0
              raidz1-0  ONLINE       0     0     0
                sda1    ONLINE       0     0     0
                sdb1    ONLINE       0     0     0
                sdc1    ONLINE       0     0     0
                sdd1    ONLINE       0     0     0
                sde1    ONLINE       0     0     0
    
    
    errors: No known data errors
    mike@beastie:~$ sudo zpool list
    NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    testpool  2.27T  22.8G  2.24T
    mike@beastie:~$ sudo zpool iostat -vl 30 5
                  capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
    pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
    ----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
    testpool     190G  2.08T      0    507  1.59K  96.6M   32ms    6ms   32ms    3ms    1us    1us      -    2ms      -      -
      raidz1-0   190G  2.08T      0    507  1.59K  96.6M   32ms    6ms   32ms    3ms    1us    1us      -    2ms      -      -
        sda1        -      -      0    109    309  19.3M   25ms    5ms   25ms    3ms    1us    1us      -    2ms      -      -
        sdb1        -      -      0    104    286  19.3M   33ms    5ms   33ms    3ms    1us  953ns      -    2ms      -      -
        sdc1        -      -      0     95    271  19.3M   31ms    7ms   31ms    4ms    1us  998ns      -    3ms      -      -
        sdd1        -      -      0     93    470  19.3M   37ms    8ms   37ms    4ms    2us  975ns      -    3ms      -      -
        sde1        -      -      0    105    288  19.3M   25ms    5ms   25ms    3ms    1us    1us      -    2ms      -      -
    ----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
                  capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
    pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
    ----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
    testpool     194G  2.08T      0    813    955   139M   25ms    5ms   25ms    3ms    2us    1us      -    2ms      -      -
      raidz1-0   194G  2.08T      0    813    955   139M   25ms    5ms   25ms    3ms    2us    1us      -    2ms      -      -
        sda1        -      -      0    174      0  27.8M      -    4ms      -    2ms      -    1us      -    1ms      -      -
        sdb1        -      -      0    163    136  27.8M   25ms    4ms   25ms    3ms    1us  816ns      -    1ms      -      -
        sdc1        -      -      0    158    136  27.8M   25ms    6ms   25ms    3ms    3us  928ns      -    2ms      -      -
        sdd1        -      -      0    150    682  27.8M   25ms    6ms   25ms    3ms    2us  976ns      -    2ms      -      -
        sde1        -      -      0    166      0  27.8M      -    4ms      -    3ms      -    1us      -    1ms      -      -
    ----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
                  capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
    pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
    ----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
    testpool     198G  2.07T      0    647    819   131M   25ms    7ms   25ms    4ms    1us    1us      -    3ms      -      -
      raidz1-0   198G  2.07T      0    647    819   131M   25ms    7ms   25ms    4ms    1us    1us      -    3ms      -      -
        sda1        -      -      0    144      0  26.2M      -    5ms      -    3ms      -    1us      -    2ms      -      -
        sdb1        -      -      0    139    409  26.2M   20ms    6ms   20ms    3ms    1us    1us      -    2ms      -      -
        sdc1        -      -      0    117    136  26.2M   12ms   10ms   12ms    5ms  768ns    1us      -    4ms      -      -
        sdd1        -      -      0    117    273  26.2M   37ms    9ms   37ms    5ms    1us    1us      -    4ms      -      -
        sde1        -      -      0    128      0  26.2M      -    6ms      -    4ms      -    1us      -    3ms      -      -
    ----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
                  capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
    pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
    ----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
    testpool     202G  2.07T      0    567    819   139M   31ms    9ms   31ms    5ms    1us  876ns      -    4ms      -      -
      raidz1-0   202G  2.07T      0    567    819   139M   31ms    9ms   31ms    5ms    1us  876ns      -    4ms      -      -
        sda1        -      -      0    115      0  27.8M      -    9ms      -    5ms      -    1us      -    4ms      -      -
        sdb1        -      -      0    116    136  27.8M   25ms    9ms   25ms    5ms    1us  752ns      -    4ms      -      -
        sdc1        -      -      0    110    409  27.8M   29ms    9ms   29ms    5ms    1us  720ns      -    4ms      -      -
        sdd1        -      -      0    102    273  27.8M   37ms   11ms   37ms    6ms  768ns  752ns      -    5ms      -      -
        sde1        -      -      0    122      0  27.8M      -    7ms      -    4ms      -  912ns      -    3ms      -      -
    ----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
                  capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
    pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
    ----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
    testpool     205G  2.07T      0    573    682   132M   20ms    8ms   20ms    4ms    2us    1us      -    4ms      -      -
      raidz1-0   205G  2.07T      0    573    682   132M   20ms    8ms   20ms    4ms    2us    1us      -    4ms      -      -
        sda1        -      -      0    122    136  26.4M   25ms    7ms   25ms    4ms  768ns    1us      -    3ms      -      -
        sdb1        -      -      0    116      0  26.3M      -    8ms      -    4ms      -  787ns      -    3ms      -      -
        sdc1        -      -      0    109    136  26.4M   12ms    9ms   12ms    5ms  768ns  844ns      -    4ms      -      -
        sdd1        -      -      0    102    273  26.4M   18ms   10ms   18ms    6ms    4us  921ns      -    5ms      -      -
        sde1        -      -      0    122    136  26.4M   25ms    7ms   25ms    4ms  768ns  979ns      -    3ms      -      -
    ----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
    mike@beastie:~$
    When I ran the iostat I had my Plex server (second-gen i7) sending data to the NAS/NFS via scp; for my 2 GB files the transfer took approximately 15 seconds or less.

    When I tried a pool that wasn't aligned, I had a little higher bandwidth on the writes (by about 2 MB/s), but my reads were lower than with the sectors aligned.
    For some strange reason, out of the 5 test drives I'm using, /dev/sda1 reported a minimum/optimal I/O size of 4096, but the rest of the drives reported 512:
    Code:
    mike@beastie:~$ sudo fdisk -l /dev/sd[abcde]
    [sudo] password for mike:
    Disk /dev/sda: 465.76 GiB, 500107862016 bytes, 976773168 sectors
    Disk model: TOSHIBA MQ01ABF0
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier: D8F0C9C3-F33A-284C-B647-5DE4701FD79D
    
    
    Device     Start       End   Sectors   Size Type
    /dev/sda1   2048 976773134 976771087 465.8G Linux filesystem
    
    
    
    
    Disk /dev/sdb: 465.76 GiB, 500107862016 bytes, 976773168 sectors
    Disk model: TOSHIBA MQ01ABF0
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 31DAF47B-2E32-7740-BFE9-6EB48600A314
    
    
    Device     Start       End   Sectors   Size Type
    /dev/sdb1   2048 976773134 976771087 465.8G Linux filesystem
    
    
    
    
    Disk /dev/sdc: 465.76 GiB, 500107862016 bytes, 976773168 sectors
    Disk model: TOSHIBA MQ01ABF0
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: F81E7542-8C6C-AC42-B52D-5333E42A1063
    
    
    Device     Start       End   Sectors   Size Type
    /dev/sdc1   2048 976773134 976771087 465.8G Linux filesystem
    
    
    
    
    Disk /dev/sdd: 465.76 GiB, 500107862016 bytes, 976773168 sectors
    Disk model: TOSHIBA MQ01ABF0
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: CA5AEF9E-6E7E-0043-9477-06D2CF49999B
    
    
    Device     Start       End   Sectors   Size Type
    /dev/sdd1   2048 976773134 976771087 465.8G Linux filesystem
    
    
    
    
    Disk /dev/sde: 465.76 GiB, 500107862016 bytes, 976773168 sectors
    Disk model: TOSHIBA MQ01ABF0
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 465CD09B-2631-844D-A651-A3632B376784
    
    
    Device     Start       End   Sectors   Size Type
    /dev/sde1   2048 976773134 976771087 465.8G Linux filesystem
    mike@beastie:~$
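    A quicker way to eyeball the sector sizes on all of the test drives at once is lsblk's sector-size columns; this is just another view of the same information fdisk reports above.
    Code:
    # -d = whole disks only; LOG-SEC/PHY-SEC = logical/physical sector size,
    # MIN-IO/OPT-IO = the minimum/optimal I/O sizes fdisk prints
    lsblk -d -o NAME,MODEL,SIZE,LOG-SEC,PHY-SEC,MIN-IO,OPT-IO /dev/sd[abcde]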
    Part of the reason I'm posting this is that I'm not 100% sure whether higher read/write bandwidth is actually better; I'm assuming it is.
    I even tried a pool with 3 drives in one raidz1 vdev (raidz1-0) and the other two drives in a second vdev (raidz1-1), combined into one pool.

    The only downside I can think of in my testing is that it is dependent on my network throughput. I could use fio, but I'm not familiar with it yet.

  3. #3
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    Re: ZFS create

    I stumbled upon some fio commands for benchmarking and ran some tests on the testpool.
    Here are the results:

    Code:
    mike@beastie:/testpool$ sync;fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4M --size=4G --readwrite=write --ramp_time=4
    test: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=1
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][30.8%][w=216MiB/s][w=54 IOPS][eta 00m:18s]
    test: (groupid=0, jobs=1): err= 0: pid=1777710: Wed Sep 18 11:33:46 2024
      write: IOPS=54, BW=221MiB/s (231MB/s)(844MiB/3825msec); 0 zone resets
        slat (usec): min=16906, max=20276, avg=18196.26, stdev=739.38
        clat (nsec): min=1620, max=19497, avg=4672.02, stdev=2344.46
         lat (usec): min=16912, max=20278, avg=18207.51, stdev=736.89
        clat percentiles (nsec):
         |  1.00th=[ 1688],  5.00th=[ 1960], 10.00th=[ 2288], 20.00th=[ 3344],
         | 30.00th=[ 3728], 40.00th=[ 4016], 50.00th=[ 4320], 60.00th=[ 4640],
         | 70.00th=[ 4960], 80.00th=[ 5920], 90.00th=[ 6752], 95.00th=[ 7072],
         | 99.00th=[17280], 99.50th=[17280], 99.90th=[19584], 99.95th=[19584],
         | 99.99th=[19584]
       bw (  KiB/s): min=212992, max=238044, per=99.99%, avg=225933.14, stdev=9390.00, samples=7
       iops        : min=   52, max=   58, avg=55.14, stdev= 2.27, samples=7
      lat (usec)   : 2=5.71%, 4=31.43%, 10=60.95%, 20=1.90%
      cpu          : usr=0.31%, sys=9.41%, ctx=6756, majf=0, minf=58
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,210,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=1
    
    
    Run status group 0 (all jobs):
      WRITE: bw=221MiB/s (231MB/s), 221MiB/s-221MiB/s (231MB/s-231MB/s), io=844MiB (885MB), run=3825-3825msec
    mike@beastie:/testpool$ sync;fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4M --size=4G --readwrite=read --ramp_time=4
    test: (g=0): rw=read, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=1
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=0)
    test: (groupid=0, jobs=1): err= 0: pid=1801830: Wed Sep 18 11:34:04 2024
      read: IOPS=928, BW=3714MiB/s (3894MB/s)(4096MiB/1103msec)
      cpu          : usr=0.00%, sys=99.91%, ctx=0, majf=0, minf=1032
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=1
    
    
    Run status group 0 (all jobs):
       READ: bw=3714MiB/s (3894MB/s), 3714MiB/s-3714MiB/s (3894MB/s-3894MB/s), io=4096MiB (4295MB), run=1103-1103msec
    mike@beastie:/testpool$
    To be honest, I don't know if the results are bad, good, average, or what; some feedback would be appreciated.
    My intent here is to have a baseline before the actual drives go in, and to gain some knowledge before that.
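    One thing I'm second-guessing is the read number: 3.7 GB/s out of five 5400 rpm laptop drives almost has to be coming from the ARC (RAM cache) rather than the disks. If I understand the docs correctly, telling the pool to cache only metadata for the duration of the test should give a more disk-bound read figure; a rough sketch of what I mean (and I'd set it back afterward):
    Code:
    # Cache only metadata while benchmarking, rerun the read test, then restore the default
    sudo zfs set primarycache=metadata testpool
    fio --name=test --filename=/testpool/test --rw=read --bs=4M --size=4G --ioengine=libaio --direct=1
    sudo zfs inherit primarycache testpool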

  4. #4
    Join Date
    May 2018
    Location
    Here and There
    Beans
    Hidden!
    Distro
    Xubuntu Development Release

    Re: ZFS create

    Quote Originally Posted by sgt-mike View Post
    To be honest, I don't know if the results are bad, good, average, or what; some feedback would be appreciated.
    My intent here is to have a baseline before the actual drives go in, and to gain some knowledge before that.
    I would say those are above-average reads/writes.

    Not many here, to my knowledge, use ZFS on 'Buntu at this time. (Experimental)

    And what you see now will change randomly, depending on the load on transfers (size/memory).

    There are a few very good posts here in UF; search for MAFoElffen. I use Arch for my ZFS systems, so I won't be much use to you.

    Here is one of many: https://ubuntuforums.org/showthread....ght=MAFoElffen
    Last edited by 1fallen; September 18th, 2024 at 07:46 PM.
    "When you practice gratefulness, there is a sense of respect toward others." >>Dalai Lama

  5. #5
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    Re: ZFS create

    Quote Originally Posted by 1fallen2 View Post
    I would say those are above-average reads/writes.

    Not many here, to my knowledge, use ZFS on 'Buntu at this time. (Experimental)

    And what you see now will change randomly, depending on the load on transfers (size/memory).

    There are a few very good posts here in UF; search for MAFoElffen. I use Arch for my ZFS systems, so I won't be much use to you.
    Yes 1fallen2

    MAFoElffen was the person who helped me attempt to assemble this server (which is still not done; some more components and drives are en route within the next few days/weeks, USPS depending), and we held brief discussions on ZFS.

    I stated I would use that filesystem on the server, as I thought it would be great to learn.

    As of late I've been unable to contact him, so as you mentioned, I'll pretty much have to glean from what he has written in other posts in the past. This leaves me hoping that nothing has happened and it's just a simple vacation.
    Couple that with 1fallen stating he is leaving the forum, and the news/discussion within the Cafe about a migration of the forum, and my view from the foxhole is kinda bleak, to say the least.

    On the server hardware side, it is currently at 32 GB of RAM (Asus X99-A motherboard, i7-5930K, boot drive is a 128 GB WD NVMe), and within the next thirty days I plan on doubling the RAM. I updated the BIOS to the latest from Asus before anything was installed. The Asus documentation is kinda fuzzy on whether the board will address a maximum of 64 or 128 GB of RAM; whichever it is, I plan on maxing out the RAM 32 GB at a time. That should boost some things on the write/read side.

    The data drives (I guess calling them a vdev is the correct verbiage) for this test are 5 Toshiba MQ01ABF050-family 500 GB, 5400 rpm, 2.5" drives in a 6-bay SATA drive enclosure, driven by the motherboard SATA ports. I had used them in a media server in an mdadm RAID 5 configuration.

    They will be replaced with 6 Dell Constellation ES.3 7.2K rpm 4 TB SAS 6 Gbps drives connected to an LSI 9300-16i 16-port SAS HBA.
    (Even though the seller stated as much in the ad, I'm hoping the LSI controller is in IT mode and not IR. While I'm sure I can flash it over to IT mode, I haven't flashed a SCSI BIOS since the mid-1980s, when drives only came in 5.25" sizes. That leaves me leery, actually cautious, about performing the task.)

    My main reason for choosing the SAS controller was so I could use either SATA or SAS drives (obviously not in the same pool).

    Within the Asus BIOS I have the capability to shut down the SATA controller that is not driving the boot drive (the board has two SATA controllers).
    I'm considering doing that, but I'm on the fence between leaving them alone and disabling one controller once the LSI HBA is installed. Thoughts, anyone? In my mind it seems as if it "should" free up some IRQs.
    Last edited by sgt-mike; September 18th, 2024 at 09:33 PM.

  6. #6
    Join Date
    May 2018
    Location
    Here and There
    Beans
    Hidden!
    Distro
    Xubuntu Development Release

    Re: ZFS create

    Quote Originally Posted by sgt-mike View Post
    As of late I've been unable to contact him, so as you mentioned, I'll pretty much have to glean from what he has written in other posts in the past. This leaves me hoping that nothing has happened and it's just a simple vacation.
    Yep on my end as well, We all hope the best here.
    Quote Originally Posted by sgt-mike View Post
    Yes 1fallen

    Within the Asus BIOS I have the capability to shut down the SATA controller that is not driving the boot drive (the board has two SATA controllers).
    I'm considering doing that, but I'm on the fence between leaving them alone and disabling one controller once the LSI HBA is installed. Thoughts, anyone?

    Well, just to be safe, be sure to export that pool or pools first, if that becomes necessary.
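    Something along these lines, if it does come to that (use whichever pool name is live at the time):
    Code:
    # Export cleanly before swapping controllers / recabling
    sudo zpool export testpool
    # After the hardware change, import it again; scanning /dev/disk/by-id
    # brings the pool back under stable device names
    sudo zpool import -d /dev/disk/by-id testpool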
    Last edited by 1fallen; September 18th, 2024 at 09:18 PM.
    "When you practice gratefulness, there is a sense of respect toward others." >>Dalai Lama

  7. #7
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    Re: ZFS create

    Normally, yes, I would export, but the pool actually contains nothing that can't be reloaded, so no loss there. As you suggested, I researched some of his posts and did just that.

    I reviewed how he would test his systems and the fio commands he used; there is no way my current 20+ year old 2.5" 5400 rpm scrub rusty drives would even think about keeping up with MAFoElffen's (Mike's) Samsung 870 2 TB SSDs.

    I destroyed the pool and then re-created it, this time as a 5-disk raidz2, and still used -o ashift=12 in the create command.
    (See the first line of the zpool history output below; it shows the setup commands I used, which are supposed to align the drives. I'm not sure it's really needed performance-wise. I would have used UUIDs to assemble the drives, but it's strange: when I run blkid or try lsblk, those drives don't report a UUID. The NVMe, however, shows its UUID every time.)

    Code:
    mike@beastie:/$ sudo zpool history
    History for 'testpool':
    2024-09-18.16:39:05 zpool create -o ashift=12 testpool raidz2 sda1 sdb1 sdc1 sdd1 sde1 -f
    2024-09-18.16:40:24 zfs set compress=lz4 testpool
    
    
    mike@beastie:/$ sudo zpool status
      pool: testpool
     state: ONLINE
    config:
    
    
            NAME        STATE     READ WRITE CKSUM
            testpool    ONLINE       0     0     0
              raidz2-0  ONLINE       0     0     0
                sda1    ONLINE       0     0     0
                sdb1    ONLINE       0     0     0
                sdc1    ONLINE       0     0     0
                sdd1    ONLINE       0     0     0
                sde1    ONLINE       0     0     0
    
    
    errors: No known data errors
    mike@beastie:/$ sudo zpool list
    NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    testpool  2.27T  1.23M  2.27T        -         -     0%     0%  1.00x    ONLINE  -
    
    mike@beastie:/testpool$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    TEST: Laying out IO file (1 file / 2048MiB)
    Jobs: 1 (f=1): [W(1)][38.9%][w=554MiB/s][w=553 IOPS][eta 00m:11s]
    Jobs: 1 (f=1): [W(1)][56.5%][w=208MiB/s][w=208 IOPS][eta 00m:10s]
    Jobs: 1 (f=1): [W(1)][76.0%][w=436MiB/s][w=436 IOPS][eta 00m:06s]
    Jobs: 1 (f=1): [W(1)][89.3%][w=199MiB/s][w=199 IOPS][eta 00m:03s]
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
    Jobs: 1 (f=1): [W(1)][97.4%][eta 00m:01s]
    Jobs: 1 (f=1): [W(1)][97.4%][eta 00m:01s]
    TEST: (groupid=0, jobs=1): err= 0: pid=319677: Wed Sep 18 16:45:12 2024
      write: IOPS=267, BW=268MiB/s (281MB/s)(10.0GiB/38246msec); 0 zone resets
        slat (usec): min=274, max=9307, avg=2764.48, stdev=1544.85
        clat (usec): min=2, max=9938.6k, avg=115141.89, stdev=542234.64
         lat (usec): min=320, max=9940.5k, avg=117907.03, stdev=542325.14
        clat percentiles (msec):
         |  1.00th=[   11],  5.00th=[   13], 10.00th=[   15], 20.00th=[   24],
         | 30.00th=[   56], 40.00th=[   70], 50.00th=[   85], 60.00th=[  104],
         | 70.00th=[  123], 80.00th=[  138], 90.00th=[  150], 95.00th=[  155],
         | 99.00th=[  159], 99.50th=[  159], 99.90th=[ 9866], 99.95th=[ 9866],
         | 99.99th=[10000]
       bw (  KiB/s): min=200704, max=2181120, per=100.00%, avg=358184.42, stdev=311594.61, samples=57
       iops        : min=  196, max= 2130, avg=349.79, stdev=304.29, samples=57
      lat (usec)   : 4=0.02%, 10=0.03%, 500=0.01%, 750=0.01%, 1000=0.01%
      lat (msec)   : 2=0.04%, 4=0.08%, 10=0.87%, 20=12.54%, 50=11.25%
      lat (msec)   : 100=33.48%, 250=41.37%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1168, max=1168, avg=1168.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1176],  5.00th=[ 1176], 10.00th=[ 1176], 20.00th=[ 1176],
         | 30.00th=[ 1176], 40.00th=[ 1176], 50.00th=[ 1176], 60.00th=[ 1176],
         | 70.00th=[ 1176], 80.00th=[ 1176], 90.00th=[ 1176], 95.00th=[ 1176],
         | 99.00th=[ 1176], 99.50th=[ 1176], 99.90th=[ 1176], 99.95th=[ 1176],
         | 99.99th=[ 1176]
      cpu          : usr=1.44%, sys=10.42%, ctx=68611, majf=0, minf=14
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    
    Run status group 0 (all jobs):
      WRITE: bw=268MiB/s (281MB/s), 268MiB/s-268MiB/s (281MB/s-281MB/s), io=10.0GiB (10.7GB), run=38246-38246msec
    mike@beastie:/testpool$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=0): [f(1)][-.-%][r=3682MiB/s][r=3682 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=364773: Wed Sep 18 16:45:37 2024
      read: IOPS=3561, BW=3562MiB/s (3735MB/s)(10.0GiB/2875msec)
        slat (usec): min=229, max=890, avg=278.54, stdev=37.48
        clat (usec): min=2, max=23476, avg=8613.45, stdev=1134.15
         lat (usec): min=262, max=24368, avg=8892.27, stdev=1164.29
        clat percentiles (usec):
         |  1.00th=[ 5342],  5.00th=[ 8291], 10.00th=[ 8291], 20.00th=[ 8356],
         | 30.00th=[ 8356], 40.00th=[ 8356], 50.00th=[ 8356], 60.00th=[ 8356],
         | 70.00th=[ 8455], 80.00th=[ 8455], 90.00th=[10028], 95.00th=[10159],
         | 99.00th=[12649], 99.50th=[13042], 99.90th=[20317], 99.95th=[21890],
         | 99.99th=[23200]
       bw (  MiB/s): min= 2844, max= 3714, per=99.12%, avg=3530.40, stdev=384.01, samples=5
       iops        : min= 2844, max= 3714, avg=3530.40, stdev=384.01, samples=5
      lat (usec)   : 4=0.05%, 500=0.05%, 750=0.05%, 1000=0.05%
      lat (msec)   : 2=0.20%, 4=0.34%, 10=85.87%, 20=13.29%, 50=0.11%
      cpu          : usr=1.08%, sys=98.82%, ctx=8, majf=0, minf=8205
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    
    Run status group 0 (all jobs):
       READ: bw=3562MiB/s (3735MB/s), 3562MiB/s-3562MiB/s (3735MB/s-3735MB/s), io=10.0GiB (10.7GB), run=2875-2875msec
    mike@beastie:/testpool$
    On this attempt (test) the drives did pick up a bit in write and drop a bit in read, looking over the data from the striped and raidz1 runs vs. the raidz2.
    I think I prefer a setup similar to MAFoElffen's the best.

    Edit added: as an afterthought after posting, I rebooted the server and reran the fio test just to see if there were any differences; the results were VERY close to the same.
    Last edited by sgt-mike; September 18th, 2024 at 11:31 PM.

  8. #8
    Join Date
    May 2018
    Location
    Here and There
    Beans
    Hidden!
    Distro
    Xubuntu Development Release

    Re: ZFS create

    Refresh my memory here: do you have the same controller board as MAFoElffen?
    Also, his drives for storage were Samsung 980 Pros.
    "When I run blkid or try lsblk, those drives don't report a UUID."
    Use this; what you see above is normal with "blkid" and plain "lsblk":
    Code:
     lsblk -f 
    NAME        FSTYPE     FSVER LABEL            UUID                                 FSAVAIL FSUSE% MOUNTPOINTS 
    sda                                                                                                
    ├─sda1      vfat       FAT32                  174D-2B3B                                            
    └─sda2      zfs_member 5000  zpcachyos        16371488555828878549                

    Last edited by 1fallen; September 18th, 2024 at 11:37 PM.
    "When you practice gratefulness, there is a sense of respect toward others." >>Dalai Lama

  9. #9
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    Re: ZFS create

    I don't remember; let me check some posts he did. But he did recommend the HBA that I chose.

    Ohh, here is the output of lsblk -f, which is what I used at first:

    Code:
    mike@beastie:/testpool$ lsblk -f
    NAME FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
    loop0
                                                                       0   100% /snap/core20/2318
    loop1
                                                                       0   100% /snap/core20/2379
    loop2
                                                                       0   100% /snap/lxd/27037
    loop3
                                                                       0   100% /snap/lxd/29351
    loop4
                                                                       0   100% /snap/snapd/20671
    loop5
                                                                       0   100% /snap/snapd/21759
    sda
    └─sda1
         zfs_me 5000  testpool
                            3633765842062420381
    sdb
    └─sdb1
         zfs_me 5000  testpool
                            3633765842062420381
    sdc
    └─sdc1
         zfs_me 5000  testpool
                            3633765842062420381
    sdd
    └─sdd1
         zfs_me 5000  testpool
                            3633765842062420381
    sde
    └─sde1
         zfs_me 5000  testpool
                            3633765842062420381
    nvme0n1
    │
    ├─nvme0n1p1
    │    vfat   FAT32       4CB7-A6FA                                 1G     1% /boot/efi
    └─nvme0n1p2
         ext4   1.0         886863a1-0959-4ebb-b189-c5aa9521430a  212.2G     4% /
    mike@beastie:/testpool$
    As those drives have been in multiple systems, etc., I suspect I did something on my end, which is why the UUIDs are not showing up.
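    Then again, from what I can tell it may not be anything I did: a zfs_member partition doesn't carry a normal filesystem UUID, and the long number lsblk prints (3633765842062420381 on every member) looks like the pool GUID rather than a per-drive ID. If I want stable per-drive names for the next pool, the by-id links or the partition UUIDs seem to be the way to go, for example:
    Code:
    # Compare the number lsblk showed against the pool GUID
    sudo zpool get guid testpool
    # Persistent per-disk names (the actual link names differ per machine)
    ls -l /dev/disk/by-id/ | grep -v -- '-part'
    # PARTUUID is populated even when no filesystem UUID is exposed
    lsblk -o NAME,SIZE,FSTYPE,PARTUUID,UUID /dev/sd[abcdef]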
    Last edited by sgt-mike; September 18th, 2024 at 11:45 PM.

  10. #10
    Join Date
    May 2018
    Location
    Here and There
    Beans
    Hidden!
    Distro
    Xubuntu Development Release

    Re: ZFS create

    Quote Originally Posted by sgt-mike View Post
    I don't remember; let me check some posts he did. But he did recommend the HBA that I chose.
    OK, I thought so; I just needed to get my head around a thread from that long back. Yep, that's a good one then.
    I can't even remember yesterday with distinct clarity.
    "When you practice gratefulness, there is a sense of respect toward others." >>Dalai Lama
