
Thread: ZFS create

  1. #21
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    Re: ZFS create

    Quote Originally Posted by madscientist032 View Post
    I'm glad I found this thread; it looks like you and I are in similar spots. I'm in the process of rebuilding my home NAS and dabbling with RAID. Nothing too fancy (I'm not going ZFS or anything, at least not as of this writing), but I did want to share the parts I have and my experience; maybe it will help you with your build.

    1 - My HBA is this guy right here - it works like a dream out of the box, and the Fractal Design Define R5 it sits in houses just as many drives as the card itself can handle. I currently have 2x 4-port SATA breakouts connected, and it can handle 4x 4 for a total of 16 drives. The major difference between this one and what you've linked is cost. As of this post, this listing is 110 USD, but I got it for half that. If you really don't need 16 drives, then perhaps one of the two HBAs you mentioned above will do the job just fine; just make sure the controller is up to the task (and it seems you've already run into some of that!). I used this post from ServeTheHome as my starting point. One thing I haven't yet looked into is how SATA-based SSDs function on this HBA. I've heard some folks mention this LSI controller doesn't support TRIM, some claim it does, so YMMV (and in my case, I'm using all mechanical HDDs except for the boot drive). Another item you'll want to keep in mind is that the HBA's heatsink can get quite toasty, so make sure you've got adequate airflow; if not, it might be worth finding a 3D-printed shroud to mount a fan and cool things off that way. I think this card supports SAS 3, so you should be good there too.

    2 - I have had GREAT success with the hotswap bays you've listed above. Mine was a KingWin, but it looks exactly the same as the Athena Power 3x HDD cage. I use it for data ingestion & drive testing, as I've been in the process of validating all the drives in my possession plus the new/refurbished drives that have been arriving. It's been great. The only thing I need to mention is that the drives do get a bit toasty in there (35-42C), and the included fan is not a standard size; it's 75mm, I think? I don't recall, because I replaced mine with a 60mm Noctua fan as soon as the hotswap bay arrived 4 years ago. The fan is duct-taped to the cage and plugged into one of the motherboard chassis fan headers. I would recommend doing that if you do end up committing to these hard drive cages.

    I'm not trying to steamroll this thread, only wanted to share my knowledge in the hopes it would benefit your build.
    @madscientist032

    LOL, I'm glad you popped in with the links; they will be a great help, and personally I don't consider your post steamrolling this thread. As a matter of fact, I really should have made my last post in the thread I started dealing with the hardware build. When MAFoELffen recommended the HBA LSI 9300-16i, I downloaded the manual and have been scouring it quite a bit. When I saw the ads for the LSI 9300-8i, the only differences I could see between the manuals were the number of ports and that the heatsink is a lot larger on the 16-port internal version. And of course the cost is a bit more affordable on the 8i version. And your timing is perfect with the vendor. On the HBA, I'm planning on adding a fan to the heatsink to help with heat there as well. I'll look at the fans on the enclosures as you mentioned; one thought is to add a fan rack (or multiple) behind the enclosures and cabling to pull more air across the drives (something like this https://www.ebay.com/itm/145986384739 ), or simply replace them.

    On the HBAs we mentioned: I know that soon after I populate the drive bays, I will want to go outside the case for additional drives in the pool, which brings in the use of an expander (I've been eyeballing the Adaptec AEC-82885T hard; it has 2 external SFF-8644 ports out and is listed with 36 ports @ 12Gb/s), which can be powered either by the bus or via Molex, so it won't take up a slot. I actually drew cabling diagrams from the two different HBAs to the Adaptec expander. The 16i definitely won: if cabled correctly with an SFF-8643 to SFF-8644 adapter, it provides one more external SFF-8644 double port (which means I can use other existing computers' drive bays and power for the NFS, or use drive shelves).

    In closing, this leaves me deciding between three COAs to replace the defective HBA:
    a. send the original vendor an offer for two HBAs at a greatly reduced cost, thus affording them a chance to redeem themselves.
    b. just find a different vendor at the same price point or lower than the first vendor I used.
    c. message your vendor and have them send me an offer.
    (Unfortunately for me, I live in one of the few states that collect taxes on online sales, so I'm a cheapskate.)

    I really need to post my hardware updates back in the thread dealing with the hardware side of the build and stop the thread drift I've caused on this one.
    https://ubuntuforums.org/showthread.php?t=2496263
    Last edited by sgt-mike; September 23rd, 2024 at 08:37 AM.

  2. #22
    Join Date
    May 2014
    Beans
    1

    Re: ZFS create

    Quote Originally Posted by sgt-mike View Post
    Just starting out on ZFS.
    I had seen a post/article elsewhere where a person assigned an alias to the drives instead of just using the standard /dev/sd*# method, which seems to make some sense in order to cleanly track down a faulty drive.
    (I'll have to reread the article/post, as I don't recall whether he assembled the pool using the assigned alias or the drive ID, but I think he used the alias.)

    I've seen where others use the drive ID, which makes a lot of sense vs the method I used.
    Hi Mike,

    It is recommended that the devices in /dev/disk/by-id, not the /dev/sd* versions, be used for building your vdevs / pools.

    Sometimes the order in which drives get detected by the hardware can differ, and that has caused problems with software-RAID-style systems. The /dev/disk/by-id device names are guaranteed to be unique for each device.

    If nothing else, it gives you the peace of mind to swap them to a new HBA or motherboard without the device names changing, or even having to worry about which order you plug them in.
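
    For example (just a sketch with made-up disk IDs; your by-id names will differ), you can list the stable names and build a pool straight from them:

    Code:
    ls -l /dev/disk/by-id/ | grep -v part
    sudo zpool create tank raidz2 \
      /dev/disk/by-id/scsi-35000c500a1b2c3d4 \
      /dev/disk/by-id/scsi-35000c500a1b2c3d5 \
      /dev/disk/by-id/scsi-35000c500a1b2c3d6 \
      /dev/disk/by-id/scsi-35000c500a1b2c3d7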

    -Kirt

  3. #23
    Join Date
    May 2007
    Beans
    90
    Distro
    Ubuntu 22.04 Jammy Jellyfish

    Re: ZFS create

    Quote Originally Posted by kirtr View Post
    Hi Mike,

    It is recommended that the devices in /dev/disk/by-id, not the /dev/sd* versions, be used for building your vdevs / pools.

    Sometimes the order in which drives get detected by the hardware can differ, and that has caused problems with software-RAID-style systems. The /dev/disk/by-id device names are guaranteed to be unique for each device.

    If nothing else, it gives you the peace of mind to swap them to a new HBA or motherboard without the device names changing, or even having to worry about which order you plug them in.

    -Kirt
    Can confirm. I used to have a crontab entry that would mount via sd[a-c]1; however, one day I replaced a drive, swapped things around in the chassis, and all my mounts were messed up. It made for a helluva Sunday morning trying to fix, but in the end it forced me to re-think my setup (both my NAS & my primary Linux workstation), and I had to do some learning on proper storage management & administration.

    So needless to say, I ended up learning real quick about UUIDs, /etc/fstab, and lsblk -f.

    lsblk is critical for identifying drives; it shows where they are mounted (if mounted!) as well as any labels applied & device information.

    Here's a sample of what we're talking about, this will def help with your drive troubleshooting when that comes into play:

    My lsblk output (note: I am filtering out loopback devices for clarity & readability):

    Code:
    madsci@madsci-nas:~$ lsblk -f | grep -v loop
    NAME        FSTYPE            FSVER LABEL           UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
    sda                                                                                                    
    └─sda1      ext4              1.0   archive         e9f7b0eb-7ebc-431c-8b25-de22b9715ee5    5.2T    37% /archive
    sdb                                                                                                    
    └─sdb1      ext4              1.0   sandbox01       879b4b26-d7cb-48ff-b0d5-f378dfc8a79f    1.7T     1% /tmp/disk3
    sdc                                                                                                    
    └─sdc1      ext4              1.0   media           f71fde5d-baeb-472a-8fdd-dbf79c9f7c22      2T    39% /data/media
    sdd                                                                                                    
    └─sdd1      ext4              1.0   music           73801dd5-f2b0-48a9-98bd-80786291c443    1.2T    27% /data/music
    sde         linux_raid_member 1.2   madsci-nas:0     d257ac4d-7b66-eb63-141e-1300f398cc01                
    └─md0       ext4              1.0                   5f164fcd-7d18-419a-9d41-ea92094f0cff   77.8G    82% /scratch
    sdf         linux_raid_member 1.2   madsci-nas:0     d257ac4d-7b66-eb63-141e-1300f398cc01                
    └─md0       ext4              1.0                   5f164fcd-7d18-419a-9d41-ea92094f0cff   77.8G    82% /scratch
    sdg                                                                                                    
    └─sdg1      ext4              1.0   sandbox02       93a58c53-d8f0-4b73-a87d-00d239b5b721    1.6T     5% /tmp/disk1
    nvme0n1                                                                                                
    ├─nvme0n1p1 vfat              FAT32                 F20B-6791                             504.9M     1% /boot/efi
    └─nvme0n1p2 ext4              1.0   main            bc0e1b91-54e9-11ea-ae6e-a8a15900bd9c  221.9G    47% /

    Next here's what my /etc/fstab looks like (comments for clarity):

    Code:
    madsci@madsci-nas: ~$ cat /etc/fstab
    # 1TB Samsung 980 Pro        (M.2 NVME SSD, primary boot disk)
    UUID=bc0e1b91-54e9-11ea-ae6e-a8a15900bd9c / ext4 defaults 0 0
    UUID=F20B-6791 /boot/efi vfat defaults 0 0
    #
    # 10 TB Seagate IronWolf     (3.5 HDD 7200 RPM)
    UUID=e9f7b0eb-7ebc-431c-8b25-de22b9715ee5 /archive ext4 defaults 0 0
    #
    # 2TB Samsung Spinpoint      (2.5 HDD 5400 RPM)
    UUID=73801dd5-f2b0-48a9-98bd-80786291c443 /data/music ext4 defaults 0 0
    #
    # 4TB Seagate Barracuda      (3.5 HDD 5400 RPM)
    UUID=f71fde5d-baeb-472a-8fdd-dbf79c9f7c22 /data/video ext4 defaults 0 0
    #
    /swap.img       none    swap    sw      0       0
    You can see that the UUIDs tie together, and that the information from lsblk is used to build the /etc/fstab entries.

    One word of caution: if the fstab file is not constructed properly, or if there is an issue with a drive at boot, the system will go into emergency mode and drop you into a root terminal.
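
    If a data drive might be missing at boot, one option (just a sketch, reusing the /archive entry above; adjust to your own UUIDs) is the nofail mount option, so the system keeps booting instead of dropping to emergency mode:

    Code:
    # nofail lets boot continue if the drive is absent; the timeout shortens the wait for it
    UUID=e9f7b0eb-7ebc-431c-8b25-de22b9715ee5 /archive ext4 defaults,nofail,x-systemd.device-timeout=10s 0 0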

    Here are some additional resources (albeit from DigitalOcean, hope that's okay!) that may come in handy (or are a good refresher if you're already versed!)

  4. #24
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    Re: ZFS create

    @kirtr and @madscientist032
    Thanks a lot for your replies and advice; I'll put them to use when I proceed this time with the SAS drives and the actual zpool for production.

    I'll go back over this again as well as the Digital Ocean links (which I have used before for multiple things).

    In my playing around I used SATA 2.5" laptop drives to test configuring ZFS and establish a baseline to compare against the soon-to-be-installed SAS drives I ordered. I figured out what my actual problem was when I ran lsblk -f and why I never saw the UUIDs for the drives:
    1. Those drives had been used in an mdadm array (yes, I zeroed the superblocks).
    2. Because ZFS is a file system and soft RAID, I never formatted the drives, thus no UUID was present for lsblk -f to pick up and report. Derp. It never crossed my mind until later, when I was creating and destroying the zpool to play with differing setups: striped, raidz, raidz2, etc.

    When I did format (ext4) the drives prior to creating the zpool, they did report UUIDs, and I then used the UUIDs reported by lsblk -f to create the zpool, versus the earlier /dev/sd# names when the drives were unformatted.

    What ZFS did every time I created the zpool was automatically reformat the vdev (the raidz2 array) and mount it. The process really helped me understand it a little better. I really liked the ease of setting the zpool up, and even adding another raidz2 vdev to it was extremely painless. When I checked a zpool made up of 2 raidz2 vdevs, it would report them as raidz2-0 and raidz2-1 respectively. One other thing that set ZFS apart was that I never had to edit /etc/fstab; in my tests, when I rebooted multiple times (soft / hard), ZFS automatically mounted the zpool with each vdev.

    Another observation with ZFS vs mdadm is that one can add a spare (or spares) to the pool to service all of the raidz vdevs that make it up, which is really neat, versus mdadm needing to add one or two spares for each array. A rough sketch of what I mean is below.
    One way to think of it, even though it's probably (actually, not probably but actually) wrong and inaccurate, but it makes sense in my head, is that one can pool together multiple mdadm RAID arrays into one pool.
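
    Something like this (placeholder pool and device names, just to illustrate; a real build should use /dev/disk/by-id names as discussed above):

    Code:
    # two raidz2 vdevs in one pool, plus one hot spare shared by both vdevs
    sudo zpool create testpool \
      raidz2 sdb sdc sdd sde \
      raidz2 sdf sdg sdh sdi
    sudo zpool add testpool spare sdj
    sudo zpool status testpool    # reports raidz2-0, raidz2-1 and the spare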

    ---------------------------------------------------------------
    Got some news on the hardware, which I'll post in the other thread where I asked for advice on the hardware build.
    ______________________________________________________________________________
    9/25/2024 --- This is updated hardware info, but it will affect throughput testing later in this thread: I bumped the system to 96GB of RAM versus the 64GB used earlier.
    I've been reading more on forming the raidz2 vdevs used in the pool. I was going to use 4 drives each, BUT according to an article/post I read, that number of drives would not be optimal for raidz2. What was stated in the article was 5-7 drives in each vdev. This has caused me to rethink the first pool's makeup. I need to scour this site to see what most are using; I did see that MAFoELffen uses 5 drives, but I really need to check his posting to see whether that is spinners or solid state.
    Last edited by sgt-mike; September 26th, 2024 at 12:25 AM. Reason: ram added

  5. #25
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    Re: ZFS create

    Finally, the HBAs arrived yesterday and both checked out in IT mode, etc. I added a fan to the LSI 9300-16i heatsink (zip ties are great) to help with cooling. (I'm reserving the LSI 9300-8i as a backup, or maybe for use in a test-bench system to format drives.)

    Only when I installed the SAS drives that I bought in a lot (of 6) did it become apparent that the vendor shipped two different model numbers, same capacity (4TB), same manufacturer.

    That changed a lot of things in the way I had intended to assemble the vdevs for the pool(s). What I received from the vendor/seller was three 6Gb/s and three 12Gb/s drives. Originally I intended to go 6 drives per pool, but I gave the throughput a lot of thought. Mixing them has only one drawback: the 12Gb/s drives have to step down to the 6Gb/s speed. It's easy enough to order three more 12Gb/s drives to cure my OCD and get back to my original plan of 6 drives in raidz2.

    So I whipped into action with a plan, running face first into a brick wall at Mach 8... the pools would not assemble, complaining of LVM or device busy. Huh, I just checked them; they are all 512-byte sector. Now proceeding at a crawl, I started to investigate (meaning I did a lot of googling here and elsewhere). I found out that some of the drives had come out of a Microsoft Server environment and had not been sanitized as advertised. Bummer... no wonder they won't assemble. So I jumped into sg3_utils and sg_format, and formatted all six at the same time. It wasn't quick (about 16 hours), but the 12Gb/s drives were far outrunning the 6Gb/s drives. Problem solved; the drives assemble beautifully.
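
    For anyone following along, the low-level format was roughly this (a sketch; the package names and the /dev/sg2 device are examples, so check lsscsi for your own generic devices):

    Code:
    sudo apt install sg3-utils lsscsi
    lsscsi -g                                     # map each SAS drive to its /dev/sgN generic device
    sudo sg_format --format --size=512 /dev/sg2   # long-running low-level format to 512-byte sectors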

    With all that transpired, I decided to put the 12Gb/s drives into a raidz1 pool, three wide. The three 6Gb/s drives went into another pool, this one a striped pool, basically RAID 0 (everything here data-wise is backed up by the raidz1 pool and external drives), to feed the movies/TV shows to the Plex server.
    I'm aware that 3 drives is really too few for raidz1 to be effective in throughput, with an even bigger penalty for raidz2. So I'll simply order more 12Gb/s drives to get to a minimum of six drives wide on that array and bump it up by adding another drive of parity (raidz2).

    Currently I have one pool named plexdata, raidz1 (three 4TB 12Gb/s drives), and one pool named plexserv, striped (three 4TB 6Gb/s drives). I assembled them via aliases built from their UUIDs. In my haste I assembled them as whole drives, because I didn't quite understand how to assemble them on a partition using drive aliases until I had done both pools. No problem; when I get the other drives I'll set them up on partitions vs whole drives. I'll implement that method later when I obtain a few more drives.
    After using the drive alias method, I really like it. Now I can more or less tell which bay and drive number might be failing in the future for drive replacement, versus a serial-number hunt. Really, when the system reports back "int1d1 degraded"... hey, that's the internal bay's disk 1.

    So, how do the two pools stack up against each other? Not surprisingly, the striped pool really is doing well.

    Code:
    mike@beastie:/plexserv$ sudo fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=5g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    TEST: Laying out IO file (1 file / 5120MiB)
    Jobs: 1 (f=1): [W(1)][58.3%][w=536MiB/s][w=536 IOPS][eta 00m:05s]
    Jobs: 1 (f=1): [W(1)][86.7%][w=536MiB/s][w=535 IOPS][eta 00m:02s]
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=34992: Sun Sep 29 19:05:42 2024
      write: IOPS=471, BW=472MiB/s (495MB/s)(10.0GiB/21713msec); 0 zone resets
        slat (usec): min=201, max=3831, avg=1504.26, stdev=692.03
        clat (usec): min=2, max=6321.0k, avg=65618.34, stdev=344197.00
         lat (usec): min=330, max=6322.8k, avg=67123.13, stdev=344263.32
        clat percentiles (msec):
         |  1.00th=[    9],  5.00th=[   10], 10.00th=[   11], 20.00th=[   16],
         | 30.00th=[   41], 40.00th=[   57], 50.00th=[   58], 60.00th=[   59],
         | 70.00th=[   60], 80.00th=[   62], 90.00th=[   65], 95.00th=[   71],
         | 99.00th=[   82], 99.50th=[   87], 99.90th=[ 6275], 99.95th=[ 6342],
         | 99.99th=[ 6342]
       bw (  KiB/s): min=354304, max=2258944, per=100.00%, avg=658597.16, stdev=443965.52, samples=31
       iops        : min=  346, max= 2206, avg=643.16, stdev=433.56, samples=31
      lat (usec)   : 4=0.02%, 500=0.01%, 750=0.01%, 1000=0.01%
      lat (msec)   : 2=0.04%, 4=0.06%, 10=8.30%, 20=14.21%, 50=10.73%
      lat (msec)   : 100=66.31%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1190, max=1190, avg=1190.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1192],  5.00th=[ 1192], 10.00th=[ 1192], 20.00th=[ 1192],
         | 30.00th=[ 1192], 40.00th=[ 1192], 50.00th=[ 1192], 60.00th=[ 1192],
         | 70.00th=[ 1192], 80.00th=[ 1192], 90.00th=[ 1192], 95.00th=[ 1192],
         | 99.00th=[ 1192], 99.50th=[ 1192], 99.90th=[ 1192], 99.95th=[ 1192],
         | 99.99th=[ 1192]
      cpu          : usr=2.19%, sys=14.37%, ctx=61743, majf=1, minf=14
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=99.4%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    
    Run status group 0 (all jobs):
      WRITE: bw=472MiB/s (495MB/s), 472MiB/s-472MiB/s (495MB/s-495MB/s), io=10.0GiB (10.7GB), run=21713-21713msec
    mike@beastie:/plexserv$ sudo fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=5g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs
    =1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1)
    TEST: (groupid=0, jobs=1): err= 0: pid=59840: Sun Sep 29 19:06:15 2024
      read: IOPS=3633, BW=3634MiB/s (3810MB/s)(10.0GiB/2818msec)
        slat (usec): min=229, max=977, avg=273.12, stdev=42.13
        clat (usec): min=2, max=25607, avg=8477.49, stdev=1126.32
         lat (usec): min=256, max=26581, avg=8750.89, stdev=1164.08
        clat percentiles (usec):
         |  1.00th=[ 8094],  5.00th=[ 8094], 10.00th=[ 8094], 20.00th=[ 8094],
         | 30.00th=[ 8094], 40.00th=[ 8160], 50.00th=[ 8160], 60.00th=[ 8160],
         | 70.00th=[ 8160], 80.00th=[ 8160], 90.00th=[ 9896], 95.00th=[10028],
         | 99.00th=[12911], 99.50th=[13435], 99.90th=[21890], 99.95th=[23725],
         | 99.99th=[25297]
       bw (  MiB/s): min= 2850, max= 3816, per=99.10%, avg=3601.20, stdev=422.17, samples=5
       iops        : min= 2850, max= 3816, avg=3601.20, stdev=422.17, samples=5
      lat (usec)   : 4=0.02%, 500=0.02%, 750=0.02%, 1000=0.02%
      lat (msec)   : 2=0.08%, 4=0.16%, 10=95.22%, 20=4.32%, 50=0.15%
      cpu          : usr=1.38%, sys=98.58%, ctx=9, majf=0, minf=8203
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=99.4%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    
    Run status group 0 (all jobs):
       READ: bw=3634MiB/s (3810MB/s), 3634MiB/s-3634MiB/s (3810MB/s-3810MB/s), io=10.0GiB (10.7GB), run=2818-2818msec
    ---------------now to test pool plexdata-------------------------------------------------------------------------
    mike@beastie:/plexdata$ sudo fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=5g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjob
    s=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    TEST: Laying out IO file (1 file / 5120MiB)
    Jobs: 1 (f=1): [W(1)][50.0%][w=343MiB/s][w=343 IOPS][eta 00m:07s]
    Jobs: 1 (f=1): [W(1)][68.4%][w=316MiB/s][w=316 IOPS][eta 00m:06s]
    Jobs: 1 (f=1): [W(1)][90.5%][w=345MiB/s][w=345 IOPS][eta 00m:02s]
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=59884: Sun Sep 29 19:08:54 2024
      write: IOPS=317, BW=318MiB/s (333MB/s)(10.0GiB/32234msec); 0 zone resets
        slat (usec): min=208, max=4409, avg=2156.49, stdev=1124.99
        clat (usec): min=2, max=10200k, avg=97425.75, stdev=555468.28
         lat (usec): min=283, max=10203k, avg=99582.82, stdev=555597.61
        clat percentiles (msec):
         |  1.00th=[    9],  5.00th=[   10], 10.00th=[   10], 20.00th=[   18],
         | 30.00th=[   49], 40.00th=[   87], 50.00th=[   88], 60.00th=[   89],
         | 70.00th=[   91], 80.00th=[   92], 90.00th=[   95], 95.00th=[  102],
         | 99.00th=[  112], 99.50th=[  116], 99.90th=[10134], 99.95th=[10134],
         | 99.99th=[10134]
       bw (  KiB/s): min=30720, max=2840576, per=100.00%, avg=453700.27, stdev=434427.67, samples=45
       iops        : min=   30, max= 2774, avg=443.07, stdev=424.25, samples=45
      lat (usec)   : 4=0.02%, 500=0.01%, 750=0.01%, 1000=0.01%
      lat (msec)   : 2=0.04%, 4=0.08%, 10=14.39%, 20=5.71%, 50=9.92%
      lat (msec)   : 100=64.39%, 250=5.11%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=972, max=972, avg=972.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[  972],  5.00th=[  972], 10.00th=[  972], 20.00th=[  972],
         | 30.00th=[  972], 40.00th=[  972], 50.00th=[  972], 60.00th=[  972],
         | 70.00th=[  972], 80.00th=[  972], 90.00th=[  972], 95.00th=[  972],
         | 99.00th=[  972], 99.50th=[  972], 99.90th=[  972], 99.95th=[  972],
         | 99.99th=[  972]
      cpu          : usr=1.62%, sys=9.34%, ctx=62597, majf=0, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=99.4%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    
    Run status group 0 (all jobs):
      WRITE: bw=318MiB/s (333MB/s), 318MiB/s-318MiB/s (333MB/s-333MB/s), io=10.0GiB (10.7GB), run=32234-32234msec
    mike@beastie:/plexdata$ sudo fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=5g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs
    =1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1)
    TEST: (groupid=0, jobs=1): err= 0: pid=109236: Sun Sep 29 19:09:21 2024
      read: IOPS=3629, BW=3630MiB/s (3806MB/s)(10.0GiB/2821msec)
        slat (usec): min=225, max=947, avg=273.48, stdev=40.52
        clat (usec): min=2, max=24871, avg=8487.41, stdev=1089.44
         lat (usec): min=260, max=25820, avg=8761.18, stdev=1125.83
        clat percentiles (usec):
         |  1.00th=[ 8094],  5.00th=[ 8094], 10.00th=[ 8094], 20.00th=[ 8160],
         | 30.00th=[ 8160], 40.00th=[ 8160], 50.00th=[ 8160], 60.00th=[ 8160],
         | 70.00th=[ 8160], 80.00th=[ 8225], 90.00th=[10028], 95.00th=[10028],
         | 99.00th=[12518], 99.50th=[13304], 99.90th=[21103], 99.95th=[22938],
         | 99.99th=[24511]
       bw (  MiB/s): min= 2864, max= 3806, per=99.11%, avg=3597.60, stdev=412.03, samples=5
       iops        : min= 2864, max= 3806, avg=3597.60, stdev=412.03, samples=5
      lat (usec)   : 4=0.02%, 500=0.02%, 750=0.02%, 1000=0.02%
      lat (msec)   : 2=0.08%, 4=0.16%, 10=93.69%, 20=5.86%, 50=0.14%
      cpu          : usr=0.67%, sys=99.29%, ctx=5, majf=0, minf=8203
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=99.4%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    
    Run status group 0 (all jobs):
       READ: bw=3630MiB/s (3806MB/s), 3630MiB/s-3630MiB/s (3806MB/s-3806MB/s), io=10.0GiB (10.7GB), run=2821-2821msec
    mike@beastie:/plexserv$  sudo zpool iostat -v 30 5
                  capacity     operations     bandwidth
    pool        alloc   free   read  write   read  write
    ----------  -----  -----  -----  -----  -----  -----
    plexdata     340K  10.9T      0     18  2.30K  6.23M
      raidz1-0   340K  10.9T      0     18  2.30K  6.23M
        int1d1      -      -      0      6    783  2.08M
        int1d2      -      -      0      6    783  2.08M
        int1d3      -      -      0      6    783  2.08M
    ----------  -----  -----  -----  -----  -----  -----
    plexserv     202K  10.9T      0     11     65  9.15M
      int1d4      71K  3.62T      0      3     21  3.07M
      int1d5      68K  3.62T      0      4     21  3.03M
      ext1d1      63K  3.62T      0      3     21  3.06M
    ----------  -----  -----  -----  -----  -----  -----
                  capacity     operations     bandwidth
    pool        alloc   free   read  write   read  write
    ----------  -----  -----  -----  -----  -----  -----
    plexdata     340K  10.9T      0      0      0      0
      raidz1-0   340K  10.9T      0      0      0      0
        int1d1      -      -      0      0      0      0
        int1d2      -      -      0      0      0      0
        int1d3      -      -      0      0      0      0
    ----------  -----  -----  -----  -----  -----  -----
    plexserv     202K  10.9T      0      0      0      0
      int1d4      71K  3.62T      0      0      0      0
      int1d5      68K  3.62T      0      0      0      0
      ext1d1      63K  3.62T      0      0      0      0
    ----------  -----  -----  -----  -----  -----  -----
                  capacity     operations     bandwidth
    pool        alloc   free   read  write   read  write
    ----------  -----  -----  -----  -----  -----  -----
    plexdata     340K  10.9T      0      0      0      0
      raidz1-0   340K  10.9T      0      0      0      0
        int1d1      -      -      0      0      0      0
        int1d2      -      -      0      0      0      0
        int1d3      -      -      0      0      0      0
    ----------  -----  -----  -----  -----  -----  -----
    plexserv     202K  10.9T      0      0      0      0
      int1d4      71K  3.62T      0      0      0      0
      int1d5      68K  3.62T      0      0      0      0
      ext1d1      63K  3.62T      0      0      0      0
    ----------  -----  -----  -----  -----  -----  -----
                  capacity     operations     bandwidth
    pool        alloc   free   read  write   read  write
    ----------  -----  -----  -----  -----  -----  -----
    plexdata     340K  10.9T      0      0      0      0
      raidz1-0   340K  10.9T      0      0      0      0
        int1d1      -      -      0      0      0      0
        int1d2      -      -      0      0      0      0
        int1d3      -      -      0      0      0      0
    ----------  -----  -----  -----  -----  -----  -----
    plexserv     202K  10.9T      0      0      0      0
      int1d4      71K  3.62T      0      0      0      0
      int1d5      68K  3.62T      0      0      0      0
      ext1d1      63K  3.62T      0      0      0      0
    ----------  -----  -----  -----  -----  -----  -----
                  capacity     operations     bandwidth
    pool        alloc   free   read  write   read  write
    ----------  -----  -----  -----  -----  -----  -----
    plexdata     340K  10.9T      0      0      0      0
      raidz1-0   340K  10.9T      0      0      0      0
        int1d1      -      -      0      0      0      0
        int1d2      -      -      0      0      0      0
        int1d3      -      -      0      0      0      0
    ----------  -----  -----  -----  -----  -----  -----
    plexserv     202K  10.9T      0      0      0      0
      int1d4      71K  3.62T      0      0      0      0
      int1d5      68K  3.62T      0      0      0      0
      ext1d1      63K  3.62T      0      0      0      0
    ----------  -----  -----  -----  -----  -----  -----
    mike@beastie:/plexserv$
    I suspect that the plexdata pool will increase in throughput once I set it back up with 6 to 11 matching drives, plus a spare drive or two for insurance on that pool.
    As for the plexserv pool, I really don't see a point in beefing it up until SSDs drop in price.

    I did one more thing differently on the NFS server: I am trying out Cockpit vs what I normally use (Webmin). So far I kinda like it, but I would really like it if I could monitor temps and SMART data instead of going to the CLI to find that out. On those two points Webmin excels. Just my point of view.
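
    In the meantime, something like this covers temps and SMART from the CLI (a sketch; /dev/sdb is just an example device, and SAS drives may need the -d scsi flag):

    Code:
    sudo apt install smartmontools
    sudo smartctl -a /dev/sdb                                  # full SMART / health report
    sudo smartctl -a -d scsi /dev/sdb | grep -i temperature    # SAS drives report the current drive temperature here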

    Code:
    mike@beastie:~$ sudo zpool status
      pool: plexdata
     state: ONLINE
    config:
    
    
            NAME        STATE     READ WRITE CKSUM
            plexdata    ONLINE       0     0     0
              raidz1-0  ONLINE       0     0     0
                int1d1  ONLINE       0     0     0
                int1d2  ONLINE       0     0     0
                int1d3  ONLINE       0     0     0
    
    
    errors: No known data errors
    
    
      pool: plexserv
     state: ONLINE
    config:
    
    
            NAME        STATE     READ WRITE CKSUM
            plexserv    ONLINE       0     0     0
              int1d4    ONLINE       0     0     0
              int1d5    ONLINE       0     0     0
              ext1d1    ONLINE       0     0     0
    
    
    errors: No known data errors
    mike@beastie:~$ sudo zpool history
    History for 'plexdata':
    2024-09-29.18:38:47 zpool create plexdata raidz1 int1d1 int1d2 int1d3 -f
    2024-09-29.19:14:23 zfs set compress=lz4 plexdata
    
    
    History for 'plexserv':
    2024-09-29.19:01:14 zpool create plexserv int1d4 int1d5 ext1d1 -f
    2024-09-29.19:14:15 zfs set compress=lz4 plexserv
    
    
    mike@beastie:~$
    All in all, so far I really like ZFS. Is it better than mdadm? I "think so," although mdadm has some features I really like. In the end it's kind of a wash and a personal choice.
    Will the current system expand .... yep!

    Now I need to research how to set up / use ZFS share methods so that the Plex server isn't having to host the files itself, unloading that old second-gen HP i7.
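
    As a starting point (just a sketch; the dataset name and network range are placeholders), ZFS can export a dataset over NFS directly via the sharenfs property:

    Code:
    sudo apt install nfs-kernel-server                         # ZFS hands the exports to the system NFS server
    sudo zfs create plexdata/media                             # hypothetical dataset for the Plex files
    sudo zfs set sharenfs="rw=@192.168.1.0/24" plexdata/media
    sudo zfs get sharenfs plexdata/media                       # verify the property / share is active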
    Last edited by sgt-mike; September 30th, 2024 at 01:07 AM.

  6. #26
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    Re: ZFS create

    Since my last post several things have happened: an HBA failure (my backup HBA is running right now), and I acquired 3 more drives. I ordered enclosures, cables, etc. from Newegg, everything needed to fill the bays in my existing case. I also bought a new case (Cooler Master HAF 922, same as the one I have now); it was on auction, still in the original box. I suspect it is new open-box stock; I'll find out when it arrives.

    When the three 4TB SAS drives arrived (bringing me up to 9 on hand), I was running them through sg_format to make sure they were clean this time, rather than finding out while assembling the pools like the last batch. All went well except for the one last remaining drive from that last batch order: it did not format properly while the other two did. Stumped, I checked my command... exactly the same except for the drive ID. So I decided to shut down and reboot to see if it made a difference. Then the HBA decided it was old and died on reboot. I pulled it out, put the 8i in, and into service it went. I spent hours trying to revive the SAS 9300-16i; at one point I thought I had it... nope. I contacted the vendor and they were kind enough to send a replacement. I turned back to the last drive with the SAS 9300-8i in, reissued an sg_format, and now I sit here, hours later, still waiting. The time to rerun sg_format has caused me to rethink what I was doing with a raidz1 pool plus a separate pool with no parity or mirrors to send files to my media server, with both pools hosting the exact same files... hmmm. OK, nice for practice and play, but NOT really an efficient use of the data pools. I know what I was thinking: the pool without mirrors or parity should be a good bit faster in reads... and it is. But do I really need the speed? Or is a raidz2 pool fast enough?
    My network throughput is sitting stable at 980Mb/s on the slow side to 1Gb/s on the fast side.

    (BTW, side note: the drive is at 94.31% complete; everyone cross your fingers that it doesn't come out at 0 bytes again.)

    But the waiting game is still on with the Newegg order. Why didn't I look for a US-located company versus one in China for the SAS cables? Excitement and pricing took over, I guess. Estimated delivery: 22 Oct. Anyway, enough whining.

    I know I want to use a raidz2 pool, so instead of my original path of two pools for movies and such, I'll just do one pool and send that one to the media server, not two pools with one going to the media server. The 9th drive (the one currently in format) is the stickler: I would like to have 2 spares for that pool, so IF it formats correctly and checks out fully, I would use 7 of the 4TB drives in the vdev to form the pool, with 2 spares, for 9 total. That configuration should net me 17.9TB of storage; calculating a 20% slope (free space), that means I'll have 14.3TB usable.
    If it doesn't, then it's 6 of the 4TB drives in the vdev and 2 spares, totaling 8 drives, which nets 14.3TB of storage; calculating a 20% slope (free space), that means I'll have 11.4TB usable.
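
    (Ballpark math behind those numbers, ignoring ZFS metadata overhead: raidz2 loses two drives' worth of capacity to parity, so usable space is roughly (N - 2) x 4TB. Seven drives gives about 20TB raw, which shakes out to roughly the 17.9TB above after TB-to-TiB conversion and overhead, and 17.9 x 0.8 = ~14.3TB once 20% is kept free; six drives gives ~14.3TB, and 14.3 x 0.8 = ~11.4TB usable.)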

    Either way, that will house the approx. 3 to 3.5TB of movies, etc., that I have at the moment quite nicely, I should think, at least for a little while, allowing me time to find some deals on 6 or 8TB drives to set aside to cover either failures or expansion, whichever occurs first.

    What caused the ninth drive's failure to format correctly? I don't know for sure. It could have been the HBA, or that I had SSH'ed in on a different terminal and was doing stuff in the background, or that the HBA was on its last legs and didn't like formatting three drives at once. Or, last but not least, the drive just died, but the SMART report looked good prior to me formatting it, so I doubt that.

  7. #27
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    Re: ZFS create

    The drive formatted correctly, so 7 drives in the vdev and two spares it is... now to move data in preparation.
    Once more into the fray.......
    Into the last good fight I'll ever know.
    Live and die on this day.......
    Live and die on this day.......

  8. #28
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    Re: ZFS create

    OK, so many changes, all self-inflicted (hardware-wise; the RAID method is pretty well locked in)... LMAO. Some I addressed earlier, though I will put the actual question in the last lines.

    (Why am I posting this? Simple: the wife doesn't want to hear it, the daughter doesn't care, the dog and the cats just look at me and walk off, and the neighbors give me an indignant look and walk off. So I'm left with pestering y'all... LMAO, hope you don't mind.)
    After finding a couple of cases on Goodwill auctions, I acquired them. One was a Cooler Master HAF 922 that looked to be new in an open box; this is the same case my NFS server is in currently. The other was a Cooler Master HAF 945, listed as a complete system with a 650W PSU, motherboard, CPU, RAM, and 3 GPUs; the description was very vague (according to my research on Cooler Master's web site it is actually a HAF X case). Now, naturally one would ask why get these two additional cases. The simple driving reason was that the HAF 922 case I already had came with a somewhat broken I/O panel when I purchased it (is it fixed now? Yep, I super-glued the I/O panel to the case, my last resort after epoxying the broken retaining screw post and tabs failed). It's now functional but irritating because of the repair method. The other reason is that the HAF X (HAF 945 as Goodwill describes it) has 1 more exposed 5.25" bay. Heyyyy, more drives, woohoo. I had already ordered the 5.25" drive bay modules before winning those cases. It will change the drive layout, to which I decided to add a 2.5" 4-drive module; that will support 16 drives with all the bays vs 12 in the 922 case. That shift in cases causes a complete stop on my progress to finish the NFS server until the 28th of October, when the last parts are scheduled to arrive via slow boat from China. I could go into details on my plans for the other two remaining 922 cases, but I won't until asked.

    A (the) ACTUAL Question:
    We discussed assembling the drives into the pool(s) by using UUIDs, and I agree with everything others said. But the delay listed above got me thinking harder on that, not a good thing sometimes. I know one can assemble in ZFS and/or mdadm using the different UUID methods, or even use the WWN (world wide name); either is completely acceptable. Yet I really don't see or hear anyone advocating for the use of the WWN.
    I wonder why that is?
    Is it because no one has thought of it, or no one uses that method? Or is it just easier to explain when using blkid at the command prompt?

    To me it seems that a persistent ID such as the WWN is better than a software-driven UUID, as the drive retains that ID no matter what system it is plugged into or how many times the drive is formatted. I know it's kind of a moot point; one really is as good as the other, and it doesn't amount to a hill of beans between the two methods, they are pretty much the same animal. I also know some drives are physically labeled with the WWN and some are not; mine happen to be.

    (This is the wiki that started my thinking on this, which led to this silly question: https://wiki.archlinux.org/title/Per...sistent_naming blame the Arch Linux guys LOL)
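
    For what it's worth (a sketch; the WWN values below are made-up examples), the wwn-* names already live right alongside the other persistent names under /dev/disk/by-id, so pointing ZFS or mdadm at them works the same way as any other by-id path:

    Code:
    ls -l /dev/disk/by-id/ | grep wwn
    # example output:
    # lrwxrwxrwx 1 root root  9 Oct  9 08:00 wwn-0x5000c500a1b2c3d4 -> ../../sdb
    # lrwxrwxrwx 1 root root  9 Oct  9 08:00 wwn-0x5000c500a1b2c3d5 -> ../../sdc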
    Last edited by sgt-mike; October 9th, 2024 at 08:35 AM.
    Once more into the fray.......
    Into the last good fight I'll ever know.
    Live and die on this day.......
    Live and die on this day.......

  9. #29
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    80
    Distro
    Ubuntu

    Re: ZFS create

    Well, finally everything aligned: the bay modules are in the new case and populated.
    This first part is to share benchmark data before I get into a question, or rather ask for advice before I do any more.
    The system has nine 4TB drives in a single-vdev pool. These drives are mixed; what I mean by that is they are all the same manufacturer, etc., EXCEPT three are 12Gb/s and the other six are 6Gb/s. System memory is at 96GB.

    Code:
     
    mike@Beastie:/mediapool1$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.36
    Starting 1 process
    TEST: Laying out IO file (1 file / 2048MiB)
    Jobs: 1 (f=1): [W(1)][-.-%][eta 00m:00s]
    Jobs: 1 (f=1): [W(1)][-.-%][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=575956: Wed Oct 23 20:34:17 2024
      write: IOPS=1103, BW=1104MiB/s (1157MB/s)(10.0GiB/9278msec); 0 zone resets
        slat (usec): min=130, max=2871, avg=292.06, stdev=166.68
        clat (usec): min=3, max=6258.6k, avg=28016.62, stdev=343258.61
         lat (usec): min=183, max=6258.8k, avg=28308.68, stdev=343254.79
        clat percentiles (msec):
         |  1.00th=[    5],  5.00th=[    6], 10.00th=[    6], 20.00th=[    6],
         | 30.00th=[    7], 40.00th=[    7], 50.00th=[    7], 60.00th=[    8],
         | 70.00th=[    8], 80.00th=[   15], 90.00th=[   17], 95.00th=[   19],
         | 99.00th=[   30], 99.50th=[   31], 99.90th=[ 6275], 99.95th=[ 6275],
         | 99.99th=[ 6275]
       bw (  MiB/s): min= 1460, max= 4856, per=100.00%, avg=3323.00, stdev=1421.97, samples=6
       iops        : min= 1460, max= 4856, avg=3323.00, stdev=1421.97, samples=6
      lat (usec)   : 4=0.03%, 10=0.02%, 250=0.03%, 500=0.05%, 750=0.04%
      lat (usec)   : 1000=0.05%
      lat (msec)   : 2=0.21%, 4=0.39%, 10=76.29%, 20=18.54%, 50=4.05%
      lat (msec)   : >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=6253.5M, max=6253.5M, avg=6253512170.00, stdev= 0.00
        sync percentiles (msec):
         |  1.00th=[ 6275],  5.00th=[ 6275], 10.00th=[ 6275], 20.00th=[ 6275],
         | 30.00th=[ 6275], 40.00th=[ 6275], 50.00th=[ 6275], 60.00th=[ 6275],
         | 70.00th=[ 6275], 80.00th=[ 6275], 90.00th=[ 6275], 95.00th=[ 6275],
         | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6275], 99.95th=[ 6275],
         | 99.99th=[ 6275]
      cpu          : usr=5.10%, sys=23.78%, ctx=2389, majf=0, minf=11
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    
    Run status group 0 (all jobs):
      WRITE: bw=1104MiB/s (1157MB/s), 1104MiB/s-1104MiB/s (1157MB/s-1157MB/s), io=10.0GiB (10.7GB), run=9278-9278msec
    mike@Beastie:/mediapool1$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize
    =1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.36
    Starting 1 process
    Jobs: 1 (f=1): [R(1)][-.-%][r=3204MiB/s][r=3204 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=576116: Wed Oct 23 20:35:08 2024
      read: IOPS=3108, BW=3109MiB/s (3260MB/s)(10.0GiB/3294msec)
        slat (usec): min=223, max=1183, avg=318.60, stdev=39.38
        clat (usec): min=2, max=28008, avg=9839.33, stdev=1131.03
         lat (usec): min=304, max=28980, avg=10157.93, stdev=1159.49
        clat percentiles (usec):
         |  1.00th=[ 6194],  5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9634],
         | 30.00th=[ 9634], 40.00th=[ 9634], 50.00th=[ 9634], 60.00th=[ 9634],
         | 70.00th=[ 9765], 80.00th=[ 9765], 90.00th=[11207], 95.00th=[11338],
         | 99.00th=[12125], 99.50th=[13304], 99.90th=[22676], 99.95th=[25297],
         | 99.99th=[27395]
       bw (  MiB/s): min= 2604, max= 3216, per=99.42%, avg=3090.67, stdev=241.79, samples=6
       iops        : min= 2604, max= 3216, avg=3090.67, stdev=241.79, samples=6
      lat (usec)   : 4=0.05%, 500=0.05%, 750=0.05%, 1000=0.05%
      lat (msec)   : 2=0.15%, 4=0.29%, 10=84.14%, 20=15.07%, 50=0.16%
      cpu          : usr=0.58%, sys=99.39%, ctx=9, majf=0, minf=8203
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    
    Run status group 0 (all jobs):
       READ: bw=3109MiB/s (3260MB/s), 3109MiB/s-3109MiB/s (3260MB/s-3260MB/s), io=10.0GiB (10.7GB), run=3294-3294msec
    mike@Beastie:/mediapool1$
    To be honest, I'm quite happy with the benchmarks of these rust drives. They actually exceeded my expectations.

    Now comes the actual question. Because I was playing around so much before I could actually set these SAS drives up the way I wanted, at one point I actually set up alias names via /etc/zfs/vdev_id.conf. At that point I could create a pool using the drive alias names.
    Then I wound up waiting on hardware, so I actually forgot HOW I did it with the laptop drives. I attempted it again and failed, so I used partition UUIDs to create the pool, then played with exporting and importing the pool via disk/by-id and returning back to the part-uuid setup. Is there a way to export and import the pool back using the drive aliases defined in the vdev_id.conf setup?

    Here is how the pool is listed right now, with a history of what I have done thus far.

    Code:
    mike@Beastie:/mediapool1$ sudo zpool status -v
    [sudo] password for mike:
      pool: mediapool1
     state: ONLINE
      scan: scrub repaired 0B in 00:44:43 with 0 errors on Wed Oct 23 10:31:45 2024
    config:
    
    
            NAME                                      STATE     READ WRITE CKSUM
            mediapool1                                ONLINE       0     0     0
              raidz2-0                                ONLINE       0     0     0
                11a27fcc-ebbd-4864-9bc9-5cc7f01dc785  ONLINE       0     0     0
                1ccc6753-a8af-41a9-8a3e-3ee420d90f81  ONLINE       0     0     0
                507cbe23-0408-48b3-bfbe-e589d19ef8fe  ONLINE       0     0     0
                15bf2071-4741-4bde-9b95-15aa69e50c61  ONLINE       0     0     0
                169b4665-7aa6-4a6e-bec0-f7364f7097c4  ONLINE       0     0     0
                72d42ad6-ebeb-4fb7-a276-51285483bfba  ONLINE       0     0     0
                714d75bb-9b2e-43c2-b38b-e4068b21e105  ONLINE       0     0     0
                ca189b0c-4e82-49c0-80fe-aa6f9243f01c  ONLINE       0     0     0
                9431154c-4449-4716-9c6e-b51e436721d3  ONLINE       0     0     0
    
    
    errors: No known data errors
    mike@Beastie:/mediapool1$ sudo zpool history
    History for 'mediapool1':
    2024-10-22.09:48:04 zpool create -f mediapool1 raidz2 /dev/disk/by-partuuid/11a27fcc-ebbd-4864-9bc9-5cc7f01dc785 1ccc6753-a8af-41a9-8a3e-3ee420d90f81 507cbe23-0408-48b3-bfbe-e589d19ef8fe 15bf2071-4741-4bde-9b95-15aa69e50c61 169b4665-7aa6-4a6e-bec0-f7364f7097c4 72d42ad6-ebeb-4fb7-a276-51285483bfba 714d75bb-9b2e-43c2-b38b-e4068b21e105 ca189b0c-4e82-49c0-80fe-aa6f9243f01c 9431154c-4449-4716-9c6e-b51e436721d3
    2024-10-22.09:53:48 zfs set compress=lz4 mediapool1
    2024-10-22.09:55:57 zpool set autoexpand=on mediapool1
    2024-10-22.21:21:13 zpool export mediapool1
    2024-10-22.21:21:32 zpool import -d /dev/disk/by-id mediapool1
    2024-10-22.21:22:22 zpool export mediapool1
    2024-10-22.21:23:19 zpool import -d /dev/disk/by-partuuid mediapool1
    2024-10-22.21:24:47 zpool export mediapool1
    2024-10-22.21:25:03 zpool import -d /dev/disk/by-partuuid mediapool1
    2024-10-22.21:25:30 zpool scrub mediapool1
    2024-10-22.21:32:59 zpool scrub -s mediapool1
    2024-10-22.21:43:56 zpool import -c /etc/zfs/zpool.cache -aN
    2024-10-22.21:47:12 zpool import -c /etc/zfs/zpool.cache -aN
    2024-10-22.21:57:06 zpool import -c /etc/zfs/zpool.cache -aN
    2024-10-22.22:22:41 zpool import -c /etc/zfs/zpool.cache -aN
    2024-10-22.22:35:45 zpool import -c /etc/zfs/zpool.cache -aN
    2024-10-23.09:47:06 zpool scrub mediapool1
    2024-10-23.20:19:33 zfs snapshot mediapool1@test1-snapshot
    2024-10-23.20:24:08 zfs destroy mediapool1@test1-snapshot
    2024-10-23.20:24:34 zfs snapshot -r mediapool1@test1-snapshot
    2024-10-23.20:24:54 zfs destroy mediapool1@test1-snapshot
    2024-10-23.20:25:44 zfs snapshot -r mediapool1@2024.10.23-snapshot
    
    
    mike@Beastie:/mediapool1$
    I was thinking that I should be able to export the pool and simply run "sudo zpool import -d /dev/disk/by-vdev_id mediapool1", and that would then let me see the drives by their alias names when I run zpool status.

    Or do I have to actually destroy the pool and start back from scratch?

    Figured I would ask first before doing silly commands.
    OK, that command didn't work.
    I finally found the answer:
    1. export the pool
    2. with the /etc/zfs/vdev_id.conf file in place, issue a udevadm trigger
    3. import the pool with sudo zpool import -d /dev/disk/by-vdev mediapool1
    It works close enough to what I wanted, but it did add a -part1 (for the partition) to the name/alias, as the sketch and output below show.
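
    For reference, the alias file is just plain text; a rough sketch of the format (the by-id paths here are placeholders, yours will differ):

    Code:
    # /etc/zfs/vdev_id.conf -- one "alias <name> <persistent device path>" line per drive
    alias bay1drive1  /dev/disk/by-id/wwn-0x5000c500aaaaaaa1
    alias bay1drive2  /dev/disk/by-id/wwn-0x5000c500aaaaaaa2
    alias bay1drive3  /dev/disk/by-id/wwn-0x5000c500aaaaaaa3

    # then rebuild the /dev/disk/by-vdev links before re-importing
    sudo udevadm trigger
    ls -l /dev/disk/by-vdev/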


    Code:
    mike@Beastie:/$ sudo zpool export mediapool1
    mike@Beastie:/$ sudo zpool import -d /dev/disk/by-vdev mediapool1
    mike@Beastie:/$ sudo zpool list
    NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    mediapool1  32.7T  3.43T  29.3T        -         -     0%    10%  1.00x    ONLINE  -
    mike@Beastie:/$ sudo zpool status
      pool: mediapool1
     state: ONLINE
      scan: scrub repaired 0B in 00:44:43 with 0 errors on Wed Oct 23 10:31:45 2024
    config:
    
    
            NAME                  STATE     READ WRITE CKSUM
            mediapool1            ONLINE       0     0     0
              raidz2-0            ONLINE       0     0     0
                bay1drive1-part1  ONLINE       0     0     0
                bay1drive2-part1  ONLINE       0     0     0
                bay1drive3-part1  ONLINE       0     0     0
                bay2drive4-part1  ONLINE       0     0     0
                bay2drive5-part1  ONLINE       0     0     0
                bay2drive6-part1  ONLINE       0     0     0
                bay3drive7-part1  ONLINE       0     0     0
                bay3drive8-part1  ONLINE       0     0     0
                bay3drive9-part1  ONLINE       0     0     0
    
    
    errors: No known data errors
    mike@Beastie:/$
    That is way close enough to my goal and I can live with the -part1 without having to destroy the pool and redo it.
    Last edited by sgt-mike; October 24th, 2024 at 05:50 AM.
    Once more into the fray.......
    Into the last good fight I'll ever know.
    Live and die on this day.......
    Live and die on this day.......
