
Thread: ZFS create

  1. #11
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    60
    Distro
    Ubuntu

    Re: ZFS create

    Duplicate; see below.
    Last edited by sgt-mike; 2 Weeks Ago at 12:16 AM.

  2. #12
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    60
    Distro
    Ubuntu

    Re: ZFS create

    LMAO, that is me, too many combat tours ...

    I finally went to his response on the thread he and I were discussing:

    Code:
     Dang. I wrote you a post last night... With lots of links for 12 port LSI SAS HBA cards @ EBay. And don't see it now.
    
    The best deal I found for you was this one:
    https://www.ebay.com/itm/20414862155...BoCsSsQAvD_BwE
    
    It is PCIe x8 gen 3.0. The 4 port SAS cables usually run about $18 each. 4 x4 = 16. That card supports SAS/SATA.
    
    On GPU for a file server: no, it doesn't matter. The onboard GPU is fine. It's just a file server. For that matter, if you wanted to store your media files there in an NFS share, it will be fine. <-- Then your media server can connect to the files, and its GPU can help with decoding...
    
    For a file server, your priorities are storage pools, disk I/O and network throughput... And the ability to add more and more storage as needed. Then the ability to manage that storage.
    
    The types of disks depend on what they will be used for and the priority of the I/O, whether reads or writes, and how large 'the files' are.
    
    HDD is good for mass media and backups... though with too slow a disk, your backups will take longer. 20TB Recertified Enterprise HDD drives are cheap. Even if you look into recert'ed "lots" of SAS drives...
    
    There is this:
    https://www.ebay.com/itm/26416718817...RoC0uQQAvD_BwE
    
    Just an example, but... Big investment outlay up-front. New 15K rpm SAS drives, 20x 600GB = $1100. That comes out to $55 each. But that is only a little over 11 TB total.
    
    This one is a lot of 10 new SAS 600GB drives which comes out to about $59 each:
    https://www.ebay.com/itm/25453017628...86.c101224.m-1
    
    This is a 5-pack of Factory Refurbished 20TB 7200 RPM Enterprise SATA HDDs: https://www.amazon.com/Seagate-Exos-...c&gad_source=1
    
    That comes out to about 57 TB of usable storage in RAIDZ2. Refurb'ed 20TB enterprise drives run about $250 each. But you can find 12 TB refurb'ed Enterprise drives at NewEgg for $99 each...
    https://www.newegg.com/seagate-exos-...BoCqpwQAvD_BwE
    
    Be aware that the warranty on Seagate Factory Refurbished Exos X14 Enterprise drives is 2 years, compared to 3 years new.
    
    You can find deals if you shop around. And with the hardware foundation more open, you have a lot more choices open to you with what you can do.
    
    If ZFS RAIDZ2, I would start out with vdevs of RAIDZ2, and add more vdevs of the same. At 12 TB each for 5 drives, that comes out to 34 TB per vdev @ $495 per vdev, or $14.50 per TB of storage, and fairly fast enterprise drives. If you need more later on, add another 34 TB RAIDZ2 vdev to that same pool when you can afford it.
    
    It is an investment...
    I had to check just to be sure that I did order the one (model) HBA he pointed at, as my CRS is kicking in loudly today... YEP I did....
    The best part that I see in his advice is that I can use this https://www.ebay.com/itm/395595171727 to slave to the 16-port card and expand to almost 30 drives (26, I think, is the correct number). What I find interesting with the Intel expander is that it can be powered via Molex instead of a PCIe slot.

    (Besides, I already went to Broadcom's site to pull down the latest BIOS/firmware for that card, so it's standing by when it arrives.)
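
    For reference, this is roughly how I expect the flashing step to go with Broadcom's sas3flash utility once the card arrives (a sketch only; the firmware/BIOS file names below are placeholders for whatever the 9300-16i download package actually contains):
    Code:
    # list the controllers the tool can see
    sudo sas3flash -listall

    # show current firmware/BIOS versions on controller 0
    sudo sas3flash -c 0 -list

    # flash IT-mode firmware and (optionally) the boot BIOS
    # file names are placeholders from the Broadcom package
    sudo sas3flash -c 0 -o -f SAS9300_16i_IT.bin -b mptsas3.rom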

    Edit to add a bit more information on the LSI 9300-16i 16-port HBA:
    -----------From the user's manual-----
    This section lists the LSI 12Gb/s SAS HBA features.
    • Implements two LSI SAS 3008 eight-port 12Gb/s SAS to PCIe 3.0 controllers
    • Supports eight-lane, full-duplex PCIe 3.0 performance
    • Supports sixteen internal 12Gb/s SATA+SAS ports
    • Supports SATA link rates of 3Gb/s and 6Gb/s
    • Supports SAS link rates of 3Gb/s, 6Gb/s, and 12Gb/s
    • Provides four x4 internal mini-SAS HD connectors (SFF-8643)
    • Supports passive copper cable
    • Supports up to 1024 SATA or SAS end devices
    • Offered with a full-height bracket
    • Provides two heartbeat LEDs
    ----------------------------------------------
    So with the correct number of expander cards (with high port counts, such as 24 ports or more) daisy-chained, one controller can address up to 1024 SATA or SAS devices.
    This is not the only LSI HBA that does this, but the number does vary; newer versions, as I understand from conversations with others, can support more than this one.
    Hmm, another thought just crossed my mind: when I ordered the drives I didn't think to ask the seller whether they were 520-byte or 512-byte sectors. Oh well, I'll know soon enough; there may be some low-level formatting in my future... haven't done that since the mid 1980s.
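
    If they do turn out to be 520-byte drives, my plan would be something like this (a rough sketch, assuming the drives show up as /dev/sg2 and so on and that sg3_utils is installed; the device names are placeholders):
    Code:
    # check the current logical block size of each drive
    sudo sg_readcap --long /dev/sg2
    # or, for the whole system at a glance:
    lsblk -o NAME,LOG-SEC,PHY-SEC

    # reformat a 520-byte-sector SAS drive to 512-byte sectors
    # (this wipes the drive and can take hours per disk)
    sudo sg_format --format --size=512 /dev/sg2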
    Last edited by sgt-mike; 2 Weeks Ago at 06:18 AM.

  3. #13
    Join Date
    May 2018
    Location
    Here and There
    Beans
    Hidden!
    Distro
    Xubuntu Development Release

    Re: ZFS create

    All that is great advice; I will try my best to help when I can.
    That looks to me like a great starting point with room to grow. Nice!
    "When you practice gratefulness, there is a sense of respect toward others." >>Dalai Lama

  4. #14
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    60
    Distro
    Ubuntu

    Re: ZFS create

    Did one more check on the zpool creation, leaving off the -o ashift=12
    and leaving off compression as well:
    Code:
     
    mike@beastie:/metapool$ sudo zpool history
    History for 'metapool':
    2024-09-18.18:59:07 zpool create metapool raidz2 sda1 sdb1 sdc1 sdd1 sde1 -f
    
    
    mike@beastie:/metapool$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    TEST: Laying out IO file (1 file / 2048MiB)
    Jobs: 1 (f=1): [W(1)][43.8%][w=407MiB/s][w=407 IOPS][eta 00m:09s]
    Jobs: 1 (f=1): [W(1)][56.5%][w=200MiB/s][w=200 IOPS][eta 00m:10s]
    Jobs: 1 (f=1): [W(1)][76.0%][w=297MiB/s][w=297 IOPS][eta 00m:06s]
    Jobs: 1 (f=1): [W(1)][89.3%][w=199MiB/s][w=199 IOPS][eta 00m:03s]
    Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
    Jobs: 1 (f=1): [W(1)][97.4%][eta 00m:01s]
    TEST: (groupid=0, jobs=1): err= 0: pid=63250: Wed Sep 18 19:01:40 2024
      write: IOPS=274, BW=275MiB/s (288MB/s)(10.0GiB/37276msec); 0 zone resets
        slat (usec): min=273, max=5649, avg=2693.44, stdev=1534.30
        clat (usec): min=3, max=9683.3k, avg=112275.40, stdev=528669.95
         lat (usec): min=296, max=9684.9k, avg=114969.53, stdev=528747.76
        clat percentiles (msec):
         |  1.00th=[   11],  5.00th=[   14], 10.00th=[   17], 20.00th=[   27],
         | 30.00th=[   54], 40.00th=[   67], 50.00th=[   81], 60.00th=[   97],
         | 70.00th=[  115], 80.00th=[  140], 90.00th=[  153], 95.00th=[  157],
         | 99.00th=[  159], 99.50th=[  161], 99.90th=[ 9731], 99.95th=[ 9731],
         | 99.99th=[ 9731]
       bw (  KiB/s): min=43008, max=1923072, per=100.00%, avg=364580.57, stdev=296478.87, samples=56
       iops        : min=   42, max= 1878, avg=356.04, stdev=289.53, samples=56
      lat (usec)   : 4=0.01%, 10=0.03%, 20=0.01%, 500=0.01%, 750=0.01%
      lat (usec)   : 1000=0.01%
      lat (msec)   : 2=0.04%, 4=0.10%, 10=0.59%, 20=13.05%, 50=12.72%
      lat (msec)   : 100=35.31%, 250=37.81%, >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=1218, max=1218, avg=1218.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[ 1224],  5.00th=[ 1224], 10.00th=[ 1224], 20.00th=[ 1224],
         | 30.00th=[ 1224], 40.00th=[ 1224], 50.00th=[ 1224], 60.00th=[ 1224],
         | 70.00th=[ 1224], 80.00th=[ 1224], 90.00th=[ 1224], 95.00th=[ 1224],
         | 99.00th=[ 1224], 99.50th=[ 1224], 99.90th=[ 1224], 99.95th=[ 1224],
         | 99.99th=[ 1224]
      cpu          : usr=1.47%, sys=10.71%, ctx=67787, majf=0, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    
    Run status group 0 (all jobs):
      WRITE: bw=275MiB/s (288MB/s), 275MiB/s-275MiB/s (288MB/s-288MB/s), io=10.0GiB (10.7GB), run=37276-37276msec
    mike@beastie:/metapool$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
    TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=0): [f(1)][-.-%][r=3668MiB/s][r=3668 IOPS][eta 00m:00s]
    TEST: (groupid=0, jobs=1): err= 0: pid=109995: Wed Sep 18 19:02:33 2024
      read: IOPS=3571, BW=3572MiB/s (3745MB/s)(10.0GiB/2867msec)
        slat (usec): min=230, max=925, avg=277.80, stdev=38.00
        clat (usec): min=2, max=24505, avg=8587.61, stdev=1108.96
         lat (usec): min=261, max=25431, avg=8865.67, stdev=1138.93
        clat percentiles (usec):
         |  1.00th=[ 5407],  5.00th=[ 8291], 10.00th=[ 8291], 20.00th=[ 8291],
         | 30.00th=[ 8291], 40.00th=[ 8291], 50.00th=[ 8356], 60.00th=[ 8356],
         | 70.00th=[ 8356], 80.00th=[ 8455], 90.00th=[10028], 95.00th=[10028],
         | 99.00th=[11600], 99.50th=[11863], 99.90th=[20317], 99.95th=[22676],
         | 99.99th=[24249]
       bw (  MiB/s): min= 2874, max= 3730, per=99.09%, avg=3539.20, stdev=373.67, samples=5
       iops        : min= 2874, max= 3730, avg=3539.20, stdev=373.67, samples=5
      lat (usec)   : 4=0.05%, 500=0.05%, 750=0.05%, 1000=0.05%
      lat (msec)   : 2=0.20%, 4=0.34%, 10=84.79%, 20=14.37%, 50=0.12%
      cpu          : usr=0.94%, sys=99.02%, ctx=7, majf=0, minf=8206
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    
    Run status group 0 (all jobs):
       READ: bw=3572MiB/s (3745MB/s), 3572MiB/s-3572MiB/s (3745MB/s-3745MB/s), io=10.0GiB (10.7GB), run=2867-2867msec
    Huh. So the zpool create -o ashift=12 <poolname> uuid/sd*1 -f command, which is supposed to make the ZFS pool faster when adding the drives (according to a website OTHER THAN this forum)...

    doesn't. It actually slowed things down, at least for these drives. Performance is better in this run.

    Good to know.
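
    For what it's worth, I can also check what ashift the pool actually ended up with after the fact; a quick sketch, assuming an OpenZFS 2.x install where ashift is exposed as a pool property and zdb can read the cached pool config:
    Code:
    # the ashift requested at create time (0 means "auto-detected")
    zpool get ashift metapool

    # the ashift value actually in use on each vdev
    sudo zdb -C metapool | grep ashift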

    BTW, I figured out why, when I ran lsblk -f or blkid, the system didn't show a UUID for the 500 GB drives... the drives were unformatted, derrrr...ppp.....

    (I finally looked up that laptop drive's benchmark performance: 106 MB/s write. Wow, so it is actually smoking along at approx. 275 MB/s with ZFS.)
    Last edited by sgt-mike; 2 Weeks Ago at 01:49 AM.

  5. #15
    Join Date
    May 2018
    Location
    Here and There
    Beans
    Hidden!
    Distro
    Xubuntu Development Release

    Re: ZFS create

    Don't get worried when reads/writes seemingly slow to a crawl; it just happens.
    "When you practice gratefulness, there is a sense of respect toward others." >>Dalai Lama

  6. #16
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    60
    Distro
    Ubuntu

    Re: ZFS create

    Been reviewing this article/website in an attempt to comprehend the way ZFS works: https://klarasystems.com/articles/op...fs-vdev-types/
    Luckily for me, the page addresses what I have settled on, RAIDZ2, which raises some questions.
    Code:
    Storage Vdevs
        |_ raidz2-0 : I am planning 5x 4TB SAS drives, so two drives are lost to parity and 3 receive data in a 5-drive configuration
        |_ raidz2-1 : when I add this storage I understand the vdev should be at least three drives, but I'll probably use 5 again for my OCD/sanity's sake; it can be composed of 1TB, 2TB, 4TB or even 8TB drives; again I'll lose two drives to parity in the raidz2-1 vdev and the remaining drives get data
        |_ raidz2-*: same as above
    Q. As more raidz2 vdevs are added, will each one (raidz2-*) lose two drives to parity within its own individual vdev?
    I think I get it thus far... or do I?
    Is the graphic in the link I posted accurate? If it works that way, then I've got it and understand it.
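
    If I'm reading that right, the commands would look roughly like this (just a sketch; the device names are placeholders, and for a real pool I'd use /dev/disk/by-id paths instead of sdX):
    Code:
    # initial pool: one 5-disk raidz2 vdev (2 of the 5 disks go to parity)
    sudo zpool create metapool raidz2 sda sdb sdc sdd sde

    # later expansion: add a second 5-disk raidz2 vdev to the same pool
    # (this vdev also gives up 2 of its 5 disks to parity)
    sudo zpool add metapool raidz2 sdf sdg sdh sdi sdj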

    Now moving on to support vdevs (if I decide to use them):
    Cache - would an SSD work well here, or would it be a waste? Or, because I'm using rusty spinner disks, should it be a faster drive? I think I read somewhere it should be half the size of installed RAM.
    Log - again, same question: a fast SSD or a regular drive, and what should the capacity be? Or does it even matter?
    Spare - yep, no-brainer. If added to the zpool in a mixed setup, should it match the highest-capacity individual drive, or, if the storage vdevs match, just the individual drive capacity?

    * A spare (when there is more than one storage vdev) makes a lot of sense to utilize.
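
    For my own notes, adding those support vdevs later looks roughly like this (again a sketch, assuming OpenZFS; nvme0n1, nvme1n1 and sdk are placeholder device names):
    Code:
    # L2ARC read cache on an SSD (only helps once the working set overflows RAM)
    sudo zpool add metapool cache nvme0n1

    # SLOG (separate intent log) for synchronous writes; small and fast is what matters
    sudo zpool add metapool log nvme1n1

    # hot spare shared by the whole pool; size it at least as large as the biggest disk
    sudo zpool add metapool spare sdk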
    Last edited by sgt-mike; 2 Weeks Ago at 09:19 PM.

  7. #17
    Join Date
    May 2018
    Location
    Here and There
    Beans
    Hidden!
    Distro
    Xubuntu Development Release

    Re: ZFS create

    I told MAFoElffen that I did create a swap disk, 256 GB worth, and I notice no difference.

    This one helps a bit:
    Code:
    free -g 
                   total        used        free      shared  buff/cache   available 
    Mem:              13           3           6           0           2           9 
    Swap:             45           0          45
    
    

    and this:
    Code:
    vmstat 
    procs -----------memory---------- ---swap-- -----io---- -system-- -------cpu------- 
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st gu 
     0  0      0 7209124   3180 3152128    0    0   252   140 3858    4  1  1 98  0  0  0
    
    
    Code:
    Filename                                Type            Size            Used            Priority 
    /swap/swapfile                          file            33554428        0               -2 
    /dev/zram0                              partition       14159868        0               100
    
    
    

    I've only seen it kick in maybe two or three times, but all this may vary with your setup.
    But SSDs are faster than spinners.
    Last edited by 1fallen2; 2 Weeks Ago at 10:11 PM. Reason: add to
    "When you practice gratefulness, there is a sense of respect toward others." >>Dalai Lama

  8. #18
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    60
    Distro
    Ubuntu

    Re: ZFS create

    Went back and read some posts from MAFoElffen where he was helping with the speed on a RAIDZ3 setup. He addressed the cache and the log vdevs, while not at great length, but I noted he specifically called for small NVMe devices for the log vdev (64 GB or less for the cache, and I want to say about the same for the log). He even noted that it wouldn't help much except for certain write workloads (VMs, databases, etc.), which he was running. But RAM did more for the system than anything else, versus the usage of the cache/log vdevs. I'll have to reread it, but my takeaway was that I really don't need to worry about it until I max out my onboard RAM. And even then it will probably not be needed until later.

    Still war-gaming my first true pool and the cabling requirement to the HBA so I don't waste ports. I know at some point I'll need to go outside my case and set up a DAS to expand if I stick with the Z2 configuration. I also sat and thought about the number of drives. I know I at first said go with 5 drives, but I would be wasting ports on the HBA and cabling with the first raidz* vdev if I go with 5, so 4 drives will be better.
    Now the question: Z1 or Z2? Z1 with 4x 4TB in one vdev will afford me about 10 TB usable, but a practical 8.2 TB allowing 20% headroom; Z2 in the same setup is about 7.1 TB usable, 5.7 TB practical (rough arithmetic sketched below). It's redundancy vs. storage that I have to weigh: I really like the fault tolerance of the double parity, and I like the storage of the single parity.
    I'll mull it over again and again until I set the drives in the case. One saving grace is that my media is still small enough (a little less than 2.8 TB) to sit on a Z2 configuration as I stated; I'll just have to add the second storage vdev sooner than if I went single parity. But writing this out helps me think through what I want to do... (we won't discuss how many times I've hit Post Quick Reply, typed out what I was thinking, made a decision, and didn't post to this thread, so yeah, it's my sounding board).
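
    For anyone checking my numbers, the rough back-of-the-envelope I'm using (the ~20% figure is just the usual keep-the-pool-under-80%-full rule of thumb, not an exact ZFS overhead calculation, and it ignores metadata/padding):
    Code:
    # RAIDZ usable space ~= (drives - parity) x drive size, then TB -> TiB
    # 4x 4TB RAIDZ1: (4-1) x 4 TB = 12 TB raw ~= 10.9 TiB -> ~8.7 TiB at 80% fill
    # 4x 4TB RAIDZ2: (4-2) x 4 TB =  8 TB raw ~=  7.3 TiB -> ~5.8 TiB at 80% fill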

    In my test pool results earlier, and after actually reading some old posts, I realized that I was getting really good results for old SATA spinners in both reads and writes. As I considered it, I also realized that what I really should be concerned about is read speed. At some point soon, if I continue to add more media, I'll have to clear the media off the Plex server and have the NAS/NFS host the media to the Plex server.
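
    When that day comes, my understanding is the share side is mostly a one-liner on the ZFS host (a sketch only, assuming a dataset named metapool/media with the default mountpoint, an NFS server package installed, and a 192.168.1.0/24 LAN; "nas" is a placeholder hostname):
    Code:
    # export the dataset over NFS, read/write for the local subnet
    sudo zfs set sharenfs="rw=@192.168.1.0/24" metapool/media

    # on the Plex server, mount it
    sudo mount -t nfs nas:/metapool/media /mnt/media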

  9. #19
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    60
    Distro
    Ubuntu

    Re: ZFS create

    Yesterday parts arrived and a setback occurred: the LSI HBA was DOA.... bummer. At least the eBay vendor has a return/refund policy.

    I'll just have to re-order and wait for an HBA to ship. With hindsight in mind, maybe I should order multiples this time. But with my luck, when I order multiples, all will work. Still debating 2 or 3; 3 will afford the highest probability of a usable item.

    ------This part is me just thinking out loud before ordering --------------

    I did find an LSI 9300-8i (the LSI 9300-16i is simply two 9300-8i controllers on the same board, supporting 16 drives versus 8). I might go that route with the 8i and order an expander with 8 external ports at the same time: 36 ports total on the expander, losing 8 ports to slaving to the 9300-8i and 8 ports to external connections, which leaves me with 20 internal ports. I do plan on adding storage outside the case now, after much internal debate.
    https://www.ebay.com/itm/19657030460...AAAA0HoV3kP08I
    This is the expander I'm considering ordering with the 9300-8i; I was already considering it with the 9300-16i anyway. All of them support SAS-3 at 12 Gbps. The Intel expander I was considering only supported up to 6 Gbps.
    https://www.ebay.com/itm/135244175645

    After all, unless I go with 2.5" drives, the Cooler Master case can support 12x 3.5" drives using some OEM drive/case modules (5 internal 3.5" bays plus 5 external 5.25" bays).
    https://www.newegg.com/athena-power-...9SIAZ8XJ1U6752
    https://www.newegg.com/athena-power-...82E16817995109
    Last edited by sgt-mike; 2 Weeks Ago at 09:32 PM.

  10. #20
    Join Date
    May 2007
    Beans
    90
    Distro
    Ubuntu 22.04 Jammy Jellyfish

    Re: ZFS create

    I'm glad I found this thread; it looks like you and I are in similar spots. I'm in the process of rebuilding my home NAS and dabbling with RAID. Nothing too fancy (I'm not going ZFS or anything, at least not as of this writing), but I did want to share the parts and experience I have; maybe this will help you with your build.

    1 - My HBA is this guy right here. It works like a dream out of the box, and the Fractal Design Define R5 it sits in houses just as many drives as the card itself can handle. I currently have 2x 4-port SATA breakouts connected, and it can handle 4x 4 for a total of 16 drives. The major difference between this one and what you've linked is cost. As of this post, this listing is 110 USD, but I got it for half that. If you really don't need 16 drives, then perhaps one of the two HBAs you mentioned above will do the job just fine; just make sure the controller is up to the task (which it seems you've already run into some of that!). I used this post from ServeTheHome as my starting point. One thing I haven't yet looked into is how SATA-based SSDs behave on this HBA. I've heard some folks mention this LSI controller doesn't support TRIM, some claim it does, so YMMV (there's a quick way to check, sketched after point 2 below; in my case, I'm using all mechanical HDDs except for the boot drive). Another item you'll want to keep in mind is that the HBA's heatsink can get quite toasty, so ensure you've got adequate airflow; if not, it might be worth finding a 3D-printed fan shroud mount and cooling things off that way. I think this card supports SAS 3, so you should be good there too.

    2 - I have had GREAT success with those hot-swap bays you've listed above. Mine is a KingWin, but it looks exactly the same as the Athena Power 3x HDD cage. I use it for data ingestion and drive testing, as I've been in the process of validating all the drives in my possession plus the new/refurbished drives that have been arriving. It's been great. The only thing I need to mention is that the drives do get a bit toasty in there (35-42°C), and the included fan is not a standard size (75mm?? I think? I don't recall), because I replaced mine with a 60mm Noctua fan as soon as the hot-swap bay arrived 4 years ago. The fan is duct-taped to the cage and plugged into one of the motherboard chassis fan headers. I would recommend doing that if you do end up committing to these hard drive cages.
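
    Circling back to the TRIM question in point 1: a quick way to check whether the kernel actually sees discard support through a given HBA (just a sketch; /dev/sdX is a placeholder, and the hdparm check assumes the HBA passes ATA commands through to the SATA drive):
    Code:
    # non-zero DISC-GRAN / DISC-MAX means the kernel sees discard (TRIM) support
    lsblk --discard /dev/sdX

    # or ask the drive directly
    sudo hdparm -I /dev/sdX | grep -i trim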

    I'm not trying to steamroll this thread, only wanted to share my knowledge in the hopes it would benefit your build.
