Just starting out on ZFS.
I did a test run, including a scrub, with some spare laptop drives in a six-drive enclosure that fits a single 5.25" bay (SATA only, and thin drives only; I think 2.5", 9 mm or less, for each bay) to create a pool, which I will destroy, since many here have warned against using laptop drives in any kind of RAID configuration.
Like I said, this is just a test before the actual enterprise-class drives and a supporting HBA/cabling arrive within the next two weeks.
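For reference, the layout shown below came from a create command roughly along these lines -- this is reconstructed from the status output rather than my exact shell history:
Code:
# Rough shape of the create: raidz1 across five partitions plus a hot spare
sudo zpool create teststore raidz1 sda1 sdb1 sdc1 sdd1 sde1 spare sdf1
And here is how it looked while I was copying data onto it: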
Code:
mike@beastie:~$ zpool status
pool: teststore
state: ONLINE
scan: scrub repaired 0B in 00:00:26 with 0 errors on Sun Sep 15 07:19:36 2024
config:
        NAME        STATE     READ WRITE CKSUM
        teststore   ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda1    ONLINE       0     0     0
            sdb1    ONLINE       0     0     0
            sdc1    ONLINE       0     0     0
            sdd1    ONLINE       0     0     0
            sde1    ONLINE       0     0     0
        spares
          sdf1      AVAIL
errors: No known data errors
mike@beastie:~$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
teststore 2.27T 504G 1.77T - - 0% 21% 1.00x ONLINE -
mike@beastie:~$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
teststore 2.27T 566G 1.71T - - 0% 24% 1.00x ONLINE -
mike@beastie:~$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
teststore 2.27T 858G 1.43T - - 0% 36% 1.00x ONLINE -
mike@beastie:~$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
teststore 2.27T 1018G 1.27T - - 0% 43% 1.00x ONLINE -
mike@beastie:~$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
teststore 2.27T 1.02T 1.24T - - 0% 45% 1.00x ONLINE -
mike@beastie:~$ zpool status
pool: teststore
state: ONLINE
scan: scrub repaired 0B in 00:00:26 with 0 errors on Sun Sep 15 07:19:36 2024
config:
        NAME        STATE     READ WRITE CKSUM
        teststore   ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda1    ONLINE       0     0     0
            sdb1    ONLINE       0     0     0
            sdc1    ONLINE       0     0     0
            sdd1    ONLINE       0     0     0
            sde1    ONLINE       0     0     0
        spares
          sdf1      AVAIL
errors: No known data errors
mike@beastie:~$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
teststore 2.27T 1.22T 1.05T - - 0% 53% 1.00x ONLINE -
mike@beastie:~$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
teststore 2.27T 1.23T 1.04T - - 0% 54% 1.00x ONLINE -
mike@beastie:~$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
teststore 2.27T 1.34T 947G - - 0% 59% 1.00x ONLINE -
mike@beastie:~$ zpool iostat -vl teststore
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
teststore   1.34T   947G      0    239  2.84K  39.9M   30ms    7ms   30ms    3ms    1us    1us    4us    3ms   52ms      -
  raidz1-0  1.34T   947G      0    239  2.84K  39.9M   30ms    7ms   30ms    3ms    1us    1us    4us    3ms   52ms      -
    sda1        -      -      0     49    564  7.98M   30ms    6ms   29ms    3ms    1us    2us    1us    2ms   28ms      -
    sdb1        -      -      0     49    623  7.98M   30ms    7ms   30ms    3ms    2us    1us    1us    3ms   62ms      -
    sdc1        -      -      0     49    575  7.98M   30ms    6ms   29ms    3ms    1us    1us   16us    2ms   50ms      -
    sdd1        -      -      0     45    573  7.98M   32ms    8ms   31ms    4ms    1us    1us    1us    4ms   60ms      -
    sde1        -      -      0     46    572  7.98M   30ms    7ms   29ms    4ms    1us    1us    1us    3ms   63ms      -
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
mike@beastie:~$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
teststore 2.27T 1.34T 947G - - 0% 59% 1.00x ONLINE -
mike@beastie:~$ zpool status
pool: teststore
state: ONLINE
scan: scrub repaired 0B in 00:00:26 with 0 errors on Sun Sep 15 07:19:36 2024
config:
        NAME        STATE     READ WRITE CKSUM
        teststore   ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda1    ONLINE       0     0     0
            sdb1    ONLINE       0     0     0
            sdc1    ONLINE       0     0     0
            sdd1    ONLINE       0     0     0
            sde1    ONLINE       0     0     0
        spares
          sdf1      AVAIL
errors: No known data errors
mike@beastie:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 63.9M 1 loop /snap/core20/2105
loop1 7:1 0 63.9M 1 loop /snap/core20/2318
loop2 7:2 0 87M 1 loop /snap/lxd/27037
loop3 7:3 0 87M 1 loop /snap/lxd/29351
loop4 7:4 0 40.4M 1 loop /snap/snapd/20671
loop5 7:5 0 38.8M 1 loop /snap/snapd/21759
sda 8:0 0 465.8G 0 disk
└─sda1 8:1 0 465.8G 0 part
sdb 8:16 0 465.8G 0 disk
└─sdb1 8:17 0 465.8G 0 part
sdc 8:32 0 465.8G 0 disk
└─sdc1 8:33 0 465.8G 0 part
sdd 8:48 0 465.8G 0 disk
└─sdd1 8:49 0 465.8G 0 part
sde 8:64 0 465.8G 0 disk
└─sde1 8:65 0 465.8G 0 part
sdf 8:80 0 465.8G 0 disk
└─sdf1 8:81 0 465.8G 0 part
nvme0n1 259:0 0 238.5G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot/efi
└─nvme0n1p2 259:2 0 237.4G 0 part /
It has gone well so far, and I'm impressed with how easy it was, although I'm still learning.
But it got me thinking, which leads to an actual question.
I had seen a post/article elsewhere where someone assigned an alias to each drive instead of just using the standard /dev/sd*# method, which seems to make some sense for cleanly tracking down a faulty drive.
(I'll have to reread the article/post, as I don't recall whether he assembled the pool with the assigned aliases or the drive IDs, but I think he used the aliases.)
I've also seen others use the drive ID, which makes a lot of sense compared to the method I used.
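If I understood that article correctly, the aliases come from /etc/zfs/vdev_id.conf, which maps a friendly name (bay/slot) to the persistent /dev/disk/by-id/ path, so zpool status reports something like bay1 instead of sda1. A rough sketch of what I think that looks like -- the WWNs and bay names below are made-up placeholders, not my actual drives:
Code:
# Persistent names (WWN, or model+serial) for each disk:
ls -l /dev/disk/by-id/ | grep -v part

# Example /etc/zfs/vdev_id.conf entries (placeholder WWNs):
#   alias bay1  /dev/disk/by-id/wwn-0x5000c500aaaaaaa1
#   alias bay2  /dev/disk/by-id/wwn-0x5000c500aaaaaaa2

# Re-run the udev rules so the /dev/disk/by-vdev/ links get created:
sudo udevadm trigger
ls -l /dev/disk/by-vdev/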
In creating a pool with the incoming SAS drives (six, 4 TB each), what is the best method of assembling the drives into a single zpool?
I know that is a subjective question.
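For what it's worth, I'm leaning toward just handing zpool create the full by-id paths, something along these lines -- placeholder pool name and WWNs, and I haven't settled on raidz1 plus a spare versus raidz2:
Code:
# Hypothetical create using persistent by-id names (WWNs are placeholders):
sudo zpool create tank raidz1 \
    /dev/disk/by-id/wwn-0x5000c500aaaaaaa1 \
    /dev/disk/by-id/wwn-0x5000c500aaaaaaa2 \
    /dev/disk/by-id/wwn-0x5000c500aaaaaaa3 \
    /dev/disk/by-id/wwn-0x5000c500aaaaaaa4 \
    /dev/disk/by-id/wwn-0x5000c500aaaaaaa5 \
    spare /dev/disk/by-id/wwn-0x5000c500aaaaaaa6
# zpool status should then list the by-id names instead of sdX1.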
(This will be in a NAS/NFS machine to back up media from the Plex server, plus some documents such as tax stamp paperwork from the ATF and DD214s. I'm planning on a wake-on-LAN configuration so it's not on 24x7.)
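On the wake-on-LAN side, my understanding is that it's just a matter of enabling magic-packet wake on the NAS's NIC and sending the packet from another box when needed; the interface name and MAC below are placeholders:
Code:
# On the NAS: check and enable magic-packet wake-on-LAN (interface name is a placeholder):
sudo ethtool enp3s0 | grep Wake-on
sudo ethtool -s enp3s0 wol g

# From another machine on the LAN: send the magic packet (placeholder MAC):
wakeonlan aa:bb:cc:dd:ee:ff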