
Thread: Oh No not another RAID Post

  1. #21
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    66
    Distro
    Ubuntu

    Re: Oh No not another RAID Post

    Failed sdb1, hot-removed it, then zeroed its superblock
    Code:
    mike@bastion:~$ sudo mdadm /dev/md0 --fail /dev/sdb1
    mdadm: set /dev/sdb1 faulty in /dev/md0
    mike@bastion:~$ sudo mdadm /dev/md0 --remove /dev/sdb1
    mdadm: hot removed /dev/sdb1 from /dev/md0
    mike@bastion:~$ sudo mdadm --zero-superblock /dev/sdb1
    mike@bastion:~$ watch cat /proc/mdstat 
    mike@bastion:~$ watch cat /proc/mdstat 
    mike@bastion:~$ sudo mdadm -D /dev/md0
    /dev/md0:
               Version : 1.2
         Creation Time : Thu Mar  7 22:57:49 2024
            Raid Level : raid5
            Array Size : 2441267200 (2.27 TiB 2.50 TB)
         Used Dev Size : 488253440 (465.63 GiB 499.97 GB)
          Raid Devices : 6
         Total Devices : 7
           Persistence : Superblock is persistent
    
           Update Time : Sun Mar 17 15:31:36 2024
                 State : clean, degraded, recovering 
        Active Devices : 5
       Working Devices : 7
        Failed Devices : 0
         Spare Devices : 2
    
                Layout : left-symmetric
            Chunk Size : 512K
    
    Consistency Policy : resync
    
        Rebuild Status : 1% complete
    
                  Name : bastion:0  (local to host bastion)
                  UUID : 58952835:75d234f4:d4201fb9:d535d0c4
                Events : 8239
    
        Number   Major   Minor   RaidDevice State
           6       8      113        0      spare rebuilding   /dev/sdh1
           1       8       33        1      active sync   /dev/sdc1
           2       8       49        2      active sync   /dev/sdd1
           3       8       65        3      active sync   /dev/sde1
           4       8       81        4      active sync   /dev/sdf1
           5       8       97        5      active sync   /dev/sdg1
    
           7       8      129        -      spare   /dev/sdi1
    mike@bastion:~$ lsblk
    NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
    loop0     7:0    0   9.8M  1 loop  /snap/canonical-livepatch/264
    loop1     7:1    0  63.9M  1 loop  /snap/core20/2182
    loop2     7:2    0  63.9M  1 loop  /snap/core20/2105
    loop3     7:3    0  74.2M  1 loop  /snap/core22/1122
    loop4     7:4    0    87M  1 loop  /snap/lxd/27037
    loop5     7:5    0    87M  1 loop  /snap/lxd/27428
    loop6     7:6    0  40.4M  1 loop  /snap/snapd/20671
    loop7     7:7    0  39.1M  1 loop  /snap/snapd/21184
    sda       8:0    0 232.9G  0 disk  
    ├─sda1    8:1    0     1G  0 part  /boot/efi
    └─sda2    8:2    0 231.8G  0 part  /
    sdb       8:16   0 465.8G  0 disk  
    └─sdb1    8:17   0 465.8G  0 part  
    sdc       8:32   0 465.8G  0 disk  
    └─sdc1    8:33   0 465.8G  0 part  
      └─md0   9:0    0   2.3T  0 raid5 
    sdd       8:48   0 465.8G  0 disk  
    └─sdd1    8:49   0 465.8G  0 part  
      └─md0   9:0    0   2.3T  0 raid5 
    sde       8:64   0 465.8G  0 disk  
    └─sde1    8:65   0 465.8G  0 part  
      └─md0   9:0    0   2.3T  0 raid5 
    sdf       8:80   0 465.8G  0 disk  
    └─sdf1    8:81   0 465.8G  0 part  
      └─md0   9:0    0   2.3T  0 raid5 
    sdg       8:96   0 465.8G  0 disk  
    └─sdg1    8:97   0 465.8G  0 part  
      └─md0   9:0    0   2.3T  0 raid5 
    sdh       8:112  0 465.8G  0 disk  
    └─sdh1    8:113  0 465.8G  0 part  
      └─md0   9:0    0   2.3T  0 raid5 
    sdi       8:128  0 465.8G  0 disk  
    └─sdi1    8:129  0 465.8G  0 part  
      └─md0   9:0    0   2.3T  0 raid5 
    mike@bastion:~$
    Things are looking really good. At this moment the array is resyncing/rebuilding. Once that completes, one last grow command (only if it's needed for some strange reason).
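    If that final grow does turn out to be needed, I'd expect it to just be letting md use all the available device space and then growing the filesystem; a minimal sketch, not something I've actually run on this array:
    Code:
    # Hypothetical final grow, only if the array comes up smaller than expected
    sudo mdadm --grow /dev/md0 --size=max   # let md0 use all available space on its members
    sudo resize2fs /dev/md0                 # then grow the ext4 filesystem to match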

    I know I posted that I wanted /dev/sdd1 to end up listed as the spare, but I'm trying to decide if I really want to force that member into the spare slot.
    In my mind that would mean failing it, removing it, then adding it back as a spare (sdd1).
    Or I could just let it run as it is currently set up, which is what I'm really leaning towards.
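    For reference, if I did decide to force sdd1 into the spare slot, the sequence would look something like this (just a sketch, assuming the device names don't change):
    Code:
    sudo mdadm /dev/md0 --fail /dev/sdd1      # mark sdd1 faulty; the current spare starts rebuilding
    sudo mdadm /dev/md0 --remove /dev/sdd1    # hot-remove it from the array
    sudo mdadm --zero-superblock /dev/sdd1    # wipe the old metadata
    sudo mdadm /dev/md0 --add /dev/sdd1       # re-add; once the rebuild finishes it sits as the spare
    watch cat /proc/mdstat                    # keep an eye on the rebuild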
    Last edited by sgt-mike; March 17th, 2024 at 11:33 PM.

  2. #22
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    66
    Distro
    Ubuntu

    Re: Oh No not another RAID Post

    OK, everything seems to be finished.
    Code:
    mike@bastion:~$ sudo resize2fs -p /dev/md0
    resize2fs 1.46.5 (30-Dec-2021)
    Resizing the filesystem on /dev/md0 to 610316800 (4k) blocks.
    The filesystem on /dev/md0 is now 610316800 (4k) blocks long.
    
    mike@bastion:~$ sudo mdadm -D /dev/md0
    /dev/md0:
               Version : 1.2
         Creation Time : Thu Mar  7 22:57:49 2024
            Raid Level : raid5
            Array Size : 2441267200 (2.27 TiB 2.50 TB)
         Used Dev Size : 488253440 (465.63 GiB 499.97 GB)
          Raid Devices : 6
         Total Devices : 7
           Persistence : Superblock is persistent
    
           Update Time : Sun Mar 17 20:14:02 2024
                 State : clean 
        Active Devices : 6
       Working Devices : 7
        Failed Devices : 0
         Spare Devices : 1
    
                Layout : left-symmetric
            Chunk Size : 512K
    
    Consistency Policy : resync
    
                  Name : bastion:0  (local to host bastion)
                  UUID : 58952835:75d234f4:d4201fb9:d535d0c4
                Events : 8283
    
        Number   Major   Minor   RaidDevice State
           6       8      113        0      active sync   /dev/sdh1
           1       8       33        1      active sync   /dev/sdc1
           2       8       49        2      active sync   /dev/sdd1
           3       8       65        3      active sync   /dev/sde1
           4       8       81        4      active sync   /dev/sdf1
           5       8       97        5      active sync   /dev/sdg1
    
           7       8      129        -      spare   /dev/sdi1
    Looks like I lost some data or have bad data... the size on the disks seems smaller. No problem, I'll copy from the external drive I have the files backed up on, to ensure the files are good. I'll leave sdi1 as the spare.
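    Just a sketch of how I'd check things over; the mount points here (/mnt/backup for the external drive and /mnt/md0 for the array) are placeholders, not my real paths:
    Code:
    df -h /dev/md0                                    # confirm the filesystem size after the resize
    sudo rsync -avc --dry-run /mnt/backup/ /mnt/md0/  # preview which files differ (checksum compare)
    sudo rsync -avc /mnt/backup/ /mnt/md0/            # then copy back from the backup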

    All in all I'm pleased with the results thus far. Now to look at how to mark this solved.

  3. #23
    Join Date
    Mar 2023
    Beans
    2

    Re: Oh No not another RAID Post - 3.6TB instead of 6TB

    I am rejoining the Linux community after some time off, building a Plex/Nextcloud server on Ubuntu Server. I am booting off my 1 TB NVMe drive. All is well, except that I am having difficulty with the size of my RAID 5 array. I have three 2 TB SSDs, which should be 6 TB, but lsblk and the like only show 3.6 TB. I am looking for any guidance on resolving the issue.
    I used the DigitalOcean tutorial, which seemed to work fine except for the array size:
    https://www.digitalocean.com/communi...dadm-on-ubuntu
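    For context, the create step in that tutorial went roughly like this (quoting from memory rather than my shell history; the whole-disk members sda/sdb/sdc and the /mnt/md0 mount point match the lsblk/blkid output below):

    sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
    sudo mkfs.ext4 -F /dev/md0
    sudo mkdir -p /mnt/md0
    sudo mount /dev/md0 /mnt/md0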


    lsblk

    loop0 7:0 0 55.7M 1 loop /snap/core18/2829
    loop1 7:1 0 74.2M 1 loop /snap/core22/1380
    loop2 7:2 0 130.1M 1 loop /snap/docker/2915
    loop3 7:3 0 325M 1 loop /snap/nextcloud/42890
    loop4 7:4 0 38.8M 1 loop /snap/snapd/21759
    sda 8:0 0 1.8T 0 disk
    └─md0 9:0 0 3.6T 0 raid5 /mnt/md0
    sdb 8:16 0 1.8T 0 disk
    └─md0 9:0 0 3.6T 0 raid5 /mnt/md0
    sdc 8:32 0 1.9T 0 disk
    └─md0 9:0 0 3.6T 0 raid5 /mnt/md0
    sdd 8:48 0 3.7T 0 disk
    ├─sdd1 8:49 0 200M 0 part
    └─sdd2 8:50 0 3.7T 0 part /media/plexmedia
    sde 8:64 0 3.6T 0 disk
    ├─sde1 8:65 0 200M 0 part
    └─sde2 8:66 0 3.6T 0 part /mnt/4tbcrucial
    nvme0n1 259:0 0 931.5G 0 disk
    ├─nvme0n1p1 259:1 0 1G 0 part /boot/efi
    └─nvme0n1p2 259:2 0 930.5G 0 part /


    fdisk -l
    Disk /dev/loop0: 55.66 MiB, 58363904 bytes, 113992 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk /dev/loop1: 74.24 MiB, 77844480 bytes, 152040 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk /dev/loop2: 130.09 MiB, 136404992 bytes, 266416 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk /dev/loop3: 325 MiB, 340791296 bytes, 665608 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk /dev/loop4: 38.83 MiB, 40714240 bytes, 79520 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
    Disk model: WD Blue SN570 1TB
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 7F2ED68A-DF94-4517-9F5C-7C9AD1DC91B6

    Device Start End Sectors Size Type
    /dev/nvme0n1p1 2048 2203647 2201600 1G EFI System
    /dev/nvme0n1p2 2203648 1953521663 1951318016 930.5G Linux filesystem

    Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: CT2000MX500SSD1
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes

    Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: CT2000MX500SSD1
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes

    Disk /dev/sdc: 1.86 TiB, 2048408248320 bytes, 4000797360 sectors
    Disk model: T-FORCE T253TY00
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    GPT PMBR size mismatch (4294967294 != 8001573551) will be corrected by write. [I am unsure if this is an issue?]

    Disk /dev/sdd: 3.73 TiB, 4096805658624 bytes, 8001573552 sectors
    Disk model: T-FORCE T253TY00
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 335CC8AB-3BD8-4FE3-9721-F878F16836E9


    Device Start End Sectors Size Type
    /dev/sdd1 40 409639 409600 200M EFI System
    /dev/sdd2 411648 8001572863 8001161216 3.7T Microsoft basic data


    Disk /dev/md0: 3.64 TiB, 4000527155200 bytes, 7813529600 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
    GPT PMBR size mismatch (4294967294 != 7814037167) will be corrected by write.

    Disk /dev/sde: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk model: CT4000X9PROSSD9
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 300BA892-F45B-41FF-AC5D-F85CFD0FADE4


    Device Start End Sectors Size Type
    /dev/sde1 40 409639 409600 200M EFI System
    /dev/sde2 409640 7813774983 7813365344 3.6T Apple HFS/HFS+

    blkid

    /dev/nvme0n1p1: UUID="2921-C7B0" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="e6cc44ea-c846-4eeb-9fea-066dec45d9a9"
    /dev/nvme0n1p2: UUID="27741d3d-ee50-4b89-ac69-9efc24630774" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="c4be6a03-56f2-46db-bbce-575c770edd79"
    /dev/sdd2: UUID="35bfd2bc-603c-4b45-87de-2f2613bd6274" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="d2568c05-eb0c-47c4-8a21-b8bb3067d361"
    /dev/sdd1: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="67E3-17ED" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="550a17f8-96a2-497e-a489-17b225d8bf7a"
    /dev/sdb: UUID="df71813f-8ee7-efd6-fbe8-3b21100f2497" UUID_SUB="e572584f-6046-1e73-5d03-7df2f7689f2b" LABEL="selbynas:0" TYPE="linux_raid_member"
    /dev/md0: UUID="76ae7744-4374-486f-b2c3-2417d1a5e577" BLOCK_SIZE="4096" TYPE="ext4"
    /dev/sdc: UUID="df71813f-8ee7-efd6-fbe8-3b21100f2497" UUID_SUB="e4f2da1f-beb4-2cc2-d5c5-d52a1a6bb641" LABEL="selbynas:0" TYPE="linux_raid_member"
    /dev/sda: UUID="df71813f-8ee7-efd6-fbe8-3b21100f2497" UUID_SUB="3c5e1f92-0e6f-9aac-d1c3-20742a6d5ae3" LABEL="selbynas:0" TYPE="linux_raid_member"
    /dev/loop1: BLOCK_SIZE="131072" TYPE="squashfs"
    /dev/loop4: BLOCK_SIZE="131072" TYPE="squashfs"
    /dev/loop2: BLOCK_SIZE="131072" TYPE="squashfs"
    /dev/loop0: BLOCK_SIZE="131072" TYPE="squashfs"
    /dev/sde2: UUID="592d1e8f-7374-32dd-aadc-1cd7cf393e65" BLOCK_SIZE="8192" LABEL="4TBCrucial" TYPE="hfsplus" PARTUUID="9f7ae9f6-3cc8-47e2-9121-b4e8d43a6b37"
    /dev/sde1: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="67E3-17ED" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="8ccafe28-d82a-4936-a6a7-dc9f48254ce7"
    /dev/loop3: BLOCK_SIZE="131072" TYPE="squashfs"


    Thank you in advance for any suggestions!
    Last edited by copz1998; July 14th, 2024 at 09:08 PM. Reason: add blkid

  4. #24
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    66
    Distro
    Ubuntu

    Re: Oh No not another RAID Post

    Yes, for a RAID 5, 3.6 TB is correct, not 6 TB.
    I'm sure you are sitting there wondering why.
    Drive manufacturers use decimal format (1 kB = 1000 bytes), while operating systems such as Linux, Windows, and DOS use binary format (1 KiB = 1024 bytes); Macs are the exception, since they use decimal format to report capacity.
    This means the manufacturer's 2 TB is really about 1.8 TB to most operating systems, and on top of that RAID 5 uses the equivalent of one drive for parity, so you lose one drive's worth of capacity.
    So while we start out with 5.4 TB raw (3 drives x 1.8 TB each), parity costs us the capacity of one drive: 5.4 TB - 1.8 TB leaves 3.6 TB usable.
    The easiest way to calculate RAID 5 capacity is simply to leave one drive out of the calculation.
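    As a quick sanity check against the fdisk numbers above (usable RAID 5 space is roughly (N - 1) x the smallest member; the exact md0 figure is a touch lower because of superblock and chunk overhead):
    Code:
    # (number of drives - 1) * bytes per 2 TB drive, shown in binary TiB
    awk 'BEGIN { printf "%.2f TiB usable\n", (3 - 1) * 2000398934016 / 1024^4 }'
    # -> 3.64 TiB usable, which lines up with the 3.64 TiB reported for /dev/md0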
    Here is an article that explains drive manufacturers' advertised capacity versus actual capacity better than I can ( https://www.seagate.com/support/kb/w...abel-172191en/ )

    Here is a link to a RAID capacity calculator ( https://www.servethehome.com/raid-calculator/ )

    The question I have for you to consider: for a Plex server, do you really need RAID 5? Would RAID 0 work?

    I understand the desire to be able to have a drive go down and still keep functioning, which is what RAID 5's redundancy allows.
    But if your goal is pure capacity (which from your post I'm assuming it is), maybe RAID 0 for the Plex server?
    That would mean your setup gains back that 1.8 TB, leaving you with 5.4 TB available.
    Keep in mind that RAID is not a backup, only redundancy. Backups are stored on a different machine (e.g. a NAS) or on a drive that is used only for backups.

    In the end it is your goals that dictate which RAID configuration to use.

    Personally, I have my headless Plex server backed up to a NAS, which is where my RAID configurations live now. My Plex server just uses the drives for my media at mount points, although there is nothing wrong with combining drives into a RAID.

    On my original media server I was running Universal Media Server on a RAID 5 array. I have since taken that out of service, as UMS didn't do a good job for one of my TVs, so I migrated to Plex, running on an old HP Compaq 8200 Elite Ultra-slim with a second-gen i7.
    Last edited by sgt-mike; 4 Weeks Ago at 08:21 PM.
