
Thread: Oh No not another RAID Post

  1. #11
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,570
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Oh No not another RAID Post

    I assumed you want to use this as a learning opportunity too. Otherwise, since you already said you have a backup of the data, like TheFu says the much faster and better way is to dynamite the array and create a new one with the size and disks you want. And you can use the process to introduce LVM, as mentioned too.
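    If you go that route, the rough sequence would be something like this. Just a sketch from memory; the device names, RAID level and LVM names are only examples, so adapt them to your disks and double check before running anything destructive:
    Code:
    # stop and wipe the old array -- this DESTROYS it, so make sure the backup is good
    sudo umount /dev/md0
    sudo mdadm --stop /dev/md0
    sudo mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1   # repeat for every old member
    
    # create the new array with the disks and size you actually want (example: 4-disk RAID5)
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]1
    
    # put LVM on top so future resizing is easier
    sudo pvcreate /dev/md0
    sudo vgcreate vg_media /dev/md0
    sudo lvcreate -l 100%FREE -n lv_media vg_media
    sudo mkfs.ext4 /dev/vg_media/lv_media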

    I used mdadm + LVM before I changed it, and was very happy with it while using it. I now use snapraid + mergerfs, but that is mainly because I wanted to make full use of my disks of different sizes. That is a different story.
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  2. #12
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    70
    Distro
    Ubuntu

    Re: Oh No not another RAID Post

    Darko stated:
    "I assumed you want to use this as a learning opportunity too. "
    YES Yes.
    The media server is really a home lab.
    And it just makes sense to me to learn this process now versus waiting until a drive fails.

    I know I alluded to the fact that I have 1 TB drives en route, but didn't push the fact that I will change over to them instead of the 500 GB drives. I'm sure that the fastest and easiest way is to dynamite the array, stick the new drives in, assemble the array the way I want it to sit, then transfer the media over once everything is clean.

    I will probably sit down tomorrow and start the learning process (changing the array) after I have read your advice 3 or 4 times. I'll shift the outgoing drives into my Linux NFS server, also running headless.

    Another thing I didn't state was that the drives are housed in a backplane that fits 6 x 2.5" drives into a single 5.25" bay. I had a couple of reasons for this: it is the easiest way to get a bunch of drives into a SFF chassis versus 3.5" drives, and another was exactly what TheFu was mentioning about spin-ups. My understanding is that laptop drives are better suited for this, but is that true or fiction? IDK.

  3. #13
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: Oh No not another RAID Post

    Avoid laptop HDDs for RAID use.

    I'd only use 2.5in HDDs in a RAID if they are SAS connected and enterprise class.

    On the laptop where I still have an HDD, the initial, shipped HDD was a 320G WD-Blue. After 3 yrs, I moved that to a 500G WD-Black. About 5 yrs later, it started showing some SMART issues, so I moved everything into a 1TB WD-Black ... which is still what it has. OTOH, I haven't booted that 2nd-Gen Core i5 in a long time and haven't used it seriously in about a decade. The 500G Black is here in a USB3 enclosure somewhere being used for scratch needs and sneakernet.

    All my newer laptops either came with an SSD or I pulled the shipped HDD and put it on a shelf for the warranty period, before putting it into a USB3 enclosure.

    I use 4-HDD drive cages that fit in 3 x 5.25in slots and have a huge, quiet, 120mm Noctua fan. Two systems have that. The fan keeps those HDDs cool, pulling outside air across them. Drive cooling is important when they are densely mounted. Same order as my other post:
    Code:
    $ sudo hddtemp /dev/sd[a-h]
    /dev/sda: WDC WD8002FZWX-00BKUA0: 50°C
    /dev/sdb: WDC WD20EFRX-68AX9N0: 35°C
    /dev/sdc: TOSHIBA DT01ACA200: 36°C
    /dev/sdd: Hitachi HUA723020ALA641: 39°C
    /dev/sde: WDC WD20EFRX-68AX9N0: 36°C
    Can you guess which is NOT mounted in the hot-swap drive cage?

    Code:
    $ sudo hddtemp /dev/sd[a-h]
    /dev/sda: HGST HUS726T4TALA6L4: 40°C
    /dev/sdb: HGST HMS5C4040ALE640: 38°C
    /dev/sdc: HGST HMS5C4040ALE640: 39°C
    /dev/sdd: WD easystore 25FB: S.M.A.R.T. not available
    /dev/sde: WD easystore 25FB: S.M.A.R.T. not available
    /dev/sdf: WDC WD40EFRX-68WT0N0:  drive supported, but it doesn't have a temperature sensor.
    /dev/sdg: ST3320620AS:  drive supported, but it doesn't have a temperature sensor.
    /dev/sdh: WDC WD8002FZWX-00BKUA0: 43°C
    Can you guess which is NOT mounted in the hot-swap drive cage? Hint: The easystore HDDs are external USB3 with powered enclosures ... used only for backups. The last 3 are in an external drive array, eSATA connected. Looks like one of the WD-Red HDDs is having issues ... but not yet dying.

    Anyway, some things to consider. 2.5in laptop drives aren't meant to be used 24/7 and definitely not in an array situation. In a RAID array, if any of the drives is spun down, you have to wait for the slowest to spin up. If anything bad happens related to the spin-up/down, expect data loss. The only thing I'd consider worse would be using USB connected HDDs for RAID or even primary storage. I've been burned. Never again.

  4. #14
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Oh No not another RAID Post

    I used to use recertified SAS drives in that T720... but they are still spendy. I no longer use SAS drives = I can't afford them (and SATA SSDs are now cheaper). I have used 2.5 inch enterprise-class SATA HDDs for years and I still have a lot of those that I use for testing... My Dell PowerEdge T720 has 16 x 2.5" SAS/SATA drive bays.

    My new servers are all SSD and NVMe. For the SSDs, I use 1-slot 5.25" drive bay cages that hold 6 x 2.5" drives... They have 2 fans. Very compact. I am very happy with those.

    Like TheFu mentioned... I prevent all my drives from spinning down at all. No power-saving features left on. They never go to sleep or power off. <-- That is very important to me. Bad past experiences with drives going to sleep.
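    On SATA HDDs that can be done per drive with hdparm. A rough sketch; whether the drive honors these depends on its firmware:
    Code:
    # never spin down: disable the standby timeout and (where supported) APM
    sudo hdparm -S 0 /dev/sda     # -S 0 = standby timeout off
    sudo hdparm -B 255 /dev/sda   # -B 255 = APM off (some drives only accept 254)
    sudo hdparm -B /dev/sda       # query what the drive reports back
    To keep the settings across reboots they can go into /etc/hdparm.conf on Ubuntu/Debian.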

    I used to do mdadm/LVM >> now solely ZFS RAIDZ2 and RAIDZ3 arrays. I like that I can afford to lose 2-3 drives and still go on with business. I like that I can make (most) changes with a live filesystem. I can mark drives as failed, clear, scrub, add, remove, etc., while the array is live and accessible. The filesystem within it is very robust.
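    For comparison, creating one of those pools is a one-liner. A rough example only; the pool name and the plain sdX names are placeholders (on a real system I use /dev/disk/by-id paths):
    Code:
    # 6-disk RAIDZ2 pool: any two drives can fail and the pool keeps running
    sudo zpool create tank raidz2 sdb sdc sdd sde sdf sdg
    sudo zfs create tank/media
    # routine health check and periodic scrub
    sudo zpool status tank
    sudo zpool scrub tank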

    If a drive powers down, it will be marked as failed in the array... even though the drive is not really bad (just not accessible). When this happened to me, I could see each drive drop out of the array, until the pool dropped out. Bad juju. PITA. Just caused more work and stress. Got it all back up... changed the power settings of the drives... no further problems.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  5. #15
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    70
    Distro
    Ubuntu

    Re: Oh No not another RAID Post

    @TheFu & MAFoElffen

    BEFORE I post my progress thus far, let me ASSURE both of you that your advice did not fall on deaf ears and that I'm not ignoring your mentorship and advice. The systems are a homelab,
    so they can be reconfigured with the addition of a SAS controller and the correct drives. And thank you for the advice; it will be used in my NFS (Optiplex 3010) build, which is next, so I can incorporate that into that system, then shift the media server over to SAS or pure SSD. I'll review the power settings again, but if I recall correctly I did not allow the drives to power down EXCEPT in a shutdown or reboot. Like I said, I'll triple check.


    Here is the progress thus far on what I have:
    Code:
    mike@bastion:~$ sudo umount /dev/md0
    
     [sudo] password for mike:  
     mike@bastion:~$ sudo resize2fs -p /dev/md0 1800G
     resize2fs 1.46.5 (30-Dec-2021)
     Please run 'e2fsck -f /dev/md0' first.
     
     
     mike@bastion:~$ sudo e2fsck -f /dev/md0
     e2fsck 1.46.5 (30-Dec-2021)
     Pass 1: Checking inodes, blocks, and sizes
     Pass 2: Checking directory structure
     Pass 3: Checking directory connectivity
     Pass 4: Checking reference counts
     Pass 5: Checking group summary information
     /dev/md0: 976/213614592 files (1.0% non-contiguous), 254584432/854443520 blocks
     mike@bastion:~$ sudo resize2fs -p /dev/md0 1800G
     resize2fs 1.46.5 (30-Dec-2021)
     Resizing the filesystem on /dev/md0 to 471859200 (4k) blocks.
     Begin pass 3 (max = 26076)
     Scanning inode table          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
     The filesystem on /dev/md0 is now 471859200 (4k) blocks long.
     
     
     mike@bastion:~$ sudo mdadm --grow /dev/md0 --raid-devices=6  
     mdadm: this change will reduce the size of the array.
            use --grow --array-size first to truncate array.
            e.g. mdadm --grow /dev/md0 --array-size 2441267200
     mike@bastion:~$ sudo mdadm --grow /dev/md0 --array-size 2441267200
     mike@bastion:~$ sudo mdadm --grow /dev/md0 --raid-devices=6  
     mdadm: Need to backup 17920K of critical section..
     mdadm: /dev/md0: Cannot grow - need backup-file

    This last line caused a cockeyed look from me, so I issued:


    Code:
     mike@bastion:~$ sudo mdadm --grow /dev/md0 --raid-devices=6 --backup=/home/mike/*.tmp
     mdadm: Need to backup 17920K of critical section..

    Hmm, did it finish resizing???
    Code:
     mike@bastion:~$ sudo mdadm -D /dev/md0 | grep -e "Array Size" -e "Device Size"
             Array Size : 2441267200 (2.27 TiB 2.50 TB)
     mike@bastion:~$ cat /proc/mdstat  
     Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
     md0 : active raid5 sdb1[8] sdh1[6] sdi1[7] sdg1[5] sdf1[4] sde1[3] sdc1[1] sdd1[2]
           2441267200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
           [>....................]  reshape =  0.6% (2956284/488253440) finish=649.7min speed=12448K/sec

    Nope, still working at resizing. OK, just have to slow down, quit rushing, and wait 600 min until it's resized and complete before I can reissue the command sudo mdadm --grow /dev/md0 --raid-devices=6
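    While it grinds away I can at least keep an eye on it without retyping things, something like:
    Code:
    watch -n 60 cat /proc/mdstat
    sudo mdadm -D /dev/md0 | grep -i reshape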

    And regarding the post that TheFu made about the HDD temps, here are mine UNDER load. As a FYI, not bragging, etc., just to give an idea of what I have without taking a picture.
    Drive temperatures
    sda: 26°C sdb: 30°C sdc: 29°C sdd: 31°C sde: 32°C sdf: 31°C sdg: 32°C sdh: 30°C sdi: 30°C
    all have fans blowing over them
    sdc through sdi are sitting in a 6-bay (2.5", 7-9.5mm height) 1 x 5.25" caged bay with cooling fans
    Last edited by sgt-mike; March 17th, 2024 at 07:39 AM.

  6. #16
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,570
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Oh No not another RAID Post

    Sorry, my bad that I didn't mention the backup file parameter in my grow command. I thought you had that clear from your tutorial link. The --grow reshape is basically asking you for a path and filename where to keep some basic info about the reshape, in case things go wrong.

    So it would have been something like:
    Code:
    sudo mdadm --grow /dev/md0 --raid-devices=6 --backup-file=/home/mike/reshape.tmp
    In any case, the --array-size command did complete correctly; you can see your new temporary Array Size is 2.27 TiB (which is 5 x 465 GiB).

    After you did the --grow --array-size, the next --grow --raid-devices=6 you issued actually started the reshape (shrink) to 6 members. That is the process you see in 'cat /proc/mdstat'. It is estimating 600 mins to complete, after which you should see two of the 8 disks marked as spare in the array. NO NEED to issue another grow after this, in my opinion. This is the actual main grow/shrink that is already in progress.

    PS. As you can notice in the cat output, the array is already considered as having 6 members, but the two extra disks won't be marked as spares until this reshape process completes. The reshape is actually the shrink of the array.
    Code:
    2441267200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
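    Once that reshape finishes and two members show up as spares, freeing them would look roughly like this (sdX1/sdY1 are placeholders for whichever two end up as spares):
    Code:
    sudo mdadm /dev/md0 --remove /dev/sdX1
    sudo mdadm /dev/md0 --remove /dev/sdY1
    sudo mdadm --zero-superblock /dev/sdX1 /dev/sdY1
    # then refresh the ARRAY line in /etc/mdadm/mdadm.conf (mdadm --detail --scan prints it)
    # and run: sudo update-initramfs -u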
    Last edited by darkod; March 17th, 2024 at 09:41 AM.
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  7. #17
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    70
    Distro
    Ubuntu

    Re: Oh No not another RAID Post

    Sidebar question, as I'm getting older (in my 60s) and memory sometimes fails me.
    The command to scrub the drives would be:
    echo check > /sys/block/md0/md/sync_action

    correct?



    on the
    sudo mdadm --grow /dev/md0 --raid-devices=6 --backup-file=/home/mike/reshape.tmp

    Had I actually thought about it, the reshape.tmp file should have come to mind; I had seen that
    listed somewhere before. I don't know why I used the wildcard in the command. Just was not thinking. Brain fart, I guess.
    Last edited by sgt-mike; March 17th, 2024 at 10:58 AM.

  8. #18
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,570
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Oh No not another RAID Post

    I don't know that command, and I'm not sure what exactly you mean by 'scrub'.

    Basically, after you remove a disk/member from the array (for example the mentioned sdb1) and zero its superblock, you are free to use it wherever and however you want. It will not be related to this array any more. You can use it in the same or another machine.

    If you need to repartition it, or simply start fresh, you just create a new blank partition table and off you go.
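    For example, something like this on a freed disk (sdX is a placeholder, triple check the device name first):
    Code:
    sudo mdadm --zero-superblock /dev/sdX1   # forget the old array membership
    sudo wipefs -a /dev/sdX                  # clear leftover filesystem/RAID signatures
    sudo parted /dev/sdX mklabel gpt         # new blank GPT partition table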
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  9. #19
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    70
    Distro
    Ubuntu

    Re: Oh No not another RAID Post

    Found the link to scrubbing the RAID array as maintenance, which is what I was thinking of and why I said it was a sidebar and not on topic:
    https://raid.wiki.kernel.org/index.p...ing_the_drives
    This is what I was thinking of; scrubbing really has nothing to do with this thread other than maintaining the array's health.
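    So for my own notes, kicking a check off and watching it would be roughly this (assuming md0, same as above):
    Code:
    echo check | sudo tee /sys/block/md0/md/sync_action   # start a read/verify pass
    cat /proc/mdstat                                       # shows check progress
    cat /sys/block/md0/md/mismatch_cnt                     # non-zero = inconsistencies found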

    Perusing the media server, I do see that there is a cron job configured via Webmin that appears to be a "scrub" operation; I noticed this after posting my sidebar question. I'm also looking to see if Webmin has a parameter for the disk spin-down that TheFu mentions.

  10. #20
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: Oh No not another RAID Post

    Quote Originally Posted by sgt-mike View Post
    Sidebar question, as I'm getting older (in my 60s) and memory sometimes fails me.
    The command to scrub the drives would be:
    echo check > /sys/block/md0/md/sync_action

    correct?
    Let me check what I did, if I can find it in old backups from a year ago when I used mdadm. Well, I don't have backups from that system anymore, and I took the HDDs out of it, so booting it won't do any good either. Sorry. Guess I deleted the backups after the migration to a new system + 30 days, when it was clear I wouldn't be .... hummmm. Let me check one more place.

    Found an old backup HDD on a shelf. The command I used:
    Code:
    # /usr/share/mdadm/checkarray --all --idle
    checkarray: I: check queued for array md1.
    checkarray: I: selecting idle I/O scheduling class and 15 niceness for resync of md1.
    checkarray: I: check queued for array md2.
    checkarray: I: selecting idle I/O scheduling class and 15 niceness for resync of md2.
    Ran that monthly for each mdadm array device. When it runs, it shows up in the /proc/mdstat file.
    Code:
    $ less /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
    md1 : active raid1 sdd2[3] sdc3[2]
          1943010816 blocks super 1.2 [2/2] [UU]
          [>....................]  check =  0.2% (5037440/1943010816) finish=218.1min speed=148065K/sec
          
    md2 : active raid1 sde2[0] sdb2[1]
          1338985536 blocks [2/2] [UU]
          [>....................]  check =  0.3% (4799232/1338985536) finish=157.3min speed=141340K/sec
          
    unused devices: <none>
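    If you want that scheduled, a cron entry along these lines does it. This is just an example; the Ubuntu/Debian mdadm package usually ships its own periodic check (look for /etc/cron.d/mdadm or an mdcheck systemd timer), so check before doubling it up:
    Code:
    # /etc/cron.d/mdadm-check (example): low-priority check of all md arrays,
    # at 02:30 on the 1st of every month
    30 2 1 * *   root   /usr/share/mdadm/checkarray --all --idle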
    I moved the array from the other system (core i5-750) to a new system (Ryzen 5600G), but also moved the data from the arrays onto a new WD-black 8TB HDD. I've been slowly checking the old data and migrating it to better places on the LAN - much has been moved to the NFS box. Still need to deal with photo galleries and bring up old websites that need those photos and home movies. No hurry.

    The backup HDD I found isn't connected and it was failing after 10+ yrs of service holding backups for 10-20 systems over all that time.
    Code:
    /mnt/1/Backups# df -hT .
    Filesystem     Type  Size  Used Avail Use% Mounted on
    /dev/sdf1      ext4  1.4T  1.3T   34G  98% /mnt/1
    Hope all that isn't confusing. Lots of things moved systems in 2023 here. I retired a bunch of power-wasting systems to get down to just 2 Ryzens handling almost everything, sized so that 1 can handle all RAM/CPU requirements, if not hold all the storage.
