
Thread: Convert mdadm 6 disk raid5 to 5 disk raid5

  1. #1
    Join Date
    Mar 2010
    Beans
    19

    Convert mdadm 6 disk raid5 to 5 disk raid5

    I know you can fail and then remove a drive from a RAID5 array. This leaves the array in a degraded state.

    How can you remove a drive and convert the array to just a regular, clean array?

  2. #2
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Convert mdadm 6 disk raid5 to 5 disk raid5

    Yes, you can do this. You'll need a newer version of mdadm (>3.1, and kernel >2.6.31), and you'll need to make sure that your filesystem isn't taking up more space than you're going to shrink to. Can you give the output of these commands?

    Code:
    mdadm -V
    Code:
    mdadm --detail /dev/<your_array_here_probably_md0>
    Code:
    df -h
    And this...
    Code:
    mount
    You'll definitely want backups of your important data before trying this.

  3. #3
    Join Date
    Mar 2010
    Beans
    19

    Re: Convert mdadm 6 disk raid5 to 5 disk raid5

    Code:
    therms@EHUD:~$ mdadm -V
    mdadm - v3.1.4 - 31st August 2010
    Code:
    therms@EHUD:~$ sudo mdadm --detail /dev/md3
    [sudo] password for therms:
    /dev/md3:
            Version : 1.2
      Creation Time : Sat Dec 11 16:05:27 2010
         Raid Level : raid5
         Array Size : 9767559040 (9315.07 GiB 10001.98 GB)
      Used Dev Size : 1953511808 (1863.01 GiB 2000.40 GB)
       Raid Devices : 6
      Total Devices : 6
        Persistence : Superblock is persistent
    
        Update Time : Thu Jun 30 17:51:52 2011
              State : clean
     Active Devices : 6
    Working Devices : 6
     Failed Devices : 0
      Spare Devices : 0
    
             Layout : left-symmetric
         Chunk Size : 128K
    
               Name : EHUD:3  (local to host EHUD)
               UUID : 593eca07:3723b8eb:4a11dd0b:6b750414
             Events : 40873
    
        Number   Major   Minor   RaidDevice State
           0       8       97        0      active sync   /dev/sdg1
           1       8      161        1      active sync   /dev/sdk1
           3       8      193        2      active sync   /dev/sdm1
           4       8      177        3      active sync   /dev/sdl1
           5       8       17        4      active sync   /dev/sdb1
           6       8       33        5      active sync   /dev/sdc1
    Code:
    therms@EHUD:~$ mount
    /dev/md1 on / type ext4 (rw,errors=remount-ro,commit=0)
    proc on /proc type proc (rw,noexec,nosuid,nodev)
    none on /sys type sysfs (rw,noexec,nosuid,nodev)
    fusectl on /sys/fs/fuse/connections type fusectl (rw)
    none on /sys/kernel/debug type debugfs (rw)
    none on /sys/kernel/security type securityfs (rw)
    none on /dev type devtmpfs (rw,mode=0755)
    none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
    none on /dev/shm type tmpfs (rw,nosuid,nodev)
    none on /var/run type tmpfs (rw,nosuid,mode=0755)
    none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
    /dev/mapper/vg_pool-pool on /media/LVM Pool type ext4 (rw,nosuid,nodev,uhelper=udisks)
    The array is currently part of an LVM. I know I'll need to take care of that first...
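    For anyone following along, a quick way to see how the array fits into the LVM stack before touching anything (a sketch assuming the standard LVM2 tools; the vg_pool/pool names come from the mount output above):
    Code:
    sudo pvs          # /dev/md3 should show up as the physical volume backing vg_pool
    sudo vgs vg_pool  # volume group size and free space
    sudo lvs vg_pool  # the logical volume(s) inside it, e.g. "pool"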

  4. #4
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Convert mdadm 6 disk raid5 to 5 disk raid5

    Yup. Since I'm not sure how much of your filesystem is currently in use (I'd need the output of df -h), it's hard to know how much you can safely reduce. But the first step is to shrink the filesystem, then resize the LVM, then adjust the mdadm array-size, next reduce the number of disks in your array, and finally expand the LVM and then the filesystem back to maximum size. Like I said, this is involved.

    Here's roughly how you start, once you have a backup of your important data, and you've confirmed available space.
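    For example, to confirm how much space is actually in use before picking a shrink target (the mount point and VG name are taken from the output earlier in the thread; this is just a sketch):
    Code:
    df -h "/media/LVM Pool"   # how much of the ext4 filesystem is actually used
    sudo vgdisplay vg_pool    # current VG and PV sizes for comparison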

    Unmount, and run a filesystem check
    Code:
    umount "/media/LVM Pool"
    fsck.ext4 /dev/mapper/vg_pool-pool
    Then, resize the filesystem; make this slightly smaller than the size you'll resize the LVM to...
    Code:
    resize2fs /dev/mapper/vg_pool-pool <size_in_GB_here>G
    Now, reduce the LVM size...
    Code:
    lvreduce -L<size_in_GB_here>G /dev/mapper/vg_pool-pool
    Now, you can dive into mdadm. First, change the array-size. This does not survive a reboot, so we'll do this all in one session.
    Code:
    mdadm /dev/md3 --grow --array-size=<new_size_in_bytes_ie_8000000000>
    Next, you'll need to reduce the number of disks.
    Code:
    mdadm /dev/md3 --grow --raid-devices=5 --backup-file=/root/backup
    Watch the progress
    Code:
    cat /proc/mdstat
    You should end up with a 5 disk RAID5 + 1 hot spare.
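    A quick way to confirm the reshape finished and the extra disk is now flagged as a spare (device name is just an example):
    Code:
    mdadm --detail /dev/md3 | grep -E 'Raid Devices|Spare Devices|spare'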

    Then, remove the hot spare.
    Code:
    mdadm --remove /dev/mdX /dev/sdX1
    Finally, you'll need to grow your LVM back to take up the whole array, and then extend your filesystem likewise. I have not tested these steps in a virtual machine, but this should give you the idea. Again, please back up first, then test this in a VM, and finally try it on your real array.
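    A rough, untested sketch of those grow-back commands, using the names from this thread (check each command's output before moving on):
    Code:
    pvresize /dev/md3                               # grow the PV back out to the full (now smaller) array
    lvextend -l +100%FREE /dev/mapper/vg_pool-pool  # let the LV use all of the free extents again
    resize2fs /dev/mapper/vg_pool-pool              # grow ext4 to fill the LV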

    EDIT: See, I knew I'd forget something: you'll need to reduce the volume group too, prior to shrinking the array (vgreduce).
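    A hedged sketch of that pre-shrink step, assuming /dev/md3 stays in vg_pool and you simply want its physical volume shrunk below the new array size (pvresize with --setphysicalvolumesize is an alternative to vgreduce here; the size is only an example):
    Code:
    pvresize --setphysicalvolumesize 7400G /dev/md3  # shrink the PV so it fits inside the reduced array
    pvs                                              # confirm the new PV size before running the mdadm --grow steps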
    Last edited by rubylaser; August 4th, 2011 at 11:52 PM.

  5. #5
    Join Date
    Mar 2010
    Beans
    19

    Re: Convert mdadm 6 disk raid5 to 5 disk raid5

    Quote Originally Posted by rubylaser
    helpful info
    Thanks. You gave me just the info I needed!

    There's no worry about the filesystem size. The whole reason I need to do this is that I just added the drive to the mdadm array and then found out that I couldn't expand my LVM's ext4 filesystem because it would be larger than 16TB. See this thread for info about that.

  6. #6
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Convert mdadm 6 disk raid5 to 5 disk raid5

    Ah, I posted in that thread too; I should have recognized your username. Hope you can get this working properly. Good luck!

  7. #7
    Join Date
    Mar 2010
    Beans
    19

    Re: Convert mdadm 6 disk raid5 to 5 disk raid5

    I finally got around to working on this.

    I wanted to mention a potential snafu: the --array-size option takes its parameter in what appear to be 1K blocks (kibibytes), not in bytes!

    If you do:
    Code:
    mdadm --detail /dev/md3
    it will give you:

    Code:
    Array Size : 7814047232 (7452.06 GiB 8001.58 GB)
      Used Dev Size : 1953511808 (1863.01 GiB 2000.40 GB)
    This is the available storage space and the size used on each member disk; in a RAID-5 of equal-size disks the parity size equals that per-disk size, so to arrive at the amount to use for --array-size just use
    Code:
    parity size * (number of disks you want in the array - 1)
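    As a worked example with the numbers from this array: each member contributes 1953511808 1K blocks (the Used Dev Size above), and a 5-disk RAID5 keeps 4 of them for data, which matches the Array Size shown:
    Code:
    echo $((1953511808 * (5 - 1)))   # 7814047232 -> the value to pass to --grow --array-size=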

  8. #8
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Convert mdadm 6 disk raid5 to 5 disk raid5

    According to Neil Brown, the developer of mdadm, you don't even need to give mdadm the array size; it calculates it from the number of disks.
    http://www.spinics.net/lists/raid/msg28628.html

  9. #9
    Join Date
    Mar 2010
    Beans
    19

    Re: Convert mdadm 6 disk raid5 to 5 disk raid5

    As I just got around to finishing this after waiting 7 days(!) for the md grow operation to finish, I wanted to mention one more caveat for anyone following rubylaser's excellent instructions:

    This:
    Code:
    mdadm --remove /dev/sdX1
    should be:

    Code:
    mdadm --remove /dev/mdX /dev/sdX1

    Otherwise everything he mentioned was great, and I owe him my thanks!

  10. #10
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Convert mdadm 6 disk raid5 to 5 disk raid5

    Great, I'm glad that worked. I'm going to edit my post so that someone doesn't miss that as part of the steps. 7 days is crazy :O I reshaped mine from a 6 disk RAID5 to a 7 disk RAID6, and it took about 6 hours. Shrinking must be substantially slower.
