
Thread: Problems adding new disk to mdadm raid 5 array

  1. #11
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Problems adding new disk to mdadm raid 5 array

    Well, it's a lot of work, but now that you know this, think about the following before you continue adding more and more data:
    You said the 4TB array is about halfway full, which is approx. 2TB.
    You have a new 2TB disk.

    Create a new partition on the new disk with cfdisk, making sure the Start/End cylinders are correct, and copy as much data onto it as it can take. If any data is left over, copy it onto a smaller HDD; there shouldn't be much left.

    Destroy your array, and with cfdisk create the partitions on your three disks. Make sure those partitions are correct with Start/End cylinders too. Build a new array. Copy the data back.
    Add the new disk to the array.
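
    Roughly, something like this (just a sketch; I'm assuming the array is /dev/md0, the three old disks are sdb/sdc/sdd and the new 2TB disk is sde, so double-check the device names and only run it after the data has been copied off):

    Code:
    # stop and wipe the old array (this destroys everything on it)
    sudo mdadm --stop /dev/md0
    sudo mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd

    # partition each disk with cfdisk, checking the Start/End values
    sudo cfdisk /dev/sdb      # repeat for sdc and sdd

    # build the new 3-disk RAID 5 from the partitions, make a filesystem
    # (ext4 here, use whatever you had), then copy the data back
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
    sudo mkfs.ext4 /dev/md0

    # once the data is back, add the fourth disk, grow the array, then
    # resize the filesystem after the reshape in /proc/mdstat finishes
    sudo mdadm --add /dev/md0 /dev/sde1
    sudo mdadm --grow /dev/md0 --raid-devices=4
    sudo resize2fs /dev/md0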

    But that's a lot of work, man. Anyway, think about it, because the more data you have, the more difficult this operation will become.
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  2. #12
    Join Date
    Aug 2010
    Beans
    37

    Re: Problems adding new disk to mdadm raid 5 array

    Thanks for all of the help. It had to be an fdisk bug; I don't know how it ever let me set it up that way. I am backing up the data now and will rebuild the array, partitioning with cfdisk this time. Better to fix it now before more data is added.

    -hogfan

  3. #13
    Join Date
    Aug 2010
    Beans
    37

    Re: Problems adding new disk to mdadm raid 5 array

    Ok, I backed everything up and got the array all rebuilt, data copied back over, added the 4th disk and grew the array.

    Now I have this problem though:

    I was following a couple of different guides for setting up the array and made an easy-to-make but problematic mistake.

    I partitioned all 3 WD20EARS drives with the partitions starting at sector 2048, since they are Advanced Format drives.

    However, when I built the array initially, I added sdb, sdc, & sdd to the array rather than sdb1, sdc1, and sdd1! When I added the fourth disk, I actually added it to the array correctly as sde1. So now I have a working array with 3 entire disks added and one partition. Normally I don't think it would be bad, but since these are Advanced Format drives and the partitions on them are set to start at sector 2048 because of the 4k sector size, I think the 3 entire disks added to the array will cause RAID performance problems.
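
    For reference, this is how I'm checking what is actually in the array (assuming the array is /dev/md0; the --detail output lists the member devices, and listing a disk in sectors shows whether its partition starts at 2048):

    Code:
    cat /proc/mdstat
    sudo mdadm --detail /dev/md0
    sudo fdisk -lu /dev/sdb    # -u prints the partition table in sectors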

    Sooo... my question is, can I do the following:

    1.) Fail each of the WD20EARS drives in the array one at a time, zero the superblock on the "failed" drive, recreate the partition on it starting at sector 2048, and then add the drive back into the array as a partition (i.e. sdb1 rather than sdb)? Obviously, I would do this one disk at a time, allowing the array to rebuild after each of the 3 disks (roughly the command sketch after these two options).

    Or, will I have to back everything up externally again and rebuild the entire array from scratch? This is kind of a problem now because I do not have that 4th 2TB drive available to back up the data on the RAID. I do have an off-site backup of the data, but it would take much longer to get that data back over to the array after it is rebuilt. Thanks for any input on this.
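
    Something like this is what I have in mind for option 1 (just a sketch, assuming the array is /dev/md0 and starting with sdb; I would wait for /proc/mdstat to show the rebuild finished before touching the next disk, since the array has no redundancy while it resyncs):

    Code:
    # take the whole-disk member out of the array and wipe its superblock
    sudo mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
    sudo mdadm --zero-superblock /dev/sdb

    # recreate the partition starting at sector 2048, then re-add it as a partition
    sudo cfdisk /dev/sdb
    sudo mdadm /dev/md0 --add /dev/sdb1

    # watch the rebuild; only repeat for sdc and sdd after it completes
    cat /proc/mdstat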

    -hogfan
