Thread: mdadm - replacing failed drive

  1. #1
    Join Date
    Aug 2007
    Location
    Tampa, FL
    Beans
    44
    Distro
    Ubuntu 14.04 Trusty Tahr

    mdadm - replacing failed drive

    Hello,

    A few days ago I received a SMART error on one of the drives in my raid, so I removed the drive and had a replacement sent to me (it was still under warranty). Yesterday I attempted to add the new drive to the raid array and goofed up. I believe my issue was with the partitioning of the new drive.

    The raid array is raid5 and is made up of five 2 TB drives; /dev/sdc is the drive that failed. Last night the raid was working, just missing the failed drive. I partitioned the new drive and ran "mdadm --add /dev/md0 /dev/sdc". The process ran, and this morning when I checked the status it showed two failed drives and the raid no longer working.

    Below is the current mdadm --detail:

    Code:
    #sudo mdadm --detail /dev/md0
    /dev/md0:
            Version : 0.90
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
       Raid Devices : 5
      Total Devices : 4
    Preferred Minor : 0
        Persistence : Superblock is persistent
    
        Update Time : Thu Apr 18 07:43:28 2013
              State : clean, FAILED
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 1
      Spare Devices : 0
    
             Layout : left-symmetric
         Chunk Size : 256K
    
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
             Events : 0.63481
    
        Number   Major   Minor   RaidDevice State
           0       8       65        0      active sync   /dev/sde1
           1       8       17        1      active sync   /dev/sdb1
           2       0        0        2      removed
           3       0        0        3      removed
           4       8       81        4      active sync   /dev/sdf1
    
           5       8        1        -      faulty spare   /dev/sda1
    And the current fdisk -l:

    Code:
    #sudo fdisk -l
    
    Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xa2a226e5
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *          63  3907024064  1953512001   fd  Linux raid autodetect
    
    Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
    81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00064ce9
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1            2048  3907029167  1953513560   fd  Linux raid autodetect
    
    Disk /dev/sdd: 250.1 GB, 250059350016 bytes
    255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000e80a1
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdd1            2048      499711      248832   83  Linux
    /dev/sdd2          501758   488396799   243947521    5  Extended
    /dev/sdd5          501760   488396799   243947520   8e  Linux LVM
    
    Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xc744f265
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sde1              63  3907024064  1953512001   fd  Linux raid autodetect
    
    WARNING: GPT (GUID Partition Table) detected on '/dev/sdf'! The util fdisk doesn't support GPT. Use GNU Parted.
    
    
    Disk /dev/sdf: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdf1              64  3907029167  1953514552   fd  Linux raid autodetect
    
    Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x086bff60
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1              63  3907024064  1953512001   fd  Linux raid autodetect
    
    Disk /dev/md0: 8001.6 GB, 8001584889856 bytes
    2 heads, 4 sectors/track, 1953511936 cylinders, total 15628095488 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 262144 bytes / 1048576 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md0 doesn't contain a valid partition table
    
    Disk /dev/mapper/NAS-root: 245.6 GB, 245647802368 bytes
    255 heads, 63 sectors/track, 29864 cylinders, total 479780864 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/mapper/NAS-root doesn't contain a valid partition table
    
    Disk /dev/mapper/NAS-swap_1: 4148 MB, 4148166656 bytes
    255 heads, 63 sectors/track, 504 cylinders, total 8101888 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/mapper/NAS-swap_1 doesn't contain a valid partition table
    As you can see, the new drive (/dev/sdc) was not properly added to the raid and now /dev/sda is showing as "faulty spare". Can I simply run the following to rebuild the array minus the missing disk and then attempt to add /dev/sdc again?

    Code:
    #mdadm --stop /dev/md0
    #mdadm --assemble /dev/md0 /dev/sd[a,b,c,e,f]1

    Thank you in advance,
    -Kendall
    Last edited by kendallp; April 26th, 2013 at 12:48 PM.

  2. #2
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: mdadm - replacing failed drive

    Yes, that would be the easiest next step. I would also verify each disk's SMART info while you are at it, and check dmesg for why the other disk got kicked out of the array.
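    For example, something along these lines (a minimal sketch; smartctl comes from the smartmontools package, which may need installing, and the device names are just placeholders for your members):
    Code:
    # overall health verdict, then full SMART attributes, per member disk
    sudo smartctl -H /dev/sda
    sudo smartctl -a /dev/sda
    # kernel log entries that may show why md kicked a member out
    dmesg | grep -iE 'md0|ata|sd[abef]'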

  3. #3
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: mdadm - replacing failed drive

    Also note this: to be precise, you need to add /dev/sdc1, not /dev/sdc. I think you know this and wrote /dev/sdc just to say which disk you are adding, but it can create confusion, since mdadm will accept both partitions and whole disks as array members. Using partitions, as you are doing, is usually the better choice.
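    For the record, adding the partition rather than the whole disk would look something like this (just a sketch):
    Code:
    sudo mdadm --manage /dev/md0 --add /dev/sdc1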
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  4. #4
    Join Date
    Aug 2007
    Location
    Tampa, FL
    Beans
    44
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: mdadm - replacing failed drive

    Thank you Rubylaser and Darkod. Unfortunately, things did not go as I had hoped.

    I stopped the array and attempted to reassemble it:

    Code:
    #sudo mdadm --assemble /dev/md0 /dev/sd[a,b,c,e,f]1
    mdadm: device 5 in /dev/md0 has wrong state in superblock, but /dev/sdc1 seems ok
    mdadm: /dev/md0 assembled from 3 drives and 1 spare - not enough to start the array.
    I believe that when adding the new drive I made exactly the mistake you mentioned, Darkod, and added /dev/sdc instead of /dev/sdc1. Hopefully I have not compounded the issue. I attempted to reassemble the raid with the four working drives:

    Code:
    #sudo mdadm --assemble --force /dev/md0 /dev/sd[a,b,e,f]1
    mdadm: forcing event count in /dev/sda1(2) from 63438 upto 63491
    mdadm: clearing FAULTY flag for device 0 in /dev/md0 for /dev/sda1
    mdadm: Marking array /dev/md0 as 'clean'
    mdadm: /dev/md0 assembled from 4 drives - not enough to start the array.
    So I then tried to use the --scan option and confirmed I made the mistake Darkod mentioned:

    Code:
    #sudo mdadm --assemble --scan
    mdadm: WARNING /dev/sdc1 and /dev/sdc appear to have very similar superblocks.
          If they are really different, please --zero the superblock on one
          If they are the same or overlap, please remove one from the
          DEVICE list in mdadm.conf.
    kendall@NAS:~$ sudo mdadm --assemble /dev/md0 /dev/sd[a,b,c,e,f]1
    mdadm: ignoring /dev/sdb1 as it reports /dev/sda1 as failed
    mdadm: ignoring /dev/sdf1 as it reports /dev/sda1 as failed
    mdadm: /dev/md0 assembled from 2 drives and 1 spare - not enough to start the array.
    I'm not sure what my best option is from here. Below is the output from --examine:

    Code:
    #sudo mdadm --misc --examine /dev/sd[abcef]1
    /dev/sda1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 5
    Preferred Minor : 0
    
        Update Time : Thu Apr 18 00:13:41 2013
              State : clean
     Active Devices : 4
    Working Devices : 5
     Failed Devices : 1
      Spare Devices : 1
           Checksum : af870261 - correct
             Events : 63491
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     2       8        1        2      active sync   /dev/sda1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       8        1        2      active sync   /dev/sda1
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
       5     5       8       32        5      spare   /dev/sdc
    /dev/sdb1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 4
    Preferred Minor : 0
    
        Update Time : Thu Apr 18 20:43:29 2013
              State : clean
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 2
      Spare Devices : 0
           Checksum : af8822b8 - correct
             Events : 63491
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     1       8       17        1      active sync   /dev/sdb1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       0        0        2      faulty removed
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
    /dev/sdc1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 5
    Preferred Minor : 0
    
        Update Time : Thu Apr 18 07:31:35 2013
              State : clean
     Active Devices : 3
    Working Devices : 4
     Failed Devices : 2
      Spare Devices : 1
           Checksum : af87692e - correct
             Events : 63463
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     5       8       32        5      spare   /dev/sdc
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       0        0        2      faulty removed
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
       5     5       8       32        5      spare   /dev/sdc
    /dev/sde1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 4
    Preferred Minor : 0
    
        Update Time : Thu Apr 18 20:43:29 2013
              State : clean
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 2
      Spare Devices : 0
           Checksum : af8822e2 - correct
             Events : 63491
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     0       8       65        0      active sync   /dev/sde1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       0        0        2      active sync
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
    /dev/sdf1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 4
    Preferred Minor : 0
    
        Update Time : Thu Apr 18 20:43:29 2013
              State : clean
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 2
      Spare Devices : 0
           Checksum : af8822fe - correct
             Events : 63491
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     4       8       81        4      active sync   /dev/sdf1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       0        0        2      faulty removed
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
    And finally (and unfortunately):

    Code:
    #sudo mdadm --misc --examine /dev/sdc
    /dev/sdc:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 5
    Preferred Minor : 0
    
        Update Time : Thu Apr 18 07:31:35 2013
              State : clean
     Active Devices : 3
    Working Devices : 4
     Failed Devices : 2
      Spare Devices : 1
           Checksum : af87692e - correct
             Events : 63463
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     5       8       32        5      spare   /dev/sdc
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       0        0        2      faulty removed
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
       5     5       8       32        5      spare   /dev/sdc

    Thank you again in advance,
    -Kendall

  5. #5
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: mdadm - replacing failed drive

    The event counters on four of the partitions are identical, and the one on sdc1 is very close. You should be able to force assemble this.

    As for adding sdc by mistake, simply zero the superblock.
    Code:
    sudo mdadm --zero-superblock /dev/sdc
    Your --examine details don't show a Device Role, and you will need the exact order to force assemble. Don't assume the order is abcef, because it might not be. What does this command output:
    Code:
    sudo mdadm --examine /dev/sd[abcef]1
    PS. I would also comment out all ARRAY definitions in mdadm.conf, since right now mdadm can get confused about what belongs where. It seems to think you have 6 members, which is why it says it can't start the array with 4. A 5-member raid5 should be able to start with 4 members present.
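    Something like this would comment them out (a sketch; it disables every ARRAY line, so back up the file first):
    Code:
    sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
    sudo sed -i 's/^ARRAY/#ARRAY/' /etc/mdadm/mdadm.conf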
    Last edited by darkod; April 19th, 2013 at 08:46 AM.
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  6. #6
    Join Date
    Aug 2007
    Location
    Tampa, FL
    Beans
    44
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: mdadm - replacing failed drive

    Thank you Darko,

    Below is the output you requested:

    Code:
    #sudo mdadm --examine /dev/sd[abcef]1
    /dev/sda1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 4
    Preferred Minor : 0
    
        Update Time : Fri Apr 19 07:50:18 2013
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 1
      Spare Devices : 0
           Checksum : af88befe - correct
             Events : 63503
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     2       8        1        2      active sync   /dev/sda1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       8        1        2      active sync   /dev/sda1
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
    /dev/sdb1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 4
    Preferred Minor : 0
    
        Update Time : Fri Apr 19 07:50:18 2013
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 1
      Spare Devices : 0
           Checksum : af88bf0c - correct
             Events : 63503
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     1       8       17        1      active sync   /dev/sdb1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       8        1        2      active sync   /dev/sda1
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
    mdadm: No md superblock detected on /dev/sdc1.
    /dev/sde1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 4
    Preferred Minor : 0
    
        Update Time : Fri Apr 19 07:50:18 2013
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 1
      Spare Devices : 0
           Checksum : af88bf3a - correct
             Events : 63503
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     0       8       65        0      active sync   /dev/sde1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       8        1        2      active sync   /dev/sda1
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
    /dev/sdf1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 4
    Preferred Minor : 0
    
        Update Time : Fri Apr 19 07:50:18 2013
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 1
      Spare Devices : 0
           Checksum : af88bf52 - correct
             Events : 63503
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     4       8       81        4      active sync   /dev/sdf1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       8        1        2      active sync   /dev/sda1
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1

    I tried to assemble (not using force) after zeroing out the superblock:

    Code:
    #sudo mdadm --assemble /dev/md0 /dev/sd[a,b,c,e,f]1
    mdadm: no RAID superblock on /dev/sdc1
    mdadm: /dev/sdc1 has no superblock - assembly aborted

    So I then tried assembling with --scan:

    Code:
    #sudo mdadm --assemble --scan
    mdadm: /dev/md0 has been started with 4 drives (out of 5).

    Success... well, maybe not:

    Code:
    #sudo mdadm --detail /dev/md0
    /dev/md0:
            Version : 0.90
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
       Raid Devices : 5
      Total Devices : 4
    Preferred Minor : 0
        Persistence : Superblock is persistent
    
        Update Time : Thu Apr 18 20:43:29 2013
              State : clean, degraded
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0
    
             Layout : left-symmetric
         Chunk Size : 256K
    
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
             Events : 0.63491
    
        Number   Major   Minor   RaidDevice State
           0       8       65        0      active sync   /dev/sde1
           1       8       17        1      active sync   /dev/sdb1
           2       8        1        2      active sync   /dev/sda1
           3       0        0        3      removed
           4       8       81        4      active sync   /dev/sdf1

    The array is now restarted, but I cannot access any of the data. Does that mean the drives are assembled out of order, as you mentioned?


    Thank you again,
    -Kendall

  7. #7
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: mdadm - replacing failed drive

    It might be. Look at the Raid Device values; I think they should show the order. You are using metadata version 0.90, and I guess that's why there is no separate Device Role entry like I expected.

    But according to those Raid Device values, the order should be sde, sdb, sda, missing, sdf.

    So, try assembling it with something like (and stop md0 if it's running right now):
    Code:
    sudo mdadm --stop /dev/md0
    sudo mdadm --assemble /dev/md0 /dev/sde1 /dev/sdb1 /dev/sda1 missing /dev/sdf1
    That should assemble it leaving one slot empty (missing) for the missing disk. If this works, you will add /dev/sdc1 later to this missing slot.

    If that assembles, don't forget to mount it first and see if the data is there. Then add sdc1.
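    The check can be as simple as this (assuming /mnt as a temporary mount point; use your usual one if it's in fstab):
    Code:
    sudo mount /dev/md0 /mnt
    ls /mnt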
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  8. #8
    Join Date
    Aug 2007
    Location
    Tampa, FL
    Beans
    44
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: mdadm - replacing failed drive

    Quote Originally Posted by darkod View Post
    If that assembles, don't forget to mount it first, and see if the data is there. Then add sdc1.
    This is the reason you should not work on computers prior to having coffee. I remembered while taking a shower that I had not mounted the array.

    So I have reassembled using --scan and mounted the raid. The data all looks to be there. I am now going to add /dev/sdc1 and, if everything works, will mark this solved.
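    For anyone following along, I am running something along these lines to add the partition and watch the rebuild:
    Code:
    sudo mdadm --manage /dev/md0 --add /dev/sdc1
    watch cat /proc/mdstat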


    Thank you again Darko for all your help.


    -Kendall

  9. #9
    Join Date
    Aug 2007
    Location
    Tampa, FL
    Beans
    44
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: mdadm - replacing failed drive

    No such luck. I added /dev/sdc1, which took around 26 hours to finish. Results from this morning:

    Code:
    #cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid5 sdc1[5](S) sde1[0] sdf1[4] sda1[6](F) sdb1[1]
          7814047744 blocks level 5, 256k chunk, algorithm 2 [5/3] [UU__U]
    Code:
    #sudo mdadm --detail /dev/md0
    /dev/md0:
            Version : 0.90
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
       Raid Devices : 5
      Total Devices : 5
    Preferred Minor : 0
        Persistence : Superblock is persistent
    
        Update Time : Sat Apr 20 06:32:53 2013
              State : clean, FAILED
     Active Devices : 3
    Working Devices : 4
     Failed Devices : 1
      Spare Devices : 1
    
             Layout : left-symmetric
         Chunk Size : 256K
    
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
             Events : 0.63539
    
        Number   Major   Minor   RaidDevice State
           0       8       65        0      active sync   /dev/sde1
           1       8       17        1      active sync   /dev/sdb1
           2       0        0        2      removed
           3       0        0        3      removed
           4       8       81        4      active sync   /dev/sdf1
    
           5       8       33        -      spare   /dev/sdc1
           6       8        1        -      faulty spare   /dev/sda1
    Code:
    #sudo mdadm --examine /dev/sd[abcef]1
    /dev/sda1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 5
    Preferred Minor : 0
    
        Update Time : Fri Apr 19 23:41:52 2013
              State : clean
     Active Devices : 4
    Working Devices : 5
     Failed Devices : 1
      Spare Devices : 1
           Checksum : af899e70 - correct
             Events : 63530
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     2       8        1        2      active sync   /dev/sda1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       8        1        2      active sync   /dev/sda1
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
       5     5       8       33        5      spare   /dev/sdc1
    /dev/sdb1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 5
    Preferred Minor : 0
    
        Update Time : Sat Apr 20 10:02:39 2013
              State : clean
     Active Devices : 3
    Working Devices : 4
     Failed Devices : 2
      Spare Devices : 1
           Checksum : af8a3022 - correct
             Events : 63541
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     1       8       17        1      active sync   /dev/sdb1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       0        0        2      faulty removed
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
       5     5       8       33        5      spare   /dev/sdc1
    /dev/sdc1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 5
    Preferred Minor : 0
    
        Update Time : Sat Apr 20 10:02:39 2013
              State : clean
     Active Devices : 3
    Working Devices : 4
     Failed Devices : 2
      Spare Devices : 1
           Checksum : af8a3034 - correct
             Events : 63541
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     5       8       33        5      spare   /dev/sdc1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       0        0        2      faulty removed
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
       5     5       8       33        5      spare   /dev/sdc1
    /dev/sde1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 5
    Preferred Minor : 0
    
        Update Time : Sat Apr 20 10:02:39 2013
              State : clean
     Active Devices : 3
    Working Devices : 4
     Failed Devices : 2
      Spare Devices : 1
           Checksum : af8a3050 - correct
             Events : 63541
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     0       8       65        0      active sync   /dev/sde1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       0        0        2      faulty removed
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
       5     5       8       33        5      spare   /dev/sdc1
    /dev/sdf1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 99f44a81:c204bc31:cced5de7:ca715931 (local to host NAS)
      Creation Time : Fri Dec 31 17:20:12 2010
         Raid Level : raid5
      Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Array Size : 7814047744 (7452.06 GiB 8001.58 GB)
       Raid Devices : 5
      Total Devices : 5
    Preferred Minor : 0
    
        Update Time : Sat Apr 20 10:02:39 2013
              State : clean
     Active Devices : 3
    Working Devices : 4
     Failed Devices : 2
      Spare Devices : 1
           Checksum : af8a3068 - correct
             Events : 63541
    
             Layout : left-symmetric
         Chunk Size : 256K
    
          Number   Major   Minor   RaidDevice State
    this     4       8       81        4      active sync   /dev/sdf1
    
       0     0       8       65        0      active sync   /dev/sde1
       1     1       8       17        1      active sync   /dev/sdb1
       2     2       0        0        2      faulty removed
       3     3       0        0        3      faulty removed
       4     4       8       81        4      active sync   /dev/sdf1
       5     5       8       33        5      spare   /dev/sdc1

    Any suggestions from here?


    Thank you,
    -Kendall

  10. #10
    Join Date
    Aug 2007
    Location
    Tampa, FL
    Beans
    44
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: mdadm - replacing failed drive

    And in case it helps:

    Code:
    #cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    
    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers
    
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    
    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root
    
    # definitions of existing MD arrays
    ARRAY /dev/md0 UUID=99f44a81:c204bc31:cced5de7:ca715931
    
    # This file was auto-generated on Thu, 20 Dec 2012 20:33:25 -0500
    # by mkconf $Id$

    Code:
    #sudo fdisk -l
    
    Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xa2a226e5
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *          63  3907024064  1953512001   fd  Linux raid autodetect
    
    Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
    81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00064ce9
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1            2048  3907029167  1953513560   fd  Linux raid autodetect
    
    Disk /dev/sdd: 250.1 GB, 250059350016 bytes
    255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000e80a1
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdd1            2048      499711      248832   83  Linux
    /dev/sdd2          501758   488396799   243947521    5  Extended
    /dev/sdd5          501760   488396799   243947520   8e  Linux LVM
    
    Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xc744f265
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sde1              63  3907024064  1953512001   fd  Linux raid autodetect
    
    WARNING: GPT (GUID Partition Table) detected on '/dev/sdf'! The util fdisk doesn't support GPT. Use GNU Parted.
    
    
    Disk /dev/sdf: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdf1              64  3907029167  1953514552   fd  Linux raid autodetect
    
    Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x086bff60
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1              63  3907024064  1953512001   fd  Linux raid autodetect
    
    Disk /dev/mapper/NAS-root: 245.6 GB, 245647802368 bytes
    255 heads, 63 sectors/track, 29864 cylinders, total 479780864 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/mapper/NAS-root doesn't contain a valid partition table
    
    Disk /dev/mapper/NAS-swap_1: 4148 MB, 4148166656 bytes
    255 heads, 63 sectors/track, 504 cylinders, total 8101888 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/mapper/NAS-swap_1 doesn't contain a valid partition table
    
    Disk /dev/md0: 8001.6 GB, 8001584889856 bytes
    2 heads, 4 sectors/track, 1953511936 cylinders, total 15628095488 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 262144 bytes / 1048576 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md0 doesn't contain a valid partition table

    Thank you,
    -Kendall
