
Thread: mdadm: doesn't contain a valid partition table after reboot

  1. #1
    Join Date
    Jan 2006
    Location
    Waterloo, ON, Canada
    Beans
    212
    Distro
    Ubuntu 12.04 Precise Pangolin

    Question mdadm: doesn't contain a valid partition table after reboot

    This is really weird.

    As per http://ubuntuforums.org/showthread.php?t=1719427, I had a working array. I failed out one of the drives for fun.

    Code:
    fermulator@fermmy-server:/mnt$ cat /proc/mdstat 
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md2000 : active raid6 sdk[3] sdl[0] sdj[1]
          3907026944 blocks level 6, 64k chunk, algorithm 2 [4/3] [UU_U]
    Yet ... fdisk claims these disks no longer have /dev/sdX1 partitions ...

    Code:
    fermulator@fermmy-server:/mnt$ sudo fdisk -ulc /dev/sd[klj]
    
    Disk /dev/sdj: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/sdj doesn't contain a valid partition table
    
    Disk /dev/sdk: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/sdk doesn't contain a valid partition table
    
    Disk /dev/sdl: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/sdl doesn't contain a valid partition table
    Even the disk that I had failed out previously claims an invalid partition table:

    Code:
    fermulator@fermmy-server:/mnt$ sudo fdisk -ulc /dev/sdi
    
    Disk /dev/sdi: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/sdi doesn't contain a valid partition table
    And yet ... mdadm still recognizes the superblock on these devices:

    Code:
    fermulator@fermmy-server:/mnt$ sudo mdadm -E /dev/sd[ijkl]
    /dev/sdi:
              Magic : a92b4efc
            Version : 00.90.00
               UUID : 74fae2cf:63dcb956:01f5a1db:50a22640 (local to host fermmy-server)
      Creation Time : Sat Apr  2 12:04:52 2011
         Raid Level : raid6
      Used Dev Size : 1953513472 (1863.02 GiB 2000.40 GB)
         Array Size : 3907026944 (3726.03 GiB 4000.80 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 2000
    
        Update Time : Sat Apr  2 15:22:10 2011
              State : active
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0
           Checksum : e43ac053 - correct
             Events : 7
    
         Chunk Size : 64K
    
          Number   Major   Minor   RaidDevice State
    this     2       8      208        2      active sync
    
       0     0       8      176        0      active sync   /dev/sdl
       1     1       8      192        1      active sync
       2     2       8      208        2      active sync
       3     3       8      224        3      active sync
    /dev/sdj:
              Magic : a92b4efc
            Version : 00.90.00
               UUID : 74fae2cf:63dcb956:01f5a1db:50a22640 (local to host fermmy-server)
      Creation Time : Sat Apr  2 12:04:52 2011
         Raid Level : raid6
      Used Dev Size : 1953513472 (1863.02 GiB 2000.40 GB)
         Array Size : 3907026944 (3726.03 GiB 4000.80 GB)
       Raid Devices : 4
      Total Devices : 3
    Preferred Minor : 2000
    
        Update Time : Thu Apr 14 20:54:57 2011
              State : clean
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 1
      Spare Devices : 0
           Checksum : e44b8223 - correct
             Events : 20912
    
         Chunk Size : 64K
    
          Number   Major   Minor   RaidDevice State
    this     1       8      144        1      active sync   /dev/sdj
    
       0     0       8      176        0      active sync   /dev/sdl
       1     1       8      144        1      active sync   /dev/sdj
       2     2       0        0        2      faulty removed
       3     3       8      160        3      active sync   /dev/sdk
    /dev/sdk:
              Magic : a92b4efc
            Version : 00.90.00
               UUID : 74fae2cf:63dcb956:01f5a1db:50a22640 (local to host fermmy-server)
      Creation Time : Sat Apr  2 12:04:52 2011
         Raid Level : raid6
      Used Dev Size : 1953513472 (1863.02 GiB 2000.40 GB)
         Array Size : 3907026944 (3726.03 GiB 4000.80 GB)
       Raid Devices : 4
      Total Devices : 3
    Preferred Minor : 2000
    
        Update Time : Thu Apr 14 20:54:57 2011
              State : clean
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 1
      Spare Devices : 0
           Checksum : e44b8237 - correct
             Events : 20912
    
         Chunk Size : 64K
    
          Number   Major   Minor   RaidDevice State
    this     3       8      160        3      active sync   /dev/sdk
    
       0     0       8      176        0      active sync   /dev/sdl
       1     1       8      144        1      active sync   /dev/sdj
       2     2       0        0        2      faulty removed
       3     3       8      160        3      active sync   /dev/sdk
    /dev/sdl:
              Magic : a92b4efc
            Version : 00.90.00
               UUID : 74fae2cf:63dcb956:01f5a1db:50a22640 (local to host fermmy-server)
      Creation Time : Sat Apr  2 12:04:52 2011
         Raid Level : raid6
      Used Dev Size : 1953513472 (1863.02 GiB 2000.40 GB)
         Array Size : 3907026944 (3726.03 GiB 4000.80 GB)
       Raid Devices : 4
      Total Devices : 3
    Preferred Minor : 2000
    
        Update Time : Thu Apr 14 20:54:57 2011
              State : clean
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 1
      Spare Devices : 0
           Checksum : e44b8241 - correct
             Events : 20912
    
         Chunk Size : 64K
    
          Number   Major   Minor   RaidDevice State
    this     0       8      176        0      active sync   /dev/sdl
    
       0     0       8      176        0      active sync   /dev/sdl
       1     1       8      144        1      active sync   /dev/sdj
       2     2       0        0        2      faulty removed
       3     3       8      160        3      active sync   /dev/sdk
    Obviously, /dev/sdi used to be /dev/sdl. In order to re-add /dev/sdi to the array, I'll have to wipe its superblock and re-add it manually.
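
    Roughly what I have in mind for that re-add (just a sketch; it assumes I re-partition the disk first, so /dev/sdi1 below is hypothetical):

    Code:
    # wipe the stale superblock on the removed disk
    sudo mdadm --zero-superblock /dev/sdi
    # after re-creating the partition with fdisk/parted, add the partition back to the array
    sudo mdadm --add /dev/md2000 /dev/sdi1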

    QUESTION: Why do the disks claim invalid partition tables now? Before proceeding, I want to make sure there's nothing wrong ...

    And ... this is scary. Is mdadm using the DEVICES rather than the partitions as members of the array since the reboot???

    Code:
    fermulator@fermmy-server:/mnt$ sudo mdadm --query --detail /dev/md2000
    /dev/md2000:
            Version : 00.90
      Creation Time : Sat Apr  2 12:04:52 2011
         Raid Level : raid6
         Array Size : 3907026944 (3726.03 GiB 4000.80 GB)
      Used Dev Size : 1953513472 (1863.02 GiB 2000.40 GB)
       Raid Devices : 4
      Total Devices : 3
    Preferred Minor : 2000
        Persistence : Superblock is persistent
    
        Update Time : Thu Apr 14 20:54:57 2011
              State : clean, degraded
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 0
    
         Chunk Size : 64K
    
               UUID : 74fae2cf:63dcb956:01f5a1db:50a22640 (local to host fermmy-server)
             Events : 0.20912
    
        Number   Major   Minor   RaidDevice State
           0       8      176        0      active sync   /dev/sdl
           1       8      144        1      active sync   /dev/sdj
           2       0        0        2      removed
           3       8      160        3      active sync   /dev/sdk
    ~Fermmy

  2. #2
    Join Date
    Jun 2008
    Location
    South of the Border.
    Beans
    160
    Distro
    Ubuntu 12.04 Precise Pangolin

    Re: mdadm: doesn't contain a valid partition table after reboot

    This is exactly what was happening to me. Eventually, I lost all but one drive.

    Edit: I never switched to whole drives; I always used the partitions (sdb1, sdc1, etc.), but I lost the drives as you did.

    Now, md0 does not have a valid partition table.

    It will be interesting to see what suggestions you receive.

    Not to digress, but how did you get the scroll bar to appear in your 4th section of code?

    Thanks,
    MarkN
    Last edited by mn_voyageur; April 15th, 2011 at 05:06 AM.
    Sony Vaio, 64-bit, running Precise Pangolin.
    File Server - That has overrun its budget: MSI 870A-G54 / Radeon HD 5750 / 160 Gig System HDD / 4 x 1TB HDD - RAID 6

  3. #3
    Join Date
    Feb 2005
    Location
    Oregon
    Beans
    496
    Distro
    Ubuntu 10.04 Lucid Lynx

    Re: mdadm: doesn't contain a valid partition table after reboot

    fermulator:
    You made a raid out of full disks instead of partitions this time, indicated by the lack of partition numbers in the /proc/mdstat output, so the partition tables would have been overwritten. Any attempt to recreate them would probably corrupt your raid.

    mn_voyageur:
    /dev/md0 normally would not have a partition table. Until now I didn't think md devices could be partitioned. It is possible, but you have to specify "--auto=mdp" when you create the array: https://raid.wiki.kernel.org/index.p..._/_LVM_on_RAID

    I often go with the LVM on md raid option to make it easier if I need to grow, shrink, or move a volume. Though on at least Lucid I always have to edit a udev rule to get it to detect LVM volumes on raid during bootup.
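
    For reference, a rough sketch of both approaches (device names here are examples only, not taken from this thread):

    Code:
    # partitionable md array (mdp) -- partitions on it show up as /dev/md_d0p1, /dev/md_d0p2, ...
    sudo mdadm --create /dev/md_d0 --auto=mdp --level=6 --raid-devices=4 /dev/sd[bcde]1
    
    # or: LVM on top of an ordinary md array instead of partitioning it
    sudo pvcreate /dev/md0
    sudo vgcreate vg_raid /dev/md0
    sudo lvcreate -L 100G -n lv_data vg_raid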

  4. #4
    Join Date
    Jun 2008
    Location
    South of the Border.
    Beans
    160
    Distro
    Ubuntu 12.04 Precise Pangolin

    Re: mdadm: doesn't contain a valid partition table after reboot

    dtfinch,

    mdadm would work without a partition table?

    Edit: If mdadm requires a partition table, then it must have had one. Mdadm did work. When I started losing drives/partitions, that's when I could no longer mount md0.

    MarkN
    Last edited by mn_voyageur; April 15th, 2011 at 11:25 AM.
    Sony Vaio, 64-bit, running Precise Pangolin.
    File Server - That has overrun its budget: MSI 870A-G54 / Radeon HD 5750 / 160 Gig System HDD / 4 x 1TB HDD - RAID 6

  5. #5
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,123
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: mdadm: doesn't contain a valid partition table after reboot

    It's perfectly fine for /dev/md0 not to have a valid partition table when viewed with fdisk. Here's my perfectly working mdadm array as an example.

    Code:
    Disk /dev/md0: 6001.2 GB, 6001212260352 bytes
    2 heads, 4 sectors/track, 1465139712 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 524288 bytes / 3145728 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md0 doesn't contain a valid partition table
    Typically, after you've created the array, you would add a filesystem to the array, something like this (leaving out the stride and stripe_width tuning options for this example).
    Code:
    mkfs.ext4 /dev/md0
    You'll notice that no partition was created on the array first. That's why fdisk reports the invalid partition table error... there is no partition.
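    If you did want to tune those options, the values follow from the chunk size and the number of data-bearing disks; a sketch for a 4-disk RAID6 with 64k chunks and 4k blocks (stride = 64k / 4k = 16; two of the four disks hold parity, so stripe_width = 16 * 2 = 32):
    Code:
    # stride = chunk size / filesystem block size; stripe_width = stride * number of data disks
    mkfs.ext4 -b 4096 -E stride=16,stripe_width=32 /dev/md0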
    Last edited by rubylaser; April 15th, 2011 at 12:48 PM.

  6. #6
    Join Date
    Jan 2006
    Location
    Waterloo, ON, Canada
    Beans
    212
    Distro
    Ubuntu 12.04 Precise Pangolin

    Exclamation Re: mdadm: doesn't contain a valid partition table after reboot

    Quote Originally Posted by dtfinch View Post
    fermulator:
    You made a raid out of full disks instead of partitions this time, indicated by the lack of partition numbers in the /proc/mdstat output, so the partition tables would have been overwritten. Any attempt to recreate them would probably corrupt your raid.
    Sorry, no. I did not create the RAID against devices rather than partitions. Look at http://ubuntuforums.org/showthread.php?t=1719427. In every instance there, it was clearly made with working partitions...

    Code:
    fermulator@fermmy-server:~$ cat /proc/mdstat 
    Personalities : [linear] [multipath] [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
    md2000 : active raid6 sdo1[3] sdn1[2] sdm1[1] sdl1[0]
          3907026944 blocks level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
    Yet now, after a failed disk and a reboot, it seems to have auto-assembled itself on top of the devices rather than the partitions.

    Code:
    fermulator@fermmy-server:~$ cat /proc/mdstat 
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md2000 : active raid6 sdk[3] sdl[0] sdj[1]
          3907026944 blocks level 6, 64k chunk, algorithm 2 [4/3] [UU_U]
    ... and ... subsequently corrupted the partition table? Can I recover this situation somehow?
    ~Fermmy

  7. #7
    Join Date
    Feb 2005
    Location
    Oregon
    Beans
    496
    Distro
    Ubuntu 10.04 Lucid Lynx

    Re: mdadm: doesn't contain a valid partition table after reboot

    The UUID is different from your previous thread, and the creation time is a day later.

    There is the possibility that the partitions happened to stretch to the very end of the disk, rather than leaving a few blocks free at the end (related bug here). So the superblocks happen to be at the very end of the disk, causing mdadm to think they belong to the full disks rather than the partitions. That would also explain why they're reported as the same size as the partitions, rather than slightly bigger.

    I don't know why mdadm would zero the beginnings of the disks though, rather than leaving them untouched as they're outside the used area.
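
    One way to check whether that overlap is possible (a sketch only; /dev/sdj is just an example): with 0.90 metadata the superblock sits in the last 64-128 KiB of whatever device it was written to, so if a partition's "End" sector is within roughly 128 sectors of the disk's total sector count, the same superblock can be read as belonging to either the partition or the whole disk.

    Code:
    # total 512-byte sectors on the whole disk
    sudo blockdev --getsz /dev/sdj
    # partition table in sectors -- compare the End column against the number above
    sudo fdisk -lu /dev/sdj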

  8. #8
    Join Date
    Jan 2006
    Location
    Waterloo, ON, Canada
    Beans
    212
    Distro
    Ubuntu 12.04 Precise Pangolin

    Re: mdadm: doesn't contain a valid partition table after reboot

    So basically I'm fubar'd and have to re-create the array?
    ~Fermmy

  9. #9
    Join Date
    Jan 2006
    Location
    Waterloo, ON, Canada
    Beans
    212
    Distro
    Ubuntu 12.04 Precise Pangolin

    Exclamation Re: mdadm: doesn't contain a valid partition table after reboot

    This is ridiculous. I have performed a full re-creation of my array, and after a reboot, it has eaten the disks again.

    Code:
    fermulator@fermmy-server:/media/arrays/md2000$ cat /proc/mdstat 
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md2000 : active raid6 sdg1[4] sdk[2] sdj[1] sdl[3] sdi[0]
          5860540416 blocks level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
    Code:
    fermulator@fermmy-server:/mnt$ sudo fdisk -l /dev/sd[ghijkl] | grep raid
    [sudo] password for fermulator: 
    Disk /dev/sdi doesn't contain a valid partition table
    Disk /dev/sdj doesn't contain a valid partition table
    Disk /dev/sdk doesn't contain a valid partition table
    Disk /dev/sdl doesn't contain a valid partition table
    /dev/sdg1               1      765634  1953513560   fd  Linux raid autodetect
    /dev/sdh1               1      765634  1953513560   fd  Linux raid autodetect
    CONTEXT:
    * /dev/sd[ijkl] were the original creation.
    * /dev/sdg1 was newly added to the array (grow operation)
    * /dev/sdh1 is newly prepped for another addition (grow operation), but I stopped because I just noticed that the partitions are fubar'd again on the original four.

    I'm terrified for the safety of my data. Does anyone have any good ideas for troubleshooting?
    ~Fermmy

  10. #10
    Join Date
    Feb 2005
    Location
    Oregon
    Beans
    496
    Distro
    Ubuntu 10.04 Lucid Lynx

    Re: mdadm: doesn't contain a valid partition table after reboot

    It looks like you made the same mistake again, using partitions that extend to the last sector of the disk (size according to fdisk is the same as before). Mdadm can't tell whether the superblock belongs to the partition or the full disk.

    Any one of the following should prevent this in the future:
    1) Using slightly smaller partitions. Verify that the "end" sector according to fdisk is less than the total sectors by at least 256.
    2) Using a newer mdadm metadata version when creating the raid. The default 0.90 places the superblock at the end of the device. 1.1 and 1.2 place theirs at the beginning, which should make this issue impossible, but in doing so confuse grub (not an issue unless it's your boot partition), which I assume is why 0.90 was left as the default.
    3) You could also edit mdadm.conf to change "DEVICE partitions" to "DEVICE /dev/sd??" (as suggested here); a rough example of options 2 and 3 is sketched below.
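
    For example (a sketch only; device names are placeholders, and re-creating with different metadata means rebuilding the array from scratch, not converting it in place):

    Code:
    # option 2: create the array with newer metadata so the superblock sits at the start of each member
    sudo mdadm --create /dev/md2000 --metadata=1.2 --level=6 --raid-devices=4 /dev/sd[ijkl]1
    
    # option 3: in /etc/mdadm/mdadm.conf, scan only partition devices instead of "DEVICE partitions"
    # DEVICE /dev/sd??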
