
Thread: 64 bit Server 11.10 and MDADM problem

  1. #1
    Join Date
    Jun 2008
    Location
    New York, USA
    Beans
    781

    64 bit Server 11.10 and MDADM problem

    Hi all,

    I tried to upgrade my file server box, running 11.04 64-bit server with 5 drives in an array, to 11.10 server.

    Now, my RAID array is /dev/md0 and is composed of 5 drives with no partitions. That is, I created the array on the whole disks, formatted /dev/md0 with EXT4, and that's it. Worked fine.
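
    For reference, the array was originally created more or less like this (from memory, so treat the exact flags as approximate):

    Code:
    # create the RAID-6 array directly on the whole disks (no partition tables)
    mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[a-e]
    # put the filesystem straight on the md device and label it
    mkfs.ext4 -L linux-raid /dev/md0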

    With 11.10, MDADM complains that I have a "degraded array" and wants me to fix it or hit ctrl-D to continue (kinda difficult for a headless server box). It complains because it sees no "valid partition table".

    If I do ctrl-D out of it and mount the array manually, it works fine. But the system won't boot without complaining first.

    I see no reason why I should have to back up terabytes of data and rebuild my array only to add partitions to the drives so MDADM doesn't complain.

    Is there any way around this problem? Or do I have to make 11.04 my final version?

    Thanks...

    -- Roger
    Gentlemen may prefer Blondes, but Real Men prefer Redheads!

  2. #2
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: 64 bit Server 11.10 and MDADM problem

    What's the output of these commands?

    Code:
    cat /proc/mdstat
    Code:
    cat /etc/mdadm/mdadm.conf
    Code:
    mdadm --detail /dev/md0
    There's really no need for a partition on top of an mdadm array, so I'm betting it's not liking your mdadm.conf file.
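
    If the conf file does turn out to be the culprit, regenerating the ARRAY line is usually as simple as something like this (a sketch; check the output before appending, and remove any stale ARRAY lines first):

    Code:
    # show the ARRAY definition for the running array
    mdadm --detail --scan
    # if it looks sane, append it to the config
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf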

  3. #3
    Join Date
    Jun 2008
    Location
    New York, USA
    Beans
    781

    Re: 64 bit Server 11.10 and MDADM problem

    Quote Originally Posted by rubylaser
    What's the output of these commands?

    Code:
    cat /proc/mdstat
    Code:
    cat /etc/mdadm/mdadm.conf
    Code:
    mdadm --detail /dev/md0
    There's really no need for a partition on top of an mdadm array, so I'm betting it's not liking your mdadm.conf file.
    mdstat:
    Code:
    root@storage:/# cat /proc/mdstat 
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md0 : active raid6 sdd[3] sdb[1] sdc[2] sde[4] sda[0]
          2930284032 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
          
    unused devices: <none>
    mdadm.conf:
    Code:
    cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    
    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    DEVICE partitions
    
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    
    # instruct the monitoring daemon where to send mail alerts
    MAILADDR krupski@roadrunner.com
    
    # definitions of existing MD arrays
    ARRAY /dev/md0 UUID=311b8a28:5619dd16:d41b3874:59f9a3d7
    mdadm --detail /dev/md0:
    Code:
    root@storage:/# mdadm --detail /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Fri Dec 23 21:52:01 2011
         Raid Level : raid6
         Array Size : 2930284032 (2794.54 GiB 3000.61 GB)
      Used Dev Size : 976761344 (931.51 GiB 1000.20 GB)
       Raid Devices : 5
      Total Devices : 5
        Persistence : Superblock is persistent
    
        Update Time : Sun Feb  5 17:18:17 2012
              State : clean
     Active Devices : 5
    Working Devices : 5
     Failed Devices : 0
      Spare Devices : 0
    
             Layout : left-symmetric
         Chunk Size : 512K
    
               Name : storage:0  (local to host storage)
               UUID : 311b8a28:5619dd16:d41b3874:59f9a3d7
             Events : 45
    
        Number   Major   Minor   RaidDevice State
           0       8        0        0      active sync   /dev/sda
           1       8       16        1      active sync   /dev/sdb
           2       8       32        2      active sync   /dev/sdc
           3       8       48        3      active sync   /dev/sdd
           4       8       64        4      active sync   /dev/sde

    All looks normal to me. Note that this is running on my old 11.04 system. I yanked 11.10 because of the need to hit ctrl-D to boot.

    (oh and please nobody yell at me for using the root account... I disabled UAC in Windows 7 too... and the world is still turning).

    (p.s.: The boot drive is /dev/sdf - a 40GB SSD card which is not part of the array).
    Last edited by Krupski; February 5th, 2012 at 11:25 PM. Reason: added info
    Gentlemen may prefer Blondes, but Real Men prefer Redheads!

  4. #4
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: 64 bit Server 11.10 and MDADM problem

    This looks correct, but I'm betting that you needed to recreate your /etc/mdadm/mdadm.conf file on 11.10 to get it to start up correctly. Or, you could have run dpkg-reconfigure mdadm and selected the "all" option so that arrays start up automatically.
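
    Something along these lines (the update-initramfs step is my assumption here, since early boot reads the copy of mdadm.conf embedded in the initramfs rather than the one in /etc):

    Code:
    # re-run the package configuration; choose "all" so arrays start at boot
    dpkg-reconfigure mdadm
    # rebuild the initramfs so the boot environment picks up the new config
    update-initramfs -u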

  5. #5
    Join Date
    Jun 2008
    Location
    New York, USA
    Beans
    781

    Re: 64 bit Server 11.10 and MDADM problem

    Quote Originally Posted by rubylaser
    This looks correct, but I'm betting that you needed to recreate your /etc/mdadm/mdadm.conf file on 11.10 to get it to start up correctly. Or, you could have run dpkg-reconfigure mdadm and selected the "all" option so that arrays start up automatically.
    Well, what worries me is that in the past, when I upgraded Ubuntu versions, I just did a fresh re-install (to the separate SSD drive which is not part of the RAID array), then installed MDADM; it found my existing array, generated the mdadm.conf file with the proper data, and it all just worked.

    Now, doing the same thing (moving from 11.04 to 11.10), upon installing MDADM I get the message that the array is "degraded" because "no valid partition table" is found.

    Since what USED to always work now suddenly does NOT, I have to assume the problem lies somewhere other than my "upgrade technique".

    It seems to me intuitively wrong that I now need to change some configuration to make it work when in the past I never had to.

    If it had something to do with a fundamental change in the 2.6 -> 3.0 kernel design, then I could accept it.

    Is it possible that an array without any partition table was "always wrong but ignored" in the past and is now "being enforced as necessary"?

    Thanks for all your input so far!

    -- Roger
    Gentlemen may prefer Blondes, but Real Men prefer Redheads!

  6. #6
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: 64 bit Server 11.10 and MDADM problem

    You don't need to change your configuration, but you do need to ensure that it exists; otherwise mdadm won't know how to assemble the array. Finally, there's no need for a partition on top of mdadm unless you're using LVM to create smaller volumes on top of it, but you haven't mentioned that, so that's not the case.

    On 11.04 what's the output of these?
    Code:
    mount
    Code:
    fdisk -l
    Code:
    cat /etc/fstab | grep md0

  7. #7
    Join Date
    Jun 2008
    Location
    New York, USA
    Beans
    781

    Re: 64 bit Server 11.10 and MDADM problem

    Quote Originally Posted by rubylaser
    You don't need to change your configuration, but you do need to ensure that it exists; otherwise mdadm won't know how to assemble the array. Finally, there's no need for a partition on top of mdadm unless you're using LVM to create smaller volumes on top of it, but you haven't mentioned that, so that's not the case.

    On 11.04 what's the output of these?
    Code:
    mount
    Code:
    fdisk -l
    Code:
    cat /etc/fstab | grep md0
    mount:
    Code:
    root@storage:/# mount
    /dev/sdf1 on / type ext4 (rw,noatime,discard,errors=remount-ro)
    proc on /proc type proc (rw)
    none on /sys type sysfs (rw,noexec,nosuid,nodev)
    fusectl on /sys/fs/fuse/connections type fusectl (rw)
    none on /sys/kernel/debug type debugfs (rw)
    none on /sys/kernel/security type securityfs (rw)
    none on /dev type devtmpfs (rw,mode=0755)
    none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
    none on /dev/shm type tmpfs (rw,nosuid,nodev)
    tmpfs on /tmp type tmpfs (rw,noatime,mode=1777)
    none on /var/run type tmpfs (rw,nosuid,mode=0755)
    none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
    /dev/md0 on /home/shared type ext4 (rw)
    fdisk -l:
    Code:
    root@storage:/dev/shm# fdisk -l
    
    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/sda doesn't contain a valid partition table
    
    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/sdb doesn't contain a valid partition table
    
    Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/sdc doesn't contain a valid partition table
    
    Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/sdd doesn't contain a valid partition table
    
    Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/sde doesn't contain a valid partition table
    
    Disk /dev/sdf: 40.0 GB, 40020664320 bytes
    255 heads, 63 sectors/track, 4865 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000ab996
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdf1   *           1        4866    39081656   83  Linux
    
    Disk /dev/md0: 3000.6 GB, 3000610848768 bytes
    2 heads, 4 sectors/track, 732571008 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md0 doesn't contain a valid partition table
    cat /etc/fstab | grep md0:
    Code:
    /* nothing  (I typed this) */
    /etc/fstab:
    Code:
    # /etc/fstab: static file system information.
    #
    # Use 'vol_id --uuid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system>                <mount point>    <type>  <options>                                   <dump>  <fsck>
    
    # proc
    proc                            /proc           proc    defaults                                    0       0
    
    # /tmp
    tmpfs                           /tmp            tmpfs   defaults,noatime,mode=1777                  0       0
    
    # root
    /dev/disk/by-label/linux-boot   /               auto    defaults,noatime,discard,errors=remount-ro  0       1
    
    # shared raid drive
    /dev/disk/by-label/linux-raid   /home/shared    auto    defaults                                    0       2
    
    # swap
    /home/shared/swapfile           swap            swap    defaults                                    0       0

    My fstab mounts stuff via volume label, which is why the "grep md0" failed.
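
    If anyone wants to double-check the label that the by-label path resolves to, something like this should show it (both tools come with a standard install, I believe):

    Code:
    # show the filesystem label/UUID on the array
    blkid /dev/md0
    # or, for ext filesystems specifically
    e2label /dev/md0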

    Also note from fdisk -l that some drives have 512-byte sectors and some are newer 4096-byte "advanced format" drives. Since /dev/md0 is formatted without a partition table, the filesystem starts at sector 0, which is "optimal" alignment for both 512-byte and 4096-byte sectors.

    Drive /dev/sdf is the 40GB SSD boot drive.

    The "discard" option for the root drive activates "TRIM" for solid state drives (it tells the drive which sectors are no longer in use so it can erase them ahead of time and speed up writes).

    Again... it all looks good to me. What do you think?

    By the way, you said:
    but you do need to ensure that it [mdadm config] exists; otherwise mdadm won't know how to assemble the array.
    If I boot 11.10 and manually ctrl-D past the error, then the "/dev/md0" device exists and can be mounted manually and it works. Also in 11.10, a "cat /proc/mdstat" shows no resync activity (i.e. the array is happy). And of course, /etc/mdadm/mdadm.conf DOES exist.
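
    One thing I suppose I could still check is whether the copy of mdadm.conf baked into the 11.10 initramfs matches the one in /etc, since early boot uses the embedded copy. Something like this should show it, if I have the cpio incantation right:

    Code:
    # list the mdadm bits inside the current initramfs
    lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm
    # dump the embedded config for comparison with /etc/mdadm/mdadm.conf
    zcat /boot/initrd.img-$(uname -r) | cpio -i --to-stdout etc/mdadm/mdadm.conf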

    -- Roger
    Gentlemen may prefer Blondes, but Real Men prefer Redheads!

  8. #8
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: 64 bit Server 11.10 and MDADM problem

    This looks okay as well. Is the error saying that the partition is missing, or that the filesystem is not available?

  9. #9
    Join Date
    Jun 2008
    Location
    New York, USA
    Beans
    781

    Re: 64 bit Server 11.10 and MDADM problem

    Quote Originally Posted by rubylaser
    This looks okay as well. Is the error saying that the partition is missing, or that the filesystem is not available?
    OK, here's what I did step by step and what happened:

    (1) Have a working RAID-6 array using MDADM and Ubuntu 11.04

    (2) Shut down the machine, unplugged the HDD power cables, leaving only the SSD alive

    (3) Booted into 11.10 server installer from a USB stick (created with "Startup Disk Creator" and the 11.10 ISO image).

    (4) Installed 11.10 to the 40GB SSD (formatted as EXT4).

    (5) Shut down and plugged HDD power cables back in

    (6) Boot the server, then "apt-get install mdadm"

    (7) The install goes fine, "/etc/mdadm/mdadm.conf" is created by the installer. It shows the correct UUID (checked roughly as sketched below, after this list).

    (8) Sync, then reboot the server.

    (9) The server hangs during startup, saying "array is degraded", and waits for me to hit ctrl-D to continue booting.

    (10) Hit ctrl-D, bootup finishes.

    (11) "ls /dev/md0" - it's there.

    (12) cat /proc/mdstat - it's there and not doing anything (i.e. it's already sync'ed).

    (13) Manually type "mount /dev/md0 /home/shared" - command completes successfully. All the content at /home/shared is there and accessible - no errors.

    (14) Edit /etc/fstab to mount the array at boot time (i.e. the fstab you saw a few posts ago).

    (15) Reboot. Same thing. It says "degraded array" and will not continue until I hit ctrl-D.
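
    The UUID check in step 7 was basically a comparison like this (a sketch, from memory):

    Code:
    # what the running array reports
    mdadm --detail --scan
    # what the installer wrote
    grep ARRAY /etc/mdadm/mdadm.conf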


    Sorry I don't have the *exact* text of the error message as I'm doing it from memory (and the server is back to 11.04 for now).

    Hope this shows you something that I'm missing... and thank you again for all your help and patience.

    -- Roger
    Gentlemen may prefer Blondes, but Real Men prefer Redheads!

  10. #10
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: 64 bit Server 11.10 and MDADM problem

    Booting degraded (even though it isn't) from a non-system array should not stop the booting process. I'll have to set up an array in 11.04 tomorrow in VirtualBox and then migrate to 11.10, following the same steps that you did. I could understand it stalling the boot process if it tried to mount the array and it wasn't available, but that should give you the "S" option to skip, so it's not that.
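
    If it really is the degraded-array prompt stopping the boot, the usual workaround on Ubuntu of this era is the boot-degraded switch. I haven't verified this on 11.10, so treat it as a sketch:

    Code:
    # one-off: add bootdegraded=true to the kernel line at the GRUB menu
    # persistent: tell the mdadm initramfs hook to boot anyway
    echo "BOOT_DEGRADED=true" > /etc/initramfs-tools/conf.d/mdadm
    update-initramfs -u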

    I'll try to set a test up and see if I can recreate your problem. Maybe I can figure out what the issue is.
