
Thread: 64 bit Server 11.10 and MDADM problem

  1. #11
    Join Date
    Jun 2008
    Location
    New York, USA
    Beans
    769
    Distro
    Xubuntu 14.04 Trusty Tahr

    Re: 64 bit Server 11.10 and MDADM problem

    Quote Originally Posted by rubylaser View Post
    Booting degraded (even though it isn't) from a non-system array should not stop the boot process. I'll have to set up an array in 11.04 tomorrow in VirtualBox and then migrate to 11.10. I'll follow the same steps that you did. I could understand it stalling the boot process if it tried to mount the array and it wasn't available, but that should give you the "S" option to skip, so it's not that.

    I'll try to set a test up and see if I can recreate your problem. Maybe I can figure out what the issue is.
    I tried it again and this time took pictures of the boot screens (since I couldn't capture the text any other way). Let me know if it would help you to see the text... and please let me know if there are any tests you would like me to perform.

    Thanks...

    -- Roger
    Gentlemen may prefer Blonds, but Real Men prefer Redheads!

  2. #12
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,078
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: 64 bit Server 11.10 and MDADM problem

    Sorry, I forgot to run the test. I'll try it today.

  3. #13
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,078
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: 64 bit Server 11.10 and MDADM problem

    Okay, I just ran the tests and everything works great for me here. I installed 11.04 on a single OS disk and set up a simple three-disk RAID5 array mounted at /storage. Here are the steps I followed.

    11.04

    Code:
    root@mdadm-test:~# lsb_release -a
    No LSB modules are available.
    Distributor ID:	Ubuntu
    Description:	Ubuntu 11.04
    Release:	11.04
    Codename:	natty
    
    root@mdadm-test:~# fdisk -l
    
    Disk /dev/sda: 8589 MB, 8589934592 bytes
    255 heads, 63 sectors/track, 1044 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000da21b
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1         784     6290432   83  Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sda2             784        1045     2095105    5  Extended
    /dev/sda5             784        1045     2095104   82  Linux swap / Solaris
    
    Disk /dev/sdb: 1073 MB, 1073741824 bytes
    255 heads, 63 sectors/track, 130 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x92c7d68c
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1               1         130     1044193+  fd  Linux raid autodetect
    
    Disk /dev/sdc: 1073 MB, 1073741824 bytes
    255 heads, 63 sectors/track, 130 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x5b8bfd17
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1               1         130     1044193+  fd  Linux raid autodetect
    
    Disk /dev/sdd: 1073 MB, 1073741824 bytes
    255 heads, 63 sectors/track, 130 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xde26680a
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdd1               1         130     1044193+  fd  Linux raid autodetect
    root@mdadm-test:~# apt-get install mdadm
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following extra packages will be installed:
      postfix ssl-cert
    Suggested packages:
      procmail postfix-mysql postfix-pgsql postfix-ldap postfix-pcre sasl2-bin dovecot-common resolvconf postfix-cdb mail-reader openssl-blacklist
    Recommended packages:
      default-mta mail-transport-agent
    The following NEW packages will be installed:
      mdadm postfix ssl-cert
    0 upgraded, 3 newly installed, 0 to remove and 3 not upgraded.
    Need to get 1,477 kB of archives.
    After this operation, 4,407 kB of additional disk space will be used.
    Do you want to continue [Y/n]? Y
    Get:1 http://us.archive.ubuntu.com/ubuntu/ natty-updates/main mdadm i386 3.1.4-1+8efb9d1ubuntu4.1 [298 kB]
    Get:2 http://us.archive.ubuntu.com/ubuntu/ natty/main ssl-cert all 1.0.28 [12.2 kB]
    Get:3 http://us.archive.ubuntu.com/ubuntu/ natty-updates/main postfix i386 2.8.5-2~build0.11.04 [1,167 kB]
    Fetched 1,477 kB in 8s (178 kB/s)                                                                                                                   
    Preconfiguring packages ...
    Selecting previously deselected package mdadm.
    (Reading database ... 48243 files and directories currently installed.)
    Unpacking mdadm (from .../mdadm_3.1.4-1+8efb9d1ubuntu4.1_i386.deb) ...
    Selecting previously deselected package ssl-cert.
    Unpacking ssl-cert (from .../ssl-cert_1.0.28_all.deb) ...
    Selecting previously deselected package postfix.
    Unpacking postfix (from .../postfix_2.8.5-2~build0.11.04_i386.deb) ...
    Processing triggers for ureadahead ...
    Processing triggers for man-db ...
    Processing triggers for ufw ...
    Setting up mdadm (3.1.4-1+8efb9d1ubuntu4.1) ...
    Generating mdadm.conf... done.
     Removing any system startup links for /etc/init.d/mdadm-raid ...
    update-initramfs: deferring update (trigger activated)
     * Starting MD monitoring service mdadm --monitor
       ...done.
    Setting up ssl-cert (1.0.28) ...
    Setting up postfix (2.8.5-2~build0.11.04) ...
    Adding group `postfix' (GID 113) ...
    Done.
    Adding system user `postfix' (UID 104) ...
    Adding new user `postfix' (UID 104) with group `postfix' ...
    Not creating home directory `/var/spool/postfix'.
    Creating /etc/postfix/dynamicmaps.cf
    Adding tcp map entry to /etc/postfix/dynamicmaps.cf
    Adding group `postdrop' (GID 114) ...
    Done.
    setting myhostname: mdadm-test
    setting alias maps
    setting alias database
    mailname is not a fully qualified domain name.  Not changing /etc/mailname.
    setting destinations: mdadm-test, localhost.localdomain, , localhost
    setting relayhost: 
    setting mynetworks: 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    setting mailbox_size_limit: 0
    setting recipient_delimiter: +
    setting inet_interfaces: all
    /etc/aliases does not exist, creating it.
    WARNING: /etc/aliases exists, but does not have a root alias.
    
    Postfix is now set up with a default configuration.  If you need to make 
    changes, edit
    /etc/postfix/main.cf (and others) as needed.  To view Postfix configuration
    values, see postconf(1).
    
    After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.
    
    Running newaliases
     * Stopping Postfix Mail Transport Agent postfix
       ...done.
     * Starting Postfix Mail Transport Agent postfix
       ...done.
    Processing triggers for initramfs-tools ...
    update-initramfs: Generating /boot/initrd.img-2.6.38-8-generic-pae
    W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
    Processing triggers for libc-bin ...
    ldconfig deferred processing now taking place
    
    root@mdadm-test:~# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]1
    mdadm: layout defaults to left-symmetric
    mdadm: chunk size defaults to 512K
    mdadm: layout defaults to left-symmetric
    mdadm: layout defaults to left-symmetric
    mdadm: layout defaults to left-symmetric
    mdadm: layout defaults to left-symmetric
    mdadm: size set to 1043968K
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
    
    root@mdadm-test:~# watch cat /proc/mdstat
     
    root@mdadm-test:~# echo "DEVICE partitions" > /etc/mdadm/mdadm.conf
    root@mdadm-test:~# echo "HOMEHOST fileserver" >> /etc/mdadm/mdadm.conf
    root@mdadm-test:~# echo "MAILADDR  youruser@gmail.com" >> /etc/mdadm/mdadm.conf
    root@mdadm-test:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    root@mdadm-test:~# mdadm --detail /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Fri Feb 10 13:40:48 2012
         Raid Level : raid5
         Array Size : 2087936 (2039.34 MiB 2138.05 MB)
      Used Dev Size : 1043968 (1019.67 MiB 1069.02 MB)
       Raid Devices : 3
      Total Devices : 3
        Persistence : Superblock is persistent
    
        Update Time : Fri Feb 10 13:41:05 2012
              State : clean
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 0
    
             Layout : left-symmetric
         Chunk Size : 512K
    
               Name : mdadm-test:0
               UUID : 5cb9b746:450d257f:00fb7bd6:854bcb2b
             Events : 18
    
        Number   Major   Minor   RaidDevice State
           0       8       17        0      active sync   /dev/sdb1
           1       8       33        1      active sync   /dev/sdc1
           3       8       49        2      active sync   /dev/sdd1
    
    root@mdadm-test:~# mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/md0
    mke2fs 1.41.14 (22-Dec-2010)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=128 blocks, Stripe width=256 blocks
    130560 inodes, 521984 blocks
    26099 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=536870912
    16 block groups
    32768 blocks per group, 32768 fragments per group
    8160 inodes per group
    Superblock backups stored on blocks: 
    	32768, 98304, 163840, 229376, 294912
    
    Writing inode tables: done                            
    Creating journal (8192 blocks): done
    Writing superblocks and filesystem accounting information: done
    
    This filesystem will be automatically checked every 27 mounts or
    180 days, whichever comes first.  Use tune2fs -c or -i to override.
    root@mdadm-test:~# tune2fs -m 0 /dev/md0
    tune2fs 1.41.14 (22-Dec-2010)
    Setting reserved blocks percentage to 0% (0 blocks)
    root@mdadm-test:~# nano /etc/fstab
    root@mdadm-test:~# mkdir /storage
    root@mdadm-test:~# mount -a
    root@mdadm-test:~# df -h /storage
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/md0              2.0G   35M  2.0G   2% /storage
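
    For reference, the `nano /etc/fstab` step above adds the mount line for the array. A minimal example line (device and mount point taken from this setup; the options are plain defaults, adjust to taste):

```
/dev/md0    /storage    ext4    defaults    0    0
```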

    After verifying that this worked correctly, I rebooted and installed 11.10 on the OS disk. Ubuntu automatically installed mdadm for me and configured an mdadm.conf file, but I like to keep it in a consistent format, so I redid that. Here are my steps.

    11.10

    Code:
    root@mdadm-test:~# lsb_release -a
    No LSB modules are available.
    Distributor ID:	Ubuntu
    Description:	Ubuntu 11.10
    Release:	11.10
    Codename:	oneiric
    
    root@mdadm-test:~# fdisk -l
    
    Disk /dev/sda: 8589 MB, 8589934592 bytes
    255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00072036
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048    12582911     6290432   83  Linux
    /dev/sda2        12584958    16775167     2095105    5  Extended
    /dev/sda5        12584960    16775167     2095104   82  Linux swap / Solaris
    
    Disk /dev/sdb: 1073 MB, 1073741824 bytes
    255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x92c7d68c
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1              63     2088449     1044193+  fd  Linux raid autodetect
    
    Disk /dev/sdc: 1073 MB, 1073741824 bytes
    255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x5b8bfd17
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1              63     2088449     1044193+  fd  Linux raid autodetect
    
    Disk /dev/sdd: 1073 MB, 1073741824 bytes
    255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xde26680a
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdd1              63     2088449     1044193+  fd  Linux raid autodetect
    
    Disk /dev/md0: 2138 MB, 2138046464 bytes
    2 heads, 4 sectors/track, 521984 cylinders, total 4175872 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md0 doesn't contain a valid partition table
    
    root@mdadm-test:~# echo "DEVICE partitions" > /etc/mdadm/mdadm.conf
    root@mdadm-test:~# echo "HOMEHOST fileserver" >> /etc/mdadm/mdadm.conf
    root@mdadm-test:~# echo "MAILADDR  youruser@gmail.com" >> /etc/mdadm/mdadm.conf
    root@mdadm-test:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    root@mdadm-test:~# nano /etc/fstab
    root@mdadm-test:~# mkdir /storage
    root@mdadm-test:~# mount -a
    root@mdadm-test:~# df -h /storage
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/md0              2.0G   35M  2.0G   2% /storage
    root@mdadm-test:~# reboot
    After the reboot, my machine came back up and automounted the array just fine.
    Code:
    root@mdadm-test:~# df -h /storage/
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/md0              2.0G   35M  2.0G   2% /storage
    So, it looks like the fresh-install method works fine and continues to work as it has for years. Maybe this will help you spot a step that you missed.
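
    As an aside, the stride and stripe-width values passed to mkfs.ext4 in the 11.04 steps above fall out of the array geometry. A quick sketch of the arithmetic (values taken from the array above: 512K chunk, 4K filesystem block, three-disk RAID5):

```shell
# Derive ext4 stride/stripe-width hints for an mdadm RAID5 array.
# Assumed geometry from the array above: 512K chunk, 4K fs block, 3 disks.
chunk_kb=512
block_kb=4
data_disks=2   # a 3-disk RAID5 stripe holds data on 2 disks plus 1 parity

stride=$((chunk_kb / block_kb))        # chunk size / block size = 128
stripe_width=$((stride * data_disks))  # stride * data disks = 256
echo "stride=$stride stripe-width=$stripe_width"
```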

  4. #14
    Join Date
    Jun 2008
    Location
    New York, USA
    Beans
    769
    Distro
    Xubuntu 14.04 Trusty Tahr

    Re: 64 bit Server 11.10 and MDADM problem

    Quote Originally Posted by rubylaser View Post
    Okay, I just ran the tests and everything works great for me here.
    Here's a "screenshot" of the error I get:
    [boot-error screenshot omitted from this text capture]
    Note: I combined 2 photos together to show all of it.

    Does that tell you anything?

  5. #15
    Join Date
    Jun 2008
    Location
    New York, USA
    Beans
    769
    Distro
    Xubuntu 14.04 Trusty Tahr

    Re: 64 bit Server 11.10 and MDADM problem

    Quote Originally Posted by rubylaser View Post
    Okay, I just ran the tests and everything works great for me here.
    Your RAID array is made of /dev/sdb1, sdc1 and sdd1 (all partitions of a disk).

    My array was initially created like this:

    mdadm -C -n5 -l6 /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

    (something like that) - note that I used /dev/sda, /dev/sdb etc... NOT /dev/sda1, /dev/sdb1, etc...

  6. #16
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,078
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: 64 bit Server 11.10 and MDADM problem

    Partitions vs. raw disks wouldn't be causing this issue; that's just how I like to set up mdadm arrays, but either way is fine. In this case the partition message doesn't mean anything and isn't the problem. The issue is that not all of the disks are available yet when mdadm tries to assemble the array, so it thinks the array is degraded.

    I would set up the boot-degraded option first, like this (it shouldn't be necessary, but it will get you past the error).
    Code:
    nano /etc/initramfs-tools/conf.d/mdadm
    change "BOOT_DEGRADED=false" to "BOOT_DEGRADED=true"
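
If you'd rather not open nano, a sed one-liner makes the same change. This is just a sketch run against a temporary copy; on the real system the file is /etc/initramfs-tools/conf.d/mdadm, and you'd likely want to run update-initramfs -u afterwards so the setting lands in the initramfs:

```shell
# Demonstrate the edit on a temporary copy of the config file.
conf=$(mktemp)
echo 'BOOT_DEGRADED=false' > "$conf"

# Flip the setting in place, exactly as the manual nano edit would.
sed -i 's/^BOOT_DEGRADED=false$/BOOT_DEGRADED=true/' "$conf"

result=$(cat "$conf")
echo "$result"
rm -f "$conf"
```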

    Then, I would try to reconfigure mdadm:
    Code:
    dpkg-reconfigure mdadm
    Make sure that when it asks you which arrays to autostart, you leave that field as all. That should start up your array early in the boot process and avoid this. The other option is to remove the mount line from /etc/fstab and write an init script that mounts your array at the end of the boot sequence, but I'll wait to hear your results from trying the above.
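
    The init-script alternative could be as simple as an /etc/rc.local excerpt like this (a sketch only, not something I've tested on 11.10; it assumes rc.local still runs at the end of the boot sequence):

```
#!/bin/sh -e
# /etc/rc.local (sketch): assemble and mount the array late in boot,
# after all disks have been detected.
mdadm --assemble --scan || true
mount /dev/md0 /storage
exit 0
```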
    Last edited by rubylaser; February 10th, 2012 at 09:22 PM.

