
Thread: Jaunty Broke Software Raid

  1. #21
    Join Date
    Jan 2008
    Beans
    12

    Re: Jaunty Broke Software Raid

    I was experiencing similar symptoms to yours: I was getting "device busy" and mismatching "superblock" errors when trying to create my mdadm software RAID.

    If I booted with my RAID drives connected I would get an error "Initialization of HAL failed!" and if I disconnected them during boot everything would be fine. I checked my syslog and came across a segfault error for hald.

    Here is a bug report https://bugs.launchpad.net/ubuntu/+s...al/+bug/361689 that seemed to address my problem exactly.

    I upgraded HAL using the PPA listed in the above bug report and now my problem is fixed! I can successfully boot and mount my software RAID again (and I no longer lose my keyboard and mouse either).
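
    For anyone hitting the same thing, this is roughly the sequence I followed (the PPA line below is only a placeholder; use the real one from the bug report):

    Code:
    # confirm hald is the thing segfaulting at boot
    grep -i segfault /var/log/syslog | grep -i hald
    # add the PPA from bug #361689 to your sources first, e.g. a line like
    #   deb http://ppa.launchpad.net/<ppa-from-bug-report>/ubuntu jaunty main
    # in /etc/apt/sources.list.d/hal-fix.list, then pull in the newer hal:
    sudo apt-get update
    sudo apt-get install hal
    sudo reboot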

    -Mic

  2. #22
    Join Date
    Sep 2007
    Beans
    28
    Distro
    Ubuntu 8.04 Hardy Heron

    Re: Jaunty Broke Software Raid

    I have noticed that one disk will assemble onto md0 and the other will assemble onto md127; they switch depending on what I set the md device to in mdadm.conf.

    Code:
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    
    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    DEVICE partitions
    
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    
    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root
    
    # definitions of existing MD arrays
    ARRAY /dev/md127 level=raid0 num-devices=2 UUID=616e78cd:fc386bed:b928cd6d:81a874bc
    This mdadm.conf yields...

    Code:
    root@griffie:/home/nutchy# mdadm -Asv
    mdadm: looking for devices for /dev/md127
    mdadm: cannot open device /dev/sdc: Device or resource busy
    mdadm: /dev/sdc has wrong uuid.
    mdadm: cannot open device /dev/sdb1: Device or resource busy
    mdadm: /dev/sdb1 has wrong uuid.
    mdadm: cannot open device /dev/sdb: Device or resource busy
    mdadm: /dev/sdb has wrong uuid.
    mdadm: cannot open device /dev/sda5: Device or resource busy
    mdadm: /dev/sda5 has wrong uuid.
    mdadm: no RAID superblock on /dev/sda2
    mdadm: /dev/sda2 has wrong uuid.
    mdadm: cannot open device /dev/sda1: Device or resource busy
    mdadm: /dev/sda1 has wrong uuid.
    mdadm: cannot open device /dev/sda: Device or resource busy
    mdadm: /dev/sda has wrong uuid.
    mdadm: /dev/sdd is identified as a member of /dev/md127, slot 1.
    mdadm: no uptodate device for slot 0 of /dev/md127
    mdadm: added /dev/sdd to /dev/md127 as 1
    mdadm: /dev/md127 assembled from 1 drive - not enough to start the array.
    mdadm: looking for devices for further assembly
    mdadm: cannot open device /dev/sdd: Device or resource busy
    mdadm: cannot open device /dev/sdc: Device or resource busy
    mdadm: cannot open device /dev/sdb1: Device or resource busy
    mdadm: cannot open device /dev/sdb: Device or resource busy
    mdadm: cannot open device /dev/sda5: Device or resource busy
    mdadm: no recogniseable superblock on /dev/sda2
    mdadm: cannot open device /dev/sda1: Device or resource busy
    mdadm: cannot open device /dev/sda: Device or resource busy
    If I change the ARRAY line to md0, I get the other drive instead:

    Code:
    ARRAY /dev/md0 level=raid0 num-devices=2 UUID=616e78cd:fc386bed:b928cd6d:81a874bc
    Code:
    root@griffie:/home/nutchy# mdadm -Asv
    mdadm: looking for devices for /dev/md0
    mdadm: cannot open device /dev/sdd: Device or resource busy
    mdadm: /dev/sdd has wrong uuid.
    mdadm: cannot open device /dev/sdb1: Device or resource busy
    mdadm: /dev/sdb1 has wrong uuid.
    mdadm: cannot open device /dev/sdb: Device or resource busy
    mdadm: /dev/sdb has wrong uuid.
    mdadm: cannot open device /dev/sda5: Device or resource busy
    mdadm: /dev/sda5 has wrong uuid.
    mdadm: no RAID superblock on /dev/sda2
    mdadm: /dev/sda2 has wrong uuid.
    mdadm: cannot open device /dev/sda1: Device or resource busy
    mdadm: /dev/sda1 has wrong uuid.
    mdadm: cannot open device /dev/sda: Device or resource busy
    mdadm: /dev/sda has wrong uuid.
    mdadm: /dev/sdc is identified as a member of /dev/md0, slot 0.
    mdadm: no uptodate device for slot 1 of /dev/md0
    mdadm: added /dev/sdc to /dev/md0 as 0
    mdadm: /dev/md0 assembled from 1 drive - not enough to start the array.
    mdadm: looking for devices for further assembly
    mdadm: cannot open device /dev/sdd: Device or resource busy
    mdadm: cannot open device /dev/sdc: Device or resource busy
    mdadm: cannot open device /dev/sdb1: Device or resource busy
    mdadm: cannot open device /dev/sdb: Device or resource busy
    mdadm: cannot open device /dev/sda5: Device or resource busy
    mdadm: no recogniseable superblock on /dev/sda2
    mdadm: cannot open device /dev/sda1: Device or resource busy
    mdadm: cannot open device /dev/sda: Device or resource busy
    These match up with the preferred minor numbers in the superblocks.
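
    For anyone wanting to check the same thing, the preferred minor each member has recorded can be read straight out of its superblock (using my whole-disk members /dev/sdc and /dev/sdd from the output above):

    Code:
    # "Preferred Minor" shows which /dev/mdX each member thinks it belongs to
    mdadm --examine /dev/sdc | grep "Preferred Minor"
    mdadm --examine /dev/sdd | grep "Preferred Minor"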
    It's fun fun fun in the sun sun sun, They're Hardy Hardy Herons???

    Ubuntu 8.04 Hungry Hippo Forever

  3. #23
    Join Date
    Sep 2007
    Beans
    28
    Distro
    Ubuntu 8.04 Hardy Heron

    Re: Jaunty Broke Software Raid

    Is there a way to change the preferred minor number?

  4. #24
    Join Date
    Sep 2007
    Beans
    28
    Distro
    Ubuntu 8.04 Hardy Heron

    Re: Jaunty Broke Software Raid

    FIXED!

    But I'm not quite sure what did it:

    • Installed mdadm from Jonas Pedersen's PPA
    • Rebooted and got the repair console ("Enter root password or Control-D to continue")
    • Ran mdadm -As -U super-minor


    It assembled and is working great.

    Not sure if it was the PPA or the -As -U super-minor run straight after a reboot (rough sequence pasted at the end of this post).
    Both drives' superblocks have a preferred minor of 0 now.

    I haven't risked a reboot yet; going to do a backup first.
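
    For reference, the rough sequence from the repair console looked something like this (device names are from my earlier posts, so adjust to suit):

    Code:
    # stop the half-assembled arrays so they can be reassembled cleanly
    mdadm --stop /dev/md0
    mdadm --stop /dev/md127
    # scan-assemble everything in mdadm.conf, rewriting each member's
    # preferred minor to match the array it gets assembled into
    mdadm -As -U super-minor
    # check the result
    cat /proc/mdstat
    mdadm --examine /dev/sdc | grep "Preferred Minor"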
    It's fun fun fun in the sun sun sun, They're Hardy Hardy Herons???

    Ubuntu 8.04 Hungry Hippo Forever

  5. #25
    Join Date
    Feb 2007
    Location
    Cameron Park CA USA
    Beans
    4,571
    Distro
    Ubuntu Development Release

    Re: Jaunty Broke Software Raid

    ran mdadm -As -U super-minor
    I would say running the above command did the trick... Thank God for man pages, eh?
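
    For the curious, the relevant bit is the --update=super-minor note among the assemble options in the man page:

    Code:
    man mdadm | grep -B2 -A6 "super-minor"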

    Happy to see you are up and running again.
    Regards, frank, at http://yantrayoga.typepad.com/noname/
    Homebuilt Lian-Li PC-Q33WB, Intel i7-4790K 4.6GHz, SSDs,32G RAM | Dell Laptop 13.3".
    Oracle VBox w/ WinXP/Win10 running Xara Designer, PaintShopPro, and InDesign CS.

  6. #26
    Join Date
    Oct 2006
    Beans
    45
    Distro
    Ubuntu 10.04 Lucid Lynx

    Re: Jaunty Broke Software Raid

    I'm suspecting the Jaunty upgrade of hurting my RAID setup too ... I posted the below in a different thread but did not receive any responses ... guilty of cross-posting, but hoping someone here may spot what the problem might be. Please HELP!

    I've got 4 320GB Seagate 7200.11 drives and have the following RAIDs (using mdadm) across them:
    Code:
    /dev/md0  /boot  RAID1
    /dev/md1  /      RAID10
    /dev/md2  /home  RAID10
    I'm monitoring the status of my RAIDs via conky (screenshot below) ... I've noticed a strange thing: after about a day of running, /dev/sdb fails for both of my RAID10 devices:
    Code:
    cyberkost@raidbox:~$ cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md2 : active raid10 sdb3[4](F) sdc3[3] sda3[1] sdd3[2]
          552779776 blocks 256K chunks 2 far-copies [4/3] [_UUU]
    
    md1 : active raid10 sda2[1] sdc2[3] sdb2[4](F) sdd2[2]
          67103232 blocks 64K chunks 2 far-copies [4/3] [_UUU]
    
    md0 : active raid1 sda1[1] sdc1[3] sdb1[0] sdd1[2]
          1052160 blocks [4/4] [UUUU] 
    
    unused devices: <none>
    This is also signaled by the HDD activity LED being constantly on (though no audible HDD activity). If I try to add /dev/sdb[2,3] partitions back to their respective arrays, I get:
    Code:
    cyberkost@raidbox:~$ sudo mdadm --add /dev/md1 /dev/sdb2 
    mdadm: Cannot open /dev/sdb2: Device or resource busy
    The HDD activity LED goes off after about a day or two, and I can add the partitions back after that. Alternatively, I can just reboot the machine and, although it takes a while (2-3 hours) to come up, add the /dev/sdb[2,3] partitions back to their RAID devices once start-up is completed (the machine comes up with /dev/sdb[2,3] missing).
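
    For completeness, this is the remove/re-add dance I end up doing once the drive frees up (my device names; I believe the (F) members have to be removed before --add will take them back):

    Code:
    # kick the failed members out of their arrays first
    sudo mdadm /dev/md1 --remove /dev/sdb2
    sudo mdadm /dev/md2 --remove /dev/sdb3
    # then add them back and let the resync run
    sudo mdadm /dev/md1 --add /dev/sdb2
    sudo mdadm /dev/md2 --add /dev/sdb3
    watch cat /proc/mdstat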

    Originally I thought that the HDD corresponding to /dev/sdb was going bad, or that the corresponding SATA cable/channel on the motherboard had become faulty, but after reading post1 and the post2 it references, I started to suspect that it's the upgrade to Jaunty 9.04 that messed up my mdadm RAID.

    My suspicions are further deepened by the following facts:
    1. sudo smartctl -a /dev/sdb says the drive is fine (and the output for /dev/sdb looks VERY similar to that for the other 3 drives)
    2. I first noticed the problem shortly after the upgrade to 9.04; I had not had a problem for the year prior to that.
    3. /proc/mdstat used to list /dev/sdaX /dev/sdbX /dev/sdcX /dev/sddX for all 3 RAID devices prior to the upgrade (ABCD order). It now has ACBD for /dev/md[0,1] and BCAD for /dev/md2.
    4. Looking at mdadm.conf, I find the UUID constructs rather suspicious -- the last two blocks are the same for all of /dev/md[0,1,2], while the first two are different (see also the quick checks I pasted after this list):
    Code:
    cyberkost@raidbox:~$ cat /etc/mdadm/mdadm.conf 
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    
    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    DEVICE partitions
    
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    
    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root
    
    # definitions of existing MD arrays
    ARRAY /dev/md0 level=raid1 num-devices=4 UUID=20eef32f:87dc5eba:e368bf24:bd0fce41
    ARRAY /dev/md1 level=raid10 num-devices=4 UUID=bacf8d29:0b16d983:e368bf24:bd0fce41
    ARRAY /dev/md2 level=raid10 num-devices=4 UUID=67665d86:1cc558d5:e368bf24:bd0fce41
    
    # This file was auto-generated on Thu, 01 Jan 2009 00:37:24 +0000
    # by mkconf $Id$
    I'd naively expect those UUID fields to be the same across all 3 RAID devices (e.g., if each block were a reference to a particular physical HDD) or be completely different (e.g., if each block were a reference to a particular superblock/partition).
    5. Lastly, I see that mdadm.conf has the date of my upgrade to Ubuntu 9.04 Jaunty Jackalope:
    Code:
    cyberkost@raidbox:~$ ls -la /etc/mdadm/mdadm.conf 
    -rw-r--r-- 1 root root 874 2009-04-25 23:09 /etc/mdadm/mdadm.conf
    ... and no, there's no backup file left
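
    Quick checks for points 3 and 4 above (just a sketch with my device names): which physical drive currently sits behind each /dev/sdX letter, and what UUID the sdb members actually carry in their superblocks:

    Code:
    # serial-number-to-letter mapping, to see if the kernel re-lettered the drives
    ls -l /dev/disk/by-id/ | grep -v part
    # UUIDs recorded in the sdb members, to compare against mdadm.conf
    sudo mdadm --examine /dev/sdb2 | grep UUID
    sudo mdadm --examine /dev/sdb3 | grep UUID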

    I checked if following the advice offered in post2 is going to help, but it seems that it will not ... because the command returns the configuration that's already part of my 9.04 mdadm.conf:
    Code:
    cyberkost@raidbox:~$ sudo mdadm --examine --scan --config=mdadm.conf
    [sudo] password for kost: 
    ARRAY /dev/md0 level=raid1 num-devices=4 UUID=20eef32f:87dc5eba:e368bf24:bd0fce41
    ARRAY /dev/md1 level=raid10 num-devices=4 UUID=bacf8d29:0b16d983:e368bf24:bd0fce41
    ARRAY /dev/md2 level=raid10 num-devices=4 UUID=67665d86:1cc558d5:e368bf24:bd0fce41
    EDIT TO ADD:
    My /dev/md0 raid1 just got affected!!!
    Code:
    cyberkost@raidbox:~$ cat /proc/mdstat 
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md2 : active raid10 sdb3[4](F) sdc3[3] sda3[1] sdd3[2]
          552779776 blocks 256K chunks 2 far-copies [4/3] [_UUU]
          
    md1 : active raid10 sda2[1] sdc2[3] sdb2[4](F) sdd2[2]
          67103232 blocks 64K chunks 2 far-copies [4/3] [_UUU]
          
    md0 : active raid1 sda1[1] sdc1[3] sdb1[4](F) sdd1[2]
          1052160 blocks [4/3] [_UUU]
          
    unused devices: <none>
    [Attached image: conky RAID status screenshot]
    Last edited by cyberkost; June 22nd, 2009 at 04:58 PM.

  7. #27
    Join Date
    Nov 2008
    Beans
    109

    Re: Jaunty Broke Software Raid

    CyberKost:
    Thanks for doing a real post! Nothing makes people crazier faster than someone coming in and posting "It broke, I can't figure out what's wrong, fix it for me now!!" with zero details! ...I digress...

    I had very similar symptoms to what you are experiencing when I first upgraded to 9.04. For me, the fix was to force the arrays to assemble, then make a change to my mdadm.conf (though yours already looks fine), then reboot, and I haven't had a second's trouble since. Try doing a force assemble with:

    sudo mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde --force
    (with your actual /dev/'s of course!)
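
    If md0 is already partly assembled, you may need to stop it first before the force assemble will take; something like:

    Code:
    sudo mdadm --stop /dev/md0
    sudo mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    cat /proc/mdstat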

    ...and let us know if that does the trick.
