andymcca
September 17th, 2010, 12:22 AM
Hello,
I'm trying to install 10.04.1 from the alternate installation CD onto a 2-disk raid array using mdadm. I can set up all of my partitions, create the raid devices, and install the OS to them. However, when I reboot after completing the installation:
* Grub loads
* initramfs loads
* initramfs fails to load /dev/md0 (my /) and /dev/md2 (my /home), complaining about degraded status
* I get dumped to an initramfs prompt.
At this point I can run
mdadm -S /dev/md*
mdadm -A /dev/md0
mdadm -A /dev/md2
exit

and the OS boots fine. However, the problem comes right back after a restart. The mdadm.conf UUIDs seem to match the mdadm -Ds results. Any clues as to why the initial assembly attempt is failing? Let me know what information I can provide.
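For anyone wanting to double-check the same thing, here is a small sketch of that UUID comparison (extract_uuids is a helper I made up; the commented commands need root):

```shell
#!/bin/sh
# Pull the UUID= tokens out of any mdadm-style listing and sort them,
# so two listings can be diffed regardless of line order.
extract_uuids() {
    grep -o 'UUID=[0-9a-f:]*' | sort
}

# To compare the running arrays against the config (as root):
#   mdadm -Ds | extract_uuids > /tmp/running
#   extract_uuids < /etc/mdadm/mdadm.conf > /tmp/conf
#   diff /tmp/running /tmp/conf && echo "UUIDs match"
```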
Thanks in advance!
***UPDATE***
It appears that if I add devices to mdadm.conf by hand, the arrays are assembled automatically at bootup. I did this with device names like /dev/sda5, etc. I will try to use by-uuid names next, and post back. It appears that the line

DEVICE partitions

which appears in mdadm.conf by default is not working? In any case this seems like something which should be worked around automatically during an install.
***UPDATE 2***
After a few hours of testing I have determined that my RAID assembly fails any time I attempt to define devices in mdadm.conf with wildcards. That is,

DEVICE partitions
DEVICE sd*

etc. all fail. In fact,

DEVICE /dev/sda5 /dev/sdb5 /dev/sda7 /dev/sdb7 /dev/sda8 /dev/sdb8 /dev/sd*

also fails, so even if I explicitly state the devices but then throw in a wildcard, nothing gets assembled. My final working copy (in full) is:
# mdadm.conf
#the following was automatically generated and I did not mess with it:
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST <system>
MAILADDR root
#my stuff:
DEVICE /dev/sda5 /dev/sdb5 /dev/sda7 /dev/sdb7 /dev/sda8 /dev/sdb8
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=4384d20c:66616f4e:7995638d:58efe975
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=91b479e3:e89be956:aec481ba:c6bfe0a7
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=42bae81d:71b13219:1ff44cfa:0ee06a66

Sadly, this is not very general. If my device names change (which will happen in about 10 minutes when I add the rest of my hard drives), I will have to do the whole manual initramfs assembly dance again.
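If it helps anyone, one way to cut down on the hand-editing (a sketch, assuming the stock Ubuntu paths; run as root) is to let mdadm emit the ARRAY lines itself instead of typing the UUIDs in by hand:

```shell
# Append ARRAY lines for the currently running arrays to mdadm.conf,
# then rebuild the initramfs so the boot-time copy matches (run as root):
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

This still leaves the DEVICE line to sort out, but at least the UUIDs can't be mistyped.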
Any thoughts? Where should I take this for a possible bug report?
***PS***
Anyone reading this and having a similar problem:
after editing mdadm.conf, you need to run

update-initramfs -u

to update the copy that gets loaded before your root partition is mounted (i.e. the one that is used to assemble your drives). At least, this is definitely the case when / is on a RAID array.
If you have to boot off a livecd to rescue your install, you can:
install mdadm (sudo apt-get install mdadm),
then assemble whatever your partitions are (sudo mdadm -A /dev/md? /dev/sd?? /dev/sd??),
mount your root device (sudo mkdir /media/tmp; sudo mount /dev/md? /media/tmp),
mount your boot device (if it is separate from root) (sudo mkdir /media/tmp1; sudo mount /dev/md? /media/tmp1),
bind the livecd proc directory over your root's /proc (sudo mount --bind /proc/ /media/tmp/proc/),
bind your boot over your root's /boot (sudo mount --bind /media/tmp1/ /media/tmp/boot/),
chroot into your installation (sudo chroot /media/tmp/)
and finally update your initramfs (sudo update-initramfs -u)
Press Ctrl+D to leave the chroot, restart, and your initramfs will now contain the updated mdadm.conf!
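Put together, the rescue dance above looks like this (a sketch; the md?/sd?? placeholders stand for whatever your devices actually are, and everything needs root):

```shell
# Live CD rescue, condensed (fill in real device names for md?/sd??):
apt-get install mdadm                       # get mdadm onto the live session
mdadm -A /dev/md? /dev/sd?? /dev/sd??       # assemble the array(s)
mkdir -p /media/tmp  && mount /dev/md? /media/tmp    # root filesystem
mkdir -p /media/tmp1 && mount /dev/md? /media/tmp1   # separate /boot, if any
mount --bind /proc /media/tmp/proc          # give the chroot a /proc
mount --bind /media/tmp1 /media/tmp/boot    # put /boot where grub expects it
chroot /media/tmp update-initramfs -u       # rebuild the initramfs in place
```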
Phew! Good luck!