I was having problems with my RAID 1 array, which had previously worked fine on Natty/Lucid. There was nothing actually wrong with the array, but on every boot I got either the dreaded 'purple screen of death' or the BusyBox initramfs prompt.
The /proc/mdstat dumped out on the 'degraded raid' screen appeared to show that one of my two drives had not yet been detected when the check on the array's health was performed. It turns out that mdadm RAID arrays are assembled incrementally via udev rules, which you can find in /lib/udev/rules.d/85-mdadm.rules.
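For reference, the rule in that file that triggers the incremental assembly looks roughly like this (quoting from memory - check the file on your system for the exact text, it varies a little between releases):
Code:
SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
So each member drive gets added to the array as its block device appears, one udev event at a time.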
This is all well and good, as long as the udev rule has fired for every drive in your array before the check on the array's health occurs. If not all of the devices have come up in time, you'll get the degraded screen - a good old-fashioned race condition. This happens even if there is nothing wrong with any of your drives.
Checking through the initramfs scripts for Ocelot/Natty/Lucid, I couldn't find anywhere that a sanity check is performed to ensure all members of the array have come up, other than the 'degraded array' screen itself. I can only assume that in previous incarnations of Ubuntu the structure of the scripts was such that the race condition rarely occurred (I did have the odd bad boot where the array wouldn't come up).
I fixed this problem on my hardware by adding a 'udevadm settle' in the following file:
/usr/share/initramfs-tools/scripts/mdadm-functions
In there, look for the following function:
Code:
degraded_arrays()
{
    mdadm --misc --scan --detail --test >/dev/null 2>&1
    return $((! $?))
}
and change it to:
Code:
degraded_arrays()
{
    # let udev finish processing any queued block-device events before testing the arrays
    udevadm settle
    mdadm --misc --scan --detail --test >/dev/null 2>&1
    return $((! $?))
}
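As an aside, if a drive really is dead or unplugged, settle will simply wait until udev's event queue drains or its own built-in timeout expires. If you'd rather bound that wait yourself, udevadm settle accepts a --timeout option - the 30 seconds below is only an example value:
Code:
degraded_arrays()
{
    # wait for queued udev events, but give up after 30 seconds
    udevadm settle --timeout=30
    mdadm --misc --scan --detail --test >/dev/null 2>&1
    return $((! $?))
}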
then do a:
Code:
sudo update-initramfs -u
and reboot.
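If you want to double-check that the edit actually made it into the new image before you reboot, something along these lines should show it (lsinitramfs comes with newer initramfs-tools, and the cpio extraction assumes a plain gzip-compressed initramfs, so adjust if yours differs):
Code:
# list the mdadm pieces packed into the image for the running kernel
lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm

# pull the copied script out of the image and look for the settle call
zcat /boot/initrd.img-$(uname -r) | cpio -i --quiet --to-stdout '*mdadm-functions' | grep -B1 -A3 'udevadm settle'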
The udevadm settle call waits for all udev events currently being processed to complete before the health check is made on the array. By then the array should have had a chance to assemble correctly.
Hope this helps.