[ubuntu] 10.04 upgrade will not boot "invalid argument" error



olddave1
May 1st, 2010, 05:58 PM
Hi,

I did the online upgrade from 9.10 to 10.04, 64-bit. I was forced to, as the two suggested solutions to my incredibly slow internet lookups both required new package installs, both of which failed, so I hoped it would be sorted out in 10.04 LTS.

It all went to plan until it tried to reboot. I get this:

mount: mounting /dev/disk/by-uuid/1e4e276c-8741-42e7-b52e-05c195790d28 on /root failed: invalid argument
mount: mounting /dev on /root/dev failed: No such file or directory
mount: mounting /sys on /root/sys failed: No such file or directory
mount: mounting /proc on /root/proc failed: No such file or directory
Target filesystem doesn't have /sbin/init.
No init found. Try passing init= bootarg.

I don't have RAID at all, just a single Samsung 200GB SATA drive as the boot drive. Can someone tell me where to start looking? Also, how can an LTS release screw up so badly on a common Gigabyte motherboard?
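
For reference, the UUID it is complaining about should match what blkid reports for the root partition when run from a LiveCD (a read-only check, nothing is modified):

sudo blkid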

Thx.

David

Tummycd28
May 2nd, 2010, 05:20 AM
Same problem here; mine is a fresh install.

olddave1
May 2nd, 2010, 12:01 PM
Did some googling, and after running e2fsck -p /dev/sda1 found it could not find the superblock. So I tried this from the 10.04 LiveCD:

sudo dumpe2fs /dev/sda1 | grep superblock

I got


dumpe2fs: No such file or directory while trying to open /dev/sda1
Couldn't find valid filesystem superblock.


Seems the 10.04 upgrade has trashed my boot stuff. Way to go...
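
Edit: thinking about it, that "No such file or directory" from dumpe2fs suggests the partition device itself has gone missing, rather than the superblock being damaged. Two quick read-only checks for whether the kernel sees the partition at all:

cat /proc/partitions   # does an sda1 entry appear?
ls -l /dev/sda1        # does the device node exist?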

Any way out of this now?

Thx.

David

olddave1
May 3rd, 2010, 12:35 PM
Here is the output from sudo fdisk -l -u /dev/sda:

Disk /dev/sda: 200.0 GB, 200049647616 bytes
255 heads, 63 sectors/track, 24321 cylinders, total 390721968 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c492c

Device Boot Start End Blocks Id System
/dev/sda1 * 63 374796449 187398193+ 83 Linux
/dev/sda2 374796450 390716864 7960207+ 5 Extended
/dev/sda5 374796513 390716864 7960176 82 Linux swap / Solaris

Is there a utility I can use that will show me the superblocks like dumpe2fs should be doing? Anything else I can do? Anyone?
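
Edit: for anyone searching later, one way to list where the backup superblocks should live is a dry run of mke2fs, which prints the locations without writing anything (this assumes the partition was created with default parameters, and only helps if the kernel can actually see /dev/sda1 in the first place):

sudo mke2fs -n /dev/sda1          # -n: show what would be done, change nothing
sudo e2fsck -b 32768 /dev/sda1    # then point e2fsck at one of the printed backups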

olddave1
May 3rd, 2010, 03:36 PM
Might be getting somewhere. The 10.04 upgrade, in its wisdom, seems to have decided that because I have two identical 200GB SATA drives they are in a RAID config, judging by the output of blkid:

/dev/loop0: TYPE="squashfs"
/dev/sda: TYPE="isw_raid_member"
/dev/sdb: TYPE="isw_raid_member"
/dev/sdc1: LABEL="W2003Server" UUID="ae98cd9a-653d-4f42-8479-b78ea1ab6362" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdc2: LABEL="TestSpace" UUID="126fdbbd-ba7a-4379-b0ec-4aa262c21d5e" TYPE="ext4"
/dev/mapper/isw_cgghdebhfi_Dev21: UUID="1e4e276c-8741-42e7-b52e-05c195790d28" TYPE="ext3"
/dev/mapper/isw_cgghdebhfi_Dev22: UUID="atNi17-RF09-GjJO-xKh1-tkIV-U746-9jzeGx" TYPE="LVM2_member"
/dev/sdd1: UUID="A663-94F1" TYPE="vfat"

/dev/sda1 is my boot drive and still shows as such in fdisk -l -u. So how do I get it from this false idea back to where it was before the 10.04 upgrade worked its magic?
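
Edit: to see exactly what device-mapper has claimed here, these read-only commands list the active mappings:

sudo dmsetup ls
ls -l /dev/mapper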

Anyone at all?

olddave1
May 4th, 2010, 01:32 PM
Bump.

As far as I can tell this must be a bug in the upgrade software. It has found something on the disk left over from when the disk was in a RAID 1 setup a couple of years ago, ignoring the fact that it has been running as a single disk for over a year.
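
Edit: if it really is leftover metadata, dmraid itself should be able to show it and erase it. I have not dared try this yet; erasing wipes the ISW signature from the disk, so only do it if you are certain the array is long gone and you have a backup:

sudo dmraid -r            # list any RAID signatures dmraid can see
sudo dmraid -rE /dev/sda  # erase the metadata from the named disk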

silver6
May 4th, 2010, 08:11 PM
I think I had a similar problem. I had RAID disabled in the BIOS, but for some reason software RAID tried to do its thing, so I could not boot.

What I did to resolve it was to pass the nodmraid option to the kernel. You need to edit /boot/grub/menu.lst and find the line that says:

kernel /boot/vmlinuz-2.6.32-21-generic root=UUID=xxxxxxxxxxx ro quiet splash

Simply add nodmraid to the end of that line. You can get at the file by booting the LiveCD and mounting the drive, or in GRUB press e to edit the boot entry and do it from there.
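
So the edited line would look something like this (the UUID placeholder and kernel version are from the example above; yours will differ):

kernel /boot/vmlinuz-2.6.32-21-generic root=UUID=xxxxxxxxxxx ro quiet splash nodmraid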

The reason this happens now is that dmraid is activated by default in 10.04, according to the release notes: http://www.ubuntu.com/getubuntu/releasenotes/1004.

Hope that helps, worked for me.

olddave1
May 6th, 2010, 12:14 PM
Hi,

I rebooted, having forgotten I had taken the Live CD out, and guess what: it booted, with no changes to anything. This is the definition of flaky, booting at random; that was probably one success out of 20 attempts. So I made the change you suggested and appended 'nodmraid' to all the kernel lines in menu.lst. It did not work, and I am back to no boot. This is very frustrating.
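
Edit: for anyone else trying this, you can confirm on a boot that does succeed that the option actually reached the kernel:

cat /proc/cmdline   # should show nodmraid at the end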

Thx for the suggestion though.

olddave1
May 6th, 2010, 12:34 PM
Found some threads on removing the dmraid package, which seems logical; I will do so when I get home. It looks like a consistent problem for quite a few people. I'm surprised, given the trouble it causes, that it has not been sorted out in the installer.

olddave1
May 6th, 2010, 07:08 PM
I rebooted 8 times this time before it booted. Then I ran 'sudo apt-get remove dmraid', only to find dmraid is not installed. Back to square one: not booting most of the time due to remnants of an old RAID setup on disk, and no way of fixing it. Windows 7 is starting to look really tempting.
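
Edit: since the package is not installed, the RAID assembly presumably happens inside the initramfs left by the upgrade. Something like this should show whether any dmraid bits are in there (assuming a gzip-compressed initramfs, which is the 10.04 default):

zcat /boot/initrd.img-$(uname -r) | cpio -t | grep -i dmraid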

Unless there is someone who knows how to fix it? Anyone?

olddave1
May 17th, 2010, 08:51 PM
It seems to me that anyone who uses a boot drive for Ubuntu that was once part of a software RAID will have my problem. A friend mentioned the superblocks might have to be wiped, but I could find nothing related to that in the forum.
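
For what it is worth, a sketch of what that wiping might look like with the wipefs tool from util-linux, if it is available. Listing is safe; erasing is not, so double-check the reported offset first (the <offset> below is a placeholder, not a real value):

sudo wipefs /dev/sda              # just list the signatures found on the disk
sudo wipefs -o <offset> /dev/sda  # erase the signature at the reported offset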

This just looks flaky. As a developer you would think: if it is not actually RAID, try a non-RAID boot. At the moment I am just hitting the reset button several times until it finally boots, which is not a good experience. It is just flaky... a lousy Ubuntu experience.

No one seems to know the answer to this. One last try at getting someone who knows the boot process really well. If I can fix this I might then tackle the thing 10.04 has in common with 9.10: very slow DNS lookups, even with IPv6 disabled.

Thx.

David

olddave1
May 30th, 2010, 10:50 PM
Bump

otx
July 13th, 2010, 02:40 PM
This is what I have found... I'm testing right now; I will be back with the results soon.

Edit: I can confirm that it works! It seems this is a weird bug in the Ubuntu installer, and it only affects systems with 500GB hard disk configurations. I created two partitions, one of 496GB and one of 4GB for swap (RAID 1). The installer shows some remaining free space, but this is probably due to the bug quoted below. So take care with the sizes of the partitions (they should total no more than 500GB) and everything will be fine.


[Bug 569900] Re: mount: mounting /dev/md0 on /root/ failed: Invalid argument
Thomas Krause
Thu, 06 May 2010 03:21:03 -0700

OK, I think I found the problem and a solution/workaround:

When you ask the installer to partition the free space on the disk, it will prompt you with the size of the new partition. By default this is "500.1 GB". If you first add a smaller swap and then a second partition, the second will be smaller, but still something like "482.1 GB".

If you enter "500GB" by hand (or, with multiple partitions, sizes whose sum is not bigger than 500GB) then everything works fine and as it should.

Maybe 500.1 GB *is* the right number for the 500GB drive, but I somehow doubt it and blame the installer for choosing a wrong default value ;-)

--
mount: mounting /dev/md0 on /root/ failed: Invalid argument
https://bugs.launchpad.net/bugs/569900

trarman
July 23rd, 2010, 02:59 PM
I have a Mac mini server with two 500GB hard drives and encountered this problem installing 10.04. I can confirm that just dropping the .1GB from your partitioning will resolve the issue.

wkulecz
July 23rd, 2010, 03:17 PM
Funny, I got this error message during the update when the grub-update thing was run. I've been frustrated by the way the system can't decide where my hard drives should be; the mix of IDE and SATA drives confuses its feeble mind.

But the updater also offered the option of installing grub2 to all my drives, which I allowed since I've switched all mounts to UUIDs. Fortunately I had no issues upon reboot, beyond DKMS failing to rebuild my VirtualBox modules, so I had to do that manually (a PITA, since the reboot was days ago but I didn't need to fire up VirtualBox until yesterday). How many more issues have I not yet discovered? :(
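
For anyone hitting the same DKMS failure, the manual rebuild is usually one of these, depending on how VirtualBox was installed (both are guesses at your setup rather than a guaranteed fix):

sudo dkms autoinstall           # rebuild all registered DKMS modules for the running kernel
sudo /etc/init.d/vboxdrv setup  # the rebuild script shipped with Sun/Oracle VirtualBox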