
View Full Version : 12.04.1, alternate CD hangs on grub-install - RAID



ladasky
December 3rd, 2012, 10:56 PM
Hi, folks,

I managed to bork my 11.10 installation, probably with an unnecessary NVidia GPU driver upgrade. But I've been meaning to upgrade to 12.04 for a while, and so I'm trying to do that. I'm tired of managing my GPU driver issues manually, and I'm hoping that 12.04.1 will solve that problem for me.

Here's my configuration information:


Gigabyte GA-MA78-US2H motherboard, 8 GB RAM, AMD Phenom II x6 1100T CPU
NVidia 460 family GPU card
Ubuntu 11.10 OS, 64-bit; no other operating systems to complicate matters
Two hard drives operated in RAID1 configuration, administered by mdadm; four partitions; swap, root for 11.10, home, and one spare partition that I reserved for the root for 12.04

I downloaded both the standard and the alternate installations of 12.04.1, x86, 64-bit. Both passed their checksum tests, and I burned two CD's. My system boots from the 12.04.1 live CD (with the nomodeset option). I am using it right now.

Because I'm using RAID, I have been using the alternate CD's to install Ubuntu. I've been doing this with success since 10.10. But when I try this with 12.04.1, I only get up to grub-install, and then the system hangs. I've waited 15 minutes to an HOUR before giving up. The hard drives were thrashing the whole time. I had this exact same problem when I tried plain-old 12.04 (http://ubuntuforums.org/showpost.php?p=11918328&postcount=6) (rather than 12.04.1) several months ago.

Does anyone have any suggestions as to why grub-install is failing, and how I might fix it?

Alternately: in a recent thread another poster hinted that you can install to a RAID from the standard, live CD (http://ubuntuforums.org/showpost.php?p=12298288&postcount=11), if you know what you are doing. He didn't give directions, though. How do I proceed? I figure that the first step would be to build my RAID. I can do that. From the Live CD, I am able to:


sudo apt-get install mdadm
build all of my RAID1 partitions with a series of sudo mdadm --create commands (EDIT: sudo mdadm --assemble --scan works better!)
mount both the 11.10 root partition and the home partition.

Can I just run the graphical installation process from there? The empty partition that I reserved for 12.04 won't mount, obviously, because it isn't formatted yet.
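Spelled out, the live-session sequence I have in mind would be something like this (just a sketch; it assumes the arrays already exist on disk and only need assembling):

```shell
# Install mdadm in the live session (the desktop CD doesn't ship it)
sudo apt-get install mdadm

# Re-assemble the existing arrays from the RAID superblocks on the members
sudo mdadm --assemble --scan

# Confirm the md devices came up before launching the installer
cat /proc/mdstat
```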

oldfred
December 4th, 2012, 12:03 AM
Added RAID to your title as only a few know RAID. I do not.

Run the BootInfo report. You probably need to install the mdadm first.

Post the link to the BootInfo report that this creates. It is part of Boot-Repair:
https://help.ubuntu.com/community/Boot-Info
Boot-Repair also handles LVM, GPT, separate /boot, and UEFI dual boot:
https://help.ubuntu.com/community/Boot-Repair
You can repair many boot issues with it, or use 'Create BootInfo report' (under Other Options) and post the link it creates, so we can see your exact configuration and diagnose advanced problems.
Install it in an Ubuntu liveCD or USB, or use the full Repair CD with Boot-Repair included (for newer computers):
http://sourceforge.net/p/boot-repair/home/Home/
https://help.ubuntu.com/community/UbuntuSecureRemix

Boot-Repair runs the bootinfoscript as part of the BootInfo report it creates.
The boot script is able to search LVM partitions if the LVM2 package is installed; Fedora needs to be mounted for os-prober to see it.
# ("apt-get install lvm2" in debian based distros)
# Is able to search Linux Software Raid partitions (MD Raids) if
# the "mdadm" package is installed.
sudo apt-get install lvm2
sudo apt-get install mdadm
sudo apt-get install gawk
sudo apt-get install xz-utils
# unlzma is equivalent to xz --format=lzma --decompress.

darkod
December 4th, 2012, 12:12 AM
Yeah, after adding mdadm in live session and creating the md devices, you should be able to simply start the install with the desktop icon.

It would be better to use manual partitioning, the Something Else option. When the partitions list shows, the md devices should be there. Do not try to mount them before you begin the install process; you need to install onto unmounted devices.

Select /dev/sda for the bootloader installation if that's one of the disks part of the array. You can add grub2 to /dev/sdb once the system is running.
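From memory (so take this as a sketch), adding the bootloader to the second disk afterwards is a single command from the running system:

```shell
# Install GRUB to the MBR of the second array member, so the machine
# can still boot if the first disk dies
sudo grub-install /dev/sdb

# Optionally regenerate the menu afterwards
sudo update-grub
```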

I don't remember whether I have tried this with the live cd, but in theory it should work.

It's very strange that the alternate cd is giving you trouble. You might be affected by some bug, strange hardware combination, etc.

PS. Forgot to mention. I would actually create the partitions with parted too, before starting the GUI install. Right after you create the md devices you plan to use, use parted to create a single partition on each md device. I think this step was necessary. Later, in the manual partitioning step, you only select the partitions to use and set their mount points; you don't actually create them in that step.
Try both options, I'm not 100% sure which one will work, maybe both.
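The parted step would look roughly like this, shown for /dev/md2 only (a sketch; repeat for each md device you plan to install onto):

```shell
# Give the md device a partition table and one partition spanning it
sudo parted /dev/md2 mklabel msdos
sudo parted -a optimal /dev/md2 mkpart primary ext4 0% 100%

# Verify before starting the installer
sudo parted /dev/md2 print
```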

ladasky
December 4th, 2012, 09:05 AM
I'm not done yet, but I'm popping in with a status update.

Before I had read any of your answers, I attempted a live CD installation onto my RAID, and it failed. HOWEVER, I think that part of the problem was that the live CD commandeered one of my physical swap partitions. The system wouldn't let me redefine the new RAID1 swap while one of its components was in use. Also, selections with the partition editor inside the graphical installation utility were clunky and did odd things.

I have done some research and discovered the command sudo swapoff -a, which disables all swap partitions. With 8 GB of RAM, I should be OK without swap, at least during the installation process.
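For the record, the commands I'm planning to use, with checks (swapon -s should list nothing but its header line once swap is off):

```shell
# Disable every active swap area
sudo swapoff -a

# Verify: no swap devices should remain listed
swapon -s

# Double-check: the Swap line should read 0 total
free -m
```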

oldfred: I followed your links to the boot-repair package. I got some errors following the installation directions, but that may be because the instructions date back to Ubuntu 11.04. Anyway, boot-repair runs. On startup, I got a message which read: "DMRaid packages may interfere with mdraid. Uninstall them?" I selected "yes", as I'm using mdadm and not dmraid.

My boot report is here (http://paste.ubuntu.com/1409742/). I don't know how to read most of it. But one thing that jumps out at me (see lines 67-73) is that I seem to have no 11.10 partition at all on my RAID any more. It should appear on /dev/md/1. Instead, I have 12.04.1 there? No, I'm pretty sure that I don't. Funny -- I was TRYING to install it on /dev/md/2.

In any case, my 11.10 system was already NOT working before I started fussing with this 12.04.1 installation. Something different is causing grub-install to hang when I try the 12.04 alternate install CD's.

I see error messages indicating that I have partitions "outside the disk" (lines 476, 530, and 656). I have no idea how that might have happened, but the report suggests that boot-repair knows how to fix those problems, and I am going to let it try to do so.

oldfred
December 4th, 2012, 05:23 PM
I do not know RAID, so Darko or someone who knows RAID will have to help with those issues.

Normally this means e2fsck is needed; I'm not sure whether the same applies with RAID or not.


md/2: _______________________

File system:
Boot sector type: -
Boot sector info:
Mounting failed: mount: unknown filesystem type ''

The partitions outside disk for sr0 are not an issue. I think it just has to do with the oversize CDs.
But line 656 looks like md3, and it does not show details to confirm. On devices sda and sdb, the partition table itself looks ok.

But you also seem to have errors in the RAID partition table, which may relate to the drives being slightly different in size; see the total sectors, and note that the last partition differs:


/dev/sda4 93,747,200 1,250,263,039 1,156,515,840 fd Linux raid autodetect
/dev/sdb4 93,747,200 1,250,260,991 1,156,513,792 fd Linux raid autodetect


I thought with RAID the partitions had to be the same size, or that sda4 would need to be a bit smaller to accommodate the slightly smaller sdb4, since sdb is a few sectors smaller than sda. Or, if RAID is using the entire partition, does the mismatch not matter??

darkod
December 4th, 2012, 05:51 PM
1. I don't think the difference in size between sda4 and sdb4 is a problem. As we can see, the /dev/md3 sector count is smaller than both numbers anyway, so it looks like the md3 device is correctly assembled despite the sda4/sdb4 difference. Ideally the partitions should be identical in size, but md3 appears to have been assembled anyway, at the slightly smaller common size.

2. The /dev/md2 fdisk results are a little different compared to the other md devices. It actually shows a partition on it, /dev/md2p1. The other md devices don't show this p1 partition, and that seems to be the correct state (I just checked my server's md devices and they don't have a p1 in fdisk).
I am not sure if this is an issue, or simply that md2 was created with another partition on top of it. I am also not sure if that explains why md2 doesn't show in any blkid results.

3. The partition out of disk about sr0 is no issue, like oldfred says. That's a common message about CDs.

If we put md2 aside, which shouldn't affect your new 12.04 installation on md1 anyway, I actually don't see anything bad in the bootinfo. It all looks normal.

If it still fails to install grub2, you can try adding it from live session.

ladasky
December 4th, 2012, 10:46 PM
Update:

I have things sort of working. I think that GRUB is an issue for me, and for other people. It may be a Linux bug. I believe that these links are relevant:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=508834
http://ubuntuforums.org/archive/index.php/t-1356844.html
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/508863

Here are the details of what I've tried.

After completing my Boot Repair process, I once again ran the GUI installer. The installer took forever to run, hanging on GRUB-related commands. I decided to watch and wait, because the GUI installer gave more feedback than the console installer on the alternate CD. Every 15 minutes or so, a new message would appear. After grinding through two hours or so, the installer gave indications that it had, in fact, failed. I have somewhat more detailed notes, but I don't think they're relevant.

I couldn't close the installer properly, I had to reboot the machine. But when I booted, I got... a GRUB menu, and then I booted into 12.04.1 from my hard drive, but without my user accounts. Apparently the system HAD installed, but failed to mount my /dev/md/3 as /home.

Thinking that I had had a broken system that Boot Repair had probably fixed, I decided to go back to the alternate install CD. This time, I resolved to WAIT as long as it took. And after waiting the full TWO HOURS, grub-install and update-grub finished, a grub.cfg file got written, and I got a working system with 12.04.1 and my /home directory!

But I'm not done. The Update Manager stepped in, of course, and offered me 231 updates. And I'm hung up again! Where? I opened the Details panel of the Applying Changes window. The following two commands, which occur sequentially, take 15-30 minutes EACH to execute:


run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.2.0.34-generic /boot/vmlinuz-3.2.0.34-generic
Generating grub.cfg ...

See the common element? GRUB.

During the run-parts command, the entire system lagged. Mouse clicks on other applications were slowed to a crawl. I started writing this message from my backup laptop because, for a while, my main machine was basically unusable.

I'm almost ready to mark this thread as solved. Before I do, I hope that someone will be able to help me with the slow GRUB problem. That seems to be the primary cause of my troubles.

Thanks!

oldfred
December 5th, 2012, 12:06 AM
Both Boot-Repair & grub scan the drives. If any partition does not mount correctly or has some corruption, then they have issues. So, did you run fsck on your partitions?

ladasky
December 5th, 2012, 09:29 PM
Update:

The system would NOT shut down properly yesterday. I left it on the Ubuntu closing splash page, the one with the five dots as a progress indicator, for a full hour before forcing a shutdown with the power switch.

Booted on the live CD this morning. This page (http://serverfault.com/questions/429778/linux-software-raid-how-to-fsck-on-hard-drive) suggests that fsck isn't the right place to start looking for problems on a RAID.

To disable the swap, I executed:

sudo swapoff -a

I have now been waiting 90 minutes for the results of:

sudo badblocks /dev/sda

After I see the result, I'll execute:

sudo badblocks /dev/sdb

More soon. If you have another opinion about fsck on RAID, I'm open to it, I'm hardly an expert on these matters.
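One thing I learned the hard way: badblocks prints nothing until it actually finds a bad block. The -s and -v flags at least show progress (read-only scanning is the default mode, so this shouldn't touch the data):

```shell
# Read-only scan with a progress indicator and a verbose summary.
# Note: badblocks counts in 1024-byte blocks by default, not 512-byte sectors.
sudo badblocks -sv /dev/sdb
```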

darkod
December 5th, 2012, 09:37 PM
Actually, that doesn't say that you shouldn't use fsck. It's more related to the physical check that the poster was asking about.

I would still start with fsck of the md devices (without swap). The problem in your installation might be on the filesystem level, not physical. I do think fsck on md devices can help, depending on the problem.

You can also check the SMART data on the hdds if you suspect physical failures. You might need to install smartmontools so you can use them.
https://help.ubuntu.com/community/Smartmontools
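Something like this (a sketch; the attribute names vary a little between drive vendors):

```shell
sudo apt-get install smartmontools

# Quick overall health verdict from the drive's own firmware
sudo smartctl -H /dev/sdb

# Full attribute dump; watch Reallocated_Sector_Ct and Current_Pending_Sector
sudo smartctl -a /dev/sdb

# Kick off a short (~2 minute) self-test, then re-read the attributes with -a
sudo smartctl -t short /dev/sdb
```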

ladasky
December 6th, 2012, 10:25 AM
Update:

I made sure to back up my /home before proceeding.

From the installation of 12.04.1 that I accomplished with great difficulty, I executed a mount command. This shows, among other things:


/dev/md2p1 on / type ext4 (rw, errors=remount-ro)
/dev/md3 on /home type ext4 (rw)

I have no idea how I ended up with /dev/md2p1 instead of plain-old /dev/md2. A partition inside a partition? And that's where the OS installed? That isn't what I wanted. If I was sure that it wouldn't take me three hours to reinstall 12.04.1, I might reformat both of my hard drives and start again.

Onward. From the live CD, with swap disabled and all hard drive partitions unmounted:

badblocks /dev/sda took ~2.5 hours, during which time the hard disk access light was on steadily, and returned... nothing. No errors, apparently.

badblocks /dev/sdb took TEN hours, even though it's the same size as /dev/sda, during which time the hard disk access light was on INTERMITTENTLY, and returned...

625130700
625130701
625130702
625130703
625130704
625130705
625130706
625130707

OK, I have bad blocks. I'm not sure exactly where on /dev/sdb they reside. So: I tried fdisk -l :

Disk /dev/sda: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000970f

Device Boot Start End Blocks Id System
/dev/sda1 2048 15624191 7811072 fd Linux raid autodetect
/dev/sda2 15624192 54685695 19530752 fd Linux raid autodetect
/dev/sda3 54685696 93747199 19530752 fd Linux raid autodetect
/dev/sda4 93747200 1250263039 578257920 fd Linux raid autodetect

Disk /dev/sdb: 640.1 GB, 640133946880 bytes
255 heads, 63 sectors/track, 77825 cylinders, total 1250261615 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009028a

Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 15624191 7811072 fd Linux raid autodetect
/dev/sdb2 15624192 54685695 19530752 fd Linux raid autodetect
/dev/sdb3 54685696 93747199 19530752 fd Linux raid autodetect
/dev/sdb4 93747200 1250260991 578256896 fd Linux raid autodetect

Disk /dev/md0: 7997 MB, 7997476864 bytes
2 heads, 4 sectors/track, 1952509 cylinders, total 15620072 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 20.0 GB, 19998367744 bytes
2 heads, 4 sectors/track, 4882414 cylinders, total 39059312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md2: 20.0 GB, 19998367744 bytes
255 heads, 63 sectors/track, 2431 cylinders, total 39059312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000c8cb

Device Boot Start End Blocks Id System
/dev/md2p1 2048 39057407 19527680 83 Linux

Disk /dev/md3: 592.1 GB, 592133873664 bytes
2 heads, 4 sectors/track, 144563934 cylinders, total 1156511472 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md3 doesn't contain a valid partition table

If I total up the number of blocks indicated on /dev/sdb, I get 625,129,472. But my first bad block is numbered 625,130,700? That high number suggests that the bad blocks aren't inside any of the partitions!

I don't know whether this is a problem. In any case, the HUGE difference in time between the badblocks scan of /dev/sda and /dev/sdb concerns me. It suggests that /dev/sdb is running at a quarter the speed that it should. Is it dying on me?
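To double-check the arithmetic: badblocks (like fdisk's Blocks column) counts in 1024-byte blocks by default, so each reported block spans two 512-byte sectors. Converting my first bad block:

```shell
bad_block=625130700                    # first number badblocks reported (1024-byte units)
sector=$((bad_block * 2))              # equivalent 512-byte sector
echo "$sector"                         # 1250261400

# /dev/sdb4 ends at sector 1,250,260,991; the disk has 1,250,261,615 sectors total
echo "past end of sdb4: $((sector > 1250260991))"   # 1 -> beyond the last partition
echo "still on disk:    $((sector <= 1250261615))"  # 1 -> within the physical disk
```

So the bad blocks sit in the small unpartitioned tail of the drive, which fits what I saw.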

Finally, I tried sudo apt-get install mdadm; then sudo mdadm --assemble --scan; and finally, fsck on each RAID partition that I could. I got several error messages when I tried various forms of the fsck command. The fsck choices seem to be limited on the live CD. The live CD does not include fsck.swap, if it even exists, so I could not scan my swap partition, /dev/md0. The correct syntax to check an ext4 partition, per this post (http://ubuntuforums.org/showthread.php?t=1181661), is apparently fsck -fyv <device name>.

sudo fsck -fyv /dev/md1 returns:

fsck from util-linux 2.20.1
e2fsck 1.42 (29-Nov-2011)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences: +4242389 +4242397 +(4243248--4243272) +(4243275--4243295) +(4244844--4244847) +(4246075--4246079) +(4246100--4246111) +(4246132--4246143) +(4246180--4246200) +(4246202--4246203) +(4246231--4246239) +4246257 +(4246260--4246271) +4246305 +(4246308--4246319) -(4246328--4246335) +4246369 +(4246372--4246391) -4246399 +(4246417--4246431) +(4246461--4246463) +(4246481--4246495) +(4246513--4246527) +(4246595--4246607) -(4246608--4246623) +(4246647--4246655) +(4246727--4246735) -(4246736--4246751) +(4246775--4246783) -(4246864--4246879) +(4246903--4246911) -(4246992--4247007) +(4247029--4247039) +(4247111--4247119) -(4247120--4247135) +(4247157--4247167) +(4247225--4247231) +(4247287--4247295) +(4247331--4247336) +(4247340--4247343) -(4247344--4247359) +(4247418--4247423) +4247471 -(4247472--4247487) +(4247537--4247543) -(4247984--4247999) -4248029 +(4248030--4248031) -(4248052--4248063) -(4248807--4248824) -(4248826--4248827) -4248849 +(4248850--4248851) -(4248852--4248863) -4248881 +(4248882--4248883) -(4248884--4248895) +(4248958--4248959) -4249025 +(4249026--4249027) -(4249028--4249039) +(4249082--4249083) -(4249084--4249087) -4254141 +(4254142--4254143) -(4254168--4254175) -(4254195--4254207) -(4254440--4254451) -4254454 +4254455 -(4254531--4254543) -(4254584--4254591) -(4254660--4254671) -4254709 +(4254710--4254711) -(4254712--4254719) -(4254788--4254799) -4254837 +(4254838--4254839) -(4254840--4254847) +(4254914--4254915) -(4254916--4254927) -(4254963--4254975) -(4255035--4255039) +(4255078--4255079) -(4255080--4255087) +(4255174--4255175) +(4255222--4255223) -(4255268--4255271) +(4255342--4255343) +(4255422--4255423) +(4255478--4255479) +(4255530--4255531) -(4255532--4255535) +(4255598--4255599) -(4255676--4255679) +(4255734--4255735) +(4255982--4255983) +(4256038--4256039) +(4256166--4256167) +(4256234--4256235) -(4256902--4256903) -(4256946--4256947) -(4257018--4257019) -(4257166--4257167) -(4257542--4257543) -(4257662--4257663) 
-(4257726--4257727) +(4260066--4260092) -(4261208--4261234)
Fix? yes


/dev/md1: ***** FILE SYSTEM WAS MODIFIED *****

153673 inodes used (12.59%)
99 non-contiguous files (0.1%)
168 non-contiguous directories (0.1%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 113719/17
720026 blocks used (14.75%)
0 bad blocks
1 large file

86743 regular files
13115 directories
55 character device files
25 block device files
0 fifos
33 links
53725 symbolic links (39848 fast symbolic links)
1 socket
--------
153697 files

Whatever those block bitmap differences are, they just got fixed.

Next, sudo fsck -fyv /dev/md2

fsck from util-linux 2.20.1
e2fsck 1.42 (29-Nov-2011)
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/md2

The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>

I'm not sure if this is serious or not. The superblock could not be read, but this is my mystery partition which apparently has a child partition inside it. I did not try rerunning with an alternate superblock.
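For anyone who hits this later: I believe (untested by me here, and using /dev/md2p1 only as an example device) you can list a filesystem's backup superblock locations read-only before pointing e2fsck at one of them:

```shell
# Print superblock and backup-superblock locations; read-only
sudo dumpe2fs /dev/md2p1 | grep -i superblock

# Alternative: a dry run of mke2fs; -n means "don't actually create anything",
# it just prints where the backups would live for a filesystem of this size
sudo mke2fs -n /dev/md2p1
```

Then, as the error message suggests, e2fsck -b <backup block> <device> would try one of those copies.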

sudo fsck -fyv /dev/md2p1 yields:

fsck from util-linux 2.20.1
e2fsck 1.42 (29-Nov-2011)
/dev/md2p1: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong (3891192, counted=3873488).
Fix? yes

Free inodes count wrong (1038877, counted=1037276).
Fix? yes


/dev/md2p1: ***** FILE SYSTEM WAS MODIFIED *****

183332 inodes used (15.02%)
125 non-contiguous files (0.1%)
236 non-contiguous directories (0.1%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 141781/29
1008432 blocks used (20.66%)
0 bad blocks
1 large file

109752 regular files
18007 directories
55 character device files
25 block device files
0 fifos
34 links
55483 symbolic links (41433 fast symbolic links)
1 socket
--------
183357 files

Another minor(?) fix.

And finally, sudo fsck -fyv /dev/md3 returns:


fsck from util-linux 2.20.1
e2fsck 1.42 (29-Nov-2011)
Pass 1: Checking inodes, blocks, and sizes
Inodes that were part of a corrupted orphan linked list found. Fix? yes

Inode 17580946 was part of the orphaned inode list. FIXED.
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Unattached inode 17580946
Connect to /lost+found? yes

Inode 17580946 ref count is 65535, should be 1. Fix? yes

Inode 17581027 ref count is 1, should be 2. Fix? yes

Unattached inode 17581256
Connect to /lost+found? yes

Inode 17581256 ref count is 2, should be 1. Fix? yes

Pass 5: Checking group summary information

/dev/md3: ***** FILE SYSTEM WAS MODIFIED *****

68877 inodes used (0.19%)
1319 non-contiguous files (1.9%)
15 non-contiguous directories (0.0%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 68718/127/2
38257487 blocks used (26.46%)
0 bad blocks
2 large files

63001 regular files
5841 directories
0 character device files
0 block device files
0 fifos
1 link
25 symbolic links (19 fast symbolic links)
1 socket
--------
68867 files

I'm in a bit over my head. I don't know whether there are any serious issues here or not. I'm rebooting from my hard drive now, to see whether anything has changed. I would especially like to see whether the system shuts down quickly now, instead of taking over an hour (and maybe never shutting down at all). The fsck repairs I just did here took only a few seconds per partition. Therefore I doubt that fsck was the reason that my shutdown was delayed the last time.

darkod
December 6th, 2012, 11:19 AM
The Blocks column in fdisk is not in the same units as sectors, which just creates confusion. And I believe badblocks also reports in 1024-byte blocks by default, i.e. two 512-byte sectors per block.

If so, your first bad block corresponds to sector 625,130,700 x 2 = 1,250,261,400. The fdisk results show /dev/sdb has 1,250,261,615 sectors in total, so the bad blocks are physically on the disk, but just past the end of /dev/sdb4 (which ends at sector 1,250,260,991), in the unpartitioned tail of the drive. That fits your observation that they fall outside the partitions.

We are waiting for an update on whether the fsck helped. Just because it finished fast doesn't mean it didn't fix at least some of your problems.

ladasky
December 7th, 2012, 11:14 PM
I think (hope) this may be my final report.

I have the system running, though I have seen several strange behaviors. After completing the fsck, I have restarted the machine four times. In summary: I have some intermittent problems, which may or may not be fixing themselves with software updates and restarts, and which may or may not have had anything to do with file system repairs. It's hard to tell!

You can stop reading here if you don't want the details. I'm going to mark this thread as solved, though I would appreciate any insight on my problems that you may have.

● On the first restart, I did exactly one thing: installed the NVidia GPU driver. There are three choices: "current updates", which I think may be the 295.xx driver series; "version-experimental 304"; and "version-experimental 310".

I am one of those unfortunate people who ended up rebuilding my GPU driver manually on a regular basis (http://ubuntuforums.org/showthread.php?t=2055141), a problem that I hope migrating to Ubuntu 12.04 will eliminate. Over on Ubuntu 11.10, I found that kernel updates would (seem to) break compatibility with certain video drivers. I went through two or three minor revisions of the 295.xx GPU drivers to compensate, and eventually found myself forced to try a 304.xx driver. Anyway, for this installation of Ubuntu 12.04 I chose "current-updates", because I can see that that is what is supported -- and I really don't want to get out ahead of Ubuntu support if I don't have to.

● On the second restart, I thought everything was running fairly normally for a while; at least the web browser seemed fine. But when I attempted to run LibreOffice Writer, the system slowed to an agonizing crawl, including the whole GUI. I was stuck on the LibreOffice splash page. Intermittent hard disk access dragged on for several minutes. I attempted a shutdown, as far as the sluggish GUI would allow. The shutdown process also stalled. After 30 minutes, I gave up and cold-started the machine.

I remember seeing terrible lags like this once before, the first time that I encountered a problem with the NVidia GPU drivers (http://ubuntuforums.org/showthread.php?t=1960714) while using Ubuntu 11.10. But that's a problem I should have addressed on the first restart, when I installed the current driver.

● On the third restart, loading the OS was very slow. If this was due to some hard-disk cleanup that the system was doing as a consequence of my untidy shutdown, the system didn't say. Once I got past the login screen, everything seemed to be operating normally (including the previously uncooperative LibreOffice Writer), with one exception: I had no audio! Under "sound settings," no audio devices were listed except for a "dummy output" device. I downloaded some updates that the Update Manager offered me, and some applications that I need. No troubles, no lag. The second shutdown was clean and quick.

● Now I'm on my fourth restart. Loading the OS was quick this time. LibreOffice remains responsive. And, without me doing anything, I have audio again!

As you can imagine, I'm having trouble pinpointing my problem or, indeed, whether I even still have one. Over to you.

ladasky
January 6th, 2013, 11:54 PM
I know that I marked this thread as solved a month ago. But I have a follow-up report.

One of the two hard disks I was using in my RAID, /dev/sdb, was in fact going bad. As far back as three months ago, I noticed during the system boot process that the BIOS was taking several seconds to recognize my second hard drive. It used to be instantaneous. A few weeks ago, the BIOS delay lengthened to about 30 seconds, and then the system stopped seeing that hard drive completely!

I obtained a replacement hard drive. Because of the weird things that happened with partitions the last time, I rebuilt the RAID from scratch. This time, the installation of Ubuntu 12.04.1 went very smoothly.

I'm not sure how to diagnose a semi-functional hard drive from within Linux. In retrospect, I should have taken that BIOS delay more seriously.