EDIT: I changed the fsck pass number (the last field) for /home to 0 in fstab and now everything boots. Is that required for RAID arrays? Problem solved, I guess.
Sorry for such a long-winded post, but I'm at my wits' end. PLEASE HELP!
Here goes...
I recently built a new machine and needed to move /home and everything in it from one RAID1 array to a new, larger RAID1 array. To make sure I didn't lose any data by accident, I removed one of the working drives and set it aside, then stopped the array and mounted the remaining drive's partition manually. I'm using Server 9.10 64-bit and doing everything via SSH.
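Roughly what I ran (from memory, so device names may be slightly off):
Code:
# stop the old array, then mount the surviving member's partition in its place
sudo mdadm --stop /dev/md0
sudo mount /dev/sdb1 /home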
I then installed the first of the new, soon-to-be-RAIDed drives, partitioned it, and set up the new array. /dev/sdc1 is the new disk's partition and 'missing' stands in for the still-mounted old disk (sdb1):
Code:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdc1
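For the record, the partitioning step was just one full-disk partition of type fd. I did it interactively, so this is a sketch rather than my exact session:
Code:
# single partition spanning the disk, type fd (Linux raid autodetect)
echo ',,fd' | sudo sfdisk /dev/sdc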
I then formatted /dev/md0:
Code:
sudo mkfs.ext4 /dev/md0
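At this point the array was intentionally degraded (one member 'missing'), which I confirmed with:
Code:
cat /proc/mdstat
sudo mdadm --detail /dev/md0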
Then I copied the files over to it, following this tutorial (link). I used sudo su because the files would not copy otherwise due to permissions:
Code:
cd /home
sudo su
find . -depth -print0 | cpio --null --sparse -pvd /mnt/NEW_HOME
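To sanity-check the copy I just compared sizes, something like (sparse files can skew this a bit):
Code:
sudo du -sh /home /mnt/NEW_HOME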
I shut down, removed the old, smaller /dev/sdb, installed the new, larger drive, and powered back on. Then, after partitioning and formatting the new drive, I added it to the array:
Code:
sudo mdadm --add /dev/md0 /dev/sdb1
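...and watched the rebuild progress with:
Code:
watch -n 5 cat /proc/mdstat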
Once the sync finished, I checked the files: they were all there and in working order, and both drives showed as active sync. I then changed fstab to mount the new array as /home:
Code:
bryan@NAS:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
# / was on /dev/sda1 during installation
UUID=6d1a79fe-7431-4a14-b10c-a1bad83e566d / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=fef142b2-2bd9-4d96-9384-770c409a8618 none swap sw 0 0
#CD Rom
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0
#data RAID1 array
UUID=ef49f27c:490d5346:cced5de7:ca715931 /home ext4 nodev, nosuid 0 2
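For what it's worth, the comments in fstab say to pull UUIDs with blkid; for the array that would be:
Code:
sudo blkid -o value -s UUID /dev/md0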
Then I rebooted.
After the reboot I tried to log back in via SSH, but the server would not accept connections. So I hooked it up to a monitor, and it was stuck waiting for /home (UUID=ef49f27c:490d5346:cced5de7:ca715931) to come online. I entered the rescue shell, commented out the /home entry in fstab, and rebooted. The server came up again but complained that it couldn't chdir to /home, as you'd expect.
I checked the array to make sure it was in working order; it wasn't:
Code:
bryan@NAS:/$ cat /proc/mdstat
Personalities :
md0 : inactive sdc1[1] sdb1[0]
1953519872 blocks
unused devices: <none>
bryan@NAS:/$ dmesg | tail
...
[ 293.858825] EXT4-fs (md0): unable to read superblock
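If it helps with diagnosis, I can also post the member superblocks, i.e. the output of:
Code:
sudo mdadm --examine /dev/sdb1 /dev/sdc1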
So I stopped the array and tried to reassemble it:
Code:
bryan@NAS:/$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0
bryan@NAS:/$ sudo mdadm --assemble --scan
mdadm: /dev/md0 has been started with 2 drives.
bryan@NAS:/$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[0] sdc1[1]
976759936 blocks [2/2] [UU]
unused devices: <none>
OK... Then:
Code:
bryan@NAS:/$ sudo mount -t ext4 /dev/md0 /home
bryan@NAS:/$ ls /home/
bryan cupsys lost+found samba
EDIT: Forgot to mention that both partitions are of type Linux raid autodetect (fd).
It works, so what the F is going on, and what did I screw up? The drives are WD10EARS, which aren't certified for RAID use, but they work without error once I do the steps above, so what's the deal? After every reboot I have to do it all over again.
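One thing I'm wondering: do I need to record the array in mdadm.conf so it assembles at boot? Something like this is what I'd try (a sketch, untested):
Code:
# append the current array definition, then rebuild the initramfs
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u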
Are the drives junk? Did I screw up by issuing the original commands as sudo su?
Which leads to my second question:
When looking at the ownership of most of the folders on these drives, root is listed as both the owner and the group. Is this right? Wouldn't USER normally own /home/USER and all its files?
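I assume I could just re-own everything with something like the command below, but I'd like to understand why the ownership changed first:
Code:
# hypothetical fix for my user's home dir
sudo chown -R bryan:bryan /home/bryan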
Thanks in advance!!!