Hi, whenever I set up a new RAID 0 array it works fine until I reboot and try to mount it again. Here's what I did to set it up
(note: /dev/sdc is my main drive; /dev/sda and /dev/sdb are the new SATA hard drives):
Code:
# fdisk /dev/sda
Command (m for help): n   (create a new primary partition)
Command (m for help): t   (then select "da" for non-fs data)
Command (m for help): w   (write changes to disk and quit)

# fdisk /dev/sdb
Command (m for help): n   (create a new primary partition)
Command (m for help): t   (then select "da" for non-fs data)
Command (m for help): w   (write changes to disk and quit)

And here all works successfully:

Code:
# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sda1 /dev/sdb1
# mkfs.ext3 /dev/md0
# mkdir /raid1
# mount /dev/md0 /raid1
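For completeness, here is the /etc/fstab line I expect to need eventually so the filesystem mounts at boot (just a sketch; I haven't added it yet, since /dev/md0 keeps disappearing anyway):

Code:
# assumed fstab entry (not in place yet)
/dev/md0   /raid1   ext3   defaults   0   2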
But when I restart, /dev/md0 is gone.
This is on a clean Ubuntu 9.04 x86 installation.
Also, here is some information about the system before the restart:
Code:
root@ubuntu-raid:~# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdb1[1] sda1[0]
      8385664 blocks 64k chunks

unused devices: <none>

Here is my mdadm.conf:

Code:
root@ubuntu-raid:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Sun, 26 Jul 2009 16:16:31 -0400
# by mkconf $Id$

And the array detail:

Code:
root@ubuntu-raid:~# mdadm --query --detail /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Sun Jul 26 16:18:35 2009
     Raid Level : raid0
     Array Size : 8385664 (8.00 GiB 8.59 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Jul 26 16:18:35 2009
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           UUID : 6b402e82:fec208ab:c4e69391:c984b73c (local to host ubuntu-raid)
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
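One thing I notice is that there are no ARRAY lines under "definitions of existing MD arrays" in mdadm.conf. Do I need to append the definition myself? My guess (untested) is something like:

Code:
# assumed fix (untested): capture the running array's definition
root@ubuntu-raid:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf

which I'd expect to append a line roughly like this, using the UUID from the detail output above:

Code:
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=6b402e82:fec208ab:c4e69391:c984b73c

and presumably the initramfs would then need regenerating so the boot-time assembly sees the change (again, just a guess):

Code:
root@ubuntu-raid:~# update-initramfs -u

Or is the "DEVICE partitions" superblock scan supposed to find and assemble the array on its own?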
Now I reboot the machine (#shutdown -r now). During reboot, I see a message saying:

Code:
Starting MD monitoring service mdadm --monitor          [ OK ]

which I believe is a good thing.
Now back at the shell I type:

Code:
root@ubuntu-raid:~# ls /dev/md* -l
brw-rw---- 1 root disk 254, 0 2009-07-26 16:26 /dev/md_d0
lrwxrwxrwx 1 root root      7 2009-07-26 16:26 /dev/md_d0p1 -> md/d0p1
lrwxrwxrwx 1 root root      7 2009-07-26 16:26 /dev/md_d0p2 -> md/d0p2
lrwxrwxrwx 1 root root      7 2009-07-26 16:26 /dev/md_d0p3 -> md/d0p3
lrwxrwxrwx 1 root root      7 2009-07-26 16:26 /dev/md_d0p4 -> md/d0p4

/dev/md:
total 0
brw------- 1 root root 254, 0 2009-07-26 16:26 d0
brw------- 1 root root 254, 1 2009-07-26 16:26 d0p1
brw------- 1 root root 254, 2 2009-07-26 16:26 d0p2
brw------- 1 root root 254, 3 2009-07-26 16:26 d0p3
brw------- 1 root root 254, 4 2009-07-26 16:26 d0p4

And no /dev/md0 exists anymore! Instead there is a /dev/md_d0 device with four partition nodes I never created.
Also, mdstat now shows this:

Code:
root@ubuntu-raid:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md_d0 : inactive sda1[0](S)
      4192832 blocks

unused devices: <none>

Note that the array is inactive and only /dev/sda1 was picked up, as a spare (S) at that; /dev/sdb1 is missing entirely. Any suggestions on what I am doing wrong would be greatly appreciated.
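In the meantime, I assume I can get the array back by hand with something like the following (a guess from the man page; I haven't confirmed it, and it presumably won't survive the next reboot either):

Code:
# assumed manual workaround (not a permanent fix)
root@ubuntu-raid:~# mdadm --stop /dev/md_d0
root@ubuntu-raid:~# mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
root@ubuntu-raid:~# mount /dev/md0 /raid1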
Thanks.



