
[ubuntu] 8.10 dmraid issues



EvilNed
October 30th, 2008, 11:03 PM
We have two HP xw4600 workstations; we are doing a fresh install on one and upgrading the other from 8.04. Both are RAID 0, set up through the BIOS, so we are using dmraid, and both are running the 64-bit version. The upgraded machine ran 8.04 flawlessly. Here is the output from a dmraid command:

sudo dmraid -ay -vvv -d
WARN: locking /var/lock/dmraid/.lock
NOTICE: skipping removable device /dev/sdc
NOTICE: skipping removable device /dev/sdd
NOTICE: skipping removable device /dev/sde
NOTICE: skipping removable device /dev/sdf
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: isw metadata discovered
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: isw metadata discovered
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching isw_baeifdbbfi
DEBUG: _find_set: not found isw_baeifdbbfi
DEBUG: _find_set: searching isw_baeifdbbfi_Windows
DEBUG: _find_set: searching isw_baeifdbbfi_Windows
DEBUG: _find_set: not found isw_baeifdbbfi_Windows
DEBUG: _find_set: not found isw_baeifdbbfi_Windows
DEBUG: _find_set: searching isw_baeifdbbfi_Windows
DEBUG: _find_set: searching isw_baeifdbbfi_Windows
DEBUG: _find_set: found isw_baeifdbbfi_Windows
DEBUG: _find_set: found isw_baeifdbbfi_Windows
NOTICE: added /dev/sdb to RAID set "isw_baeifdbbfi"
DEBUG: _find_set: searching isw_baeifdbbfi
DEBUG: _find_set: found isw_baeifdbbfi
DEBUG: _find_set: searching isw_baeifdbbfi_Windows
DEBUG: _find_set: searching isw_baeifdbbfi_Windows
DEBUG: _find_set: found isw_baeifdbbfi_Windows
DEBUG: _find_set: found isw_baeifdbbfi_Windows
DEBUG: _find_set: searching isw_baeifdbbfi_Windows
DEBUG: _find_set: searching isw_baeifdbbfi_Windows
DEBUG: _find_set: found isw_baeifdbbfi_Windows
DEBUG: _find_set: found isw_baeifdbbfi_Windows
NOTICE: added /dev/sda to RAID set "isw_baeifdbbfi"
DEBUG: checking isw device "/dev/sda"
ERROR: isw device for volume "Windows" broken on /dev/sda in RAID set "isw_baeifdbbfi_Windows"
ERROR: isw: wrong # of devices in RAID set "isw_baeifdbbfi_Windows" [4/2] on /dev/sda
DEBUG: checking isw device "/dev/sda"
ERROR: isw device for volume "Linux" broken on /dev/sda in RAID set "isw_baeifdbbfi_Windows"
ERROR: isw: wrong # of devices in RAID set "isw_baeifdbbfi_Windows" [4/2] on /dev/sda
DEBUG: checking isw device "/dev/sdb"
ERROR: isw device for volume "Windows" broken on /dev/sdb in RAID set "isw_baeifdbbfi_Windows"
ERROR: isw: wrong # of devices in RAID set "isw_baeifdbbfi_Windows" [4/2] on /dev/sdb
DEBUG: checking isw device "/dev/sdb"
ERROR: isw device for volume "Linux" broken on /dev/sdb in RAID set "isw_baeifdbbfi_Windows"
ERROR: isw: wrong # of devices in RAID set "isw_baeifdbbfi_Windows" [4/2] on /dev/sdb
DEBUG: set status of set "isw_baeifdbbfi_Windows" to 2
DEBUG: set status of set "isw_baeifdbbfi" to 4
INFO: Activating GROUP RAID set "isw_baeifdbbfi"
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set "isw_baeifdbbfi_Windows"
DEBUG: freeing device "isw_baeifdbbfi_Windows", path "/dev/sda"
DEBUG: freeing device "isw_baeifdbbfi_Windows", path "/dev/sda"
DEBUG: freeing device "isw_baeifdbbfi_Windows", path "/dev/sdb"
DEBUG: freeing device "isw_baeifdbbfi_Windows", path "/dev/sdb"
DEBUG: freeing devices of RAID set "isw_baeifdbbfi"
DEBUG: freeing device "isw_baeifdbbfi", path "/dev/sda"
DEBUG: freeing device "isw_baeifdbbfi", path "/dev/sdb"

badbeeker
October 31st, 2008, 12:14 AM
I have come across this as well. I haven't been able to figure it out myself, since I'm not sure what needs to be done. Anyone else using dmraid?

I upgraded via the upgrade manager and now get the busybox screen.

It seems that when I use the older kernel (2.6.24-21), dmraid loads fine and I'm able to get to the command prompt, but not to the desktop, as the nvidia drivers seem to be tied to the newer kernel.

If I use 2.6.27-7 I get the busybox screen as dmraid fails to find the raid.

:confused:
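
From the busybox prompt I can at least run dmraid by hand to see the actual error before the boot gives up (I believe the dmraid binary gets copied into the initramfs when the package is installed), something like:

dmraid -ay -vvv    # show why the set fails to assemble under the new kernel
exit               # then let the boot try to continue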

mike4ubuntu
October 31st, 2008, 06:46 PM
Yes, I have been using dmraid with an Asus M2N-VM motherboard with an nvidia RAID disk controller for several releases. It seems to work OK with the command

sudo dmraid -ay

However, I noticed that starting with Intrepid the /etc/init.d/dmraid script doesn't get set up when installing dmraid. It did get set up when installing dmraid in Hardy. At any rate, I just copied the script from a Hardy install, and it works fine with Intrepid.
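
In case it helps anyone, roughly what I did was just copy the script into place and register it (the ~/dmraid path is only an example of wherever you saved the Hardy copy):

# put the Hardy script in place and make it executable
sudo cp ~/dmraid /etc/init.d/dmraid
sudo chmod +x /etc/init.d/dmraid
# register it so it runs at boot
sudo update-rc.d dmraid defaults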

badbeeker
October 31st, 2008, 06:46 PM
Ack, didn't see the update. Thanks for the information, mike. Unfortunately I don't have an 8.04 OS anymore since I upgraded. Is there a way to grab the old dmraid script even though I've already upgraded?

mike4ubuntu
October 31st, 2008, 08:17 PM
I'm surprised that it actually deleted the /etc/init.d/dmraid script when you upgraded from 8.04 to 8.10. It's a pretty simple script. It basically just implements the start/stop/restart commands for init.d services:



#!/bin/bash

# try to load module in case that hasn't been done yet
modprobe dm-mod >/dev/null 2>&1

set -e

. /lib/lsb/init-functions
[ -r /etc/default/rcS ] && . /etc/default/rcS

[ -x /sbin/dmraid ] || exit 0

case "$1" in
  start|"")
    log_begin_msg "Setting up DMRAID devices..."
    if [ "$VERBOSE" != no ]; then
      /sbin/dmraid --activate yes --ignorelocking --verbose
    else
      /sbin/dmraid --activate yes --ignorelocking >/dev/null 2>&1
    fi
    log_end_msg $?
    ;;

  stop)
    log_begin_msg "Shutting down DMRAID devices... "
    if [ "$VERBOSE" != no ]; then
      /sbin/dmraid --activate no --ignorelocking --verbose
    else
      /sbin/dmraid --activate no --ignorelocking >/dev/null 2>&1
    fi
    log_end_msg $?
    ;;

  restart|force-reload)
    $0 stop
    sleep 1
    $0 start
    ;;

  *)
    log_success_msg "Usage: dmraid {start|stop|restart|force-reload}"
    exit 1
    ;;
esac
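
Once the script is in /etc/init.d and marked executable, it behaves like any other init.d service:

sudo /etc/init.d/dmraid start      # activate the fakeraid sets
sudo /etc/init.d/dmraid stop       # tear them down
sudo /etc/init.d/dmraid restart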

EvilNed
October 31st, 2008, 09:18 PM
We tried a fresh RAID created in the BIOS with a fresh 8.10 CD install, and we still get the same RAID detection errors.

psusi
October 31st, 2008, 09:33 PM
Mike: that script was removed intentionally, because udev now invokes dmraid plug-and-play style whenever it detects a new hard disk.

EvilNed: It looks like you have the RAID set divided into two subsets in the BIOS, one called Windows and one called Linux. Is the working machine set up this way? It looks like dmraid is getting confused by this and failing because it thinks the RAID set has 4 out of 2 disks. I would suggest not dividing the array with the RAID utility in the BIOS, and just partitioning the array into Windows and Linux partitions instead.

You should also file a bug report stating that dmraid fails with subdivided RAID sets on isw, and attach the files generated by dmraid -rD.
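
Generating the dump is just something like the following (the exact names and location of the dump files vary between dmraid versions, so attach whatever it creates in the working directory):

cd /tmp
# dump the raw on-disk metadata of every detected fakeraid device
sudo dmraid -rD
ls    # look for the generated metadata dump files to attach to the bug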

Serg783
November 1st, 2008, 01:04 AM
Same problem! It seems to be a bug in the latest version of dmraid. I've installed an earlier version and can at least see my partitions... Now I'm formatting them.

https://edge.launchpad.net/ubuntu/hardy/amd64/dmraid/1.0.0.rc14-0ubuntu3
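
For reference, installing a downloaded .deb like that is just the following (the exact filename depends on the architecture you grabbed, so adjust it):

# install the older package over the current one
sudo dpkg -i dmraid_1.0.0.rc14-0ubuntu3_amd64.deb
# optionally hold it so it doesn't get upgraded again right away
echo "dmraid hold" | sudo dpkg --set-selections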

dgleeson
November 3rd, 2008, 07:03 AM
Similar experience to Serg783.

I used the following package and can now view all partitions correctly. I'm about to continue with the install and will post how it goes.

https://launchpad.net/ubuntu/hardy/i386/dmraid/1.0.0.rc14-0ubuntu3

psusi
November 3rd, 2008, 04:57 PM
Serg783 and dgleeson, are you having the same specific problem with the current version of dmraid, where it prints an error with x/y disks in the set, with x > y, and your RAID set is subdivided?

If I can narrow it down to that being the cause of the problem and get some sample metadata I can start debugging.

Richard Thompson
November 4th, 2008, 01:03 PM
Hi,
I've got the same problem! It's a sub-divided RAID pair on an Intel ICH9R fakeraid controller, for dual booting (else I'd use proper softraid):

ubuntu@ubuntu:~$ sudo dmraid -r
/dev/sdb: isw, "isw_ecbdhhhfe", GROUP, ok, 976773165 sectors, data@ 0
/dev/sda: isw, "isw_ecbdhhhfe", GROUP, ok, 976773165 sectors, data@ 0
ubuntu@ubuntu:~$ sudo dmraid -ay
ERROR: isw device for volume "Linux" broken on /dev/sda in RAID set "isw_ecbdhhhfe_Linux"
ERROR: isw: wrong # of devices in RAID set "isw_ecbdhhhfe_Linux" [4/2] on /dev/sda
ERROR: isw device for volume "Windows" broken on /dev/sda in RAID set "isw_ecbdhhhfe_Linux"
ERROR: isw: wrong # of devices in RAID set "isw_ecbdhhhfe_Linux" [4/2] on /dev/sda
ERROR: isw device for volume "Linux" broken on /dev/sdb in RAID set "isw_ecbdhhhfe_Linux"
ERROR: isw: wrong # of devices in RAID set "isw_ecbdhhhfe_Linux" [4/2] on /dev/sdb
ERROR: isw device for volume "Windows" broken on /dev/sdb in RAID set "isw_ecbdhhhfe_Linux"
ERROR: isw: wrong # of devices in RAID set "isw_ecbdhhhfe_Linux" [4/2] on /dev/sdb
ubuntu@ubuntu:~$ sudo dmraid -ay -vvv -d
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: isw metadata discovered
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: isw metadata discovered
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching isw_ecbdhhhfe
DEBUG: _find_set: not found isw_ecbdhhhfe
DEBUG: _find_set: searching isw_ecbdhhhfe_Linux
DEBUG: _find_set: searching isw_ecbdhhhfe_Linux
DEBUG: _find_set: not found isw_ecbdhhhfe_Linux
DEBUG: _find_set: not found isw_ecbdhhhfe_Linux
DEBUG: _find_set: searching isw_ecbdhhhfe_Linux
DEBUG: _find_set: searching isw_ecbdhhhfe_Linux
DEBUG: _find_set: found isw_ecbdhhhfe_Linux
DEBUG: _find_set: found isw_ecbdhhhfe_Linux
NOTICE: added /dev/sdb to RAID set "isw_ecbdhhhfe"
DEBUG: _find_set: searching isw_ecbdhhhfe
DEBUG: _find_set: found isw_ecbdhhhfe
DEBUG: _find_set: searching isw_ecbdhhhfe_Linux
DEBUG: _find_set: searching isw_ecbdhhhfe_Linux
DEBUG: _find_set: found isw_ecbdhhhfe_Linux
DEBUG: _find_set: found isw_ecbdhhhfe_Linux
DEBUG: _find_set: searching isw_ecbdhhhfe_Linux
DEBUG: _find_set: searching isw_ecbdhhhfe_Linux
DEBUG: _find_set: found isw_ecbdhhhfe_Linux
DEBUG: _find_set: found isw_ecbdhhhfe_Linux
NOTICE: added /dev/sda to RAID set "isw_ecbdhhhfe"
DEBUG: checking isw device "/dev/sda"
ERROR: isw device for volume "Linux" broken on /dev/sda in RAID set "isw_ecbdhhhfe_Linux"
ERROR: isw: wrong # of devices in RAID set "isw_ecbdhhhfe_Linux" [4/2] on /dev/sda
DEBUG: checking isw device "/dev/sda"
ERROR: isw device for volume "Windows" broken on /dev/sda in RAID set "isw_ecbdhhhfe_Linux"
ERROR: isw: wrong # of devices in RAID set "isw_ecbdhhhfe_Linux" [4/2] on /dev/sda
DEBUG: checking isw device "/dev/sdb"
ERROR: isw device for volume "Linux" broken on /dev/sdb in RAID set "isw_ecbdhhhfe_Linux"
ERROR: isw: wrong # of devices in RAID set "isw_ecbdhhhfe_Linux" [4/2] on /dev/sdb
DEBUG: checking isw device "/dev/sdb"
ERROR: isw device for volume "Windows" broken on /dev/sdb in RAID set "isw_ecbdhhhfe_Linux"
ERROR: isw: wrong # of devices in RAID set "isw_ecbdhhhfe_Linux" [4/2] on /dev/sdb
DEBUG: set status of set "isw_ecbdhhhfe_Linux" to 2
DEBUG: set status of set "isw_ecbdhhhfe" to 4
INFO: Activating GROUP RAID set "isw_ecbdhhhfe"
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set "isw_ecbdhhhfe_Linux"
DEBUG: freeing device "isw_ecbdhhhfe_Linux", path "/dev/sda"
DEBUG: freeing device "isw_ecbdhhhfe_Linux", path "/dev/sda"
DEBUG: freeing device "isw_ecbdhhhfe_Linux", path "/dev/sdb"
DEBUG: freeing device "isw_ecbdhhhfe_Linux", path "/dev/sdb"
DEBUG: freeing devices of RAID set "isw_ecbdhhhfe"
DEBUG: freeing device "isw_ecbdhhhfe", path "/dev/sda"
DEBUG: freeing device "isw_ecbdhhhfe", path "/dev/sdb"
ubuntu@ubuntu:~$


I found this, if it helps: https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/292302

Thanks.

aacero
November 9th, 2008, 05:24 PM
Downgrading helped me too. I have a subdivided RAID with a 160GB RAID1 set at the beginning of the disks and a RAID0 set using the rest of the space. Here's the output before the downgrade:

$ sudo dmraid -ay -vvv -d
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdd: asr discovering
NOTICE: /dev/sdd: ddf1 discovering
NOTICE: /dev/sdd: hpt37x discovering
NOTICE: /dev/sdd: hpt45x discovering
NOTICE: /dev/sdd: isw discovering
NOTICE: /dev/sdd: jmicron discovering
NOTICE: /dev/sdd: lsi discovering
NOTICE: /dev/sdd: nvidia discovering
NOTICE: /dev/sdd: pdc discovering
NOTICE: /dev/sdd: sil discovering
NOTICE: /dev/sdd: via discovering
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi discovering
NOTICE: /dev/sdc: nvidia discovering
NOTICE: /dev/sdc: pdc discovering
NOTICE: /dev/sdc: sil discovering
NOTICE: /dev/sdc: via discovering
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: isw metadata discovered
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: isw metadata discovered
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching isw_cedajeedhi
DEBUG: _find_set: not found isw_cedajeedhi
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: not found isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: not found isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: found isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: found isw_cedajeedhi_JUPITER_C_RAID1
NOTICE: added /dev/sdb to RAID set "isw_cedajeedhi"
DEBUG: _find_set: searching isw_cedajeedhi
DEBUG: _find_set: found isw_cedajeedhi
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: found isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: found isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: found isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: found isw_cedajeedhi_JUPITER_C_RAID1
NOTICE: added /dev/sda to RAID set "isw_cedajeedhi"
DEBUG: checking isw device "/dev/sdb"
ERROR: isw device for volume "JUPITER_C_RAID1" broken on /dev/sdb in RAID set "isw_cedajeedhi_JUPITER_C_RAID1"
ERROR: isw: wrong # of devices in RAID set "isw_cedajeedhi_JUPITER_C_RAID1" [4/2] on /dev/sdb
DEBUG: checking isw device "/dev/sdb"
ERROR: isw device for volume "SCRATCH_RAID0" broken on /dev/sdb in RAID set "isw_cedajeedhi_JUPITER_C_RAID1"
ERROR: isw: wrong # of devices in RAID set "isw_cedajeedhi_JUPITER_C_RAID1" [4/2] on /dev/sdb
DEBUG: checking isw device "/dev/sda"
ERROR: isw device for volume "JUPITER_C_RAID1" broken on /dev/sda in RAID set "isw_cedajeedhi_JUPITER_C_RAID1"
ERROR: isw: wrong # of devices in RAID set "isw_cedajeedhi_JUPITER_C_RAID1" [4/2] on /dev/sda
DEBUG: checking isw device "/dev/sda"
ERROR: isw device for volume "SCRATCH_RAID0" broken on /dev/sda in RAID set "isw_cedajeedhi_JUPITER_C_RAID1"
ERROR: isw: wrong # of devices in RAID set "isw_cedajeedhi_JUPITER_C_RAID1" [4/2] on /dev/sda
DEBUG: set status of set "isw_cedajeedhi_JUPITER_C_RAID1" to 2
DEBUG: set status of set "isw_cedajeedhi" to 4
ERROR: no mapping possible for RAID set isw_cedajeedhi_JUPITER_C_RAID1
INFO: Activating GROUP RAID set "isw_cedajeedhi"
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set "isw_cedajeedhi_JUPITER_C_RAID1"
DEBUG: freeing device "isw_cedajeedhi_JUPITER_C_RAID1", path "/dev/sdb"
DEBUG: freeing device "isw_cedajeedhi_JUPITER_C_RAID1", path "/dev/sdb"
DEBUG: freeing device "isw_cedajeedhi_JUPITER_C_RAID1", path "/dev/sda"
DEBUG: freeing device "isw_cedajeedhi_JUPITER_C_RAID1", path "/dev/sda"
DEBUG: freeing devices of RAID set "isw_cedajeedhi"
DEBUG: freeing device "isw_cedajeedhi", path "/dev/sdb"
DEBUG: freeing device "isw_cedajeedhi", path "/dev/sda"

aacero
November 9th, 2008, 05:28 PM
psusi> If I can narrow it down to that being the cause of the problem and get some sample metadata I can start debugging.

psusi,

I can send you some metadata if you still need it. Which version of dmraid should I generate it with, the working or non-working version?

Many thanks,
aaa

psusi
November 9th, 2008, 09:12 PM
I found the cause and came up with a fix, but it caused breakage for raid 10, so I'm still working on it. See bug #292302 and try the package suggested there.
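
Whichever package you try, it's worth double-checking which version actually ended up installed before testing, e.g.:

apt-cache policy dmraid    # installed and candidate versions
dpkg -l dmraid             # installed version and package status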

cariboo
November 9th, 2008, 10:37 PM
I just installed Intrepid on my server and dmraid was installed automatically. All I had to do was add the following line to fstab:


/dev/mapper/nvidia_ifdejefh1 /home/storage ext3 relatime 0 1


and I was good to go.
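
(For anyone copying that: the mount point has to exist, and you can test the new entry without rebooting:)

sudo mkdir -p /home/storage
sudo mount -a    # mounts everything in fstab that isn't already mounted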

Due to the number of people that are having problems upgrading from an earlier version to Intrepid, I recommend doing a clean installation.

Jim

yonish
November 14th, 2008, 01:22 AM
Well, I have a similar issue:

I tried downgrading, and I can't tell whether the downgrade didn't work or it's just not working:
yoni@yoniBuntu:~$ sudo dmraid -ay
/dev/sdb: "sil" and "isw" formats discovered (using isw)!
ERROR: isw device for volume "Volume0" broken on /dev/sdb in RAID set "isw_baiacbfgeh_Volume0"
ERROR: isw: wrong # of devices in RAID set "isw_baiacbfgeh_Volume0" [1/2] on /dev/sdb
RAID set "nvidia_fghcaafc" already active

When I restart my computer after Intrepid has been running (even without dmraid installed), I see a BIOS diagnostic message telling me one of my two RAIDed hard disks has "failed". This is solved by a complete shutdown/startup sequence (instead of a reboot).

After downgrading, Synaptic shows the installed version as 1.0.0.rc14-2ubuntu13.

The sequence of operations I performed in order to "downgrade":
1. Added the two lines from one of the replies above to my sources.list for apt.
2. sudo apt-get update
3. sudo apt-get upgrade

I saw the installation log and everything looks fine.

Help ?

psusi
November 14th, 2008, 05:57 AM
Due to the number of people that are having problems upgrading from an earlier version to Intrepid, I recommend doing a clean installation.
Jim

The issue has nothing to do with upgrading; the Intrepid version simply has a bug that isn't there in Hardy.


Well, I have a similar issue:
I tried downgrading, and I can't tell whether the downgrade didn't work or it's just not working:
yoni@yoniBuntu:~$ sudo dmraid -ay
/dev/sdb: "sil" and "isw" formats discovered (using isw)!
ERROR: isw device for volume "Volume0" broken on /dev/sdb in RAID set "isw_baiacbfgeh_Volume0"
ERROR: isw: wrong # of devices in RAID set "isw_baiacbfgeh_Volume0" [1/2] on /dev/sdb
RAID set "nvidia_fghcaafc" already active


Your issue isn't related to this bug. Your problem appears to be that you have used your disks with multiple fakeraid controllers and they still contain the metadata from all of them. Your sdb has both sil and isw formats, and it looks like the other disk is nvidia. What is it supposed to be? You need to get rid of the wrong formats with the -E option to dmraid, which can be combined with -f to specify which one to erase.
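
Something along these lines should do it (I'm going from memory, so check the man page, and double-check the device and format name before erasing anything):

sudo dmraid -r                       # list which metadata formats are found on which disks
sudo dmraid -r -E -f sil /dev/sdb    # erase only the stale sil metadata from sdb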

aacero
November 17th, 2008, 08:01 PM
I found the cause and came up with a fix, but it caused breakage for raid 10, so I'm still working on it. See bug #292302 and try the package suggested there.

I tried the new package (1.0.0.rc14-2ubuntu13), but it only adds devices for the first RAID set:
sudo dmraid -ay -v
RAID set "isw_cedajeedhi_JUPITER_C_RAID1" already active
RAID set "isw_cedajeedhi_SCRATCH_RAID0" already active
INFO: Activating GROUP RAID set "isw_cedajeedhi"
ERROR: dos: partition address past end of RAID device
RAID set "isw_cedajeedhi_JUPITER_C_RAID11" already active
INFO: Activating partition RAID set "isw_cedajeedhi_JUPITER_C_RAID11"


I've downgraded to dmraid_1.0.0.rc14-0ubuntu3_i386.deb for now.
sudo dmraid -ay -v
RAID set "isw_cedajeedhi_JUPITER_C_RAID1" already active
RAID set "isw_cedajeedhi_SCRATCH_RAID0" already active
INFO: Activating GROUP RAID set "isw_cedajeedhi"
RAID set "isw_cedajeedhi_JUPITER_C_RAID11" already active
INFO: Activating partition RAID set "isw_cedajeedhi_JUPITER_C_RAID11"
RAID set "isw_cedajeedhi_SCRATCH_RAID01" already active
INFO: Activating partition RAID set "isw_cedajeedhi_SCRATCH_RAID01"

psusi
November 17th, 2008, 09:09 PM
Other than the odd error about the dos partition being too long, it looks like it was working fine. The disks were already active though.

Try running with -vvvv -dddd for more output. Also are you doing this from the livecd? That might help so as not to get mixed up with what's on your hard drive, and so you can be sure it really works or does not when you start without the devices already active.

aacero
November 17th, 2008, 09:29 PM
Other than the odd error about the dos partition being too long, it looks like it was working fine. The disks were already active though.

Try running with -vvvv -dddd for more output. Also are you doing this from the livecd? That might help so as not to get mixed up with what's on your hard drive, and so you can be sure it really works or does not when you start without the devices already active.


I had booted with the old package installed, and then installed the new package to generate the error message. When I boot with the new package installed there is no /dev/mapper/isw_cedajeedhi_SCRATCH_RAID01 device. Here's the output from a clean boot with the new package installed:

sudo dmraid -ay -dddd -vvvv
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdd: asr discovering
NOTICE: /dev/sdd: ddf1 discovering
NOTICE: /dev/sdd: hpt37x discovering
NOTICE: /dev/sdd: hpt45x discovering
NOTICE: /dev/sdd: isw discovering
NOTICE: /dev/sdd: jmicron discovering
NOTICE: /dev/sdd: lsi discovering
NOTICE: /dev/sdd: nvidia discovering
NOTICE: /dev/sdd: pdc discovering
NOTICE: /dev/sdd: sil discovering
NOTICE: /dev/sdd: via discovering
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi discovering
NOTICE: /dev/sdc: nvidia discovering
NOTICE: /dev/sdc: pdc discovering
NOTICE: /dev/sdc: sil discovering
NOTICE: /dev/sdc: via discovering
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: isw metadata discovered
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: isw metadata discovered
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching isw_cedajeedhi
DEBUG: _find_set: not found isw_cedajeedhi
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: not found isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: not found isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: searching isw_cedajeedhi_SCRATCH_RAID0
DEBUG: _find_set: searching isw_cedajeedhi_SCRATCH_RAID0
DEBUG: _find_set: searching isw_cedajeedhi_SCRATCH_RAID0
DEBUG: _find_set: not found isw_cedajeedhi_SCRATCH_RAID0
DEBUG: _find_set: not found isw_cedajeedhi_SCRATCH_RAID0
DEBUG: _find_set: not found isw_cedajeedhi_SCRATCH_RAID0
NOTICE: added /dev/sdb to RAID set "isw_cedajeedhi"
DEBUG: _find_set: searching isw_cedajeedhi
DEBUG: _find_set: found isw_cedajeedhi
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: found isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: found isw_cedajeedhi_JUPITER_C_RAID1
DEBUG: _find_set: searching isw_cedajeedhi_SCRATCH_RAID0
DEBUG: _find_set: searching isw_cedajeedhi_SCRATCH_RAID0
DEBUG: _find_set: found isw_cedajeedhi_SCRATCH_RAID0
DEBUG: _find_set: found isw_cedajeedhi_SCRATCH_RAID0
NOTICE: added /dev/sda to RAID set "isw_cedajeedhi"
DEBUG: checking isw device "/dev/sdb"
DEBUG: checking isw device "/dev/sda"
DEBUG: set status of set "isw_cedajeedhi_JUPITER_C_RAID1" to 16
DEBUG: checking isw device "/dev/sdb"
DEBUG: checking isw device "/dev/sda"
DEBUG: set status of set "isw_cedajeedhi_SCRATCH_RAID0" to 16
DEBUG: set status of set "isw_cedajeedhi" to 16
RAID set "isw_cedajeedhi_JUPITER_C_RAID1" already active
RAID set "isw_cedajeedhi_SCRATCH_RAID0" already active
INFO: Activating GROUP RAID set "isw_cedajeedhi"
NOTICE: discovering partitions on "isw_cedajeedhi_JUPITER_C_RAID1"
NOTICE: /dev/mapper/isw_cedajeedhi_JUPITER_C_RAID1: dos discovering
NOTICE: /dev/mapper/isw_cedajeedhi_JUPITER_C_RAID1: dos metadata discovered
DEBUG: _find_set: searching isw_cedajeedhi_JUPITER_C_RAID11
DEBUG: _find_set: not found isw_cedajeedhi_JUPITER_C_RAID11
NOTICE: created partitioned RAID set(s) for /dev/mapper/isw_cedajeedhi_JUPITER_C_RAID1
NOTICE: discovering partitions on "isw_cedajeedhi_SCRATCH_RAID0"
NOTICE: /dev/mapper/isw_cedajeedhi_SCRATCH_RAID0: dos discovering
NOTICE: /dev/mapper/isw_cedajeedhi_SCRATCH_RAID0: dos metadata discovered
ERROR: dos: partition address past end of RAID device
NOTICE: created partitioned RAID set(s) for /dev/mapper/isw_cedajeedhi_SCRATCH_RAID0
RAID set "isw_cedajeedhi_JUPITER_C_RAID11" already active
INFO: Activating partition RAID set "isw_cedajeedhi_JUPITER_C_RAID11"
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set "isw_cedajeedhi_JUPITER_C_RAID1"
DEBUG: freeing device "isw_cedajeedhi_JUPITER_C_RAID1", path "/dev/sdb"
DEBUG: freeing device "isw_cedajeedhi_JUPITER_C_RAID1", path "/dev/sda"
DEBUG: freeing devices of RAID set "isw_cedajeedhi_SCRATCH_RAID0"
DEBUG: freeing device "isw_cedajeedhi_SCRATCH_RAID0", path "/dev/sdb"
DEBUG: freeing device "isw_cedajeedhi_SCRATCH_RAID0", path "/dev/sda"
DEBUG: freeing devices of RAID set "isw_cedajeedhi"
DEBUG: freeing device "isw_cedajeedhi", path "/dev/sdb"
DEBUG: freeing device "isw_cedajeedhi", path "/dev/sda"
DEBUG: freeing devices of RAID set "isw_cedajeedhi_JUPITER_C_RAID11"
DEBUG: freeing device "isw_cedajeedhi_JUPITER_C_RAID11", path "/dev/mapper/isw_cedajeedhi_JUPITER_C_RAID1"

ls -l /dev/mapper/
total 0
crw-rw---- 1 root root 10, 60 2008-11-17 10:21 control
brw-rw---- 1 root disk 254, 0 2008-11-17 10:21 isw_cedajeedhi_JUPITER_C_RAID1
brw-rw---- 1 root disk 254, 2 2008-11-17 10:21 isw_cedajeedhi_JUPITER_C_RAID11
brw-rw---- 1 root disk 254, 1 2008-11-17 10:21 isw_cedajeedhi_SCRATCH_RAID0

Thanks for taking a look,
aaa

psusi
November 17th, 2008, 11:59 PM
Could you post the output of:


sudo dmraid -n /dev/sda
sudo dmraid -n /dev/sdb
sudo fdisk -lu /dev/mapper/isw_cedajeedhi_SCRATCH_RAID0

aacero
November 18th, 2008, 02:26 AM
Could you post the output of:


sudo dmraid -n /dev/sda
sudo dmraid -n /dev/sdb
sudo fdisk -lu /dev/mapper/isw_cedajeedhi_SCRATCH_RAID0

~# dmraid -n /dev/sda; dmraid -n /dev/sdb; fdisk -lu /dev/mapper/isw_cedajeedhi_SCRATCH_RAID0
/dev/sda (isw):
0x000 sig: " Intel Raid ISM Cfg Sig. 1.2.00"
0x020 check_sum: 4203252941
0x024 mpb_size: 648
0x028 family_num: 2430944378
0x02c generation_num: 1545611
0x030 reserved[0]: 4080
0x034 reserved[1]: 3221225472
0x038 num_disks: 2
0x039 num_raid_devs: 2
0x03a fill[0]: 2
0x03b fill[1]: 0
0x0d8 disk[0].serial: " WD-WCANK4470138"
0x0e8 disk[0].totalBlocks: 488281250
0x0ec disk[0].scsiId: 0x100
0x0f0 disk[0].status: 0x13a
0x108 disk[1].serial: " Y667EW3E"
0x118 disk[1].totalBlocks: 488281250
0x11c disk[1].scsiId: 0x0
0x120 disk[1].status: 0x13a
0x138 isw_dev[0].volume: " JUPITER_C_RAID1"
0x14c isw_dev[0].SizeHigh: 0
0x148 isw_dev[0].SizeLow: 312499200
0x150 isw_dev[0].status: 0x80
0x154 isw_dev[0].reserved_blocks: 0
0x158 isw_dev[0].filler[0]: 131072
0x188 isw_dev[0].vol.reserved[0]: 610350
0x190 isw_dev[0].vol.migr_state: 0
0x191 isw_dev[0].vol.migr_type: 2
0x192 isw_dev[0].vol.dirty: 0
0x193 isw_dev[0].vol.fill[0]: 0
0x1a8 isw_dev[0].vol.map.pba_of_lba0: 0
0x1ac isw_dev[0].vol.map.blocks_per_member: 312499200
0x1b0 isw_dev[0].vol.map.num_data_stripes: 610350
0x1b4 isw_dev[0].vol.map.blocks_per_strip: 256
0x1b6 isw_dev[0].vol.map.map_state: 0
0x1b7 isw_dev[0].vol.map.raid_level: 1
0x1b8 isw_dev[0].vol.map.num_members: 2
0x1b9 isw_dev[0].vol.map.reserved[0]: 2
0x1ba isw_dev[0].vol.map.reserved[1]: 255
0x1bb isw_dev[0].vol.map.reserved[2]: 1
0x1d8 isw_dev[0].vol.map.disk_ord_tbl[0]: 0x0
0x1dc isw_dev[0].vol.map.disk_ord_tbl[1]: 0x1
0x1e0 isw_dev[1].volume: " SCRATCH_RAID0"
0x1f4 isw_dev[1].SizeHigh: 0
0x1f0 isw_dev[1].SizeLow: 351545344
0x1f8 isw_dev[1].status: 0xc
0x1fc isw_dev[1].reserved_blocks: 0
0x200 isw_dev[1].filler[0]: 65536
0x238 isw_dev[1].vol.migr_state: 0
0x239 isw_dev[1].vol.migr_type: 0
0x23a isw_dev[1].vol.dirty: 0
0x23b isw_dev[1].vol.fill[0]: 0
0x250 isw_dev[1].vol.map.pba_of_lba0: 312503296
0x254 isw_dev[1].vol.map.blocks_per_member: 175772931
0x258 isw_dev[1].vol.map.num_data_stripes: 686612
0x25c isw_dev[1].vol.map.blocks_per_strip: 256
0x25e isw_dev[1].vol.map.map_state: 0
0x25f isw_dev[1].vol.map.raid_level: 0
0x260 isw_dev[1].vol.map.num_members: 2
0x261 isw_dev[1].vol.map.reserved[0]: 1
0x262 isw_dev[1].vol.map.reserved[1]: 255
0x263 isw_dev[1].vol.map.reserved[2]: 1
0x280 isw_dev[1].vol.map.disk_ord_tbl[0]: 0x0
0x284 isw_dev[1].vol.map.disk_ord_tbl[1]: 0x1

/dev/sdb (isw):
0x000 sig: " Intel Raid ISM Cfg Sig. 1.2.00"
0x020 check_sum: 4203252941
0x024 mpb_size: 648
0x028 family_num: 2430944378
0x02c generation_num: 1545611
0x030 reserved[0]: 4080
0x034 reserved[1]: 3221225472
0x038 num_disks: 2
0x039 num_raid_devs: 2
0x03a fill[0]: 2
0x03b fill[1]: 0
0x0d8 disk[0].serial: " WD-WCANK4470138"
0x0e8 disk[0].totalBlocks: 488281250
0x0ec disk[0].scsiId: 0x100
0x0f0 disk[0].status: 0x13a
0x108 disk[1].serial: " Y667EW3E"
0x118 disk[1].totalBlocks: 488281250
0x11c disk[1].scsiId: 0x0
0x120 disk[1].status: 0x13a
0x138 isw_dev[0].volume: " JUPITER_C_RAID1"
0x14c isw_dev[0].SizeHigh: 0
0x148 isw_dev[0].SizeLow: 312499200
0x150 isw_dev[0].status: 0x80
0x154 isw_dev[0].reserved_blocks: 0
0x158 isw_dev[0].filler[0]: 131072
0x188 isw_dev[0].vol.reserved[0]: 610350
0x190 isw_dev[0].vol.migr_state: 0
0x191 isw_dev[0].vol.migr_type: 2
0x192 isw_dev[0].vol.dirty: 0
0x193 isw_dev[0].vol.fill[0]: 0
0x1a8 isw_dev[0].vol.map.pba_of_lba0: 0
0x1ac isw_dev[0].vol.map.blocks_per_member: 312499200
0x1b0 isw_dev[0].vol.map.num_data_stripes: 610350
0x1b4 isw_dev[0].vol.map.blocks_per_strip: 256
0x1b6 isw_dev[0].vol.map.map_state: 0
0x1b7 isw_dev[0].vol.map.raid_level: 1
0x1b8 isw_dev[0].vol.map.num_members: 2
0x1b9 isw_dev[0].vol.map.reserved[0]: 2
0x1ba isw_dev[0].vol.map.reserved[1]: 255
0x1bb isw_dev[0].vol.map.reserved[2]: 1
0x1d8 isw_dev[0].vol.map.disk_ord_tbl[0]: 0x0
0x1dc isw_dev[0].vol.map.disk_ord_tbl[1]: 0x1
0x1e0 isw_dev[1].volume: " SCRATCH_RAID0"
0x1f4 isw_dev[1].SizeHigh: 0
0x1f0 isw_dev[1].SizeLow: 351545344
0x1f8 isw_dev[1].status: 0xc
0x1fc isw_dev[1].reserved_blocks: 0
0x200 isw_dev[1].filler[0]: 65536
0x238 isw_dev[1].vol.migr_state: 0
0x239 isw_dev[1].vol.migr_type: 0
0x23a isw_dev[1].vol.dirty: 0
0x23b isw_dev[1].vol.fill[0]: 0
0x250 isw_dev[1].vol.map.pba_of_lba0: 312503296
0x254 isw_dev[1].vol.map.blocks_per_member: 175772931
0x258 isw_dev[1].vol.map.num_data_stripes: 686612
0x25c isw_dev[1].vol.map.blocks_per_strip: 256
0x25e isw_dev[1].vol.map.map_state: 0
0x25f isw_dev[1].vol.map.raid_level: 0
0x260 isw_dev[1].vol.map.num_members: 2
0x261 isw_dev[1].vol.map.reserved[0]: 1
0x262 isw_dev[1].vol.map.reserved[1]: 255
0x263 isw_dev[1].vol.map.reserved[2]: 1
0x280 isw_dev[1].vol.map.disk_ord_tbl[0]: 0x0
0x284 isw_dev[1].vol.map.disk_ord_tbl[1]: 0x1


Disk /dev/mapper/isw_cedajeedhi_SCRATCH_RAID0: 89.9 GB, 89995740672 bytes
255 heads, 63 sectors/track, 10941 cylinders, total 175772931 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x35e8e30a

Device Boot Start End Blocks Id System
/dev/mapper/isw_cedajeedhi_SCRATCH_RAID0p1 63 351534329 175767133+ 42 SFS

psusi
November 18th, 2008, 04:31 AM
I take it that the stripe is supposed to be 180 gigs, not just 90? Can you post the output of:


sudo dmsetup table isw_cedajeedhi_SCRATCH_RAID0

And run:


sudo dmraid -rD /dev/sda
sudo dmraid -rD /dev/sdb

And attach the files generated to the bug report.

Also it doesn't really matter, but why does the partition say the type is "SFS" instead of linux?

aacero
November 18th, 2008, 05:43 AM
I take it that the stripe is supposed to be 180 gigs, not just 90? Can you post the output of:


sudo dmsetup table isw_cedajeedhi_SCRATCH_RAID0

And run:


sudo dmraid -rD /dev/sda
sudo dmraid -rD /dev/sdb

And attach the files generated to the bug report.

Also it doesn't really matter, but why does the partition say the type is "SFS" instead of linux?

:~$ sudo dmsetup table isw_cedajeedhi_SCRATCH_RAID0
0 175772931 mirror core 2 131072 nosync 2 8:16 312503296 8:0 312503296 1 handle_errors


Please see attachment for output of 'dmraid -rD ...'

Regarding the partition type "SFS" -- it should be (and is) HPFS/NTFS!
I did install grub4dos under Windows to facilitate booting from other drives (sdd in this case), but I haven't otherwise intentionally modified the metadata of sda/sdb under WinXP or Linux.

psusi
November 18th, 2008, 05:45 AM
Oh boy, looks like it tried to mirror that volume instead of stripe it.
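
For comparison, a RAID0 volume should map to the device-mapper "striped" target rather than "mirror"; with the sizes from your metadata the table would look roughly like this (illustrative only, not an exact mapping):

0 351545344 striped 2 256 8:16 312503296 8:0 312503296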

aacero
November 18th, 2008, 06:09 AM
Oh boy, looks like it tried to mirror that volume instead of stripe it.

What is 'it' -- the Intel Application Accelerator? I was having corruption problems with the stripe right after I upgraded to 8.10 BETA. At one point I deleted and re-added the stripe under Windows with IAA. Not sure who thinks it's a mirror instead of a stripe.

Booting into Windows to check it out:
Yep, Windows thinks it's a ~180GB NTFS stripe (and so does the earlier version of dmraid).

Not sure what my next steps should be -- there's not much data on the stripe, so it would be no problem to recreate it. If it would be helpful for debugging in its current state, I can leave it as is.

psusi
November 18th, 2008, 08:20 AM
"It" would be dmraid. Looks like there may be a bug so I'll have to play with your metadata to debug it.

aacero
November 18th, 2008, 08:33 AM
I wouldn't be so sure. I just had an exciting 15 minutes after I decided to delete the RAID0 volume. I did it from the firmware screen. Somehow that resulted in destroying my MBR. Luckily Ubuntu is on a non-RAID drive and I had a backup of the MBR. Could be buggy RAID firmware, no?
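
(For what it's worth, the MBR backup was just the first sector of the disk, something like the line below; swap if/of to restore it, and be very careful which disk you point it at:)

# back up the MBR (boot code + partition table) of sda
sudo dd if=/dev/sda of=~/sda-mbr.bin bs=512 count=1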

Skinner_au
January 22nd, 2009, 11:36 PM
Hi guys,

I'm hoping I'm posting in the right thread, as the issues reported seem similar to my problem, and this is about the only mention of RAID issues I can find that doesn't involve *installing* Ubuntu to a RAID partition.

I have a current XP installation on a RAID0 stripe, and 64-bit Ubuntu 8.10 installed on a separate IDE drive. I want the Ubuntu installation to be able to see this XP drive, but dmraid will not mount it correctly. It is running on an Intel G33 motherboard with ICH9R. The RAID set was originally created by the ICH9R BIOS.

Sorry, I can't post the output of dmraid at the moment (I'm at work, and the offending box is at home), but the error is *similar* to others posted here (though not identical).

I'm very open to the possibility of user error, as I really don't know what I'm doing, and some to most of what's in here is a little over my head.

Mine outputs that it can see the RAID, but it labels the two drives differently, i.e. /dev/sda = isw_[randomstringA] and /dev/sdb = isw_[randomstringB]. It also mentions an incorrect number of drives in the set (if that's the correct term) - I believe this is because it can see the RAID header info, but since the two drives are given different isw labels, each one doesn't see the other.
It appears to try to mount at least one of the drives (and has at times said it was active), but obviously it's not getting anywhere with only one visible drive from the stripe. During boot-up, /var/log/messages shows errors that the sectors it is looking for are greater than the sectors on a particular drive - obviously because it can't see the stripe, only a single drive.

Repartitioning and reinstallation are not really options for me; I would just like it to read the damned thing.

I can post the error log later tonight if required, but I thought enough of them had been posted that mine wouldn't necessarily contribute much.

Thanks

Sk

UPDATE: OK, here is my dmraid -ay -vvvv -dddd output:
(I have cut the searches for non-relevant RAID types)

NOTICE: /dev/sdb: isw metadata discovered
NOTICE: /dev/sda: isw metadata discovered
DEBUG: _find_set: searching isw_eajbedgaha
DEBUG: _find_set: not found isw_eajbedgaha
DEBUG: _find_set: searching isw_eajbedgaha_Sharp
DEBUG: _find_set: searching isw_eajbedgaha_Sharp
DEBUG: _find_set: not found isw_eajbedgaha_Sharp
DEBUG: _find_set: not found isw_eajbedgaha_Sharp
NOTICE: added /dev/sdb to RAID set "isw_eajbedgaha"
DEBUG: _find_set: searching isw_jigjbfibf
DEBUG: _find_set: not found isw_jigjbfibf
DEBUG: _find_set: searching isw_jigjbfibf_Sharp
DEBUG: _find_set: searching isw_jigjbfibf_Sharp
DEBUG: _find_set: searching isw_jigjbfibf_Sharp
DEBUG: _find_set: not found isw_jigjbfibf_Sharp
DEBUG: _find_set: not found isw_jigjbfibf_Sharp
DEBUG: _find_set: searching isw_jigjbfibf_Sharp
DEBUG: _find_set: not found isw_jigjbfibf_Sharp
DEBUG: _find_set: not found isw_jigjbfibf_Sharp
NOTICE: added /dev/sda to RAID set "isw_jigjbfibf"
DEBUG: checking isw device "/dev/sdb"
ERROR: isw device for volume "Sharp" broken on /dev/sdb in RAID set "isw_eajbedgaha_Sharp"
ERROR: isw: wrong # of devices in RAID set "isw_eajbedgaha_Sharp" [1/2] on /dev/sdb
DEBUG: set status of set "isw_eajbedgaha_Sharp" to 2
DEBUG: set status of set "isw_eajbedgaha" to 4
DEBUG: checking isw device "/dev/sda"
DEBUG: set status of set "isw_jigjbfibf_Sharp" to 16
DEBUG: set status of set "isw_jigjbfibf" to 16
INFO: Activating GROUP RAID set "isw_eajbedgaha"
INFO: Activating GROUP RAID set "isw_jigjbfibf"
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set "isw_eajbedgaha_Sharp"
DEBUG: freeing device "isw_eajbedgaha_Sharp", path "/dev/sdb"
DEBUG: freeing devices of RAID set "isw_eajbedgaha"
DEBUG: freeing device "isw_eajbedgaha", path "/dev/sdb"
DEBUG: freeing devices of RAID set "isw_jigjbfibf_Sharp"
DEBUG: freeing device "isw_jigjbfibf_Sharp", path "/dev/sda"
DEBUG: freeing devices of RAID set "isw_jigjbfibf"
DEBUG: freeing device "isw_jigjbfibf", path "/dev/sda"


After I execute that, my output of dmraid -s is:

ERROR: isw device for volume "Sharp" broken on /dev/sdb in RAID set "isw_eajbedgaha_Sharp"
ERROR: isw: wrong # of devices in RAID set "isw_eajbedgaha_Sharp" [1/2] on /dev/sdb
*** Group superset isw_eajbedgaha
--> Subset
name : isw_eajbedgaha_Sharp
size : 156294400
stride : 256
type : stripe
status : broken
subsets: 0
devs : 1
spares : 0
*** Group superset isw_jigjbfibf
--> Active Subset
name : isw_jigjbfibf_Sharp
size : 156301056
stride : 256
type : stripe
status : ok
subsets: 0
devs : 1
spares : 0


Thanks guys