I was doing this on Oneiric:
Code:
mdadm --grow /dev/md0 --level=6 --raid-devices=6 --backup-file /md0.backup
to get the array to this:
Code:
ARRAY /dev/md0 metadata=1.2 spares=1 name=server:0 UUID=c64d8bc1:fe26faa5:c9f6538f:8cb82480
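I was keeping an eye on the reshape with something like this (from memory, so the exact invocations are approximate):
Code:
# Watch overall progress and the estimated finish time
watch cat /proc/mdstat
# Or pull just the reshape status out of the array details
mdadm --detail /dev/md0 | grep -i reshape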
I got to something like 6 and a half days in (it was at 96% when I checked a few hours earlier) and then had a power failure.
When power was restored, the array did not come back up.
me@server:[~] cat /proc/mdstat
Code:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sde1[1] sdf1[2] sdg1[3] sdh1[4] sdd1[5] sdb1[0]
      11603871747 blocks super 1.2

unused devices: <none>

me@server:[~] mdadm --detail --scan
Code:
ARRAY /dev/md0 metadata=1.2 spares=1 name=server:0 UUID=c64d8bc1:fe26faa5:c9f6538f:8cb82480

me@server:[/etc/mdadm] cat mdadm.conf
Code:
DEVICE partitions
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST <system>
MAILADDR me@mydomain.net
ARRAY /dev/md0 metadata=1.2 UUID=c64d8bc1:fe26faa5:c9f6538f:8cb82480 name=server:0

me@server:[~] mdadm --detail /dev/md0
Code:
/dev/md0:
        Version : 1.2
  Creation Time : Thu May 24 12:06:26 2012
     Raid Level : raid6
  Used Dev Size : 1933978112 (1844.39 GiB 1980.39 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Fri Jun 29 20:38:02 2012
          State : active, degraded, Not Started
 Active Devices : 5
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric-6
     Chunk Size : 512K
     New Layout : left-symmetric
           Name : server:0  (local to host server)
           UUID : c64d8bc1:fe26faa5:c9f6538f:8cb82480
         Events : 1904783

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       65        1      active sync   /dev/sde1
       2       8       81        2      active sync   /dev/sdf1
       3       8       97        3      active sync   /dev/sdg1
       4       8      113        4      active sync   /dev/sdh1
       5       8       49        5      spare rebuilding   /dev/sdd1

me@server:[/etc/mdadm] mdadm --misc --examine /dev/sd[bdefgh]1 | egrep 'dev|Update|Role|State|Chunk Size'
Code:
/dev/sdb1:
          State : clean
    Update Time : Fri Jun 29 20:38:02 2012
     Chunk Size : 512K
    Device Role : Active device 0
    Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdd1:
          State : clean
    Update Time : Fri Jun 29 20:38:02 2012
     Chunk Size : 512K
    Device Role : Active device 5
    Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sde1:
          State : clean
    Update Time : Fri Jun 29 20:38:02 2012
     Chunk Size : 512K
    Device Role : Active device 1
    Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdf1:
          State : clean
    Update Time : Fri Jun 29 20:38:02 2012
     Chunk Size : 512K
    Device Role : Active device 2
    Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdg1:
          State : clean
    Update Time : Fri Jun 29 20:38:02 2012
     Chunk Size : 512K
    Device Role : Active device 3
    Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdh1:
          State : clean
    Update Time : Fri Jun 29 20:38:02 2012
     Chunk Size : 512K
    Device Role : Active device 4
    Array State : AAAAAA ('A' == active, '.' == missing)

After several hours of troubleshooting I did a
Code:
mdadm --stop /dev/md0
and then tried to reassemble the array using
Code:
me@server:[/etc/mdadm] mdadm --create --assume-clean --verbose /dev/md0 --chunk=512 --level=6 --raid-devices=6 /dev/sd[befghd]1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid6 devices=6 ctime=Thu May 24 12:06:26 2012
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdd1 appears to be part of a raid array:
    level=raid6 devices=6 ctime=Thu May 24 12:06:26 2012
mdadm: layout defaults to left-symmetric
mdadm: /dev/sde1 appears to be part of a raid array:
    level=raid6 devices=6 ctime=Thu May 24 12:06:26 2012
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdf1 appears to be part of a raid array:
    level=raid6 devices=6 ctime=Thu May 24 12:06:26 2012
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdg1 appears to be part of a raid array:
    level=raid6 devices=6 ctime=Thu May 24 12:06:26 2012
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdh1 appears to be part of a raid array:
    level=raid6 devices=6 ctime=Thu May 24 12:06:26 2012
mdadm: size set to 1933978112K
Continue creating array?

Now the array is coming up, but the UUID has changed and it is coming up as md127:

me@server:[~] mdadm --detail /dev/md127
Code:
/dev/md127:
        Version : 1.2
  Creation Time : Fri Jun 29 22:51:25 2012
     Raid Level : raid6
     Array Size : 7735912448 (7377.54 GiB 7921.57 GB)
  Used Dev Size : 1933978112 (1844.39 GiB 1980.39 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Fri Jun 29 23:07:42 2012
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
           Name : server:0  (local to host server)
           UUID : d74f392c:f69a96d4:927668fd:91bc3f4a
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        2      active sync   /dev/sde1
       3       8       81        3      active sync   /dev/sdf1
       4       8       97        4      active sync   /dev/sdg1
       5       8      113        5      active sync   /dev/sdh1

The Array UUID on the partitions has changed as well:

me@server:[~] mdadm --misc --examine /dev/sd[bdefgh]1 | grep Array\ UUID
Code:
    Array UUID : d74f392c:f69a96d4:927668fd:91bc3f4a
    Array UUID : d74f392c:f69a96d4:927668fd:91bc3f4a
    Array UUID : d74f392c:f69a96d4:927668fd:91bc3f4a
    Array UUID : d74f392c:f69a96d4:927668fd:91bc3f4a
    Array UUID : d74f392c:f69a96d4:927668fd:91bc3f4a
    Array UUID : d74f392c:f69a96d4:927668fd:91bc3f4a

There is no backup.
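(I assume it shows up as md127 simply because the new UUID no longer matches the ARRAY line in mdadm.conf.) In hindsight, I gather the less destructive first step would have been a forced assemble, or resuming the interrupted reshape, assuming the --backup-file survived the power loss; something along these lines:
Code:
# Force assembly of the existing array despite the unclean shutdown,
# pointing mdadm at the reshape backup file
mdadm --assemble --force --backup-file=/md0.backup /dev/md0 /dev/sd[bdefgh]1
# Or explicitly resume the interrupted reshape
mdadm --grow --continue /dev/md0 --backup-file=/md0.backup
But I had already gone straight to --create before reading about those.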
Is the array recoverable?
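Before writing anything else to the array, I intend to sanity-check the data read-only first, along these lines (ext4 shown just as an example):
Code:
# Read-only filesystem check; -n answers "no" to every repair prompt
fsck.ext4 -n /dev/md127
# Or try a read-only mount and spot-check some files
mount -o ro /dev/md127 /mnt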


