Sorry to bump this thread, but I just created a 3-drive RAID 5 with mdadm and it is currently showing one of the drives as a spare...
Code:
mdadm - v3.2.5 - 18th May 2012
Code:
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]1
Code:
root@Loki:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid5 sdd1[3](S) sdc1[1] sdb1[0]
3906763776 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
unused devices: <none>
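Decoding that output: the "(S)" after sdd1 marks it as a spare, and "[3/2] [UU_]" means the array wants 3 devices but only 2 are currently up. If you want to see what role a member disk itself reports, you can examine its superblock; a minimal sketch, assuming the same device names as above:
Code:
# print the RAID superblock on sdd1, including its Device Role and Array State
mdadm --examine /dev/sdd1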
Code:
root@Loki:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Jul 14 16:36:40 2013
Raid Level : raid5
Array Size : 3906763776 (3725.78 GiB 4000.53 GB)
Used Dev Size : 1953381888 (1862.89 GiB 2000.26 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sun Jul 14 16:36:40 2013
State : clean, degraded
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : Loki:0 (local to host Loki)
UUID : 7e33c05b:c1160a6a:fba6e791:5a1cae68
Events : 0
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       0        0        2      removed
       3       8       49        -      spare         /dev/sdd1
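Reading the table: slot 2 (RaidDevice 2) is "removed", meaning no member is active in that position yet, and sdd1 is attached as a spare, which lines up with the [UU_] in mdstat. To keep an eye on what the array is doing while investigating, something like:
Code:
# refresh the sync/recovery status every 5 seconds
watch -n 5 cat /proc/mdstat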
Guess it's time to run SMART tests on all three drives, even though they all passed badblocks.
EDIT: Maybe I'm overreacting... I just created the array, so I think it's still syncing.
EDIT2: Never mind, I forgot to format the array. It looks like it's rebuilding now. Recovery at 59000K/sec sounds about right (I guess?).
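What was going on, as far as I can tell from the mdadm man page: a fresh RAID5 is deliberately created degraded, with the last device as a spare that then gets rebuilt into the array (building in a spare is faster than resyncing parity on all three members), and the "auto-read-only" state holds recovery off until the first write. Formatting the array was that first write. A sketch of the same sequence, with ext4 just as an example filesystem:
Code:
# any write takes the array out of auto-read-only; mkfs counts (ext4 is just an example)
mkfs.ext4 /dev/md0
# recovery should now be visible
cat /proc/mdstat
# rebuild speed is bounded by these kernel tunables (KiB/s)
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
For reference, 59000K/sec is roughly 59 MB/s, which seems plausible for 2 TB spinning disks; the actual rate will sit between those two limits depending on other I/O.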