Failed /dev/sdb1, hot-removed it, then zeroed its superblock:
Code:
mike@bastion:~$ sudo mdadm /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
mike@bastion:~$ sudo mdadm /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0
mike@bastion:~$ sudo mdadm --zero-superblock /dev/sdb1
mike@bastion:~$ watch cat /proc/mdstat
mike@bastion:~$ sudo mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Mar  7 22:57:49 2024
        Raid Level : raid5
        Array Size : 2441267200 (2.27 TiB 2.50 TB)
     Used Dev Size : 488253440 (465.63 GiB 499.97 GB)
      Raid Devices : 6
     Total Devices : 7
       Persistence : Superblock is persistent

       Update Time : Sun Mar 17 15:31:36 2024
             State : clean, degraded, recovering
    Active Devices : 5
   Working Devices : 7
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 1% complete

              Name : bastion:0  (local to host bastion)
              UUID : 58952835:75d234f4:d4201fb9:d535d0c4
            Events : 8239

    Number   Major   Minor   RaidDevice State
       6       8      113        0      spare rebuilding   /dev/sdh1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       81        4      active sync   /dev/sdf1
       5       8       97        5      active sync   /dev/sdg1

       7       8      129        -      spare   /dev/sdi1
mike@bastion:~$ lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
loop0       7:0    0   9.8M  1 loop  /snap/canonical-livepatch/264
loop1       7:1    0  63.9M  1 loop  /snap/core20/2182
loop2       7:2    0  63.9M  1 loop  /snap/core20/2105
loop3       7:3    0  74.2M  1 loop  /snap/core22/1122
loop4       7:4    0    87M  1 loop  /snap/lxd/27037
loop5       7:5    0    87M  1 loop  /snap/lxd/27428
loop6       7:6    0  40.4M  1 loop  /snap/snapd/20671
loop7       7:7    0  39.1M  1 loop  /snap/snapd/21184
sda         8:0    0 232.9G  0 disk
├─sda1      8:1    0     1G  0 part  /boot/efi
└─sda2      8:2    0 231.8G  0 part  /
sdb         8:16   0 465.8G  0 disk
└─sdb1      8:17   0 465.8G  0 part
sdc         8:32   0 465.8G  0 disk
└─sdc1      8:33   0 465.8G  0 part
  └─md0     9:0    0   2.3T  0 raid5
sdd         8:48   0 465.8G  0 disk
└─sdd1      8:49   0 465.8G  0 part
  └─md0     9:0    0   2.3T  0 raid5
sde         8:64   0 465.8G  0 disk
└─sde1      8:65   0 465.8G  0 part
  └─md0     9:0    0   2.3T  0 raid5
sdf         8:80   0 465.8G  0 disk
└─sdf1      8:81   0 465.8G  0 part
  └─md0     9:0    0   2.3T  0 raid5
sdg         8:96   0 465.8G  0 disk
└─sdg1      8:97   0 465.8G  0 part
  └─md0     9:0    0   2.3T  0 raid5
sdh         8:112  0 465.8G  0 disk
└─sdh1      8:113  0 465.8G  0 part
  └─md0     9:0    0   2.3T  0 raid5
sdi         8:128  0 465.8G  0 disk
└─sdi1      8:129  0 465.8G  0 part
  └─md0     9:0    0   2.3T  0 raid5
mike@bastion:~$
Things are looking really good. At this moment the array is resyncing / rebuilding. Once that completes, there may be one last grow command to run (only if it's needed for some strange reason), something like the sketch below.
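If the member count did somehow need adjusting after the resync, it would be along these lines (just a sketch; the --raid-devices=6 value assumes the array should stay at the 6 members that -D already reports, so in practice this probably never gets run):
Code:
# confirm the member count and state once the resync is done
sudo mdadm -D /dev/md0 | grep -E 'Raid Devices|State'
# only if the count is actually wrong, set it back to 6 members
sudo mdadm --grow /dev/md0 --raid-devices=6
# and keep an eye on any reshape that kicks off
watch cat /proc/mdstat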
I know I posted earlier that I wanted /dev/sdd1 to be listed as the spare, but I'm trying to decide if I really want to force that member into the spare slot.
In my mind that would mean failing it, removing it, then adding it back as a spare (sdd1), as sketched below.
Or I could just let it run as it is currently set up, which is what I'm really leaning towards.
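For reference, forcing sdd1 into the spare slot would just be the same dance I did with sdb1 (a sketch only, and it assumes the current rebuild onto sdh1 has finished first so the array isn't left doubly degraded):
Code:
# make sure the rebuild onto sdh1 is complete before touching anything else
cat /proc/mdstat
# fail and pull sdd1, wipe its superblock, then add it back
sudo mdadm /dev/md0 --fail /dev/sdd1
sudo mdadm /dev/md0 --remove /dev/sdd1
sudo mdadm --zero-superblock /dev/sdd1
sudo mdadm /dev/md0 --add /dev/sdd1
The catch is that failing sdd1 triggers another rebuild onto the existing spare (sdi1), and sdd1 then comes back as the spare; that extra rebuild is a big part of why I'm leaning towards leaving it alone.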