Well, I only see two drives in the dmesg output.
I also see this, which is a bit concerning.
Code:
[ 3441.507732] megaraid_sas 0000:18:00.0: megasas_disable_intr_fusion is called outbound_intr_mask:0xffffffff
[ 3441.507827] megaraid_sas 0000:18:00.0: FW in FAULT state Fault code:0xfff0000 subcode:0xff00 func:megasas_wait_for_outstanding_fusion
[ 3441.507840] megaraid_sas 0000:18:00.0: resetting fusion adapter scsi0.
[ 3441.508622] megaraid_sas 0000:18:00.0: Outstanding fastpath IOs: 0
[ 3552.623247] megaraid_sas 0000:18:00.0: Diag reset adapter never cleared megasas_adp_reset_fusion 3977
[ 3663.982765] megaraid_sas 0000:18:00.0: Diag reset adapter never cleared megasas_adp_reset_fusion 3977
[ 3775.342278] megaraid_sas 0000:18:00.0: Diag reset adapter never cleared megasas_adp_reset_fusion 3977
[ 3775.342284] megaraid_sas 0000:18:00.0: Reset failed, killing adapter scsi0.
[ 3776.354351] megaraid_sas 0000:18:00.0: Failed from megasas_fault_detect_work 1933, do not re-arm timer
[ 4833.218938] sd 0:2:1:0: [sdb] tag#964 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=0s
[ 4833.218944] sd 0:2:1:0: [sdb] tag#964 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
[ 4833.218949] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ 4833.219000] sd 0:2:1:0: [sdb] tag#965 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=0s
[ 4833.219003] sd 0:2:1:0: [sdb] tag#965 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
[ 4833.219006] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 4833.219011] Buffer I/O error on dev sdb, logical block 0, async page read
[ 4833.219035] sd 0:2:1:0: [sdb] tag#966 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=0s
[ 4833.219038] sd 0:2:1:0: [sdb] tag#966 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
[ 4833.219041] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 4833.219044] Buffer I/O error on dev sdb, logical block 0, async page read
[ 4833.219424] sd 0:2:0:0: [sda] tag#967 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=0s
[ 4833.219427] sd 0:2:0:0: [sda] tag#967 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
[ 4833.219430] blk_update_request: I/O error, dev sda, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ 4833.219458] sd 0:2:0:0: [sda] tag#968 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=0s
[ 4833.219461] sd 0:2:0:0: [sda] tag#968 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
[ 4833.219463] blk_update_request: I/O error, dev sda, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 4833.219466] Buffer I/O error on dev sda, logical block 0, async page read
[ 4833.219486] sd 0:2:0:0: [sda] tag#969 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=0s
[ 4833.219489] sd 0:2:0:0: [sda] tag#969 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
[ 4833.219491] blk_update_request: I/O error, dev sda, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 4833.219493] Buffer I/O error on dev sda, logical block 0, async page read
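If you want to catch this earlier next time, you can scan the kernel log for the fault signatures that show up above (firmware fault, failed reset, and the resulting DID_NO_CONNECT errors). Just a rough sketch; the match strings are taken straight from the log, and here a sample line stands in for live `dmesg` output:

```shell
#!/bin/sh
# Sample line from the log above; in practice you'd pipe in `dmesg` instead.
log='megaraid_sas 0000:18:00.0: FW in FAULT state Fault code:0xfff0000 subcode:0xff00'

# Match the three signatures seen in this thread's log.
if printf '%s\n' "$log" | \
   grep -qE 'FW in FAULT state|Reset failed, killing adapter|DID_NO_CONNECT'; then
  echo "controller fault detected"
else
  echo "no controller faults logged"
fi
```

Dropped into a cron job against `dmesg`, something like this would at least flag the controller dying before the array silently disappears.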
I've seen this type of error before with my older LSI 9260-8i: the controller firmware faulted and the array disappeared from my server.
Replacing the RAID card resolved the issue, but it recurred after a couple of years due to what I assume was insufficient cooling. I've since moved to a dumb HBA with software RAID and haven't had any of those issues again.