
Thread: MDADM Raid 5 failure

  1. #1
    Join Date: May 2009 | Location: Texas | Beans: 61 | Distro: Ubuntu

    MDADM Raid 5 failure

    Fresh install of Ubuntu 20.04.
    I have a four-disk RAID 5 array consisting of four 2 TB WD Green drives, migrated from a prior Ubuntu install.
    When I installed mdadm on the new system everything seemed fine - for a while. The RAID mounted fine.
    Now it seems /dev/sdb is missing from the array and it will no longer mount. How can I get /dev/sdb back into the /dev/md5 array? Is it a problem with its magic number?

    Code:
    johnw@JW-MediaServ:~$ sudo mdadm --examine /dev/sd[b-f]
    /dev/sdb:
       MBR Magic : aa55
    Partition[0] :   3907029167 sectors at            1 (type ee)
    /dev/sdc:
       MBR Magic : aa55
    Partition[0] :    429561856 sectors at    804812800 (type 83)
    Partition[1] :     16384000 sectors at         2048 (type 82)
    Partition[2] :     15886338 sectors at   1234376702 (type 05)
    Partition[3] :    788426752 sectors at     16386048 (type 83)
    /dev/sdd:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 52c035ba:7524c0e7:9ae73d40:d13ef8b4
               Name : johnw-desktop:5
      Creation Time : Wed Jan  8 17:58:05 2014
         Raid Level : raid5
       Raid Devices : 4
    
    
     Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
         Array Size : 5860150272 (5588.67 GiB 6000.79 GB)
      Used Dev Size : 3906766848 (1862.89 GiB 2000.26 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
       Unused Space : before=262064 sectors, after=176 sectors
              State : clean
        Device UUID : c805b298:a422eedd:67a295a0:96748203
    
    
        Update Time : Sat May 16 04:07:23 2020
           Checksum : dc61b744 - correct
             Events : 31288
    
    
             Layout : left-symmetric
         Chunk Size : 512K
    
    
       Device Role : Active device 0
       Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sde:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 52c035ba:7524c0e7:9ae73d40:d13ef8b4
               Name : johnw-desktop:5
      Creation Time : Wed Jan  8 17:58:05 2014
         Raid Level : raid5
       Raid Devices : 4
    
    
     Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
         Array Size : 5860150272 (5588.67 GiB 6000.79 GB)
      Used Dev Size : 3906766848 (1862.89 GiB 2000.26 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
       Unused Space : before=262064 sectors, after=176 sectors
              State : clean
        Device UUID : b35a61bd:e3027883:0be588d7:6b98abd3
    
    
        Update Time : Sat May 16 04:07:23 2020
           Checksum : adb89415 - correct
             Events : 31288
    
    
             Layout : left-symmetric
         Chunk Size : 512K
    
    
       Device Role : Active device 1
       Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sdf:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 52c035ba:7524c0e7:9ae73d40:d13ef8b4
               Name : johnw-desktop:5
      Creation Time : Wed Jan  8 17:58:05 2014
         Raid Level : raid5
       Raid Devices : 4
    
    
     Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
         Array Size : 5860150272 (5588.67 GiB 6000.79 GB)
      Used Dev Size : 3906766848 (1862.89 GiB 2000.26 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
       Unused Space : before=262064 sectors, after=176 sectors
              State : clean
        Device UUID : 1816504c:f6de06bc:3f9103cc:2dbd98a7
    
    
        Update Time : Sat May 16 04:07:23 2020
           Checksum : 3d9cbb55 - correct
             Events : 31288
    
    
             Layout : left-symmetric
         Chunk Size : 512K
    
    
       Device Role : Active device 2
       Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

  2. #2
    Join Date: Jan 2008 | Beans: 161 | Distro: Kubuntu 18.04 Bionic Beaver

    Re: MDADM Raid 5 failure

    mdadm doesn't see anything RAID-like on sdb; it only sees a non-RAID partition there. Two big ifs apply: the device must be healthy (something made it fail), and it must not be in use for something else (partition type ee is the protective MBR entry for GPT, implying there are GPT partitions on the disk). Given that the array was used for some unknown time after the failure, the only way forward is to fail and remove this device from the array (if it is still a member, which I doubt - check /proc/mdstat) and re-add it as a new device.

    Before that, checking the disk with something like smartctl or GSmartControl and examining its GPT partitions with [g]parted is strongly recommended. I also don't like the idea of using whole devices instead of partitions for the array, but if it was created this way it cannot easily be changed.
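
    In shell terms, the fail/remove/add cycle described above would look something like this - a sketch only, to be run after the disk checks out, with device names adjusted to match your system:
    Code:
    # Confirm whether sdb is still listed as a member of md5
    cat /proc/mdstat
    # If it is, fail it and remove it from the array
    sudo mdadm /dev/md5 --fail /dev/sdb
    sudo mdadm /dev/md5 --remove /dev/sdb
    # Add it back as a new device; this triggers a full rebuild
    sudo mdadm /dev/md5 --add /dev/sdb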

  3. #3
    Join Date: May 2009 | Location: Texas | Beans: 61 | Distro: Ubuntu

    Re: MDADM Raid 5 failure

    I ran smartctl:

    Code:
    johnw@JW-MediaServ:/media/johnw/System 2$ sudo smartctl -a /dev/sdb
    smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.0-29-generic] (local build)
    Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
    
    
    === START OF INFORMATION SECTION ===
    Model Family:     Western Digital Caviar Green (AF)
    Device Model:     WDC WD20EARS-22MVWB0
    Serial Number:    WD-WMAZA4770810
    LU WWN Device Id: 5 0014ee 002c80ebc
    Firmware Version: 51.0AB51
    User Capacity:    2,000,398,934,016 bytes [2.00 TB]
    Sector Sizes:     512 bytes logical, 4096 bytes physical
    Device is:        In smartctl database [for details use: -P show]
    ATA Version is:   ATA8-ACS (minor revision not indicated)
    SATA Version is:  SATA 2.6, 3.0 Gb/s
    Local Time is:    Sun May 17 15:22:30 2020 CDT
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    
    
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    
    
    General SMART Values:
    Offline data collection status:  (0x82)    Offline data collection activity
                        was completed without error.
                        Auto Offline Data Collection: Enabled.
    Self-test execution status:      (   0)    The previous self-test routine completed
                        without error or no self-test has ever 
                        been run.
    Total time to complete Offline 
    data collection:         (38580) seconds.
    Offline data collection
    capabilities:              (0x7b) SMART execute Offline immediate.
                        Auto Offline data collection on/off support.
                        Suspend Offline collection upon new
                        command.
                        Offline surface scan supported.
                        Self-test supported.
                        Conveyance Self-test supported.
                        Selective Self-test supported.
    SMART capabilities:            (0x0003)    Saves SMART data before entering
                        power-saving mode.
                        Supports SMART auto save timer.
    Error logging capability:        (0x01)    Error logging supported.
                        General Purpose Logging supported.
    Short self-test routine 
    recommended polling time:      (   2) minutes.
    Extended self-test routine
    recommended polling time:      ( 372) minutes.
    Conveyance self-test routine
    recommended polling time:      (   5) minutes.
    SCT capabilities:            (0x3035)    SCT Status supported.
                        SCT Feature Control supported.
                        SCT Data Table supported.
    
    
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       78
      3 Spin_Up_Time            0x0027   253   190   021    Pre-fail  Always       -       1116
      4 Start_Stop_Count        0x0032   095   095   000    Old_age   Always       -       5595
      5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
      7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
      9 Power_On_Hours          0x0032   062   062   000    Old_age   Always       -       27916
     10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
     11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
     12 Power_Cycle_Count       0x0032   095   095   000    Old_age   Always       -       5543
    192 Power-Off_Retract_Count 0x0032   199   199   000    Old_age   Always       -       752
    193 Load_Cycle_Count        0x0032   135   135   000    Old_age   Always       -       195245
    194 Temperature_Celsius     0x0022   123   102   000    Old_age   Always       -       27
    196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
    197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       126
    198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       129
    199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
    200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       33
    
    
    SMART Error Log Version: 1
    ATA Error Count: 3
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
    Powered_Up_Time is measured from power on, and printed as
    DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
    SS=sec, and sss=millisec. It "wraps" after 49.710 days.
    
    
    Error 3 occurred at disk power-on lifetime: 348 hours (14 days + 12 hours)
      When the command that caused the error occurred, the device was active or idle.
    
    
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      04 51 01 00 00 00 00  Error: ABRT
    
    
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      b0 d6 01 e0 4f c2 00 08      00:22:02.268  SMART WRITE LOG
      ec 00 01 00 00 00 00 08      00:15:33.340  IDENTIFY DEVICE
      b0 d5 01 e1 4f c2 00 08      00:15:33.338  SMART READ LOG
      b0 d6 01 e0 4f c2 00 08      00:15:33.338  SMART WRITE LOG
    
    
    Error 2 occurred at disk power-on lifetime: 347 hours (14 days + 11 hours)
      When the command that caused the error occurred, the device was active or idle.
    
    
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      04 51 01 00 00 00 00  Error: ABRT
    
    
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      b0 d5 01 e1 4f c2 00 08      00:15:33.338  SMART READ LOG
      b0 d6 01 e0 4f c2 00 08      00:15:33.338  SMART WRITE LOG
    
    
    Error 1 occurred at disk power-on lifetime: 347 hours (14 days + 11 hours)
      When the command that caused the error occurred, the device was active or idle.
    
    
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      04 51 01 00 00 00 00  Error: ABRT
    
    
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      b0 d6 01 e0 4f c2 00 08      00:15:33.338  SMART WRITE LOG
    
    
    SMART Self-test log structure revision number 1
    No self-tests have been logged.  [To run self-tests, use: smartctl -t]
    
    
    SMART Selective self-test log data structure revision number 1
     SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
        1        0        0  Not_testing
        2        0        0  Not_testing
        3        0        0  Not_testing
        4        0        0  Not_testing
        5        0        0  Not_testing
    Selective self-test flags (0x0):
      After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.
    mdstat:
    Code:
    johnw@JW-MediaServ:/media/johnw/System 2$ cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md5 : active raid5 sdd[4] sdf[3] sde[1]
          5860150272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
          
    unused devices: <none>
    I'm not sure how to interpret the SMART data, but I saw "passed", so I just need to re-add it to the array, correct?

    Re-add doesn't appear to work:
    Code:
    johnw@JW-MediaServ:/media/johnw/System 2$ sudo mdadm --manage /dev/md5 --re-add /dev/sdb
    mdadm: --re-add for /dev/sdb to /dev/md5 is not possible
    Last edited by Slownis; May 17th, 2020 at 10:12 PM.

  4. #4
    Join Date: Mar 2010 | Location: Squidbilly-Land | Beans: 20,130 | Distro: Ubuntu Mate 16.04 Xenial Xerus

    Re: MDADM Raid 5 failure

    Code:
    197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       126
    That means data loss. Buy a new disk - last week. Today is too late.
    Devices over 1 TB shouldn't be run except in RAID1 or RAID10 configs, imho.

    The "passed" just means the test finished.

    RAID is not a backup. Get the backups ready for restore.
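
    If you want to keep an eye on those counters while deciding, a quick check looks like this (a sketch; /dev/sdb taken from the posts above):
    Code:
    # Print just the sector-health attributes for the suspect disk
    sudo smartctl -A /dev/sdb | grep -E 'Reallocated|Current_Pending|Offline_Uncorrectable'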

  5. #5
    Join Date: Jan 2008 | Beans: 161 | Distro: Kubuntu 18.04 Bionic Beaver

    Re: MDADM Raid 5 failure

    Quote Originally Posted by Slownis View Post
    I'm not sure how to interpret the smart data, but I saw "passed"
    Unfortunately, the overall test line is often misleading and reports 'passed' for disks that are not in good health. It doesn't, however, mean only that the test finished; more can be found at the smartmontools site: https://www.smartmontools.org/wiki/F...D.Whatsgoingon

    A non-zero Current_Pending_Sector does not necessarily mean data loss either, but it is a cause for concern, and it is probably what caused the disk to drop out of the array. More information can be found in syslog from the time the array failed.

    I concur with the previous poster that the disk has to be replaced. I strongly disagree, however, with his statement regarding disk sizes. As a proud owner of an md RAID6 built from reasonably fast 7200 rpm 10T disks, I can say that rebuild time is quite acceptable - well under 1 day - and even if such views were ever correct, those days are long gone.
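
    To dig those messages out of the logs, something along these lines should work (a sketch; log file names and boot numbers vary by system):
    Code:
    # Kernel messages mentioning the array or the suspect disk
    grep -iE 'md5|sdb' /var/log/syslog /var/log/syslog.1
    # Or, via the journal, from the previous boot:
    journalctl -k -b -1 | grep -iE 'md5|sdb'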

  6. #6
    Join Date: Aug 2011 | Location: 51.8° N 5.8° E | Beans: 5,209 | Distro: Xubuntu 19.10 Eoan Ermine

    Re: MDADM Raid 5 failure

    Quote Originally Posted by Slownis View Post
    I ran smartctl:

    Code:
    johnw@JW-MediaServ:/media/johnw/System 2$ sudo smartctl -a /dev/sdb
    smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.0-29-generic] (local build)
    Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
    sudo smartctl -a /dev/sdb doesn't actually run a test; it only prints the results of the last one. And according to this:
    Code:
    SMART Self-test log structure revision number 1
    No self-tests have been logged.  [To run self-tests, use: smartctl -t]
    no test was performed.

    I suggest that you actually run a test:
    Code:
    sudo smartctl -t long /dev/sdb
    That will run the long test, which according to
    Code:
    Short self-test routine 
    recommended polling time:      (   2) minutes.
    Extended self-test routine
    recommended polling time:      ( 372) minutes.
    Conveyance self-test routine
    recommended polling time:      (   5) minutes.
    will take about 6 hours 12 minutes (372 minutes). Don't suspend, hibernate, or switch off your computer while the test is running.

    One thing I don't understand about your test is this:
    Code:
      5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
    197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       126
    This tells us that 126 sectors are bad and waiting to be reallocated to spare sectors (if still available), yet no sectors have been reallocated so far, so spares must still be available. Maybe running that test and printing the results again will give more sensible numbers.
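
    Once the long test is started, progress and results can be checked without interrupting it (a sketch):
    Code:
    # Shows progress while the test runs, and the result log once it finishes
    sudo smartctl -l selftest /dev/sdb
    # Re-print the attribute table afterwards to compare the counters
    sudo smartctl -A /dev/sdb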

  7. #7
    Join Date: Nov 2009 | Location: Catalunya, Spain | Beans: 14,375 | Distro: Ubuntu 18.04 Bionic Beaver

    Re: MDADM Raid 5 failure

    1. Best practice is to use mdadm with partitions, not whole disks. Make a partition spanning the whole disk (do NOT format it) and use that as the member device of the mdadm raid. In your case you used disks. However, this by itself is not a reason for the array to fail. (A sketch of the partition-based layout follows at the end of this post, for future reference.)

    2. RAID5 should still work correctly with one member failed. From your last posts it seems md5 is assembled, so post the output of its details:
    Code:
    sudo mdadm -D /dev/md5
    sudo blkid
    Don't do anything else for the moment until you have more info. A wrong action can put your data in danger. And if md5 is not mounted right now, don't try to mount it yet.
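
    For future reference only - not something to run now, and certainly not on a current array member - the partition-based layout from point 1 could be set up on a fresh disk roughly like this (a sketch, with /dev/sdX standing in for the new disk):
    Code:
    sudo parted -s /dev/sdX mklabel gpt                      # new, empty GPT label
    sudo parted -s -a optimal /dev/sdX mkpart raid 0% 100%   # one partition spanning the disk
    sudo parted -s /dev/sdX set 1 raid on                    # flag it as a Linux RAID member
    # then use the unformatted /dev/sdX1 as the mdadm member device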
    Last edited by darkod; May 18th, 2020 at 05:29 PM.
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  8. #8
    Join Date: May 2009 | Location: Texas | Beans: 61 | Distro: Ubuntu

    Re: MDADM Raid 5 failure

    Quote Originally Posted by darkod View Post
    1. Best practice is to use mdadm with partitions, not disks. Make a partition spanning the whole disk (do NOT format it) and use that as device member of mdadm raid. In your case you used disks. However, this is not a reason to fail.
    Yep, I've seen that posted many times, but this was set up many years ago to use the whole disks and it's the world I live in for now - until I decide to buy more disks.

    Quote Originally Posted by darkod
    2. Raid5 should still work correctly with one member failed. From your last posts it seems md5 is assembled, so post the output of its details:
    Code:
    sudo mdadm -D /dev/md5
    sudo blkid
    Code:
    johnw@JW-MediaServ:~$ sudo mdadm -D /dev/md5
    [sudo] password for johnw: 
    /dev/md5:
               Version : 1.2
         Creation Time : Sun May 17 20:19:05 2020
            Raid Level : raid5
            Array Size : 5860147200 (5588.67 GiB 6000.79 GB)
         Used Dev Size : 1953382400 (1862.89 GiB 2000.26 GB)
          Raid Devices : 4
         Total Devices : 3
           Persistence : Superblock is persistent
    
    
         Intent Bitmap : Internal
    
    
           Update Time : Sun May 17 20:19:05 2020
                 State : clean, degraded 
        Active Devices : 3
       Working Devices : 3
        Failed Devices : 0
         Spare Devices : 0
    
    
                Layout : left-symmetric
            Chunk Size : 512K
    
    
    Consistency Policy : bitmap
    
    
                  Name : JW-MediaServ:5  (local to host JW-MediaServ)
                  UUID : 26dea2a8:7a9868a6:9dc927a9:4b48d1ee
                Events : 0
    
    
        Number   Major   Minor   RaidDevice State
           0       8       64        0      active sync   /dev/sde
           1       8       80        1      active sync   /dev/sdf
           2       8       48        2      active sync   /dev/sdd
           -       0        0        3      removed
    Quote Originally Posted by darkod
    Don't do anything else for the moment until you get more info. Wrong action can put your data in danger. And if md5 is not mounted right now, don't try to mount it yet.
    Too late - I've already tried some stuff I googled that made sense to me; the result is what you see above. Now when I try to add /dev/sdb to the array, I get "mdadm: add new device failed for /dev/sdb as 4: Invalid argument". When I attempt to mount it, I get a different, generically worded error about the lack of a partition table. Lesson learned; I will not take further action until instructed to do so.

  9. #9
    Join Date: Nov 2009 | Location: Catalunya, Spain | Beans: 14,375 | Distro: Ubuntu 18.04 Bionic Beaver

    Re: MDADM Raid 5 failure

    Give us the members' superblocks again:
    Code:
    sudo mdadm -E /dev/sd[def]
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  10. #10
    Join Date: May 2009 | Location: Texas | Beans: 61 | Distro: Ubuntu

    Re: MDADM Raid 5 failure

    Code:
    johnw@JW-MediaServ:~$ sudo mdadm -E /dev/sd[def]
    [sudo] password for johnw: 
    /dev/sdd:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : 26dea2a8:7a9868a6:9dc927a9:4b48d1ee
               Name : JW-MediaServ:5  (local to host JW-MediaServ)
      Creation Time : Sun May 17 20:19:05 2020
         Raid Level : raid5
       Raid Devices : 4
    
    
     Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
         Array Size : 5860147200 (5588.67 GiB 6000.79 GB)
      Used Dev Size : 3906764800 (1862.89 GiB 2000.26 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=176 sectors
              State : clean
        Device UUID : 42cad468:4b4e48e3:867c4c1a:57cced19
    
    
    Internal Bitmap : 8 sectors from superblock
        Update Time : Sun May 17 20:19:05 2020
      Bad Block Log : 512 entries available at offset 16 sectors
           Checksum : c4d961a6 - correct
             Events : 0
    
    
             Layout : left-symmetric
         Chunk Size : 512K
    
    
       Device Role : Active device 2
       Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sde:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : 26dea2a8:7a9868a6:9dc927a9:4b48d1ee
               Name : JW-MediaServ:5  (local to host JW-MediaServ)
      Creation Time : Sun May 17 20:19:05 2020
         Raid Level : raid5
       Raid Devices : 4
    
    
     Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
         Array Size : 5860147200 (5588.67 GiB 6000.79 GB)
      Used Dev Size : 3906764800 (1862.89 GiB 2000.26 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=176 sectors
              State : clean
        Device UUID : 5641c8d4:c87301c5:eb7aca56:568fb6ae
    
    
    Internal Bitmap : 8 sectors from superblock
        Update Time : Sun May 17 20:19:05 2020
      Bad Block Log : 512 entries available at offset 16 sectors
           Checksum : e3ccbf9a - correct
             Events : 0
    
    
             Layout : left-symmetric
         Chunk Size : 512K
    
    
       Device Role : Active device 0
       Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sdf:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : 26dea2a8:7a9868a6:9dc927a9:4b48d1ee
               Name : JW-MediaServ:5  (local to host JW-MediaServ)
      Creation Time : Sun May 17 20:19:05 2020
         Raid Level : raid5
       Raid Devices : 4
    
    
     Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
         Array Size : 5860147200 (5588.67 GiB 6000.79 GB)
      Used Dev Size : 3906764800 (1862.89 GiB 2000.26 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=176 sectors
              State : clean
        Device UUID : 81244e01:ed645a74:3417ff76:d5fbe970
    
    
    Internal Bitmap : 8 sectors from superblock
        Update Time : Sun May 17 20:19:05 2020
      Bad Block Log : 512 entries available at offset 16 sectors
           Checksum : a2139cb2 - correct
             Events : 0
    
    
             Layout : left-symmetric
         Chunk Size : 512K
    
    
       Device Role : Active device 1
       Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
