
Thread: need help badly with RAID5 recovery

  1. #1
    Join Date
    Jan 2020
    Beans
    37

    need help badly with RAID5 recovery

I'm a programmer, not an IT guy.

I was asked to help a company my software company is partnered with (small business, no IT department).

They had a guy configure a Linux mdadm RAID5 with 4 drives. One drive failed at some point (it still does the click of death). At some point someone there also removed the 5th drive that backups were being written to; their last backup is from almost 3 years ago.

The system booted off a USB stick which somehow became corrupted, so it would no longer boot. A non-IT guy there then tried to install Linux. I don't know exactly what happened, but the third drive won't assemble into the RAID array anymore: mdadm keeps saying it can't find a superblock, and the drive now has a Linux (bootable) ext4 partition, which is different from the two good drives.

By the time this got put in my lap, the two good drives had had their partitions written over. I'm using testdisk to recover the old deleted partitions, but it is taking a long time to scan; I'm waiting for it now...

The "good" drive that has the Linux "bootable" partition on it I've backed up with dd. I'm going to back up the other two drives with dd as well, but it took 40 hours to do the first one.
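For reference, this is roughly how I'm imaging each drive (the image paths are just examples); for the clicking drive, ddrescue would probably be the better choice since it handles read errors and can resume:
Code:
# plain dd image of one member disk (example paths)
sudo dd if=/dev/sdX of=/mnt/backup/sdX.img bs=1M conv=noerror,sync status=progress
# GNU ddrescue alternative for a flaky drive (keeps a map file so it can resume)
sudo ddrescue -d /dev/sdX /mnt/backup/sdX.img /mnt/backup/sdX.map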

If I try to assemble the RAID array with --assume-clean, does it need the partitions sorted out first? Can mdadm figure it out on its own?

I'm learning as I go on some of this.

I think they are probably f'd and might have to send the drives to some company to try and recover the data forensically.

Attached: 82863169_1097935287217338_2952130880253984768_o.jpg

Attached: 83312896_1097935317217335_6083167858718670848_o.jpg

At this point I think my only option is to use --create --assume-clean to try and get the RAID array back.
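The command I have in mind is roughly this; the chunk size, metadata version, device order, and which slot gets "missing" are guesses on my part and would have to be confirmed from the old -E output before running anything:
Code:
# DO NOT run until the original parameters and device order are known
sudo mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=4 \
    --metadata=1.2 --chunk=512 /dev/sdX1 /dev/sdY1 /dev/sdZ1 missing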
    Last edited by slickymaster; January 24th, 2020 at 05:33 PM. Reason: Removed large images

  2. #2
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: need help badly with RAID5 recovery

Wow... Good luck, that sounds like a really, really messed up situation.

First of all, keep calm if you want to have any shot at this. Even then it might be impossible (through no fault of yours).

So, not counting the dead disk, you have three of them left, and probably one of them was overwritten by installing the OS on it. Did I get that right?

Post the mdadm -E output for each disk. Best if it's text output in CODE tags.
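For example, something roughly like this for each member (the device names are just examples; adjust them to your disks, and use the partition, e.g. /dev/sdc1, if the members were partitions):
Code:
sudo mdadm --examine /dev/sdc
sudo mdadm --examine /dev/sdd
sudo mdadm --examine /dev/sde
sudo mdadm --examine /dev/sdf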

HINT: When using Ubuntu Desktop live mode to troubleshoot/rescue, you can install openssh-server inside the live session if the machine has internet. That will let you SSH into it from another computer, so you can easily copy/paste your terminal output into the forum instead of relying on pictures.

In live mode this setup usually wouldn't survive a reboot, so you would need to install openssh-server again each time you reboot... Just an idea, you don't have to do it if you're not OK with it. But text output gives people a better idea and the full output, while a picture can sometimes cut things off, etc.
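Roughly like this inside the live session (on recent releases the default live user is usually "ubuntu" with no password, so set one first; details can vary between releases):
Code:
sudo apt update
sudo apt install openssh-server
passwd                # set a password for the live session user
ip a                  # note the machine's IP address
# then from another computer: ssh ubuntu@<that-IP>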
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  3. #3
    Join Date
    Jan 2020
    Beans
    37

    Re: need help badly with RAID5 recovery

I ended up installing Ubuntu on an old SSD drive I had.

I'm trying to get the partitions back on the two drives that were good in the RAID.

Right now the partitions are messed up. The -E screenshot of sdc in my first post is what two of them looked like; the third would show a normal ext4 partition like this:

Now both working drives only show this instead of the information that was in the first sdc screenshot. I'm trying to get them back to the state from that first -E screenshot, and I'm hoping testdisk will let me bring back the old partition information. It's taking a long time to scan; it's at 89% now.

Attached: 83080811_1097958503881683_6564467842438135808_n.jpg

The fact that the testdisk partition scan is taking so long on sdc is making me worried. Before, it would only take a couple of minutes, stop at about 57%, and find the old partitions.
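For reference, this is roughly how I'm running testdisk; as far as I understand it can also be pointed at the dd image instead of the raw disk, which might be the safer way to experiment (the image path below is just an example):
Code:
sudo apt install testdisk
sudo testdisk /dev/sdc               # interactive scan of the raw disk
sudo testdisk /mnt/backup/sdc.img    # or scan the dd image instead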
    Last edited by slickymaster; January 24th, 2020 at 05:35 PM. Reason: posts merged. Removed large image

  4. #4
    Join Date
    Jul 2019
    Beans
    34

    Re: need help badly with RAID5 recovery

Your screenshot says it's RAID0, so it's 2 disks acting as 1 now? And RAID5 can only recover from 1 disk failure, so the failed drive should have been replaced when it failed.

Is this server using a dedicated RAID controller or software RAID?
    Last edited by bunny9000; January 22nd, 2020 at 11:23 PM. Reason: Not used software raid!

  5. #5
    Join Date
    Jan 2020
    Beans
    37

    Re: need help badly with RAID5 recovery

It was using software RAID (mdadm).

I believe the underlying data and structure is still on three of the drives; it's just a matter of getting it configured properly again.
    Last edited by bohnnyjliss; January 22nd, 2020 at 11:11 PM.

  6. #6
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: need help badly with RAID5 recovery

Don't worry about that raid0 message. You can often get inconsistent results during rescue work.

But it is worrying that you can't get the same mdadm -E output now. If that disk was good, with a detected RAID partition, I assume you didn't try to change anything on it, so it should still be as it was.

Losing the first, dead disk was not a problem. Even when a second disk goes out of sync, it is still not a very big problem: the array would refuse to assemble, but you would have two good members and one just slightly out of sync (and one dead). With that you can force mdadm to re-assemble.
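A forced re-assembly would look roughly like this; the device names are only placeholders, and you would list the real remaining members (partitions or whole disks, whichever the array actually used):
Code:
sudo mdadm --stop /dev/md0                  # only if a half-assembled array exists
sudo mdadm --assemble --force /dev/md0 /dev/sdX1 /dev/sdY1 /dev/sdZ1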

But if you have only two good members, one dead, and one that was overwritten, then that is serious, because you can't restore RAID5 with two missing members. You did have at least one disk overwritten, right? Is that why you are using testdisk, to try to restore the old partitions on it?
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  7. #7
    Join Date
    Jan 2020
    Beans
    37

    Re: need help badly with RAID5 recovery

AFAIK just the partitions were changed, not the data.

I'm trying to use testdisk to bring back the partition(s) so that the drive shows the RAID partition and not the regular Linux ext4 partition.

The first screenshot of sdc is what I'm trying to get back to at this point, and then move on from there.

This is what it shows for sdc now:

Attached: 83080811_1097958503881683_6564467842438135808_n.jpg

This is what I want:
Attached: 83312896_1097935317217335_6083167858718670848_o.jpg
    Last edited by slickymaster; January 24th, 2020 at 05:36 PM. Reason: removed large images

  8. #8
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: need help badly with RAID5 recovery

First of all, do not forget that drive letters can change. When using USB sticks and/or a new OS disk (as you mentioned), the disks might not always be detected in the same order.

    Could we get the output of:
    Code:
    lsblk
    sudo blkid
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  9. #9
    Join Date
    Jan 2020
    Beans
    37

    Re: need help badly with RAID5 recovery

Does mdadm need the drives to be partitioned correctly? If I try to create the array with --assume-clean and the old parameters, does it care how the drives are partitioned?

    Code:
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0  1008K  1 loop /snap/gnome-logs/61
    loop1    7:1    0 149.9M  1 loop /snap/gnome-3-28-1804/67
    loop2    7:2    0  54.4M  1 loop /snap/core18/1066
    loop3    7:3    0  44.9M  1 loop /snap/gtk-common-themes/1440
    loop4    7:4    0   956K  1 loop /snap/gnome-logs/81
    loop5    7:5    0   3.7M  1 loop /snap/gnome-system-monitor/100
    loop6    7:6    0  14.8M  1 loop /snap/gnome-characters/296
    loop7    7:7    0   3.7M  1 loop /snap/gnome-system-monitor/123
    loop8    7:8    0 156.7M  1 loop /snap/gnome-3-28-1804/110
    loop9    7:9    0  42.8M  1 loop /snap/gtk-common-themes/1313
    loop10   7:10   0     4M  1 loop /snap/gnome-calculator/406
    loop11   7:11   0  88.5M  1 loop /snap/core/7270
    loop12   7:12   0  14.8M  1 loop /snap/gnome-characters/375
    loop13   7:13   0  89.1M  1 loop /snap/core/8268
    loop14   7:14   0  54.7M  1 loop /snap/core18/1650
    loop15   7:15   0   4.2M  1 loop /snap/gnome-calculator/544
    loop16   7:16   0  15.8M  1 loop /snap/kolourpaint/44
    loop17   7:17   0 260.7M  1 loop /snap/kde-frameworks-5-core18/32
    sdb      8:16   0  55.9G  0 disk 
    └─sdb1   8:17   0  55.9G  0 part /
    sdc      8:32   0   3.7T  0 disk 
    sdd      8:48   0   3.7T  0 disk 
    └─sdd1   8:49   0     2T  0 part 
    sde      8:64   0   3.7T  0 disk 
    └─sde1   8:65   0     2T  0 part 
    sdf      8:80   0   3.7T  0 disk 
    └─sdf1   8:81   0     2T  0 part 
    sdg      8:96   0   3.7T  0 disk 
    sdh      8:112  0   3.7T  0 disk 
    └─sdh1   8:113  0     2T  0 part
    Code:
    /dev/sdb1: UUID="40ccaf56-2e46-4ba1-9eaa-042d8cd2ed54" TYPE="ext4" PARTUUID="c3e9cd3e-01"
    /dev/sde1: UUID="6b50d3e4-e44e-4432-8c7c-7091a5569b80" TYPE="ext4" PARTUUID="1503067f-01"
    /dev/loop0: TYPE="squashfs"
    /dev/loop1: TYPE="squashfs"
    /dev/loop2: TYPE="squashfs"
    /dev/loop3: TYPE="squashfs"
    /dev/loop4: TYPE="squashfs"
    /dev/loop5: TYPE="squashfs"
    /dev/loop6: TYPE="squashfs"
    /dev/loop7: TYPE="squashfs"
    /dev/sdc: PTUUID="42d38be4-ed7a-47d3-88cf-a5e8e6f60d2d" PTTYPE="gpt"
    /dev/sdd1: UUID="396bdf0d-c756-4665-918f-691c38ce7a02" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="ef5721a2-5503-45ad-ac64-b0911eeca98b"
    /dev/sdf1: UUID="6b50d3e4-e44e-4432-8c7c-7091a5569b80" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="cb8c8b8c-2c94-48dd-b987-374ab4e91b82"
    /dev/loop8: TYPE="squashfs"
    /dev/loop9: TYPE="squashfs"
    /dev/loop10: TYPE="squashfs"
    /dev/loop11: TYPE="squashfs"
    /dev/loop12: TYPE="squashfs"
    /dev/loop13: TYPE="squashfs"
    /dev/loop14: TYPE="squashfs"
    /dev/loop15: TYPE="squashfs"
    /dev/loop16: TYPE="squashfs"
    /dev/sdg: PTUUID="42d38be4-ed7a-47d3-88cf-a5e8e6f60d2d" PTTYPE="gpt"
    /dev/sdh1: UUID="396bdf0d-c756-4665-918f-691c38ce7a02" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="ef5721a2-5503-45ad-ac64-b0911eeca98b"
    /dev/loop17: TYPE="squashfs"
    Last edited by howefield; January 24th, 2020 at 05:33 PM. Reason: posts merged.

  10. #10
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: need help badly with RAID5 recovery

It depends. You can use either a whole disk (no partitions) or a partition as an mdadm member. There is of course a difference: even if the disk has only one partition using the whole space, that member would be called, for example, /dev/sda1, while a whole-disk member without partitions would be /dev/sda.

It makes a big difference which one you put into the command; it has to match where the data actually is.

Please reply to my previous post; you are going a little too fast. We need an idea of the partition layout first.
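To see which form was used, you can examine both the whole disk and a partition and look for an mdadm superblock (device names here are just examples taken from your lsblk output; the partitions only exist where the installer or testdisk left them):
Code:
sudo mdadm --examine /dev/sdc     # whole-disk member?
sudo mdadm --examine /dev/sdd     # whole-disk member?
sudo mdadm --examine /dev/sdd1    # partition member?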
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

