Updates from today
1) I've created a copy of "disk1" onto "newdisk1" using ddrescue:
Code:
ubuntu@ubuntu:~/Desktop$ sudo ddrescue -d --force /dev/sdb /dev/sda ~/Desktop/copy-disk1-newdisk1.log
GNU ddrescue 1.19
Press Ctrl-C to interrupt
rescued: 2000 GB, errsize: 0 B, current rate: 31285 kB/s
ipos: 2000 GB, errors: 0, average rate: 83083 kB/s
opos: 2000 GB, run time: 6.68 h, successful read: 0 s ago
Finished
ubuntu@ubuntu:~/Desktop$
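Since the ddrescue pass finished with errsize: 0 B, the clone can be double-checked by comparing checksums of the source and destination. A minimal sketch of that check, using two throwaway files in place of /dev/sdb and /dev/sda (the real run would checksum the devices themselves, which takes hours on 2 TB):

```shell
# Stand-ins for /dev/sdb (source) and /dev/sda (clone); in practice these
# would be the block devices themselves.
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"
cp "$src" "$dst"

# Identical checksums mean the copy is byte-for-byte faithful.
sum_src=$(sha256sum "$src" | cut -d' ' -f1)
sum_dst=$(sha256sum "$dst" | cut -d' ' -f1)
if [ "$sum_src" = "$sum_dst" ]; then
    echo "clone verified"
else
    echo "clone differs"
fi
rm -f "$src" "$dst"
```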
2) I've recovered the lost Linux RAID partition using testdisk (writing the restored partition table back to disk). This is what "newdisk1" looks like now:
Code:
root@ubuntu:/home/ubuntu# fdisk -l /dev/sda
Disk /dev/sda: 4.6 TiB, 5000981078016 bytes, 9767541168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 434961F3-7564-3148-BF16-D0C0A3318DAF
Device Start End Sectors Size Type
/dev/sda1 6474176 3907028869 3900554694 1.8T Linux RAID
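As a sanity check on the recovered table, the sector counts reported by fdisk can be converted to bytes by hand (logical sector size is 512 B here):

```shell
# Sector numbers reported by fdisk for /dev/sda1.
start=6474176
end=3907028869
sectors=$((end - start + 1))   # 3900554694, matching the "Sectors" column
bytes=$((sectors * 512))       # ~1.997 TB, i.e. the 1.8T fdisk shows in TiB
echo "partition size: $((bytes / 1000000000)) GB"
```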
3) I tried to mount /dev/sda1 (the recovered Linux RAID partition) so that I can recover the data. Here is what I've tried so far:
Code:
root@ubuntu:/home/ubuntu# sudo mdadm --assemble --scan
mdadm: failed to add /dev/sda1 to /dev/md2: Invalid argument
mdadm: failed to RUN_ARRAY /dev/md2: Invalid argument
root@ubuntu:/home/ubuntu# mdadm --examine --scan >> /etc/mdadm/mdadm.conf
root@ubuntu:/home/ubuntu# sudo mdadm --assemble --scan
root@ubuntu:/home/ubuntu# mdadm -A -R /dev/md2 /dev/sda1
mdadm: we match both /dev/md2 and /dev/md/2 - cannot decide which to use.
This is what the file /etc/mdadm/mdadm.conf looks like:
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md2 metadata=1.2 UUID=526a8857:b89fdd6c:0006681b:33dc3bf6 name=MyBookWorld:2
ARRAY /dev/md/2 metadata=1.2 UUID=526a8857:b89fdd6c:0006681b:33dc3bf6 name=MyBookWorld:2
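The "we match both /dev/md2 and /dev/md/2 - cannot decide which to use" error looks like a consequence of the two redundant ARRAY lines appended by `mdadm --examine --scan`, since both carry the same UUID. One way to spot such duplicates before hand-editing the file is to count ARRAY definitions per UUID; a sketch against an inline copy of the conf (the temp file just stands in for /etc/mdadm/mdadm.conf):

```shell
# Inline copy of the two ARRAY definitions from /etc/mdadm/mdadm.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
ARRAY /dev/md2 metadata=1.2 UUID=526a8857:b89fdd6c:0006681b:33dc3bf6 name=MyBookWorld:2
ARRAY /dev/md/2 metadata=1.2 UUID=526a8857:b89fdd6c:0006681b:33dc3bf6 name=MyBookWorld:2
EOF

# Any UUID appearing on more than one ARRAY line is a duplicate definition
# that can confuse 'mdadm --assemble'.
dups=$(grep -o 'UUID=[0-9a-f:]*' "$conf" | sort | uniq -d | wc -l)
echo "duplicate UUIDs: $dups"
rm -f "$conf"
```

Keeping only one of the two lines (e.g. the /dev/md2 one) should remove the ambiguity.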
4) Then I tried installing newdisk1 directly, on its own, in the NAS - it did not boot up (I tried both HDD slots in the NAS and the result was the same in either one).
5) Then I tried the procedure from this page http://mybookworld.wikidot.com/myboo...sx-and-windows (using the command sudo bash ./debrick.sh rootfs.img /dev/sda) on newdisk1; here is the output:
Code:
root@ubuntu:/home/ubuntu/Desktop/Debrick# bash ./debrick.sh rootfs.img /dev/sda
********************** DISK **********************
script will use the following disk:
Model: ATA WDC WD50EZRX-00M (scsi)
Disk /dev/sda: 5001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 3315MB 2000GB 1997GB raid
is this REALLY the disk you want? [y] Y
********************** IMAGE **********************
swap.c:12:1: warning: return type defaults to ‘int’ [-Wimplicit-int]
main(int argc, char *argv[])
^
swap.c: In function ‘main’:
swap.c:32:9: warning: implicit declaration of function ‘lseek64’
[-Wimplicit-function-declaration]
if (lseek64(fd, offset, 0) < 0LL) {
^
********************** IMPLEMENTATION **********************
everything is now prepared!
device: /dev/sda
image_img: rootfs.img
destroy: false
this is the point of no return, continue? [y] y
mdadm: /dev/sda1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sun Dec 13 21:43:01 2015
mdadm: size set to 1950277248K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: creation continuing despite oddities due to --run
mdadm: array /dev/md0 started.
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 487569312 4k blocks and 121896960 inodes
Filesystem UUID: 824198f5-0c97-46ea-9602-0b3e06f41c61
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Checking for bad blocks (read-only test): 0.00% done, 0:00 elapsed.
(0/0/0 errdone
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
mdadm: Cannot find /dev/sda2: No such file or directory
synchronize raid... done
copying image to disk...
3999616+0 records in
3999616+0 records out
2047803392 bytes (2.0 GB) copied, 44.0197 s, 46.5 MB/s
mdadm: stopped /dev/md0
lseek64: Success
/dev/sda2: No such file or directory
all done! device should be debricked!
root@ubuntu:/home/ubuntu/Desktop/Debrick#
I added "newdisk1" back in NAS - it did not boot up again.
Back at the computer, I tried to mount it one more time:
Code:
root@ubuntu:/home/ubuntu# fdisk -l
Disk /dev/sda: 4.6 TiB, 5000981078016 bytes, 9767541168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 434961F3-7564-3148-BF16-D0C0A3318DAF
Device Start End Sectors Size Type
/dev/sda1 6474176 3907028869 3900554694 1.8T Linux RAID
root@ubuntu:/home/ubuntu# mdadm --examine /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 0.90.00
UUID : c1e1a4cb:d08df212:e368bf24:bd0fce41 (local to host ubuntu)
Creation Time : Sun Dec 13 21:44:17 2015
Raid Level : raid1
Used Dev Size : 1950277248 (1859.93 GiB 1997.08 GB)
Array Size : 1950277248 (1859.93 GiB 1997.08 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Update Time : Mon Dec 14 01:35:17 2015
State : clean
Internal Bitmap : present
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Checksum : fd2e571d - correct
Events : 37
Number Major Minor RaidDevice State
this 0 8 1 0 active sync /dev/sda1
0 0 8 1 0 active sync /dev/sda1
1 1 0 0 1 faulty removed
root@ubuntu:/home/ubuntu# mdadm -A -R /dev/md9 /dev/sda1
mdadm: /dev/md9 has been started with 1 drive (out of 2).
root@ubuntu:/home/ubuntu#
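One detail worth noting from the --examine output above: the superblock now on /dev/sda1 is version 0.90.00 with a creation time of Dec 13 2015, and its UUID no longer matches the metadata=1.2 UUID recorded in mdadm.conf. That mismatch can be checked mechanically (both values copied from the outputs above):

```shell
# UUID recorded in /etc/mdadm/mdadm.conf for the original array.
conf_uuid="526a8857:b89fdd6c:0006681b:33dc3bf6"

# UUID reported by 'mdadm --examine /dev/sda1' (the 0.90 superblock).
examine_uuid="c1e1a4cb:d08df212:e368bf24:bd0fce41"

if [ "$conf_uuid" = "$examine_uuid" ]; then
    echo "superblock matches mdadm.conf"
else
    echo "superblock does NOT match mdadm.conf"
fi
```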
At this point the array mounted. However, when I navigated through it I couldn't find my data: the folder "DataVolume" was empty, and a search for a folder I know exists returned no results.
I also looked at the disk properties, which report only 500 MB used and 1.4 TB available; here is the screenshot:
Screenshot from 2015-12-14 02-20-37.png
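Before doing anything further with the mounted array, it may be safer to mount it read-only and take a quick inventory of what is actually on it. A sketch of that inventory, run here against a throwaway directory standing in for the mount point (the mount command is shown only as a comment, since it needs the real array and root):

```shell
# In practice the array would be mounted read-only first, e.g.:
#   mount -o ro /dev/md9 /mnt/md9
mnt=$(mktemp -d)
mkdir -p "$mnt/DataVolume"
: > "$mnt/DataVolume/example.txt"   # illustrative file, not real data

# Count files and summarize usage; an empty DataVolume plus only ~500 MB
# used would be consistent with a freshly created filesystem rather than
# the original data.
files=$(find "$mnt" -type f | wc -l)
echo "files found: $files"
du -sh "$mnt"
rm -rf "$mnt"
```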
Right now I've got 2 options to work with:
1) The image-disk-1.raw copied to newdisk0
2) The disk1 copy in newdisk1 with restored partition
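For option 1, the partition inside image-disk-1.raw could be attached through a loop device at the right byte offset (start sector × 512), assuming the image shares the partition layout shown above. The offset arithmetic is below; the actual loop setup is left as a comment since it needs the real image and root access:

```shell
# Start sector of the RAID partition as recovered by testdisk.
start_sector=6474176
offset=$((start_sector * 512))
echo "byte offset: $offset"   # 3314778112, i.e. the ~3315MB start parted reports

# With the real image (run as root; file name from option 1):
#   losetup --find --show --offset "$offset" image-disk-1.raw
#   mdadm --examine /dev/loopX   # then examine/assemble from the loop device
```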
I am stuck, but I feel I am really close to solving this problem, and I am sure that with your help I will finally be able to recover my data.