Hm.
Is this a good idea?

Code:
root@titanium:~$ fsck /dev/md0
fsck from util-linux 2.20.1
e2fsck 1.42 (29-Nov-2011)
Superblock has an invalid journal (inode 8). Clear<y>?
Uh. I wouldn't clear the journal, as it would revert the file system to ext2. Can you check to see if the other superblocks are ok?
Find the alternative superblocks:
http://ubuntuforums.org/showthread.php?t=1245536

Code:
sudo dumpe2fs /dev/md0 | grep -i superblock
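If dumpe2fs chokes, a dry run of mke2fs will at least print where the backup superblocks should be, and e2fsck can then be pointed at one read-only. Just a sketch: 32768 is only the typical first backup location for a 4K-block filesystem, and -n keeps both tools from changing anything.

Code:
# -n = dry run: prints the would-be superblock locations, writes nothing
sudo mke2fs -n /dev/md0
# read-only check against a backup superblock (e.g. the first one)
sudo e2fsck -n -b 32768 /dev/md0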
Blah. I'm reading different things, and I'm worried that I'll do something that there is no coming back from.

Code:
root@titanium:~# dumpe2fs /dev/md0 | grep -i superblock
dumpe2fs 1.42 (29-Nov-2011)
Journal superblock magic number invalid!
That's what I'm worried about as well.
Anyway, can you post the complete output of:
Code:
sudo dumpe2fs /dev/md0
I was reading that if I reassemble the array simply using --assemble --scan, it might work? Maybe the array is simply out of order, but I would suspect that if that were the case, there would be some sort of superblock existing.

Code:
root@titanium:~# dumpe2fs /dev/md0
dumpe2fs 1.42 (29-Nov-2011)
Filesystem volume name:   <none>
Last mounted on:          /media/storage
Filesystem UUID:          62cde81c-d63e-4704-9ddd-df2ab00de9c7
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              488382464
Block count:              1953519872
Reserved block count:     97659026
Free blocks:              178896900
Free inodes:              487815566
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      558
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              55567
Flex block group size:    16
Filesystem created:       Sun Mar 21 22:39:55 2010
Last mount time:          Mon Sep 16 20:20:46 2013
Last write time:          Mon Sep 16 20:20:46 2013
Mount count:              19
Maximum mount count:      38
Last checked:             Sun Feb 17 17:34:27 2013
Check interval:           15552000 (6 months)
Next check after:         Fri Aug 16 18:34:27 2013
Lifetime writes:          4882 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      5908a3b2-1720-4998-99c9-b0857b46c012
Journal backup:           inode blocks
Journal superblock magic number invalid!
Not sure. You could try reassembling it, but I don't know if that will work or not. I don't have access to the server I run mdadm on, so I can't verify the output of dumpe2fs.
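If you do go the reassemble route, the usual sequence is to stop whatever half-assembled array is there and let mdadm match members by their superblocks. Just a sketch; double-check /proc/mdstat before and after.

Code:
# stop the existing (possibly half-assembled) array
mdadm --stop /dev/md0
# reassemble from the on-disk superblocks, with verbose output
mdadm --assemble --scan --verbose
cat /proc/mdstat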
I dug up an old email that contained a /proc/mdstat reading showing the following:

Code:
md0 : active raid5 sdb1[8](F) sde1[9](F) sdd1[10](F) sdh1[11](F) sdc1[12](F) sdf1[13](F) sdg1[14](F) sdi1[15](F)

Seems like an...odd...output. I have added 3 drives since then, so I'm assuming that at least the first 8 are in this order (or should be). As per http://serverfault.com/questions/347...ad-of-re-using, it really shouldn't kill my data if I reorder the array incorrectly, just screw up the superblock. Running --create shows this:

Code:
root@titanium:~# mdadm --create /dev/md0 --chunk=64000 --level=6 --raid-devices=11 /dev/sdb1 /dev/sde1 /dev/sdd1 /dev/sdh1 /dev/sdc1 /dev/sdf1 /dev/sdg1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid6 devices=11 ctime=Sun Mar 21 22:38:08 2010
mdadm: /dev/sde1 appears to be part of a raid array:
    level=raid6 devices=11 ctime=Sun Mar 21 22:38:08 2010
mdadm: /dev/sdd1 appears to be part of a raid array:
    level=raid6 devices=11 ctime=Sun Mar 21 22:38:08 2010
mdadm: /dev/sdh1 appears to be part of a raid array:
    level=raid6 devices=11 ctime=Sun Mar 21 22:38:08 2010
mdadm: /dev/sdc1 appears to be part of a raid array:
    level=raid6 devices=11 ctime=Sun Mar 21 22:38:08 2010
mdadm: /dev/sdf1 appears to contain an ext2fs file system
    size=-775855104K  mtime=Mon Sep 16 20:20:46 2013
mdadm: /dev/sdf1 appears to be part of a raid array:
    level=raid6 devices=11 ctime=Sun Mar 21 22:38:08 2010
mdadm: /dev/sdg1 appears to be part of a raid array:
    level=raid6 devices=11 ctime=Sun Mar 21 22:38:08 2010
mdadm: /dev/sdi1 appears to contain an ext2fs file system
    size=151086680K  mtime=Sat May 31 13:57:45 1930
mdadm: /dev/sdi1 appears to be part of a raid array:
    level=raid6 devices=11 ctime=Sun Mar 21 22:38:08 2010
mdadm: /dev/sdj1 appears to be part of a raid array:
    level=raid6 devices=11 ctime=Sun Mar 21 22:38:08 2010
mdadm: /dev/sdk1 appears to be part of a raid array:
    level=raid6 devices=11 ctime=Sun Mar 21 22:38:08 2010
mdadm: /dev/sdl1 appears to contain an ext2fs file system
    size=-775855104K  mtime=Tue Oct 10 03:32:04 2028
mdadm: /dev/sdl1 appears to be part of a raid array:
    level=raid6 devices=11 ctime=Sun Mar 21 22:38:08 2010
mdadm: largest drive (/dev/sdb1) exceeds size (976618496K) by more than 1%
Continue creating array?
Still sounds a bit risky to me. I'd back up the data by cloning each drive via dd to external media before trying anything.
I realize that is probably a huge amount of data to back up, but better safe than sorry.
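For reference, a per-member clone might look something like this. Device and paths are examples only; conv=noerror,sync keeps dd going past unreadable sectors instead of aborting.

Code:
# clone one member partition to an image on external storage (example paths)
dd if=/dev/sdb1 of=/mnt/external/sdb1.img bs=1M conv=noerror,sync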
Re-create is the right term here.
Doing a full backup before trying is indeed best. It is true that you won't lose data if you only change the order...but that means all the other parameters need to be the same too (chunk, level, members, superblock version, etc.).
The only thing I don't get here is why the order would be mixed up now.
You don't want to re-create the array without passing in the --assume-clean flag. Without this, it will cause a resync and will VERY likely kill your data. What do the superblocks show on all of your disks?
Code:
mdadm -E /dev/sd[bcdefghijkl]1
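For illustration only, if -E confirms the original layout, a re-create that avoids the destructive resync would look roughly like this. The order, chunk size, and metadata version below are placeholders; every one of them must match what -E reports before you run anything.

Code:
# Sketch only: --assume-clean skips the resync that would overwrite parity.
# Use the exact order/chunk/metadata that mdadm -E reports, not these values.
mdadm --create /dev/md0 --assume-clean --level=6 --raid-devices=11 \
      --chunk=64000 --metadata=0.90 \
      /dev/sdb1 /dev/sde1 /dev/sdd1 /dev/sdh1 /dev/sdc1 /dev/sdf1 \
      /dev/sdg1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1
# then a read-only filesystem check before mounting anything:
fsck -n /dev/md0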