Hello, I need someone with some RAID experience to help me figure this one out. For a couple of days I have been searching and no good answers have turned up, so perhaps I can get some help here.
Take a look at the output from the command line:
Code:
-> mdadm --detail /dev/md0:
Version : 0.90
Creation Time : Fri Jun 4 23:22:05 2010
Raid Level : raid1
Array Size : 244138944 (232.83 GiB 250.00 GB)
Used Dev Size : 244138944 (232.83 GiB 250.00 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Aug 9 08:55:11 2012
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : d29fccc1:4990675b:589ea98e:bab4a4a7
Events : 0.1163614
Number Major Minor RaidDevice State
0 8 81 0 active sync /dev/sdf1
1 8 97 1 active sync /dev/sdg1
Code:
-> mdadm --detail /dev/md1:
Version : 0.90
Creation Time : Fri Jun 4 23:22:19 2010
Raid Level : raid6
Array Size : 2930282304 (2794.54 GiB 3000.61 GB)
Used Dev Size : 976760768 (931.51 GiB 1000.20 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Thu Aug 9 08:55:28 2012
State : active
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 495e40aa:e6089480:30fd1677:e0dbfce9
Events : 0.343636
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 1 1 active sync /dev/sda1
2 8 17 2 active sync /dev/sdb1
3 8 65 3 active sync /dev/sde1
4 8 33 4 active sync /dev/sdc1
Code:
-> cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdf1[0] sdg1[1]
244138944 blocks [2/2] [UU]
md1 : active raid6 sde1[3] sdc1[4] sdb1[2] sda1[1] sdd1[0]
2930282304 blocks level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
unused devices: <none>
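As a sanity check, the sizes reported above are internally consistent: a RAID-6 array's usable capacity is (N - 2) times the per-device size, and RAID-1 mirrors, so its capacity equals a single device. Using the 1 KiB block counts from the mdadm output:

```shell
# RAID-6 usable size = (devices - 2 parity) * per-device size, in 1 KiB blocks
echo $(( (5 - 2) * 976760768 ))       # 2930282304 - matches md1's Array Size
# RAID-1 usable size = one device, so md0's Array Size of 244138944 blocks
# works out to roughly this many GiB:
echo $(( 244138944 / 1024 / 1024 ))   # 232
```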
Everything is looking good so far. The numbers are exactly what I would expect. But here comes the problem...
Code:
-> df -l
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/md0 48060232 29107612 16511256 64% /
none 4055604 352 4055252 1% /dev
none 4096748 0 4096748 0% /dev/shm
none 4096748 400 4096348 1% /var/run
none 4096748 0 4096748 0% /var/lock
/dev/sdg2 48128344 184344 45499200 1% /tmp
/dev/md1 1769073308 271594200 1407615332 17% /export
Those are exactly the same values that were in place before I added two 1 TB drives to my md1 array and switched it from RAID-5 to RAID-6. I also replaced the small drives behind md0 with 250 GB drives, but it still reports 64% used of only ~50 GB...
This only shows up when I run df -l, and I am concerned for a couple of reasons. First, will I run out of drive space when df thinks things are full? Does the system really only see part of the storage, or does it see all of it and only df is confused? Second, a mismatch like this could be a symptom of something larger being wrong that I simply haven't found yet.
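To put numbers on the mismatch, converting the figures above to GiB (array sizes from mdadm, filesystem sizes from df, both counted in 1 KiB blocks):

```shell
# md0: the array is ~232 GiB, but df sees only a ~45 GiB filesystem
echo $(( 244138944 / 1024 / 1024 ))    # 232
echo $(( 48060232 / 1024 / 1024 ))     # 45
# md1: the array is ~2794 GiB, but df sees only a ~1687 GiB filesystem
echo $(( 2930282304 / 1024 / 1024 ))   # 2794
echo $(( 1769073308 / 1024 / 1024 ))   # 1687
```

In both cases the filesystem df reports is far smaller than the array underneath it.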
Things I have tried:
- Rebooting (of course)
- sync
- Updating /etc/mdadm/mdadm.conf with the output of mdadm --detail --scan
Please please please, someone give me something else to try.
Thanks!