
Thread: 12.04.3 Server, Grow/Reshape Raid 5 array to Raid 6 array

  1. #1
    Join Date
    Aug 2012
    Beans
    38

    12.04.3 Server, Grow/Reshape Raid 5 array to Raid 6 array

    Hi,

    I have been doing my homework and playing around with raid arrays on my test system using KVM machines. I have successfully built a number of raid 5 arrays, grown them from 3 to 4 disks and resized the ext filesystem. All is well. I now have to move into the real world. I have a server with 4x2TB drives in it. Currently 3x2TB are configured as a Raid 5 array. I built this machine as a NAS server. I used the 4th drive as a restore drive to restore the data backup onto the array after creating it from scratch. All is currently well.

    The array was created from 3 partitions (sdx1, 2TB each) using the standard partitioning tool in the installer; essentially I used defaults for everything. The system boots from this array, which is mounted as / with Grub installed on all the sdx1 partitions. I noticed that there is a lot of wasted space, approx. 5%, so my 2TB drives are showing as 1.86TB. Is this something I just have to live with, or is there a better way of partitioning and formatting them that would give me back over 300GB of space (>200GB usable currently)?

    If there is a better way then I can blat the array and start again. I currently have a backup machine that gets rsync'd every morning, so I have my data elsewhere at present. (I don't want to blat it just to create the Raid 6 option, I need to learn this stuff!)

    I would like to add the 4th drive as a new drive into the array. I am happy I can do this (having practiced on VMs) and end up with an 8TB raw (4x1.86TB) raid 5 array with 6TB usable. After a long resync it will be good to go.

    However I want to make this NAS more resilient, so am considering changing to Raid 6, giving me double redundancy and allowing me to repurpose the backup server. The quality of the Enterprise disks is such that I should expect failures every 3-5 years, although I of course appreciate failures can occur anytime. I have SMART monitoring and alerting configured to get advance warning of troublesome drives, and advance replacement warranties so I can swap out problem drives without having to keep expensive stock. So double protection using Raid 6 could be an answer, as in theory it should protect me from a double failure. This NAS gets a lot of reads and a small number of sustained high volume writes, i.e. media being streamed to many clients and media being archived once.

    In degraded mode when it was building the array it was still capable of meeting all of our streaming needs!

    Can anyone provide a step by step as to how I might achieve one or more of the asks above? Similar to this one that I have already used to good effect to get where I am currently. The grow command does fill me with some trepidation!
    I am a little confused about changing raid levels, backup files etc. Would it be correct/easier to add the new partition at Raid 5 then convert to Raid 6 (I think I might need to add another partition to do this, which could be an issue), or should I add the drive as a spare then convert to Raid 6, which would use the spare drive to meet the minimum drive count?

    It is perfectly possible that I have misunderstood some/most of this, so please bear with me.

    Any help appreciated.

    Cheers
    Spart
    Last edited by sparticle2000; December 17th, 2013 at 04:36 PM.

  2. #2
    Join Date
    Aug 2012
    Beans
    38

    Re: 12.04.3 Server, Grow/Reshape Raid 5 array to Raid 6 array

    So 91 views and no help! Not what I came to expect from the Ubuntu Server community. Maybe I am asking the wrong questions, or my asks are seen as too rudimentary to bother with. Just looking to gain some confidence before potentially wiping out my array.

    Cheers
    Spart

  3. #3
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,133
    Distro
    Ubuntu 16.04 Xenial Xerus

    Re: 12.04.3 Server, Grow/Reshape Raid 5 array to Raid 6 array

    Quote Originally Posted by sparticle2000 View Post
    I noticed that there is a lot of wasted space approx. 5% so my 2TB drives are showing as 1.86TB. Is this something I just have to live with or is there a better way of partitioning and formatting them that would give me back over 300GB of space >200GB usable currently?
    This discrepancy is due to the way that hard drive manufacturers report space. They report 1TB as 1,000,000,000,000 bytes. However, the operating system recognizes 1TB as 1,099,511,627,776 bytes (2^40). As a result, a 1TB drive will show as ~931GB and a 2TB drive as ~1.82TB.
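
    The arithmetic is easy to check; a couple of plain awk one-liners (nothing system-specific) convert an advertised decimal size to what the OS reports:

    ```shell
    # 2 TB as sold (2 * 10^12 bytes) expressed in binary terabytes (2^40 bytes):
    awk 'BEGIN { printf "%.2f TB\n", 2e12 / 2^40 }'
    # -> 1.82 TB

    # A 1 TB drive in binary gigabytes (2^30 bytes):
    awk 'BEGIN { printf "%.0f GB\n", 1e12 / 2^30 }'
    # -> 931 GB
    ```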

    Quote Originally Posted by sparticle2000 View Post
    However I want to make this NAS more resilient, so am considering changing to Raid 6, giving me double redundancy and allowing me to repurpose the backup server. The quality of the Enterprise disks is such that I should expect failures every 3-5 years, although I of course appreciate failures can occur anytime. I have SMART monitoring and alerting configured to get advance warning of troublesome drives, and advance replacement warranties so I can swap out problem drives without having to keep expensive stock. So double protection using Raid 6 could be an answer, as in theory it should protect me from a double failure. This NAS gets a lot of reads and a small number of sustained high volume writes, i.e. media being streamed to many clients and media being archived once.

    In degraded mode when it was building the array it was still capable of meeting all of our streaming needs!
    During a rebuild, performance will suffer, but you should be able to still stream from the NAS during a rebuild.

    Quote Originally Posted by sparticle2000 View Post
    Can anyone provide a step by step as to how I might achieve one or more of the asks above? Similar to this one that I have already used to good effect to get where I am currently.
    I cover reshaping a RAID5 -> RAID6 on my website along with a bunch of other mdadm topics.
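
    In outline (a sketch, not a substitute for the full write-up), the reshape runs along these lines. /dev/md0, /dev/sdd1, and the backup-file path are placeholder names, so check them against your own system before running anything:

    ```shell
    # 1. Add the 4th partition; it sits in the array as a spare for now.
    mdadm --add /dev/md0 /dev/sdd1

    # 2. Reshape the 3-disk RAID5 into a 4-disk RAID6 in one step; the spare
    #    is pulled in to meet the new minimum of 4 devices. The backup file
    #    must live OUTSIDE the array being reshaped (here, an external drive
    #    assumed to be mounted at /mnt/usb).
    mdadm --grow /dev/md0 --level=6 --raid-devices=4 \
          --backup-file=/mnt/usb/md0-reshape.backup

    # 3. Watch the (long) reshape run.
    cat /proc/mdstat
    ```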

  4. #4
    Join Date
    Aug 2012
    Beans
    38

    Re: 12.04.3 Server, Grow/Reshape Raid 5 array to Raid 6 array

    Rubylaser,

    Thank you for taking the time to respond. I have since found the tune2fs utility, which successfully reclaimed a lot of 'reserved space', hundreds of GB actually. Looking at your article, I can see that it is written for experienced mdadm users. I am still a little confused about the backup file. How big a file are we talking about? By mount it somewhere else I am assuming you mean on another disk, or maybe a usb drive etc. Without understanding the scale of this file and how long it needs to stay around, I am nervous even thinking about converting the array. Can you provide a bit more insight into the dynamics of moving the array to 6 from 5 and the requirements for the backup file etc.?

    Your site looks like my number one place for all things mdadm related.

    Many thanks
    Spart

  5. #5
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,133
    Distro
    Ubuntu 16.04 Xenial Xerus

    Re: 12.04.3 Server, Grow/Reshape Raid 5 array to Raid 6 array

    The backup file is very small (like less than 2GB). If your OS is installed on a different disk than your RAID array, I would just write the backup file there. The reshape will take a long time. All of the parity needs to be recalculated for each disk. I wouldn't even consider this until you have your machine hooked up to a good UPS (verified that it works too). Please let me know if you need more info.
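
    While the reshape runs, progress can be watched, and nudged, along these lines (array name /dev/md0 assumed):

    ```shell
    # Progress, speed and ETA, refreshed every minute:
    watch -n 60 cat /proc/mdstat

    # Fuller status, including the current reshape position:
    mdadm --detail /dev/md0

    # If the reshape is crawling, raising the floor on rebuild speed
    # (KB/s per device) can help, at the cost of foreground I/O:
    echo 50000 > /proc/sys/dev/raid/speed_limit_min
    ```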


    Yes, tune2fs will allow you to set the reserved space to zero (tune2fs -m 0 /dev/md0), but the maximum space you can get out of a 2TB drive will be around 1.8TB when viewed with df -h. Here is an example (as you can see, I have a few 2TB disks in this system, and they all show 1.8TB total size).

    Code:
    /dev/sdb1 2.7T  1.6T  1.2T  58% /media/SEA-Z3102AD
    /dev/sdc1 2.7T  1.6T  1.2T  58% /media/SEA-Z2145ED
    /dev/sdd1 1.8T  1.6T  240G  87% /media/SEA-6YD1PM58
    /dev/sde1 1.8T  1.4T  391G  78% /media/SEA-5XW0M4T1
    /dev/sdf1 1.8T  1.3T  459G  74% /media/SEA-5YD2W98N
    /dev/sdg1 1.8T  1.4T  388G  78% /media/SEA-5YD17DD
    /dev/sdh1 1.8T  1.3T  416G  77% /media/HIT-ML0220F30YDEEA
    /dev/sdi1 1.8T  738G 1004G  43% /media/SEA-6YD0ZD3E
    /dev/sdj1 1.8T  1.6T  242G  87% /media/SEA-6YD1RSSAD
    /dev/sdk1 1.8T  741G 1000G  43% /media/SEA-5YD6ASDF
    /dev/sdl1 1.8T  1.4T  384G  78% /media/SEA-6YD3REDS
    /dev/sdm1 1.8T  1.4T  397G  78% /media/SEA-6YD6DSE3
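
    On the reserved-space point, it is worth inspecting the current setting before changing it; a sketch (filesystem name /dev/md0 assumed):

    ```shell
    # ext2/3/4 reserve 5% of blocks for root by default. See the current count:
    tune2fs -l /dev/md0 | grep -i 'reserved block count'

    # Drop the reservation (0 as above, or keep 1% on a filesystem that also
    # holds the OS, so root can always write):
    tune2fs -m 1 /dev/md0
    ```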

  6. #6
    Join Date
    Aug 2012
    Beans
    38

    Re: 12.04.3 Server, Grow/Reshape Raid 5 array to Raid 6 array

    Thank you for the additional info, you have reassured me that the backup file is not an issue. I would in this instance use an external drive temporarily whilst the array rebuilt. I am assuming it is not needed once the array is built and re-synced? This particular host is a re-purposed HP Data Vault X510, currently set up with a 3x2TB Raid 5 array. I have 1 spare drive (not currently in the array as a spare), and the array is configured as a single large / partition, so Ubuntu boots and runs from it. I am still reading on whether Raid 6 or 10 would be better, as with this system they would both provide 4TB usable from 8. Then I would have the opportunity to utilise the eSATA port multiplier to add space in an external caddy.

    The system is hooked up to a UPS, and that works fine using apcupsd and Webmin for graphical reporting; it shuts the system down nicely after 5 mins. Most outages we get are < 5 mins long, so it has been doing a great job for over a year. I wrote an article for Ubuntu support on how to configure and set this up with Webmin.

    On the Raid 6 vs Raid 10 debate, it looked like Raid 10 in this configuration would provide more performance if I didn't want to expand the number of drives beyond the maximum 4 that the unit can accommodate internally. I would simply increase the size of each drive, i.e. 2TB to 4TB, giving me 8TB usable; I saw you have an article for swapping the drives for larger drives one at a time. If I wanted to expand outside the case and utilise the port multiplier, then maybe set up a new array using 4 more drives and re-use the 2TB drives from the first array, rather than try and expand the first array onto the external drives. I am not sure how many additional drives I would need to add if I chose Raid 10; I think I would need to add 2 drives at a time to satisfy the minimum required drives. Raid 6 requires me to just add 1 drive at a time, I think!
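
    The capacity sums above can be sanity-checked with shell arithmetic (whole TB, ignoring the decimal/binary difference):

    ```shell
    n=4; disk_tb=2   # 4 internal bays, 2TB drives
    echo "RAID5:  $(( (n - 1) * disk_tb ))TB usable"   # one disk of parity
    echo "RAID6:  $(( (n - 2) * disk_tb ))TB usable"   # two disks of parity
    echo "RAID10: $(( n / 2 * disk_tb ))TB usable"     # mirrored pairs
    ```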

    Raid 6 seems to need a lot more horsepower from what I am reading. What would your suggestion be for this use case, Raid 6 or 10?

    Thank you for your help so far, it is good to get some re-assurance.

    Cheers
    Spart

  7. #7
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,133
    Distro
    Ubuntu 16.04 Xenial Xerus

    Re: 12.04.3 Server, Grow/Reshape Raid 5 array to Raid 6 array

    No, you will not need the backup file once the reshape is completed. The main problem with mdadm RAID10 is that it is not expandable (it doesn't support the --grow option), so you won't be able to add more disks in the future. The other reason I like RAID6 in your case is that you are guaranteed to survive two disk failures. With RAID10, if two disks in the same stripe fail, you lose everything. In regards to horsepower, I have had mdadm RAID6 set up on machines sporting AMD 3000+ single core processors in the past. Your Celeron 440 should be more than up to the task. The benefit of RAID10 is that it is faster than RAID6 and doesn't use parity, so replacing a failed disk is typically a much faster operation.
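
    The "same stripe" point can be made concrete by enumerating the six possible two-disk failures in a 4-disk RAID10 built as two mirrored pairs (disk labels here are hypothetical):

    ```shell
    # Pairs: a1+a2 mirror each other, b1+b2 mirror each other.
    survive=0; total=0
    for failed in "a1 a2" "a1 b1" "a1 b2" "a2 b1" "a2 b2" "b1 b2"; do
      total=$(( total + 1 ))
      case "$failed" in
        "a1 a2" | "b1 b2") ;;              # both halves of one mirror: array lost
        *) survive=$(( survive + 1 )) ;;   # RAID6 would survive all six cases
      esac
    done
    echo "RAID10 survives $survive of $total two-disk failures"
    # -> RAID10 survives 4 of 6 two-disk failures
    ```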

  8. #8
    Join Date
    Aug 2012
    Beans
    38

    Re: 12.04.3 Server, Grow/Reshape Raid 5 array to Raid 6 array

    That's good to hear. The X510 is the flagship model with a dual core E5200 and 2GB ram. Sounds like horsepower won't be an issue. From what you've said Raid 6 sounds like the answer if I wanted to expand the number of disks. I did a dry run through on my KVM machine with 4x2GB drives and it went fine.

    I think that Raid 10 would be the performant answer if I can upgrade the drives from 2TB to 4TB. Is this possible or am I stuck with the 2TB disks? All drives are WD20EFRX RED's so reliability should be good.

    Thank you for taking the time to provide some re-assurance. Not keen to have to start again and copy 3TB of data back from the backup drives.

    Cheers
    Spart

  9. #9
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,133
    Distro
    Ubuntu 16.04 Xenial Xerus

    Re: 12.04.3 Server, Grow/Reshape Raid 5 array to Raid 6 array

    Quote Originally Posted by sparticle2000 View Post
    That's good to hear. The X510 is the flagship model with a dual core E5200 and 2GB ram. Sounds like horsepower won't be an issue. From what you've said Raid 6 sounds like the answer if I wanted to expand the number of disks. I did a dry run through on my KVM machine with 4x2GB drives and it went fine.

    I think that Raid 10 would be the performant answer if I can upgrade the drives from 2TB to 4TB. Is this possible or am I stuck with the 2TB disks? All drives are WD20EFRX RED's so reliability should be good.

    Thank you for taking the time to provide some re-assurance. Not keen to have to start again and copy 3TB of data back from the backup drives.

    Cheers
    Spart
    Yes, for home storage, I would suggest RAID6, and with a dual core, you should be in great shape. Personally, I wouldn't upgrade to 4TB drives at this point (because you would need to buy at least 4 to replace all of your existing drives before you could take advantage of the extra space), but it is always a possibility in the future. I would add more 2TB disks, or move to a chassis that allows for more expansion, before buying all new disks (unless you feel you are going to need a lot more storage in the near future).

  10. #10
    Join Date
    Aug 2012
    Beans
    38

    Re: 12.04.3 Server, Grow/Reshape Raid 5 array to Raid 6 array

    Many thanks. It looks like Raid 6 it is then. I will push the button later and record the time to rebuild; it might be useful for others to understand the dynamics of growing and converting to Raid 6 from Raid 5 with 3TB of data. I am happy that this host stays inside its design limitations, i.e. 4 drives with the infrequent use of an external drive for bulk copy of data. So it looks like 8TB usable is its sensible limit, using 16TB raw in Raid 6. I have about a TB free at present; on this system that should last about 6 months. 4TB drives should have dropped in price by then, so the timing should be good.
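
    For a ballpark figure beforehand: every block on each member disk gets rewritten during the reshape, so a rough estimate is per-disk capacity divided by sustained reshape speed (the 50 MB/s here is an assumption; real speed varies a lot with load and hardware):

    ```shell
    awk 'BEGIN { printf "~%.0f hours per 2TB disk at 50 MB/s\n", 2e12 / 50e6 / 3600 }'
    # -> ~11 hours per 2TB disk at 50 MB/s
    ```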

    It would have been interesting to understand the real world performance differences between Raid 6 and Raid 10 on this host system.

    Is converting down Raid levels possible, e.g. from Raid 6 to Raid 0?

    Cheers
    Spart

