
Thread: Setting up software RAID, post install and not for / or booting

  1. #1
    Join Date
    Jul 2009
    Location
    South Africa
    Beans
    168
    Distro
    Ubuntu

    Setting up software RAID, post install and not for / or booting

    I'm currently planning a new server. My questions are:


    1. Is RAID 5 correct for what I want to do?
    2. I found this guide for setting up my RAID post-install; however, some steps are done in GNOME Parted, and I won't be running a GUI. Any advice, or other articles you could recommend?
    3. What virtual machine platform should I use? I'd like USB support, and the guest should be able to access the RAID partition. I assume doing it through Samba over the network would be best?



    My planned server setup:

    -Intel Quad Core with Virtualization
    -4GB RAM
    -Two network cards (firewall)
    -One 500GB HDD for Ubuntu server (root, boot, swap, /home)
    -5 x 2TB SATA6G Seagate HDDs (RAID 5/4/3 for whatever data)
    -3 Sunix 2-port SATA6G PCIe RAID controller cards http://www.sunix.com.tw/product/sata2600.html


    I will install Ubuntu Server 64 Bit and will be running the machine headless.
    The server will be running my firewall/mail server, as well as Samba for sharing the data on the RAID partition. I'm also planning on running a Windows 7 virtual guest OS via VirtualBox/VMware/KVM for some light Windows programs, like an accounting package.

    My reasoning behind the above setup:

    Cost is an issue, and I'm using some of what I've got plus acquiring some additional hardware. The drives will be purchased separately from each other, ensuring they are from different batches, to reduce the likelihood of more than one drive failing at once. Also, I will only start with 3 drives, and then later grow the RAID array by another 2 (maybe).
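
    From what I've read, growing the array later with mdadm would look roughly like this (a sketch, untested, with example device and array names for my setup):

    Code:
    # Add the two new disks to the array:
    sudo mdadm --add /dev/md0 /dev/sde1 /dev/sdf1
    # Reshape the array from 3 devices to 5:
    sudo mdadm --grow /dev/md0 --raid-devices=5
    # Watch the reshape progress:
    cat /proc/mdstat
    # Once it finishes, enlarge the filesystem (ext4 in my case):
    sudo resize2fs /dev/md0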

    The Sunix RAID cards are probably not true hardware RAID, but I will only be using them to connect the HDDs via SATA6G and have mdadm do software RAID. The Sunix cards are for taking advantage of the Seagates' SATA 6Gbps transfer speeds, nothing more.

    I chose software RAID as it is cheaper, and with the quad-core CPU I shouldn't lose too many CPU cycles on a machine that will (except for the firewall, mail and virtual machine) be otherwise dormant. Another reason for software RAID is independence from a physical RAID controller, should one fail.

    The type of files stored on the RAID partition will vary in size, from documents (Excel, Word, artwork for publishing on websites, etc.) to video files up to about 15 GB, I'm guessing. Documents and small files will be worked on directly over the network, whereas video files will only be backed up to the server and occasionally viewed from it over the network.

    Any pointers, advice would be appreciated.

  2. #2
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Setting up software RAID, post install and not for / or booting

    Sounds like you've thought this through. I wouldn't worry about buying PCIe cards to take advantage of SATA III speeds. No current spindle disk on the market can come close to maxing out SATA II, let alone III. Some solid state drives can come close to maxing out SATA II, but you're going for storage, not raw speed (i.e., not a multi-drive SSD array).

    If your motherboard has 6 SATA II ports, you can use those with no problem with mdadm, and you will see no speed difference between those ports and the 6G ports. I would also suggest using NFS to share. You'll see better throughput via NFS than CIFS, at least in my experience.
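
    For example, a minimal NFS setup on the server side would look something like this (a sketch; the path and subnet are just examples):

    Code:
    sudo apt-get install nfs-kernel-server
    # add an export line to /etc/exports (example path and subnet):
    # /storage 192.168.1.0/24(rw,sync,no_subtree_check)
    sudo exportfs -ra
    # then, on a Linux client:
    sudo mount -t nfs server:/storage /mnt/storage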

    Those are a few pointers to start. Hope that helps.

  3. #3
    Join Date
    Jul 2009
    Location
    South Africa
    Beans
    168
    Distro
    Ubuntu

    Re: Setting up software RAID, post install and not for / or booting

    Awesome advice, thanks! I have put some thought into it because, as I mentioned, cost is an issue, and I would like to stretch my moola as far as I "technically" can.

    On the SATA cards: you are making sense. I guess I won't use the SATA cards for the RAID then, but I'm going to get at least one just to play around with, and then, when I'm done, use it to add extra ports to my server, since the board I'm using has only 5 SATA ports (5 x RAID + 1 standalone = 6 ports needed).

    As for NFS, I've never used it before, though I've seen it mentioned in passing on the forums and in guides about other topics. I'll definitely have to do some studying up. How would you suggest I go about implementing this? Would you suggest any reading material, or should I just Google it?

    It's for tidbits like this that I post on the forums, because one can only Google if one knows what question to ask; if you don't know enough to form a question, that's a whole different ballgame.

    I'm still wondering about the RAID level, though. I did my homework, but as I'm not experienced with the upper levels of RAID, I'm open to debate.

    Your advice is much appreciated, thx!

  4. #4
    Join Date
    Nov 2006
    Location
    Belgium
    Beans
    3,025
    Distro
    Ubuntu 10.04 Lucid Lynx

    Re: Setting up software RAID, post install and not for / or booting

    Re 2: run 'parted' in a shell.
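
    For example (a rough sketch; device names are examples):

    Code:
    # partition each disk from the shell instead of GParted
    sudo parted /dev/sdb mklabel gpt
    sudo parted /dev/sdb mkpart primary 1MiB 100%
    sudo parted /dev/sdb set 1 raid on
    # repeat for /dev/sdc and /dev/sdd, then create the array:
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
    sudo mkfs.ext4 /dev/md0
    # record the array so it assembles at boot
    sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'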

  5. #5
    Join Date
    Oct 2009
    Beans
    2,199
    Distro
    Ubuntu 10.10 Maverick Meerkat

    Re: Setting up software RAID, post install and not for / or booting

    You haven't mentioned data security. Is this important? Are you intending to regularly back up this array?
    I strongly recommend you do not assume that because you are using "RAID" you have good protection. Do not take this for granted. Get informed.
    I consider RAID 5 the least reliable of the standard RAID levels; personally, I consider it a false economy and would never trust my critical data to it. IMO RAID 6 is significantly more reliable. RAID 10 is best.

  6. #6
    Join Date
    Nov 2006
    Location
    Belgium
    Beans
    3,025
    Distro
    Ubuntu 10.04 Lucid Lynx

    Re: Setting up software RAID, post install and not for / or booting

    Quote Originally Posted by Demented ZA View Post
    As for NFS, I've never used it before, though I've seen it mentioned in passing on the forums and in guides about other topics. I'll definitely have to do some studying up. How would you suggest I go about implementing this? Would you suggest any reading material, or should I just Google it?
    Here's how I got started with NFS:
    http://users.telenet.be/mydotcom/how...networking.htm

  7. #7
    Join Date
    Nov 2006
    Location
    Belgium
    Beans
    3,025
    Distro
    Ubuntu 10.04 Lucid Lynx

    Re: Setting up software RAID, post install and not for / or booting

    Quote Originally Posted by YesWeCan View Post
    I consider RAID 5 the least reliable of the standard RAID levels
    Ha, opinions. Do you have reasoning behind this? E.g., can you explain why RAID 5 would be less reliable than RAID 1?

    Quote Originally Posted by YesWeCan View Post
    IMO RAID 6 is significantly more reliable. RAID 10 is best.
    The last I heard, RAID 6 also comes with a significant write performance penalty.

    And RAID 10 is "best"? Define "best": faster than RAID 0? Better storage efficiency than RAID 5? More fault tolerant than RAID 6?

  8. #8
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Setting up software RAID, post install and not for / or booting

    I'd say for large storage sets you can rule out RAID 1, not based on reliability, but because you'll just waste a bunch of space. And if you're planning on using mdadm, I'd rule out RAID 10, because it doesn't support growing an existing RAID 10 array (unless you never plan on adding more disks).

    RAID 10 is great because you don't have a massive resync time if a disk fails, but it does waste a lot of hard drive space. In your scenario I would choose, and have chosen for myself, RAID 6. Does it have a write penalty? Sure. But I can still write to my array at over 200MB/s, which is more than adequate for my purposes, plus I have the benefit of 2 parity disks. With five 2TB disks, your resync time is going to be lengthy in the event of a failure, and with RAID 5 you'd hate to lose another disk, and your array, while it rebuilds.
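
    For what it's worth, creating a 5-disk RAID 6 with mdadm is essentially a one-liner (device names are examples):

    Code:
    sudo mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[bcdef]1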
    Last edited by rubylaser; February 20th, 2011 at 11:57 PM.

  9. #9
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,136
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Setting up software RAID, post install and not for / or booting

    Regarding NFS, if you want to mount NFS shares in Windows, you'll need to install Windows Services for UNIX on Windows XP, or Client Services for NFS on Windows 7. The previous guide for NFS is good, and here's the one I used to follow:
    http://www.debianhelp.co.uk/nfs.htm
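
    Once the Windows NFS client is installed, mounting from a command prompt should look roughly like this (server and share names are examples):

    Code:
    mount -o anon \\server\storage Z: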

  10. #10
    Join Date
    Jul 2009
    Location
    South Africa
    Beans
    168
    Distro
    Ubuntu

    Re: Setting up software RAID, post install and not for / or booting

    @YesWeCan, thank you for your input. I have considered RAID 6 quite carefully. While RAID 6 has double the parity of RAID 5, it brings two penalties with it:

    • Reduced performance vs. RAID 5, due to the additional parity calculations, and
    • a further 20% of purchased storage capacity lost to parity (worked numbers below).
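
    To put numbers on that: with 5 x 2TB drives, RAID 5 leaves 4 x 2TB = 8TB usable (one drive's worth of parity), while RAID 6 leaves 3 x 2TB = 6TB (two drives' worth). The second parity drive is one drive out of five, i.e. 20% of the purchase.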


    The pros (added parity) and the cons (mentioned above) make RAID 5 and 6 fairly even contenders. In order to decide, I had to consider what I'm going to be storing on them.

    The estimated storage breakdown, based on my existing server, looks like this:


    • ~100GB documents/pictures (office files, e.g. Word, Excel, HTML, JPG, BMP, PDF, CPP, etc.)
    • ~2TB large files (video, Clonezilla HDD clones, CD/DVD ISOs, downloaded software)
    • free space for growth

    While assessing the data, I made the following observations:


    • The relatively small accumulation of office files is the only truly irreplaceable data. Since this is less than 0.2TB, it is easy to back up regularly off the array; we are currently doing this and I intend to keep doing it. We have two drives, one of which is kept off-site. The on-site one goes home at night, and the one at home is brought to the office the next morning. These are kept in sync with rsync and a cron job (see the sketch after this list).


    • All large files are replaceable, even the video. We make sure that all video is backed up to two sets of Verbatim dual-layer discs; one goes to our customers, and the other is kept as a backup. Having them on the RAID array is more for convenience, archiving and reference. The same goes for the ISOs, software, etc.


    • I can afford to take the system offline while rebuilding a failed drive, since the server is not so mission critical that business can't continue for a few hours to a week (as long as it would take to replace the drive and rebuild the data).


    • Since I am buying my drives over time from different vendors, they won't be of the same batch, minimising the chances of having more than one drive fail at the same time.


    • I have also taken proper measures, such as a decent Antec case (Twelve Hundred) for cooling the hardware and an Antec TruePower 1200 power supply to help regulate power and prevent surges. The server will also be connected to its own Line-R 1200 VA line conditioner.
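
    The rsync job I mentioned above is nothing fancy; roughly like this in the crontab (paths are examples):

    Code:
    # mirror the documents share to the external drive every weekday at 18:00
    0 18 * * 1-5 rsync -a --delete /storage/documents/ /media/backup/documents/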

    According to a study Intel published in early 2005, and another one in 2009, drive failure rates (at least among Seagate, Western Digital and two other manufacturers) have dropped significantly. In 2005, the study showed that, out of 5 drives, data recovery operations would be necessary every 2.3 years. This is an estimated average of the sum across all drive manufacturers, divided by the number of manufacturers. It therefore stands to reason that by selecting a reputable brand such as Seagate or Western Digital in 2011, drive failure is less likely.

    I also opted for 5900 RPM green-power drives, as they spin more slowly and scale speed depending on load. This reduces wear on mechanical parts and produces less heat, providing even further longevity.

    The verdict: RAID 5 wins, I would think, no?

    @koenn:

    Ha, opinions. Do you have reasoning behind this? E.g., can you explain why RAID 5 would be less reliable than RAID 1?
    I suppose RAID 5 can be considered less reliable than RAID 1. Both arrays can survive one drive failing, but neither can survive two simultaneous media failures. In the case of RAID 5, the more drives you have, the higher the likelihood of a failure, and the more likely you are to have more than one drive fail at a time (about an 11% increase in risk per drive; a calculated guess based on Intel's figures). With that said, the performance benefits of RAID 5 over RAID 1 more than make up for the increased risk in speed-critical environments. Also, a well-maintained RAID 5 array can reduce the chances of disaster through preventative maintenance. Almost every RAID level has its place; it depends on the demands of the intended implementation.
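
    By preventative maintenance I mean things like failure alerts and regular consistency checks; roughly like this (a sketch, with an example mail address and array name):

    Code:
    # have mdadm mail an alert when a drive drops out
    sudo mdadm --monitor --scan --daemonise --mail=admin@example.com
    # and periodically scrub the array for latent bad sectors
    echo check | sudo tee /sys/block/md0/md/sync_action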

    @rubylaser, thanks! I was just going to ask about ******* since most of our desktops run Windows 7.

    Once again, thank you guys for your input.
