Page 2 of 3
Results 11 to 20 of 22

Thread: How Probable is Hard Drive Failure?

  1. #11
    Join Date
    Jun 2006
    Location
    Brisbane Australia
    Beans
    713

    Re: How Probable is Hard Drive Failure?

    Quote Originally Posted by Cheesemill View Post
    All hard drives fail; it's not a matter of if, just a matter of when.
    I've got some 40GB drives kicking about that are still working but I've also had drives fail in the first week.
    Agree with this. Barring defects, though, in my experience what vastly accelerates hard drive failure is a hot environment.

  2. #12
    Join Date
    Feb 2009
    Location
    Dallas, TX
    Beans
    7,272
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: How Probable is Hard Drive Failure?

    Quote Originally Posted by Zythyr View Post
    5) As @papibe said, my RAID 5 can lose all its data if my OS is corrupted (which will be installed on my 500GB hard drive) or if this 500GB hard drive fails. Is there a way I can back up the server OS (Linux/FreeNAS) to another location (e.g. an external HDD) in case the OS does get corrupted?
    As others have already suggested, I would also install the OS on a medium separate from the RAID.

    Regards.

  3. #13
    Join Date
    Dec 2012
    Beans
    7

    Re: How Probable is Hard Drive Failure?

    I recently added a home server to my network and considered a RAID setup for my HDDs. As I thought about it more, I realized that while availability of the data is important, what I really wanted was a reliable backup of the data. In my case, I installed two identical 2 TB hard drives and use rsync to keep the files I want to back up identical on both. I use rsync incremental backups with the --link-dest option so that I can keep multiple versions of the same files. This has worked well for me since I'm the primary user of the server, and if I have a problem with one drive, I can always go to the other backup drive easily.

    I realize RAID is powerful, and if you need it it's great, but I just wanted to toss another idea out there.
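    For anyone wanting to try the same approach, here is a minimal sketch of an rsync incremental backup using --link-dest. The paths and the dated-snapshot layout are my own assumptions; adjust them to your drives.

    ```shell
    #!/bin/sh
    # Hypothetical paths; adjust to your own drives.
    SRC="/data/important"
    DEST="/mnt/backup2"
    TODAY=$(date +%Y-%m-%d)

    # Unchanged files are hard-linked against the previous snapshot, so each
    # dated directory looks like a full copy but only changed files use space.
    rsync -a --delete --link-dest="$DEST/latest" "$SRC/" "$DEST/$TODAY/"

    # Point "latest" at the newest snapshot for the next run.
    ln -sfn "$DEST/$TODAY" "$DEST/latest"
    ```

    Run it daily from cron and you get a browsable directory of full-looking snapshots that only cost disk space for the changed files.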

  4. #14
    Join Date
    Jul 2010
    Beans
    28

    Re: How Probable is Hard Drive Failure?

    1) Is the RAID partition/data safe and recoverable if the hard drive/USB that the OS is installed on fails? What I mean is, let's say, for example, HDD A has the OS installed and HDD B and HDD C are in a RAID configuration with /home on it. If HDD A fails, can I just get another HDD, install the OS again, install mdadm, and re-mount my RAID partition (B/C) onto the OS?

    2) How do I mount certain directories to a partition? For example, how do I mount /var/log to a certain partition?

    3) I plan on mounting my /home directory on my RAID partition, since the data in /home is the most important to me. However, which other directories should be on the RAID partition in case the HDD with my OS fails and I need to reinstall the OS? I am not sure which directories software and its configuration get installed to; is it necessary to back those up as well?

    4) Overall, what is the best route for installing the OS? Should I install it on a hard drive or a USB drive? Is there a guide on how to install Ubuntu on a USB drive? If I install to a USB drive, will I still be able to reboot my server?

    5) Is it possible to convert RAID1 to RAID5 without losing any data on the hard drive?

    6) Because my desktop enclosure currently doesn't have enough HDD slots, I was wondering: is it possible to have a RAID 5 configuration with only 2x HDD and the 3rd one "missing"? I can install the 3rd HDD in the future when I upgrade my desktop enclosure...

  5. #15
    Join Date
    Feb 2007
    Location
    West Hills CA
    Beans
    9,481
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: How Probable is Hard Drive Failure?

    1. Yes, it will take some time, but that is a valid recovery method. You may need to use blkid to get the UUIDs of the RAID disks and put the UUIDs in some config files.

    2. You can create a symbolic link from /var/log to /mnt/newdrive/var/log (see man ln).

    3. You should make a backup of /etc once a month or so. Your configuration won't change much once your system is up and running. Whether you copy /etc to a USB flash drive or make a backup onto the RAID is up to you. Remember, RAID is not a backup strategy; it is a method to increase data availability, speed, or both. Many home users think RAID is for backup when it's really designed for rapid data availability (as in a business with several users). Many home users are not prepared to perform RAID recovery when things go wrong.

    4. A hard disk or SSD install is easiest, because installing updates to a USB disk is less reliable.

    5. It's possible, but there is always a risk of data loss in a major RAID reconfiguration, just as there is when resizing partitions. You would need to back up your RAID 1 data, create the new RAID 5 array, then repopulate the data.

    6. Bad idea.
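    A minimal sketch of the symlink approach from point 2 (the mount point /mnt/newdrive is a placeholder; stop any services that write logs, or do this from a rescue shell, before moving /var/log):

    ```shell
    # Move the existing logs onto the new partition, then leave a symbolic
    # link at the old path so everything still finds /var/log.
    sudo mv /var/log /mnt/newdrive/log
    sudo ln -s /mnt/newdrive/log /var/log
    ```

    A bind mount or a separate /var/log entry in /etc/fstab achieves the same result; the symlink is simply the least setup.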
    -------------------------------------
    Oooh Shiny: PopularPages

    Unumquodque potest reparantur. Patientia sit virtus.

  6. #16
    Join Date
    Jul 2012
    Beans
    3

    Re: How Probable is Hard Drive Failure?

    I'm not sure of your hardware specs for this server, but if it's 64-bit and has a fair amount of RAM, I would take a hard look at ZFS. Regarding OS failure, ZFS not only allows you to import the pool once you reinstall the OS, it doesn't even require that it be on the same hardware, that the hard drives be connected to the same SATA/SAS ports, or even that it be the same OS. As long as the OS you install supports the pool version, it should import with no issues. One big caveat to keep in mind, though: a recent ZFS update in the Linux world has surpassed the pool and filesystem versions the other "big implementation" OSes support (FreeBSD, Solaris). If you don't plan on switching OSes, this isn't an issue. This is actually something I did 2 months ago, moving a 5-disk RAID-Z array from FreeBSD to Ubuntu Server. Since the pool configuration is stored in metadata on the disks themselves, you don't have any config files to restore, and your recovery procedure would look like this:
    1. Reinstall OS
    2. Install ZFS Native
    3. Run zpool import
    4. On the list it displays, verify the pool you want to import and run zpool import -f [pool/ID]

    At that point the pool will be imported and all the filesystems residing on it remounted.
    ZFS also helps with the provisioning speed of those 3TB disks. If you set them up with mdadm, you would still need to format them with whatever filesystem you plan on using, and 3TB can take a substantial amount of time; ZFS, on the other hand, will have them ready in under a minute. Depending on the filesystem you would choose with a more traditional RAID setup, the feature set may also be much larger with ZFS, with things like compression, quotas, and snapshots. Like many others have stated, RAID is not a replacement for full backups, as it doesn't prevent end-user issues like deleting important files. Snapshots are a great thing to back up from and alleviate some of the overhead if there is an available snapshot when you need to restore. Just note that snapshots alone are not a full backup solution either, but they are definitely worth looking into when developing your backup solution.

    On your questions 5 and 6, I would first ask whether you plan on filling up 3TB (and in the case of the RAID 5, 6TB) right away, and how much time you need to buy that 3rd drive. Just based on what you've asked, I would suggest RAID 1, as that actually gives you working redundancy (and a performance bump for reads). I'm sure you can fake out mdadm by creating a 3-disk RAID 5 group and then pulling that 3rd pseudo disk, but at that point you are starting with a degraded group with no fault tolerance at all, and you are no better off than if you had used RAID 0 until you buy that 3rd drive and resilver/rebalance the group. If that 3rd drive is a month out, I would say hold off for a month and build it the right way from the beginning. If it's more like 6 months to a year, then go RAID 1, and 6 months from now really think about incurring the cost overhead of a 4th drive and going RAID 10.
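    The four recovery steps above, sketched as commands. This is an outline, not something to paste: it needs root and a real pool, the pool name "tank" is a placeholder, and the package name is what current Ubuntu ships (older releases used the ZFS-on-Linux PPA instead).

    ```shell
    # 1. Reinstall the OS, then:
    # 2. Install ZFS (package name on current Ubuntu).
    sudo apt-get install zfsutils-linux
    # 3. List pools that can be imported (read from on-disk metadata).
    sudo zpool import
    # 4. Import the pool you verified, by name or numeric ID.
    sudo zpool import -f tank
    # The pool's filesystems are now remounted; verify with:
    zfs list
    ```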

  7. #17
    Join Date
    Sep 2011
    Location
    Behind you!
    Beans
    556
    Distro
    Ubuntu 12.04 Precise Pangolin

    Re: How Probable is Hard Drive Failure?

    I don't think anyone has mentioned the difference between server-grade equipment and home-grade equipment. Server-grade equipment is designed to be online all the time and tends to enjoy a longer life in 100% online, all-the-time usage. Taking home equipment and leaving it on all the time, acting like a server, runs the risk of the equipment failing even sooner. Another factor to consider is how "clean" the power going into your equipment is. Do you have it plugged directly into a wall (a surge suppressor is not much better)? Or do you have your equipment plugged into a good-quality UPS? Does your area experience lots of brown-outs or lightning storms?

    But regardless, you always need to plan for the equipment failing and decide how much you are willing to spend to lower the risk for "when" it does.

    When speaking of RAID, do not think of it as any sort of "backup" at all; it is only a means to keep your system running if a drive fails. You should always have a backup process in place. Having all your partitions backed up on a regular basis for full disaster recovery is good, but such backups take time to run. It would also be good to couple them with smaller data/delta backups that happen more frequently. Where your backups reside is also a VERY important consideration. If the backup is on the same machine, a lightning strike or PSU failure could destroy it all in one shot. Writing to a drive on the same machine is handy and easy to schedule on a regular basis, but you might want to consider writing to an external drive or another computer on the network. To further reduce the risk of data loss, consider writing the data to an external disk and then alternating that disk with another at an offsite location on a regular basis.

    Once you have a process figured out, the next item to determine is how many backups you are going to retain (how far back you can restore).

    Automating the process as much as possible can save your neck when disaster strikes. Relying on manual human interaction is another risk, since it might not happen.
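    One way to automate that mix of full and delta backups is a cron schedule. A sketch (all paths are placeholders): a weekly full archive plus nightly rsync deltas to an external drive.

    ```shell
    # /etc/cron.d/backups (sketch; paths are placeholders)
    # Sunday 2am: full archive of configuration and home data.
    0 2 * * 0   root  tar -czf /mnt/external/full-$(date +\%F).tar.gz /etc /home
    # Mon-Sat 3am: delta run, mirroring only what changed.
    0 3 * * 1-6 root  rsync -a --delete /home/ /mnt/external/home-mirror/
    ```

    Note the escaped `\%` — cron treats a bare `%` as a newline in the command field.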

    LHammonds

  8. #18
    Join Date
    Jul 2010
    Beans
    28

    Re: How Probable is Hard Drive Failure?

    Firstly I want to thank all of you for taking your time to answer my questions. I really appreciate everyone's help. Once again, thanks a lot!


    Quote Originally Posted by tgalati4 View Post
    1. Yes, it will take some time, but that is a valid recovery method. You may need to use blkid to get the UUIDs of the RAID disks and put the UUIDs in some config files.

    3. You should make a backup of /etc once a month or so. Your configuration won't change much once your system is up and running. Whether you copy /etc to a USB flash drive or make a backup onto the RAID is up to you. Remember, RAID is not a backup strategy; it is a method to increase data availability, speed, or both. Many home users think RAID is for backup when it's really designed for rapid data availability (as in a business with several users). Many home users are not prepared to perform RAID recovery when things go wrong.
    1. I looked at this guide, and it seems rebuilding/recovering the RAID partition is a lot simpler: http://askubuntu.com/questions/11293...ard-drive-died. Are you sure recovering/rebuilding the RAID partition after a fresh OS install is difficult?

    2. So /etc stores all the configuration? Which directory stores all the software/apps I install?

    Quote Originally Posted by joeyea3231 View Post
    I'm not sure of your hardware specs for this server, but if it's 64-bit and has a fair amount of RAM, I would take a hard look at ZFS. Regarding OS failure, ZFS not only allows you to import the pool once you reinstall the OS, it doesn't even require that it be on the same hardware, that the hard drives be connected to the same SATA/SAS ports, or even that it be the same OS. As long as the OS you install supports the pool version, it should import with no issues. One big caveat to keep in mind, though: a recent ZFS update in the Linux world has surpassed the pool and filesystem versions the other "big implementation" OSes support (FreeBSD, Solaris). If you don't plan on switching OSes, this isn't an issue. This is actually something I did 2 months ago, moving a 5-disk RAID-Z array from FreeBSD to Ubuntu Server. Since the pool configuration is stored in metadata on the disks themselves, you don't have any config files to restore, and your recovery procedure would look like this:
    1. Reinstall OS
    2. Install ZFS Native
    3. Run zpool import
    4. On the list it displays, verify the pool you want to import and run zpool import -f [pool/ID]

    At that point the pool will be imported and all the filesystems residing on it remounted.
    ZFS also helps with the provisioning speed of those 3TB disks. If you set them up with mdadm, you would still need to format them with whatever filesystem you plan on using, and 3TB can take a substantial amount of time; ZFS, on the other hand, will have them ready in under a minute. Depending on the filesystem you would choose with a more traditional RAID setup, the feature set may also be much larger with ZFS, with things like compression, quotas, and snapshots. Like many others have stated, RAID is not a replacement for full backups, as it doesn't prevent end-user issues like deleting important files. Snapshots are a great thing to back up from and alleviate some of the overhead if there is an available snapshot when you need to restore. Just note that snapshots alone are not a full backup solution either, but they are definitely worth looking into when developing your backup solution.
    I really wanted ZFS, but I thought it was only for FreeNAS/FreeBSD. My understanding was/is that ZFS is not yet fully developed or stable on Ubuntu/Linux. Is it a good idea to use ZFS on Linux if it can have a lot of bugs, unlike FreeBSD where ZFS is native? How would I go about installing or implementing ZFS in Ubuntu? Is there a guide?

    Quote Originally Posted by LHammonds View Post
    I don't think anyone has mentioned the difference between server-grade equipment and home-grade equipment. Server-grade equipment is designed to be online all the time and tends to enjoy a longer life in 100% online, all-the-time usage. Taking home equipment and leaving it on all the time, acting like a server, runs the risk of the equipment failing even sooner. Another factor to consider is how "clean" the power going into your equipment is. Do you have it plugged directly into a wall (a surge suppressor is not much better)? Or do you have your equipment plugged into a good-quality UPS? Does your area experience lots of brown-outs or lightning storms?

    But regardless, you always need to plan on the equipment failing and how much you are willing to spend to lower the risk of "when" it does.

    When speaking of RAID, do not think of it as any sort of "backup" at all; it is only a means to keep your system running if a drive fails. You should always have a backup process in place. Having all your partitions backed up on a regular basis for full disaster recovery is good, but such backups take time to run. It would also be good to couple them with smaller data/delta backups that happen more frequently. Where your backups reside is also a VERY important consideration. If the backup is on the same machine, a lightning strike or PSU failure could destroy it all in one shot. Writing to a drive on the same machine is handy and easy to schedule on a regular basis, but you might want to consider writing to an external drive or another computer on the network. To further reduce the risk of data loss, consider writing the data to an external disk and then alternating that disk with another at an offsite location on a regular basis.
    Yes, that is a very good point. My server/RAID configuration is not going to be my only backup; I also have multiple external drives to ensure I have multiple copies of my important data. However, doesn't a RAID configuration have data redundancy? Multiple hard drives in the RAID configuration have the same data, so won't the failure of one hard drive still allow recovery of the data?
    Last edited by Zythyr; March 13th, 2013 at 04:03 AM.

  9. #19
    Join Date
    May 2008
    Location
    SoCal
    Beans
    Hidden!
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: How Probable is Hard Drive Failure?

    Quote Originally Posted by Zythyr View Post
    ...
    I really wanted ZFS, but I thought it was only for FreeNAS/FreeBSD. My understanding was/is that ZFS is not yet fully developed or stable on Ubuntu/Linux. Is it a good idea to use ZFS on Linux if it can have a lot of bugs, unlike FreeBSD where ZFS is native? How would I go about installing or implementing ZFS in Ubuntu? Is there a guide?
    ZFS isn't any more native to BSD than it is to Linux. ZFS was developed by Sun and is native to Solaris. Your best bet is to get Ubuntu forum user @rubylaser to help you; he uses ZFS on both Linux and Solaris.

    See here for an interesting discussion with @rubylaser.

    Yes, that is a very good point. My server/RAID configuration is not going to be my only backup; I also have multiple external drives to ensure I have multiple copies of my important data. However, doesn't a RAID configuration have data redundancy? Multiple hard drives in the RAID configuration have the same data, so won't the failure of one hard drive still allow recovery of the data?
    A RAID implementation should never be used in place of a backup strategy. The data is striped across all the disks in RAID 5, and a mirrored RAID is no better than a single disk plus a backup in the long run. In my opinion, with large disks (>1TB), RAID fails for many people during the rebuild after a disk failure.

    See here for more about why NOT to rely on RAID.
    -BAB1

  10. #20
    Join Date
    Jul 2010
    Location
    Michigan, USA
    Beans
    2,123
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: How Probable is Hard Drive Failure?

    I would love to know what type of stuff you want to store on the array. I use ZFS, mdadm, and SnapRAID, and they all work very well. I have used both ZFS and mdadm for home storage (and enterprise) for years, but I've come to think that SnapRAID is the easiest solution for most home users. If you don't need the blazing throughput of ZFS or mdadm, SnapRAID + AUFS is a great solution. I have some setup directions for both mdadm and SnapRAID in my signature. The good news is that any of these solutions will work great.
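    For reference, a SnapRAID setup is mostly a small config file plus a periodic sync. A sketch (mount points and file names are placeholders; check the SnapRAID manual for the exact directives):

    ```shell
    # /etc/snapraid.conf (sketch; mount points are placeholders)
    parity /mnt/parity/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    disk d1 /mnt/disk1
    disk d2 /mnt/disk2
    ```

    After editing the config, `snapraid sync` computes parity and `snapraid scrub` periodically verifies it; since parity is updated on demand rather than in real time, it suits mostly-static media collections better than busy databases.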
