
Thread: Full server backup

  1. #1
    Join Date
    Aug 2013
    Beans
    15

    Full server backup

    I need to back up a client server running Ubuntu Server 12.04 (I think...) to an external drive. Someone in the past set up a command for them that tars the entire drive to a backup file, which they then move to a backup drive. I'm asking for help on a few items:
    1. Please review the backup steps they take now and tell me if they are sufficient to accomplish a bare metal restore assuming the server hard drive fails or we move it to another machine altogether. I'm trying to test that now, but at 100GB, the backup file takes some time to move around and attempt a restore. (more below)
    2. I'm going to attempt restoring this to a virtual machine in VirtualBox. That should work, right?
    3. What is the best long-term backup strategy for this? My priorities are minimal downtime, but a few hours is alright. My feeling is I should be able to get incremental backups set up using three or four external drives they'd cycle off-site weekly. A RAID1 mirror is on my list too. So, what's the best way to set up incremental backups on Ubuntu Server? Note: I have a test server here and tried Bacula, but couldn't figure it out, even after reading a bunch of tutorials. Will this or another application do what I want, especially considering multiple drives and incremental backups? If not, what scheme do you all recommend?


    More info on 1.:
    Here are the steps they currently take to make a backup:
    1. Plug in backup drive, navigate to Webmin, mount the external drive under /mnt
    2. through Putty (remote from another machine):
      Code:
      cd /mnt
      tar cvpzf backup.tgz --exclude=/proc --exclude=/lost+found --exclude=/backup.tgz --exclude=/mnt --exclude=/sys /
    3. when complete they copy the backup.tgz file to a folder named by date
    4. unmount drive and walk off-site with it.

    so, does this look good enough for now? They've never tested a restore, and that's what I'll try once I have this file copied. Any advice for setting this up in a VM as if it were a bare metal restore?

    Thanks for any tips or advice.

  2. #2
    Join Date
    Mar 2010
    Location
    Portland; PM free zone.
    Beans
    9,400
    Distro
    Lubuntu 14.04 Trusty Tahr

    Re: Full server backup

    Tar is the 1980s way to back up servers. It is fine for small backups of a few MB, but probably shouldn't be used for anything larger. There are far better tools than tar: rbackup, rsnapshot, a custom rsync script with versioning, rdiff-backup, duplicity ... lots of options that will be faster and easier to restore.

    I wouldn't trust that "cd" command to work. I'd point the tar directly at /mnt/backup-`date "+%Y%m%d"`.tgz, but only after validating that the drive was actually mounted, either by looking for a specific file that is only on the other media or by checking the mtab via a df command - something like the sketch below. It is possible to write to /mnt with no media actually mounted there. It might fill the partition to 100% capacity and make lots of bad things happen. There are a few threads here where people lost 100G+ of storage and couldn't find it. It was buried under a mount that failed once.
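    Roughly like this - an untested sketch that assumes the drive mounts at /mnt; adjust the mount point and exclude list to match their current command:
    Code:
    #!/bin/bash
    # Sketch only - verify the paths before trusting it with real backups.
    BACKUP_MNT=/mnt
    # Refuse to run unless something is really mounted there (check the kernel mount table).
    if ! grep -qs " $BACKUP_MNT " /proc/mounts; then
        echo "Nothing mounted at $BACKUP_MNT - aborting" >&2
        exit 1
    fi
    # Dated archive name so older backups are not overwritten.
    TARGET="$BACKUP_MNT/backup-$(date +%Y%m%d).tgz"
    tar cpzf "$TARGET" --exclude=/proc --exclude=/sys --exclude=/dev \
        --exclude=/lost+found --exclude=/mnt /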

    Versioned backups are important. You should be able to get the system back from yesterday, last week, last month.

    100% automatic backups are critical. If a human needs to swap in a HDD, it will fail eventually. Backups are my life and I can't be bothered to connect a HDD for backups even weekly. Automatic.

    Efficient on storage. Having 60 days of backups shouldn't require 60x the storage. OTOH, for 1.2x the original storage, you can have 60 days worth of versioned backups. I do this today across more than a few servers. Modern backup tools completely rock.

    Do you have multiple servers on the same LAN or do you need to push the data over a WAN connection? That first backup can suck, but the incrementals should be minimal. I like having a local backup, then a mirror of those backup areas off-site. The off-site backups have **never** been needed in 10 yrs, but the local ones have been used about once a year.

    Whether a restore will work to a VM or not depends on how tightly the OS is connected to non-standard hardware. The network device will certainly change names - it is controlled by the MAC address ... so if you have eth0, you will get an eth1 in the new VM ... until you clean that up or change the MAC in the appropriate /etc/udev/.... file.
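    On 12.04 that file is typically /etc/udev/rules.d/70-persistent-net.rules, and an entry looks something like this (the MAC address below is made up):
    Code:
    # example line from /etc/udev/rules.d/70-persistent-net.rules
    # (made-up MAC - change it to the VM's MAC, or delete the stale line
    #  and reboot so eth0 gets regenerated)
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:aa:bb:cc", KERNEL=="eth*", NAME="eth0"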

    In the last week, I've written replies here on backups with lots of links to presentations about different backup tools, including my favorite, rdiff-backup. If you are interested, you can search and find those links and presentations. I hope they are helpful. There's an example script in those links too.

    If you need to store the backups on untrusted remote storage (Amazon S3 or an FTP site), then look at duplicity, which can encrypt the files on the way to the storage. I don't like the "set" of files it uses - you need the tool just to restore 1 file. rdiff-backup, on the other hand, stores the most recent backup as a plain mirror - want to restore 1 file? copy/scp/sftp/rsync it back where you need it. Same for the entire directory. The tool is only needed to restore older versions of files ... and even then, it isn't strictly required. I really like rdiff-backup for those reasons. The data isn't stored in some weird format. I can get to it easily when my mind isn't working fully at 4am.
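    Basic rdiff-backup usage looks roughly like this (paths are placeholders):
    Code:
    # back up / to the mounted drive, keeping increments
    rdiff-backup --exclude /proc --exclude /sys --exclude /dev --exclude /mnt / /mnt/rdiff-backup
    # pull back one file as it existed 10 days ago
    rdiff-backup --restore-as-of 10D /mnt/rdiff-backup/etc/fstab /tmp/fstab.10-days-ago
    # prune increments older than 60 days
    rdiff-backup --remove-older-than 60D /mnt/rdiff-backup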

    I've been thinking about cleaning up my current backup script so it is safe to publish. Someday. For every server here, the backup scripts are just a little different - sometimes I need to shut down a SQL server first, or dump the SQL to a file before running the backups, to avoid corruption from changing files. OTOH, I have used rsync to migrate a running blog server to a different box and everything turned out fine.
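    For the dump-first approach, something along these lines runs before the file-level backup (user, password, and path are placeholders):
    Code:
    # dump every MySQL database to a dated file before the file backup runs
    # --single-transaction keeps InnoDB tables consistent without locking
    mysqldump --all-databases --single-transaction -u root -pPASSWORD \
        > /var/backups/mysql-all-$(date +%Y%m%d).sql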

    Testing the restore is a good idea. BTW, if you have just 1 partition of 100G, I think your server isn't set up in a "smart" way. It is a good idea to have the OS and apps on 1 partition and the data for the main app of the server on a different partition. It will make OS upgrades easier, trust me.

    I'm confused. Why use Webmin at all? Mounting a USB device is pretty easy for a script.

    Good question. Thanks.

  3. #3
    Join Date
    Oct 2009
    Beans
    Hidden!
    Distro
    Ubuntu 12.04 Precise Pangolin

    Re: Full server backup

    Don't forget to exclude /dev too.

    This probably has more information than you need but give it a read anyway.
    https://wiki.archlinux.org/index.php...ackup_with_tar

    As far as checking to see if a folder is mounted, mountpoint works wonders.

    I don't usually back up my server installs as they are easy to reinstall from scratch. I do back up my data via rsync.

    EDIT: I can dig up the rsync script I use that uses hard links to do incremental backups.
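    The general shape is rsync's --link-dest option - a simplified sketch rather than the exact script (paths are placeholders):
    Code:
    #!/bin/bash
    # Hard-linked incremental backups, simplified - placeholder paths.
    SRC=/home
    DEST=/mnt/backups
    TODAY=$(date +%Y-%m-%d)
    # Unchanged files are hard-linked against the previous run, so each daily
    # directory looks like a full backup but only changed files use new space.
    rsync -a --delete --link-dest="$DEST/latest" "$SRC/" "$DEST/$TODAY/"
    rm -f "$DEST/latest"
    ln -s "$TODAY" "$DEST/latest"
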
    Last edited by CharlesA; August 22nd, 2013 at 09:00 PM.
    Come to #ubuntuforums! We have cookies! | Basic Ubuntu Security Guide

    Tomorrow's an illusion and yesterday's a dream, today is a solution...

  4. #4
    Join Date
    Mar 2010
    Location
    Portland; PM free zone.
    Beans
    9,400
    Distro
    Lubuntu 14.04 Trusty Tahr

    Re: Full server backup

    Hey Charles!

    Loved the 'mountpoint' command. Taught me something new, but it doesn't always work.
    Code:
    $ mountpoint /Data
    mountpoint: /Data: Value too large for defined data type
    Tried it with sudo too, same failure. It worked fine on most directly connected mounts ... just the external, network ones failed.

    The rsync examples website https://rsync.samba.org/examples.html has lots of great sample scripts. I think rsnapshot and rbackup are based on the versions there.

    To back up a list of packages on Debian or Ubuntu or Mint, use dpkg --get-selections > ~/list-o-pkgs.txt. Then when it is time to restore on a new HDD, use dpkg --set-selections < ~/list-o-pkgs.txt.
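    Spelled out as commands - note the apt-get dselect-upgrade step, which is what actually installs the selected packages on the new system:
    Code:
    # on the old system: save the package list
    dpkg --get-selections > ~/list-o-pkgs.txt
    # on the new install: load the list, then let apt install everything in it
    sudo dpkg --set-selections < ~/list-o-pkgs.txt
    sudo apt-get update
    sudo apt-get dselect-upgrade
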
    Last edited by TheFu; August 22nd, 2013 at 09:36 PM.

  5. #5
    Join Date
    Oct 2009
    Beans
    Hidden!
    Distro
    Ubuntu 12.04 Precise Pangolin

    Re: Full server backup

    Nice catch, TheFu. So far I have only ever used mountpoint in bash scripts for local media. I guess if you want to test to make sure a remote file system is mounted, you'd have to do something else.
    Come to #ubuntuforums! We have cookies! | Basic Ubuntu Security Guide

    Tomorrow's an illusion and yesterday's a dream, today is a solution...

  6. #6
    Join Date
    Aug 2013
    Beans
    15

    Re: Full server backup

    Great advice all. I'll be messing with this extensively as soon as the file moves to my drive for testing. Love copying huge files....

    Sadly, I have to have a solution that works with external drives. I'll read through all of this and see if I can make that work with one of the tools recommended. The problem is they're on a pretty miserable DSL connection with terrible upload speeds; the first off-site backup would probably take weeks or months to establish, and it could take days whenever large files change. Hey, at least it isn't satellite like some of my clients. Can anyone think of another solution? Rotating external HDDs is all I can come up with. A couple of large-ish SSDs would speed things up, and as long as they had at least 3 plus a mirrored HDD in the box, I'd be pretty comfortable with that. I doubt they'll want to spend double on the SSDs though.

    So, with this in mind, same suggestions? If at all possible, a GUI the client can use would be fantastic. In fact, here is my ideal. I do mean ideal, and I realize all of this is unlikely:

    What the client sees
    One day each week the client would just pull one USB drive and plug in another. That's all they'd have to know until/unless something goes wrong and we need a restore. At that point they'd use a boot disk I'd given them for the optical drive; it would ask for the external HDD with the latest backup, and the server would be back to working a while later.

    What I need to do/think about
    I realize that to do this I'd need incremental backups that can recognize the state of each drive -- one drive would be weeks behind, and rsync or similar would have to look at a config/status file to know the last backup made to that specific drive. So, on the first day a backup drive is swapped in, the incremental backup would cover a few weeks of changes. It would then back up daily for a week until the next drive is rotated in.

    I'd like to know more about backing up the Ubuntu core, apps, and data separately. Specifically, I'd like to know how a restore could be made simplest for my client. In the end, if it would require me doing the restore, that's fine too. I could reinstall Ubuntu and then restore their data if the other approach is impossible or overly complicated.

    As for different partitions, that's not currently the case. This was set up for them a long time ago by someone else. They're running a webserver (Apache, MySQL, PHP) for SugarCRM, plus file sharing out to Windows and Mac OS clients, so an SMB server as well. We may be ditching Sugar or moving it elsewhere anyway, so now would be a good time to do this right, but I doubt they'll go for it. I'm trying to do the best I can with what I have now. Weekly backups using tar are good enough if they can be restored, but something better would be great.

    Hopefully that info explains what they're doing better. Nothing fancy. No DHCP server, no domain. Just Sugar and files.

  7. #7
    Join Date
    Mar 2010
    Location
    Portland; PM free zone.
    Beans
    9,400
    Distro
    Lubuntu 14.04 Trusty Tahr

    Re: Full server backup

    If you are backing up a DB, you need to take extra steps to ensure non-corrupt backups.

    I don't think I can help anymore in this forum than what I've said already or what my signature links provide.

    If you'd like more help, PM me.

  8. #8
    Join Date
    Aug 2013
    Beans
    15

    Re: Full server backup

    Quote Originally Posted by TheFu View Post
    If you are backing up a DB, you need to take extra steps to ensure non-corrupt backups.

    I don't think I can help anymore in this forum than what I've said already or what my signature links provide.

    If you'd like more help, PM me.
    Thanks. I'm reading through all this and have a big hill to climb. I spent most of my time yesterday trying to copy one of the backup files to my own drive so I can return the client's. Sadly, VirtualBox will only copy at about 1.5 MB/s, making a single move take tens of hours. I used an ext plug-in for Windows to copy instead, but every time I copy one of the 100GB files, the copy stops just over 40GB.

    So long as I know the backups are working I don't necessarily need one. My goal now is to run a restore just so I can tell them their backups work. I'm doing that using instructions here: https://help.ubuntu.com/community/BackupYourSystem/TAR

    You're right that I'll have to back up that database separately. I hadn't thought about that. If it's not installed already, I can just install phpMyAdmin and back the database up with it as well. I'm sure I'll find plenty of ways to script that online.

    Again, thanks for all the info and links. I plan to read a bunch of that as soon as I'm able.

    BTW, I have a portable HDD I can use to get a copy of the backup from my client directly from their server. It's currently NTFS. Copying the backup file to it isn't a problem in any way, is it? Or, do I need to reformat the drive to EXT?

    Thanks again.

  9. #9
    Join Date
    Aug 2013
    Beans
    15

    Re: Full server backup

    I just found out they still access the server while it's running the backup. Will that screw things up? It should probably back up at night while they're away, right?

  10. #10
    Join Date
    Mar 2010
    Location
    Portland; PM free zone.
    Beans
    9,400
    Distro
    Lubuntu 14.04 Trusty Tahr

    Re: Full server backup

    Quote Originally Posted by Dan_Joyce View Post
    Thanks. I'm reading through all this and have a big hill to climb. I spent most of my time yesterday trying to copy one of the backup files to my own drive so I can return the client's. Sadly, VirtualBox will only copy at about 1.5 MB/s, making a single move take tens of hours. I used an ext plug-in for Windows to copy instead, but every time I copy one of the 100GB files, the copy stops just over 40GB.

    So long as I know the backups are working I don't necessarily need one. My goal now is to run a restore just so I can tell them their backups work. I'm doing that using instructions here: https://help.ubuntu.com/community/BackupYourSystem/TAR

    You're right that I'll have to back up that database separately. I hadn't thought about that. If it's not installed already, I can just install phpMyAdmin and back the database up with it as well. I'm sure I'll find plenty of ways to script that online.

    Again, thanks for all the info and links. I plan to read a bunch of that as soon as I'm able.

    BTW, I have a portable HDD I can use to get a copy of the backup from my client directly from their server. It's currently NTFS. Copying the backup file to it isn't a problem in any way, is it? Or, do I need to reformat the drive to EXT?

    Thanks again.
    I know a little about VirtualBox - I give presentations on improving its performance to different groups in the US and a few other countries. See this. You should see 80-95% of native performance for disk, networking, and CPU on a properly tuned VM. Anyway, VirtualBox is NOT a good choice for server-on-server virtualization, but it should be fine for testing a server restore. VirtualBox is meant for desktop-on-desktop VMs.

    USB2 is slow. USB3 has queuing issues in every use I've ever seen - too many different processes seem to confuse the USB bus - sometimes I see ZERO bytes transferred for 20+ seconds if more than 2 processes attempt to use USB storage.

    Tar retains user, group, and simple permissions, but the VM where you restore needs a 100% matching /etc/passwd file or the file permissions will be all screwed up for non-system accounts. That could be bad. Tar really is a terrible choice for your backups; they are too large. Seriously. You have too much data for tar to be effective. On my servers, I see less than 50MB of changed data daily, so pushing backups to a remote system is almost nothing, even over slow DSL, with a smarter backup tool. Definitely do the first backup locally, then take that disk to the remote location and let the magic of incrementals help. It will also mean less downtime while the backups happen.
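    For example, with rdiff-backup the off-site push is a single command run over ssh (hostname and paths are placeholders; rdiff-backup must be installed on both ends):
    Code:
    # push the data directory to an off-site machine over ssh
    rdiff-backup /srv backupuser@offsite.example.com::/backups/clientserver/srv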

    For 100G source, I'd estimate less than 120G would be needed for 60 days of backups using rdiff-backup. Obviously, the amount of change data matters, so this is just a guess. For all the other backup methods except duplicity and tools based on it, you will want a Linux file system - NOT NTFS. Sorry, but it is just an inferior file system for Linux backups.

    Forget the GUI. Those are a huge liability to server stability. Think PowerShell, which borrowed heavily from Unix shells like bash.

