
Thread: Time to build a server with old hardware.

  1. #81
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    22,127
    Distro
    Ubuntu Mate 16.04 Xenial Xerus

    Re: Time to build a server with old hardware.

    Quote Originally Posted by Phil Binner View Post
    I finally have a network. I'll spend a while trying out the permissions, following the reading and finishing my documentation before coming back to get the Windows machines connected.

    If you ever get fed up with your American colleagues boring you with how to make American beer, come and ask me and I'll bore you with how to make English beer, along with all the history about how IPA came to be, etc, etc.

    I don't know how you put up with me. Thank you.
    Glad you got it working. Nobody is born knowing this stuff.

  2. #82
    Join Date
    Sep 2011
    Location
    Behind you!
    Beans
    1,442
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Time to build a server with old hardware.

    Quote Originally Posted by TheFu View Post
    Nobody is born knowing this stuff.
    Except for T-800 and T-1000 models. (Terminator movie reference)

  3. #83
    Join Date
    Feb 2008
    Location
    Lincolnshire
    Beans
    210
    Distro
    Ubuntu 11.04 Natty Narwhal

    Re: Time to build a server with old hardware.

    Now that I have NFS working, I would like to get to the next phase, which is to get the Windows machines connected. My understanding is that the only way to do this is with Samba, but it would be great if that is not true.

    Where do I go next?

  4. #84
    Join Date
    Feb 2008
    Location
    Lincolnshire
    Beans
    210
    Distro
    Ubuntu 11.04 Natty Narwhal

    Re: Time to build a server with old hardware.

    That was ridiculously easy. The Ubuntu documentation has a very simple tutorial on the subject, here:

    Code:
    https://ubuntu.com/tutorials/install-and-configure-samba#1-overview
    What I did was install Samba and edit /etc/samba/smb.conf to add these lines:

    Code:
    [Vol01]
       comment = Vol01 on Samba
       path = /Vol01
       read only = no
       browsable = yes
    Then I set up the Samba user, opened the firewall, and restarted the service.
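
    For reference, those steps were roughly the following commands (a sketch only; the username is a placeholder, adjust for your own system):

    Code:
    sudo apt install samba
    sudo smbpasswd -a youruser        # create/enable the Samba user
    sudo ufw allow samba              # open the firewall for SMB
    sudo systemctl restart smbd       # restart the Samba service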

    I had to map a network drive in Windows and there it all was.

  5. #85
    Join Date
    Feb 2008
    Location
    Lincolnshire
    Beans
    210
    Distro
    Ubuntu 11.04 Natty Narwhal

    Re: Time to build a server with old hardware.

    What I will need next is a good backup system; then I can go live. Currently I just copy my network files to a couple of removable hard drives, but I think I want better than that now, probably daily/weekly/monthly.

    Any suggestions as to a good system?

    Thanks.

  6. #86
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    22,127
    Distro
    Ubuntu Mate 16.04 Xenial Xerus

    Re: Time to build a server with old hardware.

    If multiple userids are involved, you'll need to plan for capturing permissions, ACLs, xattrs AND the data as each changes over time. There are modern backup solutions and there are old-school backup solutions that follow the methods from 1970. Only 2 that I know of will capture everything needed, not just the data as it changes over time. All the others fail in that way or are extremely inefficient. For a single-user desktop setup, permissions aren't nearly so important, so lots of advice you might see for "simple backup tools" handles just the files, but little else.

    On multi-user systems or servers, the metadata about files and directories is absolutely critical. We need to capture the data (50%) and the metadata (50%: owner, group, permissions, ACLs, xattrs) as each file changes over time. Attempting to restore without the correct metadata won't end well. For system files, it will lead to a system that won't boot - really, the best case would be a non-secure system that shouldn't be trusted.
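
    To see exactly what a backup has to preserve, look at the metadata on any file (the path below is just an example; getfacl and getfattr come from the acl and attr packages):

    Code:
    stat -c '%U %G %a %n' /Vol01/somefile    # owner, group, permission bits
    getfacl /Vol01/somefile                  # POSIX ACLs
    getfattr -d /Vol01/somefile              # extended attributes (xattrs)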

    Do some reading about the different types of backups available, how much time and storage you can allocate for backups, and consider how important a 1-click restore is. You'll find there are lots and lots of trade-offs. If you have 50x the storage of the original data for backups and 2-5 hours a day where the system can be down, then imaged backups can be used. Most people do not have that much storage, or can't tolerate that much time with a server unavailable. Trade-offs.

    Assuming you have multiple users and perhaps 1.3x the amount of storage as the source files being backed up, there are really only two choices: Duplicity and rdiff-backup. These are very different sorts of tools.

    Duplicity works the 1970s way: weekly "full" backups and daily incrementals. To restore, first restore the last "full" backup, then apply all the incremental backups up to the date being recovered. Running a full backup every week, every other week, or even every month will take lots of downtime, unless some enterprise storage management with snapshot capabilities is used, like LVM or ZFS. However, using LVM is **not** something for a beginner.
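
    A rough sketch of that full/incremental workflow with Duplicity (the target path is just an example; encryption is turned off here only to keep the sketch short):

    Code:
    duplicity full --no-encryption /Vol01 file:///backups/Vol01           # weekly full
    duplicity incremental --no-encryption /Vol01 file:///backups/Vol01    # daily incremental
    duplicity restore --no-encryption file:///backups/Vol01 /tmp/restore  # full + increments replayed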

    rdiff-backup works in a more modern way: nightly mirrors, with compressed differences for the changes between the current and the previous backup version. The differences in metadata are also captured and versioned for every file and directory, even if the contents of those files don't change.

    About 90% of the time, when picking which backup set we want to restore, the most recent (last night) is the one used. rdiff-backup's most recent backup looks like a mirror - as though rsync had been used - so we can use rdiff-backup as the restore tool, or rsync, or cp, or any tool that can copy the files (assuming the permissions allow that). If we need one file, a directory structure, or the entire backup set restored, we can do that easily. OTOH, if we had a corrupted DB and nobody noticed the issue until weeks later, rdiff-backup with -r 25D will restore whatever is in the backup set from 25 days ago - or we can limit it to a directory structure or a single file. Times can be specified many ways: X days ago, Y backups ago, or an exact date/time string.

    The problem with this reverse, compressed, differential backup-set method is that intermediate backup sets cannot be removed, but the oldest one(s) can. In fact, after my daily backups finish, I have rdiff-backup remove any backup sets older than XXX days ago. That results in a rolling XXX days of retained versions. https://www.tecmint.com/rdiff-backup...kup-for-linux/ is an old how-to. Don't worry about the dependencies; APT will handle those as needed for your system.
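
    A minimal sketch of those operations, assuming the classic rdiff-backup command syntax (all paths here are examples):

    Code:
    rdiff-backup /Vol01 /backups/Vol01                                    # nightly mirror + increments
    rdiff-backup -r 25D /backups/Vol01/docs/report.odt /tmp/report.odt    # a file as it was 25 days ago
    rdiff-backup --remove-older-than 90D --force /backups/Vol01           # keep a rolling 90 days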

    BTW, almost all efficient backup tools on Unix are based on rsync. Both Duplicity and rdiff-backup are. Lots of people just use rsync or rsync+hardlinks to provide versioning. I used rsync and a number of rsync+hardlink techniques for years before I was burned by changes to file metadata on an important system.

    All these tools are network capable, so people like me have a network backup server and we "pull" backups from the different client systems on our network. "Pull" uses ssh connections and is much more secure than "push" from the client systems. "Pull" backups address problems with crypto-malware, among other issues; "push" or local backups can be compromised by crypto-locker malware. Setting up secure, networked, "pull" backups requires a good understanding of Unix users, permissions, and ssh. No need for other repos or any PPA these days; all of these tools are in the standard Ubuntu repositories.
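
    For example, a "pull" run from the backup server might look like this (hostnames and paths are placeholders):

    Code:
    # run on the backup server; pulls /Vol01 from the client over ssh
    rdiff-backup backupuser@client.lan::/Vol01 /backups/client/Vol01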



    You'll also see that most people don't back up the entire system with all the bits. That used to be how we all did things, because a restore was the easiest way to get back to a working system. Since the Linux installers have become so friendly and UEFI became common, many of us begin our restore process by doing a fresh, minimal install. That handles the partitioning, boot setup, UEFI vs Legacy BIOS, and encryption or no encryption, with the ability to change the underlying storage and completely replace the hardware - something that restoring an OS image taken from another system doesn't handle well. Linux isn't nearly so picky about moving a HDD from one computer to another; I've done that a few times with only minor problems to be addressed, but for someone really new to Linux, those minor problems could easily mean days with a non-working system. OTOH, a fresh, minimal install is 15 minutes to a working system. Then we just need to make that minimal system into "our" system, with the next steps as outlined in the links above.

    Do some reading. Look at some different options. Only you can pick which of the non-perfect solutions you'll put up with for the next 2-5000 months. For me, the best trade-off is rdiff-backup with 90-180 days of versioned backup sets. Most of the time, 60 days of versioned backups uses just 20% more storage than the source files, so 100G of source data would need about 120G of backup storage. Seems like a pretty easy choice. Plus, daily backups take less than 5 minutes after the first mirror job completes. Obviously, that first mirror will take as long as copying all the files requires; usually that time is limited by storage write performance. If backup storage is tight, begin with just 30 days of versioned backups and see how much storage is used.

    As for automation, use cron for that. If you simply drop a script into /etc/cron.daily/, the backups will happen daily, sometime, thanks to anacron. Anacron guarantees a script will run at some point during the day, provided the system is powered on. If you are like me and want more control over the exact time backups happen on each system, then using the /etc/crontab file directly provides start-time control down to the minute. Scripts run from anacron or /etc/crontab run with root permissions, which is required for any backup that covers more than a single user's HOME directory.
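
    For instance, an /etc/crontab line like this (the script name is just an example) runs a backup at 01:30 every night as root:

    Code:
    # m  h  dom mon dow  user  command
    30   1  *   *   *    root  /usr/local/bin/backup.sh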

    GUI programs cannot be run via crontabs/anacron, which means there is no choice except to create a script to control the backups. Google "beginning bash scripting guide" to get started with that. There are a few hundred bash script examples on your computer already and millions of examples online.
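
    As a starting point, the script can be as simple as this sketch (paths are placeholders; log somewhere you will actually look):

    Code:
    #!/bin/bash
    # /etc/cron.daily/backup - minimal example backup script
    set -eu
    {
        date
        rdiff-backup /Vol01 /backups/Vol01
        rdiff-backup --remove-older-than 90D --force /backups/Vol01
    } >> /var/log/backup.log 2>&1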

    One last thing. Some solutions appear to recommend having 2 or 3 backup tools. IMHO, if you need more than 1 tool to accomplish this goal, perhaps you picked the wrong solution. Every tool has caveats, and those subtle caveats, especially when you're new to Linux, will be difficult to remember, assuming they can be understood at all. Keep everything as simple as possible, but not so simple as to fail to accomplish the task at hand.

    If you want to try out or use an rsync+hardlink solution, that wheel was invented a couple of decades ago. rsnapshot is the name, and it is in the Ubuntu repos, just like duplicity and rdiff-backup.
    https://ubuntu.com/server/docs/tools-rsnapshot
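    The rsnapshot configuration is a few TAB-separated lines in /etc/rsnapshot.conf, roughly like this excerpt (paths and retention counts are examples; fields must be separated by TABs):

    Code:
    snapshot_root   /backups/rsnapshot/
    retain  daily   7
    retain  weekly  4
    backup  /Vol01/ localhost/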
    https://help.ubuntu.com/community/DuplicityBackupHowto - that guide spends way too much time on a dead protocol, plain FTP. Nobody, anywhere, should be using plain FTP in 2020. Nobody. Duplicity supports plenty of secure transfer methods; use one of those instead - sftp, scp, or rsync.
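
    Pointing Duplicity at an sftp target instead of FTP is just a different target URL, roughly like this (host and paths are placeholders, encryption options omitted):

    Code:
    duplicity --no-encryption /Vol01 sftp://backupuser@backuphost//backups/Vol01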

    It is probably time to end this thread. Please create another for backup questions, but include a link to this one as background information. That way, people who don't know NFS but have great ideas about backups will be more likely to respond with their expertise. Getting lots of points of view can only help you make the best possible decision.

  7. #87
    Join Date
    Nov 2008
    Location
    Metro Boston
    Beans
    15,486
    Distro
    Kubuntu 20.04 Focal Fossa

    Re: Time to build a server with old hardware.

    I posted my home-grown backup script that sucks down specific content from three remote sites and stores it on a 4 TB removable drive. It keeps a specified number of days of backups in dated directories ("/201213/") and archives one backup per month. Maybe it will give you some clues.

    https://ubuntuforums.org/showthread....1#post13994172

    (The remotes are on Linode, and a snapshot of each machine is taken there every night. The last two nights' data, and data from a week ago, are available. You can take manual snapshots as well.)
    Last edited by SeijiSensei; December 14th, 2020 at 06:28 PM.
    If you ask for help, do not abandon your request. Please have the courtesy to check for responses and thank the people who helped you.

    Blog · Linode System Administration Guides · Android Apps for Ubuntu Users

  8. #88
    Join Date
    Feb 2008
    Location
    Lincolnshire
    Beans
    210
    Distro
    Ubuntu 11.04 Natty Narwhal

    Re: Time to build a server with old hardware.

    Thanks to all. That's for tomorrow now. I'll end this and start again.
