LVM will let you create snapshots of active storage, which can then be backed up by any other backup tool.
But LVM needs to be set up at install time, or at least before any file systems are created. The data being backed up needs to be completely quiesced to guard against corrupt data being saved in the backup.
I doubt any solution that doesn't perform snapshots below the file-system level will actually work 100% of the time.
Alternatives to LVM are ZFS and BTRFS, though I wouldn't trust my data to BTRFS, and I don't think ZFS can be booted from. Still, for large data storage, ZFS is probably the best choice today.
I've used fsarchiver - it works great with LVM snapshots. Without snapshots, the system needs to be booted from alternate media to ensure the data stays static throughout the backup. The same applies to the other options mentioned above, IMHO. If you run any DBMS, you might or might not get lucky. There are non-snapshot methods, but those involve a dump/restore of the DBMS data before the backups are taken.
Data integrity is important.
IMHO.
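To make the snapshot-then-backup flow concrete, here's a minimal sketch. The volume group, logical volume, and destination names (vg0, root, /backup) are all hypothetical, and by default it only prints the commands so you can review them before running anything as root:

```shell
#!/bin/sh
# Sketch of the LVM-snapshot + fsarchiver flow described above.
# DRYRUN=1 (the default) only prints the commands; set DRYRUN=0
# and run as root to actually execute them.
DRYRUN=${DRYRUN:-1}

run() {
    if [ "$DRYRUN" = 1 ]; then
        echo "$@"          # dry run: show the command
    else
        "$@"               # real run: execute it (needs root)
    fi
}

snapshot_backup() {
    vg=$1; lv=$2; dest=$3
    snap="${lv}-snap"
    # 1. Freeze a point-in-time view of the live LV (2G of COW space).
    run lvcreate --size 2G --snapshot --name "$snap" "/dev/$vg/$lv"
    # 2. Back up the quiesced snapshot, not the live volume.
    run fsarchiver savefs "$dest/${lv}-$(date +%F).fsa" "/dev/$vg/$snap"
    # 3. Drop the snapshot so COW space isn't consumed afterwards.
    run lvremove --force "/dev/$vg/$snap"
}

snapshot_backup vg0 root /backup
```

Size the snapshot for however much the live volume will change during the backup; once it fills up, the snapshot becomes invalid.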
There are lots of long threads here about doing backups. I'm with Herman on not bothering to back up most of the OS. I back up system settings, system data, personal settings, personal data, and a list of manually installed packages. Because installing a fresh OS takes 15 minutes, storing all the programs isn't something I want. Putting the system and personal settings and data back takes another 15 minutes. Reinstalling the previously installed packages takes less than 15 minutes. Basically, I can have a system back and working in 45 minutes. Only huge datasets stored outside those locations might take more time, but the core 100GB will be back and available in 45 minutes or less.
Most normal desktop users would be happy with the aptik solution, but it won't work for servers. Servers will require scripting. There isn't any point-n-click backup solution for servers, for many reasons: GUIs can't be automated, servers don't have GUIs, and GUIs are inefficient to manage from 500-24K miles away, among others. You'll need to build your own backup script or it will never be right. Just sayin'.
Using that method reduces the needed backup storage hugely. Why back up the same 4-6GB of OS on every system? With 10, 20, 30, or more systems, that is a huge waste. But if having a monkey perform the restore at 3am is the most important thing, then you'll need everything, taken from a snapshot-based backup.
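A server script along those lines doesn't have to be fancy. This sketch just prints the plan it would run (the host name, directories, and /backup destination are hypothetical; rdiff-backup is the tool assumed, since that's what produces the report further down); drop the echo wrapper once the paths fit your setup:

```shell
#!/bin/sh
# Sketch of a nightly "settings + data + package list, not the OS"
# backup script. It only echoes the commands it would run, so the
# plan can be reviewed before putting the real thing into cron.

HOST=$(hostname -s 2>/dev/null || echo myhost)
DEST="/backup/$HOST"

backup_plan() {
    # Record the manually installed packages for a fast reinstall later.
    echo "apt-mark showmanual > $DEST/packages-manual.txt"
    # Versioned backups of settings and data only, not the OS.
    for dir in /etc /home /var/www; do
        echo "rdiff-backup $dir $DEST$dir"
    done
    # Expire increments older than 120 days.
    echo "rdiff-backup --remove-older-than 120D --force $DEST/etc"
}

backup_plan
```

On the restore side, the package list feeds straight back into apt, which is where the "less than 15 minutes to reinstall packages" comes from.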
As an example, I have an email gateway server that has 120 daily, versioned backups. It doesn't hold any data, and I don't back up any data on it. 120 days of versioned backups total less than 115MB (I just checked).
Code:
$ more inc-sizes-spam2-2018-06-17
Time Size Cumulative size
-----------------------------------------------------------------------------
Sun Jun 17 02:11:06 2018 114 MB 114 MB (current mirror)
Sat Jun 16 02:11:07 2018 941 bytes 114 MB
Fri Jun 15 02:11:07 2018 1.04 KB 114 MB
...
Mon Feb 19 02:11:04 2018 1.10 KB 114 MB
Sun Feb 18 02:11:06 2018 963 bytes 114 MB
Sat Feb 17 02:11:06 2018 1.21 KB 114 MB
That's a backup size report from the rdiff-backup summary. Having the OS included would add 4-6GB, plus all the patch changes weekly.
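If you want the same kind of report, rdiff-backup can list per-increment sizes for a repository. A dry-run sketch (the repository path and report file name here are hypothetical):

```shell
#!/bin/sh
# Print the command that generates an increment-size report like the
# one above, using rdiff-backup's classic CLI.
list_sizes() {
    echo "rdiff-backup --list-increment-sizes $1 > inc-sizes-$(date +%F)"
}

list_sizes /backup/spam2
```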
Anyways, that info is for your consideration.