
Thread: How to Install and Configure an Ubuntu Server 14.04.1 LTS

  1. #1
    Join Date
    Sep 2011
    Location
    Behind you!
    Beans
    1,690
    Distro
    Ubuntu 20.04 Focal Fossa

    How to Install and Configure an Ubuntu Server 14.04.1 LTS

    The most current version of the guide can be found here: How to Install and Configure an Ubuntu Server 14.04.1 LTS @ HammondsLegacy.com

    Greetings and salutations,

    I hope this thread will be helpful to those who follow in my footsteps, and I welcome any advice based on what I have done and documented.

    High-level overview

    This document will cover installation of a dedicated Ubuntu server. This will be the "base" installation of the server as a prerequisite for other documents that will build upon it (e.g. MediaWiki and MySQL). The server will be installed inside a virtual machine using vSphere running on ESXi servers. Notes will also be supplied for doing the same thing with Oracle's VirtualBox on a Windows 7 PC. Although there are some VMware-specific and VirtualBox-specific steps, they are very few, and the majority of this documentation will work for other virtualization platforms or even a direct installation onto a physical machine (i.e. a bare-metal install).

    This document will also cover some custom scripts to help automate tasks such as backing up, automatically growing the file system when free space is low, etc.

    Tools utilized in this process





    Helpful links

    The links below are sources of information that helped me configure this system, as well as some places that might be helpful to me later on as this process continues.




    Assumptions

    This documentation makes use of some very specific information that will most likely be different for each person / location. This variable data is noted in this section and highlighted in red throughout the document as a reminder that you should plug in your own value rather than actually using these "place-holder" values.

    Under no circumstances should you use the actual values listed below; they are place-holders for the real thing. This is a checklist of the information you need to have answered before you start the install process.

    Wherever you see RED in this document, substitute the value your company uses.



    • Ubuntu Server name: srv-ubuntu
    • Internet domain: mydomain.com
    • Ubuntu Server IP address: 192.168.107.2
    • Ubuntu Server IP subnet mask: 255.255.255.0
    • Ubuntu Server IP gateway: 192.168.107.1
    • Internal DNS Server 1: 192.168.107.212
    • Internal DNS Server 2: 192.168.107.213
    • External DNS Server 1: 8.8.8.4
    • External DNS Server 2: 8.8.8.5
    • Ubuntu Admin ID: administrator
    • Ubuntu Admin Password: myadminpass
    • Email Server (remote): 192.168.107.25
    • Windows Share ID: myshare
    • Windows Share Password: mysharepass


    It is also assumed that the reader knows how to use the VI editor. If not, you will need to beef up your skill set or use a different editor in place of it.
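If you like, the checklist can also be kept in one place as shell variables so later commands can reference them. This is an optional sketch, not part of the original procedure; every value below is one of the place-holders from the list above and must be substituted with your own.

```shell
# Optional sketch: the place-holder values from the checklist above as shell
# variables. None of these should be used as-is; substitute your own values.
SERVER_NAME="srv-ubuntu"
DOMAIN="mydomain.com"
SERVER_IP="192.168.107.2"
NETMASK="255.255.255.0"
GATEWAY="192.168.107.1"
DNS_SERVERS="192.168.107.212 192.168.107.213 8.8.8.4 8.8.8.5"
ADMIN_ID="administrator"
MAIL_SERVER="192.168.107.25"

echo "Target: ${SERVER_NAME}.${DOMAIN} at ${SERVER_IP}"
```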

  2. #2

    Re: How to Install and Configure an Ubuntu Server 14.04.1 LTS

    Analysis and Design

    Ubuntu Server Long-Term Support (LTS) is a great choice for companies because it is a solid operating system that happens to be free. If professional support is needed, it can be purchased for the LTS versions of the operating system.

    The biggest decision in configuring Ubuntu is how the hard drive space is sliced up (partitioned). This documentation will focus on partitioning the drives in such a way that allows for growth depending on what is needed for the specific application.

    The following design allows for dynamic growth and fine-tuning if need be. Being caught off guard with a scenario where space has filled up, with no immediate option other than deleting files, is never a good thing. The long-term life and growth of the system, as well as budgeting concerns, have to be taken into consideration.

    The central concern is isolating the root volume to mostly static data that will not grow much over time. The other folders will be pushed into their own volumes so that their dynamic growth does not affect the root partition. Filling up the root volume on a *nix system is a very bad thing and should be avoided at all costs. The file systems will also not take up 100% of their logical volumes. This allows the file systems (through automated scripts) to grow as needed and gives the administrators time to add more drives if necessary, or to shrink other volumes to free up space.

    The volumes will initially be sliced up as follows:


    • boot - This will remain static in size. It is also the only partition residing outside the Logical Volume Manager (LVM).
    • root volume - Operating system and everything else, which should remain fairly static.
    • swap volume - This will remain static in size. However, if the amount of RAM is adjusted, this might need to be adjusted as well.
    • home volume - This is where personal files will be stored, but it will likely see little use in most server configurations.
    • tmp volume - This location will be used for temporary storage. The size should be adjusted to match however it is being used.
    • usr volume - This will contain mostly static data and should not grow unexpectedly.
    • var volume - This is the app/database/log storage and will continue to grow over time.
    • srv volume - This will contain the files stored in the Samba share.
    • opt volume - This will contain specific software you add, but it may not be utilized at all depending on configuration.
    • bak volume - This will contain a local backup of the application, so space needs to be about double the size of your app data (typically double the /var size).
    • Offsite Storage - This will be handled elsewhere but will be mounted on this server.


    The volumes will be increased later as needed but will start off small.

    To get a good idea of the initial hard drive layout and to understand the process better, here is a graphical representation of the initial design for the server:



    These numbers will be used for the initial build of the system:

    boot = 200 MB
    root = 2 GB
    swap = 2 GB
    home = 0.2 GB
    tmp = 0.5 GB
    usr = 2.0 GB
    var = 2.0 GB
    srv = 0.2 GB
    opt = 0.2 GB
    bak = 0.5 GB

    NOTE: When the logical volumes and file systems are initially created, they consume the maximum amount of space allocated so that the file system size will initially equal the logical volume size. These partition sizes above are artificially small for that reason. These will be later modified so that the logical volume will be larger than the file system so that the file system has room to expand when needed in a safe and automated manner.
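The NOTE above is easier to see with numbers: the logical volume is the container, the file system is the contents, and the gap between the two is what the automated scripts will later use to grow the file system safely. The sizes below are illustrative only; on the running server the real figures come from lvs --units m and df --block-size=M.

```shell
# Illustration only: a file system deliberately smaller than its logical
# volume leaves headroom to grow without reconfiguring the volume.
lv_mb=3072   # e.g. a 3 GB logical volume
fs_mb=2048   # e.g. the 2 GB file system inside it
echo "Headroom available to the file system: $((lv_mb - fs_mb)) MB"
```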

    Important info:
    - The /tmp folder is strictly temporary. By default, each time the server reboots, this folder is deleted and re-created.
    - The /bak folder will retain the most recent backup and is considered the "local" copy of the backup.

    VMware Virtual Machine Settings

    Virtual Manager: VMware vSphere Client 5.5
    Virtual Host: VMware ESXi Server 5.5


    • Configuration: Custom
    • Name: srv-ubuntu
    • Datastore: DS3400-LUN0
    • Virtual Machine Version: 8
    • Guest Operating System: Linux, Version: Ubuntu Linux (64-bit)
    • Number of virtual processors: 1
    • Memory Size: 1024 MB
    • Number of NICs: 1
    • NIC 1: VM Network
    • Adapter: E1000, Connect at Power On: Checked
    • SCSI controller: LSI Logic Parallel
    • Select a Disk: Create a new virtual disk
    • Create a Disk: 10 GB, No thin provisioning, No cluster features, Store with the virtual machine
    • Advanced Options: Virtual Device Node = SCSI (0:0)
    • Remove Floppy Drive
    • Mount CD/DVD Drive to Ubuntu ISO (ubuntu-14.04.1-server-amd64.iso). Make sure CD/DVD is set to Connect at power on
    • Set boot options to Force BIOS Setup so you can set CDROM to boot before the Hard Disk


    VirtualBox Virtual Machine Settings

    Virtual Manager: Oracle VirtualBox 4.3.12
    Virtual Host: Windows 7 Ultimate with SP1 (64-bit)


    • Name: srv-ubuntu
    • Operating System: Linux
    • Version: Ubuntu (64 bit)
    • Memory: 1024 MB
    • Check - Start-up Disk
      - Create new hard disk
      - VMDK
      - Dynamically allocated
      - Size: 10 GB
    • Select srv-ubuntu and click Settings (CTRL+S)
      - System, Processor, Enable PAE/NX
      - Network, Attached to: Bridged Adapter, Advanced, Adapter Type: Intel PRO/1000 MT Server
      - Storage, IDE Controller, Choose a virtual CD/DVD disk file, ubuntu-14.04.1-server-amd64.iso


    Install PuTTY

    When working inside a virtual machine console, the response time for screen refreshes can be painfully slow when viewing man (manual) pages or navigating in VI (text editor). PuTTY over SSH is a far better way to work with your Ubuntu console because it handles screen draws much faster when scrolling and allows copying and pasting text between windows.

    For example, selecting and copying a command in this document and then right-clicking in the PuTTY window will paste the command and have it ready to execute. Any text/lines highlighted with the mouse will be automatically copied into clipboard memory.

    Download the portable edition and run it. It does not really "install" like a normal program; it simply extracts to a specified folder and runs from there, even if you put it on a USB stick and carry it to a new computer (it requires no install to run and thus leaves no footprint on your system).


    1. Start PuTTY
    2. Type the following and click the Save button:
      Host Name: SRV-Ubuntu (or the IP such as 192.168.107.2)
      Port: 22
      Connection type: SSH
      Saved Sessions: SRV-Ubuntu
    3. Now all you have to do is double-click on the session and it will connect to your server (when online).

  3. #3

    Re: How to Install and Configure an Ubuntu Server 14.04.1 LTS

    Install Ubuntu Server

    NOTE: During the setup process throughout this entire document, most commands will require "sudo" as a prefix. However, this document will be using "sudo su" to temporarily gain root privileges so that subsequent commands will work without the need for the "sudo" prefix.


    1. Power on the Virtual Machine (VM)
    2. Press {ENTER} to accept English
    3. Select Install Ubuntu Server {ENTER}
    4. Press {ENTER} to accept English
    5. Press {ENTER} to accept United States
    6. Select No to not detect keyboard layout
    7. Press {ENTER} to accept English (US)
    8. Press {ENTER} to accept English (US)
    9. Type srv-ubuntu {ENTER} (this is your hostname)
    10. Type Administrator, {ENTER} for the full name
    11. Press {ENTER} to accept the default of the lowercase name of administrator
    12. Type myadminpass, {ENTER}, myadminpass, {ENTER}
    13. Select No, {ENTER} to not encrypt your home directory
    14. Press {ENTER} to accept detected time zone (America/Chicago)
    15. Select Manual {ENTER}
    16. Select SCSI3 (0,0,0) (sda) - 10.7 GB VMware Virtual disk {ENTER}
    17. Select Yes to create new empty partition table, {ENTER}
    18. Select pri/log 10.7 GB FREE SPACE {ENTER}
    19. Select Create a new partition {ENTER}
    20. Type 200MB, {ENTER} (NOTE: This will be the /boot partition)
    21. Select Primary {ENTER}
    22. Select Beginning {ENTER}
    23. Select Use as: Ext4 journaling file system {ENTER}
    24. Select Ext2 file system {ENTER}
    25. Select Mount point: / {ENTER}
    26. Select /boot - static files of the boot loader {ENTER}
    27. Select Bootable flag: off {ENTER} (NOTE: This toggles it on)
    28. Select Done setting up the partition {ENTER}
    29. Select Configure the Logical Volume Manager {ENTER}
    30. Select Yes to write change to disks and configure LVM, {ENTER}
    31. Select Create volume group {ENTER}
    32. Type LVG {ENTER}
    33. Select /dev/sda free #1 (10537MB; FREE SPACE), {SPACEBAR}, {ENTER}
    34. Select Yes to write change to disks and configure LVM, {ENTER}
    35. Select Create logical volume {ENTER}
    36. Select LVG (10531MB) {ENTER}
    37. Type root {ENTER}
    38. Type 2G {ENTER}
    39. Select Create logical volume {ENTER}
    40. Select LVG (8535MB) {ENTER}
    41. Type swap {ENTER}
    42. Type 2G {ENTER} (NOTE: This is double the amount of RAM)
    43. Select Create logical volume {ENTER}
    44. Select LVG (6538MB) {ENTER}
    45. Type home {ENTER}
    46. Type 0.2G {ENTER}
    47. Select Create logical volume {ENTER}
    48. Select LVG (6341MB) {ENTER}
    49. Type tmp {ENTER}
    50. Type 0.5G {ENTER}
    51. Select Create logical volume {ENTER}
    52. Select LVG (5842MB) {ENTER}
    53. Type usr {ENTER}
    54. Type 2G {ENTER}
    55. Select Create logical volume {ENTER}
    56. Select LVG (3846MB) {ENTER}
    57. Type var {ENTER}
    58. Type 2G {ENTER}
    59. Select Create logical volume {ENTER}
    60. Select LVG (1849MB) {ENTER}
    61. Type srv {ENTER}
    62. Type 0.2G {ENTER}
    63. Select Create logical volume {ENTER}
    64. Select LVG (1652MB) {ENTER}
    65. Type opt {ENTER}
    66. Type 0.2G {ENTER}
    67. Select Create logical volume {ENTER}
    68. Select LVG (1455MB) {ENTER}
    69. Type bak {ENTER}
    70. Type 0.5G {ENTER} (we will have a small amount leftover in LVG)
    71. Select Finish {ENTER}
    72. Select #1 2.0 GB directly under LVM VG LVG, LV root, {ENTER}
    73. Select Use as: do not use {ENTER}
    74. Select Ext4 journaling file system {ENTER}
    75. Select Mount point: none {ENTER}
    76. Select / - the root file system {ENTER}
    77. Select Label: none {ENTER}
    78. Type root {ENTER}
    79. Select Done setting up the partition {ENTER}
    80. Select #1 2.0 GB directly under LVM VG LVG, LV swap, {ENTER}
    81. Select Use as: do not use {ENTER}
    82. Select swap area {ENTER}
    83. Select Done setting up the partition {ENTER}
    84. Select #1 197.1 MB directly under LVM VG LVG, LV home, {ENTER}
    85. Select Use as: do not use {ENTER}
    86. Select Ext4 journaling file system {ENTER}
    87. Select Mount point: none {ENTER}
    88. Select /home {ENTER}
    89. Select Label: none {ENTER}
    90. Type home {ENTER}
    91. Select Done setting up the partition {ENTER}
    92. Select #1 499.1 MB directly under LVM VG LVG, LV tmp, {ENTER}
    93. Select Use as: do not use {ENTER}
    94. Select Ext4 journaling file system {ENTER}
    95. Select Mount point: none {ENTER}
    96. Select /tmp {ENTER}
    97. Select Label: none {ENTER}
    98. Type tmp {ENTER}
    99. Confirm the Label now shows tmp
    100. Select Done setting up the partition {ENTER}
    101. Select #1 2.0 GB directly under LVM VG LVG, LV usr, {ENTER}
    102. Select Use as: do not use {ENTER}
    103. Select Ext4 journaling file system {ENTER}
    104. Select Mount point: none {ENTER}
    105. Select /usr {ENTER}
    106. Select Label: none {ENTER}
    107. Type usr {ENTER}
    108. Select Done setting up the partition {ENTER}
    109. Select #1 2.0 GB directly under LVM VG LVG, LV var, {ENTER}
    110. Select Use as: do not use {ENTER}
    111. Select Ext4 journaling file system {ENTER}
    112. Select Mount point: none {ENTER}
    113. Select /var {ENTER}
    114. Select Label: none {ENTER}
    115. Type var {ENTER}
    116. Select Done setting up the partition {ENTER}
    117. Select #1 197.1 MB directly under LVM VG LVG, LV srv, {ENTER}
    118. Select Use as: do not use {ENTER}
    119. Select Ext4 journaling file system {ENTER}
    120. Select Mount point: none {ENTER}
    121. Select /srv {ENTER}
    122. Select Label: none {ENTER}
    123. Type srv {ENTER}
    124. Select Done setting up the partition {ENTER}
    125. Select #1 197.1 MB directly under LVM VG LVG, LV opt, {ENTER}
    126. Select Use as: do not use {ENTER}
    127. Select Ext4 journaling file system {ENTER}
    128. Select Mount point: none {ENTER}
    129. Select /opt {ENTER}
    130. Select Label: none {ENTER}
    131. Type opt {ENTER}
    132. Select Done setting up the partition {ENTER}
    133. Select #1 499.1 MB directly under LVM VG LVG, LV bak, {ENTER}
    134. Select Use as: do not use {ENTER}
    135. Select Ext4 journaling file system {ENTER}
    136. Select Mount point: none {ENTER}
    137. Select Enter manually {ENTER}
    138. Type /bak {ENTER}
    139. Select Label: none {ENTER}
    140. Type bak {ENTER}
    141. Select Done setting up the partition {ENTER}
    142. Here is what the screen looks like at this point: Partitions
    143. Select Finish partitioning and write changes to disk {ENTER}
    144. Select Yes to write changes to disk, {ENTER}
    145. Press {ENTER} to accept a blank line for the HTTP proxy
    146. Select No automatic updates, {ENTER} (* We will schedule a script for this later *)
    147. Highlight only OpenSSH server and press {SPACEBAR} to enable, {ENTER} to continue. NOTE: This allows us to use PuTTY after installation to connect to the server.
    148. Select Yes, {ENTER} to install GRUB boot loader to the master boot record
    149. Installation Complete - from the VM menu, select VM --> Edit Settings and select CD/DVD Drive 1 and change to "Client Device" which will effectively remove the ISO. Now press {ENTER} to reboot.


    Initial Configurations


    1. At the login prompt, login with your administrator account (administrator / myadminpass)
    2. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    3. Type vi /etc/network/interfaces {ENTER} and change the following: (We need to change the network interface card (NIC) from using DHCP to a static IP)
      From:
      Code:
      iface eth0 inet dhcp
      To:
      Code:
      iface eth0 inet static
      address 192.168.107.2
      netmask 255.255.255.0
      gateway 192.168.107.1
      network 192.168.107.0
      broadcast 192.168.107.255
      dns-nameservers 192.168.107.212 192.168.107.213 8.8.8.4 8.8.8.5
      NOTE #1: You may need to manually remove the DHCP record (lease) associated to this Ubuntu server from your DHCP server so the correct IP can be found by other machines on the network. This can be avoided by temporarily configuring the VM Network Adapter connection to be "Host Only Network" instead of "VM Network" so the server is isolated during setup...at least until you reach the testing of the static IP below.

      NOTE #2: You might also need to manually add a HOST(A) record to your Windows DNS server (for srv-ubuntu.mydomain.com and srv-ubuntu.work.mydomain.com)
    4. Restart the network by typing ifdown -a and then ifup -a
    5. Sanity check! Type ifconfig and make sure the settings are correct. Then type ping www.google.com or similar and see if ping works.
    6. Shutdown and power off the server by typing shutdown -P now
    7. From this point forward, you can use PuTTY to access the server rather than the console itself for better performance, the ability to scroll, etc.
    8. In VM menu, select VM --> Snapshot --> Take Snapshot. Give it a name like STEP 1 and description of Ubuntu Server 14.04.1 LTS, clean install, Static IP: 192.168.107.2 and click OK
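Step 5's sanity check can also be scripted. The sketch below greps a copy of the static stanza for the required keywords; on the server you would point it at /etc/network/interfaces instead of the here-variable (the variable name cfg is a hypothetical convenience, not part of the original steps).

```shell
# Hedged sketch: confirm the static stanza contains the expected keywords.
# "cfg" holds a copy of the stanza; on the server, read the real file with:
#   cfg=$(cat /etc/network/interfaces)
cfg='iface eth0 inet static
address 192.168.107.2
netmask 255.255.255.0
gateway 192.168.107.1'

for key in address netmask gateway; do
    if echo "$cfg" | grep -q "^$key "; then
        echo "$key: present"
    else
        echo "$key: MISSING"
    fi
done
```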


    Operating System Patches


    1. Start the Ubuntu server and connect using PuTTY.
    2. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    3. Install the patches by typing the following commands:
      Code:
      aptitude update
      aptitude safe-upgrade
    4. Shutdown and power off the server by typing shutdown -P now
    5. In VM menu, select VM --> Snapshot --> Take Snapshot. Give it a name like STEP 2 and description of Ubuntu Server 14.04.1 LTS, Patches applied, Static IP: 192.168.107.2. The Snapshot Manager should now have a nice hierarchy of snapshots (STEP 1 --> STEP 2 --> You are here)

  4. #4

    Re: How to Install and Configure an Ubuntu Server 14.04.1 LTS

    Volume / Disk Management

    Earlier, it was mentioned that the partition design needed to have some breathing room in each volume so that the file system inside can grow as needed. When the volumes were created during setup, the file systems were automatically expanded to fill the entire volume. We will now correct this by adding more "drives" to the system and then extend each logical volume to gain some breathing space.

    Most logical volumes will be increased in size and then the file systems contained in them will be increased but not to the maximum amount.

    This design will allow growth when needed and ensure that there is time to add additional hard drives BEFORE they are needed, which keeps the administrators from being stuck between a rock and a hard place! Nobody wants to lose a job because somebody did not estimate growth correctly or the budget did not allow for large capacity when the system first rolled out.

    Here are the planned adjustments for each logical volume:

    root = 2 GB to 3 GB
    swap = 2 GB (no change)
    home = 0.2 GB to 1 GB
    tmp = 0.5 GB to 2 GB
    usr = 2.0 GB to 4 GB
    var = 2.0 GB to 3 GB
    srv = 0.2 GB to 2 GB
    opt = 0.2 GB to 2 GB
    bak = 0.5 GB to 4 GB

    Here are the planned adjustments for each file system:

    root = 2.0 GB (no change)
    swap = 2.0 GB (no change)
    home = 0.2 GB to 0.5 GB
    tmp = 0.5 GB to 1.0 GB
    usr = 2.0 GB to 3.0 GB
    var = 2.0 GB (no change)
    srv = 0.2 GB to 1.0 GB
    opt = 0.2 GB to 1.0 GB
    bak = 0.5 GB to 2.0 GB

    We started off with a 10 GB drive to hold these volumes but now need 23 GB. For this exercise, we will add two 12 GB drives to cover the additional storage needs. (NOTE: These sizes are arbitrary, chosen to demonstrate how to add additional hard drives to the system.)
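The arithmetic behind the purchase can be double-checked quickly; the loop below just sums the planned logical volume sizes from the list above.

```shell
# Sum the planned logical volume targets, in GB:
# root=3 swap=2 home=1 tmp=2 usr=4 var=3 srv=2 opt=2 bak=4
total=0
for size_gb in 3 2 1 2 4 3 2 2 4; do
    total=$((total + size_gb))
done
echo "Planned LV total: ${total} GB"
echo "Raw capacity after adding two 12 GB disks: $((10 + 12 + 12)) GB"
```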

    Here is a graphical representation of what needs to be accomplished:



    If we were to type df -h right now, we should see something like this:

    Code:
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/LVG-root  1.9G  429M  1.4G  24% /
    udev                  489M  4.0K  489M   1% /dev
    tmpfs                 200M  244K  199M   1% /run
    none                  5.0M     0  5.0M   0% /run/lock
    none                  498M     0  498M   0% /run/shm
    /dev/sda1             179M   47M  122M  28% /boot
    /dev/mapper/LVG-home  187M  9.5M  168M   6% /home
    /dev/mapper/LVG-tmp   473M   23M  427M   5% /tmp
    /dev/mapper/LVG-usr   1.9G  460M  1.4G  26% /usr
    /dev/mapper/LVG-var   1.9G  367M  1.5G  21% /var
    /dev/mapper/LVG-srv   187M  9.5M  168M   6% /srv
    /dev/mapper/LVG-opt   187M  9.5M  168M   6% /opt
    /dev/mapper/LVG-bak   473M   23M  427M   5% /bak
    Adding more space in VMware is easy. In this exercise, each drive will be added as a separate disk just as if we were to add a physical drive to a physical server.


    1. Shutdown and power off the server by typing shutdown -P now
    2. In the vSphere client, right-click the Virtual Machine and choose Edit Settings.
    3. On the hardware tab, click the Add button and select Hard Disk. Click Next, choose "Create a new virtual disk", click Next, set the size to 12 GB, click Next, Next, Finish.
    4. Add another 12 GB disk using the same steps above and click OK to close the settings and allow VMware to process the changes.


    Collect information about the newly added drives.


    1. Start the server and connect using PuTTY.
    2. At the login prompt, login with your administrator account (administrator / myadminpass) and then temporarily grant yourself super user privileges by typing sudo su
    3. Type pvdisplay which should show something similar to this:
      Code:
      --- Physical volume ---
        PV Name               /dev/sda5
        VG Name               LVG
        PV Size               9.81 GiB / not usable 3.00 MiB
        Allocatable           yes
        PE Size               4.00 MiB
        Total PE              2511
        Free PE               228
        Allocated PE          2283
        PV UUID               NkfC3i-ROqv-YuLZ-63VO-RTAU-l01p-suqi4O
      The important bits of info here are the PV Name and VG Name for our existing configuration.
    4. Type fdisk -l which should show something similar to this (however I abbreviated it to show just the important parts):
      Code:
      Disk /dev/sda: 10.7 GB, 10737418240 bytes
         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *        2048      391167      194560   83  Linux
      /dev/sda2          393214    20969471    10288129    5  Extended
      /dev/sda5          393216    20969471    10288128   8e  Linux LVM
       
      Disk /dev/sdb: 12.9 GB, 12884901888 bytes
      Disk /dev/sdb doesn't contain a valid partition table
      Disk /dev/sdc: 12.9 GB, 12884901888 bytes
      Disk /dev/sdc doesn't contain a valid partition table
      The important bits of info here are the device paths for the new drives which I highlighted in red.


    Prepare the first drive (/dev/sdb) to be used by the LVM

    Type the following:
    Code:
    fdisk /dev/sdb
    n (Create New Partition)
    p (Primary Partition)
    1 (Partition Number)
    {ENTER} (use default for first cylinder)
    {ENTER} (use default for last cylinder)
    t (Change partition type)
    8e (Set to Linux LVM)
    p (Preview how the drive will look)
    w (Write changes)
    Prepare the second drive (/dev/sdc) to be used by the LVM

    Do the exact same steps as above but start with fdisk /dev/sdc
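The keystrokes above can also be piped into fdisk non-interactively. This is a sketch rather than the guide's method, and fdisk prompts can vary between versions, so rehearse it against a throwaway image file (as below) before pointing it at a real drive.

```shell
# Rehearse the exact keystroke sequence from the list above on a disposable
# image file: n (new), p (primary), 1, two defaults, t (type), 8e (LVM), w.
export PATH="$PATH:/sbin:/usr/sbin"   # fdisk often lives in sbin
truncate -s 50M /tmp/practice-disk.img
printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /tmp/practice-disk.img || true
fdisk -l /tmp/practice-disk.img      # the partition should show type "Linux LVM"
# On the server you would target /dev/sdb (and then /dev/sdc) instead.
```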

    Create physical volumes using the new drives

    If we type fdisk -l, we now see /dev/sdb1 and /dev/sdc1 which are Linux LVM partitions.

    Type the following to create physical volumes:
    Code:
    pvcreate /dev/sdb1
    pvcreate /dev/sdc1
    Now add the physical volumes to the volume group (LVG) by typing the following:
    Code:
    vgextend LVG /dev/sdb1
    vgextend LVG /dev/sdc1
    Now that the space from both drives has been added to the volume group called LVG, we can allocate that space to grow the logical volumes.

    To get a list of volume paths to use in the next commands, type lvscan to show your current volumes and their sizes.

    Type the following to set the exact size of the volume by specifying the end-result size you want:

    Code:
    lvextend -L3G /dev/LVG/root
    lvextend -L1G /dev/LVG/home
    lvextend -L2G /dev/LVG/tmp
    lvextend -L4G /dev/LVG/usr
    lvextend -L3G /dev/LVG/var
    lvextend -L2G /dev/LVG/srv
    lvextend -L2G /dev/LVG/opt
    lvextend -L4G /dev/LVG/bak
    or you can grow each volume by the specified amount (the number after the plus sign):
    Code:
    lvextend -L+1G /dev/LVG/root
    lvextend -L+0.8G /dev/LVG/home
    lvextend -L+1.5G /dev/LVG/tmp
    lvextend -L+2G /dev/LVG/usr
    lvextend -L+1G /dev/LVG/var
    lvextend -L+1.8G /dev/LVG/srv
    lvextend -L+1.8G /dev/LVG/opt
    lvextend -L+3.5G /dev/LVG/bak
    To see the new sizes, type lvscan
    Code:
      ACTIVE            '/dev/LVG/root' [3.00 GiB] inherit
      ACTIVE            '/dev/LVG/swap' [1.86 GiB] inherit
      ACTIVE            '/dev/LVG/home' [1.00 GiB] inherit
      ACTIVE            '/dev/LVG/tmp' [2.00 GiB] inherit
      ACTIVE            '/dev/LVG/usr' [4.00 GiB] inherit
      ACTIVE            '/dev/LVG/var' [3.00 GiB] inherit
      ACTIVE            '/dev/LVG/srv' [2.00 GiB] inherit
      ACTIVE            '/dev/LVG/opt' [2.00 GiB] inherit
      ACTIVE            '/dev/LVG/bak' [4.00 GiB] inherit
    The last thing to do now is the actual growth of the file systems. We want to grow the existing file systems but only to a certain amount so we do not take up all the space in the volume. We want room for growth in the future so we have time to order and install new drives when needed.
    Code:
    resize2fs /dev/LVG/home 500M
    resize2fs /dev/LVG/tmp 1G
    resize2fs /dev/LVG/srv 1G
    resize2fs /dev/LVG/opt 1G
    resize2fs /dev/LVG/usr 3G
    resize2fs /dev/LVG/bak 2G
    If we need to increase space in /var at a later point, we can issue the following command without any downtime (we will automate this in a nifty script later):

    Code:
    resize2fs /dev/LVG/var 2560M
    We could continue to increase this particular file system all the way until we reach the limit of the volume which is 3 GB at the moment.
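The "nifty script" idea can be sketched now. The names (THRESHOLD, STEP_MB, needs_growth) and the numbers below are hypothetical, not from the guide's later script; the real df/resize2fs wiring is shown only as comments because it needs root and the LVM volumes.

```shell
# Hypothetical sketch of automated growth: grow only when usage crosses a
# threshold AND the next step still fits inside the logical volume.
THRESHOLD=80   # grow once a file system passes 80% used
STEP_MB=512    # grow in 512 MB increments

# needs_growth USED_PCT FS_MB LV_MB -> exit 0 when one grow step is safe
needs_growth() {
    used=$1; fs_mb=$2; lv_mb=$3
    [ "$used" -gt "$THRESHOLD" ] && [ "$((fs_mb + STEP_MB))" -le "$lv_mb" ]
}

# On the server (as root) this would drive resize2fs, e.g. for /var inside
# its 3 GB (3072 MB) volume:
#   used=$(df --output=pcent /var | tail -1 | tr -dc '0-9')
#   fs_mb=$(df --block-size=M --output=size /var | tail -1 | tr -dc '0-9')
#   needs_growth "$used" "$fs_mb" 3072 && resize2fs /dev/LVG/var "$((fs_mb + STEP_MB))M"
```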

    If we were to type df -h right now, we should see something like this:

    Code:
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/LVG-root  1.9G  429M  1.4G  24% /
    udev                  489M  4.0K  489M   1% /dev
    tmpfs                 200M  260K  199M   1% /run
    none                  5.0M     0  5.0M   0% /run/lock
    none                  498M     0  498M   0% /run/shm
    /dev/sda1             179M   47M  122M  28% /boot
    /dev/mapper/LVG-home  488M  9.7M  454M   3% /home
    /dev/mapper/LVG-tmp  1004M   23M  931M   3% /tmp
    /dev/mapper/LVG-usr   3.0G  481M  2.3G  17% /usr
    /dev/mapper/LVG-var   1.8G  267M  1.5G  16% /var
    /dev/mapper/LVG-srv   989M  2.8M  940M   1% /srv
    /dev/mapper/LVG-opt   996M  2.8M  940M   1% /opt
    /dev/mapper/LVG-bak   2.0G   3.0M  1.9G   1% /bak
    Remember, df -h will tell you the size of the file system and lvscan will tell you the size of the volume it lives in.

    TIP: If you want to see everything in a specific block size, such as everything showing up in megabytes, you can use df --block-size=M

    Shutdown and power off the server by typing shutdown -P now

    In VM menu, select VM --> Snapshot --> Take Snapshot. Give it a name like STEP 3 and description of Ubuntu Server 14.04.1 LTS, Storage space adjusted, Static IP: 192.168.107.2. The Snapshot Manager should now have a nice hierarchy of snapshots (STEP 1 --> STEP 2 --> STEP 3 --> You are here)

  5. #5

    Re: How to Install and Configure an Ubuntu Server 14.04.1 LTS

    Software Configurations


    1. Turn on the server and connect using PuTTY.
    2. At the login prompt, login with your administrator account (administrator / myadminpass)
    3. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    4. At the $ prompt, type aptitude -y install vim-nox to install an enhanced version of VI to use instead of the built-in editor.
    5. At the $ prompt, type aptitude -y install p7zip-full to install 7-zip archive utility.
    6. At the $ prompt, type aptitude -y install htop to install a CPU/RAM monitoring utility.
    7. At the $ prompt, type aptitude -y install fsarchiver to install the backup utility.
    8. At the $ prompt, type aptitude -y install sendemail to install a command-line email utility.
    9. Change the default shell from dash to bash. Type ls -l /bin/sh to see that it points to /bin/dash. Type dpkg-reconfigure dash and answer No. Type ls -l /bin/sh and it should now be pointing to /bin/bash
    10. It might be necessary to remove AppArmor to avoid problems by typing the following:
      Code:
      /etc/init.d/apparmor stop
      /etc/init.d/apparmor teardown
      update-rc.d -f apparmor remove
      aptitude remove apparmor apparmor-utils
    11. Type vi /etc/hosts and add your email server:
      Code:
      192.168.107.25    srv-mail
    12. Test the ability to send email by typing:
      Code:
      sendemail -f root@myserver -t MyTargetAddress@MyDomain.com -u "This is the Subject" -m "This is the body of the email" -s srv-mail:25
    13. Reboot the server by typing reboot
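The before/after state of step 9 can be confirmed at any time; a small sketch:

```shell
# Show which shell /bin/sh currently resolves to (dash before step 9,
# bash after). readlink -f follows the symlink to its final target.
TARGET="$(readlink -f /bin/sh)"
echo "/bin/sh -> ${TARGET}"
```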


    VMware Tools

    VMware Tools need to be installed if the VM is using a VMware host. This will ensure maximum performance in a virtual environment.

    Starting with Ubuntu 14.04, we now install drivers from the repository rather than using the "Guest" VMware Tools menu to attach the CD-ROM and install manually.


    1. Connect to the server using PuTTY
    2. At the login prompt, login with your administrator account (administrator / myadminpass)
    3. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    4. Type the following:
      Code:
      aptitude install open-vm-tools
      reboot


    VMware Tools Upgrade

    ?? Need to research ??

    VirtualBox Guest Additions - Installation

    The Guest Additions need to be installed if the VM is using a VirtualBox host. This will ensure maximum performance in a virtual environment.


    1. Connect to the server using PuTTY.
    2. At the login prompt, login with your administrator account (administrator / myadminpass)
    3. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    4. You need to perform the following commands to fulfill the prerequisites:
      Code:
      aptitude install dkms
      reboot
    5. Connect to the server using PuTTY.
    6. At the login prompt, login with your administrator account (administrator / myadminpass)
    7. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    8. From the VirtualBox menu, click Devices, Install Guest Additions
    9. At the console, type the following:
      Code:
      mkdir -p /mnt/cdrom
      mount /dev/cdrom /mnt/cdrom
      /mnt/cdrom/VBoxLinuxAdditions.run
    10. NOTE: The X Window System drivers will fail to load because this is a headless server with no GUI (which is OK)
    11. To see if the services are running, type /etc/init.d/vboxadd-service status or service vboxadd-service status


    VirtualBox Guest Additions - Upgrading

    If VirtualBox is updated on the host machine, each VM also needs its Guest Additions upgraded.

    Mount the CD-ROM and run the installer just as in the installation above, then reboot once the upgrade finishes.


    1. Connect to the server using PuTTY.
    2. At the login prompt, login with your administrator account (administrator / myadminpass)
    3. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    4. From the VirtualBox menu, click Devices, Install Guest Additions
    5. At the console, type the following:
      Code:
      mount /dev/cdrom /mnt/cdrom
      /mnt/cdrom/VBoxLinuxAdditions.run
      reboot


    VirtualBox Guest Additions - Uninstallation

    If a VM will be migrated from VirtualBox to something like VMware, the Guest Additions on the VM will need to be removed.


    1. Connect to the server using PuTTY.
    2. At the login prompt, login with your administrator account (administrator / myadminpass)
    3. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    4. Type the following:
      Code:
      cd /opt/VBox*
      ./uninstall.sh


    Configure Ubuntu for File Sharing

    This file sharing section is optional but can be handy if you need to swap files between the Linux server and a Windows machine.

    This documentation will utilize this share for passing pre-configured files (configs, scripts, etc.) to make it faster/easier during installation.


    1. Start the Ubuntu server and connect using PuTTY.
    2. At the login prompt, login with your administrator account (administrator / myadminpass)
    3. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    4. Install Samba by typing aptitude -y install cifs-utils samba (NOTE: To share a folder with Windows, you only need the samba package; to connect to a Windows share, you also need cifs-utils, which replaced the old smbfs package in recent Ubuntu releases)
    5. Type the following commands:
      Code:
      cp /etc/samba/smb.conf /etc/samba/smb.conf.bak
      mkdir -p /srv/samba/share
      chown nobody:nogroup /srv/samba/share/
      chmod 0777 /srv/samba/share
      
    6. Edit the configuration file by typing vi /etc/samba/smb.conf
    7. Change:
      Code:
      workgroup = WORKGROUP
      to:
      Code:
      workgroup = work
      (replace work with your own workgroup / domain alias)
    8. Add the following section to the end of the file:
      Code:
      [share]
      comment = Ubuntu File Server Share
      path = /srv/samba/share
      browsable = yes
      guest ok = yes
      read only = no
      create mask = 0755
      
    9. Save and exit the file.
    10. Restart the samba services to utilize the new configuration by typing:
      Code:
      restart smbd
      restart nmbd
      
    11. You should now be able to click Start --> Run, type \\srv-ubuntu or \\192.168.107.2 {ENTER}, and see an explorer window with a Share folder. Drag-n-drop a file into the Share folder. If it worked, no error message will appear and you should be able to view the file from the server by typing ls -l /srv/samba/share/
    12. Shutdown and power off the server by typing shutdown -P now {ENTER}
    13. In VM menu, select VM --> Snapshot --> Take Snapshot. Give it a name like STEP 4 and description of Ubuntu Server 14.04.1 LTS, File share configured, Static IP: 192.168.107.2. The Snapshot Manager should now have a nice hierarchy of snapshots (STEP 1 --> STEP 2 --> STEP 3 --> STEP 4 --> You are here)
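As an optional sanity check after editing smb.conf, the configuration can be syntax-checked before (or after) restarting the daemons. A hedged sketch, assuming testparm is available from the samba package installed above:

```shell
# Parse smb.conf and report syntax errors without restarting anything.
# testparm ships with the samba package; fall back gracefully if absent.
if command -v testparm >/dev/null 2>&1; then
  testparm -s /etc/samba/smb.conf
else
  echo "testparm not available; skipping syntax check"
fi
```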

  6. #6

    Re: How to Install and Configure an Ubuntu Server 14.04.1 LTS

    Scripting

    Many of the solutions beyond this point involve scripts (programming snippets / automated commands).

    In particular, they are Bash Scripts. I chose this due to its popularity and the fact it comes with Ubuntu. I try to make use of what comes with the system without requiring additional software / services unless they really add to the bottom line such as decreasing the time it takes for a process to run or to conserve storage and bandwidth usage.

    When setting up a server and testing things out, there is typically very little concern for procedures / process since much of the activity is exploration and experimentation as well as not having an impact on production. However, once a server goes into production, processes and procedures need to be in place to ensure the availability of the services being provided.

    In regards to these scripts, they will be treated like any other program and will require being tested, documented and go through a promotion process.

    The ideal situation for a single-server setup would involve 3 servers: a test / development server, a quality assurance staging server and the production server itself. If 3 servers cannot be utilized, it can still work well with 2. Testing scripts / programs / restores on the production server is not advisable and often impractical...how can you periodically test your restore process / data if you only have a production server?

    The QA staging server should resemble the production server as closely as possible. It should be set up so that production backups are restored to it, which both validates the backup / restore process and keeps the server a close representation of production, mitigating the risk involved when testing new or modified programs and upgrades.

    The test / development server can serve as the QA server if absolutely necessary.

    The directory structure and how scripts can import other scripts will be configured to facilitate this process.

    Example:

    Directory path for scripts to import common variables, functions and server settings: /var/scripts/common/

    Directory path for production scripts: /var/scripts/prod/

    Directory path for QA staging area scripts: /var/scripts/qa/

    Directory path for test / development scripts: /var/scripts/test/

    Directory path for data for use by scripts: /var/scripts/data/

    With production and test servers on physically different machines, the "common" scripts folder can be custom-tailored for each environment, allowing minimal changes to a script when it runs on the test, QA or production server. This is similar to "normalizing" a database. If you have a variable, path or function duplicated in multiple scripts, consider pulling it out and placing it in the common folder. If you ever need to change who receives the email reports, you only need to update a single file and all programs will use the new reference from that point on.
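The "single place to change" idea can be seen in miniature below; the file and variable names here are hypothetical stand-ins for /var/scripts/common/standard.conf:

```shell
# Demo: any script can share one definition by sourcing a common file,
# so changing the file changes the behavior of every script at once.
COMMON="$(mktemp)"
echo 'REPORTEMAIL="reports@example.com"' > "${COMMON}"

## Any script that needs the address just sources the common file:
. "${COMMON}"
echo "Report email is ${REPORTEMAIL}"
rm -f "${COMMON}"
```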

    Most of my scripts will import a file called "standard.conf" from the common script folder.

    /var/scripts/common/standard.conf (contents of the file on the production server)
    Code:
    ## Global Variables ##
    COMPANY="abc"
    TEMPDIR="/tmp"
    LOGDIR="/var/log"
    SHAREDIR="/srv/samba/share"
    MYDOMAIN="mydomain.com"
    ADMINEMAIL="admin@${MYDOMAIN}"
    REPORTEMAIL="lhammonds@${MYDOMAIN}"
    BACKUPDIR="/bak"
    OFFSITEDIR="/mnt/backup"
    OFFSITETESTFILE="${OFFSITEDIR}/online.txt"
    ARCHIVEMETHOD="tar.7z"    ## Choices are tar.7z or tgz
    HOSTNAME="$(hostname -s)"
    SCRIPTNAME="$0"
    SCRIPTDIR="/var/scripts"
    MAILFILE="${TEMPDIR}/mailfile.$$"
     
    ## Global Functions ##
     
    function f_sendmail()
    {
      ## Purpose: Send administrative email message.
      ## Parameter #1 = Subject
      ## Parameter #2 = Body
      sendemail -f "${ADMINEMAIL}" -t "${REPORTEMAIL}" -u "${1}" -m "${2}\n\nServer: ${HOSTNAME}\nProgram: ${SCRIPTNAME}\nLog: ${LOGFILE}" -s srv-mail:25 1>/dev/null 2>&1
    }
     
    function f_sendusermail()
    {
      ## Purpose: Send end-user email message.
      ## Parameter #1 = To
      ## Parameter #2 = Subject
      ## Parameter #3 = Body
      sendemail -f "${ADMINEMAIL}" -t "${1}" -u "${2}" -m "${3}" -s srv-mail:25 1>/dev/null 2>&1
    }
     
    function f_mount()
    {
      ## Mount the pre-configured Windows share folder.
      ## NOTE: The Windows share should have a file called "online.txt"
      mount -t cifs //srv-backup/myshare ${OFFSITEDIR} --options nouser,rw,nofail,noatime,noexec,credentials=/etc/cifspw
    }
     
    function f_umount()
    {
      ## Dismount the Windows share folder.
      ## NOTE: The unmounted folder should have a file called "offline.txt"
      umount ${OFFSITEDIR}
    }

    /var/scripts/common/standard.conf (contents of the file on the test server)
    Code:
    ## Global Variables ##
    COMPANY="abc"
    TEMPDIR="/tmp"
    LOGDIR="/var/log"
    SHAREDIR="/srv/samba/share"
    MYDOMAIN="mytestdomain.com"
    ADMINEMAIL="test1@${MYDOMAIN}"
    REPORTEMAIL="test2@${MYDOMAIN}"
    BACKUPDIR="/bak"
    OFFSITEDIR="/mnt/fakedir"
    OFFSITETESTFILE="${OFFSITEDIR}/online.txt"
    ARCHIVEMETHOD="tar.7z"    ## Choices are tar.7z or tgz
    HOSTNAME="$(hostname -s)"
    SCRIPTNAME="$0"
    SCRIPTDIR="/var/scripts"
    MAILFILE="${TEMPDIR}/mailfile.$$"
     
    ## Global Functions ##
     
    function f_sendmail()
    {
      ## Purpose: Send administrative email message.
      ## Parameter #1 = Subject
      ## Parameter #2 = Body
      sendemail -f "${ADMINEMAIL}" -t "${REPORTEMAIL}" -u "${1}" -m "${2}\n\nServer: ${HOSTNAME}\nProgram: ${SCRIPTNAME}\nLog: ${LOGFILE}" -s srv-mail:25 1>/dev/null 2>&1
    }
     
    function f_sendusermail()
    {
      ## Purpose: Send end-user email message.
      ## Parameter #1 = To
      ## Parameter #2 = Subject
      ## Parameter #3 = Body
      sendemail -f "${ADMINEMAIL}" -t "${1}" -u "${2}" -m "${3}" -s srv-mail:25 1>/dev/null 2>&1
    }
     
    function f_mount()
    {
      ## Mount the pre-configured Windows share folder.
      ## NOTE: The Windows share should have a file called "online.txt"
      mount -t cifs //mypc/share ${OFFSITEDIR} --options nouser,rw,nofail,noexec,credentials=/etc/cifspw
    }
     
    function f_umount()
    {
      ## Dismount the Windows share folder.
      ## NOTE: The unmounted folder should have a file called "offline.txt"
      umount ${OFFSITEDIR}
    }

    When receiving administrative email notifications, the server name, script name and path will be included at the bottom of the email every time. It will be readily apparent if the email was generated from the test, qa or production server simply because of the location (even if test, qa and production are all on the same server).
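The identification footer f_sendmail appends can be previewed without a mail server; a minimal sketch using the same variables (the log path below is a hypothetical placeholder):

```shell
# Build the footer that f_sendmail adds to every message body.
HOSTNAME="$(hostname -s)"
SCRIPTNAME="$0"
LOGFILE="/var/log/abc-example.log"   # hypothetical placeholder
printf 'Server: %s\nProgram: %s\nLog: %s\n' "${HOSTNAME}" "${SCRIPTNAME}" "${LOGFILE}"
```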

    Here are the scripts to help automate the creation of this structure on the various servers (run all of them if everything is on the same box):

    setup-script-prod.sh
    Code:
    #!/bin/bash
    if [ ! -d /var/scripts/prod ]; then
      mkdir -p /var/scripts/prod
    fi
    if [ ! -d /var/scripts/common ]; then
      mkdir -p /var/scripts/common
    fi
    if [ ! -d /var/scripts/data ]; then
      mkdir -p /var/scripts/data
    fi
    chown root:root -R /var/scripts
    chmod 0755 -R /var/scripts
    setup-script-qa.sh
    Code:
    #!/bin/bash
    if [ ! -d /var/scripts/qa ]; then
      mkdir -p /var/scripts/qa
    fi
    if [ ! -d /var/scripts/common ]; then
      mkdir -p /var/scripts/common
    fi
    if [ ! -d /var/scripts/data ]; then
      mkdir -p /var/scripts/data
    fi
    chown root:root -R /var/scripts
    chmod 0777 -R /var/scripts
    setup-script-test.sh
    Code:
    #!/bin/bash
    if [ ! -d /var/scripts/test ]; then
      mkdir -p /var/scripts/test
    fi
    if [ ! -d /var/scripts/common ]; then
      mkdir -p /var/scripts/common
    fi
    if [ ! -d /var/scripts/data ]; then
      mkdir -p /var/scripts/data
    fi
    chown root:root -R /var/scripts
    chmod 0777 -R /var/scripts
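Since the three scripts differ only in the environment directory and permission mode, they could be collapsed into one parameterized function; a sketch (the function name is an assumption), demonstrated against a throw-away directory instead of /var/scripts:

```shell
#!/bin/bash
# Consolidated sketch of the three setup scripts: the environment name
# and permission mode become parameters. mkdir -p already skips existing
# directories, so the individual if-tests become unnecessary.
setup_scripts() {
  local BASE="$1" ENV="$2" MODE="$3"
  mkdir -p "${BASE}/${ENV}" "${BASE}/common" "${BASE}/data"
  chmod -R "${MODE}" "${BASE}"
}

## Demo against a temporary directory (the real call would use
## /var/scripts and also chown root:root as the scripts above do):
DEMO="$(mktemp -d)"
setup_scripts "${DEMO}" prod 0755
ls "${DEMO}"
rm -rf "${DEMO}"
```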

  7. #7

    Re: How to Install and Configure an Ubuntu Server 14.04.1 LTS

    Check Storage Space

    In favor of managing by exception, this script can be scheduled to run daily to check whether a file system is getting close to filling up; if so, it will automatically expand the file system a little and send you an email notice. Everything is done at the megabyte level. If you do not want the script to perform the increase, simply add a pound sign in front of the resize2fs command on line 62 to comment it out. You might also want to modify the log and email messages so they tell YOU how to perform the resize rather than implying the script already performed it.

    Here are the lines added to the root crontab schedule which will check each file system.

    The script checks the specified file system to see if the amount of free space is less than the threshold (e.g. 100 MB). If it is, the script attempts to add the specified amount (e.g. 50 MB).
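The decision logic reduces to a simple comparison; a sketch with the sample values hard-coded:

```shell
# Threshold check in isolation (values in MB, taken from the examples).
FSSize=1004        # current file system size
FSAvailable=93     # free space remaining
FSThreshold=100    # act when free space drops below this
FSIncreaseBy=50    # how much to grow the file system
if [ "${FSAvailable}" -lt "${FSThreshold}" ]; then
  NewFSSize=$((FSSize + FSIncreaseBy))
  echo "Expand: resize2fs would be called with ${NewFSSize}M"
else
  echo "No action required."
fi
```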

    crontab
    Code:
    0 1 * * * /var/scripts/prod/check-storage.sh root 500 100 > /dev/null 2>&1
    15 1 * * * /var/scripts/prod/check-storage.sh home 100 50 > /dev/null 2>&1
    30 1 * * * /var/scripts/prod/check-storage.sh tmp 100 50 > /dev/null 2>&1
    45 1 * * * /var/scripts/prod/check-storage.sh usr 100 50 > /dev/null 2>&1
    0 2 * * * /var/scripts/prod/check-storage.sh var 100 50 > /dev/null 2>&1
    15 2 * * * /var/scripts/prod/check-storage.sh srv 100 50 > /dev/null 2>&1
    30 2 * * * /var/scripts/prod/check-storage.sh opt 100 50 > /dev/null 2>&1
    45 2 * * * /var/scripts/prod/check-storage.sh bak 100 50 > /dev/null 2>&1
    /var/scripts/prod/check-storage.sh
    Code:
    #!/bin/bash
    #############################################
    ## Name          : check-storage.sh
    ## Version       : 1.1
    ## Date          : 2014-04-19
    ## Author        : LHammonds
    ## Purpose       : Check available space for a file system and expand if necessary.
    ## Compatibility : Verified on Ubuntu Server 12.04-14.04 LTS
    ## Requirements  : None
    ## Run Frequency : Recommend once per day for each FS to monitor.
    ## Parameters    :
    ##    1 = (Required) File System name (e.g. var)
    ##    2 = (Required) File System Threshold in MB (e.g. 100)
    ##    3 = (Required) Amount to increase File System in MB (e.g. 50)
    ## Exit Codes    :
    ##    0 = Success (either nothing was done or FS expanded without error)
    ##    1 = ERROR: Missing or incorrect parameter(s)
    ##    2 = ERROR: Invalid parameter value(s)
    ##    4 = ERROR: Lock file detected
    ##    8 = ERROR: Resize2fs error
    ##   16 = SEVERE: No room to expand
    ##   32 = ERROR: Script not run by root user
    ################ CHANGE LOG #################
    ## DATE       WHO WHAT WAS CHANGED
    ## ---------- --- ----------------------------
    ## 2012-05-11 LTH Created script.
    ## 2014-04-19 LTH Added company prefix to log files.
    #############################################
     
    ## Import standard variables and functions. ##
    source /var/scripts/common/standard.conf
     
    ## Define local variables.
    LOGFILE="${LOGDIR}/${COMPANY}-check-storage.log"
    LOCKFILE="${TEMPDIR}/${COMPANY}-check-storage.lock"
    ErrorFlag=0
    ReturnCode=0
     
    #######################################
    ##            FUNCTIONS              ##
    #######################################
     
    function f_cleanup()
    {
      if [ -f ${LOCKFILE} ];then
        ## Remove lock file so other check space jobs can run.
        rm ${LOCKFILE} 1>/dev/null 2>&1
      fi
      exit ${ErrorFlag}
    }
     
    function f_showhelp()
    {
      echo -e "\nUsage : ${SCRIPTNAME} FileSystemName ThresholdSizeInMB AmountToIncreaseByInMB\n"
      echo -e "\nExample: ${SCRIPTNAME} var 50 50\n"
    }
     
    function f_auto-increment()
    {
      let RoomInLV=${LVSize}-${FSSize}
      if [[ ${RoomInLV} -gt ${FSIncreaseBy} ]]; then
        ## There is room in the LV to increase space to the FS.
        resize2fs ${FSVol} ${NewFSSize}M
        ReturnCode=$?
        echo "`date +%Y-%m-%d_%H:%M:%S` --- resize2fs ${FSVol} ${NewFSSize}M, ReturnCode=${ReturnCode}" | tee -a ${LOGFILE}
        if [[ ${ReturnCode} -ne 0 ]]; then
          ## There was an error in resize2fs.
          return ${ReturnCode}
        fi
      else
        ## There is not enough room in the LV to increase space in the FS.
        return 50
      fi
      return 0
    }
     
    #######################################
    ##           MAIN PROGRAM            ##
    #######################################
     
    if [ -f ${LOCKFILE} ]; then
      # Lock file detected.  Abort script.
      echo "Check space script aborted"
      echo "This script tried to run but detected the lock file: ${LOCKFILE}"
      echo "Please check to make sure the file does not remain when check space is not actually running."
      f_sendmail "ERROR: check storage script aborted" "This script tried to run but detected the lock file: ${LOCKFILE}\n\nPlease check to make sure the file does not remain when check space is not actually running.\n\nIf you find that the script is not running/hung, you can remove it by typing 'rm ${LOCKFILE}'"
      ErrorFlag=4
      f_cleanup
    else
      echo "`date +%Y-%m-%d_%H:%M:%S` ${SCRIPTNAME}" > ${LOCKFILE}
    fi
     
    ## Requirement Check: Script must run as root user.
    if [ "$(id -u)" != "0" ]; then
      ## FATAL ERROR DETECTED: Document problem and terminate script.
      echo "ERROR: Root user required to run this script."
      echo ""
      ErrorFlag=32
      f_cleanup
    fi
     
    ## Check existence of required command-line parameters.
    case "$1" in
      "")
        f_showhelp
        ErrorFlag=1
        f_cleanup
        ;;
      --help|-h|-?)
        f_showhelp
        ErrorFlag=1
        f_cleanup
        ;;
      *)
        FSName=$1
        ;;
    esac
    case "$2" in
      "")
        f_showhelp
        ErrorFlag=1
        f_cleanup
        ;;
      --help|-h|-?)
        f_showhelp
        ErrorFlag=1
        f_cleanup
        ;;
      *)
        FSThreshold=$2
        ;;
    esac
    case "$3" in
      "")
        f_showhelp
        ErrorFlag=1
        f_cleanup
        ;;
      --help|-h|-?)
        f_showhelp
        ErrorFlag=1
        f_cleanup
        ;;
      *)
        FSIncreaseBy=$3
        ;;
    esac
     
    ## Check validity of File System name.
    case "${FSName}" in
      "root")
        FSVol="/dev/LVG/root"
        FSMap="/dev/mapper/LVG-root"
        ;;
      "home")
        FSVol="/dev/LVG/home"
        FSMap="/dev/mapper/LVG-home"
        ;;
      "tmp")
        FSVol="/dev/LVG/tmp"
        FSMap="/dev/mapper/LVG-tmp"
        ;;
      "usr")
        FSVol="/dev/LVG/usr"
        FSMap="/dev/mapper/LVG-usr"
        ;;
      "var")
        FSVol="/dev/LVG/var"
        FSMap="/dev/mapper/LVG-var"
        ;;
      "srv")
        FSVol="/dev/LVG/srv"
        FSMap="/dev/mapper/LVG-srv"
        ;;
      "opt")
        FSVol="/dev/LVG/opt"
        FSMap="/dev/mapper/LVG-opt"
        ;;
      "bak")
        FSVol="/dev/LVG/bak"
        FSMap="/dev/mapper/LVG-bak"
        ;;
      *)
        echo "ERROR: ${FSName} does not match a known file system defined in this script."
        f_showhelp
        ErrorFlag=2
        f_cleanup
        ;;
    esac
     
    ## Check validity of threshold value.
    test ${FSThreshold} -eq 0 1>/dev/null 2>&1
    if [[ $? -eq 2 ]]; then
      ## Threshold parameter is not an integer.
      echo "ERROR: ${FSThreshold} is not an integer."
      f_showhelp
      ErrorFlag=2
      f_cleanup
    fi
     
    ## Check validity of increment value.
    test ${FSIncreaseBy} -eq 0 1>/dev/null 2>&1
    if [[ $? -eq 2 ]]; then
      ## FSIncreaseBy parameter is not an integer.
      echo "ERROR: ${FSIncreaseBy} is not an integer."
      f_showhelp
      ErrorFlag=2
      f_cleanup
    fi
     
    ## Get available space for the file system.
    FSAvailable="`df --block-size=m ${FSMap} | awk '{ print $4 }' | tail -n 1 | sed 's/M//'`"
     
    ## Get the current size of the File System.
    FSSize="`df --block-size=m ${FSMap} | awk '{ print $2 }' | tail -n 1 | sed 's/M//'`"
     
    ## Get the current size of the Logical Volume for the File System
    LVSize="`lvs --noheadings --nosuffix --units=m ${FSMap} | awk '{ print $4}' | sed 's/[.].*//'`"
     
    ## Calculate the new size of the FS in case we need it.
    let NewFSSize=${FSSize}+${FSIncreaseBy}
     
    if [[ ${FSAvailable} -lt ${FSThreshold} ]]; then
      echo "`date +%Y-%m-%d_%H:%M:%S` - Starting expansion of ${FSVol}" | tee -a ${LOGFILE}
      echo "`date +%Y-%m-%d_%H:%M:%S` --- LVSize=${LVSize}MB, FSSize=${FSSize}MB, FSAvail=${FSAvailable}MB, FSThreshold=${FSThreshold}MB, FSIncreaseBy=${FSIncreaseBy}MB" | tee -a ${LOGFILE}
      ## Run the auto-expansion function.
      f_auto-increment
      ReturnCode=$?
      case ${ReturnCode} in
      0)
        f_sendmail "NOTICE: File System Expanded" "${FSVol} was expanded because it was nearing max capacity.  Please review disk space usage and plan appropriately. LVSize=${LVSize}MB, FSSize=${FSSize}MB, FSAvailable=${FSAvailable}MB, FSThreshold=${FSThreshold}MB, FSIncreaseBy=${FSIncreaseBy}MB"
        ;;
      50)
        echo "`date +%Y-%m-%d_%H:%M:%S` - SEVERE: No room to expand ${FSVol}" | tee -a ${LOGFILE}
        ErrorFlag=16
        f_sendmail "SEVERE: No room to expand ${FSVol}" "There is not enough room in the Logical Volume to expand the ${FSVol} File System.  Immediate action is required.  Make sure there is free space in the Volume Group 'LVG' and then expand the Logical Volume...then expand the File System.\n\nLVSize=${LVSize}MB, FSSize=${FSSize}MB, FSAvailable=${FSAvailable}MB, FSThreshold=${FSThreshold}MB, FSIncreaseBy=${FSIncreaseBy}MB.\n\nType 'vgs' to see if there is any free space in the Volume Group which can be given to the Logical Volume.\n\nType 'lvs' to see the current sizes of the LVs.\n\nType 'lvdisplay' to see a list of Logical Volumes so you can get the LV Name which is used in the lvextend and resize2fs commands.\n\nType 'lvextend -L+50M /dev/LVG/var' if you want to extend the var Logical Volume by 50 megabytes (assuming there is 50MB available in the Volume Group).\n\nType 'df --block-size=m' to see a list of file systems and their associated size and available space.\n\nType 'resize2fs /dev/LVG/var ${NewFSSize}M' to set the size of var to ${NewFSSize} megabytes. Make sure you set the size to the desired end-result which should be LARGER than the current FS size so you do not lose data."
        ;;
      *)
        echo "`date +%Y-%m-%d_%H:%M:%S` - ERROR: Expansion failure for ${FSVol}" | tee -a ${LOGFILE}
        ErrorFlag=8
        f_sendmail "ERROR: File System Expansion Failed" "${FSVol} Expansion failed with return code of ${ReturnCode}.  LVSize=${LVSize}MB, FSSize=${FSSize}MB, FSAvailable=${FSAvailable}MB, FSThreshold=${FSThreshold}MB, FSIncreaseBy=${FSIncreaseBy}MB"
        ;;
      esac
      echo "`date +%Y-%m-%d_%H:%M:%S` - Finished expansion of ${FSVol}" | tee -a ${LOGFILE}
    else
      echo "`date +%Y-%m-%d_%H:%M:%S` - ${FSVol} ${FSAvailable}M>${FSThreshold}M No action required." | tee -a ${LOGFILE}
    fi
     
    ## Perform cleanup routine.
    f_cleanup
    Here is the typical output when it does not have to increase the FS:

    /var/log/check-storage.log
    Code:
    2012-05-01_01:00:00 - /dev/LVG/root 1377M>500M No action required.
    2012-05-01_01:15:00 - /dev/LVG/home 454M>100M No action required.
    2012-05-01_01:30:00 - /dev/LVG/tmp 776M>100M No action required.
    2012-05-01_01:45:00 - /dev/LVG/usr 1126M>100M No action required.
    2012-05-01_02:00:00 - /dev/LVG/var 1417M>100M No action required.
    2012-05-01_02:15:00 - /dev/LVG/srv 935M>100M No action required.
    2012-05-01_02:30:00 - /dev/LVG/opt 935M>100M No action required.
    2012-05-01_02:45:00 - /dev/LVG/bak 1871M>100M No action required.
    2012-05-02_01:00:00 - /dev/LVG/root 1377M>500M No action required.
    2012-05-02_01:15:00 - /dev/LVG/home 454M>100M No action required.
    2012-05-02_01:30:00 - /dev/LVG/tmp 776M>100M No action required.
    2012-05-02_01:45:00 - /dev/LVG/usr 1126M>100M No action required.
    2012-05-02_02:00:00 - /dev/LVG/var 1417M>100M No action required.
    2012-05-02_02:15:00 - /dev/LVG/srv 935M>100M No action required.
    2012-05-02_02:30:00 - /dev/LVG/opt 935M>100M No action required.
    2012-05-02_02:45:00 - /dev/LVG/bak 1871M>100M No action required.
    Here is a sample of what the log will look like when it performs increases:

    /var/log/check-storage.log
    Code:
    2012-05-02_01:30:00 - Starting expansion of /dev/LVG/tmp
    2012-05-02_01:30:00 --- LVSize=2048MB, FSSize=1004MB, FSAvail=93MB, FSThreshold=100MB, FSIncreaseBy=50MB
    2012-05-02_01:30:00 --- resize2fs /dev/LVG/tmp 1054M, ReturnCode=0
    2012-05-02_01:30:00 - Finished expansion of /dev/LVG/tmp
    2012-05-02_02:00:00 - Starting expansion of /dev/LVG/var
    2012-05-02_02:00:00 --- LVSize=3072MB, FSSize=1901MB, FSAvail=95MB, FSThreshold=100MB, FSIncreaseBy=50MB
    2012-05-02_02:00:00 --- resize2fs /dev/LVG/var 1951M, ReturnCode=0
    2012-05-02_02:00:00 - Finished expansion of /dev/LVG/var
    2012-05-02_02:45:00 - Starting expansion of /dev/LVG/bak
    2012-05-02_02:45:00 --- LVSize=4096MB, FSSize=1996MB, FSAvail=91MB, FSThreshold=100MB, FSIncreaseBy=50MB
    2012-05-02_02:45:00 --- resize2fs /dev/LVG/bak 2046M, ReturnCode=0
    2012-05-02_02:45:00 - Finished expansion of /dev/LVG/bak
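The df/awk/tail/sed pipeline used inside check-storage.sh can be tried standalone against any mounted file system; the root mount is used here since it always exists:

```shell
# Extract available space (in MB) for / the same way the script does:
# df prints sizes with an "M" suffix, awk grabs the Available column,
# tail keeps the data row, sed strips the suffix.
FSAvailable="$(df --block-size=m / | awk '{ print $4 }' | tail -n 1 | sed 's/M//')"
echo "Available on /: ${FSAvailable} MB"
```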

  8. #8

    Re: How to Install and Configure an Ubuntu Server 14.04.1 LTS

    APT Upgrade

    This script can be scheduled to run daily to check for updates and then install them if available.

    The following is an example of a crontab entry to schedule the script to run once per day @ 3am.

    /var/scripts/data/crontab.root
    Code:
    0 3 * * * /var/scripts/prod/apt-upgrade.sh > /dev/null 2>&1
    /var/scripts/prod/apt-upgrade.sh
    Code:
    #!/bin/bash
    #############################################
    ## Name          : apt-upgrade.sh
    ## Version       : 1.1
    ## Date          : 2013-01-08
    ## Author        : LHammonds
    ## Purpose       : Keep system updated (rather than use unattended-upgrades)
    ## Compatibility : Verified on Ubuntu Server 12.04-14.04 LTS
    ## Requirements  : Sendemail, run as root
    ## Run Frequency : Recommend once per day.
    ## Parameters    : None
    ## Exit Codes    :
    ##    0 = Success
    ##    1 = ERROR: Lock file detected.
    ##    2 = ERROR: Not run as root user.
    ##    4 = ERROR: Aptitude update Error.
    ##    8 = ERROR: Aptitude safe-upgrade Error.
    ##   16 = ERROR: Aptitude autoclean Error.
    ################ CHANGE LOG #################
    ## DATE       WHO WHAT WAS CHANGED
    ## ---------- --- ----------------------------
    ## 2012-06-01 LTH Created script.
    ## 2013-01-08 LTH Allow visible status output if run manually.
    #############################################
     
    ## Import standard variables and functions. ##
    source /var/scripts/common/standard.conf
    
    ## Define local variables.
    LOGFILE="${LOGDIR}/${COMPANY}-apt-upgrade.log"
    LOCKFILE="${TEMPDIR}/${COMPANY}-apt-upgrade.lock"
    ErrorFlag=0
    ReturnCode=0
    APTCMD="$(which aptitude)"
    
    #######################################
    ##            FUNCTIONS              ##
    #######################################
    
    function f_cleanup()
    {
      if [ -f ${LOCKFILE} ];then
        ## Remove lock file so subsequent jobs can run.
        rm ${LOCKFILE} 1>/dev/null 2>&1
      fi
      ## Temporarily pause script in case user is watching output.
      sleep 2
      exit ${ErrorFlag}
    }
    
    #######################################
    ##           MAIN PROGRAM            ##
    #######################################
    
    clear
    if [ -f ${LOCKFILE} ]; then
      # Lock file detected.  Abort script.
      echo "** Script aborted **"
      echo "This script tried to run but detected the lock file: ${LOCKFILE}"
      echo "Please check to make sure the file does not remain when the script is not actually running."
      f_sendmail "ERROR: Script aborted" "This script tried to run but detected the lock file: ${LOCKFILE}\n\nPlease check to make sure the file does not remain when the script is not actually running.\n\nIf you find that the script is not running/hung, you can remove it by typing 'rm ${LOCKFILE}'"
      ErrorFlag=1
      f_cleanup
    else
      echo "`date +%Y-%m-%d_%H:%M:%S` ${SCRIPTNAME}" > ${LOCKFILE}
    fi
    
    ## Requirement Check: Script must run as root user.
    if [ "$(id -u)" != "0" ]; then
      ## FATAL ERROR DETECTED: Document problem and terminate script.
      echo -e "ERROR: Root user required to run this script.\n"
      ErrorFlag=2
      f_cleanup
    fi
    
    echo "`date +%Y-%m-%d_%H:%M:%S` - Begin script." | tee -a ${LOGFILE}
    echo "`date +%Y-%m-%d_%H:%M:%S` --- Aptitude Update" | tee -a ${LOGFILE}
    ${APTCMD} update > /dev/null 2>&1
    ReturnCode=$?
    if [[ "${ReturnCode}" -gt 0 ]]; then
      ErrorFlag=4
      f_cleanup
    fi
    echo "`date +%Y-%m-%d_%H:%M:%S` --- Aptitude Safe-Upgrade" | tee -a ${LOGFILE}
    echo "--------------------------------------------------" >> ${LOGFILE}
    ${APTCMD} safe-upgrade --assume-yes --target-release `lsb_release -cs`-security >> ${LOGFILE} 2>&1
    ReturnCode=$?
    if [[ "${ReturnCode}" -gt 0 ]]; then
      ErrorFlag=8
      f_cleanup
    fi
    echo "--------------------------------------------------" >> ${LOGFILE}
    echo "`date +%Y-%m-%d_%H:%M:%S` --- Aptitude Autoclean" | tee -a ${LOGFILE}
    echo "--------------------------------------------------" >> ${LOGFILE}
    ${APTCMD} autoclean >> ${LOGFILE} 2>&1
    ReturnCode=$?
    if [[ "${ReturnCode}" -gt 0 ]]; then
      ErrorFlag=16
      f_cleanup
    fi
    echo "--------------------------------------------------" >> ${LOGFILE}
    echo "`date +%Y-%m-%d_%H:%M:%S` - End script." | tee -a ${LOGFILE}
    
    ## Perform cleanup routine.
    f_cleanup
    Here is the typical output:

    /var/log/apt-upgrade.log
    Code:
    2012-06-01_09:31:19 - Begin script.
    2012-06-01_09:31:19 --- Aptitude Update
    2012-06-01_09:32:01 --- Aptitude Safe-Upgrade
    --------------------------------------------------
    Reading package lists...
    Building dependency tree...
    Reading state information...
    Reading extended state information...
    Initializing package states...
    No packages will be installed, upgraded, or removed.
    0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B of archives. After unpacking 0 B will be used.
    Reading package lists...
    Building dependency tree...
    Reading state information...
    Reading extended state information...
    Initializing package states...
    --------------------------------------------------
    2012-06-01_09:32:03 --- Aptitude Autoclean
    --------------------------------------------------
    Reading package lists...
    Building dependency tree...
    Reading state information...
    Reading extended state information...
    Initializing package states...
    Freed 0 B of disk space
    --------------------------------------------------
    2012-06-01_09:32:04 - End script.
    2012-06-02_03:00:01 - Begin script.
    2012-06-02_03:00:01 --- Aptitude Update
    2012-06-02_03:00:26 --- Aptitude Safe-Upgrade
    --------------------------------------------------
    Reading package lists...
    Building dependency tree...
    Reading state information...
    Reading extended state information...
    Initializing package states...
    The following packages will be upgraded:
      grub-common grub-pc grub-pc-bin grub2-common libcups2 libgcrypt11
    6 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Need to get 3,611 kB of archives. After unpacking 59.4 kB will be freed.
    Writing extended state information...
    Get: 1 http://us.archive.ubuntu.com/ubuntu/ precise-updates/main libgcrypt11 amd64 1.5.0-3ubuntu0.1 [280 kB]
    Get: 2 http://us.archive.ubuntu.com/ubuntu/ precise-updates/main libcups2 amd64 1.5.3-0ubuntu1 [171 kB]
    Get: 3 http://us.archive.ubuntu.com/ubuntu/ precise-updates/main grub-pc amd64 1.99-21ubuntu3.1 [140 kB]
    Get: 4 http://us.archive.ubuntu.com/ubuntu/ precise-updates/main grub-pc-bin amd64 1.99-21ubuntu3.1 [861 kB]
    Get: 5 http://us.archive.ubuntu.com/ubuntu/ precise-updates/main grub2-common amd64 1.99-21ubuntu3.1 [94.3 kB]
    Get: 6 http://us.archive.ubuntu.com/ubuntu/ precise-updates/main grub-common amd64 1.99-21ubuntu3.1 [2,066 kB]
    Fetched 3,611 kB in 3s (941 kB/s)
    debconf: unable to initialize frontend: Dialog
    debconf: (TERM is not set, so the dialog frontend is not usable.)
    debconf: falling back to frontend: Readline
    debconf: unable to initialize frontend: Readline
    debconf: (This frontend requires a controlling tty.)
    debconf: falling back to frontend: Teletype
    dpkg-preconfigure: unable to re-open stdin:
    (Reading database ... 61584 files and directories currently installed.)
    Preparing to replace libgcrypt11 1.5.0-3 (using .../libgcrypt11_1.5.0-3ubuntu0.1_amd64.deb) ...
    Unpacking replacement libgcrypt11 ...
    Preparing to replace libcups2 1.5.2-9ubuntu1 (using .../libcups2_1.5.3-0ubuntu1_amd64.deb) ...
    Unpacking replacement libcups2 ...
    Preparing to replace grub-pc 1.99-21ubuntu3 (using .../grub-pc_1.99-21ubuntu3.1_amd64.deb) ...
    Unpacking replacement grub-pc ...
    Preparing to replace grub-pc-bin 1.99-21ubuntu3 (using .../grub-pc-bin_1.99-21ubuntu3.1_amd64.deb) ...
    Unpacking replacement grub-pc-bin ...
    Preparing to replace grub2-common 1.99-21ubuntu3 (using .../grub2-common_1.99-21ubuntu3.1_amd64.deb) ...
    Unpacking replacement grub2-common ...
    Preparing to replace grub-common 1.99-21ubuntu3 (using .../grub-common_1.99-21ubuntu3.1_amd64.deb) ...
    Unpacking replacement grub-common ...
    Processing triggers for man-db ...
    debconf: unable to initialize frontend: Dialog
    debconf: (TERM is not set, so the dialog frontend is not usable.)
    debconf: falling back to frontend: Readline
    debconf: unable to initialize frontend: Readline
    debconf: (This frontend requires a controlling tty.)
    debconf: falling back to frontend: Teletype
    Processing triggers for install-info ...
    Processing triggers for ureadahead ...
    Setting up libgcrypt11 (1.5.0-3ubuntu0.1) ...
    Setting up libcups2 (1.5.3-0ubuntu1) ...
    Setting up grub-common (1.99-21ubuntu3.1) ...
    Installing new version of config file /etc/grub.d/10_linux ...
    Setting up grub2-common (1.99-21ubuntu3.1) ...
    Setting up grub-pc-bin (1.99-21ubuntu3.1) ...
    Setting up grub-pc (1.99-21ubuntu3.1) ...
    debconf: unable to initialize frontend: Dialog
    debconf: (TERM is not set, so the dialog frontend is not usable.)
    debconf: falling back to frontend: Readline
    debconf: unable to initialize frontend: Readline
    debconf: (This frontend requires a controlling tty.)
    debconf: falling back to frontend: Teletype
    Installation finished. No error reported.
    Generating grub.cfg ...
    Found linux image: /boot/vmlinuz-3.2.0-24-generic
    Found initrd image: /boot/initrd.img-3.2.0-24-generic
    Found linux image: /boot/vmlinuz-3.2.0-23-generic
    Found initrd image: /boot/initrd.img-3.2.0-23-generic
    Found memtest86+ image: /memtest86+.bin
    done
    Processing triggers for libc-bin ...
    ldconfig deferred processing now taking place
    ldconfig deferred processing now taking place
    Reading package lists...
    Building dependency tree...
    Reading state information...
    Reading extended state information...
    Initializing package states...
    --------------------------------------------------
    2012-06-02_03:00:43 --- Aptitude Autoclean
    --------------------------------------------------
    Reading package lists...
    Building dependency tree...
    Reading state information...
    Reading extended state information...
    Initializing package states...
    Freed 0 B of disk space
    --------------------------------------------------
    2012-06-02_03:00:44 - End script.

  9. #9
    Join Date
    Sep 2011
    Location
    Behind you!
    Beans
    1,690
    Distro
    Ubuntu 20.04 Focal Fossa

    Lightbulb Re: How to Install and Configure an Ubuntu Server 14.04.1 LTS

    Backup Partitions Using LVM Snapshots and FSArchiver

    This method will allow online backup of the server at the partition level. It is designed to run via crontab schedule but can also be run manually.

    This should be considered a full backup, which means you will probably need to rely on other methods, such as rsync at the file level, for granular backups and restores.

    This method is great for backing up a system just prior to and just after a major upgrade of the OS or an application. It is not well suited to retrieving individual files; that can be done, but it requires a bit of extra work: temporarily restore the archive to an unused area, retrieve the file(s), and then destroy the temporary partition.
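That temporary-restore trick might be scripted like the sketch below. It is hypothetical (not from the original guide): the volume group name LVG, the scratch volume size, the passphrase, and the mount point are all placeholder assumptions, and the function requires root plus free extents in the volume group.

```shell
#!/bin/bash
## Hypothetical sketch: retrieve an individual file by restoring an archive
## to a scratch logical volume.  LVG, the 5G size, the passphrase, and the
## mount point are placeholder assumptions; must run as root on a real host.
restore_single_file()
{
  Archive=$1     ## e.g. /bak/srv-ubuntu-root.fsa
  WantedFile=$2  ## path inside the archive, e.g. etc/fstab
  ## Create a scratch volume large enough to hold the restored filesystem.
  lvcreate --size=5G --name="temprestore" LVG || return 1
  fsarchiver restfs --cryptpass="abc123" "${Archive}" id=0,dest=/dev/LVG/temprestore || return 1
  mkdir -p /mnt/temprestore
  mount /dev/LVG/temprestore /mnt/temprestore
  cp "/mnt/temprestore/${WantedFile}" /tmp/
  ## Destroy the temporary area once the file is retrieved.
  umount /mnt/temprestore
  lvremove --force /dev/LVG/temprestore
}
```

For example, restore_single_file /bak/srv-ubuntu-root.fsa etc/fstab would leave a copy of fstab in /tmp.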

    The /bak partition is skipped because that is where the archives are being stored.

    The /tmp partition is skipped because there should not be anything in there that needs to be restored, but feel free to include it if you like.

    The script below was built around a few very basic commands that do the bulk of the work but most of the code is for error handling.
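That repeated run-then-check pattern can be seen in miniature below. This helper is just an illustration of the pattern, not code taken from the actual script.

```shell
#!/bin/bash
## Illustration of the run-command / capture $? / log-and-bail pattern
## that the backup script repeats for each step.
run_step()
{
  "$@"
  ReturnCode=$?
  if [ ${ReturnCode} -ne 0 ]; then
    echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: '$*' failed, Return Code = ${ReturnCode}"
    return ${ReturnCode}
  fi
}
run_step true && echo "step OK"
run_step false || echo "failure detected"
```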

    Here are examples of the commands:
    Code:
    ## Create the snapshot volume of the partition to be backed up.
    lvcreate --size=5G --snapshot --name="tempsnap" /dev/LVG/root
     
    ## Create the compressed and encrypted archive of the snapshot.
    fsarchiver savefs --compress=7 --jobs=1 --cryptpass="abc123" --label="insert comment here" /bak/root.fsa /dev/LVG/tempsnap
     
    ## Create an informational text file about the archive.
    fsarchiver archinfo --cryptpass="abc123" /bak/root.fsa > /bak/root.txt 2>&1
     
    ## Remove the snapshot.
    lvremove --force /dev/LVG/tempsnap
     
    ## Create a checksum file about the archive.
    md5sum /bak/root.fsa > /bak/root.md5
     
    ## Verify that the checksum file can validate against the archive.
    md5sum --check /bak/root.md5
    Here is an example of a crontab entry to run the script once a day.

    /var/scripts/data/crontab.root
    Code:
    0 4 * * * /var/scripts/prod/back-parts.sh > /dev/null 2>&1
    Here is the script.

    /var/scripts/prod/back-parts.sh
    Code:
    #!/bin/bash
    #############################################################
    ## Name : back-parts.sh (Backup Partitions)
    ## Version : 1.0
    ## Date : 2013-01-09
    ## Author : LHammonds
    ## Purpose : Backup partitions
    ## Compatibility : Verified on Ubuntu Server 12.04-14.04 LTS (fsarchiver 0.6.12-0.6.19)
    ## Requirements : Fsarchiver, Sendemail, run as root
    ## Run Frequency : Once per day or as often as desired.
    ## Parameters : None
    ## Exit Codes :
    ## 0 = Success
    ## 1 = ERROR: Lock file detected
    ## 2 = ERROR: Must be root user
    ## 4 = ERROR: Missing software
    ## 8 = ERROR: LVM problems
    ## 16 = ERROR: File creation problems
    ## 32 = ERROR: Mount/Unmount problems
    ###################### CHANGE LOG ###########################
    ## DATE VER WHO WHAT WAS CHANGED
    ## ---------- --- --- ---------------------------------------
    ## 2013-01-09 1.0 LTH Created script.
    #############################################################
    
    ## Import standard variables and functions. ##
    source /var/scripts/common/standard.conf
    
    ## Define local variables.
    LOGFILE="${LOGDIR}/${COMPANY}-back-parts.log"
    LOCKFILE="${TEMPDIR}/${COMPANY}-back-parts.lock"
    LVG="/dev/LVG"
    TempLV="${LVG}/tempsnap"
    MaxTempVolSize=1G
    ErrorFlag=0
    ReturnCode=0
    CryptPass="abc123"
    
    #######################################
    ##            FUNCTIONS              ##
    #######################################
    
    function f_cleanup()
    {
      if [ -f ${LOCKFILE} ];then
        ## Remove lock file so subsequent backup jobs can run.
        rm ${LOCKFILE} 1>/dev/null 2>&1
      fi
      if [ ${ErrorFlag} != 0 ]; then
        f_sendmail "ERROR: Script Failure" "Please review the log file on ${HOSTNAME}:${LOGFILE}"
        echo "`date +%Y-%m-%d_%H:%M:%S` - Backup aborted." >> ${LOGFILE}
      fi
      exit ${ErrorFlag}
    }
    
    function f_archive_fs()
    {
      FSName=$1
      FSPath=$2
    
      ## Purge old backup files.
      if [ -f ${BACKUPDIR}/${HOSTNAME}-${FSName}.fsa ]; then
        rm ${BACKUPDIR}/${HOSTNAME}-${FSName}.fsa
      fi
      if [ -f ${BACKUPDIR}/${HOSTNAME}-${FSName}.txt ]; then
        rm ${BACKUPDIR}/${HOSTNAME}-${FSName}.txt
      fi
      if [ -f ${BACKUPDIR}/${HOSTNAME}-${FSName}.md5 ]; then
        rm ${BACKUPDIR}/${HOSTNAME}-${FSName}.md5
      fi
    
      ## Unmount FileSystem.
      umount /${FSName}
    
      LVLabel="${HOSTNAME}:${FSPath}->/${FSName}"
      ## Create the compressed and encrypted archive of the snapshot.
      fsarchiver savefs --compress=7 --jobs=1 --cryptpass="${CryptPass}" --label="${LVLabel}" ${BACKUPDIR}/${HOSTNAME}-${FSName}.fsa ${FSPath} > /dev/null 2>&1
      ReturnCode=$?
      if [ ${ReturnCode} != 0 ]; then
        ## Creation of the archive failed.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: Creation of ${BACKUPDIR}/${FSName}.fsa failed, Return Code = ${ReturnCode}" >> ${LOGFILE}
        ErrorFlag=16
        f_cleanup
      fi
    
      ## Create an informational text file about the archive.
      fsarchiver archinfo --cryptpass="${CryptPass}" ${BACKUPDIR}/${HOSTNAME}-${FSName}.fsa > ${BACKUPDIR}/${HOSTNAME}-${FSName}.txt 2>&1
      ReturnCode=$?
      if [ ${ReturnCode} != 0 ]; then
        ## Creation of info text failed.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: Creation of info file failed for ${BACKUPDIR}/${FSName}.txt, Return Code = ${ReturnCode}" >> ${LOGFILE}
        ErrorFlag=16
        f_cleanup
      fi
    
      ## Create a checksum file about the archive.
      md5sum ${BACKUPDIR}/${HOSTNAME}-${FSName}.fsa > ${BACKUPDIR}/${HOSTNAME}-${FSName}.md5
      ReturnCode=$?
      if [ ${ReturnCode} != 0 ]; then
        ## Creation of md5 checksum failed.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: Creation of checksum failed for ${BACKUPDIR}/${FSName}.md5, Return Code = ${ReturnCode}" >> ${LOGFILE}
        ErrorFlag=16
        f_cleanup
      fi
    
      ## Verify that the checksum file can validate against the archive.
      md5sum --check --status ${BACKUPDIR}/${HOSTNAME}-${FSName}.md5
      ReturnCode=$?
      if [ ${ReturnCode} != 0 ]; then
        ## Verification failed.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: md5 validation check failed for ${BACKUPDIR}/${FSName}.md5. Return Code = ${ReturnCode}" >> ${LOGFILE}
        ErrorFlag=16
        f_cleanup
      fi
    
      BackupSize=`ls -lak --block-size=m "${BACKUPDIR}/${HOSTNAME}-${FSName}.fsa" | awk '{ print $5 }'`
    
      echo "`date +%Y-%m-%d_%H:%M:%S` --- Created: ${BACKUPDIR}/${HOSTNAME}-${FSName}.fsa, ${BackupSize}" >> ${LOGFILE}
    
      ## Copy the backup to an offsite storage location.
      echo "`date +%Y-%m-%d_%H:%M:%S` --- Copying archive file to offsite location." >> ${LOGFILE}
      cp ${BACKUPDIR}/${HOSTNAME}-${FSName}.* ${OFFSITEDIR}/. 1>/dev/null 2>&1
      if [ ! -f ${OFFSITEDIR}/${HOSTNAME}-${FSName}.fsa ]; then
        ## NON-FATAL ERROR: Copy command did not work.  Send email notification.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- WARNING: Remote copy failed. ${OFFSITEDIR}/${HOSTNAME}-${FSName}.fsa does not exist!" >> ${LOGFILE}
        f_sendmail "Backup Failure - Remote Copy" "Remote copy failed. ${OFFSITEDIR}/${HOSTNAME}-${FSName}.fsa does not exist\n\nBackup file still remains in this location: ${HOSTNAME}:${BACKUPDIR}/${HOSTNAME}-${FSName}.fsa"
      fi
    
      ## Remount FileSystem.
      mount /${FSName}
    }
    
    function f_archive_vol()
    {
      LVName=$1
      LVPath=${LVG}/${LVName}
    
      ## Purge old backup files.
      if [ -f ${BACKUPDIR}/${HOSTNAME}-${LVName}.fsa ]; then
        rm ${BACKUPDIR}/${HOSTNAME}-${LVName}.fsa
      fi
      if [ -f ${BACKUPDIR}/${HOSTNAME}-${LVName}.txt ]; then
        rm ${BACKUPDIR}/${HOSTNAME}-${LVName}.txt
      fi
      if [ -f ${BACKUPDIR}/${HOSTNAME}-${LVName}.md5 ]; then
        rm ${BACKUPDIR}/${HOSTNAME}-${LVName}.md5
      fi
    
      ## Create the snapshot volume of the partition to be backed up.
      lvcreate --size=${MaxTempVolSize} --snapshot --name="tempsnap" ${LVPath} > /dev/null 2>&1
      ReturnCode=$?
      if [ ${ReturnCode} != 0 ]; then
        ## Creation of temporary volume failed.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: Creation of temp volume failed for ${LVPath}, size=${MaxTempVolSize}, Return Code = ${ReturnCode}" >> ${LOGFILE}
        ErrorFlag=8
        f_cleanup
      fi
    
      ## Give the OS a moment to let the LV create command do its thing.
      sleep 2
    
      LVLabel="${HOSTNAME}:${LVPath}"
      ## Create the compressed and encrypted archive of the snapshot.
      fsarchiver savefs --compress=7 --jobs=1 --cryptpass="${CryptPass}" --label="${LVLabel}" ${BACKUPDIR}/${HOSTNAME}-${LVName}.fsa ${TempLV} > /dev/null 2>&1
      ReturnCode=$?
      if [ ${ReturnCode} != 0 ]; then
        ## Creation of the archive failed.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: Creation of ${BACKUPDIR}/${HOSTNAME}-${LVName}.fsa failed, Return Code = ${ReturnCode}" >> ${LOGFILE}
        ErrorFlag=16
        f_cleanup
      fi
    
      ## Create an informational text file about the archive.
      fsarchiver archinfo --cryptpass="${CryptPass}" ${BACKUPDIR}/${HOSTNAME}-${LVName}.fsa > ${BACKUPDIR}/${HOSTNAME}-${LVName}.txt 2>&1
      ReturnCode=$?
      if [ ${ReturnCode} != 0 ]; then
        ## Creation of info text failed.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: Creation of info file failed for ${BACKUPDIR}/${HOSTNAME}-${LVName}.txt, Return Code = ${ReturnCode}" >> ${LOGFILE}
        ErrorFlag=16
        f_cleanup
      fi
    
      ## Create a checksum file about the archive.
      md5sum ${BACKUPDIR}/${HOSTNAME}-${LVName}.fsa > ${BACKUPDIR}/${HOSTNAME}-${LVName}.md5
      ReturnCode=$?
      if [ ${ReturnCode} != 0 ]; then
        ## Creation of md5 checksum failed.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: Creation of checksum failed for ${BACKUPDIR}/${HOSTNAME}-${LVName}.md5, Return Code = ${ReturnCode}" >> ${LOGFILE}
        ErrorFlag=16
        f_cleanup
      fi
    
      ## Remove the snapshot.
      lvremove --force ${TempLV} > /dev/null 2>&1
      ReturnCode=$?
      if [ ${ReturnCode} != 0 ]; then
        ## Removal of temporary volume failed.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: Removal of temp volume failed. ${TempLV}. Return Code = ${ReturnCode}" >> ${LOGFILE}
        ErrorFlag=8
        f_cleanup
      fi
    
      ## Give the OS a moment to let the LV remove command do its thing.
      sleep 2
    
      ## Verify that the checksum file can validate against the archive.
      md5sum --check --status ${BACKUPDIR}/${HOSTNAME}-${LVName}.md5
      ReturnCode=$?
      if [ ${ReturnCode} != 0 ]; then
        ## Verification failed.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: md5 validation check failed for ${BACKUPDIR}/${HOSTNAME}-${LVName}.md5. Return Code = ${ReturnCode}" >> ${LOGFILE}
        ErrorFlag=16
        f_cleanup
      fi
    
      BackupSize=`ls -lak --block-size=m "${BACKUPDIR}/${HOSTNAME}-${LVName}.fsa" | awk '{ print $5 }'`
    
      echo "`date +%Y-%m-%d_%H:%M:%S` --- Created: ${BACKUPDIR}/${HOSTNAME}-${LVName}.fsa, ${BackupSize}" >> ${LOGFILE}
    
      ## Copy the backup to an offsite storage location.
      echo "`date +%Y-%m-%d_%H:%M:%S` --- Copying archive file to offsite location." >> ${LOGFILE}
      cp ${BACKUPDIR}/${HOSTNAME}-${LVName}.* ${OFFSITEDIR}/. 1>/dev/null 2>&1
      if [ ! -f ${OFFSITEDIR}/${HOSTNAME}-${LVName}.fsa ]; then
        ## NON-FATAL ERROR: Copy command did not work.  Send email notification.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- WARNING: Remote copy failed. ${OFFSITEDIR}/${HOSTNAME}-${LVName}.fsa does not exist!" >> ${LOGFILE}
        f_sendmail "Backup Failure - Remote Copy" "Remote copy failed. ${OFFSITEDIR}/${HOSTNAME}-${LVName}.fsa does not exist\n\nBackup file still remains in this location: ${HOSTNAME}:${BACKUPDIR}/${HOSTNAME}-${LVName}.fsa"
      fi
    }
    
    #######################################
    ##           MAIN PROGRAM            ##
    #######################################
    
    if [ -f ${LOCKFILE} ]; then
      # Lock file detected.  Abort script.
      echo "Backup partitions script aborted"
      echo "This script tried to run but detected the lock file: ${LOCKFILE}"
      echo "Please check to make sure the file does not remain when backup partitions is not actually running."
      f_sendmail "ERROR: Backup partitions script aborted" "This script tried to run but detected the lock file: ${LOCKFILE}\n\nPlease check to make sure the file does not remain when backup partitions is not actually running.\n\nIf you find that the script is not running/hung, you can remove it by typing 'rm ${LOCKFILE}'"
      ErrorFlag=1
      exit ${ErrorFlag}
    else
      echo "`date +%Y-%m-%d_%H:%M:%S` ${SCRIPTNAME}" > ${LOCKFILE}
    fi
    
    ## Requirement Check: Script must run as root user.
    if [ "$(id -u)" != "0" ]; then
      ## FATAL ERROR DETECTED: Document problem and terminate script.
      echo "`date +%Y-%m-%d_%H:%M:%S` ERROR: Root user required to run this script." >> ${LOGFILE}
      ErrorFlag=2
      f_cleanup
    fi
    
    ## Requirement Check: Software
    command -v fsarchiver > /dev/null 2>&1 && ReturnCode=0 || ReturnCode=1
    if [ ${ReturnCode} = 1 ]; then
      ## Required program not installed.
      echo "`date +%Y-%m-%d_%H:%M:%S` ERROR: fsarchiver not installed." >> ${LOGFILE}
      ErrorFlag=4
      f_cleanup
    fi
    
    ## Mount the remote folder. ##
    f_mount
    
    if [ ! -f ${OFFSITETESTFILE} ]; then
      ## Could not find expected file on remote site.  Assuming failed mount.
      echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: Cannot detect remote location: ${OFFSITETESTFILE}" >> ${LOGFILE}
      ErrorFlag=32
      f_cleanup
    fi
    
    StartTime="$(date +%s)"
    echo "`date +%Y-%m-%d_%H:%M:%S` - Backup started." >> ${LOGFILE}
    
    f_archive_fs boot /dev/sda1
    f_archive_vol root
    f_archive_vol home
    f_archive_vol usr
    f_archive_vol var
    f_archive_vol srv
    f_archive_vol opt
    #f_archive_vol swap
    
    ## Unmount the Windows shared folder.
    f_umount
    
    ## Calculate total time for backup.
    FinishTime="$(date +%s)"
    ElapsedTime="$(expr ${FinishTime} - ${StartTime})"
    Hours=$((${ElapsedTime} / 3600))
    ElapsedTime=$((${ElapsedTime} - ${Hours} * 3600))
    Minutes=$((${ElapsedTime} / 60))
    Seconds=$((${ElapsedTime} - ${Minutes} * 60))
    
    echo "`date +%Y-%m-%d_%H:%M:%S` --- Total backup time: ${Hours} hour(s) ${Minutes} minute(s) ${Seconds} second(s)" >> ${LOGFILE}
    
    echo "`date +%Y-%m-%d_%H:%M:%S` - Backup Finished." >> ${LOGFILE}
    f_cleanup
    Here is an example of the log file:

    /var/log/back-parts.log
    Code:
    2012-06-28_18:19:36 - Backup started.
    2012-06-28_18:19:46 --- Created: /bak/srv-ubuntu-boot.fsa, 40M
    2012-06-28_18:19:46 --- Created: /bak/srv-ubuntu-home.fsa, 1M
    2012-06-28_18:20:33 --- Created: /bak/srv-ubuntu-root.fsa, 96M
    2012-06-28_18:20:44 --- Created: /bak/srv-ubuntu-opt.fsa, 1M
    2012-06-28_18:20:55 --- Created: /bak/srv-ubuntu-srv.fsa, 1M
    2012-06-28_18:22:21 --- Created: /bak/srv-ubuntu-usr.fsa, 162M
    2012-06-28_18:23:28 --- Created: /bak/srv-ubuntu-var.fsa, 189M
    2012-06-28_18:23:40 --- Backup time: 0 hour(s) 4 minute(s) 4 second(s)
    2012-06-28_18:23:40 - Backup Finished.
    An example email notification when a fatal error occurred:
    Code:
    #### Still working on it ####
    An example email notification when non-fatal errors occurred:
    Code:
    #### Still working on it ####
    An example email notification when no errors occur and email notifications turned on:
    Code:
    From: admin@mydomain.com
    To: lhammonds@mydomain.com
    Sent: Friday, June 29, 2012 10:36:45 AM
    Subject: Backup Completed
     
    INFO: The partition backup job has completed without any errors.
     
    Server: srv-ubuntu
    Program: /var/scripts/prod/back-parts.sh
    Log: /var/log/back-parts.log

    NOTE: If the snapshot volume could not be automatically removed, here is how you do it:

    Code:
    dmsetup ls
    Code:
    LVG-srv (252, 6)
    LVG-tempsnap    (252, 9)
    LVG-opt (252, 7)
    LVG-swap        (252, 1)
    LVG-root        (252, 0)
    LVG-opt-real    (252, 10)
    LVG-bak (252, 8)
    LVG-tmp (252, 3)
    LVG-tempsnap-cow        (252, 11)
    LVG-usr (252, 4)
    LVG-var (252, 5)
    LVG-home        (252, 2)
    Code:
    dmsetup remove LVG-tempsnap
    dmsetup remove LVG-tempsnap-cow
    Backup Test

    Before the partitions are backed up on your server, create a couple of empty test files to verify that the restore in the next section will work.

    Type the following commands:

    Code:
    touch /important.txt
    touch /srv/samba/share/important.txt
    Make sure the above files are included in your backup before testing the restore in the next section.
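One way to confirm the backups are usable before rebooting into the rescue CD is to validate every checksum file in the backup folder. The loop below is a sketch demonstrated on scratch files; on a real server, point BAKDIR at /bak instead.

```shell
#!/bin/bash
## Sketch: validate every .md5 file in a backup folder.  Demonstrated on
## scratch files; set BAKDIR=/bak on a real server.
BAKDIR=$(mktemp -d)
echo "archive bytes" > "${BAKDIR}/demo.fsa"
## md5sum records the relative name, so create/check from inside the folder.
( cd "${BAKDIR}" && md5sum demo.fsa > demo.md5 )
for MD5File in "${BAKDIR}"/*.md5; do
  if ( cd "${BAKDIR}" && md5sum --check --status "$(basename "${MD5File}")" ); then
    echo "OK: ${MD5File}"
  else
    echo "FAILED: ${MD5File}"
  fi
done
```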

  10. #10
    Join Date
    Sep 2011
    Location
    Behind you!
    Beans
    1,690
    Distro
    Ubuntu 20.04 Focal Fossa

    Lightbulb Re: How to Install and Configure an Ubuntu Server 14.04.1 LTS

    Restore Partitions Using SystemRescueCD and FSArchiver

    Partitions cannot be mounted while being restored. If the services that use files on a specific partition can be stopped, that partition can be unmounted and restored online. However, the root partition can never be restored while the server is online, so these instructions cover the common denominator, which requires taking the server offline.

    The server needs to be booted from a CD, but not just any CD will do: it must have FSArchiver on it. For this document, the ISO image from www.sysresccd.org will be used.

    Once downloaded, the ISO can be burned to a CD-ROM disc or uploaded to your ISO repository, such as a LUN. VMware and VirtualBox can attach an ISO image to the virtual CD-ROM device, allowing the virtual machine to boot from it.

    For this example, the root and srv partitions will be restored.

    Be sure the partitions have been backed up and the files are sitting in the /bak volume.

    Code:
    ls -l /bak
    Code:
    -rw-r--r-- 1 root root  41515916 Jun 28 18:19 srv-ubuntu-boot.fsa
    -rw-r--r-- 1 root root        59 Jun 28 18:19 srv-ubuntu-boot.md5
    -rw-r--r-- 1 root root       732 Jun 28 18:19 srv-ubuntu-boot.txt
    -rw-r--r-- 1 root root      8467 Jun 28 18:19 srv-ubuntu-home.fsa
    -rw-r--r-- 1 root root        59 Jun 28 18:19 srv-ubuntu-home.md5
    -rw-r--r-- 1 root root       730 Jun 28 18:19 srv-ubuntu-home.txt
    -rw-r--r-- 1 root root      5045 Jun 28 18:20 srv-ubuntu-opt.fsa
    -rw-r--r-- 1 root root        58 Jun 28 18:20 srv-ubuntu-opt.md5
    -rw-r--r-- 1 root root       729 Jun 28 18:20 srv-ubuntu-opt.txt
    -rw-r--r-- 1 root root  99626058 Jun 28 18:20 srv-ubuntu-root.fsa
    -rw-r--r-- 1 root root        59 Jun 28 18:20 srv-ubuntu-root.md5
    -rw-r--r-- 1 root root       732 Jun 28 18:20 srv-ubuntu-root.txt
    -rw-r--r-- 1 root root      5458 Jun 28 18:20 srv-ubuntu-srv.fsa
    -rw-r--r-- 1 root root        58 Jun 28 18:21 srv-ubuntu-srv.md5
    -rw-r--r-- 1 root root       728 Jun 28 18:20 srv-ubuntu-srv.txt
    -rw-r--r-- 1 root root 169110383 Jun 28 18:22 srv-ubuntu-usr.fsa
    -rw-r--r-- 1 root root        58 Jun 28 18:22 srv-ubuntu-usr.md5
    -rw-r--r-- 1 root root       730 Jun 28 18:22 srv-ubuntu-usr.txt
    -rw-r--r-- 1 root root 198015579 Jun 28 18:23 srv-ubuntu-var.fsa
    -rw-r--r-- 1 root root        58 Jun 28 18:23 srv-ubuntu-var.md5
    -rw-r--r-- 1 root root       730 Jun 28 18:23 srv-ubuntu-var.txt
    As a little test of the restore, let's delete the two text files created in the previous section:

    Code:
    rm /important.txt
    rm /srv/samba/share/important.txt
    These files should have been included in the backup image. When the restore is complete, these files should return.

    Insert the CDROM (or mount the ISO image) and boot the server with it.

    Here is the 1st screen (SystemRescueCD boot menu; screenshot not shown):

    The server in this document is a 64-bit server, so option #6 was chosen.

    The next screen takes you to the command prompt:

    root@sysresccd /root % mkdir /mnt/test
    root@sysresccd /root % fsarchiver probe simple
    Code:
    [======DISK======] [=============NAME==============] [====SIZE====] [MAJ] [MIN]
    [sda             ] [Virtual disk                   ] [    10.00 GB] [  8] [  0]
    [sdb             ] [Virtual disk                   ] [    12.00 GB] [  8] [ 16]
    [sdc             ] [Virtual disk                   ] [    12.00 GB] [  8] [ 32]
    [sr0             ] [VMware IDE CDR10               ] [   378.96 MB] [ 11] [  0]
     
    [=====DEVICE=====] [==FILESYS==] [======LABEL======] [====SIZE====] [MAJ] [MIN]
    [loop0           ] [squashfs   ] [<unknown>        ] [   297.63 MB] [  7] [  0]
    [sda1            ] [ext2       ] [boot             ] [   190.00 MB] [  8] [  1]
    [sda5            ] [LVM2_member] [<unknown>        ] [     9.81 GB] [  8] [  5]
    [sdb1            ] [LVM2_member] [<unknown>        ] [    12.00 GB] [  8] [ 17]
    [sdc1            ] [LVM2_member] [<unknown>        ] [    12.00 GB] [  8] [ 33]
    [dm-0            ] [ext4       ] [root             ] [     3.00 GB] [253] [  0]
    [dm-1            ] [swap       ] [<unknown>        ] [     1.86 GB] [253] [  1]
    [dm-2            ] [ext4       ] [home             ] [     1.00 GB] [253] [  2]
    [dm-3            ] [ext4       ] [tmp              ] [     2.00 GB] [253] [  3]
    [dm-4            ] [ext4       ] [usr              ] [     3.00 GB] [253] [  4]
    [dm-5            ] [ext4       ] [var              ] [     3.00 GB] [253] [  5]
    [dm-6            ] [ext4       ] [srv              ] [     2.00 GB] [253] [  6]
    [dm-7            ] [ext4       ] [opt              ] [     2.00 GB] [253] [  7]
    [dm-8            ] [ext4       ] [bak              ] [     4.00 GB] [253] [  8]
    root@sysresccd /root % mkdir /bak
    root@sysresccd /root % mount --read-only /dev/dm-8 /bak
    root@sysresccd /root % ls -l /bak
    Code:
    -rw-r--r-- 1 root root  41515916 Jun 28 18:19 srv-ubuntu-boot.fsa
    -rw-r--r-- 1 root root        59 Jun 28 18:19 srv-ubuntu-boot.md5
    -rw-r--r-- 1 root root       732 Jun 28 18:19 srv-ubuntu-boot.txt
    -rw-r--r-- 1 root root      8467 Jun 28 18:19 srv-ubuntu-home.fsa
    -rw-r--r-- 1 root root        59 Jun 28 18:19 srv-ubuntu-home.md5
    -rw-r--r-- 1 root root       730 Jun 28 18:19 srv-ubuntu-home.txt
    -rw-r--r-- 1 root root      5045 Jun 28 18:20 srv-ubuntu-opt.fsa
    -rw-r--r-- 1 root root        58 Jun 28 18:20 srv-ubuntu-opt.md5
    -rw-r--r-- 1 root root       729 Jun 28 18:20 srv-ubuntu-opt.txt
    -rw-r--r-- 1 root root  99626058 Jun 28 18:20 srv-ubuntu-root.fsa
    -rw-r--r-- 1 root root        59 Jun 28 18:20 srv-ubuntu-root.md5
    -rw-r--r-- 1 root root       732 Jun 28 18:20 srv-ubuntu-root.txt
    -rw-r--r-- 1 root root      5458 Jun 28 18:20 srv-ubuntu-srv.fsa
    -rw-r--r-- 1 root root        58 Jun 28 18:21 srv-ubuntu-srv.md5
    -rw-r--r-- 1 root root       728 Jun 28 18:20 srv-ubuntu-srv.txt
    -rw-r--r-- 1 root root 169110383 Jun 28 18:22 srv-ubuntu-usr.fsa
    -rw-r--r-- 1 root root        58 Jun 28 18:22 srv-ubuntu-usr.md5
    -rw-r--r-- 1 root root       730 Jun 28 18:22 srv-ubuntu-usr.txt
    -rw-r--r-- 1 root root 198015579 Jun 28 18:23 srv-ubuntu-var.fsa
    -rw-r--r-- 1 root root        58 Jun 28 18:23 srv-ubuntu-var.md5
    -rw-r--r-- 1 root root       730 Jun 28 18:23 srv-ubuntu-var.txt
    root@sysresccd /root % md5sum --check /bak/srv-ubuntu-srv.md5
    Code:
    /bak/srv-ubuntu-srv.fsa: OK
    root@sysresccd /root % fsarchiver restfs --cryptpass="abc123" /bak/srv-ubuntu-srv.fsa id=0,dest=/dev/dm-6
    Code:
    Statistics for filesystem 0
    * files successfully processed:....regfiles=1, directories=4, symlinks=0, hardlinks=0, specials=0
    * files with errors:...............regfiles=0, directories=0, symlinks=0, hardlinks=0, specials=0
    root@sysresccd /root % mount --read-only /dev/dm-6 /mnt/test
    root@sysresccd /root % ls -l /mnt/test/samba/share
    Code:
    -rw-r--r-- 1 root root 0 Jun 28 09:21 important.txt
    root@sysresccd /root % umount /mnt/test
    root@sysresccd /root % md5sum --check /bak/srv-ubuntu-root.md5
    Code:
    /bak/srv-ubuntu-root.fsa: OK
    root@sysresccd /root % fsarchiver restfs --cryptpass="abc123" /bak/srv-ubuntu-root.fsa id=0,dest=/dev/dm-0
    Code:
    Statistics for filesystem 0
    * files successfully processed:....regfiles=8938, directories=1693, symlinks=855, hardlinks=11, specials=80
    * files with errors:...............regfiles=0, directories=0, symlinks=0, hardlinks=0, specials=0
    root@sysresccd /root % mount --read-only /dev/dm-0 /mnt/test
    root@sysresccd /root % ls -l /mnt/test
    Code:
    drwxr-xr-x   3 root root  1024 Jun 27 12:25 bak
    drwxr-xr-x   2 root root  4096 Jun 26 12:37 bin
    drwxr-xr-x   4 root root  1024 Jun 26 12:36 boot
    drwxr-xr-x  14 root root  4400 Jun 28 09:17 dev
    drwxr-xr-x  90 root root  4096 Jun 28 09:17 etc
    drwxr-xr-x   4 root root  1024 Jun 26 10:20 home
    -rw-r--r--   1 root root     0 Jun 28 09:20 important.txt
    lrwxrwxrwx   1 root root    33 Jun 26 10:33 initrd.img -> /boot/initrd.img-3.2.0-25-generic
    lrwxrwxrwx   1 root root    33 Jun 26 10:16 initrd.img.old -> /boot/initrd.img-3.2.0-23-generic
    drwxr-xr-x  17 root root  4096 Jun 26 12:34 lib
    drwxr-xr-x   2 root root  4096 Jun 26 10:16 lib64
    drwx------   2 root root 16384 Jun 26 10:15 lost+found
    drwxr-xr-x   4 root root  4096 Jun 26 10:16 media
    drwxr-xr-x   3 root root  4096 Jun 26 12:35 mnt
    drwxr-xr-x   3 root root  1024 Jun 26 10:15 opt
    dr-xr-xr-x 111 root root     0 Jun 28 09:17 proc
    drwx------   3 root root  4096 Jun 26 19:57 root
    drwxr-xr-x  15 root root   500 Jun 28 09:18 run
    drwxr-xr-x   2 root root  4096 Jun 26 12:37 sbin
    drwxr-xr-x   2 root root  4096 Mar  5 11:54 selinux
    drwxr-xr-x   4 root root  1024 Jun 28 08:34 srv
    drwxr-xr-x  13 root root     0 Jun 28 09:17 sys
    drwxrwxrwt   4 root root  1024 Jun 28 09:18 tmp
    drwxr-xr-x  11 root root  4096 Jun 26 10:16 usr
    drwxr-xr-x  14 root root  4096 Jun 26 13:08 var
    lrwxrwxrwx   1 root root    29 Jun 26 10:33 vmlinuz -> boot/vmlinuz-3.2.0-25-generic
    lrwxrwxrwx   1 root root    29 Jun 26 10:16 vmlinuz.old -> boot/vmlinuz-3.2.0-23-generic
    root@sysresccd /root % umount /mnt/test

    Eject the CDROM/ISO and reboot the server.

    If everything worked, the server will boot up normally (since we restored the root file system) and both test files should be back in place.

    NOTE: If you noticed any ext file systems that had <unknown> labels, you can set them with the tune2fs command. Example 1: tune2fs /dev/sda1 -L boot, Example 2: tune2fs /dev/dm-0 -L root
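    For reference, the verify-then-restore pattern shown above can be wrapped in a small helper so each file system is only restored if its checksum passes. This is just a sketch, not part of the original procedure: the restore_fs function name, the /bak path, the "abc123" passphrase, and the example device names are assumptions taken from this walkthrough, and you must confirm your own archive-to-device mapping before uncommenting anything.

    Code:
    ```shell
    #!/bin/sh
    # Hypothetical helper based on the steps above: check the md5 first,
    # then restore the archive onto the target device with fsarchiver.
    BAK=/bak
    PASS="abc123"   # same --cryptpass used when the archives were created

    restore_fs() {
        name="$1"   # archive base name, e.g. srv-ubuntu-srv
        dev="$2"    # target device, e.g. /dev/dm-6 -- verify this first!
        # Abort before touching the device if the checksum does not match.
        md5sum --check "$BAK/$name.md5" || return 1
        fsarchiver restfs --cryptpass="$PASS" "$BAK/$name.fsa" id=0,dest="$dev"
    }

    # Example usage (uncomment only after double-checking device names):
    # restore_fs srv-ubuntu-srv  /dev/dm-6
    # restore_fs srv-ubuntu-root /dev/dm-0
    ```

    After each restore you can still mount the device read-only on /mnt/test, as shown earlier, to spot-check the files before rebooting.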
