
Thread: How to install and configure an Ubuntu Server 18.04 LTS

  1. #1

    How to install and configure an Ubuntu Server 18.04 LTS

    NOTE: This documentation was developed using the "alternative installer" for Ubuntu Server 18.04, which works just like the 16.04 installer. I will not cover the default "live" installer, which mimics a desktop install and does not allow LVM partitions or RAID.

    Greetings and salutations,

    I hope this thread will be helpful to those who follow in my footsteps, and I welcome any advice based on what I have done and documented.

    Link to original post: HammondsLegacy Forums (best when viewed at original location due to custom formatting)

    High-level overview

    This document will cover the installation of a dedicated Ubuntu server. This will be the "base" installation of the server, a prerequisite for other documents that will build upon it (e.g. MediaWiki and MySQL). The server will be installed inside a virtual machine using vSphere running on ESXi servers. Notes will also be supplied for doing the same thing with Oracle's VirtualBox on a Windows 10 PC. Although there are some VMware-specific and VirtualBox-specific steps, they are very few and the majority of this documentation will work for other virtualization platforms or even a direct installation onto a physical machine (i.e. a bare-metal install).

    This document will also cover some custom scripts to help automate tasks such as backing up, automatically growing the file system when free space is low, etc.

    Tools utilized in this process




    Helpful links

    The list below contains sources of information that helped me configure this system, as well as some places that might be helpful later on as this process continues.




    Assumptions

    This documentation makes use of some very specific information that will most likely be different for each person / location. This variable data is noted in this section and highlighted in red throughout the document as a reminder that you should plug in your own value rather than actually using these "place-holder" values.

    Under no circumstances should you use the actual values listed below. They are place-holders for the real thing. Treat this as a checklist of values you need to have answered before you start the install process.

    The values below (highlighted in RED in the original formatting) are the ones you need to substitute throughout this tutorial for use in your environment.


    • Ubuntu Server name: srv-ubuntu
    • Internet domain: mydomain.com
    • Ubuntu Server IP address: 192.168.107.2
    • Ubuntu Server IP subnet mask: 255.255.255.0
    • Ubuntu Server IP gateway: 192.168.107.1
    • Internal DNS Server 1: 192.168.107.212
    • Internal DNS Server 2: 192.168.107.213
    • External DNS Server 1: 8.8.8.8
    • Ubuntu Admin ID: administrator
    • Ubuntu Admin Password: myadminpass
    • Email Server (remote): 192.168.107.25
    • Windows Share ID: myshare
    • Windows Share Password: mysharepass


    It is also assumed that the reader knows how to use the vi editor. If not, you will need to beef up your skill set or use a different editor in its place. The vim-nox package that is installed later includes "vimtutor", which is a good place to learn how to use the vi editor.
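    For reference, here is a minimal sketch of how to launch the tutorial and the bare-minimum vi survival commands (run it after vim-nox is installed later in this guide):
    Code:
    vimtutor    ## interactive tutorial included with vim
    ## Inside vi/vim:  i = insert mode, {ESC} = back to command mode
    ##                 :wq = save and quit, :q! = quit without saving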
    Last edited by LHammonds; May 1st, 2020 at 08:30 PM. Reason: Updated to match author site

  2. #2

    Re: How to install and configure an Ubuntu Server 18.04 LTS

    Analysis and Design

    Ubuntu Server Long-Term Support (LTS) is a great choice for companies because it is a solid operating system that happens to be free. If professional support is needed, there is an option to buy it for the LTS versions of the operating system.

    The biggest decision in configuring Ubuntu is how the hard drive space is sliced up (partitioned). This documentation will focus on partitioning the drives in a way that allows for growth depending on what is needed for the specific application.

    The following design allows for dynamic growth and fine-tuning if need be. Being caught off guard by a scenario where space has filled up and there is no immediate option other than deleting files is never a good thing. Long-term life and growth of the system, as well as budgeting concerns, have to be taken into consideration.

    Isolating the root volume to mainly just static data that will not grow much over time is the central concern. Pushing the other folders into their own volumes will be done so their dynamic growth will not affect the root partition. Filling up the root volume on a *nix system is a very bad thing and should be avoided at all costs. The file systems will also not take up 100% of the logical volume. This will allow the file systems (through automated scripts) to grow as needed and give the administrators some time to add more drives if necessary or shrink other volumes to get more space.

    The volumes will initially be sliced up as follows:


    • boot - This will remain static in size. It is also the only space residing outside the Logical Volume Manager (LVM)
    • root volume - Operating system and everything else which should remain fairly static.
    • home volume - This is where personal files will be stored but likely not be used in most server configurations.
    • tmp volume - This location will be used for temporary storage. Size should be adjusted to match however it is being used.
    • usr volume - This will contain mostly static data and should not grow unexpectedly.
    • var volume - This is the app/database/log storage and will continue to grow over time.
    • srv volume - This will contain the files stored in the Samba share.
    • opt volume - This will contain specific software you add but may not be utilized at all depending on configuration.
    • bak volume - This will contain a local backup of the server/applications/data. So space needs to be about double the size of your application data (typically double the /var and /opt size).
    • Offsite Storage - This will be handled elsewhere but will be mounted on this server.

    The partitions will be increased later as needed but will start off with a minimum size.

    To get a good idea of the initial hard drive layout and to understand the process better, here is a graphical representation of the initial design for the server:



    These numbers will be used for the initial build of the system:

    boot = 500 MB
    root = 2 GB
    home = 0.2 GB
    tmp = 0.5 GB
    usr = 2.0 GB
    var = 2.0 GB
    srv = 0.2 GB
    opt = 0.2 GB
    bak = 0.5 GB

    Important information

    • When the logical volumes and file systems are initially created, the file systems consume all of the allocated space, which means the file system size will initially equal the logical volume size. The partition sizes above are artificially small for that reason. They will later be modified so that each logical volume is larger than its file system, giving the file system room to expand when needed in a safe and automated manner.
    • If you want, you can initially allocate a larger disk such as a 50 GB drive rather than adding the 2nd and 3rd disk as noted in these steps. These are just examples on how to manage and expand your storage when needed.
    • The /tmp folder is strictly temporary. By default, each time the server reboots, this folder is deleted and re-created.
    • The /bak folder will retain the most recent backup and is considered the "local" copy of the backup.


    VMware Virtual Machine Settings

    • Configuration: Custom
    • Name: srv-ubuntu
    • Datastore: DS3400-LUN0
    • Virtual Machine Version: 8
    • Guest Operating System: Linux, Version: Ubuntu Linux (64-bit)
    • Number of virtual processors: 1
    • Memory Size: 1024 MB
    • Number of NICs: 1
    • NIC 1: VM Network
    • Adapter: E1000, Connect at Power On: Checked
    • SCSI controller: LSI Logic Parallel
    • Select a Disk: Create a new virtual disk
    • Create a Disk: 25 GB, No thin provisioning, No cluster features, Store with the virtual machine
    • Advanced Options: Virtual Device Node = SCSI (0:0)
    • Remove Floppy Drive
    • Mount CD/DVD Drive to Ubuntu ISO (ubuntu-18.04-server-amd64.iso). Make sure CD/DVD is set to Connect at power on
    • Set boot options to Force BIOS Setup so you can set CDROM to boot before the Hard Disk

    VirtualBox Virtual Machine Settings

    • Name: srv-ubuntu
    • Operating System: Linux
    • Version: Ubuntu (64 bit)
    • Memory: 1024 MB
    • Check - Start-up Disk
      - Create new hard disk
      - VMDK
      - Dynamically allocated
      - Size: 25 GB
    • Select srv-ubuntu and click Settings (CTRL+S)
      - System, Processor, Enable PAE/NX
      - Network, Attached to: Bridged Adapter, Advanced, Adapter Type: Intel PRO/1000 MT Server
      - Storage, IDE Controller, Choose a virtual CD/DVD disk file, ubuntu-18.04-server-amd64.iso

    Install PuTTY

    When working at the virtual machine console, the response time for screen refreshes can be painfully slow when viewing man (manual) pages or navigating in vi (text editor). Using PuTTY via SSH is a far better solution for your Ubuntu console because it handles screen draws much faster when scrolling and allows copying and pasting text between windows.

    For example, selecting and copying a command in this document and then right-clicking in the PuTTY window will paste the command and have it ready to execute. Any text/lines highlighted with the mouse will be automatically copied into clipboard memory.

    Download the portable edition and run the install...except it does not really "install" like a normal program; it simply extracts to a specified folder and will run from that folder, even if you put it on a USB stick and carry it over to a new computer (it requires no install to run and thus leaves no footprint on your system).


    1. Start PuTTY
    2. Under Window - Translation - Remote character set, select UTF-8
    3. Type the following and click the Save button:
      Host Name: SRV-Ubuntu (or the IP such as 192.168.107.2)
      Port: 22
      Connection type: SSH
      Saved Sessions: SRV-Ubuntu
    4. Now all you have to do is double-click on the session and it will connect to your server (when online).
    Last edited by LHammonds; September 6th, 2019 at 08:42 PM. Reason: Updated to match author site

  3. #3

    Re: How to install and configure an Ubuntu Server 18.04 LTS

    Install Ubuntu Server

    NOTE: During the setup process throughout this entire document, most commands will require "sudo" as a prefix. However, this document will be using "sudo su" to temporarily gain root privileges so that subsequent commands will work without the need for the "sudo" prefix.
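    For example, a typical session looks like this (the prompt characters are only illustrative):
    Code:
    sudo su    ## become root; the prompt changes from $ to #
    exit       ## return to your normal user when finished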


    1. Power on the Virtual Machine (VM)
    2. Press {ENTER} to accept English
    3. Select Install Ubuntu Server {ENTER}
    4. Press {ENTER} to accept English
    5. Press {ENTER} to accept United States
    6. Press {ENTER} to accept do not detect keyboard layout
    7. Press {ENTER} to accept English (US)
    8. Press {ENTER} to accept English (US)
    9. Type srv-ubuntu {ENTER} (this is your hostname)
    10. Type Administrator, {ENTER} for the full name
    11. Press {ENTER} to accept the default of the lowercase name of administrator
    12. Type myadminpass, {ENTER}, myadminpass, {ENTER}
    13. Press {ENTER} to accept detected time zone (America/Chicago)
    14. Select Manual {ENTER}
    15. Select SCSI3 (0,0,0) (sda) - 26.8 GB VMware Virtual disk {ENTER}
    16. Select Yes to create new empty partition table, {ENTER}
    17. Select pri/log 26.8 GB FREE SPACE {ENTER}
    18. Select Create a new partition {ENTER}
    19. Type 500MB, {ENTER} (NOTE: This will be the /boot partition)
    20. Select Primary {ENTER}
    21. Select Beginning {ENTER}
    22. Select Use as: Ext4 journaling file system {ENTER}
    23. Select Ext2 file system {ENTER}
    24. Select Mount point: / {ENTER}
    25. Select /boot - static files of the boot loader {ENTER}
    26. Select Label: and type boot and press {ENTER}
    27. Select Bootable flag: off {ENTER} (NOTE: This toggles it on)
    28. Select Done setting up the partition {ENTER}
    29. Select Configure the Logical Volume Manager {ENTER}
    30. Select Yes to write change to disks and configure LVM, {ENTER}
    31. Select Create volume group {ENTER}
    32. Type LVG {ENTER}
    33. Select /dev/sda free #1 (26343MB; FREE SPACE), {SPACEBAR}, {ENTER}
    34. Select Yes to write change to disks and configure LVM, {ENTER}
    35. At this point, you need to loop through this menu, selecting the same options but with different values. Here is the short list of options/values:
      • Create logical volume, select LVG, type root, type 2G
      • Create logical volume, select LVG, type usr, type 2G
      • Create logical volume, select LVG, type var, type 2G
      • Create logical volume, select LVG, type tmp, type 0.5G
      • Create logical volume, select LVG, type bak, type 0.5G
      • Create logical volume, select LVG, type srv, type 0.2G
      • Create logical volume, select LVG, type opt, type 0.2G
      • Create logical volume, select LVG, type home, type 0.2G
    36. Select Finish {ENTER}
    37. At this point, you need to loop through this menu, selecting the same options but with different values. Here is the short list of options/values:
      • Directly under LVM VG LVG, LV root, Ext4 journaling file system, Mount point: / - the root file system, Label: root
      • Directly under LVM VG LVG, LV usr, Ext4 journaling file system, Mount point: /usr - static data, Label: usr
      • Directly under LVM VG LVG, LV var, Ext4 journaling file system, Mount point: /var - variable data, Label: var
      • Directly under LVM VG LVG, LV tmp, Ext4 journaling file system, Mount point: /tmp - temporary files, Label: tmp
      • Directly under LVM VG LVG, LV bak, Ext4 journaling file system, Mount point: /bak - Enter manually, Label: bak
      • Directly under LVM VG LVG, LV srv, Ext4 journaling file system, Mount point: /srv - data for services, Label: srv
      • Directly under LVM VG LVG, LV opt, Ext4 journaling file system, Mount point: /opt - add-on application, Label: opt
      • Directly under LVM VG LVG, LV home, Ext4 journaling file system, Mount point: /home - user home directories, Label: home
    38. Here is what the screen looks like at this point: Partitions
    39. Select Finish partitioning and write changes to disk {ENTER}
    40. Here is what the screen looks like at this point: Partitions
    41. Select Yes to write changes to disk, {ENTER}
    42. Press {ENTER} to accept a blank line for the HTTP proxy
    43. Select No automatic updates, {ENTER} (* We will schedule a script for this later *)
    44. Set the following and press {ENTER} to continue:
      Uncheck - DNS server
      Uncheck - LAMP server
      Uncheck - Mail server
      Uncheck - PostgreSQL database
      Uncheck - Print server
      Uncheck - Samba file server
      Check - OpenSSH server (allows us to use PuTTY after installation to connect to the server)
    45. Select Yes, {ENTER} to install GRUB boot loader to the master boot record
    46. Installation Complete - from the VM menu, select VM --> Edit Settings and select CD/DVD Drive 1 and change to "Client Device" which will effectively remove the ISO. Now press {ENTER} to reboot.
    Last edited by LHammonds; September 6th, 2019 at 08:43 PM. Reason: Updated to match author site

  4. #4

    Re: How to install and configure an Ubuntu Server 18.04 LTS

    Initial Configurations

    1. At the login prompt, login with your administrator account (administrator / myadminpass)
    2. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    3. If you made a mistake labeling any of the partitions, you can list them and make changes with the following example commands:
      Code:
      blkid | grep LABEL
      e2label /dev/sda1 boot
      e2label /dev/mapper/LVG-bak bak
      e2label /dev/mapper/LVG-opt opt
      e2label /dev/mapper/LVG-srv srv
      e2label /dev/mapper/LVG-tmp tmp
      e2label /dev/mapper/LVG-usr usr
      e2label /dev/mapper/LVG-var var
      e2label /dev/mapper/LVG-root root
      e2label /dev/mapper/LVG-home home
    4. Edit the network configuration file:
      Code:
      vi /etc/netplan/01-netcfg.yaml
    5. Change the Ethernet interface: (We need to change it from using DHCP to a static IP)
      From:
      Code:
      network:
        version: 2
        renderer: networkd
        ethernets:
          enp0s3:
            dhcp4: yes
      To:
      Code:
      network:
        version: 2
        renderer: networkd
        ethernets:
          enp0s3:
            addresses: [192.168.107.2/24]
            gateway4: 192.168.107.1
            nameservers:
              addresses: [192.168.107.212,192.168.107.213,1.1.1.1,8.8.8.8]
      NOTE #1: The above YAML format is extremely sensitive to spaces. Each indentation level needs to be exactly 2 spaces. Visit NetPlan.io for more information. (A sketch for validating the file before applying it appears after this list.)

      NOTE #2: You may need to manually remove the DHCP record (lease) associated to this Ubuntu server from your DHCP server so the correct IP can be found by other machines on the network. This can be avoided by temporarily configuring the VM Network Adapter connection to be "Host Only Network" instead of "VM Network" so the server is isolated during setup...at least until you reach the testing of the static IP below.

      NOTE #3: You might also need to manually add a HOST(A) record to your Windows DNS server (for srv-ubuntu.mydomain.com and srv-ubuntu.work.mydomain.com)
    6. Restart the network by typing the following:
      Code:
      netplan apply
    7. Sanity check! Type ip addr (or ifconfig if the net-tools package is installed) and make sure the settings are correct. Then type ping www.google.com or similar and see if ping works.
    8. Make sure any file created by the root account is set to only be accessible to root by default:
      Code:
      echo 'umask 0077' >> ~/.bashrc
    9. Disable command history for the root user on production systems to prevent hackers from seeing the commands you have typed in the past which might expose passwords:
      Code:
      echo 'set +o history' >> ~/.bashrc
    10. Make sure menus will correctly draw lines instead of displaying ascii codes:
      Code:
      echo 'export NCURSES_NO_UTF8_ACS=1' >> ~/.bashrc
    11. Shutdown and power off the server by typing shutdown -P now
    12. From this point forward, you can use PuTTY rather than the VM console for better performance, the ability to scroll, etc.
    13. In VM menu, select VM --> Snapshot --> Take Snapshot. Give it a name like STEP 1 and description of Ubuntu Server 18.04 LTS, Clean install, Static IP: 192.168.107.2 and click OK
    14. NOTE: Please remember to delete all snapshots when you are 100% done with the setup and happy with the results. Not deleting snapshots can secretly consume all your storage space over time. These snapshots are only there so you can revert back to a prior step quickly and easily without the need to completely re-install.
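    As mentioned in NOTE #1 above, netplan is very picky about YAML indentation. Here is a hedged sketch for checking the configuration before applying it; the generate subcommand parses the YAML and reports errors without touching the running network:
    Code:
    netplan --debug generate    ## parse the YAML and report any syntax/indentation errors
    netplan apply               ## apply only after generate succeeds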


    Add more SUDO users and lock down SSH

    The root user is locked by default which is good. When you installed the server, you created the administrator account which can run sudo commands as root. Let's add one more user that can use SUDO and ensure only these user accounts can login via SSH to the server.

    Create a new user called "newadmin"
    Code:
    adduser newadmin
    Now add them to the SUDO group which will allow them to use the SUDO command.
    Code:
    usermod -aG sudo newadmin
    Now modify SSH service to only allow these 2 users to login via SSH.

    Code:
    vi /etc/ssh/sshd_config
    Add the following line anywhere in the file:
    Code:
    AllowUsers administrator newadmin
    Reload the SSH config for the change to take effect:
    Code:
    systemctl reload sshd
    Now only administrator and newadmin can login to the server via SSH. If you create another user, that user will not be able to login even with the correct password. It will just say "Access denied."

    The firewall and fail2ban sections later on will further increase SSH security.

    You can also use SSH key-based authentication and disable user/password authentication.
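    As a hedged sketch of that approach (the key type, file names, and client-side steps are examples, not part of the original guide), you would generate a key pair on your workstation, copy the public key to the server, and only then disable password logins in /etc/ssh/sshd_config:
    Code:
    ## On your workstation (not on the server):
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
    ssh-copy-id -i ~/.ssh/id_ed25519.pub administrator@192.168.107.2
    ## On the server, after confirming that key-based login works, set this in /etc/ssh/sshd_config:
    ##   PasswordAuthentication no
    ## then reload the service:
    systemctl reload sshd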

    Operating System Patches


    1. Start the Ubuntu server and connect using PuTTY.
    2. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    3. Install the patches by typing the following commands:
      Code:
      apt update
      apt upgrade
      apt dist-upgrade
    4. Shutdown and power off the server by typing shutdown -P now
    5. In VM menu, select VM --> Snapshot --> Take Snapshot. Give it a name like STEP 2 and description of Ubuntu Server 18.04 LTS, Patches applied, Static IP: 192.168.107.2. The Snapshot Manager should now have a nice hierarchy of snapshots (STEP 1 --> STEP 2 --> You are here)
    Last edited by LHammonds; September 6th, 2019 at 08:50 PM. Reason: Match the author site

  5. #5

    Re: How to install and configure an Ubuntu Server 18.04 LTS

    Volume / Disk Management

    Earlier, it was mentioned that the partition design needed to have some breathing room in each volume so that the file system inside can grow as needed. When the volumes were created during setup, the file systems were automatically expanded to fill the entire volume. We will now correct this by adding more "drives" to the system and then extend each logical volume to gain some breathing space.

    Most logical volumes will be increased in size and then the file systems contained in them will be increased but not to the maximum amount.

    This design will allow growth when needed and ensure that there will be time to add additional hard drives BEFORE they are needed which will keep the administrators from being stuck between a rock and a hard place! Nobody wants to lose a job because somebody did not estimate growth correctly or the budget did not allow for large capacity when the system first rolled out.

    We started off with a 25 GB drive to hold these volumes and the changes below will use 22 GB.

    Here are the planned adjustments for each logical volume:

    root = 2 GB to 3 GB
    home = 0.2 GB to 1 GB
    tmp = 0.5 GB to 2 GB
    usr = 2.0 GB to 4 GB
    var = 2.0 GB to 3 GB
    srv = 0.2 GB to 2 GB
    opt = 0.2 GB to 2 GB
    bak = 0.5 GB to 4 GB

    Here are the planned adjustments for each file system:

    root = 2.0 GB (no change)
    home = 0.2 GB to 0.5 GB
    tmp = 0.5 GB to 1.0 GB
    usr = 2.0 GB to 3.0 GB
    var = 2.0 GB (no change)
    srv = 0.2 GB to 1.0 GB
    opt = 0.2 GB to 1.0 GB
    bak = 0.5 GB to 3.0 GB

    Here is a graphical representation of what will be accomplished:



    If we were to type df -h right now, we should see something like this:

    Code:
    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    udev                  464M     0  464M   0% /dev
    tmpfs                  99M  836K   98M   1% /run
    /dev/mapper/LVG-root  1.8G  916M  816M  53% /
    /dev/mapper/LVG-usr   1.8G  837M  895M  49% /usr
    tmpfs                 493M     0  493M   0% /dev/shm
    tmpfs                 5.0M     0  5.0M   0% /run/lock
    tmpfs                 493M     0  493M   0% /sys/fs/cgroup
    /dev/sda1             461M  141M  297M  33% /boot
    /dev/mapper/LVG-var   1.8G  398M  1.4G  23% /var
    /dev/mapper/LVG-home  179M  1.6M  164M   1% /home
    /dev/mapper/LVG-tmp   453M  2.3M  423M   1% /tmp
    /dev/mapper/LVG-bak   453M  2.3M  423M   1% /bak
    /dev/mapper/LVG-srv   179M  1.6M  164M   1% /srv
    /dev/mapper/LVG-opt   179M  1.6M  164M   1% /opt
    tmpfs                  99M     0   99M   0% /run/user/1000
    To get a list of volume paths to use in the next commands, type lvscan to show your current volumes and their sizes.

    Code:
    # lvscan
      ACTIVE            '/dev/LVG/root' [<1.86 GiB] inherit
      ACTIVE            '/dev/LVG/usr' [<1.86 GiB] inherit
      ACTIVE            '/dev/LVG/var' [<1.86 GiB] inherit
      ACTIVE            '/dev/LVG/tmp' [476.00 MiB] inherit
      ACTIVE            '/dev/LVG/bak' [476.00 MiB] inherit
      ACTIVE            '/dev/LVG/srv' [188.00 MiB] inherit
      ACTIVE            '/dev/LVG/opt' [188.00 MiB] inherit
      ACTIVE            '/dev/LVG/home' [188.00 MiB] inherit
    Type the following to set the exact size of the volume by specifying the end-result size you want:

    Code:
    lvextend -L3G /dev/LVG/root
    lvextend -L1G /dev/LVG/home
    lvextend -L2G /dev/LVG/tmp
    lvextend -L4G /dev/LVG/usr
    lvextend -L3G /dev/LVG/var
    lvextend -L2G /dev/LVG/srv
    lvextend -L2G /dev/LVG/opt
    lvextend -L4G /dev/LVG/bak
    or you can grow each volume by the specified amount (the number after the plus sign):
    Code:
    lvextend -L+1G /dev/LVG/root
    lvextend -L+0.8G /dev/LVG/home
    lvextend -L+1.5G /dev/LVG/tmp
    lvextend -L+2G /dev/LVG/usr
    lvextend -L+1G /dev/LVG/var
    lvextend -L+1.8G /dev/LVG/srv
    lvextend -L+1.8G /dev/LVG/opt
    lvextend -L+3.5G /dev/LVG/bak
    To see the new sizes, type lvscan
    Code:
    # lvscan
      ACTIVE            '/dev/LVG/root' [3.00 GiB] inherit
      ACTIVE            '/dev/LVG/usr' [4.00 GiB] inherit
      ACTIVE            '/dev/LVG/var' [3.00 GiB] inherit
      ACTIVE            '/dev/LVG/tmp' [2.00 GiB] inherit
      ACTIVE            '/dev/LVG/bak' [4.00 GiB] inherit
      ACTIVE            '/dev/LVG/srv' [2.00 GiB] inherit
      ACTIVE            '/dev/LVG/opt' [2.00 GiB] inherit
      ACTIVE            '/dev/LVG/home' [1.00 GiB] inherit
    The last thing to do now is the actual growth of the file systems. We want to grow the existing file systems but only to a certain amount so we do not take up all the space in the volume. We want room for growth in the future so we have time to order and install new drives when needed.
    Code:
    resize2fs /dev/LVG/root 2G
    resize2fs /dev/LVG/home 500M
    resize2fs /dev/LVG/tmp 1G
    resize2fs /dev/LVG/usr 3G
    resize2fs /dev/LVG/var 2G
    resize2fs /dev/LVG/srv 1G
    resize2fs /dev/LVG/opt 1G
    resize2fs /dev/LVG/bak 3G
    If we need to increase space in /var at a later point, we can issue the following command without any downtime (we will automate this in a nifty script later):

    Code:
    resize2fs /dev/LVG/var 2560M
    We could continue to increase this particular file system all the way until we reach the limit of the volume which is 3 GB at the moment.
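    To check how much room is left before hitting that limit, compare the volume size to the file system size (an illustrative sketch; your output will differ):
    Code:
    lvs --units g /dev/LVG/var    ## size of the logical volume (the ceiling)
    df -h /var                    ## current size of the file system inside it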

    If we were to type df -h right now, we should see something like this:

    Code:
    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    udev                  464M     0  464M   0% /dev
    tmpfs                  99M  836K   98M   1% /run
    /dev/mapper/LVG-root  2.0G  916M  953M  50% /
    /dev/mapper/LVG-usr   3.0G  837M  2.0G  30% /usr
    tmpfs                 493M     0  493M   0% /dev/shm
    tmpfs                 5.0M     0  5.0M   0% /run/lock
    tmpfs                 493M     0  493M   0% /sys/fs/cgroup
    /dev/sda1             461M  141M  297M  33% /boot
    /dev/mapper/LVG-var   2.0G  398M  1.5G  22% /var
    /dev/mapper/LVG-home  481M  2.3M  453M   1% /home
    /dev/mapper/LVG-tmp   984M  2.8M  932M   1% /tmp
    /dev/mapper/LVG-bak   2.9G  3.1M  2.8G   1% /bak
    /dev/mapper/LVG-srv   989M  2.7M  940M   1% /srv
    /dev/mapper/LVG-opt   989M  2.7M  940M   1% /opt
    tmpfs                  99M     0   99M   0% /run/user/1000
    Remember, df -h will tell you the size of the file systems and lvscan will tell you the size of the volumes that the file systems live in.

    TIP: If you want to see everything in a specific block size, such as everything showing up in megabytes, you can use df --block-size=M

    Swap File Management

    If you do not specify a swap partition during the initial setup, a swap file system will be created automatically and point to a file called "swapfile" in the root filesystem.

    Type swapon --summary to see the status of the swap system:

    Code:
    # swapon --summary
    Filename                                Type            Size    Used    Priority
    /swapfile                               file            88324   0       -2
    Specific needs vary but the current rule of thumb for Linux servers is to have a swap file 1/2 the size of the amount of RAM in your system.
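    As a quick sketch (assuming the standard free utility from procps is present), you can calculate half of the installed RAM to use as a starting point:
    Code:
    free -m | awk '/^Mem:/ {printf "Suggested swap size: %dM\n", $2/2}'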

    Let's assume we want a 1GB swap file and we want it on the /opt partition. Here are the steps to setup this scenario:

    1. Make sure you have space on /opt by typing df -h /opt
    2. Run the following commands for the new swap file:
      Code:
      fallocate --length 1G /opt/swapfile1g
      chown root:root /opt/swapfile1g
      chmod 600 /opt/swapfile1g
      mkswap /opt/swapfile1g
      swapon /opt/swapfile1g
    3. Look at the current swap settings:
      Code:
      # swapon --summary
      Filename                                Type            Size    Used    Priority
      /swapfile                               file            88324   0       -2
      /opt/swapfile1g                         file            1048500 0       -3
    4. Now disable the old swap file using these commands:
      Code:
      swapoff /swapfile
      rm /swapfile
    5. Remove the old swapfile from /etc/fstab and add the new one.
      Code:
      vi /etc/fstab
      Remove:
      Code:
      /swapfile                                 none            swap    sw              0       0
      Add:
      Code:
      /opt/swapfile1g     none            swap    sw              0       0
    6. Look at the current swap settings again:
      Code:
      # swapon --summary
      Filename                                Type            Size    Used    Priority
      /opt/swapfile1g                         file            1048500 0       -2
    7. Reboot the server and run the summary command again to verify that your /etc/fstab changes worked.


    Adding More Hard Drives

    For this exercise, we will add two additional hard drives. The addition of these drives is NOT necessary and this section can be skipped. The extra drives are only there to demonstrate how to add additional hard drives to the system.

    Adding more space in VMware or VirtualBox is easy. In this exercise, each drive will be added as a separate disk just as if we were to add a physical drive to a physical server.

    vSphere Steps
    1. Shutdown and power off the server by typing shutdown -P now
    2. In the vSphere client, right-click the Virtual Machine and choose Edit Settings.
    3. On the hardware tab, click the Add button and select Hard Disk. Click Next, choose "Create a new virtual disk", click Next, set the size to 10 GB, click Next, Next, Finish.
    4. Add another 15 GB disk using the same steps above and click OK to close the settings and allow VMware to process the changes.


    VirtualBox Steps
    1. Shutdown and power off the server by typing shutdown -P now
    2. In the VirtualBox Manager, select the Virtual Machine and click Settings.
    3. On the Storage tab, select Controller: SATA and click the Add new storage attachment button and select Add Hard Disk. Click Create new disk, VDI, Next, Fixed size, Next, give it a Name/Location/Size of 10 GB, click Create.
    4. Add another 15 GB disk using the same steps above and click OK to close the settings and allow VirtualBox to process the changes.


    Collect information about the newly added drives.

    1. Start the server and connect using PuTTY.
    2. At the login prompt, login with your administrator account (administrator / myadminpass) and then temporarily grant yourself super user privileges by typing sudo su
    3. Make note of how much "Free PE / Size" you have in your logical volume group by typing vgdisplay. When done adding the new drives, the free space listed here will increase by the amount added.
    4. Type pvdisplay which should show something similar to this:
      Code:
        --- Physical volume ---
        PV Name               /dev/sda5
        VG Name               LVG
        PV Size               9.53 GiB / not usable 0
        Allocatable           yes
        PE Size               4.00 MiB
        Total PE              2440
        Free PE               157
        Allocated PE          2283
        PV UUID               g8IVWI-sF3A-aWAp-0KSJ-vmJE-SOkL-00R7DN
      The important bits of info here are the PV Name and VG Name for our existing configuration.
    5. Type fdisk -l which should show something similar to this (however I abbreviated it to show just the important parts):
      Code:
      Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
      Device     Boot  Start      End  Sectors  Size Id Type
      /dev/sda1  *      2048   976895   974848  476M 83 Linux
      /dev/sda2       978942 20969471 19990530  9.5G  5 Extended
      /dev/sda5       978944 20969471 19990528  9.5G 8e Linux LVM
      
      Disk /dev/sdb: 12 GiB, 12884901888 bytes, 25165824 sectors
      Disk /dev/sdc: 12 GiB, 12884901888 bytes, 25165824 sectors
      The important bits of info here are the device paths for the new drives (/dev/sdb, /dev/sdc).

    Prepare the first drive (/dev/sdb) to be used by the LVM

    Type the following:
    Code:
    fdisk /dev/sdb
    n (Create New Partition)
    p (Primary Partition)
    1 (Partition Number)
    {ENTER} (use default for first cylinder)
    {ENTER} (use default for last cylinder)
    t (Change partition type)
    8e (Set to Linux LVM)
    p (Preview how the drive will look)
    w (Write changes)
    Prepare the second drive (/dev/sdc) to be used by the LVM

    Do the exact same steps as above but start with fdisk /dev/sdc
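    If you prefer to avoid the interactive prompts, the same single Linux LVM partition can also be created non-interactively with sfdisk. This is an alternative technique, not part of the original steps, so treat it as a hedged sketch:
    Code:
    echo 'type=8e' | sfdisk /dev/sdc    ## one primary partition spanning the disk, type 8e (Linux LVM)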

    Create physical volumes using the new drives

    If we type fdisk -l, we now see /dev/sdb1 and /dev/sdc1 which are Linux LVM partitions.

    Type the following to create physical volumes:
    Code:
    pvcreate /dev/sdb1
    pvcreate /dev/sdc1
    Now add the physical volumes to the volume group (LVG) by typing the following:
    Code:
    vgextend LVG /dev/sdb1
    vgextend LVG /dev/sdc1
    You can run the vgdisplay command to see that the "Free PE / Size" has increased.

    Now that the space of both drives has been added to the logical volume group called LVG, we can allocate that space to grow the logical volumes and then the file systems as needed.
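    For example, here is a hedged sketch of what that would look like later on (the 5 GB and 4 GB figures are hypothetical, not part of this build):
    Code:
    vgs LVG                        ## confirm the free space in the volume group
    lvextend -L+5G /dev/LVG/var    ## grow the logical volume by 5 GB
    resize2fs /dev/LVG/var 4G      ## grow the file system, but still leave room inside the volume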

    Shutdown and power off the server by typing shutdown -P now

    In VM menu, select VM --> Snapshot --> Take Snapshot. Give it a name like STEP 3 and description of Ubuntu Server 18.04 LTS, Storage space adjusted, Static IP: 192.168.107.2. The Snapshot Manager should now have a nice hierarchy of snapshots (STEP 1 --> STEP 2 --> STEP 3 --> You are here)
    Last edited by LHammonds; September 6th, 2019 at 09:09 PM. Reason: Match the author site

  6. #6

    Re: How to install and configure an Ubuntu Server 18.04 LTS

    Software Configurations

    1. Turn on the server and connect using PuTTY.
    2. At the login prompt, login with your administrator account (administrator / myadminpass)
    3. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    4. At the $ prompt, type the following to install various utilities which are described below:
      Code:
      apt -y install vim-nox p7zip-full htop fsarchiver sendemail dialog
      • vim-nox for use instead of the built-in vi editor.
      • p7zip-full is a 7-zip archive utility.
      • htop is a CPU/RAM monitoring utility.
      • fsarchiver is a backup utility.
      • sendemail is a command-line email utility.
      • dialog is used to build menu selections.
    5. Change the default shell from dash to bash. Type the following to see that it currently points /bin/sh to dash:
      Code:
      ls -l /bin/sh
      Now change it to bash and answer No when prompted:
      Code:
      dpkg-reconfigure dash
      Type the following to see that it now points /bin/sh to bash:
      Code:
      ls -l /bin/sh
    6. It might be necessary to remove AppArmor to avoid problems by typing the following:
      Code:
      /etc/init.d/apparmor stop
      /etc/init.d/apparmor teardown
      update-rc.d -f apparmor remove
      apt-get remove apparmor
    7. Type vi /etc/hosts and add your email server:
      Code:
      192.168.107.25    srv-mail
    8. Test the ability to send email by typing:
      Code:
      sendemail -f root@myserver -t MyTargetAddress@MyDomain.com -u "This is the Subject" -m "This is the body of the email" -s srv-mail:25
    9. If you are like me and like indents to be 2 spaces and not a tab character, then edit the vim-nox preference file:
      Code:
      vi ~/.vimrc
      Add the following:
      Code:
      set tabstop=2
      set shiftwidth=2
      set expandtab


    VMware Tools

    Starting with Ubuntu 16.04, open-vm-tools is installed automatically. We should not need to perform any actions in this section.
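    If you want to verify it anyway, here is a quick sketch (package and service names as shipped by Ubuntu):
    Code:
    dpkg -l open-vm-tools            ## confirm the package is installed
    systemctl status open-vm-tools   ## confirm the service is running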

    VirtualBox Guest Additions - Installation

    The Guest Additions need to be installed if the VM is running on a VirtualBox host. This will ensure maximum performance in a virtual environment.

    1. Connect to the server using PuTTY.
    2. At the login prompt, login with your administrator account (administrator / myadminpass)
    3. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    4. You need to perform the following commands to fulfill the prerequisites:
      Code:
      apt install dkms
      reboot
    5. Connect to the server using PuTTY.
    6. At the login prompt, login with your administrator account (administrator / myadminpass)
    7. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    8. From the VirtualBox menu, click Devices, Install Guest Additions
    9. At the console, type the following:
      Code:
      mkdir -p /media/cdrom
      mount /dev/cdrom /media/cdrom
      /media/cdrom/VBoxLinuxAdditions.run
      umount /media/cdrom
    10. NOTE: The X Window System drivers will fail to load because this is a headless server with no GUI (which is OK)
    11. To see the status, stop or start the service, you can use these commands:
      Code:
      service vboxadd-service status
      service vboxadd-service stop
      service vboxadd-service start


    VirtualBox Guest Additions - Upgrading

    If VirtualBox is updated on the host machine, each VM also needs the upgraded Guest Additions.

    Mount the CD-ROM and run the installer just like above, then reboot after it is upgraded.

    1. Connect to the server using PuTTY.
    2. At the login prompt, login with your administrator account (administrator / myadminpass)
    3. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    4. From the VirtualBox menu, click Devices, Install Guest Additions
    5. At the console, type the following:
      Code:
      mkdir -p /media/cdrom
      mount /dev/cdrom /media/cdrom
      /media/cdrom/VBoxLinuxAdditions.run
      reboot


    VirtualBox Guest Additions - Uninstallation

    If a VM will be migrated from VirtualBox to something like VMware, the Guest Additions on the VM will need to be removed.

    1. Connect to the server using PuTTY.
    2. At the login prompt, login with your administrator account (administrator / myadminpass)
    3. At the $ prompt, temporarily grant yourself super user privileges by typing sudo su {ENTER} and then provide the administrator password (myadminpass).
    4. Type the following:
      Code:
      cd /opt/VBox*
      ./uninstall.sh
    Last edited by LHammonds; September 6th, 2019 at 09:10 PM. Reason: Match the author site

  7. #7

    Re: How to install and configure an Ubuntu Server 18.04 LTS

    Many of the sections below utilize BASH scripts as part of the solution/automation.

    To speed up installation for myself and others, the commands below will download and extract the base set of scripts I use as a foundation for all my servers. You can download all of them at once or skip this step and copy/paste the scripts one at a time throughout this tutorial.

    Code:
    cd /tmp
    wget https://files.hammondslegacy.com/linux/init-scripts-1804.tar.gz
    tar -xvf /tmp/init-scripts-1804.tar.gz -C /
    rm /tmp/init-scripts-1804.tar.gz
    NOTE: If you are leery about extracting the contents of the archive before seeing it, use the following command to peek inside before extracting:
    Code:
    tar -ztvf /tmp/init-scripts-1804.tar.gz
    The directory structure will be as follows after extraction:

    Code:
    /var/scripts/common
    /var/scripts/data
    /var/scripts/prod
    /var/scripts/test
    The "common" directory contains code that is commonly used in other scripts.
    The "data" directory contains stored information (currently, just backups of crontab schedule)
    The "prod" directory contains all production-ready scripts
    The "test" directory contains scripts that are under development or never need to be run in production-mode.

    Most of my scripts will import a file called "standard.conf" from the common script folder.

    /var/scripts/common/standard.conf
    Code:
    ## Global Variables ##
    Company="abc"
    TempDir="/tmp"
    LogDir="/var/log"
    ShareDir="/srv/samba/share"
    MyDomain="mydomain.com"
    AdminEmail="admin@${MyDomain}"
    ReportEmail="LHammonds <lhammonds@${MyDomain}>"
    BackupDir="/bak"
    OffsiteDir="/mnt/backup"
    OffsiteTestFile="${OffsiteDir}/offline.txt"
    ArchiveMethod="tar.7z"    ## Choices are tar.7z or tgz
    Hostname="$(hostname -s)"
    ScriptName="$0"
    ScriptDir="/var/scripts"
    MailFile="${TempDir}/mailfile.$$"
     
    ## Global Functions ##
     
    function f_sendmail()
    {
      ## Purpose: Send administrative email message.
      ## Parameter #1 = Subject
      ## Parameter #2 = Body
      sendemail -f "${AdminEmail}" -t "${ReportEmail}" -u "${1}" -m "${2}\n\nServer: ${Hostname}\nProgram: ${ScriptName}\nLog: ${LogFile}" -s srv-mail:25 1>/dev/null 2>&1
    }
     
    function f_sendusermail()
    {
      ## Purpose: Send end-user email message.
      ## Parameter #1 = To
      ## Parameter #2 = Subject
      ## Parameter #3 = Body
      sendemail -f "${AdminEmail}" -t "${1}" -u "${2}" -m "${3}" -s srv-mail:25 1>/dev/null 2>&1
    }
     
    function f_mount()
    {
      ## Mount the pre-configured remote share folder.
      ## NOTE: The Linux mount point should have a file called "offline.txt"
      mount -t cifs //srv-backup/MyShare ${OffsiteDir} --options nouser,rw,nofail,noexec,credentials=/etc/cifspw
    }
    
    function f_umount()
    {
      ## Dismount the remote share folder.
      ## NOTE: The unmounted folder should have a file called "offline.txt"
      umount ${OffsiteDir}
    }
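    Here is a minimal sketch of how a script can pull this file in and use one of its functions. The script name and log file are hypothetical examples, not part of the init-scripts archive:
    Code:
    #!/bin/bash
    ## Hypothetical test script: send a message using the shared config and functions.
    source /var/scripts/common/standard.conf
    LogFile="${LogDir}/${Company}-test.log"    ## f_sendmail expects LogFile to be defined
    f_sendmail "Test Subject" "Test body from ${Hostname}"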
    Here is a script to help automate the creation of this structure on the various servers if you do not use the "init-scripts" archive:

    setup-folders.sh
    Code:
    #!/bin/bash
    if [ ! -d /var/scripts/prod ]; then
      mkdir -p /var/scripts/prod
    fi
    if [ ! -d /var/scripts/test ]; then
      mkdir -p /var/scripts/test
    fi
    if [ ! -d /var/scripts/common ]; then
      mkdir -p /var/scripts/common
    fi
    if [ ! -d /var/scripts/data ]; then
      mkdir -p /var/scripts/data
    fi
    chown root:root -R /var/scripts
    chmod 0755 -R /var/scripts
    Last edited by LHammonds; May 2nd, 2020 at 12:37 AM. Reason: Match the author site

  8. #8

    Crontab Schedule

    Crontab Schedule

    The crontab schedule can be edited directly by typing "crontab -e" but that can be a bit dangerous. It is safer to edit a file and then load that file into the schedule. This allows backups of the schedule to be made. If there is ever a problem with the schedule, it can be re-loaded from a known-good copy, or at least put back the way it was before the last change. This requires the person doing the editing to always work with a copy of the schedule first.

    Here is an example crontab scheduling file for the root user:

    /var/scripts/data/crontab.root
    Code:
    ########################################
    # Name: Crontab Schedule for root user
    # Author: LHammonds
    ############# Update Log ###############
    # 2012-05-20 - LTH - Created schedule
    ########################################
    
    SHELL=/bin/sh
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    
    # Crontab SYNTAX:
    #       __________ Minute (0-59)
    #      / _________ Hour (0-23)
    #     / /  _______ Day Of Month (1-31)
    #    / /  /   ____ MONth (1-12)
    #   / /  /   /   _ Day Of Week (0-7) (Sun = 0 or 7)
    #  / /  /   /   /  -------------------------------------------------------------
    # m h dom mon dow  command <arguments> > /dev/null 2>&1
    #
    # Backup MySQL Server
    #
    0 23 * * * /var/scripts/prod/mysql-backup.sh > /dev/null 2>&1
    #
    # Backup MySQL Database On Demand
    #
    0-59 * * * * /var/scripts/prod/mysql-db-backup.sh > /dev/null 2>&1
    #
    # Daily checks for available space
    #
    0 1 * * * /var/scripts/prod/check-storage.sh root 500 100 > /dev/null 2>&1
    15 1 * * * /var/scripts/prod/check-storage.sh home 100 50 > /dev/null 2>&1
    30 1 * * * /var/scripts/prod/check-storage.sh tmp 100 50 > /dev/null 2>&1
    45 1 * * * /var/scripts/prod/check-storage.sh usr 100 50 > /dev/null 2>&1
    0 2 * * * /var/scripts/prod/check-storage.sh var 100 50 > /dev/null 2>&1
    15 2 * * * /var/scripts/prod/check-storage.sh srv 100 50 > /dev/null 2>&1
    30 2 * * * /var/scripts/prod/check-storage.sh opt 100 50 > /dev/null 2>&1
    45 2 * * * /var/scripts/prod/check-storage.sh bak 100 50 > /dev/null 2>&1
    #
    # Daily software upgrade check
    #
    0 3 * * * /var/scripts/prod/apt-upgrade.sh > /dev/null 2>&1
    30 3 * * * /var/scripts/prod/reboot-check.sh > /dev/null 2>&1
    Once the file is created, make sure appropriate permissions are set by typing the following:
    Code:
    chown root:root /var/scripts/data/crontab.root
    chmod 0600 /var/scripts/data/crontab.root
    To enable the root schedule using this file, type the following:

    Code:
    crontab -u root /var/scripts/data/crontab.root
    To disable the root schedule, type the following:
    Code:
    touch /tmp/deleteme
    crontab -u root /tmp/deleteme
    rm /tmp/deleteme
    If you need to modify the schedule, make a backup copy first. For example:

    Code:
    cp /var/scripts/data/crontab.root /var/scripts/data/2012-11-28-crontab.root
    vi /var/scripts/data/crontab.root
    (make your changes)
    crontab -u root /var/scripts/data/crontab.root
    Last edited by LHammonds; May 1st, 2020 at 06:39 PM. Reason: Match the author site

  9. #9

    Operator Scripts

    I like scripts to be as generic as possible when copying among multiple servers.

    I push the service start/stop specifics into their own scripts which can be unique to each server depending on what services need to be stopped/started.

    On a MySQL/MariaDB server, the scripts that start and stop services would just contain the "mysql" line and the other service controls that do not apply are commented out.

    NOTE: This script is custom-tailored to each server it is placed on to safely stop the services that are unique to it.
    /var/scripts/prod/servicestop.sh
    Code:
    #############################################
    ## Name          : servicestop.sh
    ## Version       : 1.0
    ## Date          : 2018-04-19
    ## Author        : LHammonds
    ## Compatibility : Ubuntu Server 16.04 thru 18.04 LTS
    ## Requirements  : None
    ## Purpose       : Stop primary services.
    ## Run Frequency : As needed
    ## Exit Codes    : None
    ################ CHANGE LOG #################
    ## DATE       WHO  WHAT WAS CHANGED
    ## ---------- ---- ----------------------------
    ## 2013-01-08 LTH  Created script.
    #############################################
    ## NOTE: Configure whatever services you need stopped here.
    echo "Stopping services..."
    #service vsftpd stop
    #service nagios stop
    #service apache2 stop
    service mysql stop
    sleep 1
    NOTE: This script is custom-tailored to each server it is placed on to start the services that are unique to it. Although the services are likely to auto-start with the server, this is mainly used if only restarting the services and not the entire server.
    /var/scripts/prod/servicestart.sh
    Code:
    #############################################
    ## Name          : servicestart.sh
    ## Version       : 1.0
    ## Date          : 2018-04-19
    ## Author        : LHammonds
    ## Compatibility : Ubuntu Server 16.04 thru 18.04 LTS
    ## Requirements  : None
    ## Purpose       : Start primary services.
    ## Run Frequency : As needed
    ## Exit Codes    : None
    ################ CHANGE LOG #################
    ## DATE       WHO  WHAT WAS CHANGED
    ## ---------- ---- ----------------------------
    ## 2018-04-19 LTH  Created script.
    #############################################
    ## NOTE: Add whatever services you need started here.
    echo "Starting services..."
    service mysql start
    #service apache2 start
    #service nagios start
    #service vsftpd start
    sleep 1
    The service restart, reboot and shutdown scripts can simply call the service stop and start scripts and should never need to be modified from default.

    NOTE: This script is generic enough that it should not need to be modified when deployed to any server.
    /var/scripts/prod/servicerestart.sh
    Code:
    #!/bin/bash
    #############################################
    ## Name          : servicerestart.sh
    ## Version       : 1.1
    ## Date          : 2018-04-19
    ## Author        : LHammonds
    ## Compatibility : Ubuntu Server 12.04 thru 18.04 LTS
    ## Requirements  : None
    ## Purpose       : Stop/Start primary services.
    ## Run Frequency : As needed
    ## Exit Codes    : None
    ################ CHANGE LOG #################
    ## DATE       WHO  WHAT WAS CHANGED
    ## ---------- ---- ----------------------------
    ## 2013-01-08 LTH  Created script.
    ## 2018-04-19 LTH  Split stop/start code into individual scripts.
    #############################################
    ## Import standard variables and functions. ##
    source /var/scripts/common/standard.conf
    clear
    ${ScriptDir}/prod/servicestop.sh
    ${ScriptDir}/prod/servicestart.sh
    /var/scripts/prod/reboot.sh
    Code:
    #!/bin/bash
    #############################################
    ## Name          : reboot.sh
    ## Version       : 1.3
    ## Date          : 2018-05-08
    ## Author        : LHammonds
    ## Compatibility : Ubuntu Server 12.04 thru 18.04 LTS
    ## Requirements  : Run as root
    ## Purpose       : Notify logged in users, stop services and reboot server.
    ## Run Frequency : As needed
    ## Parameters    :
    ##    1 = (Optional) Expected downtime in minutes.
    ## Exit Codes    : None
    ################ CHANGE LOG #################
    ## DATE       WHO  WHAT WAS CHANGED
    ## ---------- ---- ----------------------------
    ## 2013-01-07 LTH  Created script.
    ## 2017-12-18 LTH  Added logging.
    ## 2018-04-19 LTH  Various minor changes.
    ## 2018-05-08 LTH  Added broadcast message and loop function.
    #############################################
    
    ## Import standard variables and functions. ##
    source /var/scripts/common/standard.conf
    
    ## Define local variables.
    LogFile="${LogDir}/${Company}-reboot.log"
    DefaultDowntime=3
    
    #######################################
    ##            FUNCTIONS              ##
    #######################################
    
    function f_loop()
    {
      LoopCount=$1
      for LoopIndex in $(seq ${LoopCount} -1 1)
      do
        echo ${LoopIndex}
        sleep 1
      done
    } ## f_loop
    
    function f_showhelp()
    {
      echo -e "NOTE: Default expected downtime is ${DefaultDowntime} minutes and is optional.\n"
      echo -e "Usage : ${ScriptName} ExpectedDowntimeInMinutes\n"
      echo -e "Example: ${ScriptName} 5\n"
      exit
    } ## f_showhelp
    
    ## Requirement Check: Script must run as root user.
    if [ "$(id -u)" != "0" ]; then
      ## FATAL ERROR DETECTED: Document problem and terminate script.
      echo -e "\nERROR: Root user required to run this script.\n"
      echo -e "Type 'sudo su' to temporarily become root user.\n"
      exit
    fi
    
    #######################################
    ##           MAIN PROGRAM            ##
    #######################################
    
    ## Check existence of optional command-line parameter.
    case "$1" in
      --help|-h|-?)
        f_showhelp
        ;;
      *[0-9]*)
        ## If parameter is a number, allow override.
        TimeOverride=$1
        ;;
      *)
        ## Invalid input supplied. Discard.
        ;;
    esac
    
    #clear
    echo ""
    if [ -z ${TimeOverride+8} ]; then
      ## No override given.  Display user input prompt.
      echo -e "How many minutes do you expect the server to be offline? (default=${DefaultDowntime})"
      read -t 30 TimeInput
      ReturnCode=$?
      if [[ ${ReturnCode} -gt 128 ]]; then
        ## User input timed out. Use default.
        let OfflineTime=${DefaultDowntime}
      else
        ## Evaluate the user-supplied input.
        case ${TimeInput} in
          *[0-9]*)
            ## User-supplied input is a number.  Use it instead of default.
            let OfflineTime="${TimeInput}" ;;
          *)
            ## User-supplied input is invalid.  Use default.
            let OfflineTime=${DefaultDowntime} ;;
        esac
      fi
    else
      ## Use commandline override.
      OfflineTime=${TimeOverride}
    fi
    ## Broadcasting message to any other users logged in via SSH.
    clear
    echo "WARNING: Rebooting server. Should be back online in ${OfflineTime} minutes" | wall
    
    echo "`date +%Y-%m-%d_%H:%M:%S` - Reboot initiated." | tee -a ${LogFile}
    ${ScriptDir}/prod/servicestop.sh
    echo "Rebooting..."
    f_loop 10
    shutdown -r now
    /var/scripts/prod/shutdown.sh
    Code:
    #!/bin/bash
    #############################################
    ## Name          : shutdown.sh
    ## Version       : 1.1
    ## Date          : 2018-05-08
    ## Author        : LHammonds
    ## Compatibility : Ubuntu Server 16.04 thru 18.04 LTS
    ## Requirements  : Run as root
    ## Purpose       : Notify logged in users, stop services and power off server.
    ## Run Frequency : As needed
    ## Parameters    :
    ##    1 = (Optional) Expected downtime in minutes.
    ## Exit Codes    : None
    ################ CHANGE LOG #################
    ## DATE       WHO  WHAT WAS CHANGED
    ## ---------- ---- ----------------------------
    ## 2018-04-19 LTH  Created script.
    ## 2018-05-08 LTH  Added broadcast message and loop function.
    #############################################
    
    ## Import standard variables and functions. ##
    source /var/scripts/common/standard.conf
    
    ## Define local variables.
    LogFile="${LogDir}/${Company}-shutdown.log"
    DefaultDowntime=5
    
    #######################################
    ##            FUNCTIONS              ##
    #######################################
    
    function f_loop()
    {
      LoopCount=$1
      for LoopIndex in $(seq ${LoopCount} -1 1)
      do
        echo ${LoopIndex}
        sleep 1
      done
    } ## f_loop
    
    function f_showhelp()
    {
      echo -e "NOTE: Default expected downtime is ${DefaultDowntime} minutes and is optional.\n"
      echo -e "Usage : ${ScriptName} ExpectedDowntimeInMinutes\n"
      echo -e "Example: ${ScriptName} 5\n"
      exit
    } ## f_showhelp
    
    ## Requirement Check: Script must run as root user.
    if [ "$(id -u)" != "0" ]; then
      ## FATAL ERROR DETECTED: Document problem and terminate script.
      echo -e "\nERROR: Root user required to run this script.\n"
      echo -e "Type 'sudo su' to temporarily become root user.\n"
      exit
    fi
    
    #######################################
    ##           MAIN PROGRAM            ##
    #######################################
    
    ## Check existence of optional command-line parameter.
    case "$1" in
      --help|-h|-?)
        f_showhelp
        ;;
      *[0-9]*)
        ## If parameter is a number, allow override.
        TimeOverride=$1
        ;;
      *)
        ## Invalid input supplied. Discard.
        ;;
    esac
    
    #clear
    echo ""
    if [ -z ${TimeOverride+8} ]; then
      ## No override given.  Display user input prompt.
      echo -e "How many minutes do you expect the server to be offline? (default=${DefaultDowntime})"
      read -t 30 TimeInput
      ReturnCode=$?
      if [[ ${ReturnCode} -gt 128 ]]; then
        ## User input timed out. Use default.
        let OfflineTime=${DefaultDowntime}
      else
        ## Evaluate the user-supplied input.
        case ${TimeInput} in
          *[0-9]*)
            ## User-supplied input is a number.  Use it instead of default.
            let OfflineTime="${TimeInput}" ;;
          *)
            ## User-supplied input is invalid.  Use default.
            let OfflineTime=${DefaultDowntime} ;;
        esac
      fi
    else
      ## Use commandline override.
      OfflineTime=${TimeOverride}
    fi
    ## Broadcast a message to any other users logged in via SSH.
    clear
    echo "WARNING: Shutting down server. Should be back online in ${OfflineTime} minutes" | wall
    
    echo "`date +%Y-%m-%d_%H:%M:%S` - Shutdown initiated." | tee -a ${LogFile}
    ${ScriptDir}/prod/servicestop.sh
    echo "Shutting down..."
    f_loop 10
    shutdown -P now
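    
    Both reboot.sh and shutdown.sh accept the expected downtime (in minutes) as an optional parameter; if it is omitted, the script prompts for a value and falls back to the 5-minute default after the 30-second read timeout. A minimal usage sketch, assuming the scripts are installed under /var/scripts/prod as shown above:
    Code:
    ## Reboot with an expected downtime of 10 minutes (skips the prompt).
    sudo /var/scripts/prod/reboot.sh 10
    
    ## Power off, accepting the default downtime if the prompt times out.
    sudo /var/scripts/prod/shutdown.sh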
    Last edited by LHammonds; September 6th, 2019 at 09:13 PM. Reason: Match the author site

  10. #10
    Join Date
    Sep 2011
    Location
    Behind you!
    Beans
    1,689
    Distro
    Ubuntu 20.04 Focal Fossa

    Lightbulb Scripts - APT Upgrade

    APT Upgrade

    This script can be scheduled to run daily to check the repositories for OS/software updates and install them if any are available.

    The following is an example of a crontab entry to schedule the script to run once per day @ 3am.

    /var/scripts/data/crontab.root
    Code:
    0 3 * * * /var/scripts/prod/apt-upgrade.sh > /dev/null 2>&1
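    If crontab.root is kept as the saved copy of root's schedule (as it is here), one way to activate any changes is to load the whole file with crontab. Note that this replaces root's current crontab, so the file should remain the single source of truth:
    Code:
    ## Replace root's active crontab with the saved schedule file.
    sudo crontab /var/scripts/data/crontab.root
    ## Verify the active schedule.
    sudo crontab -l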
    /var/scripts/prod/apt-upgrade.sh
    Code:
    #!/bin/bash
    #############################################
    ## Name          : apt-upgrade.sh
    ## Version       : 1.3
    ## Date          : 2017-03-16
    ## Author        : LHammonds
    ## Purpose       : Keep system updated (rather than use unattended-upgrades)
    ## Compatibility : Verified on Ubuntu Server 16.04 thru 18.04 LTS
    ## Requirements  : Sendemail, run as root
    ## Run Frequency : Recommend once per day.
    ## Parameters    : None
    ## Exit Codes    :
    ##    0 = Success
    ##    1 = ERROR: Lock file detected.
    ##    2 = ERROR: Not run as root user.
    ##    4 = ERROR: APT update Error.
    ##    8 = ERROR: APT upgrade Error.
    ##   16 = ERROR: APT autoremove Error.
    ##   32 = ERROR: APT autoclean Error.
    ##   64 = ERROR: APT clean Error.
    ################ CHANGE LOG #################
    ## DATE       WHO WHAT WAS CHANGED
    ## ---------- --- ----------------------------
    ## 2012-06-01 LTH Created script.
    ## 2013-01-08 LTH Allow visible status output if run manually.
    ## 2013-03-11 LTH Added company prefix to log files.
    ## 2017-03-16 LTH Made compatible with 16.04 LTS
    #############################################
    
    ## Import standard variables and functions. ##
    source /var/scripts/common/standard.conf
    
    ## Define local variables.
    LogFile="${LogDir}/${Company}-apt-upgrade.log"
    LockFile="${TempDir}/${Company}-apt-upgrade.lock"
    ErrorFlag=0
    ErrorMsg=""
    ReturnCode=0
    AptCmd="$(which apt)"
    AptGetCmd="$(which apt-get)"
    
    #######################################
    ##            FUNCTIONS              ##
    #######################################
    function f_cleanup()
    {
      if [ -f ${LockFile} ]; then
        ## Remove lock file so subsequent jobs can run.
        rm ${LockFile} 1>/dev/null 2>&1
      fi
      ## Temporarily pause script in case user is watching output.
      sleep 2
      if [ ${ErrorFlag} -gt 0 ]; then
        ## Display error message to user in case being run manually.
        echo "`date +%Y-%m-%d_%H:%M:%S` --- ERROR: ${ErrorMsg}" | tee -a ${LogFile}
        ## Email error notice.
        f_sendmail "ERROR ${ErrorFlag}: Script aborted" "${ErrorMsg}"
      fi
      exit ${ErrorFlag}
    }
    
    #######################################
    ##           MAIN PROGRAM            ##
    #######################################
    clear
    if [ -f ${LockFile} ]; then
      # Lock file detected.  Abort script.
      echo "** Script aborted **"
      echo "This script tried to run but detected the lock file: ${LockFile}"
      echo "Please check to make sure the file does not remain when check space is not actually running."
      ErrorMsg="This script tried to run but detected the lock file: ${LockFile}\n\nPlease check to make sure the file does not remain when check space is not actually running.\n\nIf you find that the script is not running/hung, you can remove it by typing 'rm ${LockFile}'"
      ErrorFlag=1
      f_cleanup
    else
      echo "`date +%Y-%m-%d_%H:%M:%S` ${ScriptName}" > ${LockFile}
    fi
    ## Requirement Check: Script must run as root user.
    if [ "$(id -u)" != "0" ]; then
      ## FATAL ERROR DETECTED: Document problem and terminate script.
      echo -e "ERROR: Root user required to run this script.\n"
      ErrorMsg="Root user required to run this script."
      ErrorFlag=2
      f_cleanup
    fi
    
    ## Make sure the cleanup function is called from this point forward.
    trap f_cleanup EXIT
    
    echo "`date +%Y-%m-%d_%H:%M:%S` - Begin script." | tee -a ${LogFile}
    echo "`date +%Y-%m-%d_%H:%M:%S` --- Apt-Get Update" | tee -a ${LogFile}
    ${AptGetCmd} update > /dev/null 2>&1
    ReturnCode=$?
    if [[ "${ReturnCode}" -gt 0 ]]; then
      ErrorMsg="Apt-Get Update return code of ${ReturnCode}"
      ErrorFlag=4
      f_cleanup
    fi
    echo "`date +%Y-%m-%d_%H:%M:%S` --- Apt-Get Upgrade" | tee -a ${LogFile}
    echo "--------------------------------------------------" >> ${LogFile}
    ${AptGetCmd} --assume-yes upgrade >> ${LogFile} 2>&1
    ReturnCode=$?
    if [[ "${ReturnCode}" -gt 0 ]]; then
      ErrorMsg="Apt-Get Upgrade return code of ${ReturnCode}"
      ErrorFlag=8
      f_cleanup
    fi
    echo "--------------------------------------------------" >> ${LogFile}
    echo "`date +%Y-%m-%d_%H:%M:%S` --- Apt-Get Autoremove" | tee -a ${LogFile}
    echo "--------------------------------------------------" >> ${LogFile}
    ${AptGetCmd} --assume-yes autoremove >> ${LogFile} 2>&1
    ReturnCode=$?
    if [[ "${ReturnCode}" -gt 0 ]]; then
      ErrorMsg="Apt-Get Autoremove return code of ${ReturnCode}"
      ErrorFlag=16
      f_cleanup
    fi
    echo "--------------------------------------------------" >> ${LogFile}
    echo "`date +%Y-%m-%d_%H:%M:%S` --- Apt-get Autoclean" | tee -a ${LogFile}
    echo "--------------------------------------------------" >> ${LogFile}
    ${AptGetCmd} autoclean >> ${LogFile} 2>&1
    ReturnCode=$?
    if [[ "${ReturnCode}" -gt 0 ]]; then
      ErrorMsg="Apt-Get Autoclean return code of ${ReturnCode}"
      ErrorFlag=32
      f_cleanup
    fi
    echo "--------------------------------------------------" >> ${LogFile}
    echo "`date +%Y-%m-%d_%H:%M:%S` --- Apt-get Clean" | tee -a ${LogFile}
    echo "--------------------------------------------------" >> ${LogFile}
    ${AptGetCmd} clean >> ${LogFile} 2>&1
    ReturnCode=$?
    if [[ "${ReturnCode}" -gt 0 ]]; then
      ErrorMsg="Apt-Get Clean return code of ${ReturnCode}"
      ErrorFlag=64
      f_cleanup
    fi
    echo "--------------------------------------------------" >> ${LogFile}
    echo "`date +%Y-%m-%d_%H:%M:%S` - End script." | tee -a ${LogFile}
    
    ## Perform cleanup routine.
    f_cleanup
    Here is the typical output:

    /var/log/apt-upgrade.log
    Code:
    2018-04-20_16:24:49 - Begin script.
    2018-04-20_16:24:49 --- Apt-Get Update
    2018-04-20_16:24:54 --- Apt-Get Upgrade
    --------------------------------------------------
    Reading package lists...
    Building dependency tree...
    Reading state information...
    Calculating upgrade...
    The following packages will be upgraded:
      libnih1 update-notifier-common
    2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Need to get 210 kB of archives.
    After this operation, 1024 B of additional disk space will be used.
    Get:1 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libnih1 amd64 1.0.3-6ubuntu2 [49.3 kB]
    Get:2 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 update-notifier-common all 3.192 [160 kB]
    Fetched 210 kB in 0s (618 kB/s)
    (Reading database ... 73808 files and directories currently installed.)
    Preparing to unpack .../libnih1_1.0.3-6ubuntu2_amd64.deb ...
    Unpacking libnih1:amd64 (1.0.3-6ubuntu2) over (1.0.3-6ubuntu1) ...
    Preparing to unpack .../update-notifier-common_3.192_all.deb ...
    Unpacking update-notifier-common (3.192) over (3.191) ...
    Setting up update-notifier-common (3.192) ...
    Processing triggers for libc-bin (2.27-3ubuntu1) ...
    Setting up libnih1:amd64 (1.0.3-6ubuntu2) ...
    Processing triggers for libc-bin (2.27-3ubuntu1) ...
    --------------------------------------------------
    2018-04-20_16:24:58 --- Apt-Get Autoremove
    --------------------------------------------------
    Reading package lists...
    Building dependency tree...
    Reading state information...
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    --------------------------------------------------
    2018-04-20_16:24:59 --- Apt-get Autoclean
    --------------------------------------------------
    Reading package lists...
    Building dependency tree...
    Reading state information...
    --------------------------------------------------
    2018-04-20_16:24:59 --- Apt-get Clean
    --------------------------------------------------
    --------------------------------------------------
    2018-04-20_16:24:59 - End script.
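    Since the script sets a distinct exit code for each failure point (see the header: 1=lock file, 2=not root, 4=update, 8=upgrade, 16=autoremove, 32=autoclean, 64=clean), a quick manual run can also tell you which stage failed. For example:
    Code:
    sudo /var/scripts/prod/apt-upgrade.sh ; echo "Exit code: $?"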
    Reboot Check

    You can schedule your custom reboot script to run shortly after the upgrade script so the server reboots automatically when an update requires it. Use this at your own discretion: many admins prefer to be present for a reboot so they can verify that services come back up cleanly, or at least be ready to handle any issues immediately.

    You can run this reboot check every time the upgrade script runs, but it will only initiate the reboot sequence if the upgrade placed a notification file indicating that a reboot is required to complete the upgrade (you can also check for this file manually, as shown below).
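    If you prefer to decide on reboots yourself, you can check for that notification file manually before running the reboot script; the paths below are the standard Ubuntu locations that reboot-check.sh also uses:
    Code:
    ## Show whether an update has flagged the system for a reboot and which packages requested it.
    if [ -f /var/run/reboot-required ]; then
      echo "Reboot required by:"
      cat /var/run/reboot-required.pkgs
    else
      echo "No reboot required."
    fi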

    The following is an example of a crontab entry to schedule the script to run shortly after the upgrade script:

    /var/scripts/data/crontab.root
    Code:
    0 3 * * * /var/scripts/prod/apt-upgrade.sh > /dev/null 2>&1
    30 3 * * * /var/scripts/prod/reboot-check.sh > /dev/null 2>&1
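    If you would rather not depend on the fixed 30-minute gap, an alternative (not part of the original write-up) is to chain the two scripts in a single entry so the reboot check only runs after the upgrade script exits successfully:
    Code:
    0 3 * * * /var/scripts/prod/apt-upgrade.sh > /dev/null 2>&1 && /var/scripts/prod/reboot-check.sh > /dev/null 2>&1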
    /var/scripts/prod/reboot-check.sh
    Code:
    #!/bin/bash
    #############################################
    ## Name          : reboot-check.sh
    ## Version       : 1.0
    ## Date          : 2017-12-13
    ## Author        : LHammonds
    ## Compatibility : Verified on Ubuntu Server 16.04 thru 18.04 LTS
    ## Requirements  : Run as root
    ## Purpose       : Stop services and reboot server.
    ## Run Frequency : As needed
    ## Exit Codes    : None
    ################ CHANGE LOG #################
    ## DATE       WHO  WHAT WAS CHANGED
    ## ---------- ---- ----------------------------
    ## 2017-12-13 LTH  Created script.
    #############################################
    
    ## Import standard variables and functions. ##
    source /var/scripts/common/standard.conf
    
    ## Define local variables.
    LogFile="${LogDir}/${Company}-reboot-check.log"
    
    ## Requirement Check: Script must run as root user.
    if [ "$(id -u)" != "0" ]; then
      ## FATAL ERROR DETECTED: Document problem and terminate script.
      echo -e "\nERROR: Root user required to run this script.\n"
      echo -e "Type 'sudo su' to temporarily become root user.\n"
      exit
    fi
    
    if [ -f /var/run/reboot-required ]; then
      echo "`date +%Y-%m-%d_%H:%M:%S` - Reboot required." >> ${LogFile}
      cat /var/run/reboot-required.pkgs >> ${LogFile}
      ${ScriptDir}/prod/reboot.sh
    else
      echo "No reboot required."
    fi
    Last edited by LHammonds; September 6th, 2019 at 09:13 PM. Reason: Match the author site
