Volume / Disk Management
Earlier, it was mentioned that the partition design needed some breathing room in each volume so that the file system inside can grow as needed. When the volumes were created during setup, the file systems were automatically expanded to fill the entire volume. We will now correct this by adding more "drives" to the system and then extending each logical volume to gain some breathing space.
Most logical volumes will be increased in size, and then the file systems inside them will be grown as well, but not to the maximum amount.
This design will allow growth when needed and ensure there is time to add additional hard drives BEFORE they are needed, which keeps the administrators from being stuck between a rock and a hard place! Nobody wants to lose a job because somebody did not estimate growth correctly or the budget did not allow for large capacity when the system first rolled out.
We started off with a 25 GB drive to hold these volumes and the changes below will use 22 GB.
Here are the planned adjustments for each logical volume:
root = 5 GB to 6 GB
home = 0.2 GB to 1 GB
tmp = 0.5 GB to 2 GB
var = 2.0 GB to 3 GB
bak = 0.5 GB to 4 GB
Here are the planned adjustments for each file system:
root = 5 GB (no change)
home = 0.2 GB to 0.5 GB
tmp = 0.5 GB to 1 GB
var = 2 GB (no change)
bak = 0.5 GB to 2 GB
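As a quick sanity check (a sketch of our own, not part of the build steps), the two lists above can be tallied in the shell to confirm the total volume space needed and the headroom each file system keeps inside its volume:

```shell
# Tally the plan: volume size, file system size, and the headroom left
# inside each volume (numbers taken from the lists above, in GB).
total=0
for plan in "root 6 5" "home 1 0.5" "tmp 2 1" "var 3 2" "bak 4 2"; do
  set -- $plan
  awk -v n="$1" -v lv="$2" -v fs="$3" \
    'BEGIN { printf "%-4s volume %g GB, file system %g GB, headroom %g GB\n", n, lv, fs, lv - fs }'
  total=$(( total + $2 ))
done
echo "Total volume space needed: ${total} GB"
```

The total comes to 16 GB of volume space, which is why the plan fits comfortably once the extra drives later in this guide are in place.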
Here is a graphical representation of what will be accomplished:
If we were to type df -h right now, we should see something like this:
Code:
df -h
Filesystem Size Used Avail Use% Mounted on
udev 448M 0 448M 0% /dev
tmpfs 99M 772K 98M 1% /run
/dev/mapper/LVG-root 4.6G 2.3G 2.1G 52% /
tmpfs 491M 0 491M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 491M 0 491M 0% /sys/fs/cgroup
/dev/mapper/LVG-tmp 446M 772K 411M 1% /tmp
/dev/mapper/LVG-bak 446M 732K 412M 1% /bak
/dev/sda1 922M 197M 662M 23% /boot
/dev/mapper/LVG-home 167M 232K 153M 1% /home
/dev/mapper/LVG-var 1.8G 268M 1.5G 16% /var
tmpfs 99M 0 99M 0% /run/user/1000
To get a list of volume paths to use in the next commands, use lvscan to show your current volumes and their sizes.
Code:
sudo lvscan
ACTIVE '/dev/LVG/root' [<4.66 GiB] inherit
ACTIVE '/dev/LVG/var' [<1.86 GiB] inherit
ACTIVE '/dev/LVG/tmp' [476.00 MiB] inherit
ACTIVE '/dev/LVG/bak' [476.00 MiB] inherit
ACTIVE '/dev/LVG/home' [188.00 MiB] inherit
Type the following to set the exact size of the volume by specifying the end-result size you want:
Code:
sudo lvextend -L6G /dev/LVG/root
sudo lvextend -L1G /dev/LVG/home
sudo lvextend -L2G /dev/LVG/tmp
sudo lvextend -L3G /dev/LVG/var
sudo lvextend -L4G /dev/LVG/bak
or you can grow each volume by a relative amount (the number after the plus sign). Note that because the current sizes are slightly under the round numbers lvscan shows, these increments only land near the targets; the absolute -L form above is the safer way to hit exact sizes:
Code:
sudo lvextend -L+1G /dev/LVG/root
sudo lvextend -L+0.8G /dev/LVG/home
sudo lvextend -L+1.5G /dev/LVG/tmp
sudo lvextend -L+1G /dev/LVG/var
sudo lvextend -L+3.5G /dev/LVG/bak
To see the new sizes, use lvscan
Code:
sudo lvscan
ACTIVE '/dev/LVG/root' [6.00 GiB] inherit
ACTIVE '/dev/LVG/var' [3.00 GiB] inherit
ACTIVE '/dev/LVG/tmp' [2.00 GiB] inherit
ACTIVE '/dev/LVG/bak' [4.00 GiB] inherit
ACTIVE '/dev/LVG/home' [1.00 GiB] inherit
The last thing to do now is the actual growth of the file systems. We want to grow the existing file systems but only to a certain amount so we do not take up all the space in the volume. We want room for growth in the future so we have time to order and install new drives when needed.
Code:
sudo resize2fs /dev/LVG/home 500M
sudo resize2fs /dev/LVG/tmp 1G
sudo resize2fs /dev/LVG/bak 2G
If we need to increase space in /var at a later point, we can issue the following command without any downtime (we will automate this in a nifty script later):
Code:
sudo resize2fs /dev/LVG/var 2560M
We could continue to increase this particular file system all the way until we reach the limit of the volume which is 3 GB at the moment.
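To see how much room that leaves, here is a tiny helper (the function name is our own, not a standard tool) that turns a volume size and a file system size into the remaining room to grow, in MiB:

```shell
# Remaining growth room = volume size minus file system size, both in MiB.
headroom_mib() {
  echo $(( $1 - $2 ))   # $1 = volume size in MiB, $2 = file system size in MiB
}
# /var after the resize above: 3 GiB volume, file system grown to 2560 MiB
echo "Room left to grow /var: $(headroom_mib 3072 2560) MiB"
```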
If we were to type df -h right now, we should see something like this:
Code:
df -h
Filesystem Size Used Avail Use% Mounted on
udev 448M 0 448M 0% /dev
tmpfs 99M 772K 98M 1% /run
/dev/mapper/LVG-root 4.6G 2.3G 2.1G 52% /
tmpfs 491M 0 491M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 491M 0 491M 0% /sys/fs/cgroup
/dev/mapper/LVG-tmp 979M 1.3M 924M 1% /tmp
/dev/mapper/LVG-bak 2.0G 1.5M 1.9G 1% /bak
/dev/sda1 922M 197M 662M 23% /boot
/dev/mapper/LVG-home 473M 324K 452M 1% /home
/dev/mapper/LVG-var 1.8G 268M 1.5G 16% /var
tmpfs 99M 0 99M 0% /run/user/1000
Remember, df -h will tell you the size of each file system and lvscan will tell you the size of the volumes in which the file systems live.
TIP: If you want to see everything in a specific block size, such as everything showing up in mebibytes, you can use df --block-size=M
Swap File Management
If you do not specify a swap partition during the initial setup, a swap file named "swapfile" is created automatically in the root file system.
Type swapon --summary to see the status of the swap system:
Code:
# swapon --summary
Filename Type Size Used Priority
/swapfile file 88324 0 -2
Specific needs vary but the current rule of thumb for Linux servers is to have a swap file 1/2 the size of the amount of RAM in your system.
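That rule of thumb is easy to compute on the box itself. This one-liner (a convenience sketch, not part of the original steps) reads the RAM size from /proc/meminfo and halves it:

```shell
# MemTotal in /proc/meminfo is reported in kB; halve it and convert to MiB.
mem_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
swap_mib=$(( mem_kb / 2 / 1024 ))
echo "Rule-of-thumb swap size: ${swap_mib} MiB"
```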
Let's assume we want a 1 GB swap file and we want it on the /var partition. Keep in mind that it is normally recommended to keep the swap file on the root partition for performance reasons. Here are the steps to set up this scenario:
- Make sure you have space on /var by typing df -h /var
- Run the following commands for the new swap file:
Code:
sudo fallocate --length 1G /var/swapfile1g
sudo chown root:root /var/swapfile1g
sudo chmod 600 /var/swapfile1g
sudo mkswap /var/swapfile1g --label swap
sudo swapon /var/swapfile1g
- Look at the current swap settings:
Code:
swapon --summary
Filename Type Size Used Priority
/swapfile file 88324 0 -2
/var/swapfile1g file 1048572 0 -3
- Now disable the old swap file using these commands:
Code:
sudo swapoff /swapfile
sudo rm /swapfile
- Remove the old swapfile from /etc/fstab and add the new one.
Remove:
Code:
/swapfile none swap sw 0 0
Add:
Code:
/var/swapfile1g none swap sw 0 0
- Look at the current swap settings again:
Code:
# swapon --summary
Filename Type Size Used Priority
/var/swapfile1g file 1048572 0 -2
- Reboot the server and run the summary command again to verify that your /etc/fstab changes worked.
Adding More Hard Drives
For this exercise, we will add two additional hard drives. The addition of these drives is NOT necessary and this section can be skipped. The extra drives are only to demonstrate how to add additional hard drives to the system.
Adding more space in VMware or VirtualBox is easy. In this exercise, each drive will be added as a separate disk just as if we were to add a physical drive to a physical server.
vSphere Steps
- Shutdown and power off the server by typing sudo shutdown -P now
- In the vSphere client, right-click the Virtual Machine and choose Edit Settings.
- On the hardware tab, click the Add button and select Hard Disk. Click Next, choose "Create a new virtual disk", click Next, set the size to 10 GB, click Next, Next, Finish.
- Add another 15 GB disk using the same steps above and click OK to close the settings and allow VMware to process the changes.
VirtualBox Steps
- Shutdown and power off the server by typing sudo shutdown -P now
- In the VirtualBox Manager, select the Virtual Machine and click Settings.
- On the Storage tab, select Controller: SATA and click the Add new storage attachment button and select Add Hard Disk. Click Create new disk, VDI, Next, Fixed size, Next, give it a Name/Location/Size of 10 GB, click Create.
- Add another 15 GB disk using the same steps above and click OK to close the settings and allow VirtualBox to process the changes.
Collect information about the newly added drives.
- Start the server and connect using PuTTY.
- At the login prompt, login with your administrator account (administrator / myadminpass)
- Make note of how much "Free PE / Size" you have in your logical volume group when using the vgdisplay command. When done adding the new drives, the free space listed here will increase by the amount added.
- Use pvdisplay which should show something similar to this:
Code:
sudo pvdisplay
--- Physical volume ---
PV Name /dev/sda5
VG Name LVG
PV Size 9.53 GiB / not usable 0
Allocatable yes
PE Size 4.00 MiB
Total PE 2440
Free PE 157
Allocated PE 2283
PV UUID g8IVWI-sF3A-aWAp-0KSJ-vmJE-SOkL-00R7DN
The important bits of info here are the PV Name and VG Name for our existing configuration.
- Use fdisk -l which should show something similar to this (abbreviated to show just the important parts):
Code:
sudo fdisk -l
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 976895 974848 476M 83 Linux
/dev/sda2 978942 20969471 19990530 9.5G 5 Extended
/dev/sda5 978944 20969471 19990528 9.5G 8e Linux LVM
Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdc: 15 GiB, 16106127360 bytes, 31457280 sectors
The important bits of info here are the device paths for the new drives: /dev/sdb and /dev/sdc.
Prepare the first drive (/dev/sdb) to be used by the LVM
Type the following:
Code:
sudo fdisk /dev/sdb
n (Create New Partition)
p (Primary Partition)
1 (Partition Number)
{ENTER} (use default for first cylinder)
{ENTER} (use default for last cylinder)
t (Change partition type)
8e (Set to Linux LVM)
p (Preview how the drive will look)
w (Write changes)
Prepare the second drive (/dev/sdc) to be used by the LVM
Do the exact same steps as above but start with sudo fdisk /dev/sdc
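The interactive keystrokes can also be scripted. As a sketch (the file path is just an example of our own), write the same answers out one per line and feed them to fdisk yourself:

```shell
# The same answers as the interactive session above, one per line; an empty
# line sends {ENTER} to accept the default first/last sector.
printf 'n\np\n1\n\n\nt\n8e\np\nw\n' > /tmp/fdisk-lvm-answers.txt
cat /tmp/fdisk-lvm-answers.txt
# You would then run:  sudo fdisk /dev/sdc < /tmp/fdisk-lvm-answers.txt
```

Double-check the answer file against your fdisk version first; prompts can differ slightly between releases.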
Create physical volumes using the new drives
If we type sudo fdisk -l, we now see /dev/sdb1 and /dev/sdc1 which are Linux LVM partitions.
Type the following to create physical volumes:
Code:
sudo pvcreate /dev/sdb1
sudo pvcreate /dev/sdc1
Now add the physical volumes to the volume group (LVG) by typing the following:
Code:
sudo vgextend LVG /dev/sdb1
sudo vgextend LVG /dev/sdc1
You can run the sudo vgdisplay command to see that the "Free PE / Size" has increased.
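vgdisplay reports free space in physical extents (PE). With the 4.00 MiB PE Size shown by pvdisplay earlier, a Free PE count converts to GiB like this (6400 is a stand-in value; substitute the Free PE number from your own vgdisplay output):

```shell
# Convert a Free PE count to GiB: extents x extent size (4 MiB) / 1024.
free_pe=6400            # example value only; read yours from: sudo vgdisplay LVG
pe_size_mib=4           # PE Size reported by pvdisplay/vgdisplay
awk -v pe="$free_pe" -v sz="$pe_size_mib" \
  'BEGIN { printf "Free space: %.2f GiB\n", pe * sz / 1024 }'
```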
Now that the space of both drives has been added to the logical volume group called LVG, we can allocate that space to grow the logical volumes and then the file systems as needed.
Shutdown and power off the server by typing sudo shutdown -P now
In VM menu, select VM --> Snapshot --> Take Snapshot. Give it a name like STEP 3 and description of Ubuntu Server 20.04 LTS, Storage space adjusted, Static IP: 192.168.107.2. The Snapshot Manager should now have a nice hierarchy of snapshots (STEP 1 --> STEP 2 --> STEP 3 --> You are here)