Volume / Disk Management
Earlier, it was mentioned that the partition design needed some breathing room in each volume so that the file system inside can grow as needed. When the volumes were created during setup, however, the file systems were automatically expanded to fill the entire volume. We will now correct this by adding more "drives" to the system and then extending each logical volume to gain some breathing space.
Most logical volumes will be increased in size, and then the file systems inside them will be grown as well, but not to the full size of the volume.
This design allows growth when needed and ensures there is time to add additional hard drives BEFORE they are needed, which keeps the administrators from being stuck between a rock and a hard place! Nobody wants to lose a job because somebody did not estimate growth correctly or the budget did not allow for large capacity when the system first rolled out.
We started off with a 25 GB drive to hold these volumes, and the changes below will use 21 GB of it.
Here are the planned adjustments for each logical volume:
root = 2 GB to 3 GB
home = 0.2 GB to 1 GB
tmp = 0.5 GB to 2 GB
usr = 2.0 GB to 4 GB
var = 2.0 GB to 3 GB
srv = 0.2 GB to 2 GB
opt = 0.2 GB to 2 GB
bak = 0.5 GB to 4 GB
Here are the planned adjustments for each file system:
root = 2.0 GB (no change)
home = 0.2 GB to 0.5 GB
tmp = 0.5 GB to 1.0 GB
usr = 2.0 GB to 3.0 GB
var = 2.0 GB (no change)
srv = 0.2 GB to 1.0 GB
opt = 0.2 GB to 1.0 GB
bak = 0.5 GB to 2.0 GB
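As a quick sanity check, the planned sizes can be totaled with a one-liner (a sketch using awk; the numbers are simply the target sizes listed above):

```shell
#!/bin/sh
# Sanity check (sketch): total the planned sizes listed above, in GB.
# Volume targets:      root home tmp usr var srv opt bak
lv_total=$(echo "3 1 2 4 3 2 2 4" | awk '{for (i = 1; i <= NF; i++) t += $i; print t}')
# File-system targets for the same volumes:
fs_total=$(echo "2 0.5 1 3 2 1 1 2" | awk '{for (i = 1; i <= NF; i++) t += $i; print t}')
echo "Volumes total:      ${lv_total} GB"    # 21 GB
echo "File systems total: ${fs_total} GB"    # 12.5 GB
```

Every file system ends up smaller than its volume, which is exactly the headroom this section is after.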
If we were to type df -h right now, we should see something like this:
Code:
# df -h
Filesystem Size Used Avail Use% Mounted on
udev 464M 0 464M 0% /dev
tmpfs 99M 836K 98M 1% /run
/dev/mapper/LVG-root 1.8G 916M 816M 53% /
/dev/mapper/LVG-usr 1.8G 837M 895M 49% /usr
tmpfs 493M 0 493M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 493M 0 493M 0% /sys/fs/cgroup
/dev/sda1 461M 141M 297M 33% /boot
/dev/mapper/LVG-var 1.8G 398M 1.4G 23% /var
/dev/mapper/LVG-home 179M 1.6M 164M 1% /home
/dev/mapper/LVG-tmp 453M 2.3M 423M 1% /tmp
/dev/mapper/LVG-bak 453M 2.3M 423M 1% /bak
/dev/mapper/LVG-srv 179M 1.6M 164M 1% /srv
/dev/mapper/LVG-opt 179M 1.6M 164M 1% /opt
tmpfs 99M 0 99M 0% /run/user/1000
To get a list of volume paths to use in the next commands, type lvscan to show your current volumes and their sizes.
Code:
# lvscan
ACTIVE '/dev/LVG/root' [<1.86 GiB] inherit
ACTIVE '/dev/LVG/usr' [<1.86 GiB] inherit
ACTIVE '/dev/LVG/var' [<1.86 GiB] inherit
ACTIVE '/dev/LVG/tmp' [476.00 MiB] inherit
ACTIVE '/dev/LVG/bak' [476.00 MiB] inherit
ACTIVE '/dev/LVG/srv' [188.00 MiB] inherit
ACTIVE '/dev/LVG/opt' [188.00 MiB] inherit
ACTIVE '/dev/LVG/home' [188.00 MiB] inherit
Type the following to set the exact size of the volume by specifying the end-result size you want:
Code:
lvextend -L3G /dev/LVG/root
lvextend -L1G /dev/LVG/home
lvextend -L2G /dev/LVG/tmp
lvextend -L4G /dev/LVG/usr
lvextend -L3G /dev/LVG/var
lvextend -L2G /dev/LVG/srv
lvextend -L2G /dev/LVG/opt
lvextend -L4G /dev/LVG/bak
or you can grow each volume by a relative amount (the number after the plus sign). Note that these increments are approximate, since the current sizes are slightly under the round figures; the exact-size method above is more precise:
Code:
lvextend -L+1G /dev/LVG/root
lvextend -L+0.8G /dev/LVG/home
lvextend -L+1.5G /dev/LVG/tmp
lvextend -L+2G /dev/LVG/usr
lvextend -L+1G /dev/LVG/var
lvextend -L+1.8G /dev/LVG/srv
lvextend -L+1.8G /dev/LVG/opt
lvextend -L+3.5G /dev/LVG/bak
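Either way, typing eight nearly identical commands invites typos. They can be generated from a small name:size table instead (a sketch; it echoes the exact-size commands for review rather than running them):

```shell
#!/bin/sh
# Sketch: generate the lvextend commands from a name:size table.
# Echoes each command so the plan can be reviewed before running it.
plan="root:3G home:1G tmp:2G usr:4G var:3G srv:2G opt:2G bak:4G"
for entry in $plan; do
  lv=${entry%%:*}      # name before the colon
  size=${entry##*:}    # size after the colon
  echo lvextend -L"$size" "/dev/LVG/$lv"
done
```

Once the output looks right, pipe it through sh to actually run the commands.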
To see the new sizes, type lvscan again:
Code:
# lvscan
ACTIVE '/dev/LVG/root' [3.00 GiB] inherit
ACTIVE '/dev/LVG/usr' [4.00 GiB] inherit
ACTIVE '/dev/LVG/var' [3.00 GiB] inherit
ACTIVE '/dev/LVG/tmp' [2.00 GiB] inherit
ACTIVE '/dev/LVG/bak' [4.00 GiB] inherit
ACTIVE '/dev/LVG/srv' [2.00 GiB] inherit
ACTIVE '/dev/LVG/opt' [2.00 GiB] inherit
ACTIVE '/dev/LVG/home' [1.00 GiB] inherit
The last thing to do now is the actual growth of the file systems. We want to grow each existing file system, but only partially, so we do not take up all the space in its volume. Keeping room for future growth gives us time to order and install new drives when needed.
Code:
resize2fs /dev/LVG/root 2G
resize2fs /dev/LVG/home 500M
resize2fs /dev/LVG/tmp 1G
resize2fs /dev/LVG/usr 3G
resize2fs /dev/LVG/var 2G
resize2fs /dev/LVG/srv 1G
resize2fs /dev/LVG/opt 1G
resize2fs /dev/LVG/bak 3G
If we need to increase space in /var at a later point, we can issue the following command without any downtime (we will automate this in a nifty script later):
Code:
resize2fs /dev/LVG/var 2560M
We could continue to increase this particular file system until we reach the limit of the volume, which is 3 GB at the moment.
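The remaining headroom is simply the volume size minus the current file-system size. For /var right now, that works out as (a quick arithmetic sketch):

```shell
#!/bin/sh
# Headroom left in the var volume (sketch): volume minus file system, in MiB.
vol_mib=$((3 * 1024))    # 3 GiB volume
fs_mib=$((2 * 1024))     # 2 GiB file system
echo "Headroom: $((vol_mib - fs_mib)) MiB"    # Headroom: 1024 MiB
```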
If we were to type df -h right now, we should see something like this:
Code:
# df -h
Filesystem Size Used Avail Use% Mounted on
udev 464M 0 464M 0% /dev
tmpfs 99M 836K 98M 1% /run
/dev/mapper/LVG-root 2.0G 916M 953M 50% /
/dev/mapper/LVG-usr 3.0G 837M 2.0G 30% /usr
tmpfs 493M 0 493M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 493M 0 493M 0% /sys/fs/cgroup
/dev/sda1 461M 141M 297M 33% /boot
/dev/mapper/LVG-var 2.0G 398M 1.5G 22% /var
/dev/mapper/LVG-home 481M 2.3M 453M 1% /home
/dev/mapper/LVG-tmp 984M 2.8M 932M 1% /tmp
/dev/mapper/LVG-bak 2.9G 3.1M 2.8G 1% /bak
/dev/mapper/LVG-srv 989M 2.7M 940M 1% /srv
/dev/mapper/LVG-opt 989M 2.7M 940M 1% /opt
tmpfs 99M 0 99M 0% /run/user/1000
Remember, df -h will tell you the size of the file systems and lvscan will tell you the size of the volumes the file systems live in.
TIP: If you want to see everything in a specific block size, such as everything showing up in megabytes, you can use df --block-size=M
Swap File Management
If you do not specify a swap partition during the initial setup, a swap file named "swapfile" will be created automatically in the root file system and used for swap.
Type swapon --summary to see the status of the swap system:
Code:
# swapon --summary
Filename Type Size Used Priority
/swapfile file 88324 0 -2
Specific needs vary, but the current rule of thumb for Linux servers is to have a swap file half the size of the RAM in your system.
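That rule of thumb is easy to compute from /proc/meminfo (a sketch; the fallback number is only a sample value for systems where that file is unavailable):

```shell
#!/bin/sh
# Rule-of-thumb swap size (sketch): half of installed RAM.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo 2>/dev/null)
[ -n "$mem_kb" ] || mem_kb=1016232    # sample value: roughly 1 GB of RAM
swap_mb=$((mem_kb / 2 / 1024))
echo "Suggested swap size: ${swap_mb} MiB"
```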
Let's assume we want a 1 GB swap file and we want it on the /opt partition. Here are the steps to set up this scenario:
- Make sure you have space on /opt by typing df -h /opt
- Run the following commands for the new swap file:
Code:
fallocate --length 1G /opt/swapfile1g
chown root:root /opt/swapfile1g
chmod 600 /opt/swapfile1g
mkswap /opt/swapfile1g
swapon /opt/swapfile1g
- Look at the current swap settings:
Code:
# swapon --summary
Filename Type Size Used Priority
/swapfile file 88324 0 -2
/opt/swapfile1g file 1048500 0 -3
- Now disable the old swap file using these commands:
Code:
swapoff /swapfile
rm /swapfile
- Remove the old swapfile from /etc/fstab and add the new one.
Remove:
Code:
/swapfile none swap sw 0 0
Add:
Code:
/opt/swapfile1g none swap sw 0 0
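The /etc/fstab swap can be done with a single sed substitution (a sketch; it edits a copy first so the result can be reviewed before it is installed):

```shell
#!/bin/sh
# Sketch: replace the old swapfile entry in a copy of /etc/fstab.
cp /etc/fstab fstab.new
sed -i 's|^/swapfile[[:space:]]|/opt/swapfile1g |' fstab.new
diff /etc/fstab fstab.new      # review the change
# mv fstab.new /etc/fstab      # apply it once it looks right
```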
- Look at the current swap settings again:
Code:
# swapon --summary
Filename Type Size Used Priority
/opt/swapfile1g file 1048500 0 -2
- Reboot the server and run the summary command again to verify that your /etc/fstab changes worked.
Adding More Hard Drives
For this exercise, we will add two additional hard drives. The addition of these drives is NOT necessary and this section can be skipped. The extra drives are only to demonstrate how to add additional hard drives to the system.
Adding more space in VMware or VirtualBox is easy. In this exercise, each drive will be added as a separate disk just as if we were to add a physical drive to a physical server.
vSphere Steps
- Shut down and power off the server by typing shutdown -P now
- In the vSphere client, right-click the Virtual Machine and choose Edit Settings.
- On the hardware tab, click the Add button and select Hard Disk. Click Next, choose "Create a new virtual disk", click Next, set the size to 10 GB, click Next, Next, Finish.
- Add another 15 GB disk using the same steps above and click OK to close the settings and allow VMware to process the changes.
VirtualBox Steps
- Shut down and power off the server by typing shutdown -P now
- In the VirtualBox Manager, select the Virtual Machine and click Settings.
- On the Storage tab, select Controller: SATA and click the Add new storage attachment button and select Add Hard Disk. Click Create new disk, VDI, Next, Fixed size, Next, give it a Name/Location/Size of 10 GB, click Create.
- Add another 15 GB disk using the same steps above and click OK to close the settings and allow VirtualBox to process the changes.
Collect information about the newly added drives.
- Start the server and connect using PuTTY.
- At the login prompt, login with your administrator account (administrator / myadminpass) and then temporarily grant yourself super user privileges by typing sudo su
- Make note of how much "Free PE / Size" you have in your logical volume group by typing vgdisplay. When done adding the new drives, the free space listed here will increase by the amount added.
- Type pvdisplay which should show something similar to this:
Code:
--- Physical volume ---
PV Name /dev/sda5
VG Name LVG
PV Size 9.53 GiB / not usable 0
Allocatable yes
PE Size 4.00 MiB
Total PE 2440
Free PE 157
Allocated PE 2283
PV UUID g8IVWI-sF3A-aWAp-0KSJ-vmJE-SOkL-00R7DN
The important bits of info here are the PV Name and VG Name for our existing configuration.
- Type fdisk -l which should show something similar to this (however I abbreviated it to show just the important parts):
Code:
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 976895 974848 476M 83 Linux
/dev/sda2 978942 20969471 19990530 9.5G 5 Extended
/dev/sda5 978944 20969471 19990528 9.5G 8e Linux LVM
Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdc: 15 GiB, 16106127360 bytes, 31457280 sectors
The important bits of info here are the device paths for the new drives (/dev/sdb, /dev/sdc).
Prepare the first drive (/dev/sdb) to be used by the LVM
Type the following:
Code:
fdisk /dev/sdb
n (Create New Partition)
p (Primary Partition)
1 (Partition Number)
{ENTER} (use default for first cylinder)
{ENTER} (use default for last cylinder)
t (Change partition type)
8e (Set to Linux LVM)
p (Preview how the drive will look)
w (Write changes)
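The keystrokes above can also be piped into fdisk non-interactively (a sketch; only try this against a brand-new, empty disk, and on a disposable VM first):

```shell
#!/bin/sh
# Sketch: feed the same answers to fdisk without the interactive prompts.
# n, p, 1, two defaults, t, 8e, p, w -- exactly the sequence listed above.
printf 'n\np\n1\n\n\nt\n8e\np\nw\n' | fdisk /dev/sdb
```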
Prepare the second drive (/dev/sdc) to be used by the LVM
Do the exact same steps as above but start with fdisk /dev/sdc
Create physical volumes using the new drives
If we type fdisk -l, we now see /dev/sdb1 and /dev/sdc1 which are Linux LVM partitions.
Type the following to create physical volumes:
Code:
pvcreate /dev/sdb1
pvcreate /dev/sdc1
Now add the physical volumes to the volume group (LVG) by typing the following:
Code:
vgextend LVG /dev/sdb1
vgextend LVG /dev/sdc1
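The four commands above can also be wrapped in a loop (a sketch; it echoes the commands for review, so drop the echo to actually run them):

```shell
#!/bin/sh
# Sketch: initialize both new partitions and add them to the LVG volume group.
for part in /dev/sdb1 /dev/sdc1; do
  echo pvcreate "$part"
  echo vgextend LVG "$part"
done
```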
You can run the vgdisplay command to see that the "Free PE / Size" has increased.
Now that the space from both drives has been added to the logical volume group called LVG, we can allocate that space to grow the logical volumes and then the file systems as needed.
Shut down and power off the server by typing shutdown -P now
In the VM menu, select VM --> Snapshot --> Take Snapshot. Give it a name like STEP 3 and a description of Ubuntu Server 18.04 LTS, Storage space adjusted, Static IP: 192.168.107.2. The Snapshot Manager should now have a nice hierarchy of snapshots (STEP 1 --> STEP 2 --> STEP 3 --> You are here)