Why are people assuming the hypervisor is using file-based storage? The OP never said that.
I'm using KVM as the hypervisor.
Anyways, I have a 20.04 VM system running on a 16.04 server that uses LVM to provide storage to the VM-guest. Here's exactly the situation and how I'll expand the storage in just a few commands:
Current situation, inside the guest:
Code:
$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vgubuntu--mate-root ext4 12G 11G 188M 99% /
/dev/mapper/vgubuntu--mate-home ext4 12G 5.9G 5.4G 53% /home
/dev/vda1 vfat 511M 7.1M 504M 2% /boot/efi
I'm out of storage on /: 12G with 11G used. Ouch! I knew my 30G virtual HDD was going to be tight, but the prior system was 20G with 10G free on 16.04. Ah ... 2.5G of snaps. There's one bloated difference. I also set a 4.1G swap.
Code:
$ sudo lvs
LV VG Attr LSize Pool Origin
home vgubuntu-mate -wi-ao---- 12.00g
root vgubuntu-mate -wi-ao---- 12.00g
swap_1 vgubuntu-mate -wi-ao---- 4.10g
Ok, because I use LVM, I don't need to resize vda. I'll just add vdb using the hypervisor.
From the hypervisor:
Code:
$ sudo lvs
LV VG Attr LSize
lv-regulus hadar-vg -wi-ao---- 30.00g
Code:
$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
hadar-vg 1 13 0 wz--n- 476.22g 42.44g
vg-hadar 1 1 0 wz--n- 465.76g 36.00m
There are other LVs for other VMs and for local OS storage. hadar-vg has some free storage. 10G more for the new vHDD.
From the hypervisor, I added a 10G LV while the guest was running. This is like hot-plugging an HDD with no partitions and no partition table, just slapped into the VM. Still on the hypervisor:
Code:
$ sudo lvs
LV VG Attr LSize Pool
lv-regulus hadar-vg -wi-ao---- 30.00g
lv-regulus-2 hadar-vg -wi-ao---- 10.00g
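The post doesn't show the hypervisor-side commands, but under KVM/libvirt the hot-add above could look roughly like this (a sketch; the domain name "regulus" and the target device "vdb" are assumptions based on the guest output):

```shell
# Carve a new 10G LV for the guest out of the hypervisor's VG:
sudo lvcreate -L 10G -n lv-regulus-2 hadar-vg

# Hot-plug it into the running guest as a virtio disk, and keep it
# in the domain definition across reboots:
sudo virsh attach-disk regulus /dev/hadar-vg/lv-regulus-2 vdb \
    --targetbus virtio --persistent
```

Because the bus is virtio, the guest sees the new disk immediately as /dev/vdb with no reboot.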
Inside the guest-VM, regulus, it showed up as:
Code:
$ sudo fdisk -l /dev/vdb
Disk /dev/vdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Sweet!
Create a partition table, primary partition, set the partition type to Linux LVM inside vdb, then check it:
Code:
$ sudo fdisk -l /dev/vdb
Disk /dev/vdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: FADD8143-17B4-E446-B328-34BCE660B93C
Device Start End Sectors Size Type
/dev/vdb1 2048 20971486 20969439 10G Linux LVM
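For reference, the same partitioning can be scripted non-interactively. This is a sketch using sgdisk (the Disklabel line above shows GPT; the post used an interactive tool, so the exact steps differ):

```shell
# One full-disk partition, typecode 8E00 = Linux LVM:
sudo sgdisk --new=1:0:0 --typecode=1:8E00 /dev/vdb
sudo partprobe /dev/vdb        # tell the kernel to re-read the table
sudo fdisk -l /dev/vdb         # verify the result
```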
That's it for partitioning. The system was running the entire time.
vgextend is our friend for adding the new vdb1 partition to our existing VG, vgubuntu-mate:
Code:
$ sudo vgextend vgubuntu-mate /dev/vdb1
Volume group "vgubuntu-mate" successfully extended
$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
vgubuntu-mate 2 3 0 wz--n- 39.49g 11.39g
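Recent LVM releases will initialize /dev/vdb1 as a PV for you during vgextend (after a prompt); if an older release refuses the bare partition, initialize it first, then pvs confirms both PVs:

```shell
sudo pvcreate /dev/vdb1   # only needed if vgextend won't take the bare partition
sudo pvs                  # should list /dev/vda5 (~29.5G) and /dev/vdb1 (~10G)
```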
Finally, let's extend the nearly full root LV by 5G (the -r flag resizes the filesystem along with the LV):
Code:
$ sudo lvextend -r --size +5G /dev/vgubuntu-mate/root
Size of logical volume vgubuntu-mate/root changed from 12.00 GiB (3072 extents) to 17.00 GiB (4352 extents).
Logical volume vgubuntu-mate/root successfully resized.
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/mapper/vgubuntu--mate-root is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 3
The filesystem on /dev/mapper/vgubuntu--mate-root is now 4456448 (4k) blocks long.
And checking df again:
Code:
$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vgubuntu--mate-root ext4 17G 11G 4.9G 69% /
/dev/mapper/vgubuntu--mate-home ext4 12G 5.9G 5.4G 53% /home
/dev/vda1 vfat 511M 7.1M 504M 2% /boot/efi
17G with 11G used.
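If I'd wanted to hand root all of the remaining free space instead of a fixed 5G, lvextend also takes an extent expression:

```shell
# Grow root by every free extent left in the VG (here, the whole 11.39G
# instead of 5G) and resize the ext4 filesystem in the same step:
sudo lvextend -r -l +100%FREE /dev/vgubuntu-mate/root
```

I deliberately didn't; see below for why keeping free extents in the VG is useful.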
And the lsblk:
Code:
$ lsblk
NAME SIZE TYPE FSTYPE MOUNTPOINT
sr0 1024M rom
vda 30G disk
├─vda1 512M part vfat /boot/efi
├─vda2 1K part
└─vda5 29.5G part LVM2_member
├─vgubuntu--mate-root 17G lvm ext4 /
├─vgubuntu--mate-swap_1 4.1G lvm swap [SWAP]
└─vgubuntu--mate-home 12G lvm ext4 /home
vdb 10G disk
└─vdb1 10G part LVM2_member
└─vgubuntu--mate-root 17G lvm ext4 /
Why did I add 10G more storage to the VG, but only add 5G more to the LV? Hummmm.
And just for fun, here's the uptime inside the VM-guest, regulus:
Code:
$ uptime
19:01:43 up 6 days, 11:25, 2 users, load average: 0.28, 0.18, 0.10
Zero downtime for this process.
It would have taken under 5 minutes if I hadn't been posting here as I did it and carefully checking manpages. Knowing what's possible makes it easy to find a path to a solution.
Ok, the only reason I did it this way is that all the storage in this physical system sits on a single SSD, so adding another virtual HDD to an existing system like this doesn't introduce any new redundancy risk. I could have disabled the swap LV, stolen that 4.1G for the root LV, then added a different virtual disk for the swap too. I might still do that, just because it seems cleaner.
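That swap shuffle would look something like this (a sketch, using the LV names and sizes from the output above; /etc/fstab and the initramfs resume setting would need matching edits):

```shell
sudo swapoff /dev/vgubuntu-mate/swap_1            # stop using the swap LV
sudo lvremove vgubuntu-mate/swap_1                # return its ~4.1G to the VG
sudo lvextend -r -L +4G /dev/vgubuntu-mate/root   # hand it to root
# A replacement swap LV would then go on a separate new virtual disk.
```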
And if this really bothers me, I could create a new vHDD and use pvmove to relocate the current PVs onto the new storage as a way to consolidate it. LVM provides options. So does using virtual machines. But only if the VMs don't get full. Having some free storage in the VG means we can create a snapshot before doing a backup, back up the non-changing snapshot, then delete the snapshot when we finish.
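That snapshot-before-backup workflow is only a few commands. A sketch, where the 2G snapshot size, the /mnt mount point, and the /backups path are all assumptions:

```shell
# Create a snapshot of root using some of the VG's free extents:
sudo lvcreate --size 2G --snapshot --name root-snap /dev/vgubuntu-mate/root

# Mount it read-only and back up the frozen, self-consistent view:
sudo mount -o ro /dev/vgubuntu-mate/root-snap /mnt
sudo tar -czf /backups/root-$(date +%F).tar.gz -C /mnt .   # hypothetical path

# Drop the snapshot as soon as the backup finishes, so it stops
# consuming copy-on-write space:
sudo umount /mnt
sudo lvremove -y vgubuntu-mate/root-snap
```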
Bob's my uncle too.