Page 2 of 3
Results 11 to 20 of 28

Thread: Ubuntu server 20.04.1 partition errors

  1. #11
    Join Date
    Jun 2015
    Beans
    17

    Re: Ubuntu server 20.04.1 partition errors

    Hi, thank you for your reply. You are right that during the initial installation I did not notice that Ubuntu only formatted 50% of the drive. I did change one setting: I mounted an SSD volume allocated to the VM (since I need IOPS for collection, parsing and analytics). That SSD is also thin provisioned, but the entire 360 GB is visible.

    The disk is thin provisioned in ESXi. Is there a way to reclaim the free space, or will I need to rebuild?


    sudo pvs
    Code:
      PV         VG        Fmt  Attr PSize   PFree
      /dev/sda3  ubuntu-vg lvm2 a--  <98.50g <49.25g
    sudo vgs
    Code:
      VG        #PV #LV #SN Attr   VSize   VFree
      ubuntu-vg   1   1   0 wz--n- <98.50g <49.25g
    sudo lvs
    Code:
      LV        VG        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      ubuntu-lv ubuntu-vg -wi-ao---- 49.25g

  2. #12
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    21,110
    Distro
    Ubuntu Mate 16.04 Xenial Xerus

    Re: Ubuntu server 20.04.1 partition errors

    The output above shows that Ubuntu doesn't know anything about thin provisioning. Appears that during the installation, sda3 only had about 50G, which it used for the PV. I've never seen that before, but we don't use thin provisioning for VMs, just containers.

    If you understand LVM, it looks like resizing the different logical parts should be easy and not need downtime. If it was me, I'd try this:
    1. Use pvresize to increase the PV size.
    2. Use vgextend to increase the VG size. (may not be needed; pvresize may automatically vgextend too)
    3. Use lvresize to shrink the current LV to 25G or less - the size depends on how much you want to waste with the OS and file-based swap.
    4. Use lvcreate to make another LV to be used for /home and all data, sized to use the amount needed for the next 3 months.


    Whenever using lvresize or lvcreate, I use the -r option so the file system gets handled automatically. Forget to do that and you'll need another step.
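    For what it's worth, a hedged sketch of steps 1-4 (device, VG and LV names taken from the pvs/vgs/lvs output above; verify them first, and note that shrinking an ext4 filesystem generally requires it to be unmounted, so step 3 on / usually means booting from live media):

    ```shell
    # Sketch only -- verify names with pvs/vgs/lvs before running anything.
    sudo pvresize /dev/sda3                        # step 1: grow the PV to fill the partition
    # step 2 (vgextend) is usually unnecessary; the VG size follows the PV.
    # step 3: shrink the root LV; ext4 cannot shrink while mounted, so do
    # this from live media. -r resizes the filesystem along with the LV.
    sudo lvresize -r -L 25G ubuntu-vg/ubuntu-lv
    # step 4: a new LV for /home and data; 40G is an example size.
    sudo lvcreate -L 40G -n home-lv ubuntu-vg
    sudo mkfs.ext4 /dev/ubuntu-vg/home-lv
    ```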

    I'd leave at least 20% of the VG free for future needs and to be used for swap.
    Never allocate the entire disk to LVs. We need to leave some space to extend storage where it is needed later, some for snapshots so our backups can be clean, and perhaps some for swap, which it seems the system doesn't have. Normally, there should be a swap LV on a system like this.
    I'm not a fan of using file-based swap. LVs are much cleaner and avoid certain issues.

    To give you an idea about different LVM:
    Code:
    regulus:~$ sudo lvs
      LV     VG            Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      root   vgubuntu-mate -wi-ao---- 17.00g                                                    
      home   vgubuntu-mate -wi-ao---- 12.00g                                                    
      swap_1 vgubuntu-mate -wi-ao----  4.10g                                                    
    regulus:~$ sudo vgs 
      VG            #PV #LV #SN Attr   VSize  VFree
      vgubuntu-mate   2   3   0 wz--n- 39.49g 6.39g
    regulus:~$ sudo pvs
      PV         VG            Fmt  Attr PSize   PFree
      /dev/vda5  vgubuntu-mate lvm2 a--  <29.50g    0 
      /dev/vdb1  vgubuntu-mate lvm2 a--  <10.00g 6.39g
    That's on my main desktop. I screwed up during the install and only gave the VM 30G because the prior desktop VM was using only 25G total. 20.04 is bloated. Anyway, I added a new 10G physical disk (as far as the installed VM knew), vdb. Then I made that new disk part of the VG and gave some of the storage to the root LV.
    Code:
    regulus:~$ lsblk -e 7 -o name,size,type,fstype,mountpoint
    NAME                       SIZE TYPE FSTYPE      MOUNTPOINT
    sr0                       1024M rom              
    vda                         30G disk             
    ├─vda1                     512M part vfat        /boot/efi
    ├─vda2                       1K part             
    └─vda5                    29.5G part LVM2_member 
      ├─vgubuntu--mate-root     17G lvm  ext4        /
      ├─vgubuntu--mate-swap_1  4.1G lvm  swap        [SWAP]
      └─vgubuntu--mate-home     12G lvm  ext4        /home
    vdb                         10G disk             
    └─vdb1                      10G part LVM2_member 
      └─vgubuntu--mate-root     17G lvm  ext4        /
    All this work was done while the VM kept running. No downtime needed. These were the commands: https://ubuntuforums.org/showthread....6#post13963156
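    The disk-add portion goes roughly like this (hedged sketch; the linked post has the exact commands that were used, and the sizes below are examples):

    ```shell
    # Add a new disk's partition to an existing VG and grow root online.
    sudo pvcreate /dev/vdb1                        # after partitioning the new disk
    sudo vgextend vgubuntu-mate /dev/vdb1          # the VG now spans both PVs
    sudo lvextend -r -L +3G vgubuntu-mate/root     # grow root and its ext4 FS, no unmount needed
    ```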

    Notice how I left 6G free? This gets used for nightly backups as snapshot storage. At some point in the future, I may clean this up by extending the vda allocation to the VM - really doesn't matter. On the VM host, vda and vdb are just LVs presented to the VM, regulus, as block devices. See:
    Code:
    $ sudo lvs
      LV                VG       Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      lv-regulus        hadar-vg -wi-ao----  30.00g                                                    
      lv-regulus-2      hadar-vg -wi-ao----  10.00g
    A little "Inception" loop here. Like that movie.
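    The nightly snapshot use of that free space looks roughly like this (a sketch; the mount point and snapshot size are assumptions):

    ```shell
    # Snapshot the root LV, back it up from a frozen view, then drop the snapshot.
    sudo lvcreate -s -L 5G -n root-snap vgubuntu-mate/root
    sudo mkdir -p /mnt/root-snap
    sudo mount -o ro /dev/vgubuntu-mate/root-snap /mnt/root-snap
    # ... run the backup tool against /mnt/root-snap ...
    sudo umount /mnt/root-snap
    sudo lvremove -f vgubuntu-mate/root-snap
    ```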

  3. #13
    Join Date
    Jun 2015
    Beans
    17

    Re: Ubuntu server 20.04.1 partition errors

    Quote Originally Posted by TheFu View Post
    The output above shows that Ubuntu doesn't know anything about thin provisioning. Appears that during the installation, sda3 only had about 50G, which it used for the PV. I've never seen that before, but we don't use thin provisioning for VMs, just containers.




    Thank you very much for the detailed explanation. Given that I am a novice, I will need some time to go through the commands' man pages before I carry out the change.

    Thank you very much once again.

  4. #14
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    21,110
    Distro
    Ubuntu Mate 16.04 Xenial Xerus

    Re: Ubuntu server 20.04.1 partition errors

    If you aren't 100% certain about each command, it is best to create another VM with tiny LVs just for testing. For testing out LVM commands, it doesn't matter whether 100G or 100M is used.
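    Even without a second VM, LVM commands can be practiced safely on a loopback file (assumes root and the lvm2 package; everything here is throwaway):

    ```shell
    # Create a 100M file-backed "disk" and practice LVM on it.
    truncate -s 100M /tmp/lvm-practice.img
    LOOP=$(sudo losetup --show -f /tmp/lvm-practice.img)
    sudo pvcreate "$LOOP"
    sudo vgcreate practicevg "$LOOP"
    sudo lvcreate -L 40M -n practicelv practicevg
    sudo lvs practicevg            # try pvresize, lvresize, etc. here
    # Cleanup:
    sudo vgremove -f practicevg
    sudo pvremove "$LOOP"
    sudo losetup -d "$LOOP"
    rm /tmp/lvm-practice.img
    ```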

  5. #15
    Join Date
    Jun 2015
    Beans
    17

    Re: Ubuntu server 20.04.1 partition errors

    I think I have already screwed up.

    lsblk

    Code:
    NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0                       7:0    0    15M  1 loop /snap/aws-cli/130
    loop1                       7:1    0    97M  1 loop /snap/core/9665
    loop2                       7:2    0   9.1M  1 loop /snap/canonical-livepatch/95
    loop3                       7:3    0    55M  1 loop /snap/core18/1880
    loop4                       7:4    0 118.7M  1 loop /snap/google-cloud-sdk/143
    loop5                       7:5    0  71.3M  1 loop /snap/lxd/16100
    loop6                       7:6    0  71.5M  1 loop /snap/lxd/16530
    loop7                       7:7    0  59.6M  1 loop /snap/powershell/137
    loop8                       7:8    0  29.9M  1 loop /snap/snapd/8542
    loop9                       7:9    0    55M  1 loop /snap/core18/1705
    loop10                      7:10   0  27.1M  1 loop /snap/snapd/7264
    loop11                      7:11   0   3.5M  1 loop /snap/stress-ng/4462
    loop12                      7:12   0 118.9M  1 loop /snap/google-cloud-sdk/144
    sda                         8:0    0   100G  0 disk
    ├─sda1                      8:1    0   512M  0 part /boot/efi
    ├─sda2                      8:2    0     1G  0 part /boot
    └─sda3                      8:3    0  98.5G  0 part
      └─ubuntu--vg-ubuntu--lv 253:0    0  49.3G  0 lvm  /
    sdb                         8:16   0   360G  0 disk /mnt/ssd


    Then I ran:


    pvresize --setphysicalvolumesize 75G /dev/sda3


    Code:
    /dev/sda3: Requested size 75.00 GiB is less than real size <98.50 GiB. Proceed?  [y/n]: y
      WARNING: /dev/sda3: Pretending size is 157286400 not 206565376 sectors.
      Physical volume "/dev/sda3" changed
      1 physical volume(s) resized or updated / 0 physical volume(s) not resized

    Re-ran

    lsblk

    Code:
    NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0                       7:0    0    15M  1 loop /snap/aws-cli/130
    loop1                       7:1    0    97M  1 loop /snap/core/9665
    loop2                       7:2    0   9.1M  1 loop /snap/canonical-livepatch/95
    loop3                       7:3    0    55M  1 loop /snap/core18/1880
    loop4                       7:4    0 118.7M  1 loop /snap/google-cloud-sdk/143
    loop5                       7:5    0  71.3M  1 loop /snap/lxd/16100
    loop6                       7:6    0  71.5M  1 loop /snap/lxd/16530
    loop7                       7:7    0  59.6M  1 loop /snap/powershell/137
    loop8                       7:8    0  29.9M  1 loop /snap/snapd/8542
    loop9                       7:9    0    55M  1 loop /snap/core18/1705
    loop10                      7:10   0  27.1M  1 loop /snap/snapd/7264
    loop11                      7:11   0   3.5M  1 loop /snap/stress-ng/4462
    loop12                      7:12   0 118.9M  1 loop /snap/google-cloud-sdk/144
    sda                         8:0    0   100G  0 disk
    ├─sda1                      8:1    0   512M  0 part /boot/efi
    ├─sda2                      8:2    0     1G  0 part /boot
    └─sda3                      8:3    0  98.5G  0 part
      └─ubuntu--vg-ubuntu--lv 253:0    0  49.3G  0 lvm  /
    sdb                         8:16   0   360G  0 disk /mnt/ssd
    root@inmum-i-ssslp01:~# pvresize --setphysicalvolumesize 75G /dev/sda3

    I may need some hand-holding. I have to submit my final-year project, hence the rush.

  6. #16
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,423
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Ubuntu server 20.04.1 partition errors

    I don't understand what is happening here, and in your place I would STOP until we clarify a few things.

    Let's go back to basics: yes/no questions to keep it simple.

    1) According to the first posts here, the root LV is only 22% full. You don't even need to expand it. I know it has 50GB out of 100GB, but that is not an issue. So, do you want to extend the root LV to more than 50GB? Yes/No? If yes, to which size do you want to expand it?

    2) I see you tried to shrink the PV using pvresize. Is there a reason you really need to do this?

    TheFu gave you a lot of detailed information and he is a respected forum contributor, but in this case, with an LVM novice, all that info might be counter-productive. You see, he clearly said "use pvresize to increase PV size" and you tried to shrink your PV, without detailing a valid reason before doing it. If you continue hacking in commands like that without understanding what they do, and without needing to run them at all in the first place, that is an excellent recipe for disaster.

    Also, the thin provisioning in VMware has no influence here. We can talk about that too, but I don't see how it is related. Please elaborate. Thin disks are no different from the VM and OS side. They matter on the VMware side, because with thin provisioning you can overprovision your datastore, fill it up, and crash it.

    If you answer the above questions we can tell you how to extend your root LV in 5 minutes. But again, it is only 22% full and I don't even see a need to extend it. I think that, not knowing what LVM is, you just got "scared" seeing 50GB instead of the expected 100GB.
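    For reference, if the answer to 1) is yes, extending the root LV into the free VG space is typically a single online command (hedged sketch; confirm the VG/LV names with lvs first):

    ```shell
    # Grow the root LV and its ext4 filesystem to use all free space in the VG.
    sudo lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
    df -h /        # confirm the new size
    ```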
    Last edited by darkod; August 5th, 2020 at 05:33 PM.
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  7. #17
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    21,110
    Distro
    Ubuntu Mate 16.04 Xenial Xerus

    Re: Ubuntu server 20.04.1 partition errors

    darkod -
    I think the OP thin provisioned on the VMware side, starting with 50G. The installation saw that and used 50G for the PV, as seen in the output of the pvs, vgs and lvs commands in post #11 above. Then VMware thin provisioning appears to have resized the storage behind the scenes. This surprised the OP and it surprised me, but I know thin provisioning can do that, if set up in that manner.

    I don't think the pvresize command did anything bad. Seeing the pvs, vgs, and lvs output again, now, would be useful.

    The plain lsblk command isn't as useful as this alias:
    Code:
    alias lsb-no-loop='lsblk -e 7 -o name,size,type,fstype,mountpoint'
    This version doesn't show any loop storage and does show the file system for each entry. Not seeing cruft helps understanding.
    Last edited by TheFu; August 5th, 2020 at 06:43 PM.

  8. #18
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,423
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Ubuntu server 20.04.1 partition errors

    Quote Originally Posted by TheFu View Post
    I think the OP thin provisioned on the VMware side, starting with 50G. The installation saw that and used 50G for the PV
    I have to disagree on this. The very first post shows lsblk output which clearly states sda is 100GB (in line with having a 100GB VHDD in VMware; thin or thick provisioned doesn't matter). And it also shows the LV is only 50GB. The latter might be due to a selection during installation, or due to the guided LVM method not using 100% of the available disk; I'm not sure.

    Anyway, to me it looked pretty clear: a 100GB disk with a 50GB LV which you can extend if needed. No messing with the PV and such is necessary, not that I can see.

    I think the OP's only confusion here was seeing 50GB for / after giving the VM a 100GB disk in VMware, because of not understanding how LVM works and that you can easily extend / to 100GB if you wanted.
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  9. #19
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    21,110
    Distro
    Ubuntu Mate 16.04 Xenial Xerus

    Re: Ubuntu server 20.04.1 partition errors

    How would someone end up with a 50G PV doing a simple server install with a 100G partition? When I've tried to set up a 25G root LV during the installer, I've never been successful. It always takes the entire partition for the PV, VG and root LV. Drives me crazy, but fortunately, for someone experienced, reducing the root LV to the size I want and adding the other LVs in the desired sizes isn't THAT difficult, just a hassle. At install time, I really miss the CentOS disk/LVM tools.

    Completely unrelated, but this is bad:
    Code:
    sdb                         8:16   0   360G  0 disk /mnt/ssd
    Really, the /dev/sdb disk should have been partitioned, then a file system placed onto that partition. Or better, use LVM, so the storage on the SSD can be better managed with PVs, VGs and LVs. Normally, people make this mistake with RAID setups.
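    If that SSD were being set up fresh, it might look like this (illustration only; redoing /dev/sdb this way WIPES the data currently on it, and the VG/LV names are made up):

    ```shell
    # Illustration only -- these commands DESTROY everything on /dev/sdb.
    sudo parted -s /dev/sdb mklabel gpt
    sudo parted -s /dev/sdb mkpart primary 1MiB 100%
    sudo pvcreate /dev/sdb1
    sudo vgcreate ssd-vg /dev/sdb1
    sudo lvcreate -l 80%VG -n data-lv ssd-vg   # leave headroom for snapshots and growth
    sudo mkfs.ext4 /dev/ssd-vg/data-lv
    ```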
    The diagram here: https://www.brainupdaters.net/ca/bri...e-application/ should help you understand the relationship between
    • whole drives
    • Partition tables (msdos or GPT)
    • partitions
    • PVs
    • VGs
    • LVs and
    • File systems

    With LVM, we almost always work from the VG, LV and file system levels after a disk is added to a system. LVM is more complex, but provides crazy flexibility, especially for storage that will be used over multiple years.

    But new-to-Unix users are seldom ready for LVM's complexities/capabilities and are often better off NOT using LVM.

  10. #20
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,423
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Ubuntu server 20.04.1 partition errors

    Quote Originally Posted by TheFu View Post
    How would someone end up with a PV of 50G doing a simple server install with 100G partition?
    That is the thing bugging me. But since I still haven't done a clean install of 20.04, my first assumption was that the installer "helps" you by not using the whole VG for a single LV right from the start.

    I have used plenty of thin disks in VMware and that shouldn't have created this situation, because from the VM and OS side the disk is seen as the whole 100GB. The only difference is on the VMware side, where a thick disk grabs the whole 100GB right away, while a thin disk only uses space in the datastore according to how full the disk is in the OS. As usage in the OS grows, the VHDD file in the datastore grows too, up to 100GB.

    But anyway, going into too much VMware detail can also add to the confusion here.
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

