18.04 DH replacement



Robbyx
May 16th, 2019, 08:57 PM
I have just bought a PCIe SSD to replace my SATA SSD.

Is it easy to set up the new drive with LVM? I looked for a network/alternate installer image but could not find one. For a single desktop machine, should I persevere with LVM? I want it to be easy to change partition sizes.

I found this message on askubuntu, which put me off installing onto an LVM drive:


You could try to run Ubuntu without installing it and partition the way you like, also with LVM. Then run the installer and choose the LVM volumes to install to. But you should be aware that an LVM setup on Ubuntu 18.04 might slow down your boot time significantly. You should not split the usr and root partitions on a systemd-managed OS.

I have never deliberately split the usr and root partitions. I have split home into a separate partition.

I read elsewhere that LVM is embedded in the kernel and so should not slow down the system. Which view is right?

TheFu
May 16th, 2019, 11:47 PM
If it is a new drive, I'd try about 10 different installs and see for myself. If you do a minimal install, it should take less than 10 min with an SSD. I tested with a spinning HDD and it was 12 min. When I was testing, the 18.04 installer didn't support LVM in any way.
We don't use 18.04, so I can't say anything about the current situation besides that you should test yourself. Make certain you use EFI, GPT, and align the underlying partitions on sector boundaries. The automatic installer might make that possible, but in prior versions it always set up MBR partition tables, even if the disk was already set up for GPT.

I'm a huge believer in using LVM. The flexibility it allows more than makes up for any negatives. I've found LVM to be performance-enhancing, especially for the mkfs command: seconds rather than 20+ minutes.

Robbyx
May 17th, 2019, 12:04 AM
Thank you. Your reply has given me the confidence to try out LVM. However, I do not know how to deal with your comment to "Make certain you use EFI, GPT, and align the underlying partitions on sector boundaries." Have you seen a step-by-step set of instructions that covers those factors as well?

TheFu
May 17th, 2019, 01:55 AM
I've never looked. None of them is hard, if you understand the requirements for each.

Make a plan.
Write it out.
Look up all the commands in advance.
I like to test things inside a virtual machine if I'm not certain. Basically zero risk that way. Spinning up a VM takes a few minutes. When I'm done, wipe it. Gone.

If you don't understand the terms, do some research. GPT and sector alignment are on Wikipedia. For UEFI and how that relates to Ubuntu requirements, oldfred posts links all the time.

Dennis N
May 17th, 2019, 03:33 AM
I prefer using LVM myself whenever possible, but it takes some study and practice to get the hang of it. LVM management is largely done through terminal commands you need to get familiar with.

Here is a guide for you. It's the first thing I read when I started using LVM for my installs several years ago. Fortunately, it's still online and hasn't disappeared.
http://www.tutonics.com/2012/11/ubuntu-lvm-guide-part-1.html

Of course, with an NVMe disk, the device notation will change from that in the article. Also, last time I looked, the GUI tool referred to in the article isn't around anymore.

Enjoy your new speedy disk!

Robbyx
May 20th, 2019, 03:52 PM
I am struggling a bit with LVM because I am not able to create logical volumes:

I created the PV on my SSD through the live CD. I did this by doing a new install of the OS onto the SSD, choosing the option to use LVM.

The PV name is /dev/nvme0n1p2 and it is showing as 960GB

VG name is ubuntu-vg

My next step is to try to allocate some space in the VG so that it becomes part of a logical volume.

sudo lvcreate -L 30G -n home ubuntu-vg

This has not worked because I get a no space available response to the above command.

I intended to mount the home partition but cannot get to that point at present because I cannot create the logical volume.

What steps have I missed out?

cruzer001
May 20th, 2019, 04:00 PM
This may help your understanding of LVM.

https://ubuntuforums.org/showthread.php?t=2388271

Robbyx
May 20th, 2019, 04:37 PM
Cruzer001

Thank you for the referral. I picked up some really good points about using LVM, but I did not see a solution to the specific problem I outlined above. Could you have another look at my query in #6? I am sure it is just a misunderstanding of the steps in creating a working LVM, but I cannot spot what is missing.

TheFu
May 20th, 2019, 04:52 PM
This may help your understanding of LVM.

https://ubuntuforums.org/showthread.php?t=2388271

Ah - that was a good thread. Plenty of stuff there that you won't learn from any tutorial or book, just experience.

Robby, the way to get the best help is to post the exact commands and the exact output, both copy/pasted. That last post had a few things that would never work, which just adds to the confusion.

Also, whenever posting commands and output, please use code-tags. We are used to seeing it that way, and often the indentation/columns matter greatly. Use Adv-Reply or Go Advanced to get the code-tag (#) button.

Ok, moving on.

There just aren't any good GUIs for LVM stuff. There are many ways that LVM can be used that a GUI just can't handle. At worst, the GUI can lead to total data loss, so nobody uses them.

Some helpful commands for all the LVM tools:
lvs pvs vgs
The older commands with more details that are seldom needed:
lvdisplay vgdisplay pvdisplay

Most LV commands can handle file systems at the same time - to see them easily, type "lv{tab}" to let command completion show all the commands that begin with lv.

$ lv{tab}
lvchange    lvextend    lvmconfig    lvmpolld    lvremove    lvscan
lvconvert   lvm         lvmdiskscan  lvmsadc     lvrename
lvcreate    lvmchange   lvmdump      lvmsar      lvresize
lvdisplay   lvmconf     lvmetad      lvreduce    lvs
See how nice the code-tags look?

So, back to the last question above about not being able to create a new LV. It is likely that the entire VG was used by the installer. If you run lvs and vgs, you will probably see that there isn't any free VG space left. Good news ... see that list of "lv" commands above ... see lvreduce? Use that to reduce the "root" LV, and be certain to use the option that also reduces the file system at the same time. Here's an example from my system:

$ sudo lvs
  LV      VG        Attr       LSize   Pool Origin Data%
  home-lv ubuntu-vg -wi-ao----  75.00g
  root    ubuntu-vg -wi-ao----  25.00g
  stuff   ubuntu-vg -wi-ao---- 100.00g
  swap_1  ubuntu-vg -wi-ao----   4.46g

IMHO, the "root" LV should be 25G in size. Not larger. Reasons.
My "home-lv" above is probably too large. I'm using 20G right now and have a bunch of crap that needs to be deleted. On an SSD especially, leaving lots of free storage unused is important. Inside the SSD, the controller does all sorts of wear levelling. Nothing inside the SSD works anything like spinning-disk storage. SSDs are all logical, virtually mapped. The better, longer-lasting SSDs have smart controllers that wear-level best when there's plenty of working room, i.e. unused storage.
I also had to resize my swap LV.
You can ignore the "stuff" LV. I do. My SSD is 500G, for reference. Over 50% isn't allocated.

$ sudo vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  ubuntu-vg   1   4   0 wz--n- 464.54g 260.08g


Whenever I'm working with LVM commands, I'll have 1 terminal for entering the command and another with the manpage up for the command options.

I use lvreduce, lvextend, and lvresize the most. If I need to move LVs to a new disk/SSD, there are VG and PV commands to move the entire VG (and all the LVs inside). lvreduce only works with file systems that support shrinking, like ext4. Most of the time, these lv commands can be run while the OS is running - I'm positive that lvextend works on a running system. They all have a "handle the file system too" option.
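As an aside, the size columns in lvs output are easy to post-process when planning resizes. Here's a minimal sketch; the helper names are invented for this example, and a real script should ask lvs for machine-readable output with --units b --noheadings instead of scraping the human-readable table:

```python
# Sum the LSize column (4th field) of plain `lvs` output, as in the
# sample table above.  Illustration only.

def parse_size_gib(token):
    """Convert an lvs size token like '75.00g' or '976.00m' to GiB."""
    units = {"m": 1 / 1024, "g": 1, "t": 1024}
    return float(token[:-1].lstrip("<")) * units[token[-1].lower()]

def total_allocated(lvs_output):
    """Total allocated size, in GiB, across all LVs in the output."""
    total = 0.0
    for line in lvs_output.strip().splitlines()[1:]:  # skip header row
        total += parse_size_gib(line.split()[3])
    return total

sample = """\
  LV      VG        Attr       LSize
  home-lv ubuntu-vg -wi-ao---- 75.00g
  root    ubuntu-vg -wi-ao---- 25.00g
  stuff   ubuntu-vg -wi-ao---- 100.00g
  swap_1  ubuntu-vg -wi-ao---- 4.46g
"""
print(round(total_allocated(sample), 2))  # 204.46 GiB allocated
```

Comparing that total against the VFree column of vgs tells you how much room is left before creating a new LV.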

Dennis N
May 20th, 2019, 05:55 PM
I did this by doing a new install of the OS onto the SSD, choosing the option to use LVM.

When you choose the 'use LVM' option on the Installation type screen, Ubuntu creates one volume group, with one logical volume for root taking up the entire volume group, so there is no room to create another logical volume (like your 'home').

Creating the LVM structure on the disk before installing will avoid this problem.

To fix your problem, you can shrink the root logical volume and then create a new logical volume. Here is an example that reduces by 5000 extents; 1000 extents is about 4 GB:

dmn@Sydney:~$ sudo lvreduce --resizefs --extents -5000 /dev/sn500_vg/manjaro_mate
You can use gigabytes instead of extents, like:

dmn@Sydney:~$ sudo lvreduce --resizefs --size -20G /dev/sn500_vg/manjaro_mate
will reduce size by 20 GB
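The extent arithmetic above is easy to check. A quick sketch, assuming LVM's default 4 MiB physical-extent (PE) size; the helper names are made up for illustration:

```python
# Extent <-> size arithmetic, assuming the default 4 MiB physical extent.
PE_MIB = 4

def extents_to_gib(extents, pe_mib=PE_MIB):
    """Size in GiB represented by a number of extents."""
    return extents * pe_mib / 1024

def gib_to_extents(gib, pe_mib=PE_MIB):
    """Whole extents covering a size in GiB."""
    return int(gib * 1024 // pe_mib)

print(extents_to_gib(1000))  # 3.90625  -> "1000 extents is about 4 GB"
print(extents_to_gib(5000))  # 19.53125 -> the -5000 example is roughly -20G
print(gib_to_extents(25))    # 6400 extents for a 25 GiB root LV
```

The PE size for a given VG is shown by vgdisplay ("PE Size"), so you can confirm the 4 MiB assumption before resizing.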

Robbyx
May 26th, 2019, 06:36 PM
I am trying to reduce the size of the root LV created when I installed the OS onto the new NVMe SSD.
What is the command line to activate the LV "ubuntu-vg/root"?


sudo vgs

  VG        #PV #LV #SN Attr   VSize   VFree
  ubuntu-vg   1   2   0 wz--n- 893.75g    0


sudo blkid
...
/dev/nvme0n1: PTUUID="c0d..." PTTYPE="gpt"
/dev/nvme0n1p1: UUID="3E57..." TYPE="vfat"
/dev/nvme0n1p2: UUID="FFg..." TYPE="LVM2_member"


sudo lvs
  LV     VG        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root   ubuntu-vg -wi------- <892.80g
  swap_1 ubuntu-vg -wi-------  976.00m


sudo lvreduce --resizefs --size 25G /dev/ubuntu-vg/root

Logical volume ubuntu-vg/root must be activated before resizing filesystem.

TheFu
May 26th, 2019, 08:10 PM
This should do it.

sudo vgchange -ay
that's from memory, so might want to check the manpage.
It shouldn't be necessary unless you booted from alternate media like a desktop live-installer. That is a good practice, BTW.

The vgchange scans all connected storage for LVM members, looks at the PVs, VGs and LVs, then activates all the VGs. Sweet!

TheFu
May 26th, 2019, 08:12 PM
BTW, not all file systems can be reduced inside. I think only ext3 and ext4 support this. For any others, there are a multitude of options, none single-step.

Dennis N
May 26th, 2019, 09:29 PM
Just to be sure, realize that what you have in your post:

sudo lvreduce --resizefs --size 25G /dev/ubuntu-vg/root
will reduce the file system size to 25 GB, not reduce it by 25 GB.

Robbyx
May 26th, 2019, 11:18 PM
Just to be sure, realize that what you have in your post:

sudo lvreduce --resizefs --size 25G /dev/ubuntu-vg/root
will reduce the file system size to 25 GB, not reduce it by 25 GB.

I want the root to be 25 GB.
I will then add other LVs, such as home and data, inside the group.

Robbyx
May 26th, 2019, 11:23 PM
This should do it.

sudo vgchange -ay
that's from memory, so might want to check the manpage.
It shouldn't be necessary unless you booted from alternate media like a desktop live-installer. That is a good practice, BTW.

The vgchange scans all connected storage for LVM members, looks at the PVs, VGs and LVs, then activates all the VGs. Sweet!

I am currently booting off my old SSD and making the changes from there. As soon as I have set up the home and data LVs I can boot off the new NVMe SSD permanently.

Dennis N
May 27th, 2019, 12:44 AM
I want the root to be 25 GB.
I will then add other LVs, such as home and data, inside the group.

Then it's correct as it is. Good luck with the LVM.

Robbyx
May 30th, 2019, 10:53 PM
This is how far I have got:


robins@robins-desktop:~$ sudo vgdisplay
[sudo] password for robins:
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               893.75 GiB
  PE Size               4.00 MiB
  Total PE              228800
  Alloc PE / Size       6644 / 25.95 GiB
  Free  PE / Size       222156 / <867.80 GiB
  VG UUID               JVNXsA-B45j-NlbV-WccT-gTVS-Mi1q-iThHoD



robins@robins-desktop:~$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/root
  LV Name                root
  VG Name                ubuntu-vg
  LV UUID                dE1hPa-1uEE-iv95-11OD-PTCJ-37ce-JAgCJ0
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2019-05-20 11:39:09 +0100
  LV Status              available
  # open                 0
  LV Size                25.00 GiB
  Current LE             6400
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/swap_1
  LV Name                swap_1
  VG Name                ubuntu-vg
  LV UUID                pN0qIp-LcSp-FLkT-qxxk-EPFv-k6jt-yBLFHQ
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2019-05-20 11:39:09 +0100
  LV Status              available
  # open                 0
  LV Size                976.00 MiB
  Current LE             244
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:1



robins@robins-desktop:~$ sudo blkid
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/loop6: TYPE="squashfs"
/dev/loop7: TYPE="squashfs"
/dev/nvme0n1p1: UUID="3E57-48AC" TYPE="vfat" PARTUUID="e2d9ef6c-4dfb-4e32-8b4a-99da33b95fff"
/dev/nvme0n1p2: UUID="FFgm0c-d4Ln-VJQ0-gQOl-BWgm-HrWf-JNhh1G" TYPE="LVM2_member" PARTUUID="0e807de8-cb3c-4e83-bf04-1edc1a90f350"
/dev/sda2: UUID="0414cae8-b4fe-48d6-b06f-9509c3f9f8cb" TYPE="ext4" PARTUUID="00083267-02"
/dev/sda3: UUID="bf95a4fd-9e7a-4a06-afed-e82fdb82ac63" TYPE="ext4" PARTUUID="00083267-03"
/dev/sda5: LABEL="mydocs" UUID="8d99bfc8-90ab-4488-b7da-87e7676aff19" TYPE="ext4" PARTUUID="00083267-05"
/dev/sda6: LABEL="Data inc AV" UUID="1a2e8d10-7b93-4de1-b551-31db6a621276" TYPE="ext4" PARTUUID="00083267-06"
/dev/sda7: UUID="B1BA-7E06" TYPE="vfat" PARTUUID="00083267-07"
/dev/sda8: UUID="02c1d09e-54db-4346-863d-1097ff0ed637" TYPE="swap" PARTUUID="00083267-08"
/dev/sdb1: LABEL="hitachi 750gb" UUID="3c4f4688-4bd7-4262-a2b8-86f71bbd6d0b" TYPE="ext4" PARTUUID="0008ffd0-01"
/dev/sdb2: LABEL="Deju_dup_backup" UUID="c79b648e-48c5-45db-949f-a835928f3816" TYPE="ext4" PARTUUID="0008ffd0-02"
/dev/sdc1: SEC_TYPE="msdos" LABEL="xhd_vfat" UUID="5933-1C4A" TYPE="vfat" PARTUUID="846a1f97-01"
/dev/sdc2: LABEL="xhd system data" UUID="23fcb4e3-c94e-44de-b7a9-105da58918c6" TYPE="ext4" PARTUUID="846a1f97-02"
/dev/sdc3: LABEL="xhd old bacs" UUID="c99be12f-3a2d-4411-ab7c-5696167482dc" TYPE="ext4" PARTUUID="846a1f97-03"
/dev/sdc4: LABEL="spare4" UUID="84a9cb0b-6006-4996-8b4a-2edc07cc25c4" TYPE="ext4" PARTUUID="846a1f97-04"
/dev/loop8: TYPE="squashfs"
/dev/loop9: TYPE="squashfs"
/dev/loop10: TYPE="squashfs"
/dev/loop11: TYPE="squashfs"
/dev/loop12: TYPE="squashfs"
/dev/loop14: TYPE="squashfs"
/dev/loop15: TYPE="squashfs"
/dev/loop16: TYPE="squashfs"
/dev/loop17: TYPE="squashfs"
/dev/loop18: TYPE="squashfs"
/dev/loop19: TYPE="squashfs"
/dev/loop20: TYPE="squashfs"
/dev/loop21: TYPE="squashfs"
/dev/loop22: TYPE="squashfs"
/dev/loop23: TYPE="squashfs"
/dev/loop24: TYPE="squashfs"
/dev/loop25: TYPE="squashfs"
/dev/loop26: TYPE="squashfs"
/dev/loop27: TYPE="squashfs"
/dev/loop28: TYPE="squashfs"
/dev/loop29: TYPE="squashfs"
/dev/loop31: TYPE="squashfs"
/dev/loop33: TYPE="squashfs"
/dev/mapper/ubuntu--vg-root: UUID="ae1f1f8e-39fa-499f-bf18-0d1f172e6a58" TYPE="ext4"
/dev/mapper/ubuntu--vg-swap_1: UUID="b7d4305e-39d9-4c6c-92e8-9e83265d3f65" TYPE="swap"
/dev/loop34: TYPE="squashfs"
/dev/loop35: TYPE="squashfs"
/dev/nvme0n1: PTUUID="c0d9849b-b1a8-4c26-b8e6-bb09676863f0" PTTYPE="gpt"


Could I please have some help to do the following? Despite spending hours trying to work out what to do, I am failing to set up my NVMe SSD in the way I want.

If you could give me the command lines I would be very grateful:

1. Create a home LV in the VG (ubuntu-vg). Size 75 GB, formatted btrfs, type 8e.
2. Create a data LV, 400 GB, formatted btrfs, type 8e.
3. Copy my old active home on my existing SSD to the new home in the VG.
4. Copy the printer settings from the old system area to the new one in the VG.

Based on an earlier reply, I propose to leave the rest of the VG free of further LVs so as to give the disk management system space for alterations and corrections. This means that although the VG is 893 GB, I am only allocating 25 GB + 75 GB + 400 GB, i.e. 500 GB.

What settings should I use in fstab for btrfs type partitions?

TheFu
May 31st, 2019, 12:39 AM
I don't think btrfs is a good fit with LVM. ZFS and btrfs include volume management already, so layering them on LVM is not a good idea. Decide which you want more - LVM or btrfs.

btrfs has been deprecated by Red Hat for a few years. Something to consider. https://access.redhat.com/discussions/3138231

I know that btrfs lies to both the du and df commands. That, along with early data-loss issues and performance problems when used as storage for virtual machines, made it something I could never consider using. I was excited for a merged LVM+file-system solution, but btrfs never seemed to become production-ready.

The *display commands are hard to read. Please use pvs, lvs, and vgs instead, if you've decided to use LVM and not btrfs. That output gets to the heart of the LVM data quickly. blkid output isn't needed; it won't be used.

lsblk output can be useful as an overview.

Robbyx
June 1st, 2019, 10:17 AM
I thought I would see what happened if I tried putting btrfs in place of an ext4 LV:


$ sudo mkfs.btrfs -f /dev/ubuntu-vg/test
btrfs-progs v4.15.1
See http://btrfs.wiki.kernel.org for more information.

Detected a SSD, turning off metadata duplication. Mkfs with -m dup if you want to force metadata duplication.
Label: (null)
UUID: dc82aa36-d520-4745-8771-de47062548ca
Node size: 16384
Sector size: 4096
Filesystem size: 500.00MiB
Block group profiles:
Data: single 8.00MiB
Metadata: single 8.00MiB
System: single 4.00MiB
SSD detected: yes
Incompat features: extref, skinny-metadata
Number of devices: 1
Devices:
ID SIZE PATH
1 500.00MiB /dev/ubuntu-vg/test


Seeing the line in the above code:

Detected a SSD, turning off metadata duplication. Mkfs with -m dup if you want to force metadata duplication.

It seemed like running mkfs with -m dup might cause all sorts of unknown problems, including btrfs not working efficiently, so I decided I had better revert to ext4:


sudo mkfs.ext4 /dev/ubuntu-vg/test
mke2fs 1.44.1 (24-Mar-2018)
/dev/ubuntu-vg/test contains a btrfs filesystem
Proceed anyway? (y/N) y
Creating filesystem with 512000 1k blocks and 128016 inodes
Filesystem UUID: ec183a2b-c09b-4013-be40-a3bbd97d5f68
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done


I would like some assurance that metadata duplication now applies to this ext4 volume. How do I check it?

Here are the latest outputs to show my progress so far:


sudo pvs
  PV             VG        Fmt  Attr PSize   PFree
  /dev/nvme0n1p2 ubuntu-vg lvm2 a--  893.75g <392.31g



sudo lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home    ubuntu-vg -wi-a-----  75.00g
  lv_data ubuntu-vg -wi-a----- 400.00g
  root    ubuntu-vg -wi-a-----  25.00g
  swap_1  ubuntu-vg -wi-a----- 976.00m
  test    ubuntu-vg -wi-a----- 500.00m

Robbyx
June 1st, 2019, 10:29 AM
I have found this useful because of the clear examples:

https://www.tecmint.com/manage-and-create-lvm-parition-using-vgcreate-lvcreate-and-lvextend/

Robbyx
June 1st, 2019, 01:29 PM
I have just found this link about metadata duplication. It is not easy for me to understand, but I think the author is claiming that btrfs can be run in an LVM volume without loss of safety, provided that metadata duplication is switched on.

http://bogdan.org.ua/2016/12/30/how-to-add-enable-metadata-duplication-existing-btrfs-filesystem.html

TheFu
June 1st, 2019, 03:33 PM
Dude. You are so far out in the unknown areas that nobody uses, please make certain you have plenty of excellent backups and expect data loss.

You really need to read and understand the manpages. For mkfs.ext4, -m doesn't do what you think it does:

-m reserved-blocks-percentage
Specify the percentage of the filesystem blocks reserved for the
super-user. This avoids fragmentation, and allows root-owned
daemons, such as syslogd(8), to continue to function correctly
after non-privileged processes are prevented from writing to the
filesystem. The default percentage is 5%.

That link doesn't say anything about LVM. You may want to read the fine print on the blog. USE AT YOUR OWN RISK.

Robbyx
June 2nd, 2019, 04:52 PM
Thank you for your warning, which I have heeded: I will not use btrfs within the LVM system.

Robbyx
June 3rd, 2019, 10:15 PM
I am following the advice at
https://www.maketecheasier.com/move-home-folder-ubuntu/

I want to mount the LV on the NVMe SSD called home at /media/home in the existing SSD's file system. I have added a line in fstab:


uuid=5146d52d-6c4d-446e-a558-a73068ac99a3 /media/home ext4 defaults,noatime,errors=remount-ro 0 0

The uuid comes from blkid:


/dev/mapper/ubuntu--vg-home: UUID="5146d52d-6c4d-446e-a558-a73068ac99a3" TYPE="ext4"


sudo mount -a
mount: /media/home: special device uuid=5146d52d-6c4d-446e-a558-a73068ac99a3 does not exist.

I think I have entered the correct UUID. What can I do to ensure that an LV mounts by its UUID? Removing the uuid entry and replacing it with /dev/mapper/ubuntu--vg-home in fstab does work, with no error message, when running sudo mount -a.

Robbyx
June 3rd, 2019, 10:51 PM
I suspect that the fault was that fstab's entry did not have UUID in caps. I changed it on the new NVMe SSD and the error went away.

Dennis N
June 3rd, 2019, 11:41 PM
That's true, UUID has to be in caps. Another way is to first make a label for the file system and then mount in fstab using the label. Like so:


# mount shared data partition
LABEL=Common-Files /mnt/Common-Files ext4 defaults,noatime 0 2

Instead of a separate home folder, I always use a separate data LV (as above) that can be shared by any other OS by simply mounting it.
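The lowercase-uuid failure above makes sense once you know that mount matches the UUID= and LABEL= tags case-sensitively; anything else in the first fstab field is treated as a literal device path. A toy classifier illustrates the behavior (this is not libmount's real parser; classify_fstab_spec is invented for this sketch):

```python
# Toy model of how the first fstab field is interpreted.  mount recognizes
# a few tags case-sensitively; anything else is taken as a device path,
# which is why "uuid=..." produced "special device uuid=... does not exist".

def classify_fstab_spec(spec):
    for tag in ("UUID=", "LABEL=", "PARTUUID=", "PARTLABEL="):
        if spec.startswith(tag):
            return "tag"
    return "path"

print(classify_fstab_spec("UUID=5146d52d-6c4d-446e-a558-a73068ac99a3"))  # tag
print(classify_fstab_spec("uuid=5146d52d-6c4d-446e-a558-a73068ac99a3"))  # path (no such file)
print(classify_fstab_spec("/dev/mapper/ubuntu--vg-home"))                # path (exists, so it mounts)
```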

Robbyx
June 8th, 2019, 02:37 PM
At present, some of the LVs that I have created will not allow new files to be created in them. This is because the directory is owned by root, and therefore I am tempted to change the owner to my user name. When is it safe to change the owner of a directory or individual files from root to a non-root user?

My concerns about changing permissions arose from a different problem to which I received this comment, and so I wonder if there is a parallel problem for the LVs I have created:


And then... this is REALLY important. You have to make sure that run_ubuntu.sh is owned by root:root. If you don't do that, ANYONE who has write access to the file will be able to execute ANYTHING as root. You need to make sure that this is a safe command because other users will also be able to execute this file as well (as root) unless you do some more stuff.

Robbyx
June 9th, 2019, 09:25 PM
When I run Ubuntu from the LVM I change the boot disk in the BIOS to the Force MP5.

I am reinstalling ubuntu 18.04 using the other option in the live CD setup.

Could you please indicate which LV/ partition I should choose for these 2 settings:

1. /boot

2. device for boot loader installation

The NVMe SSD is split into a FAT32 partition and the LVM group ubuntu-vg, as can be seen from this extract from blkid:


/dev/nvme0n1p1: UUID="3E57-48AC" TYPE="vfat" PARTUUID="e2d9ef6c-4dfb-4e32-8b4a-99da33b95fff"
/dev/nvme0n1p2: UUID="FFgm0c-d4Ln-VJQ0-gQOl-BWgm-HrWf-JNhh1G" TYPE="LVM2_member" PARTUUID="0e807de8-cb3c-4e83-bf04-1edc1a90f350"


All the following LVs in ubuntu-vg, except the swap LV, are ext4


sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home ubuntu-vg -wi-ao---- 75.00g
lv_data ubuntu-vg -wi-a----- 350.00g
mydocs ubuntu-vg -wi-a----- 50.00g
root ubuntu-vg -wi-a----- 45.00g
swap ubuntu-vg -wi-a----- 14.95g
test ubuntu-vg -wi-a----- 300.00m

Dennis N
June 9th, 2019, 10:21 PM
When is it safe to change a directory or individual files from root to a non root name?

You mean non-root owner? Don't change anything that's not in your own home folder. Exception: new LVs you make for your own data. Another exception I made is folders in /opt, on a case-by-case basis - there are some game programs there sitting in separate folders where I made my user the owner. /opt is initially empty.


When I run Ubuntu from the lvm I change the boot disk in the bios to Force MP5

Sorry, don't know what MP5 refers to.


Could you please indicate which LV/ partition I should choose for these 2 settings:

/boot
Do nothing. The installer will create it as a directory on the root file system. You don't need a separate LV for it in unencrypted installs. I can't comment on using encryption.

boot loader location.

With UEFI, the evidence is that the Ubuntu installer installs grub to the EFI system partition on sda. Although it displays other choices, they don't work. That said, when I installed Ubuntu 18.04 on this computer, which has an NVMe SSD and a SATA SSD (which is sda), I could have tested whether the boot loader would install on the NVMe disk, but didn't; I just assumed it wouldn't work with Ubuntu since sda existed. But the Manjaro Gnome install that you see has its boot loader installed on the NVMe disk. The other OSes you see were installed before the NVMe was added.


[dmn@sydney ~]$ lsblk -o name,label,fstype
NAME LABEL FSTYPE
sda
├─sda1 vfat
├─sda2 swap
├─sda3 LVM2_member
│ ├─os_vg2-umate_1604 Ubuntu-Mate-1604 ext4
│ └─os_vg2-fedora Fedora-29 ext4
└─sda4 LVM2_member
├─os_vg2-Xubuntu_1804 Xubuntu-1804 ext4
└─os_vg2-Common Common-Files ext4
nvme0n1
├─nvme0n1p1 vfat
└─nvme0n1p2 LVM2_member
├─sn500_vg-manjaro_mate Manjaro-Mate ext4
├─sn500_vg-vm_disks VM-Disks2 ext4
├─sn500_vg-ubuntu_1804 Ubuntu-1804 ext4
└─sn500_vg-manjaro_gnome Manjaro-Gnome ext4

Robbyx
June 9th, 2019, 11:01 PM
I will try to follow your suggestions when I reinstall tomorrow. I really appreciate your very helpful response.

Dennis N
June 10th, 2019, 01:14 PM
I will try to follow your suggestions when I reinstall tomorrow. I really appreciate your very helpful response.
The SDDe is split into a Fat32 partition and the LVM group ubuntu-vg as can be seen from this extract from blkid:
/dev/nvme0n1p1: UUID="3E57-48AC" TYPE="vfat" PARTUUID="e2d9ef6c-4dfb-4e32-8b4a-99da33b95fff"
/dev/nvme0n1p2: UUID="FFgm0c-d4Ln-VJQ0-gQOl-BWgm-HrWf-JNhh1G" TYPE="LVM2_member" PARTUUID="0e807de8-cb3c-4e83-bf04-1edc1a90f350"

This is the way to start on a new drive. Two physical partitions (you can create these with gparted), one for EFI system partition and one LVM partition for your volume group. And create a GPT partition table first, of course. (I don't use all the available space - only what I plan to use).

After booting the installer into the 'Try Ubuntu' live session, use the terminal to set up the volume group in the LVM partition, the LV for root, and optionally LVs for swap and home. A separate data LV and additional LVs can be created after the OS is installed. Then start the installation and don't select the 'use LVM' option; use the 'something else' option and assign the LV to be used for root and any optional LVs for home and swap.
---------------------------
Note on setup I showed in previous post:
LVM makes it easy to install the additional OSes, which were added over time. I use a swap partition because some OSes don't offer swap files, and because it can be shared by all installed OSes.

TheFu
June 10th, 2019, 02:45 PM
Some excellent advice by Dennis above.

My rules for who needs to own a directory are pretty simple. I don't touch OS directories. Any directory that I create for my purposes needs to be owned by the userid doing most of the work inside that directory. On a single-user desktop, that is probably me. On servers, different processes run as different userids, not me, so those directories and files probably have specific ownership demands. This is very important for system security.


And then... this is REALLY important. You have to make sure that run_ubuntu.sh is owned by root:root. If you don't do that, ANYONE who has write access to the file will be able to execute ANYTHING as root. You need to make sure that this is a safe command because other users will also be able to execute this file as well (as root) unless you do some more stuff.
Without seeing the exact permissions on the file, I cannot say if this is complete bunk or 100% truth. There are ways to have programs run as root - it is called "setuid root". It is necessary for Unix to work. In the old days, sshd and login programs worked that way. pppd on my system is still set up that way.

-rwsr-xr-- 1 root dip 394984 Jun 12 2018 pppd*
See that 's' where the owner's execute 'x' should be? This is a setuid-root program. Anyone in the 'dip' group can run it. When they do, the process is started as root.
Some others:

/usr/bin$ ll |grep rws
-rwsr-sr-x 1 daemon daemon 51464 Jan 14 2016 at*
-rwsr-xr-x 1 root root 71824 Mar 26 15:34 chfn*
-rwsr-xr-x 1 root root 40432 Mar 26 15:34 chsh*
-rwsr-xr-x 1 root root 384520 May 30 04:19 firejail*
-rwsr-xr-x 1 root root 75304 Mar 26 15:34 gpasswd*
-rwsr-xr-x 1 root root 39904 Mar 26 15:34 newgrp*
-rwsr-xr-x 1 root root 54256 Mar 26 15:34 passwd*
-rwsr-xr-x 1 root root 23376 Mar 27 10:40 pkexec*
-rwsr-xr-- 1 root plugdev 45336 May 18 2014 pmount*
-rwsr-xr-- 1 root plugdev 35832 May 18 2014 pumount*
-rwsr-xr-x 1 root root 136808 May 1 12:22 sudo*

Clear as mud?
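To make the mode strings above a little less muddy: the 's' is the setuid bit (04000) replacing the owner's execute 'x'. Python's stat module renders the same string, which makes a handy sanity check; the 4754 mode here matches the pppd listing above:

```python
import stat

# pppd above is mode 4754: the setuid bit (04000) plus 0754 permissions
# (rwxr-xr--).  With setuid set, the owner's execute column shows 's'.
mode = stat.S_IFREG | 0o4754   # a regular file with setuid + 754 perms

print(stat.filemode(mode))        # -rwsr-xr--
print(bool(mode & stat.S_ISUID))  # True: runs with the file owner's uid
```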

Robbyx
June 10th, 2019, 09:38 PM
TheFu: Thank you for that explanation.

Everyone:
Something went wrong with my installation on the NVMe SSD. I have been attempting to reinstall using the standard install from a DVD, reformatting just the root LV, not home. I have done the reinstall a few times, but it fails: when I install onto the NVMe SSD, the installer complains at the end of the installation that it cannot install the boot loader on the chosen (FAT32) partition, and then will not allow me to cancel the installation because it refuses to exit - probably crashed.

As I am only using Ubuntu, with no Windows installations, I assume I only need to create an ESP for the boot loader, but I cannot see that option in GParted and so have chosen FAT32.

Am I correct to have chosen the boot partition to be formatted FAT32? If not, what should I choose?
Following Dennis's comments I would have assumed GParted gives the option to create an EFI system partition, but it does not show in the options.

Here is the current state of play on the SSDe:

As per GParted, looking at /dev/nvme0n1:

/dev/nvme0n1p3 fat32 /media/A2AB-61C4 100MiB
/dev/nvme0n1p1 fat32 /media/robin/BOOT 100MiB
/dev/nvme0n1p2 lvm2 pv ubuntu-vg 893GiB

This is the content of the LVM partition:


LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home ubuntu-vg -wi-ao---- 75.00g
lv_data ubuntu-vg -wi-a----- 350.00g
mydocs ubuntu-vg -wi-a----- 50.00g
root ubuntu-vg -wi-a----- 45.00g
swap ubuntu-vg -wi-a----- 14.95g
test ubuntu-vg -wi-a----- 300.00m

I am assuming that I do not need to do a complete reformat of the NVMe SSD, but I cannot work out how to get a boot partition to work so that the reinstall can proceed to the end.

Dennis N
June 10th, 2019, 10:02 PM
Am I correct to have chosen the boot partition to be formatted to Fat32? If not what should I choose?
FAT32 is good. Maybe you forgot to set the boot flag on this partition. Gparted will then show 'boot,esp' in the 'flags' column.

Partition > Manage Flags

Robbyx
June 11th, 2019, 02:34 AM
I think I followed your advice about the flag but I am still getting a fatal error:

During install, the boot loader install failed: executing 'grub-install /dev/nvme0n1p1' failed.

In GParted the NVMe SSD drive now looks like:

/dev/nvme0n1p1 fat32 /media/robin/BOOT 100MiB #this partition is flagged as boot, esp
/dev/nvme0n1p2 lvm2 pv ubuntu-vg 893GiB

Any idea as to what I am misunderstanding or should change?

Dennis N
June 11th, 2019, 03:31 AM
Well, this isn't right for the EFI system partition (it's not called a boot partition - you don't need to make a separate boot partition).

/dev/nvme0n1p1 fat32 /media/robin/BOOT
The partition you created, formatted FAT32, and flagged boot is automatically found by the installer, set up to mount at /boot/efi, and automatically entered in the /etc/fstab file. So you should not specify a mount point for it. There's really nothing you need to do about this partition.

The only thing you would specify during the install is which LV is to be used for /. Using the 'something else' installation option, the LVs you made show up at the top of the partitioning screen that appears next. Select the one you called 'root' and press the 'change' button. You get a dialog where you must specify use it as ext4 file system, and to mount it at /. If you already formatted it as ext4, you don't need to check the format box to do it again. If not, check the format option here.

Then, click the install button to supply user information and finish up! It should work now.

Robbyx
June 12th, 2019, 06:02 AM
I still cannot get past the bootloader installation failure. The installer will not let me skip the bootloader setup. If I leave it at the default it reports that it is not valid, and it also ignores my choice to continue without a bootloader; I have to force a reboot, and since the OS has not been installed I have to go back to the old SATA SSD setup.

Dennis N
June 12th, 2019, 12:21 PM
I still cannot get past the point of the failure to install a bootloader.
Was the installer booted in UEFI mode? If not, it would fail to install the GRUB bootloader because your drive is not set up for a legacy BIOS install. (You could install an LVM system using legacy BIOS with slightly different partitioning.)
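A quick way to check, from the live/installer session, is to look for the EFI directory that the kernel exposes only when it was booted through UEFI firmware:

```shell
# /sys/firmware/efi exists only on a UEFI boot.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI boot"
else
    echo "Legacy BIOS boot"
fi
```

If this prints "Legacy BIOS boot" in the live session, reboot and pick the UEFI entry for the USB stick in your firmware's boot menu before installing.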

This may not be relevant here, but do your motherboard and firmware fully support NVMe? I had to upgrade the firmware to enable full NVMe support on my Intel H97 board (from 2014).

What is the exact error message you get? Often a Google search on the error message will get some matches that will give some information.

Robbyx
June 12th, 2019, 06:06 PM
Before I bought the NVMe SSD I spoke to the supplier of my motherboard, and they confirmed that the CPU is compatible with the drive and that the system would be able to boot from it. My motherboard is newer than yours.

Error message:
An error occurred and it was not possible to install the boot loader at the specified location. I tried the other locations on the NVMe drive and the error remained.

I have done a search on this error and found that it has a long history. The solutions that sometimes worked seem complex and difficult to follow.

I have spent weeks trying to install to the NVMe SSD with LVM and am at the point of giving up. I propose to reformat the drive and ditch the LVM installation. I will go for normal fixed-size partitions, but will use Btrfs instead of ext4 for all partitions except the small FAT32 partition that holds the bootloader. I am going for Btrfs because I have read that it has a number of advantages over ext4.

Dennis N
June 12th, 2019, 07:37 PM
You might try a 'mixed' system. The drive in the screenshot has both standard partitions and LVM partitions. It started with two regular partitions for two OSes. Then I wanted to learn about LVM, so I made just one LVM partition (sdc5) to try a single LVM install and learn the various LVM commands in the process. Eventually I added another logical volume and installed another OS, and so on. When space ran out, I added another LVM partition (sdc7) and expanded my volume group. After a while I clearly saw the advantages of LVM, and now I'm all in. The exception is an OS whose installer wouldn't do LVM - an example is MX Linux on sdc2.

Robbyx
June 13th, 2019, 10:40 PM
I think a mixed system is a good idea and I would like to try it. Thank you also for the screenshot.

Is the procedure as follows?

Create a first partition on the drive with gparted, formatted FAT32, say 100MiB.

Leave the rest of the drive unpartitioned.
Create a volume group in the remaining space. Size it, say, 100GB less than the available space, to leave the SSD some headroom.
Create appropriate LVs in the volume group.
Format the LVs.
Install the OS with the live CD.
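A minimal sketch of the LVM steps above, assuming the FAT32 ESP is /dev/nvme0n1p1 and the rest of the disk is a second, unformatted partition /dev/nvme0n1p2 (both device names and all sizes here are assumptions; adjust DISK before running, as these commands rewrite the partition):

```shell
# Assumed LVM partition -- change before running on a real system!
DISK=/dev/nvme0n1p2

if [ -b "$DISK" ]; then
    sudo pvcreate "$DISK"                   # mark the partition as a physical volume
    sudo vgcreate ubuntu-vg "$DISK"         # create the volume group
    sudo lvcreate -L 45G -n root ubuntu-vg  # logical volumes, sized to leave
    sudo lvcreate -L 75G -n home ubuntu-vg  #   plenty of unallocated space in
    sudo lvcreate -L 15G -n swap ubuntu-vg  #   the VG for later resizing
    sudo mkfs.ext4 /dev/ubuntu-vg/root      # format (or let the installer do it)
    sudo mkfs.ext4 /dev/ubuntu-vg/home
    sudo mkswap /dev/ubuntu-vg/swap
else
    echo "$DISK not found; set DISK to the LVM partition first"
fi
```

At install time you would then pick 'Something else' and assign mount points to the LVs, as Dennis described earlier.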

Dennis N
June 14th, 2019, 04:54 AM
Referring to the screenshot in post #41: in the beginning, I just made the EFI System Partition on sdc1 and a regular ext4 partition sdc2 for my OS (I might have had a separate data partition on a separate HDD, but I'm not sure about that). I suggest you start out with a minimal UEFI system with standard partitions like that on the NVMe disk, to be sure it will boot OK on your computer. If you want a separate home partition, add it at installation time.

I don't know if you are interested in having more than one OS installed like I am, so what you do after that is up to you. At some point, try the LVM again by adding an LVM partition. Might be for data storage (LVs easily resizable!) or install an OS there. (In post #41, I think sdc7 and sdc9 both contain parts of a big data LV). Your system will grow over time.

You may notice more than one EFI system partition (ESP)- because at one point, I wanted to see if you could use more than one ESP per drive. The answer was yes. Years ago that was not clear at all. Many distros (Fedora & Manjaro for example) will allow you to choose an ESP to use, but Ubuntu's installer (ubiquity) will only install to the first ESP on the first drive - on my SATA + NVMe system (see post #30), I selected sda and Ubuntu installed boot loader to sda. If no sda is present, I would think it would have to install to the NVMe. If both NVMe and SATA sda are present with ESPs and you choose the NVMe for boot loader location, will it work? Let us know what you find out if you do that.