
Originally Posted by
YannBuntu
Thx MAF for your feedback. wow, that's challenging configs!
1st case: l.11 bootinfoscript doesn't list the partitions
l.15 partition missing
l.26 output of df -Th / is empty !! how can this be?
l.400: error code 32, shouldn't try to mount these.
2nd case:
l20: bootinfoscript doesn't list the LVM volumes
l66: the ESP is not recognized correctly, with the consequence you mention on the suggested repair.
You gave me work for several weeks

On the 1st case, line 26, compare that with its system-info report, under the heading "---------- File system specs from 'df -h':"
Would it help if I share an early-version mount/chroot script I have in the Ama-Gi Project for ZFS filesystems? I identify ZFS from the results of fdisk, where both the "Solaris_root" and "Solaris_boot" partition types exist... As you can see in its system-info report under the heading "---------- Disk/Partition Information From 'fdisk':".
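The detection idea above could be sketched roughly like this (my own sketch, not from the script below; the "Solaris boot"/"Solaris root" type names and the sample fdisk output are assumptions, so adjust the patterns to what your fdisk actually prints):

```shell
#!/bin/bash
# Sketch: flag a disk as a ZFS-on-root layout when the captured 'fdisk -l'
# output lists both Solaris partition types (bpool + rpool).
is_zfs_layout() {
    # $1: captured 'fdisk -l' output for the disk under test
    grep -q 'Solaris root' <<<"$1" && grep -q 'Solaris boot' <<<"$1"
}

# Hypothetical sample output, trimmed to the relevant columns
sample='/dev/sda1    2048   1050623 EFI System
/dev/sda2 1050624   5244927 Solaris boot
/dev/sda3 5244928 976773134 Solaris root'

if is_zfs_layout "$sample"; then
    echo "ZFS layout detected"
fi
```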
This was early work, that I haven't gotten back to yet, but should give you some ideas:
Code:
#!/bin/bash
# MAFoElffen <mafoelffen@ubuntu.com> 2021.08.18
# This should work for a LiveCD that has a read-only filesystem
# This will only work if we have already confirmed that Solaris_root and Solaris_boot exists.
#
#########################################################################
# Copyright (c) 2012, 2021
#
# GNU General Public License (GPL-3.0-or-later)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
echo "This LiveCD already has OpenSSH Server installed, with a default user and password.
Reminder: use \"ubuntu\" as both the username and the password."
# Display the IP address
ip addr show scope global | grep -E 'inet ' | grep -v br0
echo "You can use the address shown above to ssh remotely into this machine"
read -rsn1 -p "Press any key to continue..."; echo
echo "Configuring Temporary ZFS Environment"
echo "Becoming root"
# 'sudo -i' would open an interactive shell and stall the script, so re-exec as root instead
[ "$(id -u)" -eq 0 ] || exec sudo bash "$0" "$@"
echo "Installing ZFS in the Live CD environment"
apt-add-repository universe
echo "Ignore any errors on the following, about moving an old database out of the way..."
apt update
apt install --yes debootstrap gdisk zfs-initramfs
# Check whether the pool already holds datasets ('zfs list' reports "no datasets available" when empty)
if zfs list 2>&1 | grep -q 'no datasets'; then
    echo "No existing datasets found."
else
    echo "Existing datasets found. Exporting rpool."
    zpool export rpool
fi
# Chroot into the ZFS pool. Import the pool to a non-default location. The -N flag means 'don't automatically mount' and is necessary,
# otherwise the rpool root and the rpool/ROOT/ubuntu dataset will both try to mount on /mnt.
echo "Importing the Pools"
zpool import -N -R /mnt rpool
echo "Mounting the root system"
zfs mount rpool/ROOT/ubuntu
echo "Mounting the remaining file systems"
zfs mount -a
# Bind the virtual filesystems from the LiveCD environment to the new system and chroot into it:
echo "Mounting the virtual filesystems"
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
echo "chrooting into the mounted system"
chroot /mnt /bin/bash --login
# After exiting the chroot, unmount the virtual filesystems first (they block the export), then shut down ZFS cleanly and reboot
umount -Rl /mnt/dev /mnt/proc /mnt/sys
zfs umount rpool/ROOT/ubuntu
zpool export rpool
reboot