Page 1 of 9
Results 1 to 10 of 89

Thread: zfs focal > jammy upgrade fail

  1. #1
    Join Date
    Oct 2022
    Beans
    44

    zfs focal > jammy upgrade fail

    Can you help me recover this failed jammy upgrade please?
    I ran do-release-upgrade on a focal ZFS-on-root install (legacy BIOS, bpool, and rpool) and everything seemed to go fine until after reboot. GRUB still shows 20.04 with the old kernel, and trying to boot it dumps me into emergency mode. The AskUbuntu post below should have all the relevant details, but I'm happy to provide any other necessary info or try other suggestions. Thanks!

    https://askubuntu.com/questions/1435...-jammy-upgrade

    https://www.reddit.com/r/Ubuntu/comm...n_this_system/

    Perhaps related:
    https://ubuntuforums.org/printthread...8&pp=10&page=1

  2. #2
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: zfs focal > jammy upgrade fail

    I am the contributor of the ZFS and LUKS support in YannUbuntu's 'boot-info' and 'boot-repair' scripts... Development of the [boot-info] and [boot-repair] Utilities

    You want me to answer you "here" in this thread or "there" on AskUbuntu?

    Before trying to do anything... I would recommend booting from a 22.04 LiveCD USB and backing up everything to an external USB storage drive... which you should have done before doing a release upgrade. That way you have one more fallback point...

    ZFS snapshots are not a replacement for backups.

    There are a few things that contributed to this over time and need to be corrected first. Or not; that's your decision, after I expand on this.
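    Since the pools aren't imported yet in a live session, here is a minimal sketch of that backup, run from the LiveCD terminal. The pool name (rpool), the import options, and the USB mount point /media/usb are assumptions -- adjust them to your system, and repeat for bpool.

```shell
# Build a dated snapshot name for the backup (pure shell, safe anywhere).
snapname="backup-$(date +%Y%m%d)"    # e.g. backup-20221024
echo "$snapname"

# The actual backup steps are commented out because they touch the pools;
# run them by hand from the live session (pool/path names are assumptions):
#   sudo zpool import -N -R /mnt rpool
#   sudo zfs snapshot -r "rpool@${snapname}"
#   sudo zfs send -R "rpool@${snapname}" > "/media/usb/rpool-${snapname}.zfs"
#   sudo zpool export rpool
```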
    Last edited by MAFoElffen; October 23rd, 2022 at 05:16 PM.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  3. #3
    Join Date
    Oct 2022
    Beans
    44

    Re: zfs focal > jammy upgrade fail

    Thanks for answering.
    Askubuntu please. (unless you have another preference)
    Data is backed up, thanks for the advice.

  4. #4
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: zfs focal > jammy upgrade fail

    I think here, in this thread, would be better, as I have more freedom in how long a post of instructions can be.

    First, the reason ZSys stopped taking snapshots a while back: if you check your bpool (the pool holding /boot), you are probably going to find that it is almost full, which is probably also why your release upgrade failed (not enough space there)... When the bpool has less than 20% free space, ZSys stops taking any new snapshots. I created a script for my own systems which deletes the old bpool snapshots and leaves the 5 newest there.
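    That 20% threshold is easy to check yourself. Here is a small sketch of the arithmetic (the pool name bpool and the zpool list fields are the only assumptions; the helper itself is plain awk):

```shell
# On the installed system, machine-readable size/free bytes come from:
#   zpool list -Hp -o size,free bpool
# The percentage check ZSys is effectively applying:
pct_free() {
    # args: total_bytes free_bytes -> whole-number percent free
    awk -v s="$1" -v f="$2" 'BEGIN { printf "%.0f", f / s * 100 }'
}

# Example with a 2 GiB bpool that has 300 MiB free:
pct_free 2147483648 314572800    # prints 15 -> below 20%, so snapshots stop
echo
```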

    This usually prompts an error message saying so during any kind of apt activity. You didn't notice that? No matter; it's in the past.

    So the plan would be to boot from a Desktop LiveCD, mount the partitions of the installed system, activate its ZFS rpool and bpool, and delete some of the old bpool snapshots to free up space... before trying to roll back to your newest snapshot. If you revert too soon, I don't know whether you would have enough space left in the bpool to do it.

    Does that sound like a good plan?
    Last edited by MAFoElffen; October 24th, 2022 at 05:46 AM.


  5. #5
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: zfs focal > jammy upgrade fail

    Boot from an Ubuntu LiveCD and open up a graphical terminal and a browser. Navigate the browser back to this thread, so you can cut and paste commands and output between those.

    From the terminal session:
    Code:
    sudo fdisk -l 2>&1 | sed '/\/dev\/loop/,+3 d' 2> /dev/null | uniq | grep 'Solaris'
    lsblk -o NAME,SIZE,FSTYPE,LABEL,MOUNTPOINT,MODEL | grep -v '/snap/\|loop' | grep 'zfs_'
    Please post the output of those two commands... The first should show the Solaris boot and Solaris root partitions. The second will identify the names of the ZFS rpool and bpool so we can mount and activate them...

    From that output, I'll post the next commands.
    Last edited by MAFoElffen; October 24th, 2022 at 06:00 AM.


  6. #6
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: zfs focal > jammy upgrade fail

    The next step is
    Code:
    sudo -i
    apt update
    apt install -y zfs-initramfs
    zpool list   # ensure the output says: "no pools available"
    zfs list     # ensure the output says: "no datasets available"
    zpool import -N -R /mnt rpool
    Adjust the name rpool above to either rpool or tmprpool, depending on the output from above...
    Code:
    zfs list
    In this output, under the MOUNTPOINT heading, look for which rpool dataset is mounted directly at /mnt... It should be something like
    Code:
    NAME                                                  USED  AVAIL     REFER  MOUNTPOINT
    rpool/ROOT/ubuntu_##fs##                             7.78G  9.15G     3.52G  /mnt
    You are going to use that output in the next command, for example
    Code:
    zfs mount rpool/ROOT/ubuntu_72fs0l
    Then mount the rest like this
    Code:
    zfs mount -a
    mount --rbind /dev  /mnt/dev
    mount --rbind /proc /mnt/proc
    mount --rbind /sys  /mnt/sys
    chroot /mnt /bin/bash --login
    You are now chrooted into the installed system and the ZFS filesystem, able to make live changes to it...

    Tell me when you get that far...

    Note: Do not make changes to the installed system and then reboot until you have first shut down the ZFS filesystem and exited cleanly (this is for much later, when that time comes...):
    Code:
    exit                                   # leave the chroot first
    umount -R /mnt/dev /mnt/proc /mnt/sys  # release the rbind mounts
    zpool export -a                        # cleanly export bpool and rpool
    exit
    Last edited by MAFoElffen; October 24th, 2022 at 08:17 AM.


  7. #7
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: zfs focal > jammy upgrade fail

    Next you want to make room in your bpool...

    I wrote this script to automate that for me on mine:
    Code:
    #!/bin/bash
    # MAFoElffen, <mafoelffen@ubuntu.com>, 2021.12.28, last modified: 2022.02.23
    # Purpose: ZFS Snapshots bpool maintenance
    
    function ZfsShowFreeSpace ()
    {
        zfs list -o space
        zpool list
    }
    
    function ZfsListBpool() {
       zfs list -r -t snapshot -o name,used,referenced,creation bpool/BOOT | less
    }
    
    function GetZfsBpoolSnapshotCount()
    {
        ZfsSnapshotCount=$(zfs list -r -t snapshot -o name,used,referenced,creation bpool/BOOT | wc -l)
        line_count="$(($ZfsSnapshotCount-1))"
        echo -e "There are $line_count snapshots in bpool."
    }
    
    function ZfsTrimLast5()
    {
        # Removal command is: zsysctl state remove --system <statename>
        # Skip the header line, take the 5 oldest snapshots, pull out the
        # state name portion, and remove each one.
        zfs list -r -t snapshot -o name,used,referenced,creation bpool/BOOT | \
             tail -n +2 | \
             head -n 5 | \
             cut -c 35-40 | \
             xargs -n 1 zsysctl state remove --system
    }
    
    function ShowMenu() {
        
        while [[ "$menu_response" !=  "5" ]]
        do
            clear
            echo -e "=== ZFS BPool Maintenance ==="
            echo
            echo -e "1 - Show space in Zpools"
            echo -e "2 - Show list of BPool Snapshots."
            echo -e "3 - Show count of BPool Snapshots"
            echo -e "4 - Destroy oldest 5 BPool SnapShots"
            echo -e "5 - Exit"
            echo 
            read -p "Enter a valid menu response from 1 through 5:  " menu_response
            case $menu_response in
                1) ZfsShowFreeSpace;;
                2) ZfsListBpool;;
                3) GetZfsBpoolSnapshotCount;;
                4) ZfsTrimLast5;;
                5) exit;;
                *) echo -e "The response was not a valid choice.";; 
            esac
            
            echo
            read -p "Press any <Enter> key to continue" trashbin
        done
    }
    
    ShowMenu
    Use gedit to paste and save it into a script file named "ZfsBpoolMaintenance". Then, from the same folder, make it executable and run it
    Code:
    chmod +x ./ZfsBpoolMaintenance
    ./ZfsBpoolMaintenance
    Last edited by MAFoElffen; October 25th, 2022 at 05:06 AM.


  8. #8
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: zfs focal > jammy upgrade fail

    Then check the status of the interrupted do-release-upgrade and let apt/dpkg finish what was left half-configured
    Code:
    apt install -f
    dpkg --configure -a
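    If those two come back clean, my assumption for the follow-on (not stated in the thread yet) is to finish the half-done upgrade and regenerate the GRUB menu, which currently still lists only the 20.04 kernels. Sketched as a function so nothing runs by accident:

```shell
# To be called inside the chroot from post #6, as root, only after
# 'apt install -f' and 'dpkg --configure -a' report no errors.
finish_upgrade() {
    apt update
    apt full-upgrade    # pulls in whatever do-release-upgrade left unfinished
    update-grub         # rewrites /boot/grub/grub.cfg so the jammy kernels appear
}
# Not invoked here; run finish_upgrade from inside the chroot only.
```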


  9. #9
    Join Date
    Oct 2022
    Beans
    44

    Re: zfs focal > jammy upgrade fail

    Quote Originally Posted by MAFoElffen View Post
    I think here, in this thread, would be better, as I have more freedom of how long a post can be for instructions.

    First, the reason ZSys stopped taking snapshots a while back: if you check your bpool (the pool holding /boot), you are probably going to find that it is almost full, which is probably also why your release upgrade failed (not enough space there)... When the bpool has less than 20% free space, ZSys stops taking any new snapshots. I created a script for my own systems which deletes the old bpool snapshots and leaves the 5 newest there.

    This usually prompts an error message saying so during any kind of apt activity. You didn't notice that? No matter; it's in the past.

    So the plan would be to boot from a Desktop LiveCD, mount the partitions of the installed system, activate its ZFS rpool and bpool, and delete some of the old bpool snapshots to free up space... before trying to roll back to your newest snapshot. If you revert too soon, I don't know whether you would have enough space left in the bpool to do it.

    Does that sound like a good plan?
    Although bpool is currently not even close to full, you may be correct that (at some point in 2020) it was too full and snapshots stopped as a result.
    Thanks again for the help and I will attempt to follow the rest of your instructions.

  10. #10
    Join Date
    Oct 2022
    Beans
    44

    Re: zfs focal > jammy upgrade fail

    Quote Originally Posted by MAFoElffen View Post
    Boot from an Ubuntu LiveCD and open up a graphical terminal and a browser. Navigate the browser back to this thread, so you can cut and paste commands and output between those.

    From the terminal session:
    Code:
    sudo fdisk -l 2>&1 | sed '/\/dev\/loop/,+3 d' 2> /dev/null | uniq | grep 'Solaris'
    lsblk -o NAME,SIZE,FSTYPE,LABEL,MOUNTPOINT,MODEL | grep -v '/snap/\|loop' | grep 'zfs_'
    Please post the output of those two commands... The first should show the Solaris boot and Solaris root partitions. The second will identify the names of the ZFS rpool and bpool so we can mount and activate them...

    From that output, I'll post the next commands.
    Code:
    ubuntu@ubuntu:~$ sudo fdisk -l 2>&1 | sed '/\/dev\/loop/,+3 d' 2> /dev/null | uniq | grep 'Solaris'
    ubuntu@ubuntu:~$ lsblk -o NAME,SIZE,FSTYPE,LABEL,MOUNTPOINT,MODEL | grep -v '/snap/\|loop' | grep 'zfs_'
    ├─sda5     2G zfs_member bpool                                                       
    └─sda6   927G zfs_member rpool
    I took the liberty of trying the same command grepping for FreeBSD instead. Hopefully this is what you were aiming for?
    Code:
    root@ubuntu:/home/ubuntu# fdisk -l 2>&1 | sed '/\/dev\/loop/,+3 d' 2> /dev/null | uniq | grep 'FreeBSD'
    /dev/sda5   5249024 9443327    4194304       2G a5 FreeBSD
    /dev/sda6   9445376 1953525167 1944079792  927G a5 FreeBSD

