so a power outage will cause this to happen? can i automate a repair attempt? (having to use a live GUI session on a headless server is no fun)
right now the server is up and running, so i should be able to fix it without a live session; guessing there is a faster way since it's already booted
Code:
chad@niceserver:~$ apt-cache policy zfs-initramfs zfsutils-linux
zfs-initramfs:
  Installed: (none)
  Candidate: 2.1.5-1ubuntu6~22.04.1
  Version table:
     2.1.5-1ubuntu6~22.04.1 500
        500 http://us.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
        500 http://us.archive.ubuntu.com/ubuntu jammy-security/main amd64 Packages
     2.1.2-1ubuntu3 500
        500 http://us.archive.ubuntu.com/ubuntu jammy/main amd64 Packages
zfsutils-linux:
  Installed: 2.1.5-1ubuntu6~22.04.1
  Candidate: 2.1.5-1ubuntu6~22.04.1
  Version table:
 *** 2.1.5-1ubuntu6~22.04.1 500
        500 http://us.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
        500 http://us.archive.ubuntu.com/ubuntu jammy-security/main amd64 Packages
        100 /var/lib/dpkg/status
     2.1.2-1ubuntu3 500
        500 http://us.archive.ubuntu.com/ubuntu jammy/main amd64 Packages
$ zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
mypool  31.0G  7.09T  30.9G  /mnt/HDD
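for what it's worth, the apt-cache output above shows zfs-initramfs is not installed, and since the pool is already imported on the running server, the missing package should be installable over ssh without any live session. a hedged sketch (package name from the output above; the update-initramfs step is standard ubuntu practice after changing initramfs hooks):

```shell
# install the initramfs hooks so zfs pools can be handled at early boot
sudo apt install zfs-initramfs

# rebuild the initramfs for all installed kernels so the new hooks are included
sudo update-initramfs -u -k all

# sanity-check before the next reboot: pool health and dataset mounts
zpool status mypool
sudo zfs mount -a
```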
the power did go out again after the original post. before the UPS battery died i used a laptop to ssh in and run poweroff, so when it booted back up (assuming the power did not go out and come back a second time in between) that should have been a clean shutdown, and it still happened
at this time i have made this modification to my /etc/fstab; no idea if it will help or not (i have not rebooted since saving it)
Code:
$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/md0p1 during curtin installation
/dev/disk/by-id/md-uuid-ede07d21:4114781a:b6762645:db22bf0e-part1 / ext4 defaults,noatime 0 1
# /mnt/Data was on /dev/md1p1 during curtin installation
/dev/disk/by-id/md-uuid-28843301:920a306f:47f8cd80:d4d35fcd-part1 /mnt/Data ext4 defaults,noatime 0 1
#/dev/disk/by-id/md-uuid-a1a41d92:421eda05:59f6da66:3d3f0f6e /mnt/HDD ext4 defaults 0 1
/mnt/Data/kvm-images /home/chad/kvm/images bind x-systemd.requires=/mnt/Data,defaults,bind 0 0
/mnt/HDD/www /var/www bind x-systemd.after=zfs-mount.service,x-systemd.requires=/mnt/HDD,defaults,bind 0 0
/mnt/HDD/apt-cacher-ng /var/cache/apt-cacher-ng bind x-systemd.after=zfs-mount.service,x-systemd.requires=/mnt/HDD,defaults,bind 0 0
/swap.img none swap sw 0 0
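one way to sanity-check those x-systemd options without rebooting: systemd generates a .mount unit per fstab entry, so you can inspect what ordering it actually picked up (unit names below are derived from the mount points in my fstab; a mount point's slashes become dashes, e.g. /var/www -> var-www.mount):

```shell
# after editing /etc/fstab, regenerate the systemd units from it
sudo systemctl daemon-reload

# verify the generated bind-mount unit picked up the ordering dependencies
systemctl show var-www.mount -p After -p Requires
```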
the next time it happened, pressing ctrl+d was not enough to get it working; i had to hit the reset button, press enter, then reboot, or something like that
and now my pool is degraded... one scrub and a clear and it looks fine again
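for reference, the scrub-and-clear repair i did, sketched as commands (pool name from the zfs list output earlier):

```shell
# start a scrub and watch its progress
sudo zpool scrub mypool
zpool status mypool      # repeat until the scan line reports it finished with 0 errors

# once the scrub comes back clean, clear the old error counters / DEGRADED state
sudo zpool clear mypool
zpool status -x          # should report "all pools are healthy"
```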