
Thread: Canonical Livepatch Service versus automatic-updates cron job

  1. #1
    Join Date
    Oct 2020
    Beans
    2

    Canonical Livepatch Service versus automatic-updates cron job

We enrolled in the Canonical Livepatch Service. Previously, we handled automatic updates with a shell script run daily at 6 AM via cron.
The script runs sudo apt update && sudo apt upgrade -y. We also followed the instructions described here -> https://libre-software.net/ubuntu-automatic-updates/
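For reference, a setup like ours looks roughly as follows (the script path and log location are illustrative, not our exact ones):

    Code:
    # /etc/cron.d/auto-apt-upgrade -- illustrative file name
    # Run the upgrade script daily at 06:00 as root, logging the output
    0 6 * * * root /usr/local/sbin/auto-upgrade.sh >> /var/log/auto-upgrade.log 2>&1

    Code:
    #!/bin/sh
    # auto-upgrade.sh -- refresh package lists, then apply all pending upgrades
    # (no sudo needed here: cron.d entries above already run it as root)
    apt-get update && apt-get -y upgrade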

What additional benefit do we get from enrolling in the Canonical Livepatch Service versus the cron job that follows the instructions in that link?

    Many thanks.

  2. #2
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: Canonical Livepatch Service versus automatic-updates cron job

I think automatic patching is a mistake for any production system. I've had patches break production systems multiple times. Blindly accepting patches will eventually fail. If you insist, please make snapshots with LVM or ZFS before patching, and have a way to delete old snapshots before all the storage runs out.
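A minimal sketch of what I mean, assuming a ZFS root under rpool/ROOT (adjust pool/dataset names to your layout; an LVM setup would use lvcreate --snapshot instead):

    Code:
    # Snapshot the root dataset tree before patching
    zfs snapshot -r rpool/ROOT@pre-patch-$(date +%Y%m%d)

    # Apply the updates
    apt-get update && apt-get -y upgrade

    # Later, once you trust the patched system, destroy old snapshots
    # so the pool doesn't fill up:
    zfs destroy -r rpool/ROOT@pre-patch-20201001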

TANSTAAFL.

  3. #3
    Join Date
    Aug 2020
    Location
    Denmark
    Beans
    100
    Distro
    Ubuntu 22.04 Jammy Jellyfish

    Re: Canonical Livepatch Service versus automatic-updates cron job

    Livepatching and "unattended upgrades" are two different things.

1. Unattended upgrades (or an upgrade script) are just regular updates run at specific intervals. Kernel or microcode updates will still require a reboot to be applied.
2. Canonical Livepatch is a special service that patches "high" and "critical" kernel vulnerabilities while the system is running, so a reboot is not required.

    Be aware that only a limited number of kernel vulnerabilities can and will be livepatched. For 20.04 I've only experienced 1 or 2 livepatches, while the remaining kernel updates have been regular updates that required a reboot. But livepatching makes sure that the most critical vulnerabilities are patched asap, and the other kernel updates can be applied when a reboot is possible.
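To make the distinction concrete, this is roughly how each one is set up (the Livepatch token below is a placeholder; the real one comes from your Ubuntu One account):

    Code:
    # Unattended upgrades: regular package updates on a schedule
    # (kernel and microcode updates still need a reboot)
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # Livepatch: in-memory fixes for high/critical kernel CVEs, no reboot
    sudo snap install canonical-livepatch
    sudo canonical-livepatch enable <YOUR-LIVEPATCH-TOKEN>
    canonical-livepatch status --verbose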
    Supermicro Server :: Atom C3558 (4) @ 2.2 GHz :: 64 GB ECC DDR4-2400 :: 512 GB NVMe :: 4x 2 TB SSD @ RAID-Z :: 1x Raspberry Pi 4 Server

    Scripts: directory-tools | rsync-backup | add-apt-key

  4. #4
    Join Date
    Oct 2020
    Beans
    1

    Re: Canonical Livepatch Service versus automatic-updates cron job

If you're using ZFS or Btrfs, you can snapshot first, so you're safe to update the kernel and so on. Live updates are essential: make sure your Ubuntu systems are enrolled in Livepatch, and be sure to apply updates ASAP.

    Best regards

  5. #5
    Join Date
    Oct 2020
    Beans
    2

    Re: Canonical Livepatch Service versus automatic-updates cron job

In production, we cannot afford a server restart. We understand Canonical Livepatch applies the most critical kernel patches, but other kernel patches only take effect when the server is restarted. What is the most feasible option to avoid restarts but still keep a healthy kernel? Thanks.
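(As an aside, Ubuntu does tell you when a pending reboot is needed, so at least restarts can be planned rather than guessed at:)

    Code:
    # Ubuntu creates this file when an installed update requires a reboot
    if [ -f /var/run/reboot-required ]; then
        cat /var/run/reboot-required        # the notice itself
        cat /var/run/reboot-required.pkgs   # packages that triggered it
    fi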

  6. #6
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: Canonical Livepatch Service versus automatic-updates cron job

    Quote Originally Posted by avasudevan View Post
In production, we cannot afford a server restart. We understand Canonical Livepatch applies the most critical kernel patches, but other kernel patches only take effect when the server is restarted. What is the most feasible option to avoid restarts but still keep a healthy kernel? Thanks.
Schedule weekly maintenance and reboot then. Set up multiple servers, perhaps with sticky session affinity, so clients can be directed away from the server due for a reboot, beginning 1.5x the average session time before its maintenance window. If you run a "follow the sun" service, you'll want 4+ of these setups.
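The reboot itself can be as simple as a cron entry; the day and time here are examples, and you'd drain the node first as described above:

    Code:
    # /etc/cron.d/weekly-maintenance -- illustrative
    # Reboot every Sunday at 03:00, 5 minutes after warning logged-in users
    0 3 * * 0 root /usr/sbin/shutdown -r +5 "Weekly maintenance reboot"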

Attempting to get over 98% availability out of a single-server setup is a fool's errand. There are availability questionnaires which make that pretty clear. I think a "server" with redundant disks, redundant networking, redundant PSUs, and all add-on cards placed internally on different redundant buses can only guarantee 95% uptime. That leaves something like 400+ hours a year of downtime.
Sure, we all do better than that, but there's luck and reality. I worked as a systems architect for services that could never be offline, ever, for over a decade. We always had N+2 setups in each location. If the service had 50 front-end servers, we'd deploy 52. If there were only 3 DB servers, we'd deploy 5, with the extra 2 in slave mode to keep up with transactions. Those 2 would replicate their data to different regions of the world, so if power or internet failed, we could still continue with just a minute or so of lost data.

These servers were placed into class-5 data centers with power fed from 3 different electric substations. The buildings were designed to take anything Mother Nature could throw at them. There were battery banks, diesel-electric generators (like multiple train engines), and gas-turbine power generation on-site should long-term locally generated power be required.

All this testing begins with something as simple as yanking an Ethernet cable or pulling a power cable. What happens, automatically? If I did my job, the system barely notices: operations gets an alert, and 4 pagers go off to the on-call numbers for application support, DB support, the application manager/owner, and the local administrator. Most importantly, I do not get any calls, because the system worked as designed. That is key.

I had a job where I was 24/7/365 support. I wasn't allowed to leave the local town. Once I went to the beach for a long lunch - about 45 minutes away - and my pager started going crazy because of the application owner. It wasn't an emergency, and what he wanted help with wasn't my job. Never again. 6 months later, I moved to a different job in a different state.

These data centers were #1 on the government's list of critical locations to get refueled, just after regional trauma centers. We failed our servers over between different DCs every other week, then failed them back 2 weeks later. If an entire DC was taken out for any reason, its IPs and all of its subnets could be switched within a few hours to another DC that had standby servers ready to take over. Critical, national infrastructure. We didn't mess around.
All the storage was the most expensive, full-featured, redundant SAN equipment available. These services could never go down, but each system had scheduled maintenance windows so we could do the things required to meet the five-nines availability mandate.

Wishes are not a plan. For truly five-nines-available services, a plan is required. It needs to be funded, implemented, and tested, and it's best if it is actually part of the normal systems testing every few weeks. It's like backups: until we restore, we don't actually know whether our backups are any good. Same for availability: any failover plan must be tested.
    Last edited by TheFu; October 14th, 2020 at 02:53 PM.

  7. #7
    Join Date
    Mar 2007
    Location
    Denver, CO
    Beans
    7,958
    Distro
    Ubuntu Mate 16.04 Xenial Xerus

    Re: Canonical Livepatch Service versus automatic-updates cron job

    Do you need a High Availability setup? Clustering?

  8. #8
    Join Date
    Sep 2011
    Location
    Behind you!
    Beans
    1,690
    Distro
    Ubuntu 20.04 Focal Fossa

    Re: Canonical Livepatch Service versus automatic-updates cron job

    Quote Originally Posted by avasudevan View Post
In production, we cannot afford a server restart. We understand Canonical Livepatch applies the most critical kernel patches, but other kernel patches only take effect when the server is restarted. What is the most feasible option to avoid restarts but still keep a healthy kernel? Thanks.
    If that truly is the case, then you should be running a high-availability farm...which means you can reboot whenever you need.

If you only have 1 server, then the system was not designed with the "cannot afford to have a server restart" goal in mind.

EDIT: I do not know your setup, but using redundant servers and load balancers can help you achieve the "always online" goal by letting you take servers offline at any level while the entire system keeps working.

    Examples:
    Load balancer for a web server
    Load balancer for a database cluster
    MariaDB Galera cluster
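As a rough sketch of the web-server case (HAProxy is just one option; the hostnames and IPs here are made up), rebooting one node simply shifts traffic to the others:

    Code:
    # /etc/haproxy/haproxy.cfg (fragment) -- illustrative names and addresses
    frontend www
        bind *:80
        default_backend webfarm

    backend webfarm
        balance roundrobin
        option httpchk GET /health        # drop servers failing the health check
        server web1 10.0.0.11:80 check
        server web2 10.0.0.12:80 check    # reboot this one; web1 keeps serving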

    LHammonds
    Last edited by LHammonds; October 14th, 2020 at 02:35 PM.
