Be happy:
Code:
$ systemd-analyze
Startup finished in 25.901s (firmware) + 2.635s (loader) + 7.169s (kernel) + 2min 28.329s (userspace) = 3min 4.036s
graphical.target reached after 2min 27.753s in userspace
And on another physical machine:
Code:
$ systemd-analyze
Startup finished in 10.185s (firmware) + 2.845s (loader) + 5.081s (kernel) + 16.177s (userspace) = 34.290s
graphical.target reached after 14.210s in userspace
These are nearly identical systems, except that one has 6 more HDDs connected. USB peripherals make booting slower. If the system has lots of USB ports (say, an added front panel with a 35-in-5 card reader, 4 USB2 ports, 2 USB3 ports, audio jacks, and 1 eSATA port) and those are all enabled, the boot time can reach 5+ minutes. I ended up disconnecting all the USB2 devices on my front panel to avoid the ludicrous boot times. Basically the opposite of "they've gone plaid". Hopefully, everyone will get that reference.
If you want it to be faster, stop rebooting all the time; learn to boot and get coffee like the rest of the world does. Or use the time for meditation to find your happy place.
If you want a really fast boot, use the 30 MB TinyCore: http://tinycorelinux.net/downloads.html
Just providing options.
I've already tuned my boot to remove all sorts of slow things that I'll never use: network-manager, ZFS checks, and GUI things I just don't need, like the snap subsystem.
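For what it's worth, here's a sketch of how I hunt for candidates to prune. It reads a saved copy of `systemd-analyze blame` output, flags anything over 5 seconds, and prints the mask commands I *might* run. The sample lines are pasted in from this thread so the pipeline is self-contained; the 5-second threshold, and whether masking a given unit is safe at all, are judgment calls for your own system, not a recipe.

```shell
# Sample `systemd-analyze blame` output (copied from this thread).
blame='2min 21.771s nfs-server.service
12.048s snap.lxd.activate.service
7.889s smartmontools.service
858ms x2goserver.service'

# Convert each entry's time fields to seconds; print a mask command for
# anything over 5s. awk coerces "2min" -> 2, "858ms" -> 858, etc.
candidates=$(printf '%s\n' "$blame" | awk '
{
  svc = $NF
  secs = 0
  for (i = 1; i < NF; i++) {
    if ($i ~ /min$/)     secs += $i * 60     # "2min"    -> 120
    else if ($i ~ /ms$/) secs += $i / 1000   # "858ms"   -> 0.858
    else if ($i ~ /s$/)  secs += $i + 0      # "12.048s" -> 12.048
  }
  if (secs > 5) printf "sudo systemctl mask %s  # %.1fs\n", svc, secs
}')
printf '%s\n' "$candidates"
```

Remember that masking is stronger than disabling: a masked unit can't be started even as a dependency of something else, so check `systemctl list-dependencies --reverse <unit>` before pulling the trigger.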
Code:
$ systemd-analyze critical-chain
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.
graphical.target @14.210s
└─udisks2.service @5.371s +8.838s
  └─basic.target @5.303s
    └─sockets.target @5.302s
      └─libvirtd-ro.socket @5.301s
        └─libvirtd.socket @5.285s +11ms
          └─sysinit.target @5.277s
            └─systemd-sysctl.service @5.469s +3ms
              └─systemd-modules-load.service @385ms +36ms
                └─systemd-journald.socket @335ms
                  └─-.mount @297ms
                    └─system.slice @297ms
                      └─-.slice @297ms
That's the entire critical chain on one system. I need to look at removing udisks2; last I checked, it was useless for my needs.
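The "@"/"+" notation makes it easy to pull out which unit in the chain actually burned the time. A self-contained sketch (the variable below just replays a few lines of the output above, so the nesting is abbreviated):

```shell
# A few lines of `systemd-analyze critical-chain` output, pasted in for
# illustration (intermediate targets omitted).
chain='graphical.target @14.210s
└─udisks2.service @5.371s +8.838s
  └─libvirtd.socket @5.285s +11ms
    └─systemd-sysctl.service @5.469s +3ms
      └─systemd-modules-load.service @385ms +36ms'

# Find the "+duration" field on each line, normalize ms to seconds,
# and report the unit with the largest startup time.
slowest=$(printf '%s\n' "$chain" | awk '
/\+/ {
  for (i = 1; i <= NF; i++) if ($i ~ /^\+/) t = substr($i, 2)
  secs = (t ~ /ms$/) ? t / 1000 : t + 0
  name = $1
  sub(/^└─/, "", name)   # strip the tree-drawing prefix
  if (secs > max) { max = secs; slow = name }
}
END { printf "%s %.3fs\n", slow, max }')
echo "$slowest"
```

On this sample it confirms what the chain already shows: udisks2.service, at 8.838s, dominates the 14.210s path to graphical.target.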
On the slower-booting system:
Code:
$ systemd-analyze critical-chain
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.
graphical.target @2min 27.753s
└─multi-user.target @2min 27.753s
  └─zfs.target @2min 27.753s
    └─zfs-share.service @2min 27.722s +30ms
      └─nfs-server.service @5.950s +2min 21.771s
        └─rpc-statd.service @2min 18.783s +15ms
          └─nss-lookup.target @2min 18.780s
I use LXD, which requires snaps and ZFS. The other things are related to NFS, which I use extensively.
The worst offender on the slow booting system is NFS:
Code:
$ systemd-analyze blame
2min 21.771s nfs-server.service
     12.048s snap.lxd.activate.service
      7.889s smartmontools.service
      7.740s udisks2.service
      7.129s snapd.service
      6.303s apt-daily.service
      4.095s fstrim.service
      3.029s systemd-networkd-wait-online.service
      1.324s apt-daily-upgrade.service
      1.239s man-db.service
      1.174s fwupd-refresh.service
...
Slow disks connected via USB are my initial guess.
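If it turns out that nfs-server is blocking the boot transaction (waiting on the network or mounts) rather than the disks themselves being slow, one option worth testing is capping its startup time with a drop-in. This is only a sketch: `TimeoutStartSec=` is a standard systemd directive, but on timeout the unit *fails* and would need a later restart, so whether that's acceptable depends on whether the shares must be up before anyone logs in.

```ini
# Hypothetical drop-in: /etc/systemd/system/nfs-server.service.d/override.conf
# (created with `sudo systemctl edit nfs-server.service`)
# Caps how long nfs-server may hold up the boot; on timeout the unit
# fails and boot continues without it.
[Service]
TimeoutStartSec=30
```

After adding it, `systemctl daemon-reload` and reboot to see whether the critical chain shortens.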
I generally reboot every 2-3 weeks, when a new kernel is provided.
On a virtual machine running on the slower-booting system, with Linux Mint 21.2 but a minimal GUI, I see this:
Code:
$ systemd-analyze
Startup finished in 6.402s (kernel) + 7.401s (userspace) = 13.804s
graphical.target reached after 7.391s in userspace
Plenty fast.
Code:
$ systemd-analyze blame
4.008s plocate-updatedb.service
3.748s postfix@-.service
1.810s munin-node.service
1.788s systemd-udev-settle.service
1.670s input-remapper-daemon.service
1.267s fwupd-refresh.service
858ms x2goserver.service
...
That's my primary desktop system.
BTW, sudo isn't needed for systemd-analyze.