230G is not "quite big" by today's standards. 14TB is. 8TB is "average".
Anyway, it looks like your ZM logs haven't been properly managed. You should fix that using logrotate. BTW, since I don't use either Emby or ZM, I don't know what is "reasonable" for those logs, but in a home environment there's little reason to let any individual log exceed 10MB or all historical logs together exceed 200MB. It isn't like you or anyone else will be reading them.
logrotate lets you set up automatic log rotation based on size or specific times. There are tutorials online for configuring it. It is a very mature tool, so I'd be surprised if google couldn't find a 15-line config file for ZM that would get you started - now, today, in 30 seconds. https://github.com/ZoneMinder/zonemi...rotate.conf.in shows an example logrotate conf file for ZoneMinder, so you probably already have it on your system. Looking at it, there's no time-based rotation or size limit. Booooo. That sorta defeats the point, if you ask me. I guess they can put the ZM log data into syslog or into separate ZM-specific files. Depends on what you did, I suppose.
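For illustration, here's a minimal size-based sketch of what I'd drop into logrotate. Hedged heavily: the /var/log/zm path and the 10M/4-copy numbers are my assumptions, not anything from the stock ZM file - adjust for your install.
Code:
```
# Hypothetical /etc/logrotate.d/zoneminder - size-based rotation, my numbers
/var/log/zm/*.log {
    size 10M          # rotate once a log passes 10MB
    rotate 4          # keep 4 old copies
    compress          # gzip the rotated copies
    missingok         # no error if a log is absent
    notifempty        # skip empty logs
    copytruncate      # rotate without restarting ZM (may lose a few lines)
}
```
copytruncate avoids having to signal the daemon after rotation, at the cost of possibly losing a line or two written during the truncate.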
As for emby, I use jellyfin. Jellyfin logs are in
Code:
$ ls -l /var/log/jellyfin/
total 61740
-rw-r--r-- 1 jellyfin jellyfin 6952 Sep 12 11:13 FFmpeg.DirectStream-2024-09-12_11-13-21_4092ad420d06a9b67fa043b8449c3167_05240223.log
-rw-r--r-- 1 jellyfin jellyfin 15196 Sep 9 14:02 FFmpeg.Transcode-2024-09-09_14-02-54_16747cd149cfad60fa0680ddbd222d7a_861b9e8f.log
-rw-r--r-- 1 jellyfin jellyfin 58716 Sep 9 14:07 FFmpeg.Transcode-2024-09-09_14-06-37_0b4f708b0c122ef5802b40d143cb1697_0ffb4e9d.log
-rw-r--r-- 1 jellyfin jellyfin 20752 Sep 9 14:15 FFmpeg.Transcode-2024-09-09_14-15-47_7485a6b1a35a43b6a756c8ada85e2beb_92643988.log
-rw-r--r-- 1 jellyfin jellyfin 19534 Sep 9 14:16 FFmpeg.Transcode-2024-09-09_14-15-56_7485a6b1a35a43b6a756c8ada85e2beb_cb5d3ed6.log
-rw-r--r-- 1 jellyfin jellyfin 17025 Sep 9 14:16 FFmpeg.Transcode-2024-09-09_14-16-04_7485a6b1a35a43b6a756c8ada85e2beb_ddc616e2.log
-rw-r--r-- 1 jellyfin jellyfin 7374 Sep 12 11:13 FFmpeg.Transcode-2024-09-12_11-13-21_4092ad420d06a9b67fa043b8449c3167_00e13579.log
-rw-r--r-- 1 jellyfin jellyfin 7374 Sep 12 11:13 FFmpeg.Transcode-2024-09-12_11-13-21_4092ad420d06a9b67fa043b8449c3167_2509a323.log
-rw-r--r-- 1 jellyfin jellyfin 19050 Sep 12 11:15 FFmpeg.Transcode-2024-09-12_11-15-10_89f449b15f3a5d2f2637be5795a637b3_06908d8c.log
-rw-r--r-- 1 jellyfin jellyfin 42165 Sep 12 11:24 FFmpeg.Transcode-2024-09-12_11-24-25_10d1ba56f8a8efcfd626a6e5607c50a7_4cf3dfa1.log
-rw-r--r-- 1 jellyfin jellyfin 7615139 Sep 9 23:31 jellyfin20240909_001.log
-rw-r--r-- 1 jellyfin jellyfin 10485867 Sep 9 13:51 jellyfin20240909.log
-rw-r--r-- 1 jellyfin jellyfin 8527661 Sep 10 23:31 jellyfin20240910.log
-rw-r--r-- 1 jellyfin jellyfin 10485843 Sep 11 22:30 jellyfin20240911_001.log
-rw-r--r-- 1 jellyfin jellyfin 3633574 Sep 11 23:31 jellyfin20240911_002.log
-rw-r--r-- 1 jellyfin jellyfin 10485772 Sep 11 22:23 jellyfin20240911.log
-rw-r--r-- 1 jellyfin jellyfin 1217731 Sep 12 17:01 jellyfin20240912_001.log
-rw-r--r-- 1 jellyfin jellyfin 10490738 Sep 12 09:17 jellyfin20240912.log
$ du -c /var/log/jellyfin/*g
61M total
Not much there. Probably more than I need, and I can certainly gzip/compress the older logs with logrotate. Looks like logrotate isn't managing the jellyfin logs at all. I haven't manually cleaned that directory ... er ... ever, so those few log files must all be controlled by jellyfin itself, I guess.
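A similar sketch for jellyfin - again hedged: the path matches my listing above, the numbers are arbitrary, and jellyfin clearly does some rotation of its own (note the ~10MB jellyfin*.log files), so this would mainly add compression and cap the history.
Code:
```
# Hypothetical /etc/logrotate.d/jellyfin - compress and cap the historical logs
/var/log/jellyfin/*.log {
    size 10M
    rotate 5
    compress
    delaycompress     # compress on the next cycle, in case the file is still open
    missingok
    notifempty
    copytruncate      # don't yank open files out from under jellyfin
}
```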
When looking for large files, I use this alias:
Code:
alias du-big-sort='du -sh *|sort -h'
For example,
Code:
$ cd /var
$ du-big-sort
0 lock
0 run
4.0K local
4.0K mail
4.0K opt
16K lost+found
24K tmp
128K snap
4.8M spool
14M backups
563M crash
566M cache
1.1G lib
2.3G log
So, I need to work my way down the /var/log directory to figure out which logs are eating 2.3G. I'll not show the small stuff below, just the largest few files/directories as I go deeper.
Code:
...
13M kern.log.1
15M kern.log
96M syslog
131M syslog.1
2.0G journal
By far, the /var/log/journal/ directory is using the most storage.
Code:
$ du-big-sort
2.0G 299bc9611c014b03a884708043024ec2
/var/log/journal$ ll
total 12
drwxr-sr-x+ 3 root systemd-journal 4096 May 22 2023 ./
drwxrwxr-x 15 root syslog 4096 Sep 12 00:00 ../
drwxr-sr-x+ 2 root systemd-journal 4096 Sep 12 14:38 299bc9611c014b03a884708043024ec2/
That looks like the systemd journal is eating most of the space, so I can run a few commands to clean that up immediately, then I can modify the journald.conf file to prevent it from getting that large again. Some examples:
Code:
journalctl --disk-usage # See log file disk use
sudo journalctl --vacuum-size=200M # Drop log file size to 200M, if possible.
sudo journalctl --vacuum-time=10d # Drop logs over 10 days old
After running a sudo journalctl --vacuum-size=200M,
Code:
$ du-big-sort
161M 299bc9611c014b03a884708043024ec2
It was 2.0G before. Not anymore. But it will grow to that size again. Inside /etc/systemd/journald.conf I can change the size limits. Use sudoedit /etc/systemd/journald.conf to safely change that file:
Old:
Code:
#SystemMaxUse=
#SystemKeepFree=
SystemMaxFileSize=200M
#SystemMaxFiles=100
#RuntimeMaxUse=
#RuntimeKeepFree=
RuntimeMaxFileSize=200M
RuntimeMaxFiles=100
New:
Code:
SystemMaxUse=200M
#SystemKeepFree=
SystemMaxFileSize=20M
#SystemMaxFiles=100
#RuntimeMaxUse=
#RuntimeKeepFree=
RuntimeMaxFileSize=20M
RuntimeMaxFiles=100
Now I need to reload/restart journald. A reboot would be the safest, but I have about 20 programs open, so that will need to wait. Looks like restart isn't a valid option for this service, so I put two commands on the command line to stop/start systemd-journald quickly.
Code:
$ sudo service systemd-journald status
● systemd-journald.service - Journal Service
Loaded: loaded (/lib/systemd/system/systemd-journald.service; static; vendor preset: enabled)
Active: active (running) since Thu 2024-09-12 17:45:59 EDT; 10s ago
...
Looks fine.
That should do it. I didn't check the manpage to be certain what each specific setting means, so it's quite possible I'm misinterpreting one of them. Oh well.
BTW, my /var/ storage is allocated separately from / ... for exactly this reason.
Code:
NAME TYPE FSTYPE SIZE FSAVAIL FSUSE% LABEL MOUNTPOINT
nvme0n1 disk 931.5G
├─nvme0n1p1 part ext2 1M
├─nvme0n1p2 part vfat 50M 43.8M 12% /boot/efi
├─nvme0n1p3 part ext4 700M 313.9M 46% /boot
└─nvme0n1p4 part LVM2_member 930.8G
├─vg01-swap01 lvm swap 4.1G [SWAP]
├─vg01-root01 lvm ext4 35G 24.7G 23% /
├─vg01-var01 lvm ext4 20G 13.1G 27% /var
├─vg01-tmp01 lvm ext4 4G 3.3G 9% tmp01 /tmp
├─vg01-home01 lvm ext4 20G 7.3G 58% home01 /home
└─vg01-libvirt--01 lvm ext4 137G 2.8G 98% libvirt--01 /var/lib/libvirt
I use LVM for the flexibility it provides, not just as a "Use LVM" checkbox at install time.
If I'm really worried, I should set a reminder to check the journal size tomorrow, then next week, to ensure the conffile changes worked. That's easy.
Code:
$ journalctl --disk-usage
Archived and active journals take up 184.0M in the file system.
I'll run that command via 'at' tomorrow morning. When at runs, I'll get an email with the data.
Code:
$ echo "/usr/bin/journalctl --disk-usage"| at 6 am
$ echo "/usr/bin/journalctl --disk-usage"| at now + 7 days
Now I don't need to remember to do anything more. There are 3 jobs in my atq:
Code:
$ atq
272 Fri Sep 13 06:00:00 2024 a tf
273 Thu Sep 19 17:50:00 2024 a tf
263 Sun Nov 10 16:32:00 2024 a tf
When I need something to happen later, I often use at and forget about it. I think the Nov task deletes a file that I may need for the govt; the file becomes useless in November, so it gets deleted then.
And this is why I love, love, love Unix-like OSes: automation, and being able to forget about things that aren't important once they're handled.