
Thread: Root HDD being filled up by a mystery!

  1. #1
    Join Date
    Nov 2020
    Beans
    10

    Root HDD being filled up by a mystery!

    Hi all,

    (Ubuntu server 22.04 Jammy)

    My Ubuntu server has just started to act oddly. I use it for Emby media server, and Zoneminder CCTV, plus Apache and certbot to host some basic websites.

    All files associated with these services (such as CCTV recordings, music, etc.) are held on other mounted hard disks. The main hard disk (the one in question), holding the Ubuntu OS and the applications, is quickly and repeatedly filling up to 100%.

    I read some guidance online and performed:

    Code:
    apt-get autoremove --purge
    and

    Code:
    dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge

    to remove old kernels and packages. This freed up 513 MB.... but 30 minutes later this was down to 94 MB free space!!! You can literally watch the space being consumed by something!!

    I don't know what else I can safely delete, only to watch it fill up again, as it's all system files!

    How can I find out what files are being written to, or created?

    Many thanks, Scott

  2. #2
    Join Date
    Mar 2010
    Location
    /home
    Beans
    9,684
    Distro
    Xubuntu

    Re: Root HDD being filled up by a mystery!

    What does this show?
    Code:
    du -h / 2>/dev/null | grep '^[0-9]*\.[0-9]G'
    Please wrap the output with code tags when replying.

  3. #3
    Join Date
    Jun 2009
    Location
    SW Florida
    Beans
    Hidden!
    Distro
    Kubuntu

    Re: Root HDD being filled up by a mystery!

    One common issue is an error that is filling log files.

    I like to use ncdu rather than multiple du commands; you have to install it, but it makes viewing file sizes much easier. If /var then turns out to be very large, it is a log file issue.

    Review your log files. I normally get a page or two, mostly warnings or entries that mention an error without being actual errors.
    Code:
    sudo grep -Ei 'warn|error' /var/log/*log
    tail -n 10 /var/log/syslog
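    If you can free a little space first, ncdu is in the repositories; the -x flag stops it descending into your other mounted filesystems (a quick sketch):
    Code:
    sudo apt install ncdu
    sudo ncdu -x /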
    UEFI boot install & repair info - Regularly Updated :
    https://ubuntuforums.org/showthread.php?t=2147295
    Please use Thread Tools above first post to change to [Solved] when/if answered completely.

  4. #4
    Join Date
    Nov 2020
    Beans
    10

    Re: Root HDD being filled up by a mystery!

    Thanks...

    I ran the command suggested by Rubi, but excluded /mnt and /var/www/html as these are mount points for other volumes; including them just lists loads of stuff from the non-problematic drives. I didn't try installing ncdu as suggested by oldfred, as the root drive is full.

    Code:
    $ du -h / 2>/dev/null --exclude=mnt --exclude=var/www/html | grep '^[0-9]*\.[0-9]G'
    2.1G    /var/log/zm
    4.1G    /var/log/journal/c14637d0633b45b18aa96baad55db587
    4.1G    /var/log/journal
    5.7G    /var/lib/emby/metadata/library
    7.1G    /var/lib/emby/metadata
    8.2G    /var/lib/emby
    9.6G    /var/lib
    1.1G    /usr/lib/firmware
    2.5G    /usr/lib
    3.4G    /usr
    2.5G    /snap
    It's a little confusing, as the HDD is quite big and the above only totals about 50.4 GB.

    Code:
    $ sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
    
    NAME                      FSTYPE        SIZE MOUNTPOINT         LABEL
    ....
    sda                                   232.9G                    
    ├─sda1                                    1M                    
    ├─sda2                    ext4            1G /boot              
    └─sda3                    LVM2_member 231.8G                    
      └─ubuntu--vg-ubuntu--lv ext4        115.9G / 
    ....


    Code:
    $ sudo fdisk -l
    ....
    Disk /dev/sda: 232.85 GiB, 250023444480 bytes, 488327040 sectors
    Disk model: LOGICAL VOLUME  
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: FF90F0FD-E1E9-488F-BEB6-948E9F0C4457
    
    Device       Start       End   Sectors   Size Type
    /dev/sda1     2048      4095      2048     1M BIOS boot
    /dev/sda2     4096   2101247   2097152     1G Linux filesystem
    /dev/sda3  2101248 488323071 486221824 231.8G Linux filesystem
    ....
    ls from root / location:

    Code:
    $ ls -alh
    total 8.1G
    drwxr-xr-x  20 root root 4.0K Nov  3  2023 .
    drwxr-xr-x  20 root root 4.0K Nov  3  2023 ..
    lrwxrwxrwx   1 root root    7 Jul 31  2020 bin -> usr/bin
    drwxr-xr-x   4 root root 4.0K Sep 12 14:31 boot
    drwxr-xr-x   2 root root 4.0K Oct 28  2020 cdrom
    -rw-------   1 root root  11M Nov  3  2023 core
    drwxr-xr-x  20 root root 4.4K Sep 12 15:16 dev
    drwxr-xr-x 123 root root  12K Sep 10 06:57 etc
    drwxr-xr-x   5 root root 4.0K Nov  2  2021 home
    lrwxrwxrwx   1 root root    7 Jul 31  2020 lib -> usr/lib
    lrwxrwxrwx   1 root root    9 Jul 31  2020 lib32 -> usr/lib32
    lrwxrwxrwx   1 root root    9 Jul 31  2020 lib64 -> usr/lib64
    lrwxrwxrwx   1 root root   10 Jul 31  2020 libx32 -> usr/libx32
    drwx------   2 root root  16K Oct 28  2020 lost+found
    drwxr-xr-x   2 root root 4.0K Jul 31  2020 media
    drwxr-xr-x   7 root root 4.0K Feb  5  2024 mnt
    drwxr-xr-x   3 root root 4.0K Nov  3  2023 opt
    dr-xr-xr-x 343 root root    0 Sep 12 15:15 proc
    drwx------   6 root root 4.0K Sep 12 13:25 root
    drwxr-xr-x  32 root root  900 Sep 12 15:49 run
    lrwxrwxrwx   1 root root    8 Jul 31  2020 sbin -> usr/sbin
    drwxr-xr-x   9 root root 4.0K Oct 29  2020 snap
    drwxr-xr-x   3 root root 4.0K Nov 16  2021 srv
    -rw-------   1 root root 8.0G Oct 28  2020 swap.img
    dr-xr-xr-x  13 root root    0 Sep 12 15:15 sys
    drwxrwxrwt  15 root root 4.0K Sep 12 17:09 tmp
    drwxr-xr-x  14 root root 4.0K Feb  5  2022 usr
    drwxr-xr-x  14 root root 4.0K Oct 28  2020 var
    There is an 8G swap.img, but still nothing near the problem size.

    Are there any clues in the above, or are there any other commands to try?

    The daft thing is this server has been working fine for years, then out of nowhere (I hadn't been messing around with it) it started filling the root HDD up.
    Last edited by scottbouch-com; 3 Weeks Ago at 06:14 PM.

  5. #5
    Join Date
    Jun 2009
    Location
    SW Florida
    Beans
    Hidden!
    Distro
    Kubuntu

    Re: Root HDD being filled up by a mystery!

    I do not use LVM, but the default install typically does not assign volumes covering the entire volume group.
    That lets you later add swap, /home, a data volume, another /, or just expand the one volume you have. It leaves you some flexibility.

    But that still only shows your / volume as 116 GB of the 231 GB available.
    Maybe what is in your excluded folders is then the issue?
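    Separate from the filling problem, if you ever want to give / some of that unassigned space, the check-and-grow steps would be roughly this, based on the volume names in your lsblk output (confirm free space with vgs first and pick your own size):
    Code:
    sudo vgs                                            # VFree column shows unallocated space in ubuntu-vg
    sudo lvextend -r -L +50G /dev/ubuntu-vg/ubuntu-lv   # example amount; -r also grows the ext4 filesystem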
    UEFI boot install & repair info - Regularly Updated :
    https://ubuntuforums.org/showthread.php?t=2147295
    Please use Thread Tools above first post to change to [Solved] when/if answered completely.

  6. #6
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: Root HDD being filled up by a mystery!

    230G is not "quite big" by standards for today. 14TB is. 8TB is "average".

    Anyway, it looks like your zm logs haven't been properly managed. You should fix that using logrotate. BTW, since I don't use either emby or ZM, I don't know what is "reasonable" for those logs, but in a home environment there's little reason to allow any individual log over 10MB or all historical logs over 200MB. It isn't like you or anyone else will be reading them.

    logrotate will let you set up automatic log rotation based on size or at specific times, and there are tutorials online for configuring it. It is a very mature tool, so I'd be surprised if google didn't find a 15-line config file for zm that will get you started - now, today, in 30 seconds. https://github.com/ZoneMinder/zonemi...rotate.conf.in shows an example logrotate conf file for zoneminder, so you probably already have it on your system. Looking at it, there's no time-based rotation or size limit. Booooo. That sorta defeats the point, if you ask me. I guess they can put the ZM log data into syslog or into separate ZM-specific files. Depends on what you did, I suppose.
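    As a starting point, something along these lines dropped into /etc/logrotate.d/ would cap and compress them. Just a sketch, not ZoneMinder's shipped config; the /var/log/zm path matches your du output, but check what actually lives there before using it:
    Code:
    # e.g. /etc/logrotate.d/zoneminder-local (example name)
    /var/log/zm/*.log {
        daily
        rotate 7
        maxsize 10M
        missingok
        notifempty
        compress
        delaycompress
        copytruncate
    }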

    As for emby, I use jellyfin. Jellyfin logs are in
    Code:
    $ ls -l /var/log/jellyfin/
    total 61740
    -rw-r--r-- 1 jellyfin jellyfin     6952 Sep 12 11:13 FFmpeg.DirectStream-2024-09-12_11-13-21_4092ad420d06a9b67fa043b8449c3167_05240223.log
    -rw-r--r-- 1 jellyfin jellyfin    15196 Sep  9 14:02 FFmpeg.Transcode-2024-09-09_14-02-54_16747cd149cfad60fa0680ddbd222d7a_861b9e8f.log
    -rw-r--r-- 1 jellyfin jellyfin    58716 Sep  9 14:07 FFmpeg.Transcode-2024-09-09_14-06-37_0b4f708b0c122ef5802b40d143cb1697_0ffb4e9d.log
    -rw-r--r-- 1 jellyfin jellyfin    20752 Sep  9 14:15 FFmpeg.Transcode-2024-09-09_14-15-47_7485a6b1a35a43b6a756c8ada85e2beb_92643988.log
    -rw-r--r-- 1 jellyfin jellyfin    19534 Sep  9 14:16 FFmpeg.Transcode-2024-09-09_14-15-56_7485a6b1a35a43b6a756c8ada85e2beb_cb5d3ed6.log
    -rw-r--r-- 1 jellyfin jellyfin    17025 Sep  9 14:16 FFmpeg.Transcode-2024-09-09_14-16-04_7485a6b1a35a43b6a756c8ada85e2beb_ddc616e2.log
    -rw-r--r-- 1 jellyfin jellyfin     7374 Sep 12 11:13 FFmpeg.Transcode-2024-09-12_11-13-21_4092ad420d06a9b67fa043b8449c3167_00e13579.log
    -rw-r--r-- 1 jellyfin jellyfin     7374 Sep 12 11:13 FFmpeg.Transcode-2024-09-12_11-13-21_4092ad420d06a9b67fa043b8449c3167_2509a323.log
    -rw-r--r-- 1 jellyfin jellyfin    19050 Sep 12 11:15 FFmpeg.Transcode-2024-09-12_11-15-10_89f449b15f3a5d2f2637be5795a637b3_06908d8c.log
    -rw-r--r-- 1 jellyfin jellyfin    42165 Sep 12 11:24 FFmpeg.Transcode-2024-09-12_11-24-25_10d1ba56f8a8efcfd626a6e5607c50a7_4cf3dfa1.log
    -rw-r--r-- 1 jellyfin jellyfin  7615139 Sep  9 23:31 jellyfin20240909_001.log
    -rw-r--r-- 1 jellyfin jellyfin 10485867 Sep  9 13:51 jellyfin20240909.log
    -rw-r--r-- 1 jellyfin jellyfin  8527661 Sep 10 23:31 jellyfin20240910.log
    -rw-r--r-- 1 jellyfin jellyfin 10485843 Sep 11 22:30 jellyfin20240911_001.log
    -rw-r--r-- 1 jellyfin jellyfin  3633574 Sep 11 23:31 jellyfin20240911_002.log
    -rw-r--r-- 1 jellyfin jellyfin 10485772 Sep 11 22:23 jellyfin20240911.log
    -rw-r--r-- 1 jellyfin jellyfin  1217731 Sep 12 17:01 jellyfin20240912_001.log
    -rw-r--r-- 1 jellyfin jellyfin 10490738 Sep 12 09:17 jellyfin20240912.log
    
    $ du -c /var/log/jellyfin/*g
    61M     total
    Not much there. Probably more than I need and I can certainly gzip/compress the older logs with logrotate. Looks like logrotate isn't managing the jellyfin logs at all. I haven't manually cleaned that directory ... er ... ever, so those few log files are all controlled by jellyfin, I guess.

    When looking for large files, I use this alias:
    Code:
    alias du-big-sort='du -sh *|sort -h'
    For example,
    Code:
    $ cd /var
    $ du-big-sort
    0       lock
    0       run
    4.0K    local
    4.0K    mail
    4.0K    opt
    16K     lost+found
    24K     tmp
    128K    snap
    4.8M    spool
    14M     backups
    563M    crash
    566M    cache
    1.1G    lib
    2.3G    log
    So, I need to work my way down the /var/log directory to figure out which logs are eating 2.3G. I'll not show the small stuff below, just the largest few files/directories as I go deeper.
    Code:
    ...
    13M     kern.log.1
    15M     kern.log
    96M     syslog
    131M    syslog.1
    2.0G    journal
    By far, the /var/log/journal/ directory is using the most storage.
    Code:
    $ du-big-sort 
    2.0G    299bc9611c014b03a884708043024ec2
    
    /var/log/journal$ ll
    total 12
    drwxr-sr-x+  3 root systemd-journal 4096 May 22  2023 ./
    drwxrwxr-x  15 root syslog          4096 Sep 12 00:00 ../
    drwxr-sr-x+  2 root systemd-journal 4096 Sep 12 14:38 299bc9611c014b03a884708043024ec2/
    That looks like the systemd journal is eating most of the space, so I can run a few commands to clean that up immediately, then I can modify the journald.conf file to prevent it from getting that large again. Some examples:
    Code:
      journalctl --disk-usage   # See log file disk use
      sudo journalctl --vacuum-size=200M    # Drop log file size to 200M, if possible.
      sudo journalctl --vacuum-time=10d     # Drop logs, over 10 days old
    After running a sudo journalctl --vacuum-size=200M,
    Code:
    $ du-big-sort 
    161M    299bc9611c014b03a884708043024ec2
    It was 2.0G before. Not anymore. But it will grow to that size again. Inside /etc/systemd/journald.conf I can change the size limits. Use sudoedit /etc/systemd/journald.conf to safely change that file:
    Old:
    Code:
    #SystemMaxUse=
    #SystemKeepFree=
    SystemMaxFileSize=200M
    #SystemMaxFiles=100
    #RuntimeMaxUse=
    #RuntimeKeepFree=
    RuntimeMaxFileSize=200M
    RuntimeMaxFiles=100
    New:
    Code:
    SystemMaxUse=200M
    #SystemKeepFree=
    SystemMaxFileSize=20M
    #SystemMaxFiles=100
    #RuntimeMaxUse=
    #RuntimeKeepFree=
    RuntimeMaxFileSize=20M
    RuntimeMaxFiles=100
    Now I need to reload/restart journald. A reboot would be the safest, but I have about 20 programs open, so that will need to wait. Looks like restart isn't a valid option here, so I put two commands on the command line to stop/start systemd-journald quickly.
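    From memory, the pair looked something like this (a sketch, not a copy of my shell history):
    Code:
    sudo service systemd-journald stop ; sudo service systemd-journald start
    Then a quick check that it came back up: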
    Code:
    $ sudo service systemd-journald status
    ● systemd-journald.service - Journal Service
         Loaded: loaded (/lib/systemd/system/systemd-journald.service; static; vendor preset: enabled)
         Active: active (running) since Thu 2024-09-12 17:45:59 EDT; 10s ago
    ...
    Looks fine.
    That should do it. I didn't check the manpage to be certain what each specific setting meant, so it is very possible I'm misinterpreting what the descriptive setting truly means. Oh well.

    BTW, my /var/ storage is allocated separately from / ... for exactly this reason.
    Code:
    NAME                              TYPE  FSTYPE              SIZE FSAVAIL FSUSE% LABEL       MOUNTPOINT
    nvme0n1                           disk                    931.5G                            
    ├─nvme0n1p1                       part  ext2                  1M                            
    ├─nvme0n1p2                       part  vfat                 50M   43.8M    12%             /boot/efi
    ├─nvme0n1p3                       part  ext4                700M  313.9M    46%             /boot
    └─nvme0n1p4                       part  LVM2_member       930.8G                            
      ├─vg01-swap01                   lvm   swap                4.1G                            [SWAP]
      ├─vg01-root01                   lvm   ext4                 35G   24.7G    23%             /
      ├─vg01-var01                    lvm   ext4                 20G   13.1G    27%             /var
      ├─vg01-tmp01                    lvm   ext4                  4G    3.3G     9% tmp01       /tmp
      ├─vg01-home01                   lvm   ext4                 20G    7.3G    58% home01      /home
      └─vg01-libvirt--01              lvm   ext4                137G    2.8G    98% libvirt--01 /var/lib/libvirt
    I use LVM for the flexibility it provides, not just a "Use LVM" checkbox install.

    If I'm really worried, I should set a reminder to check the journal size tomorrow, then next week, to ensure the conffile changes worked. That's easy.
    Code:
    $ journalctl --disk-usage
    Archived and active journals take up 184.0M in the file system.
    I'll run that command via 'at' tomorrow morning. When at runs, I'll get an email with the data.
    Code:
    $ echo "/usr/bin/journalctl --disk-usage"| at 6 am
    $ echo "/usr/bin/journalctl --disk-usage"| at now + 7 days
    Now I don't need to remember to do anything more. There are 3 jobs in my atq
    Code:
    $ atq
    272     Fri Sep 13 06:00:00 2024 a tf
    273     Thu Sep 19 17:50:00 2024 a tf
    263     Sun Nov 10 16:32:00 2024 a tf
    When I need something to happen later, I often use at and forget about it. I think the Nov task deletes a file that I may need for the govt; the file becomes useless in November, so it gets deleted then.

    And this is why I love, love, love, Unix-like OSes. Automation and being able to forget things that aren't important, once handled.
    Last edited by TheFu; 3 Weeks Ago at 10:59 PM.

  7. #7
    Join Date
    Nov 2020
    Beans
    10

    Re: Root HDD being filled up by a mystery!

    Thanks, that's helpful.

    So in tracking down the sizes:

    root:
    Code:
    /$ sudo du -sh *|sort -h
    du: cannot access 'proc/50520/task/50520/fd/4': No such file or directory
    du: cannot access 'proc/50520/task/50520/fdinfo/4': No such file or directory
    du: cannot access 'proc/50520/fd/3': No such file or directory
    du: cannot access 'proc/50520/fdinfo/3': No such file or directory
    0    bin
    0    lib
    0    lib32
    0    lib64
    0    libx32
    0    proc
    0    sbin
    0    sys
    4.0K    cdrom
    4.0K    media
    16K    lost+found
    104K    tmp
    176K    home
    1.6M    run
    2.3M    root
    11M    core
    12M    etc
    58M    srv
    90M    dev
    131M    boot
    528M    opt
    2.5G    snap
    3.4G    usr
    8.1G    swap.img
    113G    var
    6.1T    mnt
    As I said before, /var/www and /mnt have other larger volumes mounted there, so should be ignored.

    Code:
    /var$ sudo du -sh *|sort -h
    0    lock
    0    run
    4.0K    crash
    4.0K    local
    4.0K    opt
    8.0K    mail
    76K    tmp
    108K    snap
    3.2M    backups
    3.2M    spool
    128M    cache
    17G    www
    42G    log
    54G    lib
    So, ignoring the 17 GB in www, it seems that log and lib are the biggest.

    Ignoring anything less than 1M to minimise the list:

    /var/log:
    Code:
    /var/log$ sudo du -sh *|sort -h
    ...
    1.1M    syslog.7.gz
    1.2M    syslog.6.gz
    2.5M    dist-upgrade
    3.5M    cloud-init.log.1
    8.3M    apache2
    14M    letsencrypt
    143M    syslog.2.gz
    221M    gitlab
    1.5G    zm
    4.1G    journal
    5.7G    syslog
    31G    syslog.1

    and /var/lib:
    Code:
    /var/lib$ sudo du -sh *|sort -h
    ...
    1.9M    fwupd
    3.4M    command-not-found
    4.3M    ubuntu-advantage
    41M    dpkg
    197M    apt
    1.3G    snapd
    8.2G    emby
    44G    mysql
    So some big offenders are /var/log/syslog.1 and /var/lib/mysql

    syslog is big, but syslog.1 is massive! Reading online at https://askubuntu.com/questions/1433...at-should-i-do, I will try some of the tips recommended there to delete and empty the syslog files.

    This one command from the above link cleared 33 GB of archived logs:
    Code:
    rm /var/log/*[0-9] /var/log/*.gz
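    For the live syslog itself, truncating rather than deleting is the usual advice, since rsyslog keeps its file handle open. Something like:
    Code:
    sudo truncate -s 0 /var/log/syslog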


    Exploring /var/lib/mysql shows ZoneMinder's database has been very naughty too, at 44 GB:

    Code:
    /var/lib/mysql$ sudo du -sh *|sort -h
    ...
    4.0M    mysql
    13M    ibtmp1
    96M    ib_logfile0
    141M    ibdata1
    524M    phpbb
    44G    zm
    I have headed off to the ZM forum for help with this SQL issue.
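    In the meantime, a query along these lines should show which tables inside the zm database are actually holding the space (I haven't run it yet, so treat it as a sketch):
    Code:
    sudo mysql -e "SELECT table_name, ROUND((data_length+index_length)/1024/1024/1024,1) AS size_gb FROM information_schema.tables WHERE table_schema='zm' ORDER BY (data_length+index_length) DESC LIMIT 10;"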

    Many thanks all, Scott.
    Last edited by scottbouch-com; 2 Weeks Ago at 12:52 PM.

  8. #8
    Join Date
    Jul 2005
    Location
    I think I'm here! Maybe?
    Beans
    Hidden!
    Distro
    Xubuntu 24.04 Noble Numbat

    Re: Root HDD being filled up by a mystery!

    I think 8.2G for /var/lib/emby is pretty big; mine is just 113M, so I wonder if you have a much larger library of media than I do (mine is about 400G). I also do not allow Emby to download a huge amount of metadata, so that may account for some of the difference.

  9. #9
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: Root HDD being filled up by a mystery!

    Code:
    4.1G    journal
    5.7G    syslog
    31G    syslog.1
    Ouch! Something is pushing lots of crap into your logs. Log files that large usually mean some program has a problem and is flooding the logs because of it. Time to watch the end of the syslog file as messages are added and track down the issue.

    Code:
    $ tail -f /var/log/syslog
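    If it scrolls by too fast to read, counting which program tags appear most often narrows it down quickly. A rough one-liner, assuming the standard syslog layout of date, time, host, then the program name in field 5:
    Code:
    awk '{print $5}' /var/log/syslog | sort | uniq -c | sort -rn | head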
    I'd set a limit in logrotate on the size that syslog can get, then rotate and compress the files more often based on the size.
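    For syslog that means tweaking the existing stanza in /etc/logrotate.d/rsyslog rather than adding a new file. Roughly this sort of change; the directives are standard logrotate, but compare against what is already in the file on your system instead of pasting blindly:
    Code:
    /var/log/syslog
    {
            rotate 7
            daily
            maxsize 100M
            missingok
            notifempty
            compress
            delaycompress
            # keep the sharedscripts/postrotate block from the existing file
    }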


    BTW, this morning, I had an email waiting from the 'at' command shown above.
    Code:
    Archived and active journals take up 672.0M in the file system.
    It appears that my attempt to limit journald logs to 200MB didn't work. But,
    Code:
    $ du -sh 299bc9611c014b03a884708043024ec2/
    169M    299bc9611c014b03a884708043024ec2/
    is still fine. There must be journals being stored elsewhere with the other 500+MB. I found 350MB in /tmp/ ....
    Code:
    $ df .
    Filesystem              Size  Used Avail Use% Mounted on
    /dev/mapper/vg01-var01   20G  3.6G   15G  20% /var
    still has plenty of free space, so I'm not going to worry about it any more, but I'll get another email next week that will clarify things.

    The manpage for journald.conf will explain things clearly. I could try again. Seems that 2 settings are needed. Setting just the max size isn't sufficient. The example shows setting the required disk free amount - which makes little sense to me, but the systemd guys think in odd ways often. I'll check if it is a problem next week when the next email comes.
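    If I do try again, the two settings together would look something like this in the [Journal] section; that's my reading of journald.conf(5), so verify against the manpage before trusting it:
    Code:
    [Journal]
    SystemMaxUse=200M
    SystemKeepFree=1G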

    I do have logwatch running nightly sending me reports of any issues across all my systems. Takes about 15 seconds per system to review it. That's part of being a sysadmin. Reviewing logs.

  10. #10
    Join Date
    Nov 2020
    Beans
    10

    Re: Root HDD being filled up by a mystery!

    I have 6 TB of media, but I suppose it's down to the number of items, i.e. a TV series will require more rows in the database than one film.
    Last edited by scottbouch-com; 2 Weeks Ago at 02:09 PM.
