Clamd consuming 1434 MB of RAM and usage increasing

The clamd service has been consuming 1000 MB of RAM and is consuming more every hour. It was stable initially, below 1000 MB.


From what I’ve read in the ApisCP resources, it seems clamd only scans new files or changes to existing files. If so, why would memory consumption increase all of a sudden?

Or does it do scheduled scans? If so, can we configure scans to run during off-peak hours when server load is low?

Any help would be appreciated.

Each new release of signatures increases the total count of signatures loaded into memory. You can have ClamAV run on demand instead, but that greatly increases scan times (from under a second to tens of seconds in the example below) and risks creating a DoS as HTTP requests pile up; it’s ill-advised:

# clamscan /bin/bash
/usr/bin/bash: OK

----------- SCAN SUMMARY -----------
Known viruses: 8878529
Engine version: 0.103.11
Scanned directories: 0
Scanned files: 1
Infected files: 0
Data scanned: 0.97 MB
Data read: 0.92 MB (ratio 1.06:1)
Time: 44.231 sec (0 m 44 s)
Start Date: 2023:12:04 08:21:10
End Date:   2023:12:04 08:21:54

versus

# clamdscan /bin/bash
/usr/bin/bash: OK

----------- SCAN SUMMARY -----------
Infected files: 0
Time: 0.771 sec (0 m 0 s)
Start Date: 2023:12:04 08:22:17
End Date:   2023:12:04 08:22:18

If you’d like to disable supplementary signatures, OfficialDatabaseOnly can be set following the customization procedure:

cpcmd scope:set cp.bootstrapper clamav_clamd_config__custom "OfficialDatabaseOnly yes"
upcp -sb clamav/setup

Note too that database reloads are concurrent to avoid processing delays. When new signatures are loaded, ClamAV spawns a copy of itself, so you may temporarily be closer to ~3 GB. You can see the maximum memory used over the process lifetime by reading memory.max_usage_in_bytes:

 cat /sys/fs/cgroup/memory/system.slice/system-clamd.slice/memory.max_usage_in_bytes
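
For readability, the same counter converted to MB:

# peak memory for the clamd slice, in MB
echo $(( $(cat /sys/fs/cgroup/memory/system.slice/system-clamd.slice/memory.max_usage_in_bytes) / 1024 / 1024 ))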

To clear the counter:

echo 0 > /sys/fs/cgroup/memory/system.slice/system-clamd.slice/memory.max_usage_in_bytes

Thank you for the reply.

If I leave clamd as is, as you recommended, what happens when my system’s RAM usage peaks at its maximum?
Is such a case handled by ApisCP? If so, how does that work?

My system has 4000 MB of memory and currently runs at around 2560 MB usage.
I understand from another post on the forums that Linux reserves some memory for cache, which it can release as needed.

The OOM killer is invoked only when available memory reaches zero. The kernel is quite judicious in how it spends memory, which is a fairly elastic resource: free memory is tied up in filesystem caches, for both file metadata and the data itself, to speed up access. Don’t worry about the value of “free” as reported by free -m. See also linuxatemyram.com
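
In other words, read the available column rather than free when checking:

# "available" estimates memory obtainable without swapping;
# buff/cache is released automatically under memory pressure
free -m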

In the event it does happen, the “OOM killer” follows a predictable scoring algorithm, under which clamd will almost certainly be terminated first. If killed, you’ll get an alert through Argos and an OOM event logged to dmesg.
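
You can inspect the kernel’s current badness score for clamd yourself. A minimal check, assuming a single clamd process (during a reload pidof returns two PIDs and the path below breaks):

# higher score means more likely to be killed first under OOM
cat /proc/$(pidof clamd)/oom_score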

I have a fresh installation of ApisCP and clamd is using 1434 MB. I keep getting alerts that clamd is restarting every hour or so. I only have one site configured right now. Is this a glitch/memory leak?

I will try using official signatures only, but my concern is that this is a fresh installation and it’s still occurring.

ClamAV loads all signatures into memory. With the natural progression of new threats, memory usage is expected to grow over time rather than shrink. This is why delegating scans to a single install works best.
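
As a rough check of where that memory goes, per-database signature counts can be read with sigtool; the path and filename below assume the stock /var/lib/clamav layout, and the daily database may be daily.cld or daily.cvd depending on how it was last updated:

sigtool --info /var/lib/clamav/daily.cld | grep -i signatures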

I’m at 1,578,968 KB, or roughly 1542 MB.

# cpcmd pman:stat "$(pidof clamd)"
pid: 2706
comm: clamd
stat: S
ppid: 1
pgrp: 2706
session: 2706
tty_nr: 0
tpgid: -1
flags: 4194624
minflt: 18577932
cminflt: 0
majflt: 213320
cmajflt: 0
utime: 1556.08
stime: 191.71
cutime: 0
cstime: 0
priority: 30
nice: 10
num_threads: 2
itrealvalue: 0
starttime: 25030245.81
vsize: 3809632
rss: 1578968
rsslim: 1.8014398509482E+16
user: 985
exe: /usr/sbin/clamd
cwd: /
startutime: !!float 1738395454
args:
  1: '-c'
  2: /etc/clamd.d/scan.conf

If you’re getting reports from Argos/Monit, then look into what’s causing a restart.

# Check recent log entries
journalctl -n50 -u clamd
# Verify it's actually restarting
systemctl status clamd

Thanks, I see no entries…

journalctl -n50 -u clamd
-- Logs begin at Mon 2025-02-03 21:18:31 UTC, end at Sun 2025-02-09 22:02:26 UTC. --
-- No entries --
systemctl status clamd
Unit clamd.service could not be found.

However, when checking /var/log/clamav/clamd.log I see this…

Sun Feb  9 20:38:04 2025 -> SelfCheck: Database status OK.
Sun Feb  9 20:48:04 2025 -> SelfCheck: Database status OK.
Sun Feb  9 20:58:04 2025 -> SelfCheck: Database status OK.
Sun Feb  9 21:08:04 2025 -> SelfCheck: Database status OK.
Sun Feb  9 21:18:04 2025 -> SelfCheck: Database modification detected. Forcing reload.

Make that clamd@scan, not clamd.
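
That is:

journalctl -n50 -u clamd@scan
systemctl status clamd@scan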

I don’t see any errors. The output is similar to clamd.log: “Database modification detected” appears about every hour, and then it restarts.

The output of systemctl status clamd@scan is necessary; otherwise everything is speculation.

# systemctl status clamd@scan
● clamd@scan.service - clamd scanner (scan) daemon
   Loaded: loaded (/etc/systemd/system/clamd@scan.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/clamd@scan.service.d
           └─override.conf
   Active: active (running) since Sat 2025-02-01 02:38:19 EST; 1 weeks 1 days ago
     Docs: man:clamd(8)
           man:clamd.conf(5)
           https://www.clamav.net/documents/
 Main PID: 2706 (clamd)
   CGroup: /system.slice/system-clamd.slice/clamd@scan.service
           └─2706 /usr/sbin/clamd -c /etc/clamd.d/scan.conf

In the above example we see clamd@scan running for the last week, so we know it’s not restarting. Reloads are perfectly organic and coincide with freshclam updates.
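
To confirm the correlation, compare reload timestamps in clamd.log against freshclam’s update log; the path below is an assumption (it’s set by UpdateLogFile in freshclam.conf):

grep -i updated /var/log/clamav/freshclam.log | tail -n 5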

Feb 09 22:21:54 apiscp.asher.cloud clamd[1721537]: SelfCheck: Database modification detected. Forcing reload.
Feb 09 22:21:54 apiscp.asher.cloud clamd[1721537]: Reading databases from /var/lib/clamav
Feb 09 22:22:41 apiscp.asher.cloud systemd[1]: clamd@scan.service: Main process exited, code=killed, status=9/KILL
Feb 09 22:22:41 apiscp.asher.cloud systemd[1]: clamd@scan.service: Failed with result 'signal'.
Feb 09 22:24:11 apiscp.asher.cloud systemd[1]: clamd@scan.service: Service RestartSec=1min 30s expired, scheduling restart.
Feb 09 22:24:11 apiscp.asher.cloud systemd[1]: clamd@scan.service: Scheduled restart job, restart counter is at 1.
Feb 09 22:24:11 apiscp.asher.cloud systemd[1]: Stopped clamd scanner (scan) daemon.
Feb 09 22:24:11 apiscp.asher.cloud systemd[1]: Starting clamd scanner (scan) daemon...
Feb 09 22:24:11 apiscp.asher.cloud clamd[1724246]: Received 0 file descriptor(s) from systemd.

   Loaded: loaded (/usr/lib/systemd/system/clamd@.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/clamd@scan.service.d
           └─override.conf
   Active: activating (start) since Sun 2025-02-09 22:24:11 UTC; 13s ago
     Docs: man:clamd(8)
           man:clamd.conf(5)
           https://www.clamav.net/documents/
 Main PID: 1721537 (code=killed, signal=KILL); Control PID: 1724245 (clamd)
    Tasks: 2 (limit: 24868)
   Memory: 1.4G
   CGroup: /system.slice/system-clamd.slice/clamd@scan.service
           ├─1724245 /usr/sbin/clamd -c /etc/clamd.d/scan.conf
           └─1724246 /usr/sbin/clamd -c /etc/clamd.d/scan.conf

What does dmesg report? It looks like you’re hitting the memory ceiling of the server and clamd is getting OOM-killed. A reload runs two parallel copies to retain scanning capability while the new copy loads.
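
For example:

# -T prints human-readable timestamps
dmesg -T | grep -iE 'out of memory|oom'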

Thanks for your help.

dmesg does show Out of memory.

Also, FYI, mail shows:

N  9 monit@localhost       Fri Feb  7 16:19  26/799   "monit alert --  Does not exist clamd"
 N 10 monit@localhost       Fri Feb  7 16:21  26/794   "monit alert --  Exists clamd"
 N 11 monit@localhost       Fri Feb  7 18:22  26/799   "monit alert --  Does not exist clamd"
 N 12 monit@localhost       Fri Feb  7 18:25  26/794   "monit alert --  Exists clamd"
 N 13 monit@localhost       Fri Feb  7 19:25  26/799   "monit alert --  Does not exist clamd"
 N 14 monit@localhost       Fri Feb  7 19:28  26/794   "monit alert --  Exists clamd"
 N 15 monit@localhost       Fri Feb  7 20:28  26/799   "monit alert --  Does not exist clamd"
 N 16 monit@localhost       Fri Feb  7 20:30  26/794   "monit alert --  Exists clamd"
 N 17 monit@localhost       Fri Feb  7 21:20  26/799   "monit alert --  Does not exist clamd"
 N 18 monit@localhost       Fri Feb  7 21:23  26/794   "monit alert --  Exists clamd"
 N 19 monit@localhost       Fri Feb  7 22:23  26/799   "monit alert --  Does not exist clamd"
 N 20 monit@localhost       Fri Feb  7 22:26  26/794   "monit alert --  Exists clamd"
 N 21 monit@localhost       Fri Feb  7 23:26  26/799   "monit alert --  Does not exist clamd"
 N 22 monit@localhost       Fri Feb  7 23:29  26/794   "monit alert --  Exists clamd"
 N 23 monit@localhost       Sat Feb  8 00:30  26/799   "monit alert --  Does not exist clamd"
 N 24 monit@localhost       Sat Feb  8 00:33  26/794   "monit alert --  Exists clamd"
 N 25 monit@localhost       Sat Feb  8 00:43  26/799   "monit alert --  Does not exist clamd"
 N 26 monit@localhost       Sat Feb  8 00:45  26/794   "monit alert --  Exists clamd"
 N 27 monit@localhost       Sat Feb  8 01:25  26/799   "monit alert --  Does not exist clamd"
 N 28 monit@localhost       Sat Feb  8 01:28  26/794   "monit alert --  Exists clamd"
 N 29 monit@localhost       Sat Feb  8 02:28  26/799   "monit alert --  Does not exist clamd"

I have a 4 GB dedicated VPS. Only one site at the moment (one user).


The site is essentially just HTML/JS/CSS with limited PHP. There are no file uploads or user input, so I wouldn’t think clamd would need that much to run. Do I need to tweak a setting?

Can you provide further snippets? There can be OOM kills from cgroup limits, which are perfectly normal, and there can be OOM kills from global memory pressure, which are not. Context helps us understand what we’re looking at rather than speculating.
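
The constraint= field on the oom-kill line tells the two apart:

# CONSTRAINT_MEMCG            -> exceeded a cgroup limit (normal)
# CONSTRAINT_NONE, global_oom -> system-wide memory pressure (not normal)
dmesg -T | grep -i oom-kill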

Sure, here is some more output from dmesg; let me know if I need to provide other info. :blush:

[515255.453092] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=user.slice,mems_allowed=0,global_oom,task_memcg=/system.slice/system-clamd.slice/clamd@scan.service,task=clamd,pid=1707910,uid=985
[515255.458961] Out of memory: Killed process 1707910 (clamd) total-vm:3256524kB, anon-rss:2731788kB, file-rss:1924kB, shmem-rss:0kB, UID:985 pgtables:5612kB oom_score_adj:0
[518439.817072] redis-server invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0

The redis-server process requested more memory than was available, so clamd was killed to satisfy the request. This is due to concurrent reloads, which can be disabled by setting ConcurrentDatabaseReload no in /etc/clamd.d/scan.conf:

# Enable non-blocking (multi-threaded/concurrent) database reloads.
# This feature will temporarily load a second scanning engine while scanning
# continues using the first engine. Once loaded, the new engine takes over.
# The old engine is removed as soon as all scans using the old engine have
# completed.
# This feature requires more RAM, so this option is provided in case users are
# willing to block scans during reload in exchange for lower RAM requirements.
# Default: yes
#ConcurrentDatabaseReload no

Run systemctl restart clamd@scan afterward.
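
As a sketch, assuming the directive is present but commented out as in the quoted block above:

# trade slower, blocking reloads for roughly half the peak RAM
sed -i 's/^#ConcurrentDatabaseReload no/ConcurrentDatabaseReload no/' /etc/clamd.d/scan.conf
systemctl restart clamd@scan

If ApisCP manages this file, the clamav_clamd_config__custom Scope shown earlier is presumably the more durable place for this override.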

Thanks, I have made the changes.