Zoneminder not releasing open files when encoding with VAAPI
Hi, I'm trying VAAPI-accelerated video encoding to h264 on an Intel Ivy Bridge iGPU. I was able to get it working properly with good results, but after a while ZM stopped saving h264 videos from this monitor and saved mjpeg videos instead. Wondering what had happened, after some investigation I found that the default limit for open files (1024) had been reached. I'm saving 30 s video files, so the limit was exhausted rather quickly, with two more open files every minute.
Short lsof sample showing the problematic file:
zmc 1549915 www-data 1291u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1292u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1293u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1294u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1295u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1296u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1297u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1298u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549926 www-data 6u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1291u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1292u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1293u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1294u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1295u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1296u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1297u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549915 www-data 1298u CHR 226,128 0t0 673 /dev/dri/renderD128
zmc 1549926 www-data 6u CHR 226,128 0t0 673 /dev/dri/renderD128
root@cam:~/scripts# service zoneminder status
● zoneminder.service - ZoneMinder CCTV recording and surveillance system
Loaded: loaded (/usr/lib/systemd/system/zoneminder.service; enabled; preset: enabled)
Active: active (running) since Tue 2024-12-24 10:22:48 CET; 10h ago
Process: 1549870 ExecStart=/usr/bin/zmpkg.pl start (code=exited, status=0/SUCCESS)
Main PID: 1549887 (zmdc.pl)
Tasks: 20 (limit: 18742)
Memory: 11.8G (peak: 12.6G)
CPU: 8h 39min 10.232s
CGroup: /system.slice/zoneminder.service
├─1549887 /usr/bin/perl -wT /usr/bin/zmdc.pl startup
├─1549915 /usr/bin/zmc -m 6
├─1549922 "/usr/bin/zmcontrol.pl --id 6"
├─1549926 /usr/bin/zmc -m 8
├─1549932 /usr/bin/perl -wT /usr/bin/zmfilter.pl --filter_id=2 --daemon
├─1549936 /usr/bin/perl -wT /usr/bin/zmfilter.pl --filter_id=3 --daemon
├─1549941 /usr/bin/perl -wT /usr/bin/zmfilter.pl --filter_id=5 --daemon
├─1549946 /usr/bin/perl -wT /usr/bin/zmaudit.pl -c
├─1549953 /usr/bin/perl -wT /usr/bin/zmwatch.pl
├─1549960 /usr/bin/perl -wT /usr/bin/zmtelemetry.pl
├─1549965 /usr/bin/perl -wT /usr/bin/zmstats.pl
└─1558521 /usr/bin/perl -wT /usr/bin/zmfilter.pl --filter_id=4 --daemon
and
root@cam:~/scripts# lsof -u www-data|grep renderD128|wc -l
1291
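To see which process is actually leaking, I count the renderD128 descriptors per PID. A quick sketch of such a helper (my own, not part of ZM, and assuming lsof's default output columns), applied to lines like the sample above:

```python
from collections import Counter

def count_render_fds(lsof_lines):
    """Count /dev/dri/renderD128 file descriptors per PID from lsof output lines."""
    counts = Counter()
    for line in lsof_lines:
        fields = line.split()
        # default lsof columns: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
        if len(fields) >= 9 and fields[-1] == "/dev/dri/renderD128":
            counts[int(fields[1])] += 1
    return counts

sample = [
    "zmc 1549915 www-data 1291u CHR 226,128 0t0 673 /dev/dri/renderD128",
    "zmc 1549915 www-data 1292u CHR 226,128 0t0 673 /dev/dri/renderD128",
    "zmc 1549926 www-data 6u CHR 226,128 0t0 673 /dev/dri/renderD128",
]
print(count_render_fds(sample))  # Counter({1549915: 2, 1549926: 1})
```

In my case only the zmc for the VAAPI monitor keeps growing; the other zmc stays at a handful of descriptors.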
I've increased ulimit -n, but this is only a temporary solution.
www-data@cam:/root/scripts$ ulimit -n
64000
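For anyone else hitting this: a persistent way to raise the limit for the service itself (rather than a shell ulimit) is a systemd drop-in; the path and value below are just my suggestion, not anything ZM ships:

```ini
# /etc/systemd/system/zoneminder.service.d/override.conf
[Service]
LimitNOFILE=64000
```

followed by `systemctl daemon-reload && systemctl restart zoneminder`. Still only a workaround, of course.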
Restarting ZM cleans up the open files.
The system is an up-to-date Ubuntu 24.04.1 LTS.
Is this a known bug, or do I need to set something in ZM to prevent it? Thank you!
Merry Christmas to all!
Re: Zoneminder not releasing open files when encoding with VAAPI
That is a very interesting discovery! I'll see what I can do to fix it.
Re: Zoneminder not releasing open files when encoding with VAAPI
Thank you very much! I can help with testing when needed, of course.
Kind regards,
Milan
Re: Zoneminder not releasing open files when encoding with VAAPI
I haven't been able to reproduce this. What distro/ffmpeg version? What options have you specified for the encoder?
Re: Zoneminder not releasing open files when encoding with VAAPI
Hi, thank you for looking into it. Happy New Year!
The distro is Ubuntu LTS:
root@cam:/home/migo# cat /etc/issue
Ubuntu 24.04.1 LTS \n \l
root@cam:/home/migo# ffmpeg -version
ffmpeg version 6.1.1-3ubuntu5 Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 13 (Ubuntu 13.2.0-23ubuntu3)
configuration: --prefix=/usr --extra-version=3ubuntu5 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --disable-omx --enable-gnutls --enable-libaom --enable-libass --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libharfbuzz --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-openal --enable-opencl --enable-opengl --disable-sndio --enable-libvpl --disable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-ladspa --enable-libbluray --enable-libjack --enable-libpulse --enable-librabbitmq --enable-librist --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libx264 --enable-libzmq --enable-libzvbi --enable-lv2 --enable-sdl2 --enable-libplacebo --enable-librav1e --enable-pocketsphinx --enable-librsvg --enable-libjxl --enable-shared
libavutil 58. 29.100 / 58. 29.100
libavcodec 60. 31.102 / 60. 31.102
libavformat 60. 16.100 / 60. 16.100
libavdevice 60. 3.100 / 60. 3.100
libavfilter 9. 12.100 / 9. 12.100
libswscale 7. 5.100 / 7. 5.100
libswresample 4. 12.100 / 4. 12.100
libpostproc 57. 3.100 / 57. 3.100
The only parameter used for the encoder is qp=28; please see the attachment.
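In case it helps reproduce it outside of ZM, an equivalent standalone VAAPI encode with the same qp can be run with ffmpeg directly; this is only a sketch, with the camera URL and output paths as placeholders:

```shell
# standalone h264_vaapi encode with qp=28, segmented into 30 s files like ZM does
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -hwaccel_output_format vaapi \
       -i rtsp://CAMERA_URL \
       -c:v h264_vaapi -qp 28 \
       -f segment -segment_time 30 out%03d.mp4
```

One can then watch `lsof -p <pid> | grep renderD128` to compare the descriptor behaviour against zmc.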
Thank you,
Milan
- Attachments: Screenshot From 2025-01-01 19-30-00.png (69.81 KiB)
Re: Zoneminder not releasing open files when encoding with VAAPI
Can you post the parameters you used for the encoder, please? I'll try those to see if anything changes. Thank you!
Re: Zoneminder not releasing open files when encoding with VAAPI
Mine was empty. With your qp setting I am getting the behaviour.
Re: Zoneminder not releasing open files when encoding with VAAPI
Strangely, master doesn't show the behaviour; only 1.36 does. Which is strange, because the code is VERY similar.
Re: Zoneminder not releasing open files when encoding with VAAPI
Without this parameter, the video quality was very poor.

Re: Zoneminder not releasing open files when encoding with VAAPI
Another problem is that the zmc for this monitor (-m 6) consumes a LOT of memory and causes OOM-kill situations...
- Attachments: Screenshot From 2025-01-28 18-29-57.png (304.69 KiB)
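Until the leak itself is fixed, it may help to cap the service's memory so the OOM killer reaps zmc early instead of taking journald and other services down with it. A possible systemd drop-in (the 8G value is only a guess for a ~24 GB box, and `MemoryMax`/`OOMPolicy` need a reasonably recent systemd):

```ini
# /etc/systemd/system/zoneminder.service.d/memory.conf
[Service]
MemoryMax=8G
OOMPolicy=kill
```

Again, this just limits the blast radius; it does not address why zmc grows.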
Re: Zoneminder not releasing open files when encoding with VAAPI
Like this one:
[Tue Jan 28 15:25:55 2025] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init.scope,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=357287,uid=33
[Tue Jan 28 15:25:55 2025] Out of memory: Killed process 357287 (zmc) total-vm:30176312kB, anon-rss:14747924kB, file-rss:4352kB, shmem-rss:8064kB, UID:33 pgtables:58372kB oom_score_adj:0
[Tue Jan 28 15:25:57 2025] systemd[1]: snapd.service: Watchdog timeout (limit 5min)!
[Tue Jan 28 15:25:57 2025] systemd[1]: snapd.service: Killing process 344943 (snapd) with signal SIGABRT.
[Tue Jan 28 15:25:57 2025] oom_reaper: reaped process 357287 (zmc), now anon-rss:0kB, file-rss:88kB, shmem-rss:0kB
[Tue Jan 28 15:25:58 2025] systemd[1]: snapd.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
[Tue Jan 28 15:25:58 2025] systemd[1]: snapd.service: Failed with result 'watchdog'.
[Tue Jan 28 15:25:58 2025] systemd[1]: Created slice user-1000.slice - User Slice of UID 1000.
[Tue Jan 28 15:25:58 2025] systemd[1]: Starting user-runtime-dir@1000.service - User Runtime Directory /run/user/1000...
[Tue Jan 28 15:25:58 2025] systemd[1]: Finished user-runtime-dir@1000.service - User Runtime Directory /run/user/1000.
[Tue Jan 28 15:25:58 2025] systemd[1]: Starting user@1000.service - User Manager for UID 1000...
[Tue Jan 28 15:25:58 2025] systemd[1]: snapd.service: Scheduled restart job, restart counter is at 4.
[Tue Jan 28 15:25:58 2025] systemd[1]: Starting snapd.service - Snap Daemon...
[Tue Jan 28 15:25:58 2025] systemd[1]: Started user@1000.service - User Manager for UID 1000.
[Tue Jan 28 15:25:58 2025] systemd[1]: Started session-414.scope - Session 414 of User migo.
[Tue Jan 28 15:25:58 2025] systemd[1]: systemd-journald.service: Main process exited, code=dumped, status=6/ABRT
[Tue Jan 28 15:25:58 2025] systemd[1]: systemd-journald.service: Failed with result 'watchdog'.
[Tue Jan 28 15:25:58 2025] systemd[1]: systemd-journald.service: Consumed 3min 11.939s CPU time.
[Tue Jan 28 15:25:58 2025] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 3.
[Tue Jan 28 15:25:58 2025] systemd[1]: Starting systemd-journald.service - Journal Service...
[Tue Jan 28 15:25:58 2025] systemd-journald[369628]: Collecting audit messages is disabled.
[Tue Jan 28 15:25:58 2025] systemd-journald[369628]: File /var/log/journal/5105103edaba44fabd4c9dab6f5886da/system.journal corrupted or uncleanly shut down, renaming and replacing.
[Tue Jan 28 15:25:58 2025] systemd[1]: Started systemd-journald.service - Journal Service.
[Tue Jan 28 15:25:59 2025] systemd-journald[369628]: /var/log/journal/5105103edaba44fabd4c9dab6f5886da/user-1000.journal: Journal file uses a different sequence number ID, rotating.
[Tue Jan 28 15:25:59 2025] loop6: detected capacity change from 0 to 8
[Tue Jan 28 18:39:39 2025] workqueue: delayed_fput hogged CPU for >10000us 4 times, consider switching to WQ_UNBOUND
[Fri Jan 31 06:52:57 2025] audit: type=1400 audit(1738302777.538:145): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/sbin/mysqld" pid=637067 comm="apparmor_parser"
[Fri Jan 31 06:53:12 2025] audit: type=1400 audit(1738302792.841:146): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/sbin/mysqld" pid=637477 comm="apparmor_parser"
[Fri Jan 31 08:20:41 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:20:42 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:20:47 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:20:51 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:20:56 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:21:00 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:21:05 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:22:19 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 09:06:03 2025] zmc invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
[Fri Jan 31 09:06:03 2025] CPU: 3 PID: 648259 Comm: zmc Tainted: G U 6.8.0-51-generic #52-Ubuntu
[Fri Jan 31 09:06:03 2025] Hardware name: LENOVO 10SUS4430B/312A, BIOS M1UKT47A 08/14/2019
[Fri Jan 31 09:06:03 2025] Call Trace:
[Fri Jan 31 09:06:03 2025] <TASK>
[Fri Jan 31 09:06:03 2025] dump_stack_lvl+0x76/0xa0
[Fri Jan 31 09:06:03 2025] dump_stack+0x10/0x20
[Fri Jan 31 09:06:03 2025] dump_header+0x47/0x1f0
[Fri Jan 31 09:06:03 2025] oom_kill_process+0x118/0x280
[Fri Jan 31 09:06:03 2025] ? oom_evaluate_task+0x143/0x1e0
[Fri Jan 31 09:06:03 2025] out_of_memory+0x103/0x350
[Fri Jan 31 09:06:03 2025] __alloc_pages_may_oom+0x10c/0x1d0
[Fri Jan 31 09:06:03 2025] __alloc_pages_slowpath.constprop.0+0x420/0x9f0
[Fri Jan 31 09:06:03 2025] __alloc_pages+0x31f/0x350
[Fri Jan 31 09:06:03 2025] alloc_pages_mpol+0x91/0x210
[Fri Jan 31 09:06:03 2025] folio_alloc+0x64/0x120
[Fri Jan 31 09:06:03 2025] ? filemap_get_entry+0xe5/0x160
[Fri Jan 31 09:06:03 2025] filemap_alloc_folio+0xf4/0x100
[Fri Jan 31 09:06:03 2025] __filemap_get_folio+0x14b/0x2f0
[Fri Jan 31 09:06:03 2025] filemap_fault+0x15c/0x8e0
[Fri Jan 31 09:06:03 2025] __do_fault+0x38/0x140
[Fri Jan 31 09:06:03 2025] do_read_fault+0x133/0x1d0
[Fri Jan 31 09:06:03 2025] do_fault+0x109/0x350
[Fri Jan 31 09:06:03 2025] handle_pte_fault+0x114/0x1d0
[Fri Jan 31 09:06:03 2025] __handle_mm_fault+0x653/0x790
[Fri Jan 31 09:06:03 2025] handle_mm_fault+0x18a/0x380
[Fri Jan 31 09:06:03 2025] do_user_addr_fault+0x169/0x670
[Fri Jan 31 09:06:03 2025] exc_page_fault+0x83/0x1b0
[Fri Jan 31 09:06:03 2025] asm_exc_page_fault+0x27/0x30
[Fri Jan 31 09:06:03 2025] RIP: 0033:0x5f3cffd61790
[Fri Jan 31 09:06:03 2025] Code: Unable to access opcode bytes at 0x5f3cffd61766.
[Fri Jan 31 09:06:03 2025] RSP: 002b:0000768e757ff3c8 EFLAGS: 00010246
[Fri Jan 31 09:06:03 2025] RAX: 0000000000000006 RBX: 0000000000000000 RCX: 00005f3d05ca5968
[Fri Jan 31 09:06:03 2025] RDX: 0000000100000006 RSI: 0000000000000028 RDI: 0000768e784b64d0
[Fri Jan 31 09:06:03 2025] RBP: 0000768e757ff420 R08: 0000000000000000 R09: 0000000000000000
[Fri Jan 31 09:06:03 2025] R10: 0000000000000000 R11: 0000000000000293 R12: 00005f3d05ca5960
[Fri Jan 31 09:06:03 2025] R13: 00005f3d05ca5970 R14: 0000768e8a60b040 R15: 00005f3d05ca5968
[Fri Jan 31 09:06:03 2025] </TASK>
[Fri Jan 31 09:06:03 2025] Mem-Info:
[Fri Jan 31 09:06:03 2025] active_anon:5511399 inactive_anon:368895 isolated_anon:0
active_file:687 inactive_file:1435 isolated_file:0
unevictable:38237 dirty:0 writeback:0
slab_reclaimable:68668 slab_unreclaimable:39009
mapped:31576 shmem:32076 pagetables:16000
sec_pagetables:0 bounce:0
kernel_misc_reclaimable:0
free:38988 free_pcp:2629 free_cma:0
[Fri Jan 31 09:06:03 2025] Node 0 active_anon:22045596kB inactive_anon:1475580kB active_file:2968kB inactive_file:5300kB unevictable:152948kB isolated(anon):0kB isolated(file):0kB mapped:126304kB dirty:0kB writeback:0kB shmem:128304kB shmem_thp:4096kB shmem_pmdmapped:0kB anon_thp:241664kB writeback_tmp:0kB kernel_stack:5424kB pagetables:64000kB sec_pagetables:0kB all_unreclaimable? no
[Fri Jan 31 09:06:03 2025] Node 0 DMA free:11264kB boost:0kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[Fri Jan 31 09:06:03 2025] lowmem_reserve[]: 0 2307 23814 23814 23814
[Fri Jan 31 09:06:03 2025] Node 0 DMA32 free:91544kB boost:0kB min:6540kB low:8900kB high:11260kB reserved_highatomic:0KB active_anon:2076332kB inactive_anon:223036kB active_file:140kB inactive_file:1232kB unevictable:0kB writepending:0kB present:2494164kB managed:2428212kB mlocked:0kB bounce:0kB free_pcp:1716kB local_pcp:0kB free_cma:0kB
[Fri Jan 31 09:06:03 2025] lowmem_reserve[]: 0 0 21506 21506 21506
[Fri Jan 31 09:06:03 2025] Node 0 Normal free:53144kB boost:0kB min:60996kB low:83016kB high:105036kB reserved_highatomic:0KB active_anon:19969068kB inactive_anon:1252740kB active_file:792kB inactive_file:5804kB unevictable:152948kB writepending:0kB present:22519808kB managed:22030112kB mlocked:142500kB bounce:0kB free_pcp:8800kB local_pcp:4kB free_cma:0kB
[Fri Jan 31 09:06:03 2025] lowmem_reserve[]: 0 0 0 0 0
[Fri Jan 31 09:06:03 2025] Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 2*4096kB (M) = 11264kB
[Fri Jan 31 09:06:03 2025] Node 0 DMA32: 518*4kB (UE) 742*8kB (UME) 659*16kB (UME) 474*32kB (UE) 247*64kB (UME) 127*128kB (UME) 55*256kB (UME) 14*512kB (UME) 4*1024kB (UME) 0*2048kB 0*4096kB = 91128kB
[Fri Jan 31 09:06:03 2025] Node 0 Normal: 3233*4kB (UME) 748*8kB (UME) 1653*16kB (UME) 219*32kB (UME) 3*64kB (U) 1*128kB (U) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 52692kB
[Fri Jan 31 09:06:03 2025] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[Fri Jan 31 09:06:03 2025] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[Fri Jan 31 09:06:03 2025] 37017 total pagecache pages
[Fri Jan 31 09:06:03 2025] 0 pages in swap cache
[Fri Jan 31 09:06:03 2025] Free swap = 0kB
[Fri Jan 31 09:06:03 2025] Total swap = 0kB
[Fri Jan 31 09:06:03 2025] 6257491 pages RAM
[Fri Jan 31 09:06:03 2025] 0 pages HighMem/MovableOnly
[Fri Jan 31 09:06:03 2025] 139070 pages reserved
[Fri Jan 31 09:06:03 2025] 0 pages hwpoisoned
[Fri Jan 31 09:06:03 2025] Tasks state (memory values in pages):
[Fri Jan 31 09:06:03 2025] [ pid ] uid tgid total_vm rss rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
[Fri Jan 31 09:06:03 2025] [ 447] 0 447 72279 6816 4640 2176 0 114688 0 -1000 multipathd
[Fri Jan 31 09:06:03 2025] [ 469] 0 469 7243 963 419 544 0 81920 0 -1000 systemd-udevd
[Fri Jan 31 09:06:03 2025] [ 847] 102 847 5386 1248 576 672 0 86016 0 0 systemd-resolve
[Fri Jan 31 09:06:03 2025] [ 848] 104 848 22735 992 224 768 0 94208 0 0 systemd-timesyn
[Fri Jan 31 09:06:03 2025] [ 889] 101 889 4718 928 256 672 0 81920 0 0 systemd-network
[Fri Jan 31 09:06:03 2025] [ 948] 103 948 2439 960 128 832 0 61440 0 -900 dbus-daemon
[Fri Jan 31 09:06:03 2025] [ 951] 0 951 272182 6080 5088 992 0 1376256 0 0 fail2ban-server
[Fri Jan 31 09:06:03 2025] [ 966] 0 966 4520 928 224 704 0 81920 0 0 systemd-logind
[Fri Jan 31 09:06:03 2025] [ 984] 0 984 673 352 0 352 0 49152 0 0 atopacctd
[Fri Jan 31 09:06:03 2025] [ 1003] 107 1003 55627 1181 349 832 0 94208 0 0 rsyslogd
[Fri Jan 31 09:06:03 2025] [ 1038] 0 1038 1706 512 32 480 0 57344 0 0 cron
[Fri Jan 31 09:06:03 2025] [ 1039] 65534 1039 3546 1120 320 800 0 73728 0 0 openvpn
[Fri Jan 31 09:06:03 2025] [ 1047] 0 1047 27413 3488 2336 1152 0 118784 0 0 unattended-upgr
[Fri Jan 31 09:06:03 2025] [ 1099] 0 1099 1526 480 32 448 0 53248 0 0 agetty
[Fri Jan 31 09:06:03 2025] [ 1140] 0 1140 19526 1504 672 832 0 151552 0 0 nmbd
[Fri Jan 31 09:06:03 2025] [ 1147] 0 1147 22580 2432 800 896 736 176128 0 0 smbd
[Fri Jan 31 09:06:03 2025] [ 1161] 0 1161 21957 1075 787 288 0 143360 0 0 smbd-notifyd
[Fri Jan 31 09:06:03 2025] [ 1162] 0 1162 21959 1107 787 320 0 139264 0 0 smbd-cleanupd
[Fri Jan 31 09:06:03 2025] [ 2014] 0 2014 3005 800 256 544 0 65536 0 -1000 sshd
[Fri Jan 31 09:06:03 2025] [ 22436] 998 22436 77040 1280 192 1088 0 114688 0 0 polkitd
[Fri Jan 31 09:06:03 2025] [ 276974] 0 276974 23068 1478 806 320 352 159744 0 0 smbd-scavenger
[Fri Jan 31 09:06:03 2025] [ 369524] 0 369524 32081 3003 1166 607 1230 200704 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 369548] 0 369548 404364 3086 3086 0 0 339968 0 -900 snapd
[Fri Jan 31 09:06:03 2025] [ 369628] 0 369628 29008 1101 256 845 0 241664 0 -250 systemd-journal
[Fri Jan 31 09:06:03 2025] [ 371760] 0 371760 23485 2098 838 651 609 180224 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 610543] 0 610543 3363 3273 1801 1472 0 65536 0 -999 atop
[Fri Jan 31 09:06:03 2025] [ 637570] 113 637570 958850 136799 136063 736 0 1777664 0 0 mysqld
[Fri Jan 31 09:06:03 2025] [ 638384] 0 638384 63300 3042 1698 672 672 172032 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 638393] 33 638393 63836 4060 2332 704 1024 200704 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 638394] 33 638394 63787 4033 2241 736 1056 204800 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 638395] 33 638395 63835 4099 2307 736 1056 200704 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 638396] 33 638396 63835 4080 2352 672 1056 204800 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 638397] 33 638397 63834 4202 2442 736 1024 204800 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 638425] 33 638425 9986 3588 3140 448 0 122880 0 0 zmdc.pl
[Fri Jan 31 09:06:03 2025] [ 638452] 0 638452 105568 1280 192 1088 0 163840 0 0 thermald
[Fri Jan 31 09:06:03 2025] [ 638462] 33 638462 275480 144360 141256 1088 2016 1605632 0 0 zmc
[Fri Jan 31 09:06:03 2025] [ 638469] 33 638469 11437 5216 4576 640 0 139264 0 0 /usr/bin/zmcont
[Fri Jan 31 09:06:03 2025] [ 638473] 33 638473 222580 71441 64401 992 6048 1081344 0 0 zmc
[Fri Jan 31 09:06:03 2025] [ 638479] 33 638479 342521 214777 207673 1056 6048 2080768 0 0 zmc
[Fri Jan 31 09:06:03 2025] [ 638489] 33 638489 278195 148513 141409 1056 6048 1572864 0 0 zmc
[Fri Jan 31 09:06:03 2025] [ 638494] 33 638494 14540 8256 7616 640 0 172032 0 0 zmfilter.pl
[Fri Jan 31 09:06:03 2025] [ 638499] 33 638499 15815 9504 8864 640 0 167936 0 0 zmfilter.pl
[Fri Jan 31 09:06:03 2025] [ 638505] 33 638505 15810 9504 8832 672 0 167936 0 0 zmfilter.pl
[Fri Jan 31 09:06:03 2025] [ 638511] 33 638511 14348 8000 7392 608 0 155648 0 0 zmfilter.pl
[Fri Jan 31 09:06:03 2025] [ 638518] 33 638518 13229 7104 6368 736 0 147456 0 0 zmaudit.pl
[Fri Jan 31 09:06:03 2025] [ 638525] 33 638525 9865 3584 2976 608 0 118784 0 0 zmwatch.pl
[Fri Jan 31 09:06:03 2025] [ 638531] 33 638531 13805 7495 6727 768 0 147456 0 0 zmtelemetry.pl
[Fri Jan 31 09:06:03 2025] [ 638537] 33 638537 9795 3488 2912 576 0 126976 0 0 zmstats.pl
[Fri Jan 31 09:06:03 2025] [ 638864] 33 638864 63824 4011 2280 675 1056 204800 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 639273] 33 639273 63837 3887 2127 736 1024 200704 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 639274] 33 639274 63825 4044 2284 736 1024 196608 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 639275] 33 639275 63836 3854 2158 704 992 196608 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 639276] 33 639276 63824 4054 2326 672 1056 204800 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 644568] 33 644568 5374235 5100872 5093768 1056 6048 41791488 0 0 zmc
[Fri Jan 31 09:06:03 2025] [ 648289] 33 648289 63489 2240 1728 512 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648293] 33 648293 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648294] 33 648294 63489 2272 1728 544 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648295] 33 648295 63489 2176 1728 448 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648297] 33 648297 63489 2176 1728 448 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648308] 0 648308 22590 1382 806 544 32 163840 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648310] 0 648310 22590 1318 806 512 0 163840 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648321] 0 648321 2384 739 131 608 0 61440 0 0 cron
[Fri Jan 31 09:06:03 2025] [ 648331] 0 648331 2350 643 131 512 0 61440 0 0 cron
[Fri Jan 31 09:06:03 2025] [ 648333] 33 648333 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648334] 33 648334 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648335] 33 648335 63489 2272 1728 544 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648336] 33 648336 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648337] 33 648337 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648338] 33 648338 63489 2240 1728 512 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648339] 33 648339 63489 2272 1728 544 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648343] 0 648343 2350 707 131 576 0 61440 0 0 cron
[Fri Jan 31 09:06:03 2025] [ 648344] 33 648344 63489 2176 1728 448 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648346] 33 648346 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648347] 33 648347 63489 2176 1728 448 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648348] 33 648348 63489 2176 1728 448 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648352] 33 648352 63489 2240 1728 512 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648357] 33 648357 63489 2176 1728 448 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648362] 33 648362 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648363] 33 648363 63484 2272 1728 544 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648364] 33 648364 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648368] 0 648368 22580 1158 806 352 0 143360 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648369] 33 648369 63484 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648371] 0 648371 22580 1158 806 352 0 143360 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648373] 33 648373 63484 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648374] 33 648374 63484 2240 1728 512 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648375] 33 648375 63484 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648376] 33 648376 63484 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648380] 0 648380 63480 2208 1728 480 0 155648 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648381] 0 648381 22580 1158 806 352 0 143360 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648382] 0 648382 22580 1126 806 320 0 143360 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648383] 0 648383 3005 603 283 320 0 57344 0 0 sshd
[Fri Jan 31 09:06:03 2025] [ 648384] 0 648384 22580 1126 806 320 0 143360 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648385] 0 648385 1791 451 67 384 0 57344 0 0 cron
[Fri Jan 31 09:06:03 2025] [ 648386] 0 648386 3005 539 283 256 0 57344 0 0 sshd
[Fri Jan 31 09:06:03 2025] [ 648387] 0 648387 63420 2144 1728 416 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648388] 0 648388 63420 2144 1728 416 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648389] 0 648389 63420 2208 1728 480 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648390] 0 648390 63420 2176 1728 448 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648391] 0 648391 63420 2176 1728 448 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648392] 0 648392 63420 2176 1728 448 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648393] 0 648393 63420 2176 1728 448 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648394] 0 648394 22580 1094 806 288 0 143360 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=zoneminder.service,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=644568,uid=33
[Fri Jan 31 09:06:03 2025] Out of memory: Killed process 644568 (zmc) total-vm:21496940kB, anon-rss:20375072kB, file-rss:4224kB, shmem-rss:24192kB, UID:33 pgtables:40812kB oom_score_adj:0
[Fri Jan 31 06:53:12 2025] audit: type=1400 audit(1738302792.841:146): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/sbin/mysqld" pid=637477 comm="apparmor_parser"
[Fri Jan 31 08:20:41 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:20:42 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:20:47 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:20:51 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:20:56 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:21:00 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:21:05 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 08:22:19 2025] systemd-journald[369628]: Under memory pressure, flushing caches.
[Fri Jan 31 09:06:03 2025] zmc invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
[Fri Jan 31 09:06:03 2025] CPU: 3 PID: 648259 Comm: zmc Tainted: G U 6.8.0-51-generic #52-Ubuntu
[Fri Jan 31 09:06:03 2025] Hardware name: LENOVO 10SUS4430B/312A, BIOS M1UKT47A 08/14/2019
[Fri Jan 31 09:06:03 2025] Call Trace:
[Fri Jan 31 09:06:03 2025] <TASK>
[Fri Jan 31 09:06:03 2025] dump_stack_lvl+0x76/0xa0
[Fri Jan 31 09:06:03 2025] dump_stack+0x10/0x20
[Fri Jan 31 09:06:03 2025] dump_header+0x47/0x1f0
[Fri Jan 31 09:06:03 2025] oom_kill_process+0x118/0x280
[Fri Jan 31 09:06:03 2025] ? oom_evaluate_task+0x143/0x1e0
[Fri Jan 31 09:06:03 2025] out_of_memory+0x103/0x350
[Fri Jan 31 09:06:03 2025] __alloc_pages_may_oom+0x10c/0x1d0
[Fri Jan 31 09:06:03 2025] __alloc_pages_slowpath.constprop.0+0x420/0x9f0
[Fri Jan 31 09:06:03 2025] __alloc_pages+0x31f/0x350
[Fri Jan 31 09:06:03 2025] alloc_pages_mpol+0x91/0x210
[Fri Jan 31 09:06:03 2025] folio_alloc+0x64/0x120
[Fri Jan 31 09:06:03 2025] ? filemap_get_entry+0xe5/0x160
[Fri Jan 31 09:06:03 2025] filemap_alloc_folio+0xf4/0x100
[Fri Jan 31 09:06:03 2025] __filemap_get_folio+0x14b/0x2f0
[Fri Jan 31 09:06:03 2025] filemap_fault+0x15c/0x8e0
[Fri Jan 31 09:06:03 2025] __do_fault+0x38/0x140
[Fri Jan 31 09:06:03 2025] do_read_fault+0x133/0x1d0
[Fri Jan 31 09:06:03 2025] do_fault+0x109/0x350
[Fri Jan 31 09:06:03 2025] handle_pte_fault+0x114/0x1d0
[Fri Jan 31 09:06:03 2025] __handle_mm_fault+0x653/0x790
[Fri Jan 31 09:06:03 2025] handle_mm_fault+0x18a/0x380
[Fri Jan 31 09:06:03 2025] do_user_addr_fault+0x169/0x670
[Fri Jan 31 09:06:03 2025] exc_page_fault+0x83/0x1b0
[Fri Jan 31 09:06:03 2025] asm_exc_page_fault+0x27/0x30
[Fri Jan 31 09:06:03 2025] RIP: 0033:0x5f3cffd61790
[Fri Jan 31 09:06:03 2025] Code: Unable to access opcode bytes at 0x5f3cffd61766.
[Fri Jan 31 09:06:03 2025] RSP: 002b:0000768e757ff3c8 EFLAGS: 00010246
[Fri Jan 31 09:06:03 2025] RAX: 0000000000000006 RBX: 0000000000000000 RCX: 00005f3d05ca5968
[Fri Jan 31 09:06:03 2025] RDX: 0000000100000006 RSI: 0000000000000028 RDI: 0000768e784b64d0
[Fri Jan 31 09:06:03 2025] RBP: 0000768e757ff420 R08: 0000000000000000 R09: 0000000000000000
[Fri Jan 31 09:06:03 2025] R10: 0000000000000000 R11: 0000000000000293 R12: 00005f3d05ca5960
[Fri Jan 31 09:06:03 2025] R13: 00005f3d05ca5970 R14: 0000768e8a60b040 R15: 00005f3d05ca5968
[Fri Jan 31 09:06:03 2025] </TASK>
[Fri Jan 31 09:06:03 2025] Mem-Info:
[Fri Jan 31 09:06:03 2025] active_anon:5511399 inactive_anon:368895 isolated_anon:0
active_file:687 inactive_file:1435 isolated_file:0
unevictable:38237 dirty:0 writeback:0
slab_reclaimable:68668 slab_unreclaimable:39009
mapped:31576 shmem:32076 pagetables:16000
sec_pagetables:0 bounce:0
kernel_misc_reclaimable:0
free:38988 free_pcp:2629 free_cma:0
[Fri Jan 31 09:06:03 2025] Node 0 active_anon:22045596kB inactive_anon:1475580kB active_file:2968kB inactive_file:5300kB unevictable:152948kB isolated(anon):0kB isolated(file):0kB mapped:126304kB dirty:0kB writeback:0kB shmem:128304kB shmem_thp:4096kB shmem_pmdmapped:0kB anon_thp:241664kB writeback_tmp:0kB kernel_stack:5424kB pagetables:64000kB sec_pagetables:0kB all_unreclaimable? no
[Fri Jan 31 09:06:03 2025] Node 0 DMA free:11264kB boost:0kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[Fri Jan 31 09:06:03 2025] lowmem_reserve[]: 0 2307 23814 23814 23814
[Fri Jan 31 09:06:03 2025] Node 0 DMA32 free:91544kB boost:0kB min:6540kB low:8900kB high:11260kB reserved_highatomic:0KB active_anon:2076332kB inactive_anon:223036kB active_file:140kB inactive_file:1232kB unevictable:0kB writepending:0kB present:2494164kB managed:2428212kB mlocked:0kB bounce:0kB free_pcp:1716kB local_pcp:0kB free_cma:0kB
[Fri Jan 31 09:06:03 2025] lowmem_reserve[]: 0 0 21506 21506 21506
[Fri Jan 31 09:06:03 2025] Node 0 Normal free:53144kB boost:0kB min:60996kB low:83016kB high:105036kB reserved_highatomic:0KB active_anon:19969068kB inactive_anon:1252740kB active_file:792kB inactive_file:5804kB unevictable:152948kB writepending:0kB present:22519808kB managed:22030112kB mlocked:142500kB bounce:0kB free_pcp:8800kB local_pcp:4kB free_cma:0kB
[Fri Jan 31 09:06:03 2025] lowmem_reserve[]: 0 0 0 0 0
[Fri Jan 31 09:06:03 2025] Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 2*4096kB (M) = 11264kB
[Fri Jan 31 09:06:03 2025] Node 0 DMA32: 518*4kB (UE) 742*8kB (UME) 659*16kB (UME) 474*32kB (UE) 247*64kB (UME) 127*128kB (UME) 55*256kB (UME) 14*512kB (UME) 4*1024kB (UME) 0*2048kB 0*4096kB = 91128kB
[Fri Jan 31 09:06:03 2025] Node 0 Normal: 3233*4kB (UME) 748*8kB (UME) 1653*16kB (UME) 219*32kB (UME) 3*64kB (U) 1*128kB (U) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 52692kB
[Fri Jan 31 09:06:03 2025] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[Fri Jan 31 09:06:03 2025] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[Fri Jan 31 09:06:03 2025] 37017 total pagecache pages
[Fri Jan 31 09:06:03 2025] 0 pages in swap cache
[Fri Jan 31 09:06:03 2025] Free swap = 0kB
[Fri Jan 31 09:06:03 2025] Total swap = 0kB
[Fri Jan 31 09:06:03 2025] 6257491 pages RAM
[Fri Jan 31 09:06:03 2025] 0 pages HighMem/MovableOnly
[Fri Jan 31 09:06:03 2025] 139070 pages reserved
[Fri Jan 31 09:06:03 2025] 0 pages hwpoisoned
[Fri Jan 31 09:06:03 2025] Tasks state (memory values in pages):
[Fri Jan 31 09:06:03 2025] [ pid ] uid tgid total_vm rss rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
[Fri Jan 31 09:06:03 2025] [ 447] 0 447 72279 6816 4640 2176 0 114688 0 -1000 multipathd
[Fri Jan 31 09:06:03 2025] [ 469] 0 469 7243 963 419 544 0 81920 0 -1000 systemd-udevd
[Fri Jan 31 09:06:03 2025] [ 847] 102 847 5386 1248 576 672 0 86016 0 0 systemd-resolve
[Fri Jan 31 09:06:03 2025] [ 848] 104 848 22735 992 224 768 0 94208 0 0 systemd-timesyn
[Fri Jan 31 09:06:03 2025] [ 889] 101 889 4718 928 256 672 0 81920 0 0 systemd-network
[Fri Jan 31 09:06:03 2025] [ 948] 103 948 2439 960 128 832 0 61440 0 -900 dbus-daemon
[Fri Jan 31 09:06:03 2025] [ 951] 0 951 272182 6080 5088 992 0 1376256 0 0 fail2ban-server
[Fri Jan 31 09:06:03 2025] [ 966] 0 966 4520 928 224 704 0 81920 0 0 systemd-logind
[Fri Jan 31 09:06:03 2025] [ 984] 0 984 673 352 0 352 0 49152 0 0 atopacctd
[Fri Jan 31 09:06:03 2025] [ 1003] 107 1003 55627 1181 349 832 0 94208 0 0 rsyslogd
[Fri Jan 31 09:06:03 2025] [ 1038] 0 1038 1706 512 32 480 0 57344 0 0 cron
[Fri Jan 31 09:06:03 2025] [ 1039] 65534 1039 3546 1120 320 800 0 73728 0 0 openvpn
[Fri Jan 31 09:06:03 2025] [ 1047] 0 1047 27413 3488 2336 1152 0 118784 0 0 unattended-upgr
[Fri Jan 31 09:06:03 2025] [ 1099] 0 1099 1526 480 32 448 0 53248 0 0 agetty
[Fri Jan 31 09:06:03 2025] [ 1140] 0 1140 19526 1504 672 832 0 151552 0 0 nmbd
[Fri Jan 31 09:06:03 2025] [ 1147] 0 1147 22580 2432 800 896 736 176128 0 0 smbd
[Fri Jan 31 09:06:03 2025] [ 1161] 0 1161 21957 1075 787 288 0 143360 0 0 smbd-notifyd
[Fri Jan 31 09:06:03 2025] [ 1162] 0 1162 21959 1107 787 320 0 139264 0 0 smbd-cleanupd
[Fri Jan 31 09:06:03 2025] [ 2014] 0 2014 3005 800 256 544 0 65536 0 -1000 sshd
[Fri Jan 31 09:06:03 2025] [ 22436] 998 22436 77040 1280 192 1088 0 114688 0 0 polkitd
[Fri Jan 31 09:06:03 2025] [ 276974] 0 276974 23068 1478 806 320 352 159744 0 0 smbd-scavenger
[Fri Jan 31 09:06:03 2025] [ 369524] 0 369524 32081 3003 1166 607 1230 200704 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 369548] 0 369548 404364 3086 3086 0 0 339968 0 -900 snapd
[Fri Jan 31 09:06:03 2025] [ 369628] 0 369628 29008 1101 256 845 0 241664 0 -250 systemd-journal
[Fri Jan 31 09:06:03 2025] [ 371760] 0 371760 23485 2098 838 651 609 180224 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 610543] 0 610543 3363 3273 1801 1472 0 65536 0 -999 atop
[Fri Jan 31 09:06:03 2025] [ 637570] 113 637570 958850 136799 136063 736 0 1777664 0 0 mysqld
[Fri Jan 31 09:06:03 2025] [ 638384] 0 638384 63300 3042 1698 672 672 172032 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 638393] 33 638393 63836 4060 2332 704 1024 200704 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 638394] 33 638394 63787 4033 2241 736 1056 204800 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 638395] 33 638395 63835 4099 2307 736 1056 200704 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 638396] 33 638396 63835 4080 2352 672 1056 204800 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 638397] 33 638397 63834 4202 2442 736 1024 204800 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 638425] 33 638425 9986 3588 3140 448 0 122880 0 0 zmdc.pl
[Fri Jan 31 09:06:03 2025] [ 638452] 0 638452 105568 1280 192 1088 0 163840 0 0 thermald
[Fri Jan 31 09:06:03 2025] [ 638462] 33 638462 275480 144360 141256 1088 2016 1605632 0 0 zmc
[Fri Jan 31 09:06:03 2025] [ 638469] 33 638469 11437 5216 4576 640 0 139264 0 0 /usr/bin/zmcont
[Fri Jan 31 09:06:03 2025] [ 638473] 33 638473 222580 71441 64401 992 6048 1081344 0 0 zmc
[Fri Jan 31 09:06:03 2025] [ 638479] 33 638479 342521 214777 207673 1056 6048 2080768 0 0 zmc
[Fri Jan 31 09:06:03 2025] [ 638489] 33 638489 278195 148513 141409 1056 6048 1572864 0 0 zmc
[Fri Jan 31 09:06:03 2025] [ 638494] 33 638494 14540 8256 7616 640 0 172032 0 0 zmfilter.pl
[Fri Jan 31 09:06:03 2025] [ 638499] 33 638499 15815 9504 8864 640 0 167936 0 0 zmfilter.pl
[Fri Jan 31 09:06:03 2025] [ 638505] 33 638505 15810 9504 8832 672 0 167936 0 0 zmfilter.pl
[Fri Jan 31 09:06:03 2025] [ 638511] 33 638511 14348 8000 7392 608 0 155648 0 0 zmfilter.pl
[Fri Jan 31 09:06:03 2025] [ 638518] 33 638518 13229 7104 6368 736 0 147456 0 0 zmaudit.pl
[Fri Jan 31 09:06:03 2025] [ 638525] 33 638525 9865 3584 2976 608 0 118784 0 0 zmwatch.pl
[Fri Jan 31 09:06:03 2025] [ 638531] 33 638531 13805 7495 6727 768 0 147456 0 0 zmtelemetry.pl
[Fri Jan 31 09:06:03 2025] [ 638537] 33 638537 9795 3488 2912 576 0 126976 0 0 zmstats.pl
[Fri Jan 31 09:06:03 2025] [ 638864] 33 638864 63824 4011 2280 675 1056 204800 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 639273] 33 639273 63837 3887 2127 736 1024 200704 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 639274] 33 639274 63825 4044 2284 736 1024 196608 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 639275] 33 639275 63836 3854 2158 704 992 196608 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 639276] 33 639276 63824 4054 2326 672 1056 204800 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 644568] 33 644568 5374235 5100872 5093768 1056 6048 41791488 0 0 zmc
[Fri Jan 31 09:06:03 2025] [ 648289] 33 648289 63489 2240 1728 512 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648293] 33 648293 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648294] 33 648294 63489 2272 1728 544 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648295] 33 648295 63489 2176 1728 448 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648297] 33 648297 63489 2176 1728 448 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648308] 0 648308 22590 1382 806 544 32 163840 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648310] 0 648310 22590 1318 806 512 0 163840 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648321] 0 648321 2384 739 131 608 0 61440 0 0 cron
[Fri Jan 31 09:06:03 2025] [ 648331] 0 648331 2350 643 131 512 0 61440 0 0 cron
[Fri Jan 31 09:06:03 2025] [ 648333] 33 648333 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648334] 33 648334 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648335] 33 648335 63489 2272 1728 544 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648336] 33 648336 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648337] 33 648337 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648338] 33 648338 63489 2240 1728 512 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648339] 33 648339 63489 2272 1728 544 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648343] 0 648343 2350 707 131 576 0 61440 0 0 cron
[Fri Jan 31 09:06:03 2025] [ 648344] 33 648344 63489 2176 1728 448 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648346] 33 648346 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648347] 33 648347 63489 2176 1728 448 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648348] 33 648348 63489 2176 1728 448 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648352] 33 648352 63489 2240 1728 512 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648357] 33 648357 63489 2176 1728 448 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648362] 33 648362 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648363] 33 648363 63484 2272 1728 544 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648364] 33 648364 63489 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648368] 0 648368 22580 1158 806 352 0 143360 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648369] 33 648369 63484 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648371] 0 648371 22580 1158 806 352 0 143360 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648373] 33 648373 63484 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648374] 33 648374 63484 2240 1728 512 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648375] 33 648375 63484 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648376] 33 648376 63484 2208 1728 480 0 163840 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648380] 0 648380 63480 2208 1728 480 0 155648 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648381] 0 648381 22580 1158 806 352 0 143360 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648382] 0 648382 22580 1126 806 320 0 143360 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648383] 0 648383 3005 603 283 320 0 57344 0 0 sshd
[Fri Jan 31 09:06:03 2025] [ 648384] 0 648384 22580 1126 806 320 0 143360 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] [ 648385] 0 648385 1791 451 67 384 0 57344 0 0 cron
[Fri Jan 31 09:06:03 2025] [ 648386] 0 648386 3005 539 283 256 0 57344 0 0 sshd
[Fri Jan 31 09:06:03 2025] [ 648387] 0 648387 63420 2144 1728 416 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648388] 0 648388 63420 2144 1728 416 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648389] 0 648389 63420 2208 1728 480 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648390] 0 648390 63420 2176 1728 448 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648391] 0 648391 63420 2176 1728 448 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648392] 0 648392 63420 2176 1728 448 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648393] 0 648393 63420 2176 1728 448 0 151552 0 0 apache2
[Fri Jan 31 09:06:03 2025] [ 648394] 0 648394 22580 1094 806 288 0 143360 0 0 smbd[192.168.1.
[Fri Jan 31 09:06:03 2025] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=zoneminder.service,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=644568,uid=33
[Fri Jan 31 09:06:03 2025] Out of memory: Killed process 644568 (zmc) total-vm:21496940kB, anon-rss:20375072kB, file-rss:4224kB, shmem-rss:24192kB, UID:33 pgtables:40812kB oom_score_adj:0
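For anyone hitting the same thing: the descriptor leak can be watched directly from /proc before the 1024 soft limit is reached. A minimal sketch (it assumes the `zmc` process name and the `renderD128` node shown in the lsof output above; adjust to your setup):

```shell
# Count open file descriptors held by each zmc process and compare
# against its soft limit (the leak shows up as a steadily growing count).
for pid in $(pgrep -x zmc); do
    fds=$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)
    limit=$(awk '/Max open files/ {print $4}' "/proc/$pid/limits")
    echo "zmc pid $pid: $fds open fds (soft limit: $limit)"
done

# How many of those descriptors point at the VAAPI render node:
for pid in $(pgrep -x zmc); do
    count=$(ls -l "/proc/$pid/fd" 2>/dev/null | grep -c renderD128)
    echo "zmc pid $pid: $count fds on /dev/dri/renderD128"
done
```

As a stopgap you can raise the limit with a systemd drop-in for zoneminder.service (`LimitNOFILE=65536` under `[Service]`, then daemon-reload and restart), but that only postpones exhaustion while the descriptors keep leaking.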
Re: Zoneminder not releasing open files when encoding with VAAPI
I can confirm the 1.37 release has been working smoothly for 10 days now, without the issues described in this thread. Lots of new features too; my favorite is decoding monitors only on demand. Thank you.