- FC6 with a custom 2.6.21.5 kernel built for my AMD X2
- 3GB (originally 1GB) RAM
- over 700GB of disk with a JFS filesystem and a logical volume manager
- ZM 1.22.3
- 3 Bosch cameras via a ProVideo PV150, a 4-port Bt878 capture card
- Camera settings: 384x240 at FPS=8, alarm FPS=20, buffer=40 frames, and 10/20/20 warmup, pre- and post-event frames
- Hauppauge PVR-350 (MPEG-2 video in/out capture card)
The system is also running MythTV (SVN version with the MythTV Zoneminder plugin - nice piece of work!). Between ZM and MythTV I have some significant (but not huge) video capture and streaming requirements. When I was running ZM on FC4 with double the buffer sizes and FPS it was happy, but other reasons (video capture drivers for the card I use with MythTV) forced me to update to FC6.
Random system hangs and slowdowns were occurring every 1-3 days. /var/log/messages was telling me to decrease the buffers or slow down capture, which I did, to levels lower than what I had been running on FC4. It was still hanging, so I knew it was something else. I followed the FAQ info on setting the shared memory, to no avail.
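For a sense of scale, the shared memory each monitor needs works out roughly like this with my camera settings (back-of-the-envelope figures of my own, assuming 24-bit colour capture; the ZM FAQ has the exact formula):

Code: Select all
echo $((384 * 240 * 3 * 40))       # ~11 MB per camera (width x height x 3 bytes x 40 ring buffer frames)
echo $((384 * 240 * 3 * 40 * 3))   # ~32 MB for all 3 cameras, before any per-frame overhead

That is already right up against the 32 MB SHMMAX default mentioned in the comments of my sysctl.conf below.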
REMEMBER to restart httpd and ZM after changes to the shared memory settings!
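On my FC6 box that means the usual SysV init scripts; the ZoneMinder script name below is an assumption - it is zoneminder for the RPM packages and zm on some source installs, so use whatever yours is called:

Code: Select all
# restart both after changing the shared memory settings
# (script name assumed - may be 'zoneminder' or 'zm' depending on the install)
service httpd restart
service zoneminder restart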
I had tried many, many things but was stumped. I even grabbed an extra 2GB of memory to help out, with no luck, UNTIL...
I came across an article about shared memory settings when running DB2 ( http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.uprun.doc/doc/t0008238.htm ). It made some statements that conflicted with those in the FAQ, so I gave them a try. The essential difference is that kernel.shmall is NOT a direct memory size in KB but a count of memory pages: it is the maximum total pages of shared memory.
For example: if you want to set the maximum to 8GB, you have to convert it to the number of pages (or segments), using a page size of 4096.
Code: Select all
# 8000 x 1024 x 1024 / 4096 pages
kernel.shmall=2048000
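If you want to double-check the page size and the arithmetic on your own box, something like this does it (just plain shell, nothing ZM-specific):

Code: Select all
getconf PAGE_SIZE                      # 4096 on this box
echo $((8000 * 1024 * 1024 / 4096))    # = 2048000 pages for the ~8GB target above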
shmmax is the maximum amount of shared memory that can be allocated in one request. This is an actual memory size in bytes (as opposed to pages), set here to roughly 4GB:
Code: Select all
kernel.shmmax = 4194304000
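You can also peek at (or test) the live value through /proc before making it permanent in sysctl.conf; the echo below is a temporary change as root and is lost on reboot:

Code: Select all
cat /proc/sys/kernel/shmmax                  # current max single-segment size in bytes
echo 4194304000 > /proc/sys/kernel/shmmax    # try the new value without a reboot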
Here is my /etc/sysctl.conf
Code: Select all
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# The default shared memory limit for SHMMAX is 32 MB and for SHMALL is 2 MB, but it can be changed.
# For example, to allow 128 MB:
# SEE http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.uprun.doc/doc/t0008238.htm
# this is the max pages set to 8GB divided by the page size of 4096 = 8000x1024x1024/4096
kernel.shmall = 2048000
# this is an actual mem size in bytes (as opposed to pages), set to 4GB
kernel.shmmax = 4194304000
# reload these with a sysctl -p
# list the mem settings with ipcs -l
Code: Select all
[root@dvr tmp]# ipcs -l
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 4096000
max total shared memory (kbytes) = 8192000
min seg size (bytes) = 1
------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767
------ Messages: Limits --------
max queues system wide = 16
max size of message (bytes) = 8192
default max size of queue (bytes) = 16384
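As a sanity check, those ipcs numbers line up with the sysctl values above (my arithmetic, not something the tool reports):

Code: Select all
echo $((2048000 * 4096 / 1024))   # 8192000 kB = max total shared memory (shmall pages x 4096-byte page size)
echo $((4194304000 / 1024))       # 4096000 kB = max seg size (shmmax in bytes)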
My system has not hung since. So far, a full 7 days and not a hint of trouble.