Bottlenecking...need for multiple ZM servers?
Posted: Fri Sep 11, 2009 4:05 am
- My setup -
Hardware:
Quad Core Xeon with HT @ 2.8GHz
12GB DDR3 ECC RAM
8x 1TB HDDs in LSI SAS RAID10
Server Software:
Windows Server 2003 x64
VMware Server 2 for Windows
Zoneminder setup:
Debian (running inside a VM allocated two cores and 2 GB RAM)
Zoneminder
Cameras:
16x 320x240 IP cameras at 3 fps
1x 640x480 IP camera at 10 fps
Several other VMs run on the server, but they are very light on both CPU and Disk activity.
My issue is that ZoneMinder, operating in mocord mode on all cameras, starts to choke when multiple cameras trigger alerts and begin saving imagery at once. Remote viewing (via zmviewer for Windows) freezes for two to three seconds, and on reviewing the saved JPEGs, images from the frozen period don't exist at all; they are skipped entirely.
During all of this, the very powerful server's Windows 2003 OS shows only about 25% CPU utilization, and the RAID10 array has been benchmarked at sustained writes of over 100 MB/s for extended periods.
My theory is that ZM is becoming CPU-bound inside the Debian VM. I can only allocate two cores per VM, though the physical server exposes eight (4 cores x 2 threads, thanks to Hyper-Threading). This issue did not occur until I added three new cameras, taking me from 14 to 17, and the server's overall CPU utilization stayed at ~25% despite the new cameras. I'm starting to think that adds up: 2 of 8 cores is exactly 25%, so I may be maxing out the VM's two cores.
The "top" command in Debian shows the system at about 40% idle when quiet (all cameras active but none alarming/recording). I'm trying to gather data on how high it spikes during multiple simultaneous alarms, but that's hard in the middle of the night (often the only time I have to study it), when no cars drive by!
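In case anyone wants to try the same thing: rather than staring at top at 3 AM, a rough sketch of a logger that samples /proc/stat once per second and records overall idle %, so spikes can be reviewed in the morning. The log file name is my own choice, and in practice you'd change the loop to run indefinitely.

```shell
#!/bin/sh
# Sketch: log overall CPU idle % once per second by diffing /proc/stat.
# The first line of /proc/stat is: cpu user nice system idle iowait ...
# LOG name is arbitrary; swap the 5-iteration loop for 'while true' to
# leave it running overnight.
LOG=cpu_idle.log
prev_idle=0; prev_total=0
i=0
while [ $i -lt 5 ]; do
    set -- $(head -n1 /proc/stat)
    shift                          # drop the leading "cpu" label
    idle=$4                        # 4th field after the label is idle jiffies
    total=0
    for v in "$@"; do total=$((total + v)); done
    if [ $prev_total -ne 0 ]; then
        didle=$((idle - prev_idle))
        dtotal=$((total - prev_total))
        pct=$((100 * didle / dtotal))
        echo "$(date '+%F %T') idle=${pct}%" >> "$LOG"
    fi
    prev_idle=$idle; prev_total=$total
    i=$((i + 1))
    sleep 1
done
```

Tail or grep the log afterwards to see exactly when idle hit 0% and for how long.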
If this is indeed the issue, my idea for a fix would be to create a second Debian VM with a second install of ZoneMinder and pin it to two different cores. That way I don't have 17 camera processes fighting over two cores. Does this sound reasonable?
I know it's a complex setup, but I'll explain more as needed; any input is much appreciated. We plan to add up to 20 more cameras soon. FYI, running outside of VMware isn't really an option right now: the server was purchased for other purposes and only has spare CPU cycles for this system because it primarily runs many low-power VMs for small-office use.
EDIT: Confirmed. Watching top in the Debian/ZoneMinder VM during multiple simultaneous alarms shows CPU idle dropping to 0% for the full several seconds the system freezes, while the Windows CPU indicator sits at 75% idle the whole time. I'm maxing out my two virtual cores in the VM. Does anyone have advice on centrally managing (particularly remotely viewing) multiple ZoneMinder servers at once? Sounds like I'm going to be starting a virtual ZM farm.
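One stopgap I'm considering for viewing the future "farm" centrally: a single HTML wall page whose img tags pull the MJPEG streams straight from each server's zms CGI. A sketch of a generator script below; the hostnames, monitor IDs, and the default /cgi-bin/nph-zms path are assumptions about my own setup and would need adjusting for yours.

```shell
#!/bin/sh
# Sketch: emit a wall.html that embeds live streams from several
# ZoneMinder servers at once. The server:monitor pairs below are
# hypothetical placeholders for the two planned Debian VMs.
OUT=wall.html
{
  echo "<html><body>"
  for cam in zm-vm1:1 zm-vm1:2 zm-vm2:1 zm-vm2:2; do
    host=${cam%%:*}                # text before the colon = server name
    mon=${cam##*:}                 # text after the colon  = monitor ID
    echo "<img src=\"http://$host/cgi-bin/nph-zms?mode=jpeg&monitor=$mon&scale=50\" width=320>"
  done
  echo "</body></html>"
} > "$OUT"
```

Open wall.html in any browser and every server's cameras appear on one page, no per-server login to the web console needed for casual viewing.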