Hi,
My ZM installation is running dreadfully slow; it's unusable (logging in takes 10 minutes).
Logging in via SSH takes around 2 minutes, and typing is a nightmare.
Running the command service zm stop takes over 10 minutes.
I run the Fedora Core 3 distro, v1.21.3.
I have 9 Axis 206 and 1 Axis 205 IP cameras, all on modect at 4 fps and 640x480 resolution. I'm using JPEG, not MPEG, streaming.
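For scale, here is a rough back-of-envelope sketch of the pixel throughput that camera setup asks the analysis daemons to handle (hypothetical helper name; assumes ZM compares uncompressed 24-bit frames after decoding the JPEG stream):

```python
# Aggregate raw-frame throughput for 10 cameras at 4 fps, 640x480.
# Assumption: 3 bytes per pixel (24-bit colour), no compression, since
# motion analysis works on decoded frames rather than the JPEG stream.

def raw_frame_bytes(width, height, bytes_per_pixel=3):
    return width * height * bytes_per_pixel

cameras = 10
fps = 4
per_frame = raw_frame_bytes(640, 480)    # bytes in one decoded frame
per_second = cameras * fps * per_frame   # bytes/s across all monitors

print(per_frame)    # 921600
print(per_second)   # 36864000 (~35 MB of pixel data to compare, every second)
```

That is a substantial, sustained workload even before VM overhead is counted.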
Shared memory (shmall and shmmax) is currently set to 2 GB (I have tried various sizes from 100 MB up).
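One thing worth double-checking when tuning those two sysctls: on Linux, kernel.shmmax is specified in bytes but kernel.shmall is specified in pages (normally 4096 bytes each), so giving both the same raw number does not mean the same thing. A minimal sketch, with hypothetical helper names:

```python
# Sanity-check the kernel shared-memory limits ZoneMinder's ring buffers
# run up against. kernel.shmmax is in BYTES; kernel.shmall is in PAGES.

def shmall_pages_to_bytes(pages, page_size=4096):
    """Convert a kernel.shmall value (counted in pages) to bytes."""
    return pages * page_size

def read_kernel_sysctl(name):
    """Read an integer sysctl from /proc (Linux only)."""
    with open("/proc/sys/kernel/" + name) as f:
        return int(f.read().split()[0])

# 2 GB expressed as shmmax (bytes) and as shmall (4 KB pages):
shmmax_for_2g = 2 * 1024 ** 3          # 2147483648 bytes
shmall_for_2g = shmmax_for_2g // 4096  # 524288 pages

print(shmall_pages_to_bytes(shmall_for_2g))  # 2147483648
```

If shmall was literally set to the same figure as shmmax, it is far larger than needed, which is harmless; the reverse mistake (shmall too small) is what causes allocation failures.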
The hardware is interesting, as it isn't real hardware but a VMware virtual machine running on a dual Xeon 3.2 GHz with 10 GB RAM. The hard drive is a 300 GB volume on an EMC SAN with Fibre Channel RAID 5 arrays (fast).
There are only 2 other VMs on the host server, and combined they are only using 15% of the host server's resources.
I have tried various processor counts (I can choose 1, 2 or 4) but can't seem to get FC3 to recognise the extra CPUs, and I still get the buffer overrun error.
The VMware CPU monitor is running too high (90%+). top doesn't register correctly, as the VM allocates processing power as needed, but it shows 0% idle.
For what it's worth, the top output is below (taken about 30 seconds after a service zm start command).
top - 15:31:28 up 3:22, 1 user, load average: 10.94, 5.93, 2.50
Tasks: 89 total, 8 running, 81 sleeping, 0 stopped, 0 zombie
Cpu(s): 25.6% us, 73.2% sy, 0.0% ni, 0.0% id, 0.0% wa, 1.2% hi, 0.0% si
Mem: 3894428k total, 2442496k used, 1451932k free, 57264k buffers
Swap: 2031608k total, 0k used, 2031608k free, 2175012k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8970 apache 16 0 172m 76m 71m R 6.2 2.0 0:09.57 zmc
9009 apache 16 0 172m 76m 71m R 6.2 2.0 0:06.47 zmc
8981 apache 16 0 172m 75m 71m R 5.9 2.0 0:12.81 zmc
8906 apache 16 0 172m 76m 71m S 5.6 2.0 0:14.31 zmc
8957 apache 16 0 172m 76m 71m R 5.6 2.0 0:16.70 zmc
8993 apache 16 0 172m 76m 71m R 5.6 2.0 0:11.80 zmc
8915 apache 16 0 172m 76m 71m S 5.3 2.0 0:18.39 zmc
8925 apache 15 0 172m 76m 71m S 5.3 2.0 0:16.30 zmc
8935 apache 15 0 172m 76m 71m S 4.6 2.0 0:16.95 zmc
8945 apache 15 0 172m 76m 71m S 4.6 2.0 0:16.47 zmc
9032 apache 15 0 171m 75m 71m S 3.7 2.0 0:04.03 zma
9000 apache 15 0 171m 75m 71m S 3.4 2.0 0:06.05 zma
9055 apache 15 0 171m 75m 71m S 3.4 2.0 0:01.71 zma
8969 apache 15 0 171m 75m 71m S 3.1 2.0 0:04.61 zma
9003 apache 15 0 171m 75m 71m S 3.1 2.0 0:06.22 zma
8922 apache 15 0 171m 75m 71m S 2.8 2.0 0:11.08 zma
8951 apache 15 0 171m 75m 71m S 2.8 2.0 0:09.48 zma
8954 apache 15 0 171m 75m 71m S 2.8 2.0 0:08.19 zma
8967 apache 15 0 171m 75m 71m S 2.8 2.0 0:04.67 zma
9084 apache 18 0 4292 2484 1604 R 2.8 0.1 0:00.09 zmdc.pl
8911 apache 15 0 171m 75m 71m S 2.5 2.0 0:07.97 zma
2838 mysql 16 0 35680 9704 2248 S 0.9 0.2 0:22.26 mysqld
9038 apache 16 0 6980 5100 2092 S 0.6 0.1 0:00.20 zmwatch.pl
115 root 15 0 0 0 0 S 0.3 0.0 0:01.39 pdflush
I have tried various settings in the Monitor Buffers window; at the moment all cameras have the following settings:
Image Buffer Size (frames) 80
Warmup Frames 25
Pre Event Image Buffer 2
Post Event Image Buffer 2
Alarm Frame count 1
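Those buffer settings map directly onto shared memory use: each monitor's ring buffer is roughly frames x width x height x bytes-per-pixel. A sketch with a hypothetical helper (it ignores ZM's small per-frame header overhead and assumes 24-bit colour):

```python
# Approximate shared-memory segment size for one ZoneMinder monitor's
# image ring buffer, given the settings listed above.

def ring_buffer_bytes(frames, width, height, bytes_per_pixel=3):
    return frames * width * height * bytes_per_pixel

one_monitor = ring_buffer_bytes(80, 640, 480)  # image buffer of 80 frames

print(one_monitor)        # 73728000 (~70 MB, matching the ~71m SHR column in top)
print(10 * one_monitor)   # 737280000 (~700 MB for all ten cameras)
```

So the 2 GB shared-memory ceiling is not the bottleneck here; the segments fit comfortably.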
The load average when logged in as admin is often over 20.00.
Disk usage is at 6%.
A sample of /var/log/messages:
Aug 7 11:17:43 zm zmc_m6[4769]: WAR [Buffer overrun at index 16 ]
Aug 7 11:17:43 zm zma_m4[4782]: INF [A3_Lab: 3233 - Gone back into alarm state]
Aug 7 11:17:47 zm zma_m7[4785]: INF [B1_Lab: 683309 - Gone back into alarm state]
Aug 7 11:17:47 zm zmc_m4[2561]: WAR [Buffer overrun at index 38 ]
Aug 7 11:17:48 zm zmc_m11[4807]: WAR [Buffer overrun at index 20 ]
Aug 7 11:17:48 zm zma_m6[4773]: INF [B_Block_North_Side: 670685 - Gone into alert state]
Aug 7 11:17:48 zm zmc_m2[2091]: WAR [Buffer overrun at index 20 ]
Aug 7 11:17:51 zm zma_m11[4811]: INF [C_Block_North_Side: 660121 - Gone back into alarm state]
Aug 7 11:17:51 zm zma_m3[4743]: INF [I12_Lab: 670413 - Gone into alarm state]
Aug 7 11:17:52 zm zmc_m8[4789]: WAR [Buffer overrun at index 21 ]
Aug 7 11:17:54 zm zma_m4[4782]: INF [A3_Lab: 3236 - Gone into alert state]
Aug 7 11:17:54 zm zmc_m11[4807]: INF [C_Block_North_Side: 669000 - Capturing at 2.77 fps]
Aug 7 11:17:55 zm zmc_m9[4797]: WAR [Buffer overrun at index 34 ]
Aug 7 11:17:56 zm zmc_m7[4777]: WAR [Buffer overrun at index 6 ]
Aug 7 11:17:57 zm zmc_m6[4769]: WAR [Buffer overrun at index 17 ]
Aug 7 11:17:57 zm zma_m11[4811]: INF [C_Block_North_Side: 660125 - Gone into alert state]
Aug 7 11:18:01 zm zma_m3[4743]: INF [I12_Lab: 670417 - Gone into alert state]
Aug 7 11:18:01 zm zmc_m2[2091]: WAR [Buffer overrun at index 20 ]
The CPU overhead for a VM is no more than 5%, so in theory 1 CPU running at 3.2 GHz on the host should look like about a 3 GHz CPU in the VM. The VM's RAM is set at 5 GB (I know it's overkill, but I've been experimenting).
Reducing the resolution is not an option, as we need to recognise individuals at a 15-metre distance.
Can I use a reduced resolution for triggering and recording events but a higher resolution for the stored images?
I hope this is enough info and that you can help.
Regards, Brad
buffer overrun at index xyz
I have tested Mandrake in a VM on my FX53 and had awful performance, and that was before ZM, so I would expect it to be worse. I would try to run ZM on a dedicated machine; looking at the top output from the VM, it's plainly overloaded. You could try running at 0.5 fps, just to see what that does. It should work, but I assume the VM is having to get resources through emulation, and that will be using resources.
James Wilson
Disclaimer: The above is pure theory and may work on a good day with the wind behind it. etc etc.
http://www.securitywarehouse.co.uk
Basically, you're asking too much of the hardware. Your box sounds capable of quite a lot, but comparing 40 frames per second at 640x480 takes a big old chunk of CPU. So your options are: reduce what you're asking, or provide more hardware to do it.
You can't shift resolutions on motion detections. TBH, the overhead in setting up an alternate stream could easily take a couple of seconds or more, by which time Mr Chav has moved out of sight. Even ramping up the fps on detection sometimes has the opposite effect.
So, onto the constructive suggestions...
1. Drop the colour. (Add &color=0 to the end of the Axis URL and change the colour depth to greyscale.) This alone can cut the load by a factor of two or three in my experience, without significantly affecting the "recognisability" of faces.
2. Use straight hardware. Obviously you must have a good reason to be using multiple VMs on the same machine, but sharing it is a compromise, and with a thirsty tool like ZM it'll hurt everyone. As James says, the overhead in a VM is very noticeable; I'd be very surprised if 5% is a real-world figure.
3. Split the cameras onto another server. Unfortunately ZM doesn't scale across servers /that/ well (in my experience; differences of opinion welcome!) and I found it easiest to move half the cameras onto another server. The only downside is that it's two websites to check instead of one.
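The colour suggestion has a simple basis: greyscale frames carry one byte per pixel instead of three, so every capture copy and every motion comparison touches a third of the data. A rough sketch of the per-frame saving (hypothetical helper, same frame-size assumptions as the camera setup above):

```python
# Per-frame data comparison: 24-bit RGB vs 8-bit greyscale at 640x480.

def frame_bytes(width, height, greyscale=False):
    bytes_per_pixel = 1 if greyscale else 3  # 8-bit grey vs 24-bit RGB
    return width * height * bytes_per_pixel

colour = frame_bytes(640, 480)                 # 921600 bytes
grey = frame_bytes(640, 480, greyscale=True)   # 307200 bytes

print(colour // grey)   # 3
```

A 3x reduction in bytes moved and compared will not map one-for-one onto CPU load, but it is consistent with the sizeable drop reported above.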