ZM is writing faster than its ability to clean

Forum for questions and support relating to the 1.28.x releases only.
Locked
abi
Posts: 61
Joined: Fri Oct 23, 2015 11:25 am

ZM is writing faster than its ability to clean

Post by abi »

My 8 cameras in Mocord mode are writing faster than the automatic rm can delete. :shock:

1. Analysis FPS is 3.
2. My fstab entry:

Code:

/dev/vdb				/zoneminder			ext2	defaults,noatime,nodiratime,commit=120		0 2
3. Approximately 400 GB/day is written (a rather huge amount of data; the disk on my test system is 1 TB).
4. The server is running as a FreeBSD/bhyve guest.

Are there any tips for configuring Linux for better I/O, or should I choose another filesystem?
bbunge
Posts: 2975
Joined: Mon Mar 26, 2012 11:40 am
Location: Pennsylvania

Re: ZM is writing faster than its ability to clean

Post by bbunge »

How much memory do you have? Try moving PATH_SWAP to tmpfs, along with anything else that uses /tmp on the hard drive. Many distros give tmpfs half of the RAM, which can force the system into swap and slow things down. If you can, reduce the size of the tmpfs to leave your system more RAM for other processes.
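As a sketch of the advice above: cap tmpfs in /etc/fstab, then point ZM's PATH_SWAP (Options -> Paths in the ZM console) at a directory under it. The 512M size and the /dev/shm mount point are assumptions; adjust them to your RAM and distro layout.

```shell
# /etc/fstab fragment: cap tmpfs so it cannot claim half of RAM
# (size=512m is an assumed value; tune it to your system)
#
#   tmpfs   /dev/shm   tmpfs   defaults,noatime,size=512m   0 0

# Then create a swap-path directory on the tmpfs for ZM to use,
# and set PATH_SWAP to it in the ZM console:
sudo mkdir -p /dev/shm/zm
sudo chown apache:apache /dev/shm/zm   # web-server user, as seen in the iotop output
```

The directory must be recreated after every reboot (tmpfs is volatile), so distro packages often handle this with a boot-time script.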

Depending on the version of ZM, your MySQL could be pegging the processor. There is a fix for the database cited in this forum.

bb
abi
Posts: 61
Joined: Fri Oct 23, 2015 11:25 am

Re: ZM is writing faster than its ability to clean

Post by abi »

I made some tweaks, but today PurgeWhenFull purged all my data again. :(
I have 3 GB of memory; 1.4 GB is used by ZM, and the rest is cache. Maybe ext2 is not well suited to holding so many files?

Code:

[abishai@zoneminder ~]$ sudo iotop -b |grep rm
22869 be/4 apache      0.00 B/s    0.00 B/s  0.00 %  0.00 % rm -rf 6/15/11/22/00/40/00
22869 be/4 apache     60.82 K/s   98.84 K/s  0.00 % 94.30 % rm -rf 6/15/11/22/00/40/00
22869 be/4 apache    152.86 K/s  217.82 K/s  0.00 % 99.99 % rm -rf 6/15/11/22/00/40/00
22869 be/4 apache     19.15 K/s   61.29 K/s  0.00 % 99.99 % rm -rf 6/15/11/22/00/40/00
22869 be/4 apache     19.18 K/s   65.21 K/s  0.00 % 91.96 % rm -rf 6/15/11/22/00/40/00
22869 be/4 apache     19.06 K/s   49.54 K/s  0.00 % 96.53 % rm -rf 6/15/11/22/00/40/00
22869 be/4 apache      7.70 K/s   19.24 K/s  0.00 % 89.94 % rm -rf 6/15/11/22/00/40/00
22869 be/4 apache    153.92 K/s  519.50 K/s  0.00 % 99.99 % rm -rf 6/15/11/22/00/40/00
22869 be/4 apache    410.19 K/s  846.97 K/s  0.00 % 99.99 % rm -rf 6/15/11/22/00/40/00
22869 be/4 apache    376.58 K/s  664.79 K/s  0.00 % 92.43 % rm -rf 6/15/11/22/00/40/00
22869 be/4 apache    210.99 K/s  333.75 K/s  0.00 % 71.10 % rm -rf 6/15/11/22/00/40/00
22869 be/4 apache     49.93 K/s   76.81 K/s  0.00 % 99.99 % rm -rf 6/15/11/22/00/40/00
22869 be/4 apache    287.96 K/s  453.06 K/s  0.00 % 99.99 % rm -rf 6/15/11/22/00/40/00
You can see how bad the I/O is (99% waiting); basically, the server grinds to a halt.
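One possible mitigation while the filesystem question is sorted out (a sketch, not ZM-specific; it assumes the util-linux `ionice` tool and an I/O scheduler that honors the idle class) is to drop the purge's rm into the idle I/O class so recording traffic wins:

```shell
# Move an already-running delete (PID 22869 from the iotop output above)
# into the idle I/O scheduling class
sudo ionice -c 3 -p 22869

# Or launch a manual cleanup pre-deprioritized, using the events path
# reported by tune2fs above
sudo ionice -c 3 nice -n 19 rm -rf /var/lib/zoneminder/events/6/15/11/22/00/40/00
```

This doesn't reduce the total I/O, it only stops the delete from starving the eight writers.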

Code:

[abishai@zoneminder ~]$ sudo tune2fs -l /dev/vdb
tune2fs 1.42.9 (28-Dec-2013)
Filesystem volume name:   <none>
Last mounted on:          /var/lib/zoneminder/events
Filesystem UUID:          c844bf64-46cb-48c9-855b-4c5af1407556
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      ext_attr resize_inode dir_index filetype sparse_super large_file
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         not clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              32768000
Block count:              131072000
Reserved block count:     6553600
Free blocks:              114282702
Free inodes:              32474883
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      992
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              2
RAID stripe width:        2
Filesystem created:       Thu Nov 19 01:18:38 2015
Last mount time:          Sat Nov 21 12:24:15 2015
Last write time:          Sat Nov 21 12:24:15 2015
Mount count:              6
Maximum mount count:      -1
Last checked:             Thu Nov 19 01:18:38 2015
Check interval:           0 (<none>)
Lifetime writes:          658 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:	          256
Required extra isize:     28
Desired extra isize:      28
Default directory hash:   half_md4
Directory Hash Seed:      05e08129-d59a-4b74-9aa7-02ad14558c22
bbunge
Posts: 2975
Joined: Mon Mar 26, 2012 11:40 am
Location: Pennsylvania

Re: ZM is writing faster than its ability to clean

Post by bbunge »

Your three gigabytes may not be enough for eight cameras, especially if they are high-definition. Reduce the size of your tmpfs to gain a bit more usable RAM. Also reduce your image size, and turn off CREATE_ANALYSIS_IMAGES.

Turn off OPT_FAST_DELETE, which will allow event images to be deleted along with their database records. With OPT_FAST_DELETE enabled, just the database record is removed, and the system then depends on zmaudit to reconcile event images against events in the database. Meanwhile the PurgeWhenFull filter sees that the disk percentage has not changed and keeps deleting events from the database.

EXT2 could be a problem. I suggest you try EXT4 without a journal on that drive, or just EXT4 with noatime.
abi
Posts: 61
Joined: Fri Oct 23, 2015 11:25 am

Re: ZM is writing faster than its ability to clean

Post by abi »

I switched to ext4.

Code:

mkfs.ext4 -T small -m 1 /dev/vdb
tune2fs -O ^has_journal /dev/vdb

Code:

/dev/vdb				/zoneminder			ext4	defaults,noatime,nodiratime,commit=120		0 2
It looks like ext4 handles ZM's load pattern much better.
You shouldn't disable the journal on a root volume, though.
Locked