
A number of fairly simple improvements could be made:
#1. Use a 16-bit fixed point number to hold the reference image. This is very quick to update and holds much better precision, particularly in darker areas, i.e.
ref_pixel = ref_pixel - (ref_pixel>>7) + (image_pixel<<1)
where ref_pixel is stored in an array of shorts, not chars. This gives a 1/128 blend using only shifts and adds.
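For concreteness, here is a minimal sketch of what #1 might look like in C. The function name, the buffer layout, and the loop are just illustrative assumptions, not existing ZoneMinder code; the update line itself is the formula above.

#include <stdint.h>

#define BLEND_SHIFT 7   /* 1/128 blend, matching the formula above */

/* ref holds the reference image in 8.8 fixed point (pixel value * 256),
 * so it keeps fractional precision an 8-bit reference would throw away. */
void update_reference(uint16_t *ref, const uint8_t *image, int n_pixels)
{
    for (int i = 0; i < n_pixels; i++) {
        /* ref = ref*(127/128) + image*(1/128), all in 8.8 fixed point:
         * image*(256/128) is just image<<1. */
        ref[i] = ref[i] - (ref[i] >> BLEND_SHIFT) + ((uint16_t)image[i] << 1);
    }
}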
#2. Use smart filtering. This is slightly more complicated: you store a second reference image for the delta image, with very slow blending, and then subtract it from the delta image before you check for motion.
This has the effect of subtracting out areas of the image that are always changing a little bit, for example waving branches, moving shadows, camera noise, and JPEG artifacts.
This lets you get rid of the 'min_threshold' stuff, as the filter automatically adjusts to the level of background noise the camera generates.
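A rough sketch of the idea in #2, again in C. The names, the 1/256 blend factor, and the clamp-to-zero behaviour are assumptions I'm making for illustration, not a spec:

#include <stdint.h>

/* noise_ref is the second, slowly-updated reference of the delta image,
 * in 8.8 fixed point. It learns how much each pixel normally changes, so
 * steady noise sources (branches, shadows, sensor noise, JPEG blocking)
 * cancel out of the delta before it is scored. */
void filter_delta(uint8_t *delta, uint16_t *noise_ref, int n_pixels)
{
    for (int i = 0; i < n_pixels; i++) {
        /* Very slow 1/256 blend of the raw delta into the noise reference. */
        noise_ref[i] = noise_ref[i] - (noise_ref[i] >> 8) + delta[i];

        /* Subtract the learned background noise, clamping at zero. */
        uint8_t bg = noise_ref[i] >> 8;
        delta[i] = (delta[i] > bg) ? delta[i] - bg : 0;
    }
}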
#3. Use the 4th power of the difference instead of the absolute difference. Generally, a small part of the image changing intensity a lot is much more important than the entire image changing a little bit. The current scoring treats them the same, but summing the 4th power gives much bigger numbers for an object moving, and small numbers for the entire image getting a little brighter (as the camera auto-adjusts).
This also lets you disregard min_threshold, since small changes barely contribute to the score anyway.
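To put numbers on it: with absolute differences, 1000 pixels drifting by 2 (sum 2000) outweighs 10 pixels changing by 100 (sum 1000); with 4th powers the 10 moving pixels score 10*100^4 = 10^9 against only 1000*2^4 = 16000 for the drift. A sketch of the scoring loop (names are made up, not ZoneMinder's):

#include <stdint.h>

/* Score a frame by summing the 4th power of each (filtered) pixel delta.
 * Localized large changes dominate; uniform small changes from
 * auto-exposure contribute almost nothing. */
uint64_t motion_score(const uint8_t *delta, int n_pixels)
{
    uint64_t score = 0;
    for (int i = 0; i < n_pixels; i++) {
        uint64_t d = delta[i];
        score += d * d * d * d;   /* 255^4 per pixel still fits easily in 64 bits */
    }
    return score;
}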
#4. Fix the colour space! Argh! ZoneMinder assumes that the input JPEG is RGB, which gives weird image colouring when the image is really YUV.
So the question is: If I get off my arse and implement all this, is there any interest in merging the result into ZoneMinder?