This smearing is caused by "lost" RTP packets, specifically during reception of an I-frame. FFMPEG's H.264 decoder does a really good job of concealing the occasional lost packet, but when losses occur during an I-frame there's nothing the decoder can do.
Needless to say, this smearing wreaks havoc with ZM's motion detection, causing all sorts of false alarms. In the past, the standard workaround was to append "?tcp" to the end of the RTSP URI in the ZM camera configuration; while this used to work, in recent versions of FFMPEG it no longer has any effect. Now in order to force the RTP packets to be sent over a TCP transport layer, you have to hack the ZM code as described here:
With this modification the lost packets are eliminated and the problem goes away. This is what I did in FfmpegCamera::PrimeCapture to fix my issue:
AVDictionary *opts = NULL;
av_dict_set(&opts, "rtsp_transport", "tcp", 0);
avformat_open_input(&mFormatContext, mPath.c_str(), NULL, &opts);
av_dict_free(&opts); // free any entries avformat_open_input did not consume
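Before (or instead of) patching ZM, you can check that TCP transport actually cures the smearing by pointing FFMPEG's own tools at the camera; the URI below is a placeholder for your camera's stream:

```shell
# Force RTP over TCP instead of UDP; substitute your camera's RTSP URI.
ffplay -rtsp_transport tcp rtsp://camera.example/stream
```

If the smearing disappears here but returns with the default UDP transport, the ZM-side fix above should behave the same way.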
But why are the RTP packets being lost in the first place? My cameras are on a gigabit LAN, which should have more than enough capacity for dozens of these cameras. It turns out the packet loss is a bug in FFMPEG itself:
Although the issue was identified some two years ago, there's no indication of an impending fix to FFMPEG. Hopefully the switch to libvlc in the next release of ZM will solve this issue once and for all.

The root cause: the OS UDP buffer overflows because rtpproto.c disables FFMPEG's internal ring buffer, and without the ring buffer the code depends on the OS having large enough buffers, which it plainly doesn't. To fix this you would have to do two things: first, remove "url_add_option(buf, buf_size, "fifo_size=0");" from rtpproto.c; and second, make the RTP code actually work with the UDP code and its fifo. Currently the RTP code hacks into the UDP code, extracts its file descriptor, and accesses it directly, bypassing the fifo, which cannot work.
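To make the fifo idea concrete: the point of a ring buffer between the socket reader and the consumer is that a burst of packets can be drained out of the OS buffer immediately and parked in user space, so the kernel buffer never overflows. Here is a minimal sketch of such a byte FIFO in plain C; the names are hypothetical and this is not FFMPEG's actual fifo API, just an illustration of the structure the fix depends on:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal byte FIFO: the reader thread pushes socket data in as fast as
 * it arrives, the consumer pulls it out at its own pace. */
typedef struct {
    unsigned char buf[4096];
    size_t head;  /* next write position */
    size_t tail;  /* next read position */
    size_t fill;  /* bytes currently buffered */
} Fifo;

/* Append up to n bytes; returns how many were accepted. */
static size_t fifo_write(Fifo *f, const unsigned char *src, size_t n) {
    size_t written = 0;
    while (written < n && f->fill < sizeof(f->buf)) {
        f->buf[f->head] = src[written++];
        f->head = (f->head + 1) % sizeof(f->buf);
        f->fill++;
    }
    return written;
}

/* Remove up to n bytes; returns how many were actually read. */
static size_t fifo_read(Fifo *f, unsigned char *dst, size_t n) {
    size_t got = 0;
    while (got < n && f->fill > 0) {
        dst[got++] = f->buf[f->tail];
        f->tail = (f->tail + 1) % sizeof(f->buf);
        f->fill--;
    }
    return got;
}
```

The RTP code bypassing this layer and reading the raw fd directly is exactly what defeats the buffering, which is why the quoted fix requires rewiring the RTP/UDP interaction rather than just re-enabling the fifo.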