Camera Simulation for Demonstration Purposes?
Posted: Sat Feb 06, 2016 8:07 pm
by linforpros
Hello,
I wonder whether ZoneMinder would allow me to create six simulated "cameras" to demonstrate its features (to a limited degree) without physical cameras actually installed. With smartphone apps now available, it would be desirable to quickly see how they work with ZoneMinder without investing in actual cameras, but not only that.
Would the following process make sense:
1. download 6 video files
2. run ffmpeg on those files and output JPEG snapshots, saving them somewhere visible to ZoneMinder
3. make sure ffmpeg outputs the snapshots in a loop so there is a non-stop source available to ZoneMinder
4. have ZoneMinder read those snapshots, adding them as a camera with File as the source type.
If anyone could comment on the above and steer it in the right direction, that would be great.
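Steps 2 and 3 could be sketched roughly like this; the file name and output path are placeholders, and this is untested:

```shell
# Steps 2-3 sketched: extract one JPEG per second from a demo clip,
# overwriting a single file that a ZoneMinder "File" monitor can watch.
# The outer loop restarts ffmpeg when the clip ends, giving a non-stop feed.
while true; do
    ffmpeg -y -i demo1.mp4 -vf fps=1 -update 1 /var/lib/zoneminder/images/cam1.jpg
done
```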
Re: Camera Simulation for Demonstration Purposes?
Posted: Sat Feb 06, 2016 8:13 pm
by asker
ZoneMinder has a "file" mode as a camera source which does exactly this. You can point it to a JPEG somewhere and that becomes the feed. Keep changing the JPEG as often as you need and it becomes a video.
Re: Camera Simulation for Demonstration Purposes?
Posted: Sat Feb 06, 2016 8:35 pm
by mikb
Also: you don't need to split 6 videos into a pile of JPEGs -- you can feed the video directly to ZoneMinder.
I recorded the output of each one of my cameras (motion jpeg video stream) into a file "vidfile.mjpeg" for testing/setup to give a known constant video input. It's easier to tune the settings when you have files of "known nothing", "small activity", "large activity" etc. to see what ZM detects/misses!
To manually view the result (I don't think XINE or VLC would take it), I used
ffmpeg -r 5 -i input.mjpeg -f avi -q:v 5 -vtag XVID -g 60 -bf 2 output.avi
to get a compliant file (output.avi) that I could watch outside of ZM at 5 frames per second, hence the -r 5.
Then, in ZM, I set up a "FileTest" camera with Source type FFMPEG and both FPS settings set to 5, to control the rate to match the camera's output.
I'm sure you could make the source file "big enough" in the first place or maybe add an option to FFMPEG to loop/repeat the input file to play forever.
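If your ffmpeg build is recent enough, the input can be looped directly with -stream_loop rather than pre-growing the file. A sketch, untested against the mjpeg file described above; the UDP address is an arbitrary example of somewhere a ZoneMinder ffmpeg-source monitor could be pointed:

```shell
# -stream_loop -1 replays the input forever; -re reads it at its native
# frame rate so the output advances in real time rather than at full speed.
ffmpeg -re -stream_loop -1 -i input.mjpeg -c:v mpeg2video -q:v 5 \
       -f mpegts udp://127.0.0.1:1234
```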
Re: Camera Simulation for Demonstration Purposes?
Posted: Sat Feb 06, 2016 8:54 pm
by linforpros
That is exactly what interests me here.
Is there a simple command line to accomplish this without overloading the CPU (if that is an issue at all), and to have ZoneMinder read the output file without complaining about permissions?
I do not know if the following would be correct:
Code:
ffmpeg -stream_loop -1 -i input.flv -vf fps=1 -update 1 /var/lib/zoneminder/images/image_overwritten.jpg
-stream_loop -1 would supposedly replay the input indefinitely
-update 1 would keep writing to a single file name instead of creating multiple output files.
If the command is run as root, would the output have the right permissions for zoneminder to read it?
Thank you for any hints
Re: Camera Simulation for Demonstration Purposes?
Posted: Sat Feb 06, 2016 11:08 pm
by asker
Right - the file mode makes it easy to loop.
I haven't tried that exact ffmpeg command, but you can always store the file in a readable directory and ZM will read and display it. Give it a try with a single image first, then write a script to change it in a loop. It should work.
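A minimal sketch of such a script, assuming the output path from the earlier post and that world-readable files are enough for the ZoneMinder user (both assumptions, adjust for your distro):

```shell
#!/bin/sh
# Refresh a single snapshot once a second. Writing to a temp file and
# renaming avoids ZoneMinder ever reading a half-written JPEG, and the
# chmod addresses the permissions worry even when the script runs as root.
OUT=/var/lib/zoneminder/images/image_overwritten.jpg
TMP=$OUT.tmp
N=0
while true; do
    # Grab one frame, seeking a little further into the clip each pass.
    ffmpeg -y -ss "$N" -i input.flv -vframes 1 -q:v 5 "$TMP" 2>/dev/null
    chmod 644 "$TMP"
    mv "$TMP" "$OUT"        # rename is atomic on the same filesystem
    N=$(( (N + 1) % 60 ))   # wrap after 60 s; adjust to the clip length
    sleep 1
done
```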
Re: Camera Simulation for Demonstration Purposes?
Posted: Tue Apr 12, 2016 9:18 pm
by knight-of-ni
I just set up a couple of virtual cameras in ZoneMinder using ffserver. It turns out I didn't need to use the ffmpeg binary to create a feed; that just overcomplicated things.
I created an ffserver.conf file:
Code:
#
# To run the server:
# ffserver -f /etc/ffserver.conf
#
# Port on which the server is listening. You must select a different
# port from your standard HTTP web server if it is running on the same
# computer.
HTTPPort 8090
RTSPPort 8554
# Address on which the server is bound. Only useful if you have
# several network interfaces.
HTTPBindAddress 0.0.0.0
RTSPBindAddress 0.0.0.0
# Number of simultaneous HTTP connections that can be handled. It has
# to be defined *before* the MaxClients parameter, since it defines the
# MaxClients maximum limit.
MaxHTTPConnections 2000
# Number of simultaneous requests that can be handled. Since FFServer
# is very fast, it is more likely that you will want to leave this high
# and use MaxBandwidth, below.
MaxClients 1000
# This is the maximum amount of kbit/sec that you are prepared to
# consume when streaming to clients.
MaxBandwidth 1000
# Access log file (uses standard Apache log file format)
# '-' is the standard output.
#CustomLog /var/log/ffserver
CustomLog -
#
# FEEDS
#
# Rather than play the feeds externally and reference them here,
# just specify the filename directly in the Stream section.
#<Feed camfeed1.ffm>
# File /tmp/camfeed1.ffm
# FileMaxSize 2M
# ACL allow 127.0.0.1
# ACL allow 192.168.1.0 192.168.1.255
#</Feed>
#
# STREAMS
#
<Stream camstream1.sdp>
File "/path/to/video1.mpg"
Format rtp
Noaudio
VideoFrameRate 5
VideoSize 640x480
ACL allow 127.0.0.1
ACL allow 192.168.1.0 192.168.1.255
</Stream>
<Stream camstream2.sdp>
File "/path/to/video2.mpg"
Format rtp
Noaudio
VideoFrameRate 5
VideoSize 640x480
ACL allow 127.0.0.1
ACL allow 192.168.1.0 192.168.1.255
</Stream>
I then created and installed a systemd unit file to auto-start ffserver:
Code:
# systemd configuration for ffserver
# /etc/systemd/system/ffserver.service
[Unit]
Description=ffserver streaming server
Before=zoneminder.service
[Service]
ExecStart=/usr/bin/ffserver -f /etc/ffserver.conf
#ExecReload=/bin/kill -HUP $MAINPID
Type=simple
User=root
Group=root
Restart=always
[Install]
WantedBy=multi-user.target
For some reason ffserver crashes after a few minutes. Confirmed on different machines and different versions of ffmpeg. However, the "Restart=always" parameter in the systemd unit file will restart it automatically so you won't notice it happened.
Also note the "Before=zoneminder.service" parameter which will, as the name implies, ensure this is started up before zoneminder tries to access the streams.
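For completeness, the usual commands to register and start the unit (standard systemd workflow):

```shell
# Make systemd re-read unit files, then start ffserver now and on boot.
systemctl daemon-reload
systemctl enable ffserver.service
systemctl start ffserver.service
# Check that it is running (and being restarted after its crashes):
systemctl status ffserver.service
```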
Last, I programmed each monitor like so:
Source Type: ffmpeg
Source Path: rtsp://127.0.0.1:8554/camstream1.sdp
Remote Method: RTP/Unicast
Target colorspace: 32bit
Width: 640
Height: 480
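Before wiring a monitor to the stream, it can be sanity-checked outside ZoneMinder with ffplay, which ships with ffmpeg:

```shell
# Should open a window playing the looped test video if ffserver is up.
ffplay rtsp://127.0.0.1:8554/camstream1.sdp
```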
Re: Camera Simulation for Demonstration Purposes?
Posted: Tue May 24, 2016 4:08 pm
by linforpros
Thank you knnniggett for sharing how to set up simulated cameras with the ffserver systemd service. It works here on my computer (Fedora 23). I confirm that ffserver crashes every minute; the Restart=always stanza in the ffserver.service file seems to solve the problem, at least temporarily. However, with 4 streams set up, viewing them with zmNinja is very slow in my case, both from the local network and over a Verizon wireless connection.
The files being served by ffserver have these parameters:
Code:
# ffprobe pappy.mp4
ffprobe version 2.6.5 Copyright (c) 2007-2015 the FFmpeg developers
built with gcc 5.1.1 (GCC) 20150618 (Red Hat 5.1.1-4)
configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --enable-bzlib --disable-crystalhd --enable-frei0r --enable-gnutls --enable-ladspa --enable-libass --enable-libcdio --enable-libdc1394 --disable-indev=jack --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-openal --enable-libopencv --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-x11grab --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
libavutil 54. 20.100 / 54. 20.100
libavcodec 56. 26.100 / 56. 26.100
libavformat 56. 25.101 / 56. 25.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 11.102 / 5. 11.102
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'pappy.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.25.101
Duration: 00:01:00.94, start: 0.010907, bitrate: 5098 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, unknown/unknown/iec61966-2-1), 640x480 [SAR 8:9 DAR 32:27], 4964 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: mp3 (mp4a / 0x6134706D), 44100 Hz, stereo, s16p, 127 kb/s (default)
Metadata:
handler_name : SoundHandler
Perhaps I made a mistake preparing the file? I used my smartphone to record in HD and then OpenShot's export feature under Fedora to produce the above file (pappy.mp4).
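One likely suspect is the bitrate: ~5 Mbit/s is very high for 640x480 and will strain both ffserver and the clients. A lower-bitrate, lower-framerate re-encode might help; this is a sketch with illustrative values, not tested against this exact file:

```shell
# Re-encode to 5 fps and ~500 kbit/s, dropping the audio track
# (ZoneMinder does not use it). The output values are examples only.
ffmpeg -i pappy.mp4 -an -r 5 -c:v libx264 -b:v 500k pappy_lowrate.mp4
```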
Also, the simulcameras look like they have some sort of padding under zmNinja (in montage view). I would like them to be without "borders", almost touching each other, in montage view. Any hints from anybody on that?
Thank you so much.
Re: Camera Simulation for Demonstration Purposes?
Posted: Wed Jun 01, 2016 10:07 pm
by linforpros
I have two questions pertaining to the "simulcameras":
1. Given the resolution of 640x480, what are the lowest acceptable encoding parameters for the pre-recorded video file?
2. Should the Max FPS setting be defined, as you mentioned in another post about the "file" source type?
The load on the server is enormous, hence I wanted to know which way to steer to handle the test.
Here is the file I am using and is causing problems:
Code:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'bank.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.25.101
Duration: 00:01:04.92, start: 0.010907, bitrate: 3887 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, unknown/unknown/iec61966-2-1), 640x480 [SAR 8:9 DAR 32:27], 3753 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: mp3 (mp4a / 0x6134706D), 44100 Hz, stereo, s16p, 127 kb/s (default)
Metadata:
handler_name : SoundHandler
The FPS in the file shows 29.97. But since it is a video file served by ffserver, should it have Max FPS defined?