
Posted: Wed Apr 05, 2006 2:29 pm
by zoneminder
I do some work for a company that also uses video capture cards, and they have never come across a camera/card combination that doesn't interlace above 320 pixels high, including so-called 'professional' equipment, so I just don't think they exist, I'm afraid.

Posted: Wed Apr 05, 2006 3:11 pm
by jameswilson
I agree, and I think the limit is 288 (PAL). Most 'professional' kit has either 360x288 or 720x288. ZM will run at 720x288, but the clock info is slightly compressed once you sort out the aspect ratio. Most pro kit that claims 570-line recording is lying and interpolates up from 288. I have used pro kit that can do high res, but it has the interlace issue. There must be a way to do it, even if we have to use non-interline-transfer cameras. I have been looking at the V4L2 API, and although most of it is outside my knowledge I found some interesting things:
3.6. Field Order

We have to distinguish between progressive and interlaced video. Progressive video transmits all lines of a video image sequentially. Interlaced video divides an image into two fields, containing only the odd and even lines of the image, respectively. The so-called odd and even fields are transmitted alternately, and due to a small delay between fields a cathode-ray TV displays the lines interleaved, yielding the original frame. This curious technique was invented because at refresh rates similar to film the image would fade out too quickly. Transmitting fields reduces the flicker without the necessity of doubling the frame rate and with it the bandwidth required for each channel.

It is important to understand a video camera does not expose one frame at a time, merely transmitting the frames separated into fields. The fields are in fact captured at two different instances in time. An object on screen may well move between one field and the next. For applications analysing motion it is of paramount importance to recognize which field of a frame is older, the temporal order.

When the driver provides or accepts images field by field rather than interleaved, it is also important applications understand how the fields combine to frames. We distinguish between top and bottom fields, the spatial order: The first line of the top field is the first line of an interlaced frame, the first line of the bottom field is the second line of that frame.

However because fields were captured one after the other, arguing whether a frame commences with the top or bottom field is pointless. Any two successive top and bottom, or bottom and top fields yield a valid frame. Only when the source was progressive to begin with, e. g. when transferring film to video, two fields may come from the same frame, creating a natural order.

Counter to intuition the top field is not necessarily the older field. Whether the older field contains the top or bottom lines is a convention determined by the video standard. Hence the distinction between temporal and spatial order of fields. The diagrams below should make this clearer.

All video capture and output devices must report the current field order. Some drivers may permit the selection of a different order, to this end applications initialize the field field of struct v4l2_pix_format before calling the VIDIOC_S_FMT ioctl. If this is not desired it should have the value V4L2_FIELD_ANY (0).

Table 3-8. enum v4l2_field
V4L2_FIELD_ANY (0): Applications request this field order when any one of the V4L2_FIELD_NONE, V4L2_FIELD_TOP, V4L2_FIELD_BOTTOM, or V4L2_FIELD_INTERLACED formats is acceptable. Drivers choose depending on hardware capabilities or e.g. the requested image size, and return the actual field order. The struct v4l2_buffer field can never be V4L2_FIELD_ANY.
V4L2_FIELD_NONE (1): Images are in progressive format, not interlaced. The driver may also indicate this order when it cannot distinguish between V4L2_FIELD_TOP and V4L2_FIELD_BOTTOM.
V4L2_FIELD_TOP (2): Images consist of the top field only.
V4L2_FIELD_BOTTOM (3): Images consist of the bottom field only. Applications may wish to prevent a device from capturing interlaced images because they will have "comb" or "feathering" artefacts around moving objects.
V4L2_FIELD_INTERLACED (4): Images contain both fields, interleaved line by line. The temporal order of the fields (whether the top or bottom field is transmitted first) depends on the current video standard. M/NTSC transmits the bottom field first, all other standards the top field first.
V4L2_FIELD_SEQ_TB (5): Images contain both fields; the top field lines are stored first in memory, immediately followed by the bottom field lines. Fields are always stored in temporal order, the older one first in memory. Image sizes refer to the frame, not fields.
V4L2_FIELD_SEQ_BT (6): Images contain both fields; the bottom field lines are stored first in memory, immediately followed by the top field lines. Fields are always stored in temporal order, the older one first in memory. Image sizes refer to the frame, not fields.
V4L2_FIELD_ALTERNATE (7): The two fields of a frame are passed in separate buffers, in temporal order, i.e. the older one first. To indicate the field parity (whether the current field is a top or bottom field) the driver or application, depending on data direction, must set the struct v4l2_buffer field to V4L2_FIELD_TOP or V4L2_FIELD_BOTTOM. Any two successive fields pair to build a frame. Whether fields are successive, without any dropped fields between them (fields can drop individually), can be determined from the struct v4l2_buffer sequence field. Image sizes refer to the frame, not fields. This format cannot be selected when using the read/write I/O method.

Figure 3-1. Field Order, Top Field First Transmitted

Figure 3-2. Field Order, Bottom Field First Transmitted
Taken from http://www.linuxtv.org/downloads/video4 ... /v4l2.html

It appears that the code that requests the image has various ways of asking, and I'm assuming the above is one of the relevant arguments that would be passed to the driver (a rough sketch of what I mean is below). I may be way off, but can someone who does know take a look at this?
V4L2_VBI_INTERLACED (0x0002): By default the two field images will be passed sequentially; all lines of the first field followed by all lines of the second field (compare Section 3.6, V4L2_FIELD_SEQ_TB and V4L2_FIELD_SEQ_BT; whether the top or bottom field is first in memory depends on the video standard). When this flag is set, the two fields are interlaced (cf. V4L2_FIELD_INTERLACED): the first line of the first field followed by the first line of the second field, then the two second lines, and so on. Such a layout may be necessary when the hardware has been programmed to capture or output interlaced video images and is unable to separate the fields for VBI capturing at the same time. For simplicity setting this flag implies that both count values are equal and non-zero.
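
To make the above concrete, here is a minimal, untested sketch of how an application might ask a V4L2 driver for a particular field order through VIDIOC_S_FMT. The device path, image size and pixel format are just placeholders, and this is not ZoneMinder's actual capture code, only an illustration of the field member being set and read back:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    /* placeholder device node; a bttv card usually shows up as /dev/video0 */
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = 384;               /* placeholder size */
    fmt.fmt.pix.height      = 288;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_BGR24;
    fmt.fmt.pix.field       = V4L2_FIELD_TOP;    /* ask for one field only */

    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
        perror("VIDIOC_S_FMT");
    } else {
        /* the driver writes back the field order it actually chose */
        printf("driver chose field order %d\n", (int)fmt.fmt.pix.field);
    }

    close(fd);
    return 0;
}

If the driver honours V4L2_FIELD_TOP (or V4L2_FIELD_BOTTOM) you only get one field per frame, which is exactly the half-height, comb-free picture discussed above; if it comes back as V4L2_FIELD_INTERLACED you are stuck with de-interlacing in software.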
regards James

Posted: Fri Apr 07, 2006 1:22 pm
by skydiver
Sorry I haven't checked the thread in a while. I've been busy, and it's a testament to ZM that it can be left alone without intervention for extended periods of time.

No luck with the combfilter setting. I am going to try to call and talk to the people at KT&C about the image captured and sent by the camera, and I will report back my findings. I have called Ituner about the Spectra8 before, but there seems to be a complete lack of English-speaking support in the US office and a general lack of help when it comes to Linux.

What is frustrating is that the same hardware combination on a Windoze box captures images flawlessly. I am trying to sell a customer on ZoneMinder as an alternative to an overpriced Windows box, but this image quality is a limiting factor. I know I could use a different capture method (IP cameras vs. regular cameras), but our application requires some covert cameras.

Posted: Fri Apr 07, 2006 5:15 pm
by lazyleopard
zoneminder wrote:I do some work for a company that also uses video capture cards, and they have never come across a camera/card combination that doesn't interlace above 320 pixels high, including so called 'professional' equipment, so I just don't think they exist I'm afraid.
I guess that, while it would theoretically be possible to produce a camera whose interlaced half-frames would not exhibit combing, it isn't worth the effort. From the camera manufacturer's point of view it'll cost more, and for applications that want to show a moving picture it'll also flicker slightly more. Figures...

The one situation where I'd expect not to see combing would be on frames from something originally captured on film. But who would bother setting ZoneMinder to watch a film? ;)

Posted: Fri Apr 07, 2006 5:54 pm
by jameswilson
There must be a way to record digitally at higher res than 288, as my Sky+ box does it (admittedly not with MJPEG compression). Plus, I have seen some wavelet-compression DVRs that can do full res. I assume the issue is the bttv driver. Has anyone used the software that comes with these cards in Windows? I'm also assuming that if you use the correct driver this doesn't happen.

Posted: Sat Apr 08, 2006 9:15 am
by lazyleopard
I guess, when you're digitising an analogue broadcast signal, the vertical resolution is dictated by the scan lines, but the horizontal resolution is a matter of sample rate. The bttv digitising chip is just trying to come up with an image which has approximately square pixels. As analogue broadcast signals are interlaced, you're going to see combing artifacts in some cases, though movies that have been transferred from celluloid won't show it so badly.
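
To put rough numbers on that (just back-of-the-envelope, assuming full PAL): 576 visible lines at a 4:3 aspect ratio works out to 576 x 4/3 = 768 samples per line for square pixels, which is why these chips typically capture somewhere around 720-768 pixels wide at full height; a single 288-line field at the same aspect ratio only needs something like 384x288.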

Once you start considering digital broadcasts (like those received on Sky+ and FreeView) the analogue considerations go out the window.

Posted: Sat Apr 08, 2006 11:49 am
by jameswilson
Just to be an awkward git, what about Windows media centres with analogue tuning cards? I think I'm gonna fire up a Windows machine and see if that's the same. I also tried a progressive-scan camera a while back but can't remember the results.
When using xaw and tvtime, these have ways around it. Is this because they capture frames in a different way, or because of something else?

Posted: Sat Apr 08, 2006 11:07 pm
by lazyleopard
A Google search on "deinterlace" will likely throw up some interesting reading. http://www.100fps.com/ has quite a bit on the matter.

Posted: Sat Apr 08, 2006 11:20 pm
by jameswilson
I understand a lot about how de-interlacing works after my last effort to nail this, lol. I am currently trying to compile the official Osprey driver into my kernel and it's getting messy. I'm hoping that it's a driver issue, as the only way I can see it helping ZM is if the driver presents de-interlaced images to ZM. I have tried various 878-based cards and one SAA-based card without success. I'm beginning to think this is a lost cause!

Posted: Sun Apr 09, 2006 9:13 am
by lazyleopard
Yeah, for the kind of camera that gets hooked up to 8x8 cards, the only option is to do some de-interlacing after the raw frame capture. There do seem to be folk out there trying to get the necessary code together, but it's clearly a non-trivial task, and bound to put an extra load on the CPU. Quite a bit of the effort seems to relate to playing back DVDs on large screens.
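
For anyone wondering what that post-capture de-interlacing actually involves, here is a rough, untested sketch of one of the simpler approaches (a linear blend on an 8-bit greyscale frame). It is only an illustration of the idea and the per-pixel cost, not code from ZoneMinder or any driver:

#include <stdint.h>
#include <stddef.h>

/* Average each line with the line below it (which comes from the other
 * field). This softens combing at the cost of some vertical blur and a
 * full extra pass over every pixel -- hence the CPU hit. */
void deinterlace_blend(uint8_t *frame, size_t width, size_t height)
{
    for (size_t y = 0; y + 1 < height; y++) {
        uint8_t *cur  = frame + y * width;
        uint8_t *next = frame + (y + 1) * width;
        for (size_t x = 0; x < width; x++)
            cur[x] = (uint8_t)((cur[x] + next[x]) / 2);
    }
}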

I expect the answer to video quality problems is not to use standard interlaced video sources. For ZoneMinder, the easiest way round that is probably to use good quality network cameras.

combing - the frame grabber card?

Posted: Tue Jun 13, 2006 9:54 pm
by biadco
People seem to be focusing on the frame grabber card.
I disagree.
Try running TVTime - you will see no combing.
Boot to windoz and run a win program - you will see no combing.
My conclusion is that it is the software.
And why are we interlacing anyway?
Some of us still have some CRTs around, but most viewing is now done on an LCD screen.
I have also connected a camera directly to a monitor - no combing.
Can we have, in the options box, a check off to stop interlacing?
The combing effect is definitely going to be objectionable to some users.

Re: combing - the frame grabber card?

Posted: Wed Jun 14, 2006 12:03 pm
by lazyleopard
biadco wrote:And why are we interlacing anyway?
If you're using a video camera as a source, and it provides a "standard" analogue video signal, then it is going to be interlaced, because that's the way those things work.
I have also connected a camera directly to a monitor - no combing.
If you're viewing a moving image you won't notice the combing because you'll be seeing the moving image half-frame by half-frame. If you freeze-frame then you may in fact be seeing only one half-frame. This is probably the way things like TVTime do freeze-frame.
Can we have, in the options box, a check off to stop interlacing?
If you use an analogue video source then you can choose a resolution that can be satisfied by half frames.
The combing effect is definitely going to be objectionable to some users.
...or you can use a digital image source like a netcam.

Posted: Wed Jun 14, 2006 1:06 pm
by jameswilson
Try running TVTime - you will see no combing
Tvtime uses different algorithms to 'hide' the interlace effect, and they have differing impacts on processor use. Obviously this would be compounded if you had 8 cameras. (A rough sketch of the cheapest approach is at the end of this post.)
Can we have, in the options box, a check off to stop interlacing
If it were that simple I would be there already; the problem comes from the fact, as stated above, that most cameras output an interlaced signal. You need to blame whoever designed the PAL/CCIR/NTSC signal formats for this, and it was because that was the best they could do at the time. Unfortunately, we had a chance with the new HD standards to change this and we ended up with two of them: 720p, which is 720 lines progressive (non-interlaced), and 1080i, which is interlaced, so even though you have 1080 vertical lines each field only carries 540 of them. So I for one will be using 720p when we get it! The reason Sky and others pushed to make 1080 interlaced was bandwidth limitations: a 1080 interlaced signal at 50 fields per second takes the same bandwidth as 1080p at 25 frames per second. So for sport etc. 50 fields is better.
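
To illustrate the kind of trick players like tvtime can get away with, here is a rough, untested sketch of the cheapest one: throw one field away and line-double the other ("bob"). You effectively drop to one field of vertical resolution (288 lines for PAL), which is the same trade-off as capturing at half-frame height in the first place. This is just an illustration, not tvtime's or ZM's actual code:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Keep the even (top-field) lines and copy each one over the odd
 * (bottom-field) line below it, so the combing from the other field
 * never appears. Very cheap: one memcpy per line pair. */
void deinterlace_bob(uint8_t *frame, size_t width, size_t height)
{
    for (size_t y = 0; y + 1 < height; y += 2)
        memcpy(frame + (y + 1) * width, frame + y * width, width);
}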