interlacing: this is the question!
hi,
I have a simple (I thought...) question: I want to record a car moving along the street (at least 50 km/h)... but the image is always affected by blurring, and there are annoying black lines (probably the result of interlacing)... I have read in some posts that it is possible to set the option "ZM_CAPTURES_PER_FRAME" to reduce this effect, but I can't get it to work. What kind of setting should I use?
The only way to remove interlacing in a CCTV system is to capture at or below 320x240. The other option might be to find a progressive-scan capture card and camera, if they exist; I'm sure they are pricey.
Your other option would be to use an IP camera possibly.
Other than that there are no other options, it's the nature of how cctv works.
Deinterlacing is on playback of course, not capture.
So for 8 cameras you need to do:
1) Capture image from camera
2) That's it
This happens already, so I don't think there'll be any problem here.
The only capture modification I'd make would be to capture the odd/even fields as separate JPEGs, so the lossy compression can't cause interference between them.
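Roughly what I mean, as a sketch (Python with numpy/Pillow, not actual ZoneMinder code, and the filenames are made up for illustration):

```python
# Sketch only: split one interlaced frame into its two fields and save
# each as its own JPEG, so the lossy compression never mixes scanlines
# that were captured at different moments.
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("capture.jpg"))   # e.g. a 640x480 interlaced frame

even_field = frame[0::2, :]   # lines 0, 2, 4, ... (one point in time)
odd_field  = frame[1::2, :]   # lines 1, 3, 5, ... (1/50 or 1/60 s later)

# Each field is now 640x240 and gets compressed on its own.
Image.fromarray(even_field).save("capture-even.jpg", quality=85)
Image.fromarray(odd_field).save("capture-odd.jpg", quality=85)
```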
record vs. playback
There's a potential problem doing it that way. If your capture rate isn't high enough, you will have only one half of the interlace and it can't be merged with the other half.
I have had lots of captures with motion too fast to be grabbed in sequential frames (at 640x480). So if I look at a JPG that has the "lined" part that is in motion, and then the next JPG captured immediately after it, they are not two halves of the same frame; they are successive frames, and the object has already moved from that position.
On one camera this would be easy, just crank the capture rate up to 30fps and then there's a good chance that you could deinterlace the frames immediately following each other. But on a 1.6 GHz machine with 3 cameras on it, I can't capture any higher than about 12fps or I start getting dropped frames because my CPU isn't fast enough to keep up when analyzing with zma.
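To put it another way: you can only weave two fields back together if they actually came from the same interlaced frame. A toy sketch (Python/numpy; the function name, timestamps and threshold are all made up for illustration):

```python
# Toy illustration: re-interleave two half-height fields into one frame,
# but only if they were captured close enough together to belong to the
# same interlaced frame. The 40 ms threshold is an assumption.
import numpy as np

def weave(even_field, odd_field, t_even, t_odd, max_gap=0.04):
    """Return a full-height frame, or None if the fields came from
    different frames (e.g. because the capture rate was too low)."""
    if abs(t_even - t_odd) > max_gap:   # more than ~1/25 s apart
        return None                     # can't merge: the object has moved
    h, w = even_field.shape[:2]
    frame = np.empty((h * 2, w) + even_field.shape[2:], even_field.dtype)
    frame[0::2] = even_field            # even scanlines
    frame[1::2] = odd_field             # odd scanlines
    return frame
```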
Sure, if you don't capture at full frame rate deinterlacing isn't particularly useful. My cameras capture at 10fps until motion is detected, then full frame rate. Given that not all cameras trigger together you can probably get quite a few cameras on any modern box before there's any trouble. Besides, computers get faster all the time so what's possible with current hardware now will be easy with future hardware.
Just to resurrect this: do any of us have any suggestions for using bttv at above half vertical res?
The only reason I ask is because my own setup at home has a nice camera on it and I'd like to run at 720x576.
James
James Wilson
Disclaimer: The above is pure theory and may work on a good day with the wind behind it. etc etc.
http://www.securitywarehouse.co.uk
I've no answers as such.
But I've recently started using MythTV, which has built-in support for deinterlacing. Watching the football the other night, it seemed quite effective.
I don't know any more than that, but perhaps there is someone out there technical enough to work out how it does it and apply it to ZM?
Just a thought.
This keeps coming up... Of course my partner keeps asking the same thing, and he still doesn't get it either.
So first, start here. http://www.jkor.com/peter/tvlines.html That should give you a nice headache. But the point is this... The composite video standard requires alternating black lines. To give a clear picture, the odd lines are black in one frame, and the even lines are black in the other. Now black in analog video is simply the absence of light... That means any color overwrites it. So if I combine lines 1 and 2, the black is completely covered by the image. This is why 320 x 240 looks clean.
Now deinterlacing is a hack... On old analog TVs it was moot, as the image from the last frame is still slightly glowing when you paint the next frame. This does not work in digital, so they had to develop a few hacks. Mostly it is done with frame averaging. This means every frame is the current frame with the frame immediately prior layered on top. Another method is frame stretching: a frame is captured at 640 x 240, and all horizontal lines are doubled, so you get a 640 x 480 pic. Both hacks have issues. In the first one, you have the most data, but at different times. This can give a comb look to fast-moving images. There is a decomb filter just to address this on DVDs. The other method can look better with fast-moving images, but it has less data, while taking up just as much space and bandwidth.
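If anyone wants to see the two hacks side by side, here is a rough sketch (Python/numpy, purely illustrative, nothing to do with how any real player or ZM does it):

```python
# Rough sketch of the two deinterlacing hacks described above.
# Purely illustrative; real players use more sophisticated filters.
import numpy as np

def blend_deinterlace(current, previous):
    """'Frame averaging': layer the previous frame on top of the current
    one by averaging them. Keeps all the data but mixes two moments in
    time, which is what produces the comb/ghost look on fast motion."""
    mixed = (current.astype(np.uint16) + previous.astype(np.uint16)) // 2
    return mixed.astype(np.uint8)

def line_double(field):
    """'Frame stretching': take a 640x240 field and repeat every line,
    giving a 640x480 picture. No combing, but half the vertical detail."""
    return np.repeat(field, 2, axis=0)
```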
A possible solution... I had a thought... What if we could capture at 640 x 240, and store at 640 x 240, but display and export at 640 x 480? There are lots of advantages here: space, quality, processing time to analyze and store, and best of all, it would be someone else doing it! In case you didn't know, my coding skills stink...
I use interpolation to achieve this for live view and playback in zm4ms, but D1 is better.
De-interlacing is possible and there are various ways to do it, and I do understand the issues, but even 2CIF has resolution issues.
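For anyone trying to picture the difference, a throwaway sketch (Python/numpy, not taken from zm4ms) of interpolating between scanlines instead of just repeating them:

```python
# Throwaway sketch: scale a half-height field up to full height by
# linear interpolation between neighbouring lines, rather than simply
# doubling each line. Just to show the idea, not the zm4ms code.
import numpy as np

def interpolate_lines(field):
    field = field.astype(np.float32)
    doubled = np.repeat(field, 2, axis=0)          # start from line doubling
    # Replace every second line with the average of its two neighbours.
    doubled[1:-1:2] = (doubled[0:-2:2] + doubled[2::2]) / 2
    return doubled.astype(np.uint8)
```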
James Wilson
Disclaimer: The above is pure theory and may work on a good day with the wind behind it. etc etc.
http://www.securitywarehouse.co.uk