Posted: Mon Jul 25, 2005 6:48 pm
Thanks James,
Yes, you are correct; in fact almost every surveillance camera within budget is the interline type. Now, from what I've read in CCD sensor datasheets, there are typically 582 x 512 rows and columns of CCD cells - enough to completely contain all the data in each line as it is read. The rate at which all transfers are completed is known as the shutter speed. It can vary from 1/100,000 second to 20 ms, or one field in NTSC/PAL.
This means that even though the other cells may change in other rows, the image technically is not an exact snapshot in time. However, if all rows can be read in 1/100,000 second then it is quite close, but additional memory is required to contain the digitized pixels. (I have some DSP CCD cameras that I'm testing currently that would almost certainly require this extra storage.)
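As a rough sanity check on that memory requirement, here is a small back-of-the-envelope sketch in C. The 582 x 512 geometry is taken from the figures above, and 8-bit digitization is my assumption, not something from a particular datasheet:

#include <stdio.h>

int main(void)
{
    /* Assumed sensor geometry from the figures above -- adjust to
     * match the actual datasheet for your camera. */
    const long rows = 582;
    const long cols = 512;
    const long bits_per_pixel = 8;   /* assuming 8-bit digitization */

    long pixels = rows * cols;
    long bytes  = pixels * bits_per_pixel / 8;

    printf("Full-frame buffer: %ld pixels = %ld bytes (~%ld KB)\n",
           pixels, bytes, bytes / 1024);
    /* One field (alternating rows only) needs roughly half that. */
    printf("Single-field buffer: ~%ld bytes (~%ld KB)\n",
           bytes / 2, bytes / 2048);
    return 0;
}

So a full digitized frame runs on the order of 300 KB, which is why a camera without that much memory can only ever work a field at a time.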
The controller will read alternating rows to derive the interlaced sequence and process a single field. How this occurs varies with the camera manufacturer. For example, a controller could digitize the entire sensor and store the data in memory. Or, if it had no additional memory, then the shutter would work only for a field.
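To make the alternating-row readout concrete, here is a minimal sketch of how a controller that does have a full-frame buffer might split it into odd and even fields. The buffer layout and the helper name are hypothetical, just for illustration:

#include <string.h>

#define ROWS 582
#define COLS 512

/* Hypothetical helper: copy every other row of a full-frame buffer
 * into a field buffer. parity = 0 -> even rows, parity = 1 -> odd. */
static void extract_field(const unsigned char frame[ROWS][COLS],
                          unsigned char field[ROWS / 2][COLS],
                          int parity)
{
    int out = 0;
    for (int r = parity; r < ROWS; r += 2)
        memcpy(field[out++], frame[r], COLS);
}

A controller with no frame memory would instead have to transfer each field off the sensor as it is exposed, which is why the shutter can then only apply per field.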
The issue I am trying to determine is whether or not we are seeing the 'blurring' that occurs because it takes time to read all rows - ODDs first, then EVENs - or whether what we have is an interlace error, where the time between fields increases to the point where any moving object shows these lines.
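One way to separate the two effects is to compare how far a moving object travels during the exposure itself versus during the gap between the odd and even fields. This sketch uses made-up numbers (the object speed in pixels/second is purely my assumption) just to show the orders of magnitude:

#include <stdio.h>

int main(void)
{
    /* Assumed object motion -- pick a value that matches your scene. */
    const double speed_px_per_s = 1000.0;         /* object speed      */
    const double shutter_s      = 1.0 / 100000.0; /* fast shutter      */
    const double field_gap_s    = 1.0 / 60.0;     /* NTSC field period */

    /* Smear within one exposure vs. offset between the two fields. */
    printf("Blur within one field: %.3f px\n", speed_px_per_s * shutter_s);
    printf("Offset between fields: %.1f px\n", speed_px_per_s * field_gap_s);
    return 0;
}

With a fast shutter the smear within a single field is negligible (hundredths of a pixel here), but the two fields are still exposed roughly 1/60 second apart, so a fast-moving edge would show the characteristic comb of lines regardless.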
I would think that the CCD camera manufacturer would attempt to keep this number down; otherwise the camera would look more like a CMOS camera. So if the camera has a shutter speed of 1/100,000 second but only for a single field, then any moving object would blur and it would appear to not work very well.
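For contrast with the CMOS case: a rolling shutter exposes each row at a slightly different time, so a vertical edge moving horizontally skews across the frame. A quick sketch of that per-row skew, assuming (my assumption) the whole frame is read out over one field period and the same object speed as above:

#include <stdio.h>

int main(void)
{
    const int    rows           = 582;
    const double readout_s      = 1.0 / 60.0; /* assumed full readout time  */
    const double speed_px_per_s = 1000.0;     /* same object speed as above */

    double per_row_s = readout_s / rows;
    /* Total horizontal skew of a vertical edge, top row to bottom row. */
    printf("Rolling-shutter skew: %.1f px across the frame\n",
           speed_px_per_s * per_row_s * (rows - 1));
    return 0;
}

That comes out around 17 px of skew for these numbers, which is the sort of artifact the CCD would start to resemble if its row readout time were allowed to grow.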
-Scott