
Background


Video footage for forensic purposes is normally captured by surveillance cameras, but the objects of interest are often blurred, making identification difficult and tediously time-consuming. The blurring is usually caused by relative motion between the camera and the object of interest. This effect, commonly known as motion blur, is particularly apparent when objects move at relatively high speed and when the exposure time of the shutter is relatively long in low-lighting environments. Although a high-frame-rate camera could be employed, which substantially shortens the exposure time and hence reduces the blurring artifacts caused by object motion, this is not a practical or cost-effective solution for actual deployment, as high-frame-rate cameras are far more expensive than normal surveillance cameras. On the other hand, there are techniques to remove motion blur from an image [2], but they normally rely on restrictive assumptions, such as an identical motion model for all objects in the scene, or objects undergoing unidirectional, constant motion. In view of this, we propose to develop motion deblurring techniques that estimate an accurate motion model for each moving object from a number of video frames, and perform object-based deblurring based on the estimated motion model to improve the clarity of the objects of interest in each video frame. Furthermore, since we can enhance the clarity of the same object over a number of video frames, we can further enhance it using super-resolution techniques [1].
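To make the underlying blur model concrete, the following minimal sketch (Python with NumPy, purely illustrative and not the project's implementation; the names motion_psf and wiener_deblur are our own) assumes a single straight-line motion point-spread function and removes it with frequency-domain Wiener deconvolution. The object-based approach proposed here would instead use a separately estimated, spatially varying kernel for each moving object.

# Sketch (assumption): blur modelled as b = s convolved with a linear-motion
# PSF plus noise, removed with a Wiener filter. A per-object, spatially
# varying kernel would replace the single global kernel used here.
import numpy as np

def motion_psf(length, angle_deg, size=31):
    """PSF of a straight-line motion of the given length (pixels, int) and angle."""
    psf = np.zeros((size, size), dtype=np.float64)
    c = size // 2
    a = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2.0, length / 2.0, 4 * length):
        x = int(round(c + t * np.cos(a)))
        y = int(round(c + t * np.sin(a)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()

def wiener_deblur(blurred, psf, k=0.01):
    """Frequency-domain Wiener filter: S ~ conj(H) / (|H|^2 + k) * B."""
    H = np.fft.fft2(psf, s=blurred.shape)   # PSF zero-padded at the top-left
    B = np.fft.fft2(blurred)
    S = np.conj(H) / (np.abs(H) ** 2 + k) * B
    est = np.real(np.fft.ifft2(S))
    # The PSF's centre sits at (size//2, size//2) of the padded array,
    # which offsets the estimate; roll it back into place.
    c = psf.shape[0] // 2
    return np.roll(est, shift=(c, c), axis=(0, 1))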


Figure 1: Conceptual workflow of the proposed object-based motion deblurring solution

Figure 1 presents a conceptual diagram of our proposed solution. First, we take consecutive frames from the video footage as input and track the objects of interest to establish correspondences across multiple frames. Based on these correspondences, we then fit a parameterized motion model for each object of interest. This step differs from existing approaches [1-3] in that it does not assume the object undergoes a constant, isotropic motion. Essentially, each object can have its own motion field describing its own motion. Furthermore, existing approaches assume that all pixels of an object share the same motion vector, which is not necessarily the case in practical environments, where an object may undergo translational as well as rotational motion.
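As an illustration of this tracking-and-fitting step, the sketch below is a hypothetical formulation (the function name fit_object_motion and the assumption that each object's bounding box comes from a separate tracker are ours): it uses OpenCV's pyramidal Lucas-Kanade tracker and robust affine estimation to fit one parameterized motion model, covering translation and rotation, per object between two consecutive frames.

# Sketch only: fit a per-object affine motion model (translation + rotation +
# scale/shear) between two consecutive grayscale (uint8) frames, assuming the
# object's bounding box in the first frame is already known from a tracker.
import cv2
import numpy as np

def fit_object_motion(prev_gray, next_gray, bbox):
    """Return a 2x3 affine matrix describing the object's motion, or None."""
    x, y, w, h = bbox
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255

    # Track corner features belonging to this object only.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=5, mask=mask)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    good_prev = pts[status.ravel() == 1]
    good_next = nxt[status.ravel() == 1]
    if len(good_prev) < 3:
        return None

    # Robustly fit one parameterized motion model for this object.
    affine, _inliers = cv2.estimateAffine2D(good_prev, good_next,
                                            method=cv2.RANSAC,
                                            ransacReprojThreshold=1.0)
    return affine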

Figure 2 illustrates such a motion model. Apart from forward motion, objects in the scene may also rotate, so the motion of each pixel on the object is better described by multiple motions rather than by a single constant, isotropic motion. With this more accurate motion model, more relevant motion deblurring can then be performed on each snapshot of the object in the input frames. Finally, a super-resolution technique is applied to the deblurred snapshots to further enhance the clarity of the object of interest.


Figure 2: Illustration of multiple motion model
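To see why a single motion vector is insufficient, the short sketch below (again an assumed, illustrative formulation; per_pixel_motion is our own name) expands a per-object affine model into a dense motion field: when the affine matrix contains a rotational component, every pixel of the object receives a different displacement.

# Sketch: with a 2x3 affine model A, the displacement of pixel (x, y) is
# A @ [x, y, 1]^T - [x, y]^T, so a rotating object gets a different motion
# vector at every pixel rather than one constant vector.
import numpy as np

def per_pixel_motion(affine, height, width):
    """Dense motion field (dx, dy) implied by a 2x3 affine matrix."""
    ys, xs = np.mgrid[0:height, 0:width]
    ones = np.ones_like(xs)
    coords = np.stack([xs, ys, ones], axis=-1).astype(np.float64)  # H x W x 3
    moved = coords @ affine.T                                      # H x W x 2
    dx = moved[..., 0] - xs
    dy = moved[..., 1] - ys
    return dx, dy

In such a scheme, the magnitude and direction of (dx, dy) at each pixel would set the length and orientation of that pixel's blur kernel, and the same per-object motion models would provide the sub-pixel registration of the deblurred snapshots needed by the subsequent super-resolution step.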

[1] S. C. Park, M. K. Park, and M. G. Kang, "Super-resolution image reconstruction: a technical overview", IEEE Signal Processing Magazine, 20(3), pp. 21-36, 2003.

[2] S. K. Nayar and M. Ben-Ezra, "Motion-based motion deblurring", IEEE Trans. on PAMI, 26(6), pp. 689-698, 2004.

[3] A. Nagy and Z. Vamossy, "OpenCV C# wrapper based video enhancement using different optical flow methods in the super-resolution", in Proc. 6th International Symposium on Intelligent Systems and Informatics, pp. 1-6, 2008.

[4] R. H. Y. Chung, F. Y. L. Chin, K.-Y. K. Wong, K. P. Chow, T. Luo, and H. S. K. Fung, "Efficient Block-based Motion Segmentation Method using Motion Vector Consistency", in Proc. IAPR Conference on Machine Vision Applications (MVA2005), Tsukuba, Japan, pp. 550-553, 2005.

[5] X. C. He, N. H. C. Yung, K. P. Chow, F. Y. L. Chin, R. H. Y. Chung, K. Y. K. Wong, and K. S. H. Tsang, "Watershed Segmentation with Boundary Curvature Ratio Based Merging Criterion", in Proc. Ninth IASTED International Conference on Signal and Image Processing, 576(178), Hawaii, USA, August 2007.

[6] S. C. Yuk, K. K. Y. Wong, H. Y. Chung, K. P. Chow, F. Y. L. Chin, and K. S. H. Tsang, "Object-Based Surveillance Video Retrieval System With Real-Time Indexing Methodology", in Proc. International Conference on Image Analysis and Recognition, Montreal, Canada, pp. 626-637, 2007.

[7] H. Y. Chung, K. K. Y. Wong, F. Y. L. Chin, K. P. Chow, and S. C. Yuk, "Generalized Motion and Edge Adaptive Interpolation De-interlacing Algorithm", WSEAS Transactions on Computers, 5(11), pp. 2544-2551, 2006.

[8] K.-Y. K. Wong, F. Y. L. Chin, R. H. Y. Chung, K. P. Chow, and S. C. Yuk, "Motion and Edge Adaptive Interpolation De-Interlacing Algorithm", in Proc. 10th WSEAS Int. Conf. on Computers, Greece, 2006.

[9] S. Zhong, F. Chin, Y. S. Cheung, and D. Kwan, "Hierarchical motion estimation based on visual patterns for video coding", in Proc. IEEE ICASSP '96, pp. 2325-2328, 1996.

[10] B. W.-S. Yiu, K.-Y. K. Wong, F. Y. L. Chin, and R. H. Y. Chung, "Explicit Contour Model for Vehicle Tracking with Automatic Hypothesis Verification", to appear in Proc. International Conference on Image Processing (ICIP05).

[11] A. W. K. So, K. K. Y. Wong, R. H. Y. Chung, and F. Y. L. Chin, "Shadow Detection for Vehicles by Locating the Object-Shadow Boundary", to appear in Proc. IASTED Conference on Signal and Image Processing (SIP 2005).

[12] R. H. Y. Chung, N. H. C. Yung, and P. Y. S. Cheung, "An Efficient Parameter-less Quadrilateral-Based Image Segmentation Method", to appear in IEEE Trans. PAMI.

[13] S.-F. Wong and K.-Y. K. Wong, "Reliable and fast human body tracking under information deficiency", in Proc. IEEE Intelligent Automation Conference, pp. 491-498, Hong Kong, China, December 2003.

[14] S.-F. Wong and K.-Y. K. Wong, "Fast and reliable recognition of human motion from motion trajectories using wavelet analysis", in Proc. 1st IFIP International Conference on Artificial Intelligence Applications and Innovations, Toulouse, France, August 2004.

[15] S.-F. Wong and K.-Y. K. Wong, "Fast face detection using quadtree based color analysis and support vector verification", in Proc. International Conference on Image Analysis and Recognition, pp. 676-683, Porto, Portugal, September 2004.

[16] S.-F. Wong and K.-Y. K. Wong, "Robust image segmentation by texture sensitive snake under low contrast environment", in Proc. International Conference on Informatics in Control, Automation and Robotics, pp. 430-434, Setubal, Portugal, August 2004.

[17] X. C. He and N. H. C. Yung, "Curvature Scale Space Corner Detector with Adaptive Threshold and Dynamic Region of Support", in Proc. 17th International Conference on Pattern Recognition, 2:791-794, August 2004.

[18] X. C. He and N. H. C. Yung, "Corner detector based on global and local curvature properties", Optical Engineering, 47(5), 057008, 2008.

[19] X. C. He and N. H. C. Yung, "A New Method for Solving Ill-Condition in Vanishing Point Based Camera Calibration", Optical Engineering, 46(3), 037202, 2007.

[20] C. Harris, "Determination of ego-motion from matched points", in Proc. Alvey Vision Conf., Cambridge, UK, 1987.

[21] G. S. K. Fung, N. H. C. Yung, and G. K. H. Pang, "Camera calibration from road lane markings", Optical Engineering, 42(10), pp. 2967-2977, 2003.