3 research outputs found

    Automated video processing and scene understanding for intelligent video surveillance

    Title from PDF of title page (University of Missouri--Columbia, viewed on December 7, 2010). The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Dissertation advisor: Dr. Zhihai He. Vita. Ph.D. University of Missouri--Columbia, 2010.

    Recent advances in key technologies have enabled the deployment of surveillance video cameras on various platforms. There is an urgent need for advanced computational methods and tools for automated video processing and scene understanding to support these applications. In this dissertation, we concentrate our efforts on four tightly coupled tasks:

    Aerial video registration and moving object detection. We develop fast and reliable global camera motion estimation and video registration for aerial video surveillance.

    3-D change detection from moving cameras. Based on multi-scale patterns, we construct a hierarchy of image patch descriptors and detect changes in the video scene using multi-scale information fusion.

    Cross-view building matching and retrieval from aerial surveillance videos. Our central idea is to identify and match buildings between camera views. We construct a semantically rich sketch-based representation of buildings that is invariant under large scale and perspective changes.

    Collaborative video compression for a UAV surveillance network. Based on distributed video coding, we develop a collaborative video compression scheme for a UAV surveillance network.

    Our extensive experimental results demonstrate that the developed suite of tools for automated video processing and scene understanding is efficient and promising for surveillance applications. Includes bibliographical references.
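    The first task, global camera motion estimation for video registration, is commonly cast as robust model fitting over point correspondences between frames. As an illustration only (not the dissertation's actual algorithm), the sketch below fits a 6-parameter affine global-motion model with RANSAC so that independently moving objects are rejected as outliers; all function names and parameter values are hypothetical.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit: dst ≈ src @ A.T + t."""
    X = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) design matrix
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2) stacked [A.T; t]
    return P[:2].T, P[2]

def estimate_global_motion_ransac(src, dst, n_iters=200, thresh=2.0, seed=0):
    """Fit an affine global-motion model dst ≈ src @ A.T + t with RANSAC,
    rejecting outlier correspondences (e.g. points on moving objects).

    src, dst: (N, 2) arrays of matched point coordinates.
    Returns (A, t, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(n, size=3, replace=False)  # 3 points fix an affine map
        A, t = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(src @ A.T + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the full inlier set for the final estimate.
    A, t = fit_affine(src[best_inliers], dst[best_inliers])
    return A, t, best_inliers
```

    Once (A, t) is known, every frame can be warped into a common coordinate frame, and residual motion after warping indicates moving objects.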

    Reliability Analysis for Global Motion Estimation

    Digital Object Identifier: 10.1109/LSP.2009.2028101

    Global motion estimation (GME) is the enabling step for many important video exploitation tasks. In this work, we focus on indirect GME methods, which have low computational complexity. Typically, an indirect GME method has two major steps. The first step finds point correspondences between frames through local motion search or feature matching. The second step determines global motion parameters using optimal model fitting, such as least mean-squared error (LMSE) fitting or RANSAC. However, due to image noise and inherent ambiguity in point correspondence, local motion estimation often suffers from relatively large errors, which degrade the performance and reliability of GME. In this work, we propose a method to characterize the reliability of local motion estimation results and use this reliability measure as a weighting factor to determine the importance of each local motion estimate during global motion estimation. Our simulation results demonstrate that the proposed scheme significantly improves the accuracy and robustness of global motion estimation with very small computational overhead.

    This work was supported in part by the National Institutes of Health under Grant 5R21AG026412.
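    The reliability-weighting idea can be sketched in a few lines: fit a provisional global-motion model, derive a per-correspondence weight from it, then refit with weighted least squares so unreliable local motion estimates contribute less. The residual-based weight below is a hypothetical stand-in for the paper's actual reliability measure, which is not reproduced here.

```python
import numpy as np

def weighted_global_motion(src, dst, weights):
    """Weighted least-squares fit of an affine global-motion model
    dst ≈ src @ A.T + t, where each point correspondence contributes
    in proportion to its reliability weight."""
    w = np.sqrt(np.asarray(weights, dtype=float))[:, None]  # row scaling
    X = np.hstack([src, np.ones((len(src), 1))])
    P, *_ = np.linalg.lstsq(w * X, w * dst, rcond=None)
    return P[:2].T, P[2]

def residual_reliability(src, dst, A, t, sigma=2.0):
    """One simple reliability proxy (hypothetical choice): down-weight
    correspondences whose residual under a provisional model is large."""
    err = np.linalg.norm(src @ A.T + t - dst, axis=1)
    return np.exp(-(err / sigma) ** 2)
```

    A typical use is one unweighted pass to get a provisional (A, t), one call to the reliability function, and a final weighted refit; the extra cost is a single additional least-squares solve.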

    Activity Analysis, Summarization, and Visualization for Indoor Human Activity Monitoring

    DOI: 10.1109/TCSVT.2008.2005612

    In this work, we study how continuous video monitoring and intelligent video processing can be used in eldercare to assist the independent living of elders and to improve the efficiency of eldercare practice. More specifically, we develop automated activity analysis and summarization for eldercare video monitoring. At the object level, we construct an advanced silhouette extraction, human detection, and tracking algorithm for indoor environments. At the feature level, we develop an adaptive learning method to estimate the physical location and moving speed of a person from a single camera view without calibration. At the action level, we explore hierarchical decision tree and dimension reduction methods for human action recognition, and we extract important ADL (activities of daily living) statistics for automated functional assessment. To test and evaluate the proposed algorithms and methods, we deployed the camera system in a real living environment for about a month and collected more than 200 hours (over 600 GB) of activity monitoring video. Our extensive tests over these massive video datasets demonstrate that the proposed automated activity analysis system is very efficient.

    This work was supported in part by the National Institutes of Health under Grant 5R21AG026412.
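    At the object level, silhouette extraction in a fixed indoor camera typically starts from background subtraction. The toy sketch below (a minimal stand-in, not the paper's advanced extractor) thresholds the difference against a running-average background model; the threshold and learning rate are assumed values.

```python
import numpy as np

def extract_silhouette(frame, background, thresh=25):
    """Mark as foreground any pixel differing from the background
    model by more than `thresh` intensity levels."""
    diff = np.abs(frame.astype(np.float64) - background.astype(np.float64))
    return diff > thresh

def update_background(background, frame, alpha=0.05):
    """Running-average background update: slowly absorb scene changes
    (lighting drift) while a moving person stays foreground."""
    return (1 - alpha) * background + alpha * frame
```

    In practice the binary mask would be cleaned with morphological filtering before feeding the tracking and action-recognition stages described above.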