
    Automated video processing and scene understanding for intelligent video surveillance

    Title from PDF of title page (University of Missouri--Columbia, viewed on December 7, 2010). The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Dissertation advisor: Dr. Zhihai He. Vita. Ph.D., University of Missouri--Columbia, 2010.

    Recent advances in key technologies have enabled the deployment of surveillance video cameras on various platforms, creating an urgent need for advanced computational methods and tools for automated video processing and scene understanding. In this dissertation, we concentrate our efforts on the following four tightly coupled tasks:

    1. Aerial video registration and moving object detection. We develop fast and reliable global camera motion estimation and video registration for aerial video surveillance.
    2. 3-D change detection from moving cameras. Based on multi-scale patterns, we construct a hierarchy of image patch descriptors and detect changes in the video scene using multi-scale information fusion.
    3. Cross-view building matching and retrieval from aerial surveillance videos. Our central idea is to identify and match buildings between camera views. We construct a semantically rich sketch-based representation for buildings which is invariant under large scale and perspective changes.
    4. Collaborative video compression for a UAV surveillance network. Based on distributed video coding, we develop a collaborative video compression scheme for a UAV surveillance network.

    Our extensive experimental results demonstrate that the developed suite of tools for automated video processing and scene understanding is efficient and promising for surveillance applications. Includes bibliographical references.
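    The global camera motion estimation in the first task can be illustrated with a simple least-squares fit of a motion model to feature correspondences. Below is a minimal sketch in plain Python fitting a 2x3 affine model; the function names and the choice of an affine (rather than a full projective) model are illustrative assumptions, not the thesis's actual method.

```python
def solve_3x3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for k in range(3):
        Ak = [row[:] for row in A]          # replace column k with b
        for i in range(3):
            Ak[i][k] = b[i]
        xs.append(det(Ak) / d)
    return xs

def fit_affine(src, dst):
    """Least-squares fit of a 2x3 affine global-motion model
        x' = a*x + b*y + tx,   y' = c*x + d*y + ty
    from point correspondences src[i] -> dst[i] (hypothetical
    stand-in for a robust global motion estimator)."""
    # Accumulate the normal equations A^T A p = A^T b; the design
    # row for a point (x, y) is [x, y, 1], shared by both outputs.
    ATA = [[0.0] * 3 for _ in range(3)]
    ATbx = [0.0] * 3
    ATby = [0.0] * 3
    for (x, y), (xp, yp) in zip(src, dst):
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                ATA[i][j] += row[i] * row[j]
            ATbx[i] += row[i] * xp
            ATby[i] += row[i] * yp
    return solve_3x3(ATA, ATbx), solve_3x3(ATA, ATby)  # (a,b,tx), (c,d,ty)
```

    With the affine parameters in hand, each frame can be warped into a common reference frame (video registration), and pixels that disagree with the registered background become candidate moving objects.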

    View Registration Using Interesting Segments of Planar Trajectories

    We introduce a method for recovering the spatial and temporal alignment between two or more views of objects moving over a ground plane. Existing approaches either assume that the streams are globally synchronized, so that only the spatial alignment needs to be solved, or that the temporal misalignment is small enough for exhaustive search to be performed. In contrast, our approach can recover both the spatial and the temporal alignment. We compute for each trajectory a number of interesting segments, and we use their descriptions to form putative matches between trajectories. Each pair of corresponding interesting segments induces a temporal alignment and defines an interval of common support across two views of an object, which is used to recover the spatial alignment. Interesting segments and their descriptors are defined using algebraic projective invariants measured along the trajectories. Similarity between interesting segments is computed taking into account the statistics of such invariants. Candidate alignment parameters are verified by checking the consistency, in terms of the symmetric transfer error, of all the putative pairs of corresponding interesting segments. Experiments are conducted with two different sets of data: one with two views of an outdoor scene featuring moving people and cars, and one with four views of a laboratory sequence featuring moving radio-controlled cars.
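    The symmetric transfer error used to verify candidate alignments can be sketched as follows, assuming the spatial alignment between the two ground planes is a 3x3 homography H; the helper names are hypothetical, and the abstract does not specify the exact error form beyond its name.

```python
def apply_h(H, p):
    """Apply a 3x3 homography to a 2-D point (homogeneous divide)."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def sq_dist(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def inv_3x3(H):
    """Inverse of a 3x3 matrix via the adjugate (cofactor transpose)."""
    c = [[H[(i + 1) % 3][(j + 1) % 3] * H[(i + 2) % 3][(j + 2) % 3]
        - H[(i + 1) % 3][(j + 2) % 3] * H[(i + 2) % 3][(j + 1) % 3]
          for j in range(3)] for i in range(3)]
    det = sum(H[0][j] * c[0][j] for j in range(3))
    return [[c[j][i] / det for j in range(3)] for i in range(3)]

def symmetric_transfer_error(H, pts1, pts2):
    """Sum of squared transfer distances in BOTH images for
    corresponding points pts1[i] <-> pts2[i]: forward through H
    and backward through H^-1."""
    Hinv = inv_3x3(H)
    err = 0.0
    for p, q in zip(pts1, pts2):
        err += sq_dist(apply_h(H, p), q) + sq_dist(apply_h(Hinv, q), p)
    return err
```

    A low error over all putative pairs of corresponding interesting segments supports the candidate spatial-temporal alignment; a high error rejects it.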