11,585 research outputs found

    In-Band Disparity Compensation for Multiview Image Compression and View Synthesis


    Selected topics in video coding and computer vision

    Video applications ranging from multimedia communication to computer vision have been extensively studied in the past decades. However, the emergence of new applications continues to raise questions that are only partially answered by existing techniques. This thesis studies three selected topics related to video: intra prediction in block-based video coding, pedestrian detection and tracking in infrared imagery, and multi-view video alignment.

    In the state-of-the-art video coding standard H.264/AVC, intra prediction is defined on a hierarchical quad-tree based block partitioning structure, which fails to exploit the geometric constraints of edges. We propose a geometry-adaptive block partitioning structure and a new intra prediction algorithm named geometry-adaptive intra prediction (GAIP). A new texture prediction algorithm named geometry-adaptive intra displacement prediction (GAIDP) is also developed by extending the original intra displacement prediction (IDP) algorithm with the geometry-adaptive block partitions. Simulations on various test sequences demonstrate that the intra coding performance of H.264/AVC can be significantly improved by incorporating the proposed geometry-adaptive algorithms.

    In recent years, due to the decreasing cost of thermal sensors, pedestrian detection and tracking in infrared imagery has become a topic of interest for night vision and all-weather surveillance applications. We propose a novel approach for detecting and tracking pedestrians in infrared imagery based on a layered representation of infrared images. Pedestrians are detected from the foreground layer by a Principal Component Analysis (PCA) based scheme using the appearance cue. To facilitate the task of pedestrian tracking, we formulate the problem of shot segmentation and present a graph matching-based tracking algorithm. Simulations with both the OSU Infrared Image Database and the WVU Infrared Video Database are reported to demonstrate the accuracy and robustness of our algorithms.

    Multi-view video alignment is the process of registering non-synchronized multi-view video sequences to facilitate their fusion for various applications, including automatic video-based surveillance and video metrology. In this thesis, we propose an accurate multi-view video alignment algorithm that iteratively aligns two sequences in space and time. To achieve accurate sub-frame temporal alignment, we generalize the existing phase-correlation algorithm to the 3-D case. We also present a novel method to obtain the ground truth of the temporal alignment by using supplementary audio signals sampled at a much higher rate. The accuracy of our algorithm is verified by simulations using real-world sequences.
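    The last contribution above rests on phase correlation extended to three dimensions for spatio-temporal alignment. The sketch below is a minimal illustrative reconstruction of 3-D phase correlation on two video volumes, assuming integer shifts, equal volume sizes, and NumPy FFTs; the sub-frame refinement and the iterative space-time alignment described in the abstract are not shown, and the function name is hypothetical.

```python
# Minimal sketch of 3-D phase correlation for spatio-temporal alignment.
# Illustrative only: estimates an integer (dt, dy, dx) offset between two
# video volumes; the thesis's sub-frame refinement is not reproduced here.
import numpy as np

def phase_correlation_3d(vol_a, vol_b, eps=1e-12):
    """Estimate the integer (dt, dy, dx) offset between two video volumes.

    vol_a, vol_b: ndarrays of identical shape (frames, height, width).
    """
    Fa = np.fft.fftn(vol_a)
    Fb = np.fft.fftn(vol_b)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + eps
    correlation = np.fft.ifftn(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Map peak indices to signed shifts (the FFT wraps around).
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, vol_a.shape)]
    return tuple(shifts)
```

    The location of the peak in the inverse-transformed cross-power spectrum gives the relative (frame, row, column) displacement between the two volumes.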

    Vector Quantization Video Encoder Using Hierarchical Cache Memory Scheme

    A system compresses image blocks via successive hierarchical stages and motion encoders which employ caches updated by stack replacement algorithms. Initially, a background detector compares the present image block with a corresponding previously encoded image block and, if they are similar, terminates the encoding procedure by setting a flag bit. Otherwise, the image block is decomposed into smaller present image subblocks. The smaller present image subblocks are each compared with a corresponding previously encoded image subblock of comparable size within the present image block. When a present image subblock is similar to a corresponding previously encoded image subblock, the procedure is terminated by setting a flag bit. Alternatively, the present image subblock is forwarded to a motion encoder where it is compared with displaced image subblocks, which are formed by displacing previously encoded image subblocks by motion vectors that are stored in a cache, to derive a first distortion vector. When the first distortion vector is below a first threshold TM, the procedure is terminated and the present image subblock is encoded by setting a flag bit and a cache index corresponding to the first distortion vector. Alternatively, the present image subblock is passed to a block matching encoder where it is compared with other previously encoded image subblocks to derive a second distortion vector. When the second distortion vector is below a second threshold Tm, the procedure is terminated by setting a flag bit, by generating the second distortion vector, and by updating the cache.
    Georgia Tech Research Corporation
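    As a rough software analogue of this decision cascade, the sketch below walks one subblock through the background check, the cache-based motion stage, and the block-matching fallback. The distortion measure (SAD), the threshold values, the wrap-around displacement, and all function names are assumptions made for illustration; the patented design is a hardware encoder and is not reproduced here.

```python
# Illustrative sketch of the hierarchical encoding decision described above.
# Names, thresholds, and the SAD distortion measure are assumptions; the
# patented system operates on hardware caches, not Python lists.
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences, used here as the distortion measure."""
    return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()

def encode_subblock(cur, prev, motion_cache, T_background=64, T_M=128, T_m=256):
    """Return a symbolic encoding decision for one image subblock.

    cur, prev    : current and previously encoded subblocks (same shape).
    motion_cache : list of cached motion vectors, most recently used first.
    """
    # Stage 1: background detector - is the co-located block already similar?
    if sad(cur, prev) < T_background:
        return ("background_flag",)

    # Stage 2: motion encoder - try motion vectors stored in the cache.
    for index, (dy, dx) in enumerate(motion_cache):
        candidate = np.roll(prev, (dy, dx), axis=(0, 1))  # displaced subblock (wrap-around for brevity)
        if sad(cur, candidate) < T_M:
            # Hit: emit a flag bit plus the cache index; promote the entry
            # (stack replacement keeps recently used vectors near the top).
            motion_cache.insert(0, motion_cache.pop(index))
            return ("cache_hit", index)

    # Stage 3: fall back to block matching against other encoded subblocks
    # (the full search itself is omitted); the winning vector updates the cache.
    best_vector = (0, 0)  # placeholder for the block-matching result
    motion_cache.insert(0, best_vector)
    if len(motion_cache) > 1:
        motion_cache.pop()  # keep the cache size constant
    return ("block_match", best_vector)
```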

    Motion Estimation using Block Matching Algorithm

    With the recent advances in video technology, there is an increasing need for a more reliable, efficient and robust generic framework for video processing and analysis. In this regard, motion estimation has for many years been a demanding area of research because of its diverse use in real-time applications. Motion estimation using block matching algorithms is employed in many video processing applications. This paper presents a review of motion estimation based on block matching algorithms and also includes an analytical study of fixed and variable block matching algorithms.
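    As a concrete baseline for the algorithms surveyed, the following is a minimal full-search block matching sketch, assuming a sum-of-absolute-differences (SAD) cost, square blocks, and an integer-pixel search window; the fast and variable block-size methods reviewed in the paper refine this same idea.

```python
# Minimal full-search block matching sketch (SAD cost, integer-pixel search).
import numpy as np

def full_search(cur_frame, ref_frame, block_size=16, search_range=7):
    """Return a (rows, cols, 2) array of (dy, dx) motion vectors for cur_frame."""
    H, W = cur_frame.shape
    rows, cols = H // block_size, W // block_size
    motion = np.zeros((rows, cols, 2), dtype=np.int32)

    for by in range(rows):
        for bx in range(cols):
            y0, x0 = by * block_size, bx * block_size
            cur = cur_frame[y0:y0 + block_size, x0:x0 + block_size].astype(np.int32)
            best_cost, best_mv = np.inf, (0, 0)
            # Exhaustively test every candidate displacement in the window.
            for dy in range(-search_range, search_range + 1):
                for dx in range(-search_range, search_range + 1):
                    ry, rx = y0 + dy, x0 + dx
                    if ry < 0 or rx < 0 or ry + block_size > H or rx + block_size > W:
                        continue  # candidate block falls outside the reference frame
                    ref = ref_frame[ry:ry + block_size, rx:rx + block_size].astype(np.int32)
                    cost = np.abs(cur - ref).sum()  # SAD matching criterion
                    if cost < best_cost:
                        best_cost, best_mv = cost, (dy, dx)
            motion[by, bx] = best_mv
    return motion
```

    Fast block matching algorithms (three-step search, diamond search, and similar) reduce the number of candidate displacements examined per block while keeping this same matching criterion.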

    Accelerated hardware video object segmentation: From foreground detection to connected components labelling

    This is the preprint version of the article. Copyright © 2010 Elsevier.
    This paper demonstrates the use of a single-chip FPGA for the segmentation of moving objects in a video sequence. The system maintains highly accurate background models and integrates the detection of foreground pixels with the labelling of objects using a connected components algorithm. The background models are based on 24-bit RGB values and 8-bit grayscale intensity values. A multimodal background differencing algorithm is presented, using a single FPGA chip and four blocks of RAM. The real-time connected component labelling algorithm, also designed for FPGA implementation, run-length encodes the output of the background subtraction and performs connected component analysis on this representation. The run-length encoding, together with other parts of the algorithm, is performed in parallel; sequential operations are minimized because the number of run-lengths is typically smaller than the number of pixels. The two algorithms are pipelined together for maximum efficiency.
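    To make the run-length approach concrete, the sketch below labels 8-connected components of a binary foreground mask by first run-length encoding each row and then merging overlapping runs with a union-find structure. This is only a sequential software analogue under those assumptions; the paper's streaming, parallel FPGA pipeline and its multimodal background model are not reproduced.

```python
# Run-length based connected components labelling (software analogue).
# The binary mask is assumed to come from a background differencing step.
import numpy as np

def label_runs(mask):
    """Label 8-connected foreground components of a binary mask via row runs."""
    mask = np.asarray(mask, dtype=bool)
    parent = []                               # union-find forest over run labels

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    runs, prev_row_runs = [], []
    for y, row in enumerate(mask):
        row_runs, x, W = [], 0, row.size
        while x < W:
            if not row[x]:
                x += 1
                continue
            start = x
            while x < W and row[x]:
                x += 1                        # run covers columns start .. x-1
            label = len(parent)
            parent.append(label)
            # Merge with previous-row runs that touch this one (8-connectivity
            # allows a one-pixel diagonal overlap at either end of the run).
            for _, ps, pe, plab in prev_row_runs:
                if ps <= x and pe >= start:
                    union(plab, label)
            row_runs.append((y, start, x, label))
        runs.extend(row_runs)
        prev_row_runs = row_runs

    # Resolve merged labels and paint them into an image for inspection.
    out, remap = np.zeros(mask.shape, dtype=np.int32), {}
    for y, s, e, lab in runs:
        root = find(lab)
        out[y, s:e] = remap.setdefault(root, len(remap) + 1)
    return out
```

    Because each row contributes only as many runs as there are foreground segments, the merge step touches far fewer items than a pixel-by-pixel pass, which is the property the hardware design exploits.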