
    A Fusion Framework for Camouflaged Moving Foreground Detection in the Wavelet Domain

    Detecting camouflaged moving foreground objects is known to be difficult because of the similarity between the foreground objects and the background. Conventional methods cannot distinguish the foreground from the background given the small differences between them, and thus suffer from under-detection of camouflaged foreground objects. In this paper, we present a fusion framework that addresses this problem in the wavelet domain. We first show that small differences in the image domain can be highlighted in certain wavelet bands. The likelihood of each wavelet coefficient being foreground is then estimated by formulating foreground and background models for each wavelet band. The proposed framework effectively aggregates the likelihoods from the different wavelet bands based on the characteristics of the wavelet transform. Experimental results demonstrate that the proposed method significantly outperforms existing methods in detecting camouflaged foreground objects: the average F-measure for the proposed algorithm was 0.87, compared to 0.71 to 0.80 for the other state-of-the-art methods.
    Comment: 13 pages, accepted by IEEE TI
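
As a rough illustration of the per-band likelihood idea, the sketch below is a minimal approximation, not the paper's model: it assumes a one-level Haar DWT, a fixed-variance Gaussian background model in every band, and plain averaging as the fusion rule; `sigma` and the function name are illustrative.

```python
# Per-band foreground likelihoods in the wavelet domain (minimal sketch).
import numpy as np
import pywt

def foreground_likelihood(frame, background, wavelet="haar", sigma=4.0):
    """Fuse per-band foreground evidence for one grayscale frame."""
    f = pywt.dwt2(frame.astype(np.float64), wavelet)
    b = pywt.dwt2(background.astype(np.float64), wavelet)
    # Flatten (cA, (cH, cV, cD)) into four coefficient arrays per image.
    f_coeffs, b_coeffs = [f[0], *f[1]], [b[0], *b[1]]
    likelihoods = []
    for fc, bc in zip(f_coeffs, b_coeffs):
        # Under a Gaussian background model, a large |fc - bc| makes the
        # coefficient unlikely to be background, hence likely foreground.
        likelihoods.append(1.0 - np.exp(-((fc - bc) ** 2) / (2.0 * sigma ** 2)))
    # Naive fusion: average the band maps, then upsample to the image size.
    fused = np.kron(np.mean(likelihoods, axis=0), np.ones((2, 2)))
    return fused[: frame.shape[0], : frame.shape[1]]
```

Thresholding the fused map (e.g. at 0.5) would give a binary foreground mask; the paper's actual aggregation weights the bands by the characteristics of the wavelet transform rather than averaging them.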

    3D Depth Reconstruction and Depth Refinement from a Focal Stack

    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, College of Engineering, February 2021. Advisor: Yeong-Gil Shin.

    Three-dimensional (3D) depth recovery from two-dimensional images is a fundamental and challenging objective in computer vision, and one of the most important prerequisites for many applications such as 3D measurement, robot localization and navigation, and self-driving. Depth-from-focus (DFF) is an important method for reconstructing 3D depth using focus information. Reconstructing depth in texture-less regions is a typical weakness of conventional DFF. Furthermore, it is difficult for conventional DFF reconstruction techniques to preserve depth edges and fine details while maintaining spatial consistency. In this dissertation, we address these problems and propose a DFF depth recovery framework that is robust in texture-less regions and can reconstruct a depth image with clear edges and fine details. The framework is composed of two processes: depth reconstruction and depth refinement.

    To recover an accurate 3D depth, we first formulate depth reconstruction as a maximum a posteriori (MAP) estimation problem with a matting Laplacian prior. The nonlocal principle is adopted in constructing the matting Laplacian matrix to preserve depth edges and fine details. Additionally, a depth-variance-based confidence measure, combined with a reliability measure of the focus measure, is proposed to maintain spatial smoothness, such that smooth regions of the initial depth receive high confidence values and the reconstructed depth is derived more strongly from the initial depth.

    Because the nonlocal principle breaks spatial consistency, the reconstructed depth image is spatially inconsistent and suffers from texture-copy artifacts. To smooth the noise and suppress the texture-copy artifacts introduced into the reconstructed depth image, we propose a closed-form edge-preserving depth refinement algorithm that formulates refinement as a MAP estimation problem using Markov random fields (MRFs). By incorporating pre-estimated depth edges and mutual structure information into the energy function, together with a specially designed smoothness weight, the proposed refinement method effectively suppresses noise and texture-copy artifacts while preserving depth edges. Moreover, by constructing an undirected weighted graph representing the energy function, a closed-form solution is obtained using the Laplacian matrix corresponding to the graph.

    The proposed framework presents a novel method of 3D depth recovery from a focal stack. It recovers depth well in texture-less regions owing to the effective variance-based confidence computation and the matting Laplacian prior, and it obtains a depth image with clear edges and fine details thanks to the nonlocal construction of the matting Laplacian matrix. The proposed closed-form depth refinement removes noise while preserving object structure through the use of common edges, and it effectively suppresses texture-copy artifacts by exploiting mutual structure information. The refinement approach also provides a general recipe for edge-preserving image smoothing, especially for depth-related refinement such as stereo vision.
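
As a concrete illustration of the reconstruction stage, the sketch below computes an initial depth map and a variance-based confidence map from a focal stack. It is a minimal approximation, not the dissertation's MAP framework: the modified-Laplacian focus measure, the normalization, and the function name are assumptions made here.

```python
# Minimal depth-from-focus sketch (not the dissertation's MAP framework):
# a modified-Laplacian focus measure per slice, a per-pixel argmax for the
# initial depth index, and a variance-based confidence map.
import numpy as np
from scipy.ndimage import convolve

def initial_depth_from_focus(stack):
    """stack: (num_slices, H, W) float array, one slice per focus setting."""
    kx = np.array([[0.0, 0.0, 0.0],
                   [-1.0, 2.0, -1.0],
                   [0.0, 0.0, 0.0]])
    focus = np.empty_like(stack)
    for i, img in enumerate(stack):
        # Modified Laplacian: |d2I/dx2| + |d2I/dy2|, a standard focus measure.
        focus[i] = np.abs(convolve(img, kx)) + np.abs(convolve(img, kx.T))
    depth = np.argmax(focus, axis=0)  # index of the best-focused slice
    # A flat focus profile signals a texture-less pixel: confidence is low
    # there, which is exactly where a prior has to fill in the depth.
    confidence = np.var(focus, axis=0)
    confidence /= confidence.max() + 1e-12
    return depth, confidence
```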
Both quantitative and qualitative experimental results show the superiority of the proposed method in terms of robustness in texture-less regions, accuracy, and the ability to preserve object structure while maintaining spatial smoothness.

Contents:
Chapter 1 Introduction: 1.1 Overview; 1.2 Motivation; 1.3 Contribution; 1.4 Organization
Chapter 2 Related Works: 2.1 Overview; 2.2 Principle of depth-from-focus (2.2.1 Focus measure operators); 2.3 Depth-from-focus reconstruction; 2.4 Edge-preserving image denoising
Chapter 3 Depth-from-Focus Reconstruction using Nonlocal Matting Laplacian Prior: 3.1 Overview; 3.2 Image matting and matting Laplacian; 3.3 Depth-from-focus; 3.4 Depth reconstruction (3.4.1 Problem statement; 3.4.2 Likelihood model; 3.4.3 Nonlocal matting Laplacian prior model); 3.5 Experimental results (3.5.1 Overview; 3.5.2 Data configuration; 3.5.3 Reconstruction results; 3.5.4 Comparison between reconstruction using local and nonlocal matting Laplacian; 3.5.5 Spatial consistency analysis; 3.5.6 Parameter setting and analysis); 3.6 Summary
Chapter 4 Closed-form MRF-based Depth Refinement: 4.1 Overview; 4.2 Problem statement; 4.3 Closed-form solution; 4.4 Edge preservation; 4.5 Texture-copy artifacts suppression; 4.6 Experimental results; 4.7 Summary
Chapter 5 Evaluation: 5.1 Overview; 5.2 Evaluation metrics; 5.3 Evaluation on synthetic datasets; 5.4 Evaluation on real scene datasets; 5.5 Limitations; 5.6 Computational performances
Chapter 6 Conclusion
Bibliography
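
The closed-form refinement summarized above reduces, in its simplest form, to solving one sparse linear system with a graph Laplacian. The sketch below follows that general recipe only: the inverse-squared-difference edge weights, `lam`, and `eps` are assumptions, and the dissertation's pre-estimated depth edges and mutual structure terms are not reproduced.

```python
# Hedged sketch of closed-form, Laplacian-based depth refinement: minimize
# lam*||x - d||^2 + x^T L x, which gives (lam*I + L) x = lam*d, with L the
# Laplacian of a 4-neighbour grid graph whose weights shrink across edges
# of a guidance image.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def refine_depth(depth, guide, lam=0.1, eps=1e-4):
    h, w = depth.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    # Horizontal and vertical neighbour pairs, as pairs of slice tuples.
    for s_a, s_b in (((slice(None), slice(None, -1)), (slice(None), slice(1, None))),
                     ((slice(None, -1), slice(None)), (slice(1, None), slice(None)))):
        a, b = idx[s_a].ravel(), idx[s_b].ravel()
        diff = (guide[s_a] - guide[s_b]).ravel()
        w_ab = 1.0 / (diff ** 2 + eps)  # weak smoothing across guide edges
        rows += [a, b]; cols += [b, a]; vals += [-w_ab, -w_ab]
    A = sp.coo_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(h * w, h * w)).tocsr()
    L = sp.diags(np.asarray(-A.sum(axis=1)).ravel()) + A  # L = D - W
    x = spsolve((L + lam * sp.eye(h * w)).tocsc(), lam * depth.ravel())
    return x.reshape(h, w)
```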

    Video content analysis for intelligent forensics

    The networks of surveillance cameras installed in public places and on private premises continuously record video data with the aim of detecting and preventing unlawful activities. This enhances the importance of video content analysis applications, whether for real-time (analytic) or post-event (forensic) analysis. This thesis focuses on four key aspects of video content analysis: (1) moving object detection and recognition; (2) correction of colours in video frames and recognition of the colours of moving objects; (3) make and model recognition of vehicles and identification of their type; and (4) detection and recognition of text information in outdoor scenes. To address the first issue, the first part of the thesis presents a framework that efficiently detects and recognizes moving objects in videos, targeting the problem of object detection in the presence of complex backgrounds. The detection part of the framework relies on a background modelling technique and a novel post-processing step in which the contours of the foreground regions (i.e. moving objects) are refined by classifying edge segments as belonging either to the background or to the foreground. Further, a novel feature descriptor is devised for classifying moving objects into humans, vehicles, and background; it captures the texture information present in the silhouettes of foreground objects. To address the second issue, a framework for the correction and recognition of the true colours of objects in videos is presented, with novel noise reduction, colour enhancement, and colour recognition stages. The colour recognition stage uses temporal information to reliably recognize the true colours of moving objects across multiple frames. The framework is specifically designed to perform robustly on videos of poor quality caused by surrounding illumination, camera sensor imperfections, and artefacts due to high compression. The third part of the thesis presents a framework for vehicle make and model recognition and type identification. As part of this work, a novel technique for the distinctive representation of vehicle images was developed; it uses dense feature description and a mid-level feature encoding scheme to capture the texture in the frontal view of vehicles, and it is insensitive to minor in-plane rotation and skew within the image. The framework can be extended to any number of vehicle classes without re-training. Another important contribution of this work is the publication of a comprehensive, up-to-date dataset of vehicle images to support future research in this domain. The problem of text detection and recognition in images is addressed in the last part of the thesis. A novel technique is proposed that exploits the colour information in the image to identify text regions; the colour information is also used to segment characters from words. The identified characters are recognized using shape features and supervised learning, and a lexicon-based alignment procedure finalizes the recognition of the strings present in word images. Extensive experiments have been conducted on benchmark datasets to analyse the performance of the proposed algorithms.
The results show that the proposed moving object detection and recognition technique surpassed well-known baseline techniques. The proposed framework for the correction and recognition of object colours in video frames achieved all of the aforementioned goals. The performance analysis of the vehicle make and model recognition framework on multiple datasets demonstrated the strength and reliability of the technique across various scenarios. Finally, the experimental results for the text detection and recognition framework on benchmark datasets reveal the potential of the proposed scheme for accurate detection and recognition of text in the wild.
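
As a hedged illustration of the first component, the snippet below sketches only the background-modelling detection stage, with OpenCV's MOG2 subtractor standing in for the thesis's background model; the edge-segment contour refinement and the object classifier are not reproduced, and the parameter values are assumptions.

```python
# Background-subtraction detection sketch (OpenCV 4.x). MOG2 stands in for
# the thesis's background model; contour refinement is not reproduced.
import cv2

def detect_moving_objects(video_path, min_area=500):
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Keep confident foreground (255), drop shadow pixels (127) and noise.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]
        yield frame, boxes
    cap.release()
```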

    Geo-rectification and cloud-cover correction of multi-temporal Earth observation imagery

    Over the past decades, improvements in remote sensing technology have led to a mass proliferation of aerial imagery. This, in turn, has opened vast new possibilities in land cover classification, cartography, and related fields. As applications in these fields became increasingly complex, the amount of data they required rose accordingly, and automated systems had to be developed to satisfy these new needs. Geometric distortions in raw imagery must be rectified, otherwise the high accuracy requirements of the newest applications cannot be attained. This dissertation proposes an automated solution for the pre-processing stages of multi-spectral satellite imagery classification, focusing on geo-rectification based on the Fourier shift theorem and on multi-temporal cloud-cover correction. By automating the first stages of image processing, automatic classifiers can draw on a larger supply of image data, eventually allowing for the creation of semi-real-time mapping applications.
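
Shift-theorem-based rectification is commonly realized as phase correlation: a translation in the image domain appears as a linear phase ramp in the frequency domain, so the inverse transform of the normalized cross-power spectrum peaks at the pixel offset. The sketch below is a minimal NumPy version of that standard formulation, not the dissertation's actual pipeline; it recovers integer-pixel shifts only.

```python
# Phase correlation via the Fourier shift theorem: shifting an image by
# (dy, dx) multiplies its spectrum by a phase ramp, so the normalized
# cross-power spectrum inverse-transforms to a peak at (dy, dx).
import numpy as np

def phase_correlation(ref, moving, eps=1e-12):
    """Return the integer (dy, dx) by which `moving` is translated
    relative to `ref`; both are 2-D arrays of the same shape."""
    cross = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Quick self-check with a synthetic shift:
rng = np.random.default_rng(0)
tile = rng.random((128, 128))
print(phase_correlation(tile, np.roll(tile, (7, -3), axis=(0, 1))))  # (7, -3)
```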