4,197 research outputs found

    Statistical model-based fusion of noisy multi-band images in the wavelet domain


    A Fusion Framework for Camouflaged Moving Foreground Detection in the Wavelet Domain

    Detecting camouflaged moving foreground objects has long been known to be difficult due to the similarity between the foreground objects and the background. Conventional methods cannot distinguish the foreground from the background because the differences between them are small, and thus suffer from under-detection of camouflaged foreground objects. In this paper, we present a fusion framework to address this problem in the wavelet domain. We first show that the small differences in the image domain can be highlighted in certain wavelet bands. Then the likelihood of each wavelet coefficient being foreground is estimated by formulating foreground and background models for each wavelet band. The proposed framework effectively aggregates the likelihoods from different wavelet bands based on the characteristics of the wavelet transform. Experimental results demonstrate that the proposed method significantly outperforms existing methods in detecting camouflaged foreground objects. Specifically, the average F-measure for the proposed algorithm was 0.87, compared to 0.71 to 0.80 for the other state-of-the-art methods. Comment: 13 pages, accepted by IEEE TI
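The per-band likelihood idea can be sketched with a single-level Haar transform and a Gaussian background model for each band. The transform choice, the likelihood form, and the fixed band noise level `sigma` are illustrative assumptions, not the paper's exact models:

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH) bands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0    # vertical averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0    # vertical differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def band_likelihoods(frame, background, sigma=2.0):
    """Foreground likelihood per wavelet band under a Gaussian background
    model; sigma is an assumed fixed band noise level."""
    return [1.0 - np.exp(-((f - b) ** 2) / (2.0 * sigma ** 2))
            for f, b in zip(haar_dwt2(frame), haar_dwt2(background))]

# A camouflaged square only 3 gray levels brighter than the background:
bg = np.full((8, 8), 100.0)
fr = bg.copy()
fr[2:6, 2:6] += 3.0
ll = band_likelihoods(fr, bg)   # likelihood is clearly elevated over the square
```

Even though a 3-gray-level difference is nearly invisible in the image domain, the per-band likelihood over the square region is well above the background level, which is the effect the abstract describes.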

    Multi-Modal Enhancement Techniques for Visibility Improvement of Digital Images

    Image enhancement techniques for visibility improvement of 8-bit color digital images based on spatial-domain, wavelet-transform-domain, and multiple-image-fusion approaches are investigated in this dissertation research. In the category of spatial-domain approaches, two enhancement algorithms are developed to deal with problems associated with images captured from scenes with high dynamic ranges. The first technique is based on an illuminance-reflectance (I-R) model of the scene irradiance. The dynamic range compression of the input image is achieved by a nonlinear transformation of the estimated illuminance based on a windowed inverse sigmoid transfer function. A single-scale, neighborhood-dependent contrast enhancement process is proposed to enhance the high-frequency components of the illuminance, which compensates for the contrast degradation of the mid-tone frequency components caused by dynamic range compression. The intensity image obtained by integrating the enhanced illuminance and the extracted reflectance is then converted to an RGB color image through linear color restoration utilizing the color components of the original image. The second technique, named AINDANE, is a two-step approach comprising adaptive luminance enhancement and adaptive contrast enhancement. An image-dependent nonlinear transfer function is designed for dynamic range compression, and a multiscale, image-dependent neighborhood approach is developed for contrast enhancement. Real-time processing of video streams is realized with the I-R model based technique due to its high-speed processing capability, while AINDANE produces higher-quality enhanced images due to its multi-scale contrast enhancement property. Both algorithms exhibit balanced luminance and contrast enhancement, higher robustness, and better color consistency when compared with conventional techniques.
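A minimal sketch of the illuminance-reflectance pipeline, assuming a naive box-blur illuminance estimator and substituting a plain power-law curve for the windowed inverse sigmoid transfer function, whose exact form is not reproduced here:

```python
import numpy as np

def box_blur(x, r):
    """Naive box blur of radius r, used here as a simple illuminance estimator."""
    pad = np.pad(x, r, mode='edge')
    out = np.empty_like(x, dtype=float)
    k = 2 * r + 1
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def ir_enhance(img, radius=2, gamma=0.5):
    """Illuminance-reflectance enhancement sketch: estimate illuminance,
    extract reflectance, compress the illuminance's dynamic range, recombine.
    The gamma curve is a stand-in for the paper's inverse sigmoid transfer."""
    x = img.astype(float) / 255.0
    illum = box_blur(x, radius) + 1e-6      # estimated illuminance
    refl = x / illum                        # extracted reflectance
    compressed = illum ** gamma             # dynamic range compression
    out = np.clip(compressed * refl, 0.0, 1.0)
    return (out * 255.0).round().astype(np.uint8)

# Scene with a dark half (30) and a bright half (200): the dark region is
# lifted while the bright region is compressed, shrinking the dynamic range.
img = np.zeros((16, 16))
img[:, :8], img[:, 8:] = 30.0, 200.0
out = ir_enhance(img)
```

The design point this illustrates: compression is applied only to the smooth illuminance component, so the reflectance (local detail) is carried through unchanged.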
In the transform-domain approach, wavelet-transform-based image denoising and contrast enhancement algorithms are developed. The denoising is treated as a maximum a posteriori (MAP) estimation problem; a bivariate probability density function model is introduced to explore the interlevel dependency among the wavelet coefficients. In addition, an approximate solution to the MAP estimation problem is proposed to avoid the use of complex iterative computations to find a numerical solution. This relatively low-complexity image denoising algorithm, implemented with the dual-tree complex wavelet transform (DT-CWT), produces high-quality denoised images.
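Closed-form approximate-MAP solutions for bivariate parent-child coefficient models are typically shrinkage rules of the Sendur-Selesnick form. The sketch below shows that rule as a plausible instance of what the text describes, not necessarily the dissertation's exact estimator:

```python
import numpy as np

def bivariate_shrink(w, wp, sigma_n, sigma):
    """Approximate-MAP shrinkage of a wavelet coefficient w given its
    parent wp, in the style of Sendur-Selesnick bivariate shrinkage.
    sigma_n is the noise std, sigma the local signal std."""
    r = np.sqrt(w ** 2 + wp ** 2)                        # joint magnitude
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
    return gain / np.maximum(r, 1e-12) * w

# Small child+parent pairs are killed; large ones are barely touched:
small = bivariate_shrink(0.5, 0.3, sigma_n=1.0, sigma=2.0)   # -> 0.0
large = bivariate_shrink(10.0, 5.0, sigma_n=1.0, sigma=2.0)  # close to 10
```

Because the rule is a single closed-form expression per coefficient, it avoids the iterative numerical optimization the text mentions.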

    Self Designing Pattern Recognition System Employing Multistage Classification

    Recently, pattern recognition/classification has received considerable attention in diverse engineering fields such as biomedical imaging, speaker identification, and fingerprint recognition. In most of these applications, it is desirable to maintain the classification accuracy in the presence of corrupted and/or incomplete data. The quality of a given classification technique is measured by its computational complexity, the execution time of its algorithms, and the number of patterns that can be classified correctly despite any distortion. Some classification techniques introduced in the literature are described in Chapter one. In this dissertation, we propose a pattern recognition approach that can be designed for evolutionary learning by developing the features and selecting the criteria that are best suited to the recognition problem under consideration. Chapter two presents some of the features used in developing the set of criteria employed by the system to recognize different types of signals, as well as some of the preprocessing techniques used by the system. The system operates in two modes, namely the learning (training) mode and the running mode. In the learning mode, the original and preprocessed signals are projected into different transform domains. The technique automatically tests many criteria over the range of parameters for each criterion. A large number of criteria are developed from the features extracted from these domains, and the optimum set of criteria, satisfying specific conditions, is selected. This set of criteria is employed by the system to recognize the original or noisy signals in the running mode. The modes of operation and the classification structures employed by the system are described in detail in Chapter three.
The proposed pattern recognition system is capable of recognizing an enormously large number of patterns by virtue of the fact that it analyzes the signal in different domains and explores the distinguishing characteristics in each of these domains. In other words, this approach extracts more characteristics from the signals, for classification purposes, by projecting the signal into different domains. Some experimental results are given in Chapter four showing the effect of using mathematical transforms in conjunction with preprocessing techniques on the classification accuracy. A comparison between some of the classification approaches, in terms of classification rate under distortion, is also given. A sample of experimental implementations is presented in Chapters 5 and 6 to illustrate the performance of the proposed pattern recognition system. The preliminary results confirm the superior performance of the proposed technique relative to the single-transform neural network and multi-input neural network approaches for image classification in the presence of additive noise.
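The learning-mode idea of projecting signals into several domains and automatically selecting the most discriminative criterion can be sketched as follows. The feature set and the selection score are illustrative stand-ins for the dissertation's criteria:

```python
import numpy as np

def domain_features(sig):
    """Project a 1-D signal into several domains and extract one or two
    simple features from each (illustrative stand-ins for the
    dissertation's feature set)."""
    spec = np.abs(np.fft.rfft(sig))          # spectral-domain projection
    diff = np.diff(sig)                      # crude detail-domain projection
    return np.array([sig.mean(), sig.std(),            # time domain
                     float(spec.argmax()), spec.max(), # spectral domain
                     np.abs(diff).mean()])             # detail domain

def select_criterion(feats_a, feats_b):
    """Learning mode: keep the criterion (feature index) whose class means
    are farthest apart relative to the pooled spread, plus a threshold."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    spread = feats_a.std(axis=0) + feats_b.std(axis=0) + 1e-12
    best = int(np.argmax(np.abs(mu_a - mu_b) / spread))
    return best, (mu_a[best] + mu_b[best]) / 2.0

# Training mode on two synthetic classes: 3 Hz vs. 30 Hz sinusoids + noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 128, endpoint=False)
class_a = [np.sin(2*np.pi*3*t) + 0.05*rng.standard_normal(128) for _ in range(20)]
class_b = [np.sin(2*np.pi*30*t) + 0.05*rng.standard_normal(128) for _ in range(20)]
feats_a = np.stack([domain_features(s) for s in class_a])
feats_b = np.stack([domain_features(s) for s in class_b])
best, thr = select_criterion(feats_a, feats_b)  # picks the spectral-peak criterion
```

In the running mode, a new signal is classified by comparing its selected feature against the learned threshold; here the spectral-peak criterion separates the two classes perfectly while the time-domain features do not.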

    A multiresolution framework for local similarity based image denoising

    In this paper, we present a generic framework for denoising images corrupted by additive white Gaussian noise, based on the idea of regional similarity. The proposed framework employs a similarity function using the distance between pixels in a multidimensional feature space, whereby multiple feature maps describing various local regional characteristics can be utilized, giving higher weight to pixels having similar regional characteristics. An extension of the proposed framework into a multiresolution setting using wavelets and scale space is presented. It is shown that the resulting multiresolution multilateral (MRM) filtering algorithm not only eliminates coarse-grain noise but can also faithfully reconstruct anisotropic features, particularly in the presence of high levels of noise.
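A minimal, single-resolution sketch of similarity-based filtering with a regional feature map, assuming pixel value and local mean as the feature space (an illustrative subset of the framework's feature maps):

```python
import numpy as np

def local_mean(img, r):
    """Local-mean feature map over a (2r+1) x (2r+1) window."""
    pad = np.pad(img, r, mode='reflect')
    out = np.empty_like(img, dtype=float)
    k = 2 * r + 1
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def multilateral_denoise(img, radius=2, sigma_s=2.0, sigma_f=20.0):
    """Similarity weights combine spatial proximity with distance in a
    (pixel value, local mean) feature space -- pixels with similar
    regional characteristics receive higher weight."""
    H, W = img.shape
    pad = np.pad(img, radius, mode='reflect')
    mean = local_mean(img, radius)
    meanp = np.pad(mean, radius, mode='reflect')
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            acc = wsum = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    y, x = i + di + radius, j + dj + radius
                    # squared distance in the (value, local-mean) feature space
                    df = (img[i, j] - pad[y, x])**2 + (mean[i, j] - meanp[y, x])**2
                    w = (np.exp(-(di*di + dj*dj) / (2*sigma_s**2))
                         * np.exp(-df / (2*sigma_f**2)))
                    acc += w * pad[y, x]
                    wsum += w
            out[i, j] = acc / wsum
    return out

# Flat patch plus white Gaussian noise: the filter removes most of the noise.
rng = np.random.default_rng(1)
img = 100.0 + 10.0 * rng.standard_normal((16, 16))
out = multilateral_denoise(img)
```

With more than one feature (unlike the plain bilateral filter's single intensity term), pixels in structurally different regions are down-weighted even when their raw values happen to match.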

    Information selection and fusion in vision systems

    Handling the enormous amounts of data produced by data-intensive imaging systems, such as multi-camera surveillance systems and microscopes, is technically challenging. While image and video compression help to manage the data volumes, they do not address the basic problem of information overflow. In this PhD thesis, we tackle the problem in a more drastic way: we select the information of interest to a specific vision task and discard the rest. We also combine data from different sources into a single output product, which presents the information of interest to end users in a suitable, summarized format. We treat two types of vision systems. The first type is conventional light microscopes. During this PhD, we have, for the first time, exploited the potential of the curvelet transform for depth-of-field-extension image fusion, allowing us to combine the advantages of multi-resolution image analysis with increased directional sensitivity. As a result, the proposed technique clearly outperforms state-of-the-art methods, both on real microscopy data and on artificially generated images. The second type is camera networks with overlapping fields of view. To enable joint processing in such networks, inter-camera communication is essential. Because of infrastructure costs, power consumption for wireless transmission, and similar constraints, transmitting high-bandwidth video streams between cameras should be avoided. Fortunately, recently designed 'smart cameras', which have on-board processing and communication hardware, allow distributing the required image processing over the cameras. This permits compactly representing useful information from each camera. We focus on representing information for people localization and observation, which are important tools for statistical analysis of room usage, quick localization of people in case of building fires, and related tasks.
To further save bandwidth, we select which cameras should be involved in a vision task and transmit observations only from the selected cameras. We provide an information-theoretically founded framework for general-purpose camera selection based on the Dempster-Shafer theory of evidence. Applied to tracking, it allows tracking people using a dynamic selection of as few as three cameras with the same accuracy as when using up to ten cameras.
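Dempster's rule of combination, which underlies the Dempster-Shafer evidence framework mentioned above, can be sketched as follows; the two-camera mass assignments in the example are hypothetical:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination: masses are dicts mapping frozenset
    focal elements (subsets of a common frame of discernment) to belief mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb           # mass assigned to disjoint evidence
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Normalize by the non-conflicting mass:
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical masses from two cameras about which zone a person occupies:
cam1 = {frozenset({'A'}): 0.6, frozenset({'A', 'B'}): 0.4}
cam2 = {frozenset({'A'}): 0.5, frozenset({'B'}): 0.3, frozenset({'A', 'B'}): 0.2}
fused = dempster_combine(cam1, cam2)          # belief concentrates on zone A
```

Compared with a Bayesian update, mass on non-singleton sets like `{'A', 'B'}` lets a camera express ignorance explicitly, which is useful when a person is outside or at the edge of its field of view.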