53 research outputs found

    Spatially adaptive estimation via fitted local likelihood techniques

    This paper offers a new technique for spatially adaptive estimation. The local likelihood is exploited for nonparametric modelling of observations and estimated signals. The approach is based on the assumption of local homogeneity of the signal: for every point there exists a neighborhood in which the signal can be well approximated by a constant. The fitted local likelihood statistic is used to select an adaptive size for this neighborhood. The algorithm is developed for quite a general class of observations subject to the exponential distribution. The estimated signal can be univariate or multivariate. We demonstrate the good performance of the new algorithm for Poissonian image denoising and compare the new method with the intersection of confidence intervals (ICI) technique, which also exploits the selection of an adaptive neighborhood for estimation.
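
    The pointwise adaptive rule described above lends itself to a compact illustration: for each sample, a neighborhood is grown for as long as a constant model still explains the local data, and the constant fitted on the largest accepted neighborhood becomes the estimate. The Python sketch below assumes a 1-D Poisson signal and uses a generic likelihood-ratio homogeneity check with an illustrative threshold; it is only a stand-in for the paper's fitted local likelihood statistic, and all names are hypothetical.

        import numpy as np

        def poisson_loglik(y, lam):
            # Poisson log-likelihood of samples y under a constant intensity lam
            # (terms not depending on lam are dropped)
            lam = max(float(lam), 1e-12)
            return np.sum(y * np.log(lam) - lam)

        def adaptive_constant_fit(y, half_widths=(1, 2, 4, 8, 16), threshold=3.0):
            # For each point, grow a symmetric window while a constant model is
            # accepted, then return the local mean of the largest accepted window.
            n = len(y)
            est = np.empty(n)
            for i in range(n):
                best = float(y[i])
                for h in half_widths:
                    lo, hi = max(0, i - h), min(n, i + h + 1)
                    seg = y[lo:hi]
                    lam = seg.mean()  # constant fit on this window
                    # deviance of the constant fit against a saturated Poisson model
                    sat = np.sum(seg * np.log(np.maximum(seg, 1e-12)) - seg)
                    dev = 2.0 * (sat - poisson_loglik(seg, lam))
                    if dev > threshold * np.log(len(seg) + 1.0):
                        break  # local homogeneity rejected: stop growing the window
                    best = lam
                est[i] = best
            return est

        # usage: piecewise-constant Poisson intensity
        rng = np.random.default_rng(0)
        intensity = np.r_[np.full(100, 5.0), np.full(100, 20.0)]
        estimate = adaptive_constant_fit(rng.poisson(intensity).astype(float))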

    Block-based Collaborative 3-D Transform Domain Modeling in Inverse Imaging

    The recent developments in image and video denoising have brought a new generation of filtering algorithms achieving unprecedented restoration quality. This quality mainly follows from exploiting various features of natural images. The nonlocal self-similarity and sparsity of representations are key elements of the novel filtering algorithms, with the best performance achieved by adaptively aggregating multiple redundant and sparse estimates. In a very broad sense, the filters are now able, given a perturbed image, to identify its plausible representative in the space or manifold of possible solutions. Thus, they are powerful tools not only for noise removal, but also for providing accurate adaptive regularization to many ill-conditioned inverse imaging problems. In this thesis we show how the image modeling of the well-known Block-matching 3-D transform domain (BM3D) filter can be exploited for designing efficient algorithms for image reconstruction. First, we formalize BM3D modeling in terms of an overcomplete sparse frame representation. We construct analysis and synthesis BM3D-frames and study their properties, making BM3D modeling available for use in variational formulations of image reconstruction problems. Second, we demonstrate that standard variational formulations based on single-objective optimization, such as Basis Pursuit Denoising and its various extensions, cannot be used with imaging models generating non-tight frames, such as BM3D. We propose an alternative sparsity-promoting problem formulation based on the generalized Nash equilibrium (GNE). Finally, using BM3D-frames we develop practical algorithms for image deblurring and super-resolution problems. To the best of our knowledge, these algorithms provide results which are the state of the art in the field.
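
    The decoupled structure suggested by the Nash-equilibrium formulation (as opposed to a single objective) can be pictured as an alternation between a data-fidelity update in the image domain and a denoising update acting as the regularizer. The Python sketch below illustrates only that alternation for deblurring: it uses a frequency-domain Tikhonov-type inversion for the fidelity step and a crude box average as a placeholder for the BM3D-frame shrinkage, so it is a structural sketch under those stated substitutions, not the thesis's algorithm.

        import numpy as np
        from numpy.fft import fft2, ifft2

        def box_denoise(y, sigma):
            # crude 3x3 averaging placeholder; a real regularizer would shrink
            # BM3D-frame coefficients with a threshold depending on sigma
            acc = np.zeros_like(y)
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    acc += np.roll(np.roll(y, dx, axis=0), dy, axis=1)
            return acc / 9.0

        def decoupled_deblur(z, otf, sigma, n_iter=30, gamma=1.0):
            # z: blurred noisy image; otf: frequency response of the blur (same shape as z)
            x = z.copy()
            otf_conj = np.conj(otf)
            for _ in range(n_iter):
                # fidelity step: regularized inversion anchored at the current estimate
                num = otf_conj * fft2(z) + gamma * fft2(x)
                den = np.abs(otf) ** 2 + gamma
                y = np.real(ifft2(num / den))
                # regularization step: denoise the intermediate reconstruction
                x = box_denoise(y, sigma)
            return x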

    Directional edge and texture representations for image processing

    An efficient representation for natural images is of fundamental importance in image processing and analysis. The commonly used separable transforms such as wavelets are not best suited for images due to their inability to exploit directional regularities such as edges and oriented textural patterns; while most of the recently proposed directional schemes cannot represent these two types of features in a unified transform. This thesis focuses on the development of directional representations for images which can capture both edges and textures in a multiresolution manner. The thesis first considers the problem of extracting linear features with the multiresolution Fourier transform (MFT). Based on a previous MFT-based linear feature model, the work extends the extraction method to the situation where the image is corrupted by noise. The problem is tackled by the combination of a "Signal+Noise" frequency model, a refinement stage and a robust classification scheme. As a result, the MFT is able to perform linear feature analysis on noisy images on which previous methods failed. A new set of transforms called the multiscale polar cosine transforms (MPCT) are also proposed in order to represent textures. The MPCT can be regarded as a real-valued MFT with similar basis functions of oriented sinusoids. It is shown that the transform can represent textural patches more efficiently than the conventional Fourier basis. With a directional best cosine basis, the MPCT packet (MPCPT) is shown to be an efficient representation for edges and textures, despite its high computational burden. The problem of representing edges and textures in a fixed transform with less complexity is then considered. This is achieved by applying a Gaussian frequency filter, which matches the dispersion of the magnitude spectrum, to the local MFT coefficients. This is particularly effective in denoising natural images, due to its ability to preserve both types of feature. Further improvements can be made by employing the information given by the linear feature extraction process in the filter's configuration. The denoising results compare favourably against other state-of-the-art directional representations.
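
    The fixed-transform filtering idea, reweighting local frequency coefficients with a Gaussian matched to the magnitude spectrum, can be roughed out in a few lines. The Python sketch below works on a single window with a plain FFT in place of the MFT and an axis-aligned (rather than oriented) Gaussian, so it only conveys the spectrum-matched weighting; the names and parameters are illustrative.

        import numpy as np
        from numpy.fft import fft2, ifft2, fftfreq

        def spectrum_matched_window_filter(patch):
            # Reweight the window's spectrum with a Gaussian whose per-axis spread
            # follows the dispersion (second moment) of the magnitude spectrum.
            F = fft2(patch)
            mag = np.abs(F)
            w = mag / mag.sum()
            fy = fftfreq(patch.shape[0])[:, None]
            fx = fftfreq(patch.shape[1])[None, :]
            sy = np.sqrt(np.sum(w * fy ** 2)) + 1e-6
            sx = np.sqrt(np.sum(w * fx ** 2)) + 1e-6
            gauss = np.exp(-0.5 * ((fy / sy) ** 2 + (fx / sx) ** 2))
            return np.real(ifft2(F * gauss))

        # usage: apply to overlapping local windows and aggregate the filtered windows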

    Sparse and Redundant Representations for Inverse Problems and Recognition

    Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction, without explicit knowledge of the noise variance, using a generalized cross-validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique is more flexible than its competitors in working with either random or restricted sampling scenarios. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high-resolution map of the spatial distribution of targets and terrain using a significantly reduced number of transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also presents many new applications and advantages, which include strong resistance to countermeasures and interception, imaging of much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximations and feature extraction. A dictionary is learned for each object class based on given training examples, minimizing the representation error under a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors, along with the coefficients, are then used for recognition. Applications to illumination-robust face recognition and automatic target recognition are presented.
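
    The recognition scheme in the last part, per-class dictionaries with classification by reconstruction residual, reduces to a small routine. The Python sketch below substitutes an ordinary least-squares projection onto each dictionary's span for a sparse coder, so it only illustrates the residual-based decision rule; the function and variable names are illustrative.

        import numpy as np

        def classify_by_residual(x, dictionaries):
            # x: vectorized test image of shape (d,); dictionaries: one (d, k_c) matrix per class
            residuals = []
            for D in dictionaries:
                # project x onto span(D); a sparse coder (e.g. OMP) could replace lstsq
                alpha, *_ = np.linalg.lstsq(D, x, rcond=None)
                residuals.append(np.linalg.norm(x - D @ alpha))
            return int(np.argmin(residuals)), residuals

        # usage: the dictionaries would be learned offline from training examples of each class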

    Models and Methods for Estimation and Filtering of Signal-Dependent Noise in Imaging

    The work presented in this thesis focuses on Image Processing, that is, the branch of Signal Processing that centers its interest on images, sequences of images, and videos. It has various applications: imaging for traditional cameras, medical imaging, e.g., X-ray and magnetic resonance imaging (MRI), infrared imaging (thermography), e.g., for security purposes, astronomical imaging for space exploration, three-dimensional (video+depth) signal processing, and many more. This thesis covers a small but relevant slice that is transversal to this vast pool of applications: noise estimation and denoising. To appreciate the relevance of this thesis it is essential to understand why noise is such an important part of Image Processing. Every acquisition device, and every measurement, is subject to interference that causes random fluctuations in the acquired signals. If not taken into consideration with a suitable mathematical approach, these fluctuations might invalidate any use of the acquired signal. Consider, for example, an MRI used to detect a possible condition; if not suitably processed and filtered, the image could lead to a wrong diagnosis. Therefore, before any acquired image is sent to an end-user (machine or human), it undergoes several processing steps. Noise estimation and denoising are usually parts of these fundamental steps. Some sources of noise can be removed by suitably modeling the acquisition process of the camera, and developing hardware based on that model. Other sources of noise are instead inevitable: high/low light conditions of the acquired scene, hardware imperfections, temperature of the device, etc. To remove noise from an image, the noise characteristics have to be first estimated. The branch of image processing that fulfills this role is called noise estimation. Then, it is possible to remove the noise artifacts from the acquired image. This process is referred to as denoising. For practical reasons, it is convenient to model noise as random variables. In this way, we assume that the noise fluctuations take values whose probabilities follow specific distributions characterized by only a few parameters. These are the parameters that we estimate. We focus our attention on noise modeled by Gaussian distributions, Poisson distributions, or a combination of these. These distributions are adopted for modeling noise affecting images from digital cameras, microscopes, telescopes, radiography systems, thermal cameras, depth-sensing cameras, etc. The parameters that define a Gaussian distribution are its mean and its variance, while a Poisson distribution depends only on its mean, since its variance is equal to the mean (signal-dependent variance). Consequently, the parameters of a Poisson-Gaussian distribution describe the relation between the intensity of the noise-free signal and the variance of the noise affecting it. Degradation models of this kind are referred to as signal-dependent noise. Estimation of signal-dependent noise is commonly performed by processing, individually, groups of pixels with equal intensity in order to sample the aforementioned relation between signal mean and noise variance. Such sampling is often subject to outliers; we propose a robust estimation model where the noise parameters are estimated by optimizing a likelihood function that models the local variance estimates from each group of pixels as mixtures of Gaussian and Cauchy distributions.
The proposed model is general and applicable to a variety of signal-dependent noise models, including also possible clipping of the data. We also show that, under certain hypotheses, the relation between signal mean and noise variance can be effectively sampled from groups of pixels of possibly different intensities. Then, we propose a spatially adaptive transform to improve the denoising performance of a specific class of filters, namely nonlocal transform-domain collaborative filters. In particular, the proposed transform exploits the spatial coordinates of nonlocal similar features from an image to better decorrelate the data, and consequently to improve the filtering. Unlike non-adaptive transforms, the proposed spatially adaptive transform is capable of representing spatially smooth coarse-scale variations in the similar features of the image. Further, based on the same paradigm, we propose a method that adaptively enhances the local image features depending on their orientation with respect to the relative coordinates of other similar features at other locations in the image. An established approach for removing Poisson noise utilizes so-called variance-stabilizing transformations (VST) to make the noise variance independent of the mean of the signal, hence enabling denoising by a standard denoiser for additive Gaussian noise. Within this framework, we propose an iterative method where at each iteration the previous estimate is summed back to the noisy image in order to improve the stabilizing performance of the transformation, and consequently to improve the denoising results. The proposed iterative procedure allows us to circumvent the typical drawbacks that VSTs experience at very low intensities, and thus allows us to apply the standard denoiser effectively even at extremely low counts. The developed methods achieve state-of-the-art results in their respective fields of application.
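
    The iterative VST scheme described in the last paragraph alternates four small steps per iteration: combine the previous estimate with the noisy image, stabilize the variance, apply a Gaussian-noise denoiser, and invert the stabilization. The Python sketch below uses the Anscombe root transform with a simple algebraic inverse and a Gaussian-smoothing stand-in for the AWGN denoiser (the thesis relies on a standard filter such as BM3D), with illustrative parameter values; it shows the iteration structure only, not the published method.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def iterative_vst_poisson_denoise(z, n_iter=5, lam=0.6, smooth=1.5):
            # z: Poisson-count image; lam: weight of the noisy data in the convex combination
            est = z.astype(float)
            for _ in range(n_iter):
                # mix the previous estimate back into the noisy data
                mix = lam * z + (1.0 - lam) * est
                # Anscombe variance-stabilizing transform
                stab = 2.0 * np.sqrt(np.maximum(mix + 3.0 / 8.0, 0.0))
                # stand-in AWGN denoiser applied in the stabilized domain
                den = gaussian_filter(stab, sigma=smooth)
                # simple algebraic inverse of the VST (an unbiased inverse would be better)
                est = np.maximum((den / 2.0) ** 2 - 3.0 / 8.0, 0.0)
            return est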

    Development of Some Efficient Lossless and Lossy Hybrid Image Compression Schemes

    Digital imaging generates a large amount of data which needs to be compressed, without loss of relevant information, to economize storage space and allow speedy data transfer. Though both storage and transmission medium capacities have been continuously increasing over the last two decades, they don't match the present requirement. Many lossless and lossy image compression schemes exist for compression of images in the space domain and the transform domain. Employing more than one traditional image compression algorithm results in hybrid image compression techniques. Based on the existing schemes, novel hybrid image compression schemes are developed in this doctoral research work to compress images effectively while maintaining their quality.

    Bayesian Approaches For Image Restoration

    Ph.D. (Doctor of Philosophy)