
    Bayesian Lower Bounds for Dense or Sparse (Outlier) Noise in the RMT Framework

    Robust estimation is an important and timely research subject. In this paper, we investigate performance lower bounds on the mean-square error (MSE) of any estimator for the Bayesian linear model corrupted by noise distributed according to an i.i.d. Student's t-distribution. This class of priors, parametrized by its degrees of freedom, is well suited to modeling either dense or sparse (outlier-accounting) noise. Using the hierarchical Normal-Gamma representation of the Student's t-distribution, the Van Trees Bayesian Cramér-Rao bound (BCRB) on the amplitude parameters is derived. Furthermore, the random matrix theory (RMT) framework is assumed, i.e., the number of measurements and the number of unknown parameters grow jointly to infinity with an asymptotically finite ratio. Using powerful results from RMT, closed-form expressions of the BCRB are derived and studied. Finally, we propose a framework to fairly compare two models corrupted by noises with different degrees of freedom at a fixed common target signal-to-noise ratio (SNR). In particular, we focus on the comparison of the BCRBs associated with two models corrupted, respectively, by a sparse, outlier-promoting noise and a dense (Gaussian) noise.
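The hierarchical Normal-Gamma representation used above has a direct sampling interpretation: drawing a Gamma-distributed precision and then a Gaussian conditioned on it yields exactly a Student's t sample. A minimal numerical sketch of this hierarchy (all names illustrative, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def student_t_via_normal_gamma(nu, size, rng):
    """Sample i.i.d. Student's t noise through its hierarchical
    Normal-Gamma representation: draw a precision tau ~ Gamma(nu/2, rate=nu/2),
    then the noise sample n | tau ~ Normal(0, 1/tau)."""
    tau = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=size)
    return rng.normal(0.0, 1.0 / np.sqrt(tau))

# Small nu -> heavy tails (outlier-promoting, "sparse" noise);
# large nu -> close to Gaussian ("dense" noise).
heavy = student_t_via_normal_gamma(nu=2.0, size=100_000, rng=rng)
dense = student_t_via_normal_gamma(nu=100.0, size=100_000, rng=rng)

# Heavy-tailed samples produce far more large outliers.
print(np.mean(np.abs(heavy) > 5), np.mean(np.abs(dense) > 5))
```

The fraction of samples beyond five standard units is orders of magnitude larger for small degrees of freedom, which is exactly the dense-versus-sparse dichotomy the bound comparison targets.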

    Bayesian Estimation for Continuous-Time Sparse Stochastic Processes

    We consider continuous-time sparse stochastic processes from which we have only a finite number of noisy/noiseless samples. Our goal is to estimate the noiseless samples (denoising) and the signal in between (interpolation problem). By relying on tools from the theory of splines, we derive the joint a priori distribution of the samples and show how this probability density function can be factorized. The factorization enables us to tractably implement the maximum a posteriori (MAP) and minimum mean-square error (MMSE) criteria as two statistical approaches for estimating the unknowns. We compare the derived statistical methods with well-known techniques for the recovery of sparse signals, such as the ℓ1-norm and Log (ℓ1-ℓ0 relaxation) regularization methods. The simulation results show that, under certain conditions, the performance of the regularization techniques can be very close to that of the MMSE estimator. (Comment: To appear in IEEE TS)
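In the simplest i.i.d. Gaussian denoising case (no interpolation), the ℓ1-regularized estimator the paper compares against reduces to component-wise soft-thresholding. The sketch below is illustrative only, not the paper's spline-based machinery, and all parameter values are arbitrary assumptions:

```python
import numpy as np

def soft_threshold(y, lam):
    """Closed-form minimizer of 0.5*(x - y)**2 + lam*|x| per component,
    i.e. the l1-regularized (Laplace-prior MAP) denoiser."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

rng = np.random.default_rng(1)
x = np.zeros(1000)
support = rng.choice(1000, size=50, replace=False)
x[support] = rng.normal(0, 3, size=50)        # sparse ground truth
y = x + rng.normal(0, 0.5, size=1000)         # noisy samples

x_hat = soft_threshold(y, lam=1.0)
mse_hat = np.mean((x_hat - x) ** 2)           # denoised error
mse_noisy = np.mean((y - x) ** 2)             # raw-observation error
print(mse_hat, mse_noisy)
```

On a sparse signal the thresholded estimate improves substantially on the raw samples; the paper's point is that such regularizers can approach, but not beat, the MMSE estimator.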

    Contourlet Domain Image Modeling and its Applications in Watermarking and Denoising

    Statistical image modeling in the sparse domain has recently attracted a great deal of research interest. The contourlet transform, a two-dimensional transform with multiscale and multidirectional properties, is known to effectively capture the smooth contours and geometrical structures in images. The objective of this thesis is to study the statistical properties of the contourlet coefficients of images and to develop statistically-based image denoising and watermarking schemes. Through an experimental investigation, it is first established that the distributions of the contourlet subband coefficients of natural images are significantly non-Gaussian with heavy tails, and that they are best described by heavy-tailed statistical distributions such as the alpha-stable family. It is shown that the univariate members of this family are capable of accurately fitting the marginal distributions of the empirical data and that the bivariate members can accurately characterize the inter-scale dependencies of the contourlet coefficients of an image. Based on the modeling results, a new method for image denoising in the contourlet domain is proposed. Bayesian maximum a posteriori and minimum mean absolute error estimators are developed to determine the noise-free contourlet coefficients of grayscale and color images. Extensive experiments are conducted using a wide variety of images from a number of databases to evaluate the performance of the proposed image denoising scheme and to compare it with that of other existing schemes. It is shown that the proposed denoising scheme based on the alpha-stable distributions outperforms these other methods in terms of the peak signal-to-noise ratio and the mean structural similarity index, as well as in terms of the visual quality of the denoised images. The alpha-stable model is also used in developing new multiplicative watermarking schemes for grayscale and color images. 
Closed-form expressions are derived for the log-likelihood-based multiplicative watermark detection algorithm for grayscale images using the univariate and bivariate Cauchy members of the alpha-stable family. A multiplicative multichannel watermark detector is also designed for color images using the multivariate Cauchy distribution. Simulation results demonstrate not only the effectiveness of the proposed image watermarking schemes in terms of the invisibility of the watermark, but also the superiority of the watermark detectors in providing detection rates higher than those of state-of-the-art schemes, even for watermarked images that have undergone various kinds of attacks.
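A generic log-likelihood-ratio detector for multiplicative embedding under a univariate Cauchy model can be sketched as follows. This is a simplified illustration of the idea, not the thesis's contourlet-domain detector; the embedding model y_i = x_i * (1 + alpha * w_i) and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def cauchy_logpdf(x, scale):
    # log of the Cauchy(0, scale) density.
    return -np.log(np.pi * scale * (1.0 + (x / scale) ** 2))

def llr_detector(y, w, alpha, scale):
    """Log-likelihood ratio between H1 (watermark w embedded multiplicatively,
    y_i = x_i * (1 + alpha * w_i)) and H0 (no watermark), with host
    coefficients x_i modeled as i.i.d. Cauchy(0, scale)."""
    g = 1.0 + alpha * w
    h1 = cauchy_logpdf(y / g, scale) - np.log(np.abs(g))  # density of x*g
    h0 = cauchy_logpdf(y, scale)
    return np.sum(h1 - h0)

n, alpha, scale = 5000, 0.1, 1.0
w = rng.choice([-1.0, 1.0], size=n)                 # bipolar watermark
x = scale * np.tan(np.pi * (rng.random(n) - 0.5))   # Cauchy(0, scale) host
y_marked = x * (1.0 + alpha * w)
y_clean = x

llr_marked = llr_detector(y_marked, w, alpha, scale)
llr_clean = llr_detector(y_clean, w, alpha, scale)
print(llr_marked, llr_clean)
```

The statistic concentrates above its unmarked value when the watermark is present, so thresholding the LLR yields the detection decision.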

    Digital Communications in Additive White Symmetric Alpha-Stable Noise

    Ph.D. (Doctor of Philosophy)

    One-bit Compressed Sensing in the Presence of Noise

    Many modern real-world systems generate large amounts of high-dimensional data, stressing the available computing and signal processing systems. In resource-constrained settings, it is desirable to process, store and transmit as little data as possible. It has been shown that one can obtain acceptable performance for tasks such as inference and reconstruction using fewer bits of data by exploiting low-dimensional structures on data, such as sparsity. This dissertation investigates the signal acquisition paradigm known as one-bit compressed sensing (one-bit CS) for signal reconstruction and parameter estimation. We first consider the problem of joint sparse support estimation with one-bit measurements in a distributed setting. Each node observes sparse signals with the same but unknown support. The goal is to minimize the probability of error of support estimation. First, we study the performance of maximum likelihood (ML) estimation of the support set from one-bit compressed measurements when all these measurements are available at the fusion center. We provide a lower bound on the number of one-bit measurements required per node for vanishing probability of error. Though the ML estimator is optimal, its computational complexity increases exponentially with the signal dimension. We therefore propose computationally tractable algorithms in a centralized setting. Further, we extend these algorithms to a decentralized setting where each node can communicate only with its one-hop neighbors. The proposed method shows excellent estimation performance even in the presence of noise. In the second part of the dissertation, we investigate the problem of sparse signal reconstruction from noisy one-bit compressed measurements using a signal that is statistically dependent on the compressed signal as an aid. We refer to this signal as side-information (SI). 
We consider a generalized measurement model of one-bit CS where noise is assumed to be added at two stages of the measurement process: (a) before quantization and (b) after quantization. We model the noise before quantization as additive white Gaussian noise and the noise after quantization as a sign-flip noise generated from a Bernoulli distribution. We assume that the SI at the receiver is noisy. The noise in the SI can be either in the support or in the amplitude, or both. This suggests that the SI noise has a sparse structure, which we model as additive independent and identically distributed Laplacian noise. In this setup, we develop tractable algorithms that approximate the minimum mean square error (MMSE) estimator of the signal. We consider the following three SI-based scenarios:
1. The side-information is assumed to be a noisy version of the signal. The noise is independent of the signal and follows the Laplacian distribution. We do not assume any temporal dependence in the signal.
2. The signal exhibits temporal dependencies between the current time instant and the previous time instant, modeled using the birth-death-drift (BDD) model. The side-information is a noisy version of the previous time instant's signal, which is statistically dependent on the current signal as defined by the BDD model.
3. The SI available at the receiver is heterogeneous. The signal and side-information are from different modalities and may not share a joint sparse representation. We assume that the SI and the sparse signal are dependent and use a copula function to model the dependence.
In each of these scenarios, we develop generalized approximate message passing-based algorithms to approximate the minimum mean square error estimate. Numerical results show the effectiveness of the proposed algorithms. 
In the final part of the dissertation, we propose two one-bit compressed sensing reconstruction algorithms that use a deep neural network as a prior on the signal. In the first algorithm, we use a trained generative model, such as a generative adversarial network or a variational autoencoder, as the prior. This trained network is used to reconstruct the compressed signal from one-bit measurements by searching over its range. We provide theoretical guarantees on the reconstruction accuracy and sample complexity of the presented algorithm. In the second algorithm, we investigate an untrained neural network architecture that acts as a good prior on natural signals such as images and audio. We formulate an optimization problem to reconstruct the signal from one-bit measurements using this untrained network. We demonstrate the superior performance of the proposed algorithms through numerical results. Further, in contrast to competing model-based algorithms, we demonstrate that the proposed algorithms estimate both the direction and the magnitude of the compressed signal from one-bit measurements.
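The two-stage noise model above (Gaussian noise before quantization, Bernoulli sign flips after) is easy to simulate. The dissertation's GAMP- and network-prior-based reconstructions are beyond a short snippet, so the sketch below pairs the measurement model with only a crude back-projection estimator of the signal direction; all dimensions and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

n, m, k = 200, 1500, 10
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x /= np.linalg.norm(x)        # work on the unit sphere (one-bit CS loses magnitude
                              # unless extra structure is exploited)

A = rng.normal(size=(m, n))
y = np.sign(A @ x + 0.05 * rng.normal(size=m))   # (a) pre-quantization Gaussian noise
flips = rng.random(m) < 0.02                     # (b) post-quantization Bernoulli flips
y[flips] *= -1.0

# Crude estimator: back-project, keep the k largest entries, normalize
# (one hard-thresholding step; real algorithms iterate and use priors).
z = A.T @ y
idx = np.argsort(np.abs(z))[-k:]
x_hat = np.zeros(n)
x_hat[idx] = z[idx]
x_hat /= np.linalg.norm(x_hat)

cosine = abs(x @ x_hat)       # similarity with the true direction
print(cosine)
```

Even this one-step estimator recovers the direction well when m is large relative to k log n, which is the regime the dissertation's sample-complexity results concern.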

    Statistical modeling and processing of high frequency ultrasound images: application to dermatologic oncology

    This thesis studies statistical image processing of high frequency ultrasound imaging, with application to in-vivo exploration of human skin and noninvasive lesion assessment. More precisely, Bayesian methods are considered in order to perform tissue segmentation in ultrasound images of skin. It is established that ultrasound signals backscattered from skin tissues converge to a complex Lévy flight random process with non-Gaussian alpha-stable statistics. The envelope signal follows a generalized (heavy-tailed) Rayleigh distribution. Based on these results, it is proposed to model the distribution of multiple-tissue ultrasound images as a spatially coherent finite mixture of heavy-tailed Rayleigh distributions. Spatial coherence inherent to biological tissues is modeled by a Potts-Markov random field. An original Bayesian algorithm combined with a Markov chain Monte Carlo (MCMC) method is then proposed to jointly estimate the mixture parameters and a label vector associating each voxel with a tissue. The proposed method is successfully applied to the segmentation of in-vivo skin tumors in high frequency 2D and 3D ultrasound images. This method is subsequently extended by including the estimation of the Potts regularization parameter B within the MCMC algorithm. Standard MCMC methods cannot be applied to this problem because the likelihood of B is intractable. This difficulty is addressed by using a likelihood-free Metropolis-Hastings algorithm based on the sufficient statistic of the Potts model. The resulting unsupervised segmentation method is successfully applied to three-dimensional ultrasound images. 
Finally, the problem of computing the Cramér-Rao bound (CRB) of B is studied. The CRB depends on the derivatives of the intractable normalizing constant of the Potts model. This is resolved by proposing an original Monte Carlo algorithm, which is successfully applied to compute the CRB of the Ising and Potts models.
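The identity underlying such Monte Carlo approaches is that the derivative of the log normalizing constant equals the expected sufficient statistic, d/dB log Z(B) = E_B[S(x)], which can be estimated by sampling. The thesis's actual algorithm is more involved; purely as an illustration, the sketch below checks a Gibbs-sampling estimate of E_B[S] against brute-force enumeration on a tiny 2x2 Ising grid:

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)

def suff_stat(s):
    # Ising sufficient statistic on a grid with free boundaries:
    # sum of s_i * s_j over horizontal and vertical neighbor pairs.
    return np.sum(s[:, :-1] * s[:, 1:]) + np.sum(s[:-1, :] * s[1:, :])

def exact_mean_stat(beta, size=2):
    # E_beta[S] by brute-force enumeration (equals d/dbeta of log Z).
    stats, weights = [], []
    for conf in itertools.product([-1, 1], repeat=size * size):
        s = np.array(conf).reshape(size, size)
        stats.append(suff_stat(s))
        weights.append(np.exp(beta * stats[-1]))
    stats, weights = np.array(stats), np.array(weights)
    return np.sum(stats * weights) / np.sum(weights)

def gibbs_mean_stat(beta, size=2, n_sweeps=8000, burn=1000):
    # Monte Carlo estimate of E_beta[S] with a single-site Gibbs sampler.
    s = rng.choice([-1, 1], size=(size, size))
    total, count = 0.0, 0
    for sweep in range(n_sweeps):
        for i in range(size):
            for j in range(size):
                nb = sum(s[a, b]
                         for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= a < size and 0 <= b < size)
                p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                s[i, j] = 1 if rng.random() < p_up else -1
        if sweep >= burn:
            total += suff_stat(s)
            count += 1
    return total / count

beta = 0.4
exact = exact_mean_stat(beta)
estimate = gibbs_mean_stat(beta)
print(exact, estimate)
```

On grids large enough to be useful, enumeration is impossible and only the sampling estimate survives, which is what makes the derivative of log Z, and hence the CRB, computable.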

    Space adaptive and hierarchical Bayesian variational models for image restoration

    The main contribution of this thesis is the proposal of novel space-variant regularization or penalty terms motivated by a strong statistical rationale. In light of the connection between the classical variational framework and the Bayesian formulation, we focus on the design of highly flexible priors characterized by a large number of unknown parameters. The latter are automatically estimated by setting up a hierarchical modeling framework, i.e. introducing informative or non-informative hyperpriors depending on the information at hand about the parameters. More specifically, in the first part of the thesis we focus on the restoration of natural images, introducing highly parametrized distributions to model the local behavior of the gradients in the image. The resulting regularizers hold the potential to adapt to the local smoothness, directionality and sparsity in the data. The estimation of the unknown parameters is addressed by means of non-informative hyperpriors, namely uniform distributions over the parameter domain, thus leading to the classical maximum likelihood approach. In the second part of the thesis, we address the problem of designing suitable penalty terms for the recovery of sparse signals. The space-variance in the proposed penalties, corresponding to a family of informative hyperpriors, namely generalized gamma hyperpriors, follows directly from the assumption of independence of the components in the signal. The study of the properties of the resulting energy functionals leads to the introduction of two hybrid algorithms, aimed at combining the strong sparsity promotion characterizing non-convex penalty terms with the desirable guarantees of convex optimization.
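The general idea of space-variant penalties whose per-component parameters are re-estimated from the data can be illustrated, in a much simplified form that is not the thesis's actual model, by iteratively reweighted ℓ1 denoising: each component's weight is updated from the current iterate, so components that appear large are penalized less.

```python
import numpy as np

rng = np.random.default_rng(5)

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1_denoise(y, lam=1.0, eps=0.1, n_iter=10):
    """Space-variant sparse denoising sketch: alternate between
    (1) a weighted soft-threshold step on each component and
    (2) re-estimating per-component weights from the current estimate."""
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x) + eps)   # space-variant weights (smaller on large components)
        x = soft(y, lam * w)
    return x

x_true = np.zeros(500)
x_true[rng.choice(500, 25, replace=False)] = rng.normal(0, 5, 25)
y = x_true + rng.normal(0, 0.5, 500)

x_hat = reweighted_l1_denoise(y)
mse_hat = np.mean((x_hat - x_true) ** 2)
mse_noisy = np.mean((y - x_true) ** 2)
print(mse_hat, mse_noisy)
```

The reweighting mimics the hierarchical update of hyperparameters: the uniform ℓ1 penalty becomes adaptive, reducing the bias on large components while still suppressing noise on the rest.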

    Multiresolution image models and estimation techniques


    Discrete Wavelet Transforms

    Discrete wavelet transform (DWT) algorithms are firmly established in signal processing across several areas of research and industry. As the DWT provides both octave-scale frequency information and spatial timing of the analyzed signal, it is increasingly used to address more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in DWT algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low bit rate image compression, low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on DWT-based watermarking algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters consist of both tutorial and highly advanced material; the book is therefore intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
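As a concrete illustration of the octave-scale split the book builds on, one level of the orthonormal Haar DWT and its inverse fit in a few lines (a minimal sketch, not code from the book):

```python
import numpy as np

def haar_dwt1(x):
    """One level of the orthonormal Haar DWT: split the signal into
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt1(a, d):
    """Inverse of one Haar level (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

x = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0])
a, d = haar_dwt1(x)
print(np.allclose(haar_idwt1(a, d), x))   # perfect reconstruction
```

Applying `haar_dwt1` recursively to the approximation coefficients yields the multi-level, octave-band decomposition; the orthonormal normalization also preserves signal energy across the split.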

    Gaussian versus Sparse Stochastic Processes: Construction, Regularity, Compressibility

    Although our work lies in the field of random processes, this thesis was originally motivated by signal processing applications, mainly the stochastic modeling of sparse signals. We develop a mathematical study of the innovation model, under which a signal is described as a random process s that can be linearly and deterministically transformed into a white noise. The noise represents the unpredictable part of the signal, called its innovation, and belongs to the family of Lévy white noises, which includes both Gaussian and Poisson noises. In mathematical terms, s satisfies the equation Ls = w, where L is a differential operator and w a Lévy noise. The problem is therefore to study the solution of a stochastic differential equation driven by a Lévy noise. Gaussian models usually fail to reproduce the empirical sparsity observed in real-world signals. By contrast, Lévy models offer a wide range of random processes, from typically non-sparse (Gaussian) to very sparse (Poisson) ones, with many sparse signals standing between these two extremes. Our contributions can be divided into four parts. First, the cornerstone of our work is the theory of generalized random processes. Within this framework, all the considered random processes are seen as random tempered generalized functions and can be observed through smooth and rapidly decaying windows. This allows us to define the solutions of Ls = w, called generalized Lévy processes, in the most general setting. Then, we identify two limiting phenomena: the approximation of generalized Lévy processes by their Poisson counterparts, and the asymptotic behavior of generalized Lévy processes at coarse and fine scales. In the third part, we study the localization of Lévy noise in classical function spaces (Hölder, Sobolev, Besov). As an application, we characterize the local smoothness and the asymptotic growth rate of the Lévy noise. 
Finally, we quantify the local compressibility of generalized Lévy processes, understood as a measure of the decay rate of their approximation error in an appropriate basis. From this last result, we provide a theoretical justification of the ability of the innovation model to represent sparse signals. The guiding principle of our research is the duality between the local and asymptotic properties of generalized Lévy processes. In particular, we highlight the relevant quantities, called the local and asymptotic indices, that quantify the local regularity, the asymptotic growth rate, the limiting behavior at coarse and fine scales, and the level of compressibility of generalized Lévy processes.
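The notion of compressibility used here, the decay rate of the best n-term approximation error, can be illustrated numerically by comparing a Gaussian innovation with a compound-Poisson one (an illustrative sketch under arbitrary parameter choices, not the thesis's construction):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10_000

# Gaussian innovation: the prototypical non-sparse member of the Levy family.
gaussian = rng.normal(0.0, 1.0, n)

# Compound-Poisson-style innovation: mostly exact zeros with occasional jumps,
# the prototypical sparse member. Jump variance is scaled so both sequences
# have (approximately) unit variance.
mask = rng.random(n) < 0.05
poisson = np.where(mask, rng.normal(0.0, np.sqrt(1.0 / 0.05), n), 0.0)

def relative_nterm_error(x, k):
    """Relative l2 error of the best k-term approximation of x
    (keep the k largest-magnitude coefficients, zero the rest)."""
    sorted_sq = np.sort(x ** 2)
    return np.sqrt(np.sum(sorted_sq[: x.size - k]) / np.sum(sorted_sq))

k = n // 10   # keep the 10% largest coefficients
err_gauss = relative_nterm_error(gaussian, k)
err_poisson = relative_nterm_error(poisson, k)
print(err_gauss, err_poisson)
```

Keeping 10% of the coefficients captures essentially all of the sparse sequence's energy but only a fraction of the Gaussian one's, which is the compressibility gap the local and asymptotic indices quantify.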