
    Blind deconvolution of sparse pulse sequences under a minimum distance constraint: a partially collapsed Gibbs sampler method

    For blind deconvolution of an unknown sparse sequence convolved with an unknown pulse, a powerful Bayesian method employs the Gibbs sampler in combination with a Bernoulli–Gaussian prior modeling sparsity. In this paper, we extend this method by introducing a minimum distance constraint for the pulses in the sequence. This is physically relevant in applications including layer detection, medical imaging, seismology, and multipath parameter estimation. We propose a Bayesian method for blind deconvolution that is based on a modified Bernoulli–Gaussian prior including a minimum distance constraint factor. The core of our method is a partially collapsed Gibbs sampler (PCGS) that tolerates and even exploits the strong local dependencies introduced by the minimum distance constraint. Simulation results demonstrate significant performance gains compared to a recently proposed PCGS. The main advantages of the minimum distance constraint are a substantial reduction of computational complexity and of the number of spurious components in the deconvolution result.
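    A minimal sketch of the constraint idea (my illustration, not the authors' sampler; the function names, the prior weight p_on, and the single-site move are assumptions): the minimum distance constraint enters the Bernoulli–Gaussian prior as an indicator factor that vanishes whenever two active spike locations are closer than d_min samples, and a Gibbs-style sweep only ever proposes supports that keep this factor equal to one.

        import numpy as np

        def min_distance_ok(support, d_min):
            # Constraint factor of the modified prior: 1 iff every pair of
            # active spike locations is at least d_min samples apart.
            locs = np.flatnonzero(support)
            return locs.size < 2 or bool(np.all(np.diff(locs) >= d_min))

        def resample_site(support, k, d_min, p_on, rng):
            # Toy single-site move: place a spike at site k with the prior
            # weight p_on, but only if the constraint stays satisfied.
            # (Likelihood terms and exact renormalization, which a real PCGS
            # handles with the amplitudes marginalized out, are omitted.)
            support = support.copy()
            support[k] = 1
            if not (min_distance_ok(support, d_min) and rng.random() < p_on):
                support[k] = 0
            return support

        rng = np.random.default_rng(0)
        q = np.zeros(200, dtype=int)
        for k in rng.permutation(200):      # one sweep over all sites
            q = resample_site(q, k, d_min=8, p_on=0.05, rng=rng)
        print("spike locations:", np.flatnonzero(q))

    Every support produced by such a sweep satisfies the constraint by construction, which is the mechanism that suppresses the spurious closely spaced components mentioned above.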

    The algorithm of noisy k-means

    In this note, we introduce a new algorithm to deal with finite-dimensional clustering with errors in variables. The design of this algorithm is based on recent theoretical advances (see Loustau (2013a,b)) in statistical learning with errors in variables. As in the previously mentioned papers, the algorithm mixes tools from the inverse problem literature and the machine learning community. Coarsely, it is based on a two-step procedure: (1) a deconvolution step to deal with noisy inputs, and (2) Newton's iterations, as in the popular k-means algorithm.
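    A sketch of that two-step structure under stated assumptions (isotropic Gaussian noise with known variance; a crude linear shrinkage stands in for the deconvoluting kernel estimator of the actual algorithm, and all names below are mine):

        import numpy as np

        def crude_deconvolution(Z, noise_var):
            # Stand-in for step (1): under a Gaussian model for signal and
            # noise, shrink each observation toward the sample mean by the
            # best linear factor. The real algorithm uses a deconvoluting
            # kernel estimator here.
            mean = Z.mean(axis=0)
            total_var = Z.var(axis=0)
            signal_var = np.maximum(total_var - noise_var, 1e-12)
            return mean + (signal_var / total_var) * (Z - mean)

        def lloyd(X, k, n_iter=50, seed=0):
            # Step (2): plain Lloyd iterations; for the squared loss, one
            # Newton step on the clustering risk with fixed assignments is
            # exactly the centroid update.
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), k, replace=False)]
            for _ in range(n_iter):
                d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
                labels = d2.argmin(axis=1)
                for j in range(k):
                    if np.any(labels == j):
                        centers[j] = X[labels == j].mean(axis=0)
            return centers, labels

        # Toy use: cluster noisy 2-D inputs Z = X + eps with known noise level.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(m, 0.3, (100, 2)) for m in (-2, 0, 2)])
        Z = X + rng.normal(0, 0.8, X.shape)
        centers, _ = lloyd(crude_deconvolution(Z, noise_var=0.8 ** 2), k=3)
        print(centers)

    Replacing crude_deconvolution with a proper deconvoluting kernel estimator is the substantive contribution of the note; the Lloyd part is unchanged.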

    A deconvolution approach to estimation of a common shape in a shifted curves model

    This paper considers the problem of adaptive estimation of a mean pattern in a randomly shifted curve model. We show that this problem can be transformed into a linear inverse problem, where the density of the random shifts plays the role of a convolution operator. An adaptive estimator of the mean pattern, based on wavelet thresholding, is proposed. We study its consistency for the quadratic risk as the number of observed curves tends to infinity, and this estimator is shown to achieve a near-minimax rate of convergence over a large class of Besov balls. This rate depends both on the smoothness of the common shape of the curves and on the decay of the Fourier coefficients of the density of the random shifts. Hence, this paper makes a connection between mean pattern estimation and the statistical analysis of linear inverse problems, offering a new point of view on curve registration and image warping problems. We also provide a new method to estimate the unknown random shifts between curves. Some numerical experiments are given to illustrate the performance of our approach and to compare it with an existing algorithm from the literature.
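    Because a Fourier transform turns each random shift into a phase factor, the deconvolution step can be sketched in a few lines (my toy, assuming a known Gaussian shift density and a hard frequency cutoff in place of the paper's wavelet thresholding):

        import numpy as np

        rng = np.random.default_rng(2)
        n, J = 256, 200
        t = np.linspace(0, 1, n, endpoint=False)
        f = np.exp(-((t - 0.5) ** 2) / 0.005)         # true common shape
        sigma = 0.03                                  # std of the random shifts
        shifts = rng.normal(0, sigma, J)
        curves = np.array([np.interp((t - s) % 1.0, t, f) for s in shifts])
        curves += rng.normal(0, 0.2, curves.shape)    # observation noise

        # Averaging the DFTs of the curves gives phi(k) * fhat(k), with phi
        # the characteristic function of the shift density; dividing by phi
        # (known here) deconvolves the mean pattern.
        k = np.fft.fftfreq(n, d=1.0 / n)              # integer frequencies
        phi = np.exp(-0.5 * (2 * np.pi * k * sigma) ** 2)
        mean_fft = np.fft.fft(curves, axis=1).mean(axis=0)
        keep = phi > 0.05                             # crude regularization
        dec = np.zeros_like(mean_fft)
        dec[keep] = mean_fft[keep] / phi[keep]
        f_hat = np.real(np.fft.ifft(dec))
        print("max reconstruction error:", np.abs(f_hat - f).max())

    The cutoff plays the role that the decay of the Fourier coefficients plays in the rates above: the faster the characteristic function of the shift density decays, the fewer frequencies survive and the harder the inverse problem becomes.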

    A proximal iteration for deconvolving Poisson noisy images using sparse representations

    We propose an image deconvolution algorithm for data contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are: First, we handle the Poisson noise properly by using the Anscombe variance-stabilizing transform, leading to a non-linear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties and a non-smooth sparsity-promoting penalty over the image representation coefficients (e.g., the $\ell_1$-norm). Third, a fast iterative backward-forward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions for the solution and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to objectively select the regularization parameter. Experiments are carried out to show the striking benefits gained from taking the Poisson statistics of the noise into account. These results also suggest that sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise, such as astronomy and microscopy.
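    Two of these ingredients can be sketched briefly (my code, under simplifying assumptions: the real method couples the Anscombe-stabilized fidelity with a wavelet or curvelet dictionary, whereas this surrogate promotes sparsity of the signal itself and works in 1-D):

        import numpy as np

        def anscombe(y):
            # Anscombe VST: Poisson counts -> approximately Gaussian data
            # with unit variance, whatever the underlying mean.
            return 2.0 * np.sqrt(y + 3.0 / 8.0)

        def soft(x, t):
            # Soft-thresholding, the proximity operator of t * ||.||_1.
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def forward_backward(b, psf, lam, n_iter=300):
            # Backward-forward (proximal) splitting for the surrogate
            # 0.5 * ||H x - b||^2 + lam * ||x||_1, H a circular convolution.
            H = np.fft.fft(psf)
            step = 1.0 / np.max(np.abs(H)) ** 2   # 1 / Lipschitz const. of the gradient
            x = np.zeros_like(b)
            for _ in range(n_iter):
                Hx = np.real(np.fft.ifft(H * np.fft.fft(x)))
                grad = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(Hx - b)))
                x = soft(x - step * grad, step * lam)   # gradient step, then prox step
            return x

        rng = np.random.default_rng(3)

        # The VST in action: Poisson variance of about 20 becomes about 1.
        counts = rng.poisson(20.0, 10_000)
        print(np.var(counts), np.var(anscombe(counts)))

        # Deconvolution on the stabilized, approximately Gaussian scale.
        n = 256
        x_true = np.zeros(n)
        x_true[[40, 90, 180]] = [50.0, 80.0, 30.0]
        psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
        psf = np.roll(psf / psf.sum(), -n // 2)   # centered, unit-mass PSF
        b = np.real(np.fft.ifft(np.fft.fft(psf) * np.fft.fft(x_true)))
        b += rng.normal(0, 1.0, n)                # unit variance, as after the VST
        x_hat = forward_backward(b, psf, lam=1.0)
        print("recovered spike locations:", np.flatnonzero(x_hat > 5.0))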

    Fast rates for noisy clustering

    The effect of errors in variables in empirical minimization is investigated. Given a loss $\ell$ and a set of decision rules $\mathcal{G}$, we prove a general upper bound for an empirical minimization based on a deconvolution kernel and a noisy sample $Z_i = X_i + \epsilon_i$, $i = 1, \dots, n$. We apply this general upper bound to give the rate of convergence for the expected excess risk in noisy clustering. A recent bound from Levrard proves that this rate is $\mathcal{O}(1/n)$ in the direct case, under Pollard's regularity assumptions. Here the effect of noisy measurements gives a rate of the form $\mathcal{O}(1/n^{\gamma/(\gamma+2\beta)})$, where $\gamma$ is the Hölder regularity of the density of $X$ and $\beta$ is the degree of ill-posedness.
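    To read the stated rate concretely (my arithmetic, not from the paper):

        \[
          \mathbb{E}\,[\text{excess risk}]
            \;=\; \mathcal{O}\!\left(n^{-\frac{\gamma}{\gamma+2\beta}}\right).
        \]
        % Sanity check: without measurement noise the problem is direct,
        % \beta = 0, the exponent is \gamma/\gamma = 1, and the O(1/n) rate
        % of the direct case is recovered; with \gamma = 2 and \beta = 1 the
        % rate degrades to O(n^{-1/2}), so greater ill-posedness slows learning.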