7,580 research outputs found

    Poisson noise reduction with non-local PCA

    Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising appears to be highly competitive in very low light regimes. Comment: erratum: the image "man" is wrongly named "peppers" in the journal version.
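
    The paper's Poisson-adapted PCA is considerably more involved than standard PCA, but the patch-based PCA ingredient is easy to illustrate. Below is a minimal Python sketch, not the authors' algorithm: it Gaussianizes the Poisson noise with an Anscombe transform (a common stand-in for a dedicated Poisson model), denoises non-overlapping patches by truncating their principal components, and inverts the transform. The patch size, rank, and all function names are illustrative assumptions.

```python
import numpy as np

def anscombe(x):
    # Variance-stabilizing transform: Poisson counts -> approx. unit-variance Gaussian
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse (biased at low counts, but fine for a sketch)
    return (y / 2.0) ** 2 - 3.0 / 8.0

def pca_denoise_patches(img, patch=8, rank=4):
    # Collect non-overlapping patches as rows of a matrix
    h, w = img.shape
    ph, pw = h // patch * patch, w // patch * patch
    P = (img[:ph, :pw]
         .reshape(ph // patch, patch, pw // patch, patch)
         .swapaxes(1, 2)
         .reshape(-1, patch * patch))
    mean = P.mean(axis=0)
    U, s, Vt = np.linalg.svd(P - mean, full_matrices=False)
    # Keep only the leading principal components (low-rank patch model)
    P_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank] + mean
    out = img.copy()
    out[:ph, :pw] = (P_hat.reshape(ph // patch, pw // patch, patch, patch)
                     .swapaxes(1, 2).reshape(ph, pw))
    return out

# Tiny demo on a flat scene with mean 2 photons per pixel
noisy = np.random.poisson(lam=np.full((128, 128), 2.0)).astype(float)
denoised = inverse_anscombe(pca_denoise_patches(anscombe(noisy)))
```

    The Anscombe approximation is known to degrade at very low counts, which is precisely the regime the abstract targets and where a dedicated Poisson PCA is expected to help.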

    Combining Contrast Invariant L1 Data Fidelities with Nonlinear Spectral Image Decomposition

    This paper focuses on multi-scale approaches for variational methods and corresponding gradient flows. Recently, for convex regularization functionals such as total variation, new theory and algorithms for nonlinear eigenvalue problems via nonlinear spectral decompositions have been developed. Those methods open new directions for advanced image filtering. However, for effective use in image segmentation and shape decomposition, a clear interpretation of the spectral response with respect to size and intensity scales is needed but lacking in current approaches. In this context, $L^1$ data fidelities are particularly helpful due to their interesting multi-scale properties such as contrast invariance. Hence, the novelty of this work is the combination of $L^1$-based multi-scale methods with nonlinear spectral decompositions. We compare $L^1$ with $L^2$ scale-space methods in view of spectral image representation and decomposition. We show that the contrast invariant multi-scale behavior of $L^1$-TV promotes sparsity in the spectral response, providing more informative decompositions. We provide a numerical method and analyze synthetic and biomedical images for which the decomposition leads to improved segmentation. Comment: 13 pages, 7 figures, conference SSVM 2017.
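
    The nonlinear spectral decomposition itself is defined via the TV gradient flow and is beyond a few lines of code, but the multi-scale intuition can be sketched cheaply. The Python fragment below is a crude stand-in, not the paper's method: it sweeps the regularization weight of a TV-$L^2$ (ROF-type) denoiser and measures how much signal each scale removes, so the resulting curve plays the role of a very rough spectral response. It assumes scikit-image is available; the weight grid is an arbitrary choice.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

# Crude multi-scale "response": how much detail each TV-L2 scale removes.
# This is a stand-in for the nonlinear spectral decomposition in the paper,
# which is defined via the TV gradient flow, not via sweeping the weight.
img = img_as_float(data.camera())
weights = np.linspace(0.01, 0.5, 20)          # larger weight = coarser scale
levels = [denoise_tv_chambolle(img, weight=w) for w in weights]

# Difference bands between successive scales, plus the finest residual
bands = [img - levels[0]] + [levels[i - 1] - levels[i] for i in range(1, len(levels))]
response = [np.abs(b).sum() for b in bands]   # "energy" removed at each scale

for w, r in zip(weights, response):
    print(f"weight {w:.3f}: band L1 mass {r:.1f}")
```

    A sparse response, with mass concentrated in a few bands, is the behavior the abstract attributes to the contrast invariant $L^1$ fidelity; the $L^2$ sweep above typically spreads mass across many bands.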

    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning, and the solutions have been utilized by other sciences such as the space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new, that are generally unknown to most of the astronomical community but are vital to the analysis and visualization of complex data sets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other. Comment: 24 pages, 8 figures, 1 table. Accepted for publication in "Advances in Astronomy", special issue "Robotic Astronomy".

    Graph- and finite element-based total variation models for the inverse problem in diffuse optical tomography

    Total variation (TV) is a powerful regularization method that has been widely applied in different imaging applications, but it is difficult to apply to diffuse optical tomography (DOT) image reconstruction (the inverse problem) due to complex and unstructured geometries, the non-linearity of the data fitting and regularization terms, and the non-differentiability of the regularization term. We develop several approaches to overcome these difficulties by: i) defining discrete differential operators for unstructured geometries using both finite element and graph representations; ii) developing an optimization algorithm based on the alternating direction method of multipliers (ADMM) for the non-differentiable and non-linear minimization problem; iii) investigating isotropic and anisotropic variants of TV regularization, and comparing their finite element- and graph-based implementations. These approaches are evaluated in experiments on simulated data and on real data acquired from a tissue phantom. Our results show that both FEM- and graph-based TV regularization are able to accurately reconstruct both sparse and non-sparse distributions without the over-smoothing effect of Tikhonov regularization or the over-sparsifying effect of $L_1$ regularization. The graph representation was found to outperform the FEM method for low-resolution meshes, and the FEM method was found to be more accurate for high-resolution meshes. Comment: 24 pages, 11 figures. Revised version includes revised figures and improved clarity.
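
    The DOT forward model is non-linear, which is what makes points i)-iii) above non-trivial; still, the graph-based operator of i) and the ADMM splitting of ii) can be sketched for a linear least-squares surrogate. The following Python sketch is an illustration under that simplification, not the paper's reconstruction code: it builds a graph incidence matrix as the discrete gradient and runs ADMM with soft thresholding for anisotropic graph TV. The chain graph, operator sizes, and parameter values are arbitrary choices for the demo.

```python
import numpy as np

def graph_incidence(edges, n):
    # Discrete gradient on a graph: one row per edge, (+1, -1) on its endpoints
    D = np.zeros((len(edges), n))
    for k, (i, j) in enumerate(edges):
        D[k, i], D[k, j] = 1.0, -1.0
    return D

def admm_graph_tv(A, b, D, lam=0.1, rho=1.0, iters=200):
    # ADMM for min_x ||Ax - b||^2 + lam * ||Dx||_1  (linear surrogate for DOT)
    x = np.zeros(A.shape[1])
    z = np.zeros(D.shape[0])   # split variable z ~ Dx
    u = np.zeros(D.shape[0])   # scaled dual variable
    Q = np.linalg.inv(2 * A.T @ A + rho * D.T @ D)  # fine for a small demo
    for _ in range(iters):
        x = Q @ (2 * A.T @ b + rho * D.T @ (z - u))      # quadratic x-update
        v = D @ x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u += D @ x - z                                   # dual ascent step
    return x

# Tiny demo: chain graph, random forward operator, piecewise-constant truth
rng = np.random.default_rng(0)
n = 50
D = graph_incidence([(i, i + 1) for i in range(n - 1)], n)
x_true = np.repeat([0.0, 1.0, 0.0], [20, 15, 15])
A = rng.normal(size=(30, n))
b = A @ x_true + 0.01 * rng.normal(size=30)
x_rec = admm_graph_tv(A, b, D, lam=0.5)
```

    The same splitting carries over when the quadratic x-update is replaced by a Gauss-Newton step for the non-linear DOT fidelity, which is roughly the role ADMM plays in point ii).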

    CT Image Reconstruction by Spatial-Radon Domain Data-Driven Tight Frame Regularization

    This paper proposes a spatial-Radon domain CT image reconstruction model based on data-driven tight frames (SRD-DDTF). The proposed SRD-DDTF model combines the joint image and Radon domain inpainting model of \cite{Dong2013X} with the data-driven tight frames for image denoising of \cite{cai2014data}. It differs from existing models in that both the CT image and its corresponding high-quality projection image are reconstructed simultaneously using sparsity priors given by tight frames that are adaptively learned from the data to provide optimal sparse approximations. An alternating minimization algorithm is designed to solve the proposed model, which is nonsmooth and nonconvex, and a convergence analysis of the algorithm is provided. Numerical experiments show that the SRD-DDTF model is superior to the model of \cite{Dong2013X}, especially in recovering subtle structures in the images.
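
    The data-driven tight frame construction of \cite{cai2014data} is simple enough to sketch: alternate a hard-thresholding sparse coding step with an SVD-based (orthogonal Procrustes) update of the filter bank. The Python fragment below is a minimal single-domain sketch of that ingredient only; the SRD-DDTF model additionally couples the image and Radon domains and handles the CT forward operator, which this omits. Patch size, threshold, and iteration count are illustrative assumptions.

```python
import numpy as np

def ddtf_learn(Y, n_iters=10, thresh=0.1):
    # Data-driven tight frame learning in the spirit of \cite{cai2014data}:
    # alternate hard-thresholded sparse coding with an SVD-based filter update.
    # Y: (p*p, N) matrix whose columns are vectorized image patches.
    d = Y.shape[0]
    W = np.eye(d)                      # initial (trivial) orthonormal filter bank
    for _ in range(n_iters):
        C = W.T @ Y
        C[np.abs(C) < thresh] = 0.0    # hard thresholding = sparse coding step
        U, _, Vt = np.linalg.svd(Y @ C.T)
        W = U @ Vt                     # Procrustes: best orthonormal filters for C
    return W

def extract_patches(img, p=8):
    # Non-overlapping p x p patches as columns (overlapping patches also work)
    h, w = (s // p * p for s in img.shape)
    P = (img[:h, :w].reshape(h // p, p, w // p, p)
         .swapaxes(1, 2).reshape(-1, p * p))
    return P.T

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))       # placeholder image for the demo
W = ddtf_learn(extract_patches(img))
```

    In the full model this learning step is interleaved with the image and projection updates of the alternating minimization, so the frames adapt to the reconstruction as it improves.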