
    Poisson noise reduction with non-local PCA

    Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising appears to be highly competitive in very low light regimes. Comment: erratum: the image "man" is wrongly named "peppers" in the journal version.
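    As a rough illustration of patch-based PCA denoising for Poisson data, the sketch below stabilises the variance with an Anscombe transform, truncates a PCA expansion of non-overlapping patches, and inverts the transform. It is not the paper's non-local Poisson PCA; the function name, patch size, and `keep` fraction are illustrative assumptions.

```python
import numpy as np

def denoise_poisson_pca(img, patch=8, keep=0.1):
    """Minimal patch-PCA denoiser for photon-limited images (illustrative only)."""
    # Anscombe transform: Poisson counts -> approximately unit-variance Gaussian
    a = 2.0 * np.sqrt(img + 3.0 / 8.0)

    h, w = a.shape
    h, w = h - h % patch, w - w % patch            # crop to a multiple of the patch size
    patches = (a[:h, :w]
               .reshape(h // patch, patch, w // patch, patch)
               .swapaxes(1, 2)
               .reshape(-1, patch * patch))

    # PCA on the patch matrix, keeping only the strongest components
    mean = patches.mean(axis=0)
    u, s, vt = np.linalg.svd(patches - mean, full_matrices=False)
    s[max(1, int(keep * len(s))):] = 0.0
    clean = (u * s) @ vt + mean

    # Reassemble the image and apply a simple algebraic inverse of the Anscombe transform
    out = (clean.reshape(h // patch, w // patch, patch, patch)
                .swapaxes(1, 2)
                .reshape(h, w))
    return (out / 2.0) ** 2 - 3.0 / 8.0
```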

    Performance bounds for expander-based compressed sensing in Poisson noise

    This paper provides performance bounds for compressed sensing in the presence of Poisson noise using expander graphs. The Poisson noise model is appropriate for a variety of applications, including low-light imaging and digital streaming, where the signal-independent and/or bounded noise models used in the compressed sensing literature are no longer applicable. In this paper, we develop a novel sensing paradigm based on expander graphs and propose a MAP algorithm for recovering sparse or compressible signals from Poisson observations. The geometry of the expander graphs and the positivity of the corresponding sensing matrices play a crucial role in establishing the bounds on the signal reconstruction error of the proposed algorithm. We support our results with experimental demonstrations of reconstructing average packet arrival rates and instantaneous packet counts at a router in a communication network, where the arrivals of packets in each flow follow a Poisson process. Comment: revised version; accepted to IEEE Transactions on Signal Processing.
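    The toy sketch below illustrates the flavour of MAP recovery from Poisson measurements taken through a sparse 0/1 matrix. A random left-regular bipartite graph stands in for an expander, and simple multiplicative (EM-style) updates stand in for the paper's decoder; all names and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_left_regular_matrix(m, n, d=4):
    """0/1 sensing matrix from a random left-d-regular bipartite graph.
    A genuine expander needs stronger guarantees; this is only a stand-in."""
    A = np.zeros((m, n))
    for j in range(n):
        A[rng.choice(m, size=d, replace=False), j] = 1.0
    return A

def poisson_map(A, y, tau=0.1, iters=200):
    """Multiplicative (EM-style) updates for
       minimize  1'Ax - y'log(Ax) + tau*||x||_1   subject to x >= 0."""
    x = np.full(A.shape[1], y.mean() / max(A.sum(axis=0).mean(), 1e-12))
    colsum = A.sum(axis=0) + tau
    for _ in range(iters):
        Ax = np.maximum(A @ x, 1e-12)
        x *= (A.T @ (y / Ax)) / colsum
    return x

# Toy example: sparse packet-arrival rates observed as Poisson counts at a router
n, m, k = 200, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(20.0, 50.0, k)
A = random_left_regular_matrix(m, n)
y = rng.poisson(A @ x_true)
x_hat = poisson_map(A, y)
```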

    Compressed sensing performance bounds under Poisson noise

    This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is non-additive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical ℓ2-ℓ1 minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the signal-dependent part of the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition. Comment: 12 pages, 3 pdf figures; accepted for publication in IEEE Transactions on Signal Processing.
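    In symbols, the reconstruction objective sketched in the abstract is a penalized negative Poisson log-likelihood; the notation below (A for the feasible positivity- and flux-preserving sensing matrix, y for the photon counts, pen(·) for the sparsity penalty, τ for its weight) is assumed for illustration rather than taken from the paper.

```latex
\[
  \hat{f} \;=\; \operatorname*{arg\,min}_{f \succeq 0}\;
  \underbrace{\sum_{i=1}^{m}\Big[(Af)_i - y_i \log (Af)_i\Big]}_{-\log p(y \mid Af)\;+\;\text{const}}
  \;+\; \tau\,\operatorname{pen}(f)
\]
```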

    Sparse Poisson Intensity Reconstruction Algorithms

    The observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f) from Poisson data (y) cannot be accomplished by minimizing a conventional l2-l1 objective function. The problem addressed in this paper is the estimation of f from y in an inverse problem setting, where (a) the number of unknowns may potentially be larger than the number of observations and (b) f admits a sparse approximation in some basis. The optimization formulation considered in this paper uses a negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). This paper describes computational methods for solving the constrained sparse Poisson inverse problem. In particular, the proposed approach incorporates key ideas of using quadratic separable approximations to the objective function at each iteration and computationally efficient partition-based multiscale estimation methods. Comment: 4 pages, 4 figures, PDFLaTeX, submitted to IEEE Workshop on Statistical Signal Processing, 200
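    A minimal sketch of the kind of iteration described above, assuming the canonical basis and an l1 sparsity penalty: each step minimizes a separable quadratic surrogate of the negative Poisson log-likelihood, which reduces to a gradient step followed by soft-thresholding and projection onto the nonnegative orthant. The fixed surrogate curvature used here is a simplification (not the paper's adaptive step-size rule), and the names are illustrative.

```python
import numpy as np

def sparse_poisson_recon(A, y, tau=0.05, iters=300, alpha=None):
    """Solve  minimize  1'Af - y'log(Af) + tau*||f||_1  s.t. f >= 0
    with a simple separable-quadratic (proximal-gradient) iteration."""
    m, n = A.shape
    f = np.full(n, max(y.mean(), 1.0) / max(A.sum(axis=0).mean(), 1e-12))
    eps = 1e-10
    if alpha is None:
        # crude fixed curvature for the quadratic surrogate (illustrative only)
        alpha = max(float(y.max()), 1.0) * np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        Af = np.maximum(A @ f, eps)
        grad = A.T @ (1.0 - y / Af)            # gradient of the negative log-likelihood
        z = f - grad / alpha                   # minimizer of the separable quadratic model
        f = np.maximum(z - tau / alpha, 0.0)   # soft-threshold + nonnegativity projection
    return f
```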

    Microscopy with ultraviolet surface excitation for rapid slide-free histology.

    Histologic examination of tissues is central to the diagnosis and management of neoplasms and many other diseases, and is a foundational technique for preclinical and basic research. However, commonly used bright-field microscopy requires prior preparation of micrometre-thick tissue sections mounted on glass slides, a process that can require hours or days, that contributes to cost, and that delays access to critical information. Here, we introduce a simple, non-destructive, slide-free technique that, within minutes, provides high-resolution diagnostic histological images resembling those obtained from conventional haematoxylin-and-eosin histology. The approach, which we named microscopy with ultraviolet surface excitation (MUSE), can also generate shape and colour-contrast information. MUSE relies on ~280-nm ultraviolet light to restrict the excitation of conventional fluorescent stains to tissue surfaces, and it has no significant effects on downstream molecular assays (including fluorescence in situ hybridization and RNA-seq). MUSE promises to improve the speed and efficiency of patient care in both state-of-the-art and low-resource settings, and to provide opportunities for rapid histology in research.

    Controlling the error in fMRI: Hypothesis testing or set estimation?

    This paper describes a new methodology and associated theoretical analysis for rapid and accurate extraction of activation regions from functional MRI data. Most fMRI data analysis methods in use today adopt a hypothesis testing approach, in which the BOLD signals in individual voxels or clusters of voxels are compared to a threshold. In order to obtain statistically meaningful results, the testing must be limited to very small numbers of voxels/clusters or the threshold must be set extremely high. Furthermore, voxelization introduces partial volume effects (PVE), which present a persistent error in the localization of activity that no testing procedure can overcome. We abandon the multiple hypothesis testing approach in this paper, and instead advocate a new approach based on set estimation. Rather than attempting to control the probability of error, our method aims to control the spatial volume of the error. To do this, we view the activation regions as level sets of the statistical parametric map (SPM) under consideration. The estimation of the level sets, in the presence of noise, is then treated as a statistical inference problem. We propose a level set estimator and show that the expected volume of the error is proportional to the side length of a voxel. Since PVEs are unavoidable and produce errors of the same order, this is the smallest error volume achievable. Experiments demonstrate the advantages of this new theory and methodology, and the statistical reasonability of controlling the volume of the error rather than the probability of error. Index Terms — Magnetic resonance imaging, Signal detection, Neuroimaging, fMRI
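    To make the set-estimation viewpoint concrete, the sketch below computes a plug-in level-set estimate {v : SPM(v) >= gamma} after local averaging of the statistical parametric map. It is only an illustration of the idea, not the estimator analysed in the paper; the smoothing radius and threshold are assumed parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def level_set_estimate(spm, gamma, radius=1):
    """Plug-in level-set estimate of an activation region from an SPM volume
    (illustrative only; not the paper's estimator)."""
    smoothed = uniform_filter(spm, size=2 * radius + 1)   # average over a small neighbourhood
    return smoothed >= gamma                              # boolean activation mask
```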