De-noising SPECT/PET Images Using Cross-Scale Regularization
De-noising of SPECT and PET images is a challenging task due to the inherent low signal-to-noise ratio of the acquired data. Wavelet-based multi-scale denoising methods typically apply thresholding operators to sub-band coefficients to eliminate noise components in spatial-frequency space prior to reconstruction. At high noise levels, the detail scales of the sub-band images are usually dominated by noise that cannot easily be removed by traditional thresholding schemes. To address this issue, a cross-scale regularization scheme is introduced that takes into account the cross-scale coherence of structured signals. Preliminary results show promising denoising performance on clinical SPECT and PET images from liver and brain studies. Wavelet thresholding was also compared to denoising with a brushlet expansion. The proposed regularization scheme eliminates the need to set threshold parameters, making the denoising process less tedious and better suited to clinical practice.
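The sub-band thresholding step that the cross-scale scheme builds on can be illustrated with a minimal sketch; the cross-scale regularization itself is not reproduced here. The sketch below uses a one-level 2D Haar transform with soft thresholding of the three detail bands. All function names, and the choice of Haar over the wavelets used in the paper, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform: (approx, (lh, hl, hh)) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # smooth approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, (lh, hl, hh)

def ihaar2d(ll, bands):
    """Exact inverse of haar2d (average/difference pairs are invertible)."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def soft(c, t):
    # Soft threshold: shrink coefficients toward zero by t.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(img, t):
    # Threshold only the detail bands; the smooth approximation is kept.
    ll, (lh, hl, hh) = haar2d(img)
    return ihaar2d(ll, (soft(lh, t), soft(hl, t), soft(hh, t)))
```

With `t = 0` the round trip is exact; with a threshold above the noise level in the detail bands, the reconstruction suppresses noise while the approximation band carries the structure.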
Tomographic reconstruction with non-linear diagonal estimators
In tomographic reconstruction, the inversion of the Radon transform in the presence of noise is numerically unstable. Reconstruction estimators are studied in which the regularization is performed by thresholding in a wavelet or wavelet packet decomposition. These estimators are efficient, and their optimality can be established when the decomposition provides a near-diagonalization of the inverse Radon transform operator and a compact representation of the object to be recovered. Several new estimators are investigated in different decompositions. Initial numerical results already exhibit a strong metric and perceptual improvement over current reconstruction methods. These estimators are implemented with fast non-iterative algorithms and are expected to outperform filtered back-projection and iterative procedures for PET, SPECT, and X-ray CT devices.
Regularization in Tomographic Reconstruction Using Thresholding Estimators
In tomographic medical devices such as single photon emission computed tomography or positron emission tomography cameras, image reconstruction is an unstable inverse problem, due to the presence of additive noise. A new family of regularization methods for reconstruction, based on a thresholding procedure in wavelet and wavelet packet (WP) decompositions, is studied. This approach is based on the fact that the decompositions provide a near-diagonalization of the inverse Radon transform and of prior information in medical images. A WP decomposition is chosen adaptively for the specific image to be restored. Corresponding algorithms have been developed for both two-dimensional and fully three-dimensional reconstruction. These procedures are fast, non-iterative, and flexible. Numerical results suggest that they outperform filtered back-projection and iterative procedures such as ordered-subset expectation-maximization.
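The diagonal (coefficient-wise) nonlinearity at the heart of such thresholding estimators is plain hard or soft thresholding, applied sub-band by sub-band. A minimal numpy sketch follows; the function names, and the particular per-band threshold scaling `lam * sigma * 2**j` (motivated by the scale-dependent noise amplification of a near-diagonalized inverse Radon transform), are illustrative assumptions rather than the thresholds derived in the paper.

```python
import numpy as np

def hard_threshold(c, t):
    # Keep coefficients whose magnitude exceeds t; zero the rest.
    return np.where(np.abs(c) > t, c, 0.0)

def soft_threshold(c, t):
    # Shrink every coefficient toward zero by t; small ones become zero.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def diagonal_estimator(subbands, sigma, lam=3.0):
    # One threshold per sub-band: noise amplification grows with the
    # scale index j, so the threshold is scaled as lam * sigma * 2**j
    # (an illustrative choice of scaling law).
    return [soft_threshold(c, lam * sigma * 2**j)
            for j, c in enumerate(subbands)]
```

Hard thresholding keeps surviving coefficients unchanged, while soft thresholding additionally shrinks them, which trades a small bias for smoother reconstructions.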
A novel approach to restoration of Poissonian images
The problem of reconstruction of digital images from their degraded measurements is regarded as a problem of central importance in various fields of engineering and imaging sciences. In such cases, the degradation is typically caused by the resolution limitations of the imaging device in use and/or by the destructive influence of measurement noise. Specifically, when the noise obeys a Poisson probability law, standard approaches to the problem of image reconstruction are based on fixed-point algorithms following the methodology proposed by Richardson and Lucy in the early 1970s. The practice of using such methods, however, shows that their convergence properties tend to deteriorate at relatively high noise levels, as is typical in so-called low-count settings. This work introduces a novel method for de-noising and/or de-blurring of digital images that have been corrupted by Poisson noise. The proposed method is derived within the framework of MAP estimation, under the assumption that the image of interest can be sparsely represented in the domain of a properly designed linear transform. Consequently, a shrinkage-based iterative procedure is proposed, which guarantees the maximization of an associated maximum-a-posteriori criterion. It is shown in a series of both computer-simulated and real-life experiments that the proposed method outperforms a number of existing alternatives in terms of stability, precision, and computational efficiency.
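The Richardson–Lucy fixed point that serves as the classical baseline here is easy to state in code. Below is a minimal 1-D numpy sketch; the function name and defaults are illustrative, not the paper's. It also shows why the iterates stay non-negative: each update is a multiplicative correction by the back-projected ratio of data to model.

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=50):
    """Classic Richardson-Lucy fixed-point iteration for Poisson deblurring.

    y: observed (blurred, noisy) 1-D signal.
    psf: 1-D point-spread function; normalised here to sum to 1.
    """
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                       # adjoint of the convolution
    x = np.full(y.shape, y.mean())             # flat, positive initial guess
    for _ in range(n_iter):
        model = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(model, 1e-12)   # guard against division by zero
        x = x * np.convolve(ratio, psf_flip, mode="same")
    return x
```

On a noiseless blurred spike train the iteration re-concentrates mass at the spike locations; in low-count (high-noise) settings this same multiplicative update begins to amplify noise, which is the convergence deterioration the abstract refers to.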
Modern Regularization Methods for Inverse Problems
Regularization methods are a key tool in the solution of inverse problems. They are used to introduce prior knowledge and allow a robust approximation of ill-posed (pseudo-) inverses. In the last two decades interest has shifted from linear to nonlinear regularization methods, even for linear inverse problems. The aim of this paper is to provide a reasonably comprehensive overview of this shift towards modern nonlinear regularization methods, including their analysis, applications and issues for future research.
In particular we will discuss variational methods and techniques derived from them, since they have attracted much recent interest and link to other fields, such as image processing and compressed sensing. We further point to developments related to statistical inverse problems, multiscale decompositions and learning theory.
Leverhulme Trust Early Career Fellowship ‘Learning from mistakes: a supervised feedback-loop for imaging applications’
Isaac Newton Trust
Cantab Capital Institute for the Mathematics of Information
ERC Grant EU FP 7 - ERC Consolidator Grant 615216 LifeInverse
German Ministry for Science and Education (BMBF) project MED4D
EPSRC grant EP/K032208/
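The classical linear method against which the survey's nonlinear approaches are contrasted is Tikhonov regularization, which has a closed-form solution via the normal equations. A minimal numpy sketch follows; the example matrix and regularization weights used in the check are illustrative assumptions.

```python
import numpy as np

def tikhonov(A, y, lam):
    """Solve min_x ||A x - y||^2 + lam ||x||^2 via the normal equations:
    (A^T A + lam I) x = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

As `lam` grows, the solution norm shrinks and the data residual grows, which is exactly the stability/fidelity trade-off that regularization controls. The nonlinear methods surveyed replace the quadratic penalty `||x||^2` with, for example, a total variation or an l1 sparsity term, which no longer admit a closed-form solution.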