Monte Carlo-based Noise Compensation in Coil Intensity Corrected Endorectal MRI
Background: Prostate cancer is one of the most common forms of cancer found
in males, making early diagnosis important. Magnetic resonance imaging (MRI)
has been useful in visualizing and localizing tumor candidates, and with the
use of endorectal coils (ERC), the signal-to-noise ratio (SNR) can be
improved. These coils, however, introduce intensity inhomogeneities, and the
surface coil intensity correction built into MRI scanners is used to reduce
them. This correction, typically performed at the scanner level, leads to
noise amplification and noise-level variations. Methods: In this study, we
introduce a new Monte Carlo-based noise compensation approach for coil
intensity corrected endorectal MRI that effectively compensates for noise
while preserving details within the prostate. The approach accounts for the
ERC SNR profile via a spatially-adaptive noise model for
correcting non-stationary noise variations. Such a method is particularly
useful for improving the image quality of endorectal MRI data that has been
coil intensity corrected at the scanner level and for which the original raw
data is not available. Results: SNR and contrast-to-noise ratio (CNR)
analysis in patient experiments demonstrates average improvements of 11.7 dB
and 11.2 dB, respectively, over uncorrected endorectal MRI, and shows strong
performance compared to existing approaches. Conclusions: A new noise
compensation method was developed to improve the quality of endorectal MRI
data that has been coil intensity corrected at the scanner level. We show
that the proposed approach achieves promising noise compensation performance,
which is particularly important for processing scanner-level coil intensity
corrected endorectal MRI data when the original raw data is not available.
Comment: 23 pages
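The abstract does not give implementation details, but its two ingredients, a noise model whose variance follows the ERC's spatial SNR profile and Monte Carlo sampling for compensation, can be sketched. The code below is a loose illustration only, not the authors' method: the distance-based noise profile and both function names are hypothetical, and the sampler is a simple stochastic neighbourhood-weighted estimator standing in for the paper's actual Monte Carlo scheme.

```python
import numpy as np

def spatially_adaptive_sigma(shape, coil_pos, sigma_near, sigma_far):
    """Hypothetical ERC noise profile: after coil intensity correction, the
    noise level grows with distance from the endorectal coil."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    d = np.hypot(yy - coil_pos[0], xx - coil_pos[1])
    return sigma_near + (sigma_far - sigma_near) * d / d.max()

def mc_denoise(img, sigma_map, n_samples=64, radius=7, seed=0):
    """Monte Carlo estimate of each pixel's noise-free value: draw random
    neighbours and weight them by similarity under the *local* noise level,
    so the compensation adapts to the non-stationary noise."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            ys = rng.integers(max(0, y - radius), min(h, y + radius + 1), n_samples)
            xs = rng.integers(max(0, x - radius), min(w, x + radius + 1), n_samples)
            s = img[ys, xs]
            wgt = np.exp(-(s - img[y, x]) ** 2 / (2 * sigma_map[y, x] ** 2))
            out[y, x] = np.average(s, weights=wgt + 1e-12)
    return out
```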
Learning sparse representations of depth
This paper introduces a new method for learning and inferring sparse
representations of depth (disparity) maps. The proposed algorithm relaxes
the usual stationary-noise assumption in sparse coding. This enables learning
from data corrupted by spatially varying noise or uncertainty, as typically
produced by laser range scanners or structured-light depth cameras.
Sparse representations are learned from the Middlebury database disparity maps
and then exploited in a two-layer graphical model for inferring depth from
stereo, by including a sparsity prior on the learned features. Since they
capture higher-order dependencies in the depth structure, these priors can
complement smoothness priors commonly used in depth inference based on Markov
Random Field (MRF) models. Inference on the proposed graph is achieved using an
alternating iterative optimization technique, where the first layer is solved
using an existing MRF-based stereo matching algorithm, then held fixed as the
second layer is solved using the proposed non-stationary sparse coding
algorithm. This leads to a general method for improving the solutions of
state-of-the-art MRF-based depth estimation algorithms. Our experimental
results first show that depth inference using learned representations leads
to state-of-the-art denoising of depth maps obtained from laser range
scanners and a time-of-flight camera. Furthermore, we show that adding sparse
priors improves the results of two depth estimation methods: the classical
graph cut algorithm by Boykov et al. and the more recent algorithm of
Woodford et al.
Comment: 12 pages
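To make the non-stationary noise idea concrete, here is a minimal sketch of sparse coding with a per-pixel noise model: a weighted ISTA solver in which each pixel's residual is scaled by its precision, so unreliable depth measurements influence the code less. The function name and interface are hypothetical; the paper's actual learning and inference procedure is more involved.

```python
import numpy as np

def nonstationary_sparse_code(patch, D, noise_std, lam=0.1, n_iter=200):
    """Sparse-code one vectorized depth patch over dictionary D under a
    non-stationary Gaussian noise model (weighted ISTA; illustrative only)."""
    w = 1.0 / (noise_std ** 2 + 1e-8)                     # per-pixel precision
    WD = w[:, None] * D                                   # W @ D, W diagonal
    L = np.linalg.norm(np.sqrt(w)[:, None] * D, 2) ** 2   # Lipschitz constant
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = z - WD.T @ (D @ z - patch) / L                # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return z
```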
Recent Progress in Image Deblurring
This paper comprehensively reviews recent developments in image deblurring,
covering non-blind/blind and spatially invariant/variant techniques. These
techniques share the objective of inferring a latent sharp image from one or
several corresponding blurry images, while blind deblurring techniques must
additionally estimate an accurate blur kernel. Considering the critical role
of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the
viewpoint of how they handle ill-posedness, a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian
inference frameworks, variational methods, sparse representation-based
methods, homography-based modeling, and region-based methods. Despite this
progress, image deblurring, especially the blind case, remains limited by
complex application conditions that make the blur kernel hard to estimate
and often spatially variant. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures
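As a concrete reference point for the non-blind, spatially invariant case, the sketch below implements classical Wiener deconvolution, a textbook baseline behind many of the Bayesian and variational methods the survey reviews. It is a generic illustration, not a method from the survey; `nsr` is an assumed constant noise-to-signal power ratio standing in for the unknown noise statistics.

```python
import numpy as np

def wiener_deblur(blurry, kernel, nsr=0.01):
    """Non-blind, spatially invariant deblurring via Wiener filtering.
    `nsr` regularizes the otherwise ill-posed inversion where the kernel
    spectrum is near zero. The kernel origin is assumed at index (0, 0)."""
    H = np.fft.fft2(kernel, s=blurry.shape)    # kernel transfer function
    B = np.fft.fft2(blurry)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener inverse filter
    return np.real(np.fft.ifft2(W * B))
```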
Image Deblurring and Super-resolution by Adaptive Sparse Domain Selection and Adaptive Regularization
As a powerful statistical image modeling technique, sparse representation has
been successfully used in various image restoration applications. The
success of sparse representation stems from the development of l1-norm
optimization techniques and from the fact that natural images are
intrinsically sparse in some domain. The image restoration quality largely
depends on whether the employed sparse domain can represent the underlying
image well. Considering that the
contents can vary significantly across different images or different patches in
a single image, we propose to learn various sets of bases from a pre-collected
dataset of example image patches; for a given patch to be processed, one set
of bases is then adaptively selected to characterize the local sparse domain.
We further introduce two adaptive regularization terms into the sparse
representation framework. First, a set of autoregressive (AR) models is
learned from the dataset of example image patches, and the AR models that
best fit a given patch are adaptively selected to regularize the local image
structures.
Second, the image non-local self-similarity is introduced as another
regularization term. In addition, the sparsity regularization parameter is
adaptively estimated for better image restoration performance. Extensive
experiments on image deblurring and super-resolution validate that by using
adaptive sparse domain selection and adaptive regularization, the proposed
method achieves much better results than many state-of-the-art algorithms in
terms of both PSNR and visual perception.
Comment: 35 pages. This paper is under review at IEEE TIP.
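A minimal sketch of the adaptive sparse domain selection step, under the assumption (consistent with the abstract) that the pre-learned sets of bases come from clustered example patches: each incoming patch is routed to the sub-dictionary of its nearest cluster centroid before sparse coding. The names and the nearest-centroid rule are illustrative, not the authors' exact procedure.

```python
import numpy as np

def select_subdictionary(patch, centroids, dictionaries):
    """Adaptive sparse domain selection (illustrative): route the patch to
    the pre-learned sub-dictionary whose cluster centroid it is closest to,
    so the local sparse domain matches the patch content."""
    p = patch.ravel() - patch.ravel().mean()   # compare structure, not brightness
    dists = [np.linalg.norm(p - c) for c in centroids]
    return dictionaries[int(np.argmin(dists))]

# Hypothetical usage: K sub-dictionaries learned offline from K patch clusters;
# the selected basis then replaces a single universal dictionary when
# sparse-coding and reconstructing the patch.
```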
BLADE: Filter Learning for General Purpose Computational Photography
The Rapid and Accurate Image Super Resolution (RAISR) method of Romano,
Isidoro, and Milanfar is a computationally efficient image upscaling method
using a trained set of filters. We describe a generalization of RAISR, which we
name Best Linear Adaptive Enhancement (BLADE). This approach is a trainable
edge-adaptive filtering framework that is general, simple, computationally
efficient, and useful for a wide range of problems in computational
photography. We show applications to operations that may appear in a camera
pipeline, including denoising, demosaicing, and stylization.
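The following sketch conveys the flavor of RAISR/BLADE-style inference: a bank of small linear filters trained offline, with the filter applied at each pixel chosen from local gradient statistics. Here pixels are bucketed by gradient orientation only, whereas RAISR and BLADE also bucket by gradient strength and coherence; `filter_bank` is a hypothetical list of trained 2-D filters, one per orientation bin.

```python
import numpy as np
from scipy.ndimage import convolve, sobel

def blade_apply(img, filter_bank):
    """Edge-adaptive filtering sketch: pick a trained linear filter per pixel
    from its local gradient orientation, then apply it."""
    n_bins = len(filter_bank)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    theta = np.mod(np.arctan2(gy, gx), np.pi)              # orientation in [0, pi)
    bins = np.minimum((theta / np.pi * n_bins).astype(int), n_bins - 1)
    out = np.zeros_like(img, dtype=float)
    for b in range(n_bins):                 # naive: filter with every bank
        mask = bins == b                    # member, then select per pixel
        out[mask] = convolve(img.astype(float), filter_bank[b])[mask]
    return out
```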
Joint Total Variation ESTATICS for Robust Multi-Parameter Mapping
Quantitative magnetic resonance imaging (qMRI) derives tissue-specific
parameters -- such as the apparent transverse relaxation rate R2*, the
longitudinal relaxation rate R1 and the magnetisation transfer saturation --
that can be compared across sites and scanners and carry important information
about the underlying microstructure. The multi-parameter mapping (MPM) protocol
takes advantage of multi-echo acquisitions with variable flip angles to extract
these parameters in a clinically acceptable scan time. In this context,
ESTATICS performs a joint log-linear fit of multiple echo series to extract R2*
and multiple extrapolated intercepts, thereby improving robustness to motion
and decreasing the variance of the estimators. In this paper, we extend this
model in two ways: (1) by introducing a joint total variation (JTV) prior on
the intercepts and decay, and (2) by deriving a nonlinear maximum a
posteriori estimate. We evaluated the proposed algorithm by predicting
left-out echoes in a rich single-subject dataset. In this validation, we
outperformed other state-of-the-art methods and additionally showed that the
proposed approach greatly reduces the variance of the estimated maps, without
introducing bias.
Comment: 11 pages, 2 figures, 1 table, conference paper, accepted at MICCAI 2020.
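The joint log-linear fit at the heart of ESTATICS reduces to ordinary least squares on the log-signal: each contrast contributes its own intercept (the log of the extrapolated signal at TE = 0) while all echo series share one R2* slope. Below is a minimal sketch under that reading of the abstract; it omits the JTV prior and the nonlinear MAP refinement that the paper adds, and the function name is our own.

```python
import numpy as np

def estatics_loglinear(echo_series, echo_times):
    """Joint log-linear fit in the spirit of ESTATICS: one log-intercept per
    contrast, a single shared R2* decay across all echo series. Signals are
    assumed positive so the log is defined."""
    n = len(echo_series)
    rows, y = [], []
    for c, (signals, tes) in enumerate(zip(echo_series, echo_times)):
        for s, te in zip(signals, tes):
            row = np.zeros(n + 1)
            row[c] = 1.0        # contrast-specific intercept log S0_c
            row[n] = -te        # shared decay: log S = log S0_c - R2* * TE
            rows.append(row)
            y.append(np.log(s))
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(y), rcond=None)
    return np.exp(theta[:n]), theta[n]   # (extrapolated intercepts, R2*)
```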