3D medical volume segmentation using hybrid multiresolution statistical approaches
3D volume segmentation is the process of partitioning voxels into 3D regions (subvolumes) that represent meaningful physical entities and are easier to analyze and reuse in later applications. Multiresolution Analysis (MRA) preserves an image at a chosen level of resolution or blurring; because of this property, wavelets have been deployed in image compression, denoising, and classification. This paper focuses on the implementation of efficient medical volume segmentation techniques. Multiresolution analysis, including the 3D wavelet and ridgelet transforms, is used to extract features, which are then modeled with Hidden Markov Models (HMMs) to segment the volume slices. A comparative study of 2D and 3D techniques reveals that the 3D methodologies detect the Region Of Interest (ROI) more accurately. Automatic segmentation is achieved using HMMs: the ROI is detected accurately, but at the cost of a long computation time.
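As a rough illustration of the pipeline this abstract describes, the sketch below extracts 3D wavelet subband features with PyWavelets and labels voxels with a Gaussian HMM from hmmlearn. The volume, wavelet choice, and two-state model are placeholders, not the authors' configuration.

```python
import numpy as np
import pywt
from hmmlearn.hmm import GaussianHMM

volume = np.random.rand(32, 32, 32)        # placeholder for a CT/MRI volume

# One-level 3D DWT: a dict of 8 subbands ('aaa' approximation + 7 detail bands)
coeffs = pywt.dwtn(volume, 'db2')

# Per-voxel feature vector: the magnitude of each subband at that location
feats = np.stack([np.abs(coeffs[k]).ravel() for k in sorted(coeffs)], axis=1)

# Two hidden states (ROI vs. background); voxels are scanned as one sequence
hmm = GaussianHMM(n_components=2, covariance_type='diag', n_iter=50)
hmm.fit(feats)
labels = hmm.predict(feats).reshape(coeffs['aaa'].shape)
```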
Hyperanalytic denoising
A new threshold rule for the estimation of a deterministic image immersed in noise is proposed. The full estimation procedure is based on a separable wavelet decomposition of the observed image, and the estimation is improved by introducing the new threshold to estimate the decomposition coefficients. The observed wavelet coefficients are thresholded using the magnitudes of wavelet transforms of a small number of "replicates" of the image. The "replicates" are calculated by extending the image into a vector-valued hyperanalytic signal; more than one hyperanalytic signal may be chosen, and either the hypercomplex or the Riesz transform is used to calculate this object. The deterministic and stochastic properties of the observed wavelet coefficients of the hyperanalytic signal, at a fixed scale and position index, are determined. A "universal" threshold is calculated for the proposed procedure, and an expression for the risk of an individual coefficient is derived. The risk is calculated explicitly when the "universal" threshold is used and is shown, under certain conditions, to be less than the risk of "universal" hard thresholding. The proposed method is implemented and the derived theoretical risk reductions are substantiated.
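For reference, the "universal" hard-thresholding baseline that the proposed rule is compared against can be sketched as follows; the toy image, db4 wavelet, and MAD noise estimate are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                        # toy piecewise-constant image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

coeffs = pywt.wavedec2(noisy, 'db4', level=3)    # [cA, (cH, cV, cD), ...]

# Noise level from the finest diagonal subband via the MAD rule
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
thresh = sigma * np.sqrt(2.0 * np.log(noisy.size))   # "universal" threshold

# Hard-threshold every detail coefficient; keep the approximation band
den = [coeffs[0]] + [tuple(c * (np.abs(c) > thresh) for c in lvl)
                     for lvl in coeffs[1:]]
estimate = pywt.waverec2(den, 'db4')[:64, :64]
```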
Large Scale Variational Bayesian Inference for Structured Scale Mixture Models
Natural image statistics exhibit hierarchical dependencies across multiple
scales. Representing such prior knowledge in non-factorial latent tree models
can boost performance of image denoising, inpainting, deconvolution or
reconstruction substantially, beyond standard factorial "sparse" methodology.
We derive a large scale approximate Bayesian inference algorithm for linear
models with non-factorial (latent tree-structured) scale mixture priors.
Experimental results on a range of denoising and inpainting problems
demonstrate substantially improved performance compared to MAP estimation or to
inference with factorial priors.
Comment: Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012).
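The factorial MAP baseline the abstract refers to is, for a Laplace prior, an l1-regularized least-squares problem; a minimal ISTA sketch of that baseline (operator A, observation y, and weight lam all hypothetical) is:

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """MAP estimate under y ~ N(A x, I) with a factorial Laplace prior on x."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - step * (A.T @ (A @ x - y))       # gradient step on the data term
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft threshold
    return x
```

The paper's point is that replacing this factorial prior with a latent tree-structured scale mixture, inferred variationally rather than by MAP, substantially improves reconstruction.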
A Multiscale Approach for Statistical Characterization of Functional Images
Increasingly, scientific studies yield functional image data, in which the observed data consist of sets of curves recorded on the pixels of the image. Examples include temporal brain response intensities measured by fMRI and NMR frequency spectra measured at each pixel. This article presents a new methodology for improving the characterization of pixels in functional imaging, formulated as a spatial curve clustering problem. Our method operates on curves as the unit of analysis. It is nonparametric and involves multiple stages: (i) wavelet thresholding, aggregation, and Neyman truncation to effectively reduce dimensionality; (ii) clustering based on an extended EM algorithm; and (iii) multiscale penalized dyadic partitioning to create a spatial segmentation. We motivate the different stages with theoretical considerations and arguments, and illustrate the overall procedure on simulated and real datasets. Our method appears to offer substantial improvements over monoscale pixel-wise methods. An appendix giving theoretical justifications of the methodology, together with computer code, documentation, and the dataset, is available in the online supplements.
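A much-simplified analogue of stages (i) and (ii) can be sketched with PyWavelets and scikit-learn's EM-fitted GaussianMixture; the curves, wavelet, and cluster count below are placeholders, and stage (iii)'s dyadic partitioning is omitted.

```python
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

curves = np.random.rand(40 * 40, 128)            # one length-128 curve per pixel

feats = []
for c in curves:
    w = np.concatenate(pywt.wavedec(c, 'sym4', level=4))
    t = np.median(np.abs(w)) / 0.6745 * np.sqrt(2.0 * np.log(w.size))
    feats.append(w * (np.abs(w) > t))            # hard-thresholded coefficients
feats = np.array(feats)

# GaussianMixture is fitted by the EM algorithm; labels give the clustering
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(feats)
segmentation = labels.reshape(40, 40)
```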
A Hierarchical Bayesian Model for Frame Representation
In many signal processing problems, it may be fruitful to represent the
signal under study in a frame. If a probabilistic approach is adopted, it
then becomes necessary to estimate the hyper-parameters characterizing the
probability distribution of the frame coefficients. This problem is difficult
since in general the frame synthesis operator is not bijective. Consequently,
the frame coefficients are not directly observable. This paper introduces a
hierarchical Bayesian model for frame representation. The posterior
distribution of the frame coefficients and model hyper-parameters is derived.
Hybrid Markov Chain Monte Carlo algorithms are subsequently proposed to sample
from this posterior distribution. The generated samples are then exploited to
estimate the hyper-parameters and the frame coefficients of the target signal.
Validation experiments show that the proposed algorithms provide an accurate
estimation of the frame coefficients and hyper-parameters. Applications to
practical image denoising problems show the impact of the resulting Bayesian
estimation on the recovered signal quality.
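To convey the flavour of such hierarchical sampling without reproducing the paper's frame model, here is a toy Gibbs sampler for coefficients with a Gaussian prior whose variance carries an inverse-gamma hyperprior; the model, priors, and data below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 500, 0.25
x_true = rng.normal(0.0, 1.0, n)
y = x_true + rng.normal(0.0, np.sqrt(sigma2), n)   # noisy observations

a0, b0 = 2.0, 2.0                      # inverse-gamma hyperprior parameters
v = 1.0                                # prior variance hyper-parameter
samples_x, samples_v = [], []
for it in range(2000):
    # x | v, y : Gaussian, precision 1/sigma2 + 1/v, mean post_var * y / sigma2
    post_var = 1.0 / (1.0 / sigma2 + 1.0 / v)
    x = rng.normal(post_var * y / sigma2, np.sqrt(post_var))
    # v | x : inverse-gamma(a0 + n/2, b0 + sum(x^2)/2)
    v = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * x @ x))
    if it >= 500:                      # discard burn-in
        samples_x.append(x)
        samples_v.append(v)

x_mmse = np.mean(samples_x, axis=0)    # posterior-mean coefficient estimate
```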
Compressive Imaging via Approximate Message Passing with Image Denoising
We consider compressive imaging problems, where images are reconstructed from
a reduced number of linear measurements. Our objective is to improve over
existing compressive imaging algorithms in terms of both reconstruction error
and runtime. To pursue our objective, we propose compressive imaging algorithms
that employ the approximate message passing (AMP) framework. AMP is an
iterative signal reconstruction algorithm that performs scalar denoising at
each iteration; in order for AMP to reconstruct the original input signal well,
a good denoiser must be used. We apply two wavelet based image denoisers within
AMP. The first denoiser is the "amplitude-scale-invariant Bayes estimator"
(ABE), and the second is an adaptive Wiener filter; we call our AMP based
algorithms for compressive imaging AMP-ABE and AMP-Wiener. Numerical results
show that both AMP-ABE and AMP-Wiener significantly improve over the state of
the art in terms of runtime. In terms of reconstruction quality, AMP-Wiener
offers lower mean square error (MSE) than existing compressive imaging
algorithms. In contrast, AMP-ABE has higher MSE, because ABE does not denoise
as well as the adaptive Wiener filter.
Comment: 15 pages; 2 tables; 7 figures; to appear in IEEE Trans. Signal
Process.
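The AMP iteration itself is compact; the sketch below uses a scalar soft-threshold denoiser where AMP-ABE and AMP-Wiener would plug in ABE or the adaptive Wiener filter, and the matrix A, measurements y, and threshold weight lam are hypothetical.

```python
import numpy as np

def amp(A, y, lam=1.0, n_iter=30):
    """AMP for y = A x + noise, with a soft-threshold scalar denoiser."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                        # pseudo-data: x plus effective noise
        tau = np.linalg.norm(z) / np.sqrt(m)   # effective noise level estimate
        x = np.sign(r) * np.maximum(np.abs(r) - lam * tau, 0.0)  # scalar denoising
        # Onsager correction keeps the effective noise approximately Gaussian
        z = y - A @ x + (np.count_nonzero(x) / m) * z
    return x
```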
Learning sparse representations of depth
This paper introduces a new method for learning and inferring sparse
representations of depth (disparity) maps. The proposed algorithm relaxes the
usual assumption of the stationary noise model in sparse coding. This enables
learning from data corrupted with spatially varying noise or uncertainty,
typically obtained by laser range scanners or structured light depth cameras.
Sparse representations are learned from the Middlebury database disparity maps
and then exploited in a two-layer graphical model for inferring depth from
stereo, by including a sparsity prior on the learned features. Since they
capture higher-order dependencies in the depth structure, these priors can
complement smoothness priors commonly used in depth inference based on Markov
Random Field (MRF) models. Inference on the proposed graph is achieved using an
alternating iterative optimization technique, where the first layer is solved
using an existing MRF-based stereo matching algorithm, then held fixed as the
second layer is solved using the proposed non-stationary sparse coding
algorithm. This leads to a general method for improving the solutions of
state-of-the-art MRF-based depth estimation algorithms. Our experimental
results first show that depth inference using learned representations leads to
state-of-the-art denoising of depth maps obtained from laser range scanners and
a time-of-flight camera. Furthermore, we show that adding sparse priors
improves the results of two depth estimation methods: the classical graph-cut
algorithm of Boykov et al. and the more recent algorithm of Woodford et al.
Comment: 12 pages.
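One way to read "relaxing the stationary noise model" is as a per-pixel weighting of the sparse-coding data term; a hypothetical ISTA-style sketch (dictionary D, depth patch y, and confidence weights w all assumed names) is:

```python
import numpy as np

def weighted_ista(D, y, w, lam=0.1, n_iter=200):
    """Minimize 0.5 * ||sqrt(w) * (y - D x)||^2 + lam * ||x||_1,
    where w holds per-pixel noise confidences (non-stationary noise)."""
    Dw = D * w[:, None]                              # rows scaled by weights
    step = 1.0 / np.linalg.norm(np.sqrt(w)[:, None] * D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = Dw.T @ (D @ x - y)                    # gradient of weighted data term
        v = x - step * grad
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
    return x
```

Pixels with low confidence (small w) contribute little to the fit, so the learned features, rather than the corrupted measurements, dominate the reconstruction there.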