3D medical volume segmentation using hybrid multiresolution statistical approaches
3D volume segmentation is the process of partitioning voxels into 3D regions (subvolumes) that represent meaningful physical entities and are easier to analyze and use in subsequent applications. Multiresolution Analysis (MRA) enables an image to be represented at selected levels of resolution or blurring; because of this property, wavelets have been deployed in image compression, denoising, and classification. This paper focuses on the implementation of efficient medical volume segmentation techniques. Multiresolution analysis, including the 3D wavelet and ridgelet transforms, is used to extract features, which are then modeled with Hidden Markov Models (HMMs) to segment the volume slices. A comparative study of 2D and 3D techniques reveals that the 3D methodologies detect the Region Of Interest (ROI) more accurately. Automatic segmentation is achieved using the HMMs, where the ROI is detected accurately but at the cost of long computation times.
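As a rough illustration of the pipeline this abstract describes, the sketch below extracts 3D wavelet subband features with PyWavelets and labels voxels with a Gaussian HMM from hmmlearn. The synthetic volume, the Haar wavelet, the two-state model, and the raster-scan ordering of the observation sequence are all assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): 3D wavelet feature
# extraction followed by HMM-based voxel labelling.
import numpy as np
import pywt
from hmmlearn import hmm

# Synthetic 3D volume: a bright cuboid "ROI" inside a noisy background.
rng = np.random.default_rng(0)
volume = rng.normal(0.0, 0.1, size=(32, 32, 32))
volume[8:24, 8:24, 8:24] += 1.0

# One-level 3D DWT: a dict of 8 subbands ('aaa' = approximation,
# the remaining keys are directional details).
coeffs = pywt.dwtn(volume, "haar")

# Per-voxel features at the coarse grid: approximation value plus
# total detail magnitude (a crude local-texture measure).
approx = coeffs["aaa"]
detail = sum(np.abs(c) for k, c in coeffs.items() if k != "aaa")
features = np.column_stack([approx.ravel(), detail.ravel()])

# Two-state Gaussian HMM over the raster-scanned feature sequence;
# the hidden state acts as the segmentation label (ROI vs. background).
model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=50, random_state=0)
model.fit(features)
labels = model.predict(features).reshape(approx.shape)
print("voxels per label:", np.bincount(labels.ravel()))
```

In this toy setting the two HMM states separate the bright cuboid from the background; real medical volumes would need richer features and more states.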
Learning the dynamics and time-recursive boundary detection of deformable objects
We propose a principled framework for recursively segmenting deformable objects across a sequence of frames, and demonstrate its usefulness on left ventricular segmentation across a cardiac cycle. The approach combines a technique for learning the system dynamics with particle-based smoothing and non-parametric belief propagation on a loopy graphical model that captures the temporal periodicity of the heart. The dynamic system state is a low-dimensional representation of the boundary, and boundary estimation incorporates curve evolution into recursive state estimation. By formulating the problem as one of state estimation, the segmentation at each particular time is based not only on the data observed at that instant, but also on predictions derived from past and future boundary estimates. Although the paper focuses on left ventricle segmentation, the method generalizes to temporally segmenting any deformable object.
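The paper's machinery (learned dynamics, particle-based smoothing, and belief propagation on a loopy graph) is richer than anything that fits here, but a plain bootstrap particle filter conveys the core idea of recursive state estimation for a low-dimensional boundary descriptor. Everything concrete below, including the scalar "radius" state, the sinusoidal dynamics, and the noise levels, is a made-up stand-in.

```python
# Minimal sketch: bootstrap particle filter for a 1-D boundary descriptor.
import numpy as np

rng = np.random.default_rng(1)
T, n_particles = 50, 500

# Ground-truth descriptor: one cardiac-like cycle, observed noisily.
t = np.arange(T)
true_r = 1.0 + 0.3 * np.sin(2 * np.pi * t / T)
obs = true_r + rng.normal(0.0, 0.05, size=T)

def propagate(r, k):
    # Assumed periodic dynamics (a stand-in for the learned dynamics).
    drift = 0.3 * (np.sin(2 * np.pi * k / T) - np.sin(2 * np.pi * (k - 1) / T))
    return r + drift + rng.normal(0.0, 0.02, size=r.shape)

particles = rng.normal(1.0, 0.1, size=n_particles)
estimates = []
for k in range(T):
    if k > 0:
        particles = propagate(particles, k)
    # Weight particles by the observation likelihood, then resample.
    w = np.exp(-0.5 * ((obs[k] - particles) / 0.05) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]
    estimates.append(particles.mean())

print("mean abs error:", np.mean(np.abs(np.array(estimates) - true_r)))
```

A smoother, as used in the paper, would additionally condition each estimate on future observations; the filter above uses only the past and present.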
Conditional Density Estimation by Penalized Likelihood Model Selection and Applications
In this technical report, we consider conditional density estimation with a maximum likelihood approach. Under weak assumptions, we obtain a theoretical bound on a Kullback-Leibler type loss for a single-model maximum likelihood estimate. We then use a penalized model selection technique to select the best model within a collection, and give a general condition on the penalty choice that leads to an oracle-type inequality for the resulting estimate. This construction is applied to two examples of partition-based conditional density models, in which the conditional density depends on the covariate only in a piecewise manner. The first example relies on classical piecewise polynomial densities, while the second uses Gaussian mixtures with varying mixing proportions but the same mixture components. We show how this last case relates to the unsupervised segmentation application that motivated this study. (Research report no. RR-7596, 2011.)
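To make the partition-based construction concrete, here is a toy version (my construction, not the report's estimator): the covariate axis is cut into m equal pieces, the conditional density of Y is a fixed-bin histogram on each piece, and m is selected by maximizing a penalized log-likelihood. The BIC-style penalty weight is only a stand-in for the report's general penalty condition.

```python
# Minimal sketch: penalized ML selection of a partition-based
# piecewise-constant conditional density model.
import numpy as np

rng = np.random.default_rng(2)
n, n_ybins = 2000, 20
x = rng.uniform(0, 1, n)
y = rng.normal(np.where(x < 0.5, -1.0, 1.0), 0.5)  # density shifts at x = 0.5

y_edges = np.linspace(y.min(), y.max() + 1e-9, n_ybins + 1)
y_bin = np.digitize(y, y_edges) - 1
bin_width = y_edges[1] - y_edges[0]

def penalized_loglik(m, lam):
    # Partition [0, 1] into m pieces; histogram density of Y on each piece.
    x_bin = np.minimum((x * m).astype(int), m - 1)
    ll = 0.0
    for j in range(m):
        counts = np.bincount(y_bin[x_bin == j], minlength=n_ybins)
        nj = counts.sum()
        if nj == 0:
            continue
        nz = counts > 0
        ll += np.sum(counts[nz] * np.log(counts[nz] / (nj * bin_width)))
    return ll - lam * m * (n_ybins - 1)   # penalty proportional to dimension

lam = 0.5 * np.log(n)                     # BIC-style weight (an assumption)
best_m = max(range(1, 21), key=lambda m: penalized_loglik(m, lam))
print("selected number of pieces:", best_m)
```

With the density shift placed at x = 0.5, the selected partition is typically the two-piece one, since finer partitions raise the penalty without improving the fit.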
On Interpretable Approaches to Cluster, Classify and Represent Multi-Subspace Data via Minimum Lossy Coding Length based on Rate-Distortion Theory
To cluster, classify, and represent are three fundamental objectives of learning from high-dimensional data with intrinsic structure. To this end, this paper introduces three interpretable approaches: segmentation (clustering) via the Minimum Lossy Coding Length criterion, classification via the Minimum Incremental Coding Length criterion, and representation via the Maximal Coding Rate Reduction criterion. These are derived from the lossy data coding and compression framework, based on the rate-distortion principle of information theory. The algorithms are particularly suitable for finite-sample data (allowed to be sparse or almost degenerate) drawn from mixtures of Gaussian distributions or subspaces. The theoretical value and attractive features of these methods are summarized by comparison with other learning methods and evaluation criteria. This summary note aims to provide a theoretical guide for researchers (and engineers) interested in understanding 'white-box' machine (deep) learning methods.
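For readers who want the central quantity in concrete form, the sketch below computes the lossy coding rate R(Z, eps) = (1/2) log det(I + (d / (n eps^2)) Z Z^T) and the resulting rate reduction between coding the data as a whole and coding it class by class, following the standard formulas from this lossy-coding literature; the distortion level eps and the toy two-subspace data are arbitrary choices for illustration.

```python
# Minimal sketch of the coding-rate quantities (my transcription of the
# standard formulas, not the authors' code); natural-log units.
import numpy as np

def coding_rate(Z, eps=0.5):
    # Rate to code the columns of Z (d x n) up to distortion eps^2.
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    # Coding all data jointly minus the class-weighted per-class rates.
    d, n = Z.shape
    r_classes = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]
        r_classes += (Zc.shape[1] / n) * coding_rate(Zc, eps)
    return coding_rate(Z, eps) - r_classes

rng = np.random.default_rng(3)
# Toy data: two nearly one-dimensional subspaces in R^5.
u, v = rng.normal(size=(5, 1)), rng.normal(size=(5, 1))
Z = np.hstack([u @ rng.normal(size=(1, 100)), v @ rng.normal(size=(1, 100))])
Z += 0.01 * rng.normal(size=Z.shape)
labels = np.repeat([0, 1], 100)
print("rate reduction:", rate_reduction(Z, labels))
```

Segmentation via the Minimum Lossy Coding Length criterion amounts to searching for the label assignment that makes this kind of codelength smallest; the agglomerative search used in practice is omitted here.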