24,819 research outputs found
Unsupervised Lesion Detection via Image Restoration with a Normative Prior
Unsupervised lesion detection is a challenging problem that requires
accurately estimating normative distributions of healthy anatomy and detecting
lesions as outliers without training examples. Recently, this problem has
received increased attention from the research community following the advances
in unsupervised learning with deep learning. Such advances allow the estimation
of high-dimensional distributions, such as normative distributions, with higher
accuracy than previous methods. The main approach of the recently proposed
methods is to learn a latent-variable model parameterized with networks to
approximate the normative distribution using example images showing healthy
anatomy, perform prior-projection, i.e. reconstruct the image with lesions
using the latent-variable model, and determine lesions based on the differences
between the reconstructed and original images. While promising, the
prior-projection step often leads to a large number of false positives. In this
work, we approach unsupervised lesion detection as an image restoration problem
and propose a probabilistic model that uses a network-based prior as the
normative distribution and detects lesions pixel-wise using MAP estimation. The
probabilistic model penalizes large deviations between restored and original
images, reducing false positives in pixel-wise detections. Experiments with
gliomas and stroke lesions in brain MRI using publicly available datasets show
that the proposed approach outperforms the state-of-the-art unsupervised
methods by a substantial margin, +0.13 (AUC), for both glioma and stroke
detection. Extensive model analysis confirms the effectiveness of MAP-based
image restoration.

Comment: Extended version of 'Unsupervised Lesion Detection via Image
Restoration with a Normative Prior' (MIDL 2019).
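The restoration idea can be sketched in a few lines. The quadratic data term and the toy Gaussian prior below are illustrative stand-ins for the paper's learned network-based normative prior, not its actual model:

```python
import numpy as np

def map_restore(y, log_prior_grad, lam=1.0, step=0.1, iters=200):
    """Restore x from observation y by gradient ascent on
    log p(y|x) + log p(x), where the quadratic data term
    -lam/2 * ||y - x||^2 penalizes large deviations from y."""
    x = y.copy()
    for _ in range(iters):
        x += step * (-lam * (x - y) + log_prior_grad(x))
    return x

# Toy normative prior: healthy intensities cluster around 0
# (a stand-in for a learned network-based prior).
toy_prior_grad = lambda x: -x

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 0.1, size=(8, 8))
image = healthy.copy()
image[2:4, 2:4] += 2.0               # simulated lesion: strong outliers

restored = map_restore(image, toy_prior_grad)
anomaly = np.abs(image - restored)   # pixel-wise detection map
```

Because the data term keeps the restoration close to the input wherever the prior agrees with it, the residual map is large only at outlier pixels, which is the mechanism the abstract credits for reducing false positives.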
PSACNN: Pulse Sequence Adaptive Fast Whole Brain Segmentation
With the advent of convolutional neural networks (CNNs), supervised learning
methods are increasingly being used for whole brain segmentation. However, the
large, manually labeled training dataset of brain images required to train such
supervised methods is frequently difficult to obtain or create. In
addition, existing training datasets are generally acquired with a homogeneous
magnetic resonance imaging (MRI) acquisition protocol. CNNs trained on such
datasets are unable to generalize to test data with different acquisition
protocols. Modern neuroimaging studies and clinical trials are necessarily
multi-center initiatives with a wide variety of acquisition protocols. Despite
stringent protocol harmonization practices, it is very difficult to standardize
the gamut of MRI acquisition parameters across scanners, field strengths,
receive coils, etc., that affect image contrast. In this paper, we propose a CNN-based
segmentation algorithm that, in addition to being highly accurate and fast, is
also resilient to variation in the input acquisition. Our approach relies on
building approximate forward models of pulse sequences that produce a typical
test image. For a given pulse sequence, we use its forward model to generate
plausible, synthetic training examples that appear as if they were acquired in
a scanner with that pulse sequence. Sampling over a wide variety of pulse
sequences results in a wide variety of augmented training examples that help
build an image contrast invariant model. Our method trains a single CNN that
can segment input MRI images with acquisition parameters as disparate as
T1-weighted and T2-weighted contrasts with only T1-weighted training
data. The segmentations generated are highly accurate, with state-of-the-art
results (overall Dice overlap) and a fast run time (about 45 seconds), and are
consistent across a wide range of acquisition protocols.

Comment: Typo in author name corrected. Greves -> Greve
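The augmentation idea can be sketched with a simplified spoiled-gradient-echo signal equation standing in for an approximate pulse-sequence forward model; the tissue values and parameter ranges below are illustrative choices, not the paper's models:

```python
import numpy as np

def spgr_signal(pd, t1, t2s, tr, te, alpha):
    """Simplified spoiled-gradient-echo forward model mapping tissue
    parameters (PD, T1, T2*) and acquisition parameters (TR, TE, flip
    angle) to signal intensity; a toy stand-in for an approximate
    pulse-sequence forward model."""
    e1 = np.exp(-tr / t1)
    return (pd * np.sin(alpha) * (1 - e1) /
            (1 - np.cos(alpha) * e1) * np.exp(-te / t2s))

# Illustrative tissue-parameter values for two tissues (GM, WM).
pd = np.array([0.8, 0.7])     # proton density
t1 = np.array([1.3, 0.8])     # T1 (s)
t2s = np.array([0.07, 0.05])  # T2* (s)

rng = np.random.default_rng(1)
augmented = []
for _ in range(5):  # sample random acquisition parameters
    tr = rng.uniform(0.01, 0.05)
    te = rng.uniform(0.002, 0.01)
    alpha = rng.uniform(np.radians(5), np.radians(30))
    augmented.append(spgr_signal(pd, t1, t2s, tr, te, alpha))
# Each entry renders the same "anatomy" with a different contrast,
# i.e. one augmented training example per sampled sequence.
```

Sampling acquisition parameters widely at training time is what pushes the network toward a contrast-invariant representation.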
Quantitative magnetic resonance image analysis via the EM algorithm with stochastic variation
Quantitative Magnetic Resonance Imaging (qMRI) provides researchers with
insight into pathological and physiological alterations of living tissue, with
which they hope to predict (local) therapeutic efficacy early and determine
optimal treatment schedules. However, the analysis of qMRI has been
limited to ad-hoc heuristic methods. Our research provides a powerful
statistical framework for image analysis and sheds light on future localized
adaptive treatment regimes tailored to the individual's response. We assume
that, in an imperfect world, we observe only a blurred and noisy version of the
underlying pathological/physiological changes via qMRI, due to measurement
errors or unpredictable influences. We use a hidden Markov random field to
model the spatial dependence in the data and develop a maximum likelihood
approach via the Expectation--Maximization algorithm with stochastic variation.
An important improvement over previous work is the assessment of variability in
parameter estimation, which is the valid basis for statistical inference. More
importantly, we focus on the expected changes rather than image segmentation.
Our research has shown that the approach is powerful in both simulation studies
and on a real dataset, while quite robust in the presence of some model
assumption violations.

Comment: Published at http://dx.doi.org/10.1214/07-AOAS157 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
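The EM machinery underlying such an approach can be illustrated with a plain two-component 1-D Gaussian mixture; the hidden-Markov-random-field spatial coupling and the stochastic E-step described in the abstract are deliberately omitted from this sketch:

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    """Plain EM for a two-component 1-D Gaussian mixture: a simplified
    stand-in for hidden-Markov-random-field EM (no spatial coupling,
    no stochastic E-step)."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point
        d = (x[:, None] - mu) / sigma
        dens = pi * np.exp(-0.5 * d ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / x.size
    return mu, sigma, pi

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])
mu, sigma, pi = em_gmm_1d(x)   # recovers the two component means
```

In the full model, the E-step over a hidden Markov random field is intractable in closed form, which is what motivates the stochastic variation the abstract refers to.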
Intensity Segmentation of the Human Brain with Tissue dependent Homogenization
High-precision segmentation of the human cerebral cortex based on T1-weighted MRI is still a challenging task. When opting for an intensity-based approach, careful data processing is mandatory to overcome inaccuracies caused by noise, partial volume effects, and systematic signal intensity variations imposed by the limited homogeneity of the acquisition hardware. We propose an intensity segmentation that is free from any shape prior and, for the first time, uses either grey matter (GM)- or white matter (WM)-based homogenization. This new tissue dependency was introduced after the analysis of 60 high-resolution MRI datasets revealed appreciable differences in the axial bias field corrections, depending on whether they are based on GM or WM. Homogenization starts with axial bias correction, followed by a spatially irregular distortion correction, and finally a noise reduction. The axial bias correction is constructed from partitions of a depth histogram. The irregular bias is modelled by Moody-Darken radial basis functions. Noise is eliminated by nonlinear edge-preserving and homogenizing filters. A critical point is the estimation of the training set for the irregular bias correction in the GM approach. Owing to the intensity edges between CSF (the cerebrospinal fluid surrounding the brain and within the ventricles), GM, and WM, this estimate shows acceptable stability. This supervised approach gains high flexibility and precision for the segmentation of normal and pathological brains. The precision of the approach is demonstrated on the Montreal brain phantom. Applications to real data exemplify the advantage of the GM-based approach over the usual WM homogenization, allowing improved cortex segmentation.
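The RBF-based bias modelling can be illustrated in 1-D: fit radial basis function weights to noisy tissue-intensity samples by linear least squares, then divide the image by the fitted field. The synthetic bias, centres, and width below are illustrative choices, not the paper's configuration:

```python
import numpy as np

# 1-D illustration: model a smooth intensity bias field as a weighted
# sum of radial basis functions (in the spirit of Moody-Darken RBFs).
rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 200)
true_bias = 1.0 + 0.2 * np.sin(2 * np.pi * x)         # synthetic bias field
samples = true_bias + 0.02 * rng.normal(size=x.size)  # noisy WM-like intensities

centres = np.linspace(0.0, 1.0, 8)
width = 0.15
Phi = np.exp(-((x[:, None] - centres) ** 2) / (2 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, samples, rcond=None)     # linear RBF weight fit
bias_fit = Phi @ w

corrected = samples / bias_fit   # homogenized intensities, close to 1
```

With fixed centres and widths, the weights enter linearly, so the fit reduces to ordinary least squares; the quality of the training samples (the GM vs. WM choice discussed above) is what actually determines the correction.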
Fat fraction mapping using bSSFP Signal Profile Asymmetries for Robust multi-Compartment Quantification (SPARCQ)
Purpose: To develop a novel quantitative method for detection of different
tissue compartments based on bSSFP signal profile asymmetries (SPARCQ) and to
provide a validation and proof-of-concept for voxel-wise water-fat separation
and fat fraction mapping. Methods: The SPARCQ framework uses phase-cycled bSSFP
acquisitions to obtain bSSFP signal profiles. For each voxel, the profile is
decomposed into a weighted sum of simulated profiles with specific
off-resonance and relaxation time ratios. From the obtained set of weights,
voxel-wise estimations of the fractions of the different components and their
equilibrium magnetization are extracted. For the entire image volume,
component-specific quantitative maps as well as banding-artifact-free images
are generated. A SPARCQ proof-of-concept was provided for water-fat separation
and fat fraction mapping. Noise robustness was assessed using simulations. A
dedicated water-fat phantom was used to validate fat fractions estimated with
SPARCQ against gold-standard 1H MRS. Quantitative maps were obtained in knees
of six healthy volunteers, and SPARCQ repeatability was evaluated in
scan-rescan experiments. Results: Simulations showed that fat fraction estimations
are accurate and robust for signal-to-noise ratios above 20. Phantom
experiments showed good agreement between SPARCQ and gold-standard (GS) fat
fractions (fF(SPARCQ) = 1.02*fF(GS) + 0.00235). In volunteers, quantitative
maps and banding-artifact-free water-fat-separated images obtained with SPARCQ
demonstrated the expected contrast between fatty and non-fatty tissues. The
coefficient of repeatability of SPARCQ fat fraction was 0.0512. Conclusion: The
SPARCQ framework was proposed as a novel quantitative mapping technique for
detecting different tissue compartments, and its potential was demonstrated for
quantitative water-fat separation.

Comment: 20 pages, 7 figures, submitted to Magnetic Resonance in Medicine
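The profile-decomposition step can be sketched as a small non-negative least-squares problem over a dictionary of simulated profiles. The toy profile shapes below only stand in for proper bSSFP Bloch simulations, and the solver is a minimal projected-gradient routine:

```python
import numpy as np

def nnls_pg(A, y, iters=500):
    """Tiny projected-gradient non-negative least squares:
    minimize ||A w - y||^2 subject to w >= 0."""
    step = 1.0 / np.linalg.norm(A.T @ A, 2)
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        w = np.maximum(0.0, w - step * (A.T @ (A @ w - y)))
    return w

# Toy "simulated profiles" over 16 phase cycles for two components
# (say, water and fat at different off-resonance). The shapes are
# illustrative only, not bSSFP Bloch simulations.
phi = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
D = np.stack([1.0 / (1.2 - np.cos(phi)),
              1.0 / (1.2 - np.cos(phi - np.pi))], axis=1)

true_w = np.array([0.7, 0.3])   # 70% water, 30% fat in this voxel
y = D @ true_w                  # measured signal profile for one voxel
w = nnls_pg(D, y)
fat_fraction = w[1] / w.sum()   # recovered fraction, close to 0.3
```

Repeating this decomposition per voxel yields the component-specific quantitative maps; summing the dictionary reconstruction over off-resonance is what removes banding artifacts.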
Increasing power for voxel-wise genome-wide association studies: the random field theory, least square kernel machines and fast permutation procedures
Imaging traits are thought to have more direct links to genetic variation than diagnostic measures based on cognitive or clinical assessments and provide a powerful substrate to examine the influence of genetics on human brains. Although imaging genetics has attracted growing attention and interest, most brain-wide genome-wide association studies focus on voxel-wise single-locus approaches, without taking advantage of the spatial information in images or combining the effect of multiple genetic variants. In this paper we present a fast implementation of voxel- and cluster-wise inferences based on the random field theory to fully use the spatial information in images. The approach is combined with a multi-locus model based on least square kernel machines to associate the joint effect of several single nucleotide polymorphisms (SNPs) with imaging traits. A fast permutation procedure is also proposed which significantly reduces the number of permutations needed relative to the standard empirical method and provides accurate small p-value estimates based on parametric tail approximation. We explored the relation between 448,294 SNPs and 18,043 genes in 31,662 voxels of the entire brain across 740 elderly subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Structural MRI scans were analyzed using tensor-based morphometry (TBM) to compute 3D maps of regional brain volume differences compared to an average template image based on healthy elderly subjects. We find the method to be more sensitive compared with voxel-wise single-locus approaches. A number of genes were identified as having significant associations with volumetric changes. The most associated gene was GRIN2B, which encodes the N-methyl-d-aspartate (NMDA) glutamate receptor NR2B subunit and affects both the parietal and temporal lobes in human brains. Its role in Alzheimer's disease has been widely acknowledged and studied, suggesting the validity of the approach.
The various advantages over existing approaches indicate the great potential of this novel framework for detecting genetic influences on the human brain.
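The permutation component can be illustrated with a plain empirical permutation test for a single score-trait pair; the fast procedure with parametric tail approximation described in the abstract is not reproduced here, and the per-subject SNP score and trait below are hypothetical:

```python
import numpy as np

def perm_pvalue(x, y, n_perm=999, rng=None):
    """Empirical permutation p-value for association between a genetic
    score x and an imaging trait y, using |Pearson correlation| as the
    test statistic."""
    if rng is None:
        rng = np.random.default_rng(0)
    def stat(a, b):
        return abs(np.corrcoef(a, b)[0, 1])
    obs = stat(x, y)
    null = np.array([stat(rng.permutation(x), y) for _ in range(n_perm)])
    # add-one correction keeps the estimate valid (never exactly zero)
    return (1 + (null >= obs).sum()) / (n_perm + 1)

rng = np.random.default_rng(4)
x = rng.normal(size=100)            # hypothetical per-subject SNP score
y = x + 0.1 * rng.normal(size=100)  # strongly associated imaging trait
p = perm_pvalue(x, y)               # small p-value
```

The limitation this exposes is exactly the paper's motivation: the smallest achievable p-value is 1/(n_perm + 1), so resolving the tiny p-values demanded by brain-wide multiple-testing correction by brute force requires prohibitively many permutations.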
Shape-driven segmentation of the arterial wall in intravascular ultrasound images
Segmentation of arterial wall boundaries from intravascular images is an important problem for many applications in the study of plaque characteristics, mechanical properties of the arterial wall, its 3D reconstruction,
and its measurements such as lumen size, lumen radius, and wall radius. We present a shape-driven approach to segmentation of the arterial wall from intravascular ultrasound images in the rectangular domain. In a properly built
shape space using training data, we constrain the lumen and media-adventitia contours to a smooth, closed geometry, which increases the segmentation quality without any tradeoff with a regularizer term. In addition to a shape prior,
we utilize an intensity prior through a non-parametric, probability-density-based image energy, with global image measurements rather than the pointwise measurements used in previous methods. Furthermore, a detection step is included to address the challenges introduced to the segmentation process by side branches and calcifications. All these features greatly enhance our segmentation method. Tests of our algorithm on a large dataset demonstrate the effectiveness of our approach.
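A shape space of the kind described is commonly built by PCA over aligned training contours; the ellipse-like toy contours below are illustrative, and projection onto the leading modes plays the role of the shape constraint:

```python
import numpy as np

# Toy shape space: PCA (via SVD) over aligned training contours, each
# flattened to a coordinate vector [x_1..x_n, y_1..y_n]. Projecting a
# contour onto the leading modes constrains it to smooth, closed,
# plausible geometries.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
train = np.stack([np.concatenate([(1 + 0.1 * rng.normal()) * np.cos(t),
                                  (1 + 0.1 * rng.normal()) * np.sin(t)])
                  for _ in range(20)])

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
modes = Vt[:3]                        # leading shape modes

def project(contour):
    """Constrain an arbitrary contour to the learned shape space."""
    b = modes @ (contour - mean)      # shape coefficients
    return mean + modes.T @ b

noisy = train[0] + 0.3 * rng.normal(size=train.shape[1])
smooth = project(noisy)               # pulled back toward a plausible shape
```

Because implausible deviations are removed by the projection itself, the segmentation needs no separate smoothness regularizer, matching the trade-off-free constraint described above.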