A Latent Source Model for Patch-Based Image Segmentation
Despite the popularity and empirical success of patch-based nearest-neighbor
and weighted majority voting approaches to medical image segmentation, there
has been no theoretical development on when, why, and how well these
nonparametric methods work. We bridge this gap by providing a theoretical
performance guarantee for nearest-neighbor and weighted majority voting
segmentation under a new probabilistic model for patch-based image
segmentation. Our analysis relies on a new local property for how similar
nearby patches are, and fuses existing lines of work on modeling natural
imagery patches and theory for nonparametric classification. We use the model
to derive a new patch-based segmentation algorithm that iterates between
inferring local label patches and merging these local segmentations to produce
a globally consistent image segmentation. Many existing patch-based algorithms
arise as special cases of the new algorithm.
Comment: International Conference on Medical Image Computing and Computer Assisted Interventions 201
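The weighted majority voting step at the core of these nonparametric methods can be sketched as follows. This is a toy version with a Gaussian kernel over squared patch distances; the function name, kernel choice, and bandwidth are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def weighted_majority_vote(test_patch, train_patches, train_labels, bandwidth=1.0):
    # Gaussian weight for each training patch based on its squared
    # distance to the test patch (kernel choice is an assumption)
    dists = np.sum((train_patches - test_patch) ** 2, axis=1)
    weights = np.exp(-dists / (2.0 * bandwidth ** 2))
    # Accumulate weighted votes per label and return the winner
    scores = {}
    for w, lab in zip(weights, train_labels):
        scores[lab] = scores.get(lab, 0.0) + float(w)
    return max(scores, key=scores.get)
```

Nearest-neighbor segmentation is the limiting case of a very small bandwidth, where only the closest patch's vote survives.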
Co-Clustering with Generative Models
In this paper, we present a generative model for co-clustering and develop algorithms based on the mean field approximation for the corresponding modeling problem. These algorithms can be viewed as generalizations of the traditional model-based clustering; they extend hard co-clustering algorithms such as Bregman co-clustering to include soft assignments. We show empirically that these model-based algorithms offer better performance than their hard-assignment counterparts, especially with increasing problem complexity.
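The soft assignments that distinguish these algorithms from hard Bregman co-clustering can be illustrated with a single mean-field-style update of the row-cluster responsibilities. Squared loss, the softmax form, and all names below are simplifying assumptions for the sketch:

```python
import numpy as np

def soft_row_update(X, R, C, temperature=1.0):
    """One schematic mean-field update of soft row-cluster responsibilities.

    X: (n, m) data matrix; R: (n, k) current soft row assignments;
    C: (m, l) column assignments. Co-cluster means are weighted block
    averages; new responsibilities are a softmax of negative squared
    reconstruction error (squared loss assumed for simplicity).
    """
    # Weighted co-cluster means mu[k, l]
    num = R.T @ X @ C                        # (k, l) weighted sums
    den = np.outer(R.sum(0), C.sum(0))       # (k, l) total weights
    mu = num / np.maximum(den, 1e-12)
    # Reconstruction error of each row under each row cluster's block means
    Xhat = mu @ C.T                          # (k, m) row prototypes
    err = ((X[:, None, :] - Xhat[None, :, :]) ** 2).sum(-1)  # (n, k)
    # Softmax over clusters; low temperature recovers hard assignment
    logits = -err / temperature
    logits -= logits.max(1, keepdims=True)
    Rnew = np.exp(logits)
    return Rnew / Rnew.sum(1, keepdims=True)
```

Taking the argmax of the responsibilities instead of the softmax collapses this update back to the hard co-clustering step.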
Atlas-Based Under-Segmentation
We study the widespread, but rarely discussed, tendency of atlas-based segmentation to under-segment the organs of interest. Commonly used error measures do not distinguish between under- and over-segmentation, contributing to the problem. We explicitly quantify over- and under-segmentation in several typical examples and present a new hypothesis for the cause. We provide evidence that segmenting only one organ of interest and merging all surrounding structures into one label creates bias towards background in the label estimates suggested by the atlas. We propose a generative model that corrects for this effect by learning the background structures from the data. Inference in the model separates the background into distinct structures and consequently improves the segmentation accuracy. Our experiments demonstrate a clear improvement in several applications.
National Alliance for Medical Image Computing (U.S.) (U54-EB005149); Neuroimaging Analysis Center (U.S.) (P41-EB015902)
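The point about error measures can be made concrete: the Dice score barely separates an eroded from a dilated segmentation of the same organ, while a signed volume error makes the direction of the bias explicit. This is a 1-D toy, not the paper's experiments:

```python
import numpy as np

def dice(a, b):
    # Symmetric overlap score: blind to the *direction* of the error
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def signed_volume_error(seg, truth):
    # Positive -> over-segmentation, negative -> under-segmentation
    return (seg.sum() - truth.sum()) / truth.sum()

# 1-D toy organ: the true mask spans voxels 5..14
truth = np.zeros(20, bool); truth[5:15] = True
under = np.zeros(20, bool); under[6:14] = True   # erodes one voxel per side
over = np.zeros(20, bool);  over[4:16] = True    # dilates one voxel per side
```

Here the two Dice values differ by only about 0.02, while the signed volume errors are -0.2 and +0.2, exposing the under-segmentation bias directly.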
Keypoint Transfer for Fast Whole-Body Segmentation
We introduce an approach for image segmentation based on sparse
correspondences between keypoints in testing and training images. Keypoints
represent automatically identified distinctive image locations, where each
keypoint correspondence suggests a transformation between images. We use these
correspondences to transfer label maps of entire organs from the training
images to the test image. The keypoint transfer algorithm includes three steps:
(i) keypoint matching, (ii) voting-based keypoint labeling, and (iii)
keypoint-based probabilistic transfer of organ segmentations. We report
segmentation results for abdominal organs in whole-body CT and MRI, as well as
in contrast-enhanced CT and MRI. Our method offers a speed-up of about three
orders of magnitude in comparison to common multi-atlas segmentation, while
achieving an accuracy that compares favorably. Moreover, keypoint transfer does
not require the registration to an atlas or a training phase. Finally, the
method allows for the segmentation of scans with highly variable field-of-view.
Comment: Accepted for publication at IEEE Transactions on Medical Imaging
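The first two steps of the pipeline can be sketched with toy descriptors. This is a hypothetical simplification: real keypoints are distinctive 3D image locations, and the transfer in step (iii) is probabilistic over full organ label maps:

```python
import numpy as np

def transfer_labels(test_desc, train_desc, train_organ, k=3):
    """Toy keypoint labeling: (i) match, (ii) vote (step iii not shown)."""
    labels = []
    for d in test_desc:
        # (i) keypoint matching: find the k nearest training descriptors
        dists = np.sum((train_desc - d) ** 2, axis=1)
        matches = np.argsort(dists)[:k]
        # (ii) voting-based labeling: majority organ among the matches
        votes = [train_organ[j] for j in matches]
        labels.append(max(set(votes), key=votes.count))
    return labels
```

Because only sparse keypoints are matched, rather than a dense deformation being optimised as in multi-atlas registration, the whole pipeline is fast.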
BrainPainter: A software for the visualisation of brain structures, biomarkers and associated pathological processes
We present BrainPainter, a software that automatically generates images of
highlighted brain structures given a list of numbers corresponding to the
output colours of each region. Compared to existing visualisation software
(e.g. Freesurfer, SPM, 3D Slicer), BrainPainter has three key advantages: (1)
it does not require the input data to be in a specialised format, allowing
BrainPainter to be used in combination with any neuroimaging analysis tools,
(2) it can visualise both cortical and subcortical structures and (3) it can be
used to generate movies showing dynamic processes, e.g. the propagation of
pathology through the brain. We highlight three use cases where BrainPainter was
used in existing neuroimaging studies: (1) visualisation of the degree of
atrophy through interpolation along a user-defined gradient of colours, (2)
visualisation of the progression of pathology in Alzheimer's disease as well as
(3) visualisation of pathology in subcortical regions in Huntington's disease.
Moreover, through the design of BrainPainter we demonstrate the possibility of
using a powerful 3D computer graphics engine such as Blender to generate brain
visualisations for the neuroscience community. Blender's capabilities, e.g.
particle simulations, motion graphics, UV unwrapping, raster graphics editing,
raytracing and illumination effects, open a wealth of possibilities for brain
visualisation not available in current neuroimaging software. BrainPainter is
customisable, easy to use, and can run straight from the web browser
(https://brainpainter.csail.mit.edu), as well as from source code packaged in a
Docker container (https://github.com/mrazvan22/brain-coloring). It can be used
to visualise biomarker data from any brain imaging modality, or simply to
highlight a particular brain structure, e.g. for anatomy courses.
Comment: Accepted at the MICCAI Multimodal Brain Imaging Analysis (MBIA) workshop, 201
Interpolating between Images with Diffusion Models
One little-explored frontier of image generation and editing is the task of
interpolating between two input images, a feature missing from all currently
deployed image generation pipelines. We argue that such a feature can expand
the creative applications of such models, and propose a method for zero-shot
interpolation using latent diffusion models. We apply interpolation in the
latent space at a sequence of decreasing noise levels, then perform denoising
conditioned on interpolated text embeddings derived from textual inversion and
(optionally) subject poses. For greater consistency, or to specify additional
criteria, we can generate several candidates and use CLIP to select the highest
quality image. We obtain convincing interpolations across diverse subject
poses, image styles, and image content, and show that standard quantitative
metrics such as FID are insufficient to measure the quality of an
interpolation. Code and data are available at
https://clintonjwang.github.io/interpolation.
Comment: Presented at ICML 2023 Workshop on Challenges of Deploying Generative AI
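The latent interpolation itself is typically done spherically rather than linearly, since a linear mix of two Gaussian latents shrinks their norm. A minimal sketch follows; the full pipeline applies this at a sequence of decreasing noise levels and also interpolates the text embeddings, which is not shown here:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.

    Preserves the norm statistics that diffusion denoisers expect,
    unlike the linear blend (1 - t) * z0 + t * z1.
    """
    z0f, z1f = z0.ravel(), z1.ravel()
    cos_omega = np.dot(z0f, z1f) / (np.linalg.norm(z0f) * np.linalg.norm(z1f))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel latents: fall back to the linear blend
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)
```

Sweeping t over, say, np.linspace(0, 1, 9) and denoising each interpolated latent yields the frame sequence between the two input images.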
Permutation Tests for Classification
We introduce and explore an approach to estimating statistical significance of classification accuracy, which is particularly useful in scientific applications of machine learning where high dimensionality of the data and the small number of training examples render most standard convergence bounds too loose to yield a meaningful guarantee of the generalization ability of the classifier. Instead, we estimate statistical significance of the observed classification accuracy, or the likelihood of observing such accuracy by chance due to spurious correlations of the high-dimensional data patterns with the class labels in the given training set. We adopt permutation testing, a non-parametric technique previously developed in classical statistics for hypothesis testing in the generative setting (i.e., comparing two probability distributions). We demonstrate the method on real examples from neuroimaging studies and DNA microarray analysis and suggest a theoretical analysis of the procedure that relates the asymptotic behavior of the test to the existing convergence bounds.
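The permutation procedure itself is short: re-estimate accuracy under many random relabelings and report the fraction of permutations that match or beat the observed accuracy. This is a schematic; in practice the entire cross-validation loop would live inside the (hypothetical) accuracy_fn:

```python
import numpy as np

def permutation_p_value(accuracy_fn, X, y, n_perm=1000, rng=None):
    """Estimate the p-value of an observed classification accuracy.

    Under the null hypothesis that labels carry no information about
    the patterns, permuting y leaves the accuracy distribution intact.
    """
    rng = np.random.default_rng(rng)
    observed = accuracy_fn(X, y)
    null = [accuracy_fn(X, rng.permutation(y)) for _ in range(n_perm)]
    # Add-one correction so the estimate is never exactly zero
    return (1 + sum(a >= observed for a in null)) / (n_perm + 1)
```

The resulting p-value is valid regardless of data dimensionality, which is exactly the regime where the standard convergence bounds become vacuous.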
Coping with confounds in multivoxel pattern analysis: What should we do about reaction time differences? A comment on Todd, Nystrom & Cohen 2013
Multivoxel pattern analysis (MVPA) is a sensitive and increasingly popular method for examining differences between neural activation patterns that cannot be detected using classical mass-univariate analysis. Recently, Todd et al. (“Confounds in multivariate pattern analysis: Theory and rule representation case study”, 2013, NeuroImage 77: 157–165) highlighted a potential problem for these methods: high sensitivity to confounds at the level of individual participants due to the use of directionless summary statistics. Unlike traditional mass-univariate analyses where confounding activation differences in opposite directions tend to approximately average out at group level, group level MVPA results may be driven by any activation differences that can be discriminated in individual participants. In Todd et al.'s empirical data, factoring out differences in reaction time (RT) reduced a classifier's ability to distinguish patterns of activation pertaining to two task rules. This raises two significant questions for the field: to what extent have previous multivoxel discriminations in the literature been driven by RT differences, and by what methods should future studies take RT and other confounds into account? We build on the work of Todd et al. and compare two different approaches to remove the effect of RT in MVPA. We show that in our empirical data, in contrast to that of Todd et al., the effect of RT on rule decoding is negligible, and results were not affected by the specific details of RT modelling. We discuss the meaning of and sensitivity for confounds in traditional and multivoxel approaches to fMRI analysis. We observe that the increased sensitivity of MVPA comes at a price of reduced specificity, meaning that these methods in particular call for careful consideration of what differs between our conditions of interest. 
We conclude that the additional complexity of the experimental design, analysis and interpretation needed for MVPA is still not a reason to favour a less sensitive approach.
National Science Foundation (U.S.), Division of Information & Intelligent Systems (Collaborative Research in Computational Neuroscience 0904625); National Institutes of Health (U.S.) (National Institute for Biomedical Imaging and Bioengineering (U.S.)/National Alliance for Medical Image Computing (U.S.) U54-EB005149); National Institutes of Health (U.S.) (National Institute for Biomedical Imaging and Bioengineering (U.S.)/Neuroimaging Analysis Center (U.S.) P41-EB015902)
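One simple way to "factor out" RT of the kind discussed here is to residualize each voxel's trial-wise activation against RT before pattern classification. This is a schematic of the general idea, not the specific models compared by either paper:

```python
import numpy as np

def residualize_rt(patterns, rt):
    """Remove the linear effect of reaction time from trial-wise patterns.

    patterns: (n_trials, n_voxels) activation estimates; rt: (n_trials,)
    reaction times. Fits intercept + centred RT per voxel and subtracts
    the RT component, so remaining pattern differences cannot be a
    linear RT artefact.
    """
    rt = np.asarray(rt, float)
    design = np.column_stack([np.ones_like(rt), rt - rt.mean()])
    beta, *_ = np.linalg.lstsq(design, patterns, rcond=None)
    # Keep the mean activation, drop only the RT slope component
    return patterns - np.outer(design[:, 1], beta[1])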
From Connectivity Models to Region Labels: Identifying Foci of a Neurological Disorder
We propose a novel approach to identify the foci of a neurological disorder based on anatomical and functional connectivity information. Specifically, we formulate a generative model that characterizes the network of abnormal functional connectivity emanating from the affected foci. This allows us to aggregate pairwise connectivity changes into a region-based representation of the disease. We employ the variational expectation-maximization algorithm to fit the model and subsequently identify both the afflicted regions and the differences in connectivity induced by the disorder. We demonstrate our method on a population study of schizophrenia.
National Alliance for Medical Image Computing (U.S.) (Grant NIH NIBIB NAMIC U54-EB005149); Neuroimaging Analysis Center (U.S.) (Grant NIH NCRR NAC P41-RR13218); Neuroimaging Analysis Center (U.S.) (Grant NIH NCRR NAC P41-EB015902); National Science Foundation (U.S.) (CAREER Grant 0642971); National Institutes of Health (U.S.) (R01MH074794); National Institutes of Health (U.S.), Advanced Multimodal Neuroimaging Training Program
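The aggregation of pairwise connectivity changes into region-level evidence can be illustrated with a crude row-sum surrogate for the variational-EM inference: a focus region should sit on many abnormal connections. The group-mean contrast below is a toy assumption, not the paper's model:

```python
import numpy as np

def region_abnormality_scores(conn_patients, conn_controls):
    """Score each region by the total connectivity change it touches.

    conn_*: (n_subjects, R, R) connectivity matrices. Summing the
    absolute group difference along a region's row aggregates pairwise
    changes into a per-region representation.
    """
    delta = np.abs(conn_patients.mean(0) - conn_controls.mean(0))  # (R, R)
    np.fill_diagonal(delta, 0.0)
    return delta.sum(axis=1)
```

Thresholding or ranking these scores then suggests candidate foci; the generative model replaces this heuristic with posterior probabilities over region labels.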