Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging
Many analyses of neuroimaging data involve studying one or more regions of
interest (ROIs) in a brain image. In order to do so, each ROI must first be
identified. Since every brain is unique, the location, size, and shape of each
ROI varies across subjects. Thus, each ROI in a brain image must either be
manually identified or (semi-) automatically delineated, a task referred to as
segmentation. Automatic segmentation often involves mapping a previously
manually segmented image to a new brain image and propagating the labels to
obtain an estimate of where each ROI is located in the new image. A more recent
approach to this problem is to propagate labels from multiple manually
segmented atlases and combine the results using a process known as label
fusion. To date, most label fusion algorithms either employ voting procedures
or impose prior structure and subsequently find the maximum a posteriori
estimator (i.e., the posterior mode) through optimization. We propose using a
fully Bayesian spatial regression model for label fusion that facilitates
direct incorporation of covariate information while making accessible the
entire posterior distribution. We discuss the implementation of our model via
Markov chain Monte Carlo and illustrate the procedure through both simulation
and application to segmentation of the hippocampus, an anatomical structure
known to be associated with Alzheimer's disease.
Comment: 24 pages, 10 figures
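The voting procedures this abstract contrasts against can be sketched in a few lines. Below is a minimal per-voxel majority vote over propagated atlas labels — an illustrative baseline only, assuming integer label maps of equal shape, not the Bayesian model the paper proposes:

```python
import numpy as np

def majority_vote_fusion(propagated_labels):
    """Fuse candidate segmentations by per-voxel majority vote.

    propagated_labels: array of shape (n_atlases, ...) holding integer
    ROI labels propagated from each registered atlas.
    Returns the per-voxel modal label (ties broken toward lower labels).
    """
    labels = np.asarray(propagated_labels)
    n_classes = labels.max() + 1
    # Count votes per class at every voxel, then take the argmax.
    votes = np.stack([(labels == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Three toy "atlas" segmentations of a 2x2 image (labels 0/1).
atlases = np.array([[[0, 1], [1, 1]],
                    [[0, 1], [0, 1]],
                    [[1, 1], [1, 0]]])
fused = majority_vote_fusion(atlases)  # -> [[0, 1], [1, 1]]
```

Unlike this baseline, the proposed spatial regression model yields a full posterior over labels rather than a single fused map.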
Error Corrective Boosting for Learning Fully Convolutional Networks with Limited Data
Training deep fully convolutional neural networks (F-CNNs) for semantic image
segmentation requires access to abundant labeled data. While large datasets of
unlabeled image data are available in medical applications, access to manually
labeled data is very limited. We propose to automatically create auxiliary
labels on initially unlabeled data with existing tools and to use them for
pre-training. For the subsequent fine-tuning of the network with manually
labeled data, we introduce error corrective boosting (ECB), which emphasizes
parameter updates on classes with lower accuracy. Furthermore, we introduce
SkipDeconv-Net (SD-Net), a new F-CNN architecture for brain segmentation that
combines skip connections with the unpooling strategy for upsampling. The
SD-Net addresses challenges of severe class imbalance and errors along
boundaries. With application to whole-brain MRI T1 scan segmentation, we
generate auxiliary labels on a large dataset with FreeSurfer and fine-tune on
two datasets with manual annotations. Our results show that the inclusion of
auxiliary labels and ECB yields significant improvements. SD-Net segments a 3D
scan in 7 seconds, compared to 30 hours for the closest multi-atlas
segmentation method, while reaching similar performance. It also outperforms
the latest state-of-the-art F-CNN models.
Comment: Accepted at MICCAI 201
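The abstract describes ECB as emphasizing parameter updates on classes with lower accuracy. As a hedged illustration of that general idea — not the paper's exact formula — per-class loss weights could be derived from validation accuracy like this, with weights growing with the error rate:

```python
import numpy as np

def error_based_weights(class_accuracies, eps=1e-6):
    """Hypothetical class weights in the spirit of error corrective
    boosting: classes segmented with lower accuracy receive larger
    weights. Illustrative stand-in, not the published ECB formula."""
    acc = np.asarray(class_accuracies, dtype=float)
    raw = (1.0 - acc) + eps            # per-class error rate
    # Normalize so the average weight is 1, preserving the loss scale.
    return raw * len(raw) / raw.sum()

# The class segmented at 50% accuracy gets the largest weight.
w = error_based_weights([0.95, 0.70, 0.50])
```

Such a weight vector would typically be passed to a class-weighted cross-entropy loss during the fine-tuning stage.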
One-shot Joint Extraction, Registration and Segmentation of Neuroimaging Data
Brain extraction, registration and segmentation are indispensable
preprocessing steps in neuroimaging studies. The aim is to extract the brain
from raw imaging scans (i.e., extraction step), align it with a target brain
image (i.e., registration step) and label the anatomical brain regions (i.e.,
segmentation step). Conventional studies typically focus on developing separate
methods for the extraction, registration and segmentation tasks in a supervised
setting. The performance of these methods is largely contingent on the quantity
of training samples and the extent of visual inspections carried out by experts
for error correction. Nevertheless, collecting voxel-level labels and
performing manual quality control on high-dimensional neuroimages (e.g., 3D
MRI) are expensive and time-consuming in many medical studies. In this paper,
we study the problem of one-shot joint extraction, registration and
segmentation in neuroimaging data, which exploits only one labeled template
image (a.k.a. atlas) and a few unlabeled raw images for training. We propose a
unified end-to-end framework, called JERS, to jointly optimize the extraction,
registration and segmentation tasks, allowing feedback among them.
Specifically, we use a group of extraction, registration and segmentation
modules to learn the extraction mask, transformation and segmentation mask,
where modules are interconnected and mutually reinforced by self-supervision.
Empirical results on real-world datasets demonstrate that our proposed method
performs strongly on the extraction, registration and segmentation tasks.
Our code and data can be found at https://github.com/Anonymous4545/JERS
Comment: Published as a research track paper at KDD 2023. Code:
https://github.com/Anonymous4545/JERS
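The staged design described above — extraction feeding registration feeding segmentation — can be sketched as a plain function composition. The module names (`extract`, `register`, `segment`, `warp`) are placeholders standing in for the learned networks, not the actual interfaces in the JERS codebase:

```python
import numpy as np

def jers_forward(raw_image, atlas_image, atlas_labels,
                 extract, register, segment, warp):
    """One forward pass through a hypothetical three-stage pipeline.

    extract/register/segment are assumed learned modules (callables);
    warp applies a spatial transformation to an image or label map.
    """
    mask = extract(raw_image)                  # brain extraction mask
    brain = raw_image * mask                   # skull-stripped image
    transform = register(brain, atlas_image)   # atlas-to-subject transform
    prior = warp(atlas_labels, transform)      # propagated atlas labels
    seg = segment(brain, prior)                # refined segmentation
    return mask, transform, seg

# Toy instantiation with trivial stand-in modules (not trained networks).
raw = np.array([[0.0, 2.0], [3.0, 0.0]])
atlas_img = np.zeros((2, 2))
atlas_lab = np.array([[1, 2], [3, 4]])
extract = lambda x: (x > 0).astype(float)      # threshold "extraction"
register = lambda brain, atlas: "identity"     # stand-in transform
warp = lambda labels, transform: labels        # identity warp
segment = lambda brain, prior: prior           # copy the propagated prior
mask, transform, seg = jers_forward(raw, atlas_img, atlas_lab,
                                    extract, register, segment, warp)
```

In the actual method, the three modules are trained jointly with self-supervised losses so that errors in one stage can be corrected by feedback from the others.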