Neuroconductor: an R platform for medical imaging analysis
Neuroconductor (https://neuroconductor.org) is an open-source platform for rapid testing and dissemination of reproducible computational imaging software. The goals of the project are to: (i) provide a centralized repository of R software dedicated to image analysis, (ii) disseminate software updates quickly, (iii) train a large, diverse community of scientists using detailed tutorials and short courses, (iv) increase software quality via automatic and manual quality controls, and (v) promote reproducibility of image data analysis. Based on the programming language R (https://www.r-project.org/), Neuroconductor starts with 51 interoperable packages that cover multiple areas of imaging including visualization, data processing and storage, and statistical inference. Neuroconductor accepts new R package submissions, which are subject to a formal review and continuous automated testing. We provide a description of the purpose of Neuroconductor and the user and developer experience.
Spinal cord gray matter segmentation using deep dilated convolutions
Gray matter (GM) tissue changes have been associated with a wide range of
neurological disorders and were also recently found relevant as a biomarker for
disability in amyotrophic lateral sclerosis. The ability to automatically
segment the GM is, therefore, an important task for modern studies of the
spinal cord. In this work, we devise a modern, simple and end-to-end fully
automated human spinal cord gray matter segmentation method using Deep
Learning, that works both on in vivo and ex vivo MRI acquisitions. We evaluate
our method against six independently developed methods on a GM segmentation
challenge and report state-of-the-art results in 8 out of 10 different
evaluation metrics as well as major network parameter reduction when compared
to traditional medical imaging architectures such as U-Nets. (Comment: 13 pages, 8 figures)
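The parameter savings described above come from dilation: spacing the kernel taps apart widens the receptive field without adding weights. A minimal 1-D NumPy sketch of a dilated (atrous) convolution, written for illustration and not taken from the paper's implementation:

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """Valid 1-D convolution with a dilated (atrous) kernel.

    Inserting `dilation - 1` implicit zeros between kernel taps widens
    the receptive field while the number of parameters stays fixed.
    """
    k = len(w)
    span = (k - 1) * dilation + 1          # effective receptive field
    out_len = len(x) - span + 1
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

signal = np.arange(10, dtype=float)
kernel = np.array([1.0, 0.0, -1.0])        # simple edge-detecting kernel

# Same 3 weights; dilation=1 spans 3 samples, dilation=3 spans 7.
print(dilated_conv1d(signal, kernel, dilation=1))  # eight values of -2.0
print(dilated_conv1d(signal, kernel, dilation=3))  # four values of -6.0
```

Stacking such layers with growing dilation rates is what lets a compact network see large spinal-cord neighborhoods without the parameter count of a U-Net's deep encoder-decoder.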
Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging
Many analyses of neuroimaging data involve studying one or more regions of
interest (ROIs) in a brain image. In order to do so, each ROI must first be
identified. Since every brain is unique, the location, size, and shape of each
ROI varies across subjects. Thus, each ROI in a brain image must either be
manually identified or (semi-) automatically delineated, a task referred to as
segmentation. Automatic segmentation often involves mapping a previously
manually segmented image to a new brain image and propagating the labels to
obtain an estimate of where each ROI is located in the new image. A more recent
approach to this problem is to propagate labels from multiple manually
segmented atlases and combine the results using a process known as label
fusion. To date, most label fusion algorithms either employ voting procedures
or impose prior structure and subsequently find the maximum a posteriori
estimator (i.e., the posterior mode) through optimization. We propose using a
fully Bayesian spatial regression model for label fusion that facilitates
direct incorporation of covariate information while making accessible the
entire posterior distribution. We discuss the implementation of our model via
Markov chain Monte Carlo and illustrate the procedure through both simulation
and application to segmentation of the hippocampus, an anatomical structure
known to be associated with Alzheimer's disease. (Comment: 24 pages, 10 figures)
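The voting baseline that the Bayesian spatial model generalizes can be sketched in a few lines. This is a hypothetical `majority_vote_fusion` helper in plain NumPy, not code from the paper:

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse candidate segmentations by a per-voxel majority vote.

    `label_maps` is a list of integer label arrays of identical shape,
    one per registered atlas. Each voxel receives the label that the
    most atlases agree on -- the simplest label-fusion rule.
    """
    stacked = np.stack(label_maps)                 # (n_atlases, *image_shape)
    n_labels = int(stacked.max()) + 1
    # Count votes for each label at each voxel, then take the winner.
    votes = np.stack([(stacked == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)

atlases = [np.array([[0, 1], [1, 1]]),
           np.array([[0, 1], [0, 1]]),
           np.array([[1, 1], [0, 1]])]
fused = majority_vote_fusion(atlases)              # per-voxel winners
```

Unlike this vote, which discards uncertainty, the fully Bayesian regression yields a posterior probability per voxel, so covariates and spatial structure can inform the fused labels.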
Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.
Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling the size of the training set, differences in head coil usage, and the amount of brain atrophy. High-resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using FreeSurfer. Subsequently, FreeSurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for FreeSurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by only ≤ 0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to a training set of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap by less than 0.01.
These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well
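The Dice coefficients quoted above measure spatial overlap between two binary masks as 2|A∩B| / (|A| + |B|). A minimal sketch of the metric (not the study's evaluation pipeline):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:            # both masks empty: define overlap as perfect
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

manual = np.array([0, 1, 1, 1, 0, 0])   # gold-standard mask (toy example)
auto   = np.array([0, 1, 1, 0, 0, 1])   # automated mask (toy example)
print(round(dice(manual, auto), 3))     # → 0.667
```

Dice ranges from 0 (no overlap) to 1 (identical masks), which is why the jump from 0.821 to 0.954 for the brainstem represents a substantial correction.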
Uncovering convolutional neural network decisions for diagnosing multiple sclerosis on conventional MRI using layer-wise relevance propagation
Machine learning-based imaging diagnostics has recently reached or even
superseded the level of clinical experts in several clinical domains. However,
classification decisions of a trained machine learning system are typically
non-transparent, a major hindrance for clinical integration, error tracking or
knowledge discovery. In this study, we present a transparent deep learning
framework relying on convolutional neural networks (CNNs) and layer-wise
relevance propagation (LRP) for diagnosing multiple sclerosis (MS). MS is
commonly diagnosed utilizing a combination of clinical presentation and
conventional magnetic resonance imaging (MRI), specifically the occurrence and
presentation of white matter lesions in T2-weighted images. We hypothesized
that using LRP in a naive predictive model would enable us to uncover relevant
image features that a trained CNN uses for decision-making. Since imaging
markers in MS are well-established this would enable us to validate the
respective CNN model. First, we pre-trained a CNN on MRI data from the
Alzheimer's Disease Neuroimaging Initiative (n = 921), afterwards specializing
the CNN to discriminate between MS patients and healthy controls (n = 147).
Using LRP, we then produced a heatmap for each subject in the holdout set
depicting the voxel-wise relevance for a particular classification decision.
The resulting CNN model achieved a balanced accuracy of 87.04% and an area
under the receiver operating characteristic curve of 96.08%. The
subsequent LRP visualization revealed that the CNN model indeed focuses on
individual lesions, but also incorporates additional information such as lesion
location, non-lesional white matter or gray matter areas such as the thalamus,
which are established conventional and advanced MRI markers in MS. We conclude
that LRP and the proposed framework have the capability to make diagnostic
decisions of..
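The per-voxel heatmaps described above come from propagating the network's output backwards, layer by layer, while (approximately) conserving total relevance. A toy NumPy sketch of one dense-layer backward step using the epsilon rule, a common LRP variant; this is an illustration, not the authors' implementation:

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """Propagate relevance back through one dense layer (epsilon rule).

    R_i = sum_j (a_i * w_ij) / (z_j + eps * sign(z_j)) * R_j,
    where z_j = sum_i a_i * w_ij is the layer's pre-activation.
    The eps term stabilizes division when z_j is near zero.
    """
    z = activations @ weights                        # (n_out,)
    stabilized = z + eps * np.where(z >= 0, 1.0, -1.0)
    contrib = activations[:, None] * weights         # (n_in, n_out)
    return (contrib / stabilized) @ relevance_out    # (n_in,)

# Toy two-layer ReLU net: 3 inputs -> 2 hidden units -> 1 output score.
W1 = np.array([[1.0, -0.5], [0.5, 1.0], [-1.0, 0.5]])
W2 = np.array([[1.0], [0.5]])

x = np.array([1.0, 2.0, 0.5])
h = np.maximum(x @ W1, 0.0)            # ReLU hidden activations
y = h @ W2                             # output score = total relevance

# ReLU passes relevance through unchanged for active units, so only the
# dense layers need the epsilon rule here.
R_hidden = lrp_epsilon(W2, h, y.ravel())
R_input = lrp_epsilon(W1, x, R_hidden)
# Relevance is (approximately) conserved at every layer:
print(y.ravel()[0], R_hidden.sum(), R_input.sum())
```

For a CNN over MRI volumes the same decomposition is applied per layer down to the input, yielding one relevance value per voxel, which is what the lesion-highlighting heatmaps visualize.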