Skull Stripping of Neonatal Brain MRI: Using Prior Shape Information with Graph Cuts
In this paper, we propose a novel technique for skull stripping of infant (neonatal) brain magnetic resonance images using prior shape information within a graph cut framework. Skull stripping plays an important role in brain image analysis and is a major challenge for neonatal brain images. Popular methods like the brain surface extractor (BSE) and brain extraction tool (BET) do not produce satisfactory results for neonatal images due to poor tissue contrast, weak boundaries between brain and non-brain regions, and low spatial resolution. Inclusion of prior shape information helps in accurate identification of brain and non-brain tissues. Prior shape information is obtained from a set of labeled training images. The probability of a pixel belonging to the brain is obtained from the prior shape mask and included in the penalty term of the cost function. An extra smoothness term based on gradient information helps identify the weak boundaries between the brain and non-brain regions. Experimental results on real neonatal brain images show that, compared to BET, BSE, and other methods, our method achieves superior segmentation performance for neonatal brain images and comparable performance for adult brain images.
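The abstract describes folding a shape-prior probability into the data (penalty) term of a graph cut energy. A minimal sketch of such a combined data term is shown below; the intensity model (class means), the weighting `lam`, and the function name are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def data_term(image, prior_prob, mu_brain, mu_bg, lam=0.5):
    """Per-pixel cost of assigning the labels 'brain' / 'non-brain'.

    Combines an intensity-based cost (squared distance to a class mean)
    with a shape-prior cost (-log probability from a training-derived
    prior mask). 'lam' balances the two terms and is a made-up default.
    """
    eps = 1e-6  # avoid log(0)
    # Intensity-based costs for each label.
    cost_brain_int = (image - mu_brain) ** 2
    cost_bg_int = (image - mu_bg) ** 2
    # Shape-prior costs: unlikely-under-the-prior labels become expensive.
    cost_brain_prior = -np.log(prior_prob + eps)
    cost_bg_prior = -np.log(1.0 - prior_prob + eps)
    brain = cost_brain_int + lam * cost_brain_prior
    nonbrain = cost_bg_int + lam * cost_bg_prior
    return brain, nonbrain
```

These per-pixel costs would feed the terminal (t-link) capacities of a max-flow/min-cut solver, with the gradient-based smoothness term on the neighbor (n-link) edges.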
Semi-Automatic 3-D Segmentation of Anatomical Structures of Brain MRI Volumes using Graph Cuts
We present a semi-automatic segmentation technique for the anatomical structures of the brain: cerebrum, cerebellum, and brain stem. The method uses graph cuts segmentation with an anatomic template for initialization. First, a skull stripping procedure is applied to remove non-brain tissues. Then the segmentation proceeds hierarchically: the cerebrum is first extracted from the brain, after which the cerebellum and the brain stem are separated from the remaining volume. This method is fast and can separate the different anatomical structures of the brain in spite of weak boundaries. We describe our approach and present experimental results demonstrating its usefulness.
A Graph theoretic approach to quantifying grey matter volume in neuroimaging
Brain atrophy occurs as a symptom of many diseases. The software package Statistical Parametric Mapping (SPM) is one of the most respected and commonly used tools in the neuroimaging community for quantifying the amount of grey matter (GM) in the brain from magnetic resonance (MR) images. One aspect of quantifying GM volume is to identify, or segment, the regions of the brain image corresponding to grey matter. A recent trend in the field of image segmentation is to model an image as a graph composed of vertices and edges, and then to cut the graph into subgraphs corresponding to different segments. In this thesis, we incorporate graph-cut-based image segmentation algorithms into a GM volume estimation system and compare the resulting GM volume estimates with those achieved via SPM. To aid in this comparison, we use 20 T1-weighted normal brain MR images simulated using BrainWeb [1], [2]. Our results verify that the graph-cuts technique better approximated the GM volumes, halving the error resulting from SPM preprocessing.
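Once a tissue label map is available (whether from graph cuts or SPM), the volume estimate itself is just a voxel count scaled by the voxel size. A sketch, assuming a hypothetical integer label code for grey matter:

```python
import numpy as np

def gm_volume_ml(labels, voxel_dims_mm, gm_label=2):
    """Estimate grey-matter volume from a labeled segmentation.

    'labels' is a per-voxel tissue label map from any segmenter; the
    label code 'gm_label=2' and the voxel dimensions are assumptions.
    Returns the volume in millilitres.
    """
    voxel_vol_mm3 = float(np.prod(voxel_dims_mm))   # mm^3 per voxel
    n_gm = int(np.count_nonzero(labels == gm_label))
    return n_gm * voxel_vol_mm3 / 1000.0            # mm^3 -> ml
```

Comparing two segmenters then reduces to running both on the same image and comparing these numbers against the known BrainWeb ground truth.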
GUBS: Graph-Based Unsupervised Brain Segmentation in MRI Images
Brain segmentation in magnetic resonance imaging (MRI) images is the process of isolating the brain from non-brain tissues to simplify further analysis, such as detecting pathology or calculating volumes. This paper proposes a Graph-based Unsupervised Brain Segmentation (GUBS) method that processes 3D MRI images and segments them into brain, non-brain tissues, and background. GUBS first constructs an adjacency graph from a preprocessed MRI image, weights it by the difference between voxel intensities, and computes its minimum spanning tree (MST). It then uses domain knowledge about the different regions of MRIs to sample representative points from the brain, non-brain, and background regions of the MRI image. The adjacency graph nodes corresponding to the sampled points in each region are identified and used as the terminal nodes for paths connecting the regions in the MST. GUBS then computes a subgraph of the MST by first removing the longest edge of the path connecting the terminal nodes in the brain and other regions, followed by removing the longest edge of the path connecting the non-brain and background regions. This process results in three labeled, connected components, whose labels are used to segment the brain, non-brain tissues, and the background. GUBS was tested by segmenting 3D T1-weighted MRI images from three publicly available data sets. GUBS shows comparable results to the state-of-the-art methods in terms of performance. However, many competing methods rely on having labeled data available for training. Labeling is a time-intensive and costly process, and a big advantage of GUBS is that it does not require labels.
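The cut step of GUBS can be illustrated on a 1-D toy signal: for a chain graph the MST is the chain itself, so removing the largest-weight edge on the path between two seed points splits the signal into two labeled components. This is a deliberately simplified sketch of the idea, not the paper's 3D implementation:

```python
import numpy as np

def segment_1d(intensities, seed_a, seed_b):
    """Toy 1-D illustration of the GUBS cut: sever the largest
    intensity-difference edge on the MST path between two seeds,
    yielding two labeled connected components."""
    # Edge weights = absolute intensity difference between neighbours.
    weights = np.abs(np.diff(np.asarray(intensities, dtype=float)))
    lo, hi = sorted((seed_a, seed_b))
    # Longest edge on the (chain) MST path between the two seeds.
    cut = lo + int(np.argmax(weights[lo:hi]))
    labels = np.zeros(len(intensities), dtype=int)
    labels[cut + 1:] = 1
    return labels
```

In the full method the same principle is applied twice on the MST of a 3D voxel graph: once to separate brain from the rest, then again to separate non-brain tissue from background.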
Learning-based Single-step Quantitative Susceptibility Mapping Reconstruction Without Brain Extraction
Quantitative susceptibility mapping (QSM) estimates the underlying tissue magnetic susceptibility from the MRI gradient-echo phase signal and typically requires several processing steps: phase unwrapping, brain volume extraction, background phase removal, and solving an ill-posed inverse problem. The resulting susceptibility map is known to suffer from inaccuracy near the edges of the brain tissues, in part due to imperfect brain extraction, edge erosion of the brain tissue, and the lack of phase measurement outside the brain. This inaccuracy has hindered the application of QSM for measuring the susceptibility of tissues near the brain edges, e.g., quantifying cortical layers and generating superficial venography. To address these challenges, we propose a learning-based QSM reconstruction method, referred to as autoQSM, that directly estimates the magnetic susceptibility from total phase images without the need for brain extraction and background phase removal. The neural network has a modified U-net structure and is trained using QSM maps computed by a two-step QSM method. A total of 209 healthy subjects with ages ranging from 11 to 82 years were employed for patch-wise network training. The network was validated on data dissimilar to the training data, e.g., in vivo mouse brain data and brains with lesions, which suggests that the network has generalized and learned the underlying mathematical relationship between magnetic field perturbation and magnetic susceptibility. AutoQSM was able to recover the magnetic susceptibility of anatomical structures near the edges of the brain, including the veins covering the cortical surface, the spinal cord, and nerve tracts near the mouse brain boundaries. The advantages of high-quality maps, no need for brain volume extraction, and high reconstruction speed demonstrate its potential for future applications.
Brain Tumor Detection and Classification from MRI Images
A brain tumor is typically detected and classified by biopsy, conducted after brain surgery. Advances in technology and machine learning techniques could help radiologists diagnose tumors without any invasive measures. We utilized a deep learning-based approach to detect and classify tumors into meningioma, glioma, and pituitary tumors. We used a registration- and segmentation-based skull stripping mechanism to remove the skull from the MRI images, and the GrabCut method to verify that the skull-stripped MRI masks retained the features of the tumor for accurate classification. In this research, we propose a transfer learning-based approach in conjunction with discriminative learning rates to perform the classification of brain tumors. The dataset used contains 3,064 T1 FLAIR MRI images. We achieved classification accuracies of 98.83%, 96.26%, and 95.18% on the training, validation, and test sets, and an F1 score of 0.96 on the T1 FLAIR MRI dataset.
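Discriminative learning rates, as used above, assign smaller learning rates to early (more generic) layer groups of a pretrained network and larger ones to the classifier head. A minimal sketch of one common scheme, geometric spacing; the spacing `factor` and the helper name are assumptions, not the paper's exact settings:

```python
def discriminative_lrs(n_groups, max_lr, factor=2.6):
    """Assign geometrically increasing learning rates from the earliest
    layer group to the final head, so pretrained low-level features are
    fine-tuned gently while the new head trains quickly.
    'factor=2.6' is a commonly seen heuristic, assumed here."""
    return [max_lr / factor ** (n_groups - 1 - i) for i in range(n_groups)]
```

The returned list would typically be mapped onto optimizer parameter groups (e.g., PyTorch `torch.optim` param groups), one learning rate per layer group.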
Fast and robust hybrid framework for infant brain classification from structural MRI: a case study for early diagnosis of autism
The ultimate goal of this work is to develop a computer-aided diagnosis (CAD) system for early autism diagnosis from infant structural magnetic resonance imaging (MRI). The vital step toward this goal is accurate segmentation of the different brain structures: white matter, grey matter, and cerebrospinal fluid, which is the main focus of this thesis. The proposed brain classification approach consists of two major steps. First, the brain is extracted based on the integration of a stochastic model, which learns the visual appearance of the brain texture, and a geometric model, which preserves the brain geometry during the extraction process. Second, the brain tissues are segmented based on shape priors, built using a subset of co-aligned training images, that are adapted during the segmentation process using first- and second-order visual appearance features of infant MRIs. The accuracy of the presented segmentation approach has been tested on 300 infant subjects and evaluated blindly on 15 adult subjects. The experimental results were evaluated by the MICCAI MR Brain Image Segmentation (MRBrainS13) challenge organizers using three metrics: Dice coefficient, 95th-percentile Hausdorff distance, and absolute volume difference. The proposed method was ranked first in terms of both performance and speed.
Informative sample generation using class-aware generative adversarial networks for classification of chest X-rays
Training robust deep learning (DL) systems for disease detection from medical images is challenging due to the limited number of images covering different disease types and severities. The problem is especially acute where there is severe class imbalance. We propose an active learning (AL) framework that uses a Bayesian neural network to select the most informative samples for training our model. Informative samples are then used within a novel class-aware generative adversarial network (CAGAN) to generate realistic chest X-ray images for data augmentation by transferring characteristics from one class label to another. Experiments show our proposed AL framework achieves state-of-the-art performance using only a fraction of the full dataset, saving significant time and effort over conventional methods.
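A common way to score informativeness with a Bayesian neural network is predictive entropy over Monte-Carlo forward passes (e.g., MC dropout). The sketch below illustrates that acquisition function; the exact criterion used in the paper may differ, and the function name is an assumption:

```python
import numpy as np

def most_informative(probs, k):
    """Rank unlabeled samples by predictive entropy of the MC-averaged
    softmax output, a standard Bayesian-NN uncertainty proxy.

    probs: array of shape (n_samples, n_mc_passes, n_classes) holding
    softmax outputs from repeated stochastic forward passes.
    Returns the indices of the k most uncertain samples.
    """
    mean_p = probs.mean(axis=1)                            # average over MC passes
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[::-1][:k]                   # highest entropy first
```

The selected samples would then be labeled and fed to the class-aware GAN for augmentation, closing the active learning loop.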