A deep learning-based method for prostate segmentation in T2-weighted magnetic resonance imaging
We propose a novel automatic method for accurate segmentation of the prostate
in T2-weighted magnetic resonance imaging (MRI). Our method is based on
convolutional neural networks (CNNs). Because of the large variability in the
shape, size, and appearance of the prostate and the scarcity of annotated
training data, we suggest training two separate CNNs. A global CNN will
determine a prostate bounding box, which is then resampled and sent to a local
CNN for accurate delineation of the prostate boundary. This way, the local CNN
can effectively learn to segment the fine details that distinguish the prostate
from the surrounding tissue using the small amount of available training data.
To fully exploit the training data, we synthesize additional data by deforming
the training images and segmentations using a learned shape model. We apply the
proposed method on the PROMISE12 challenge dataset and achieve state-of-the-art
results. Our proposed method generates accurate, smooth, and artifact-free
segmentations. On the test images, we achieve an average Dice score of 90.6
with a small standard deviation of 2.2, which is superior to all previous
methods. Our two-step segmentation approach and data augmentation strategy may
be highly effective in segmentation of other organs from small amounts of
annotated medical images.
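The two-stage, coarse-to-fine pipeline described above can be sketched with simple stand-ins for the two networks. The thresholding "models", the `segment` driver, and the `dice_score` helper below are illustrative assumptions, not the authors' implementation; a real system would use trained CNNs at both stages.

```python
import numpy as np

def global_stage(image):
    """Stand-in for the global CNN: returns a prostate bounding box.

    Here we simply threshold and take the extent of the foreground;
    a real model would predict the box. (Illustrative only.)
    """
    ys, xs = np.nonzero(image > 0.5)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def local_stage(patch):
    """Stand-in for the local CNN: returns a binary mask for the patch."""
    return (patch > 0.5).astype(np.uint8)

def segment(image):
    """Coarse-to-fine segmentation: locate, crop, delineate, paste back."""
    y0, y1, x0, x1 = global_stage(image)
    patch = image[y0:y1, x0:x1]          # ROI sent to the local model
    mask = np.zeros(image.shape, np.uint8)
    mask[y0:y1, x0:x1] = local_stage(patch)
    return mask

def dice_score(a, b):
    """Dice overlap between two binary masks, as reported on PROMISE12."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

Restricting the local model to the resampled box is what lets it spend its capacity on the boundary details rather than on locating the organ.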
Segmentation of the Prostatic Gland and the Intraprostatic Lesions on Multiparametric MRI Using Mask-RCNN
Prostate cancer (PCa) is the most common cancer in men in the United States.
Multiparametric magnetic resonance imaging (mp-MRI) has been explored by many
researchers for targeting prostate biopsies and planning radiation therapy.
However, assessment of mp-MRI can be subjective, so the development of
computer-aided diagnosis systems that automatically delineate the prostate
gland and the intraprostatic lesions (ILs) is important to assist radiologists
in clinical practice. In this paper, we first study the implementation of the
Mask-RCNN model to segment the prostate and ILs. We trained and evaluated
models on 120 patients from two different cohorts. We also used 2D U-Net and
3D U-Net as benchmarks for prostate segmentation and compared the models'
performance. The contour variability of ILs produced by the algorithm was also
benchmarked against the interobserver variability between two different
radiation oncologists on 19 patients. Our results indicate that the Mask-RCNN
model achieves state-of-the-art performance in prostate segmentation and
outperforms several competitive baselines in IL segmentation.
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
CNN-based Prostate Zonal Segmentation on T2-weighted MR Images: A Cross-dataset Study
Prostate cancer is the most common cancer among US men. However, prostate
imaging is still challenging despite the advances in multi-parametric Magnetic
Resonance Imaging (MRI), which provides both morphologic and functional
information pertaining to the pathological regions. Along with whole prostate
gland segmentation, distinguishing between the Central Gland (CG) and
Peripheral Zone (PZ) can guide towards differential diagnosis, since the
frequency and severity of tumors differ in these regions; however, their
boundary is often weak and fuzzy. This work presents a preliminary study on
Deep Learning to automatically delineate the CG and PZ, aiming at evaluating
the generalization ability of Convolutional Neural Networks (CNNs) on two
multi-centric MRI prostate datasets. Specifically, we compared three CNN-based
architectures: SegNet, U-Net, and pix2pix. In such a context, the segmentation
performances achieved with/without pre-training were compared in 4-fold
cross-validation. In general, U-Net outperforms the other methods, especially
when training and testing are performed on multiple datasets.
Comment: 12 pages, 3 figures. Accepted to Neural Approaches to Dynamics of
Signal Exchanges as a Springer book chapter
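The 4-fold cross-validation protocol used in the comparison above can be sketched as follows; the function name, seed, and use of NumPy are assumptions for illustration, not details from the study.

```python
import numpy as np

def kfold_indices(n, k=4, seed=0):
    """Yield shuffled k-fold (train, test) index splits (4-fold in the study).

    Each of the n samples appears in exactly one test fold, so every
    sample is evaluated once while the model never sees it in training.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(n)
    folds = np.array_split(order, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

For a cross-dataset study, the same machinery can be applied at the dataset level (holding out one center at a time) rather than at the patient level.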
Data augmentation using learned transformations for one-shot medical image segmentation
Image segmentation is an important task in many medical applications. Methods
based on convolutional neural networks attain state-of-the-art accuracy;
however, they typically rely on supervised training with large labeled
datasets. Labeling medical images requires significant expertise and time, and
typical hand-tuned approaches for data augmentation fail to capture the complex
variations in such images.
We present an automated data augmentation method for synthesizing labeled
medical images. We demonstrate our method on the task of segmenting magnetic
resonance imaging (MRI) brain scans. Our method requires only a single
segmented scan, and leverages other unlabeled scans in a semi-supervised
approach. We learn a model of transformations from the images, and use the
model along with the labeled example to synthesize additional labeled examples.
Each transformation consists of a spatial deformation field and an
intensity change, enabling the synthesis of complex effects such as variations
in anatomy and image acquisition procedures. We show that training a supervised
segmenter with these new examples provides significant improvements over
state-of-the-art methods for one-shot biomedical image segmentation. Our code
is available at https://github.com/xamyzhao/brainstorm.
Comment: 9 pages, CVPR 201
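Synthesizing one labelled example as described above means applying the same spatial deformation to both the image and its label map, plus an intensity change to the image alone. In the sketch below the flow field, gain, and bias are supplied directly as hypothetical inputs; in the paper both transform components are learned from unlabelled scans.

```python
import numpy as np

def warp_nearest(volume, flow):
    """Nearest-neighbour warp of a 2-D array by a dense flow field.

    flow[..., 0] / flow[..., 1] give per-pixel row/column displacements.
    Nearest-neighbour sampling lets the identical warp be applied to
    discrete label maps without mixing labels.
    """
    h, w = volume.shape
    rr, cc = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_r = np.clip(np.round(rr + flow[..., 0]).astype(int), 0, h - 1)
    src_c = np.clip(np.round(cc + flow[..., 1]).astype(int), 0, w - 1)
    return volume[src_r, src_c]

def synthesize(image, labels, flow, gain, bias):
    """One synthetic labelled example: spatial warp + intensity transform."""
    warped_img = warp_nearest(image, flow) * gain + bias  # intensity change
    warped_lab = warp_nearest(labels, flow)               # labels move too
    return warped_img, warped_lab
```

Because anatomy (deformation) and acquisition (intensity) are modelled separately, the two can be sampled independently to multiply the number of distinct synthetic training pairs.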
Radiological images and machine learning: trends, perspectives, and prospects
The application of machine learning to radiological images is an increasingly
active research area that is expected to grow in the next five to ten years.
Recent advances in machine learning have the potential to recognize and
classify complex patterns from different radiological imaging modalities such
as x-rays, computed tomography, magnetic resonance imaging and positron
emission tomography imaging. In many applications, machine learning based
systems have shown comparable performance to human decision-making. The
applications of machine learning are the key ingredients of future clinical
decision making and monitoring systems. This review covers the fundamental
concepts behind various machine learning techniques and their applications in
several radiological imaging areas, such as medical image segmentation, brain
function studies and neurological disease diagnosis, as well as computer-aided
systems, image registration, and content-based image retrieval systems.
We also briefly discuss current challenges and future directions regarding the
application of machine learning in radiological imaging. By giving insight into
how to take advantage of machine-learning-powered applications, we expect that
clinicians can prevent and diagnose diseases more accurately and efficiently.
Comment: 13 figures
Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching
Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method that unifies deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images with a stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of the learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map to achieve the final segmentation. The proposed method has been extensively evaluated on a dataset containing 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation. Moreover, our method shows superior performance compared with other state-of-the-art segmentation methods.
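The label-transfer step described above can be sketched for a single pixel: each atlas patch votes with the label of its centre pixel, weighted by its similarity to the target patch. Note the simplifications, which are assumptions for illustration: the paper matches learned SSAE features and solves a sparse coding problem, whereas this sketch uses raw-intensity patches and dense inverse-distance weights.

```python
import numpy as np

def patch(img, r, c, k=1):
    """Flattened (2k+1)^2 patch around (r, c); assumes (r, c) is at least
    k pixels away from the image border."""
    return img[r - k:r + k + 1, c - k:c + k + 1].ravel()

def likelihood(target, atlases, labels, r, c, k=1, eps=1e-6):
    """Prostate likelihood at one pixel via patch matching over atlases.

    Similar atlas patches (small squared distance d) get large weights,
    so their centre labels dominate the transferred likelihood.
    """
    q = patch(target, r, c, k)
    num = den = 0.0
    for img, lab in zip(atlases, labels):
        d = np.sum((patch(img, r, c, k) - q) ** 2)
        w = 1.0 / (d + eps)
        num += w * lab[r, c]
        den += w
    return num / den
```

Running this at every pixel yields the likelihood map that the deformable model then regularizes with the sparse shape prior.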
Validation Strategies Supporting Clinical Integration of Prostate Segmentation Algorithms for Magnetic Resonance Imaging
Segmentation of the prostate in medical images is useful for prostate cancer diagnosis and therapy guidance. However, manual segmentation of the prostate is laborious and time-consuming, with inter-observer variability. The focus of this thesis was on accuracy, reproducibility, and procedure-time measurement for prostate segmentation on T2-weighted endorectal magnetic resonance imaging, and on assessment of the potential of a computer-assisted segmentation technique to be translated to clinical practice for prostate cancer management. We collected an image data set from prostate cancer patients, with prostate borders manually delineated by one observer on all the images and by two other observers on a subset of images. We used a complementary set of error metrics to measure the different types of observed segmentation errors. We compared expert manual segmentation as well as semi-automatic and automatic segmentation approaches before and after manual editing by expert physicians. We recorded the time needed for user interaction to initialize the semi-automatic algorithm, for algorithm execution, and for manual editing as necessary. The measured errors for the algorithms compared favourably with the observed differences between manual segmentations. The measured average editing times for computer-assisted segmentation were lower than the fully manual segmentation time, and the algorithms reduced inter-observer variability compared to manual segmentation. The accuracy of the computer-assisted approaches was near or within the range of observed variability in manual segmentation. The recorded procedure time for prostate segmentation was reduced using computer-assisted segmentation followed by manual editing, compared with the time required for fully manual segmentation.
Fast and robust hybrid framework for infant brain classification from structural MRI : a case study for early diagnosis of autism.
The ultimate goal of this work is to develop a computer-aided diagnosis (CAD) system for early autism diagnosis from infant structural magnetic resonance imaging (MRI). The vital step toward this goal is accurate segmentation of the different brain structures: white matter, gray matter, and cerebrospinal fluid, which is the main focus of this thesis. The proposed brain classification approach consists of two major steps. First, the brain is extracted based on the integration of a stochastic model that learns the visual appearance of the brain texture with a geometric model that preserves the brain geometry during the extraction process. Second, the brain tissues are segmented based on shape priors, built using a subset of co-aligned training images, that are adapted during the segmentation process using first- and second-order visual appearance features of infant MRIs. The accuracy of the presented segmentation approach has been tested on 300 infant subjects and evaluated blindly on 15 adult subjects. The experimental results have been evaluated by the MICCAI MR Brain Image Segmentation (MRBrainS13) challenge organizers using three metrics: Dice coefficient, 95-percentile Hausdorff distance, and absolute volume difference. The proposed method ranked first in terms of performance and speed.
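The three MRBrainS13 evaluation metrics named above can be sketched in a few lines. This brute-force version is for illustration only; a real evaluation pipeline would typically compute the Hausdorff term with distance transforms on full boundary surfaces.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between point sets.

    For each point, take the distance to the nearest point of the other
    set; the metric is the larger of the two 95th percentiles.
    """
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

def abs_volume_diff(a, b, voxel_volume=1.0):
    """Absolute volume difference between two masks, in voxel-volume units."""
    return abs(int(a.sum()) - int(b.sum())) * voxel_volume
```

Reporting the three together is deliberate: Dice measures bulk overlap, HD95 penalizes boundary outliers, and the volume difference catches systematic over- or under-segmentation that overlap scores can hide.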