186 research outputs found
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 2017.
An application of cascaded 3D fully convolutional networks for medical image segmentation
Recent advances in 3D fully convolutional networks (FCN) have made it
feasible to produce dense voxel-wise predictions of volumetric images. In this
work, we show that a multi-class 3D FCN trained on manually labeled CT scans of
several anatomical structures (ranging from the large organs to thin vessels)
can achieve competitive segmentation results, while avoiding the need for
handcrafting features or training class-specific models.
To this end, we propose a two-stage, coarse-to-fine approach that will first
use a 3D FCN to roughly define a candidate region, which will then be used as
input to a second 3D FCN. This reduces the number of voxels the second FCN has
to classify to ~10% and allows it to focus on more detailed segmentation of the
organs and vessels.
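The candidate-region step of the cascade above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' released code; the function name `candidate_crop` and the margin parameter are assumptions:

```python
import numpy as np

def candidate_crop(volume, coarse_mask, margin=4):
    """Crop `volume` to the bounding box of the coarse mask plus a margin.

    The second-stage FCN then only classifies voxels inside this crop
    (roughly 10% of the full volume in the paper's setting).
    """
    coords = np.argwhere(coarse_mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    sl = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[sl], sl

# Toy example: a 64^3 volume with a small "organ" region in the coarse mask.
vol = np.zeros((64, 64, 64), dtype=np.float32)
mask = np.zeros_like(vol)
mask[20:30, 25:35, 30:40] = 1
crop, sl = candidate_crop(vol, mask, margin=4)
print(crop.shape)  # (18, 18, 18)
```

In practice the crop would be fed to the second 3D FCN, and its output pasted back into the full volume at the recorded slice positions.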
We utilize training and validation sets consisting of 331 clinical CT images
and test our models on a completely unseen data collection acquired at a
different hospital that includes 150 CT scans, targeting three anatomical
organs (liver, spleen, and pancreas). In challenging organs such as the
pancreas, our cascaded approach improves the mean Dice score from 68.5 to
82.2%, achieving the highest reported average score on this dataset. We compare
with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and
achieve a significantly higher performance in small organs and vessels.
Furthermore, we explore fine-tuning our models to different datasets.
Our experiments illustrate the promise and robustness of current 3D FCN based
semantic segmentation of medical images, achieving state-of-the-art results.
Our code and trained models are available for download:
https://github.com/holgerroth/3Dunet_abdomen_cascade
Comment: Preprint accepted for publication in Computerized Medical Imaging and Graphics. Substantial extension of arXiv:1704.06382; corrected references to figure numbers in this version.
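The Dice scores quoted in this abstract follow the standard overlap formula 2|A∩B| / (|A| + |B|). A minimal numpy sketch, with an illustrative function name and a 2D toy example (the formula is identical for 3D volumes):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks of any dimensionality."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.zeros((10, 10), dtype=np.uint8); a[2:8, 2:8] = 1   # 36 foreground pixels
b = np.zeros((10, 10), dtype=np.uint8); b[4:8, 2:8] = 1   # 24 foreground pixels
print(round(dice_score(a, b), 3))  # 2*24 / (36+24) = 0.8
```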
Combining Shape and Learning for Medical Image Analysis
Automatic methods with the ability to make accurate, fast and robust assessments of medical images are in high demand in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to that of an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods meet these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is used as an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. in computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities, i.e. pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.
Automatic Multi-organ Segmentation on Abdominal CT with Dense V-networks
Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the GI tract (esophagus, stomach, duodenum) and surrounding organs (liver, spleen, left kidney, gallbladder). We directly compared the segmentation accuracy of the proposed method to existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 vs. 0.71, 0.74 and 0.74 for the pancreas, 0.90 vs. 0.85, 0.87 and 0.83 for the stomach and 0.76 vs. 0.68, 0.69 and 0.66 for the esophagus. We conclude that deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.
Multi-organ Segmentation Network with Adversarial Performance Validator
Organ segmentation on computed tomography (CT) images has become a cornerstone of modern medical image analysis, supporting clinical workflows in multiple domains. Previous segmentation methods include 2D convolutional neural network (CNN) approaches, fed with individual CT slices and therefore lacking structural knowledge along the axial view, and 3D CNN-based methods, which incur a high computational cost in multi-organ segmentation applications. This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework. The competition between the classifier and the performance validator drives accurate segmentation results via back-propagation. The proposed network converts the coarse 2D result into high-quality 3D segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy. In addition, the structural information of each organ is captured by a statistically derived prior bounding box, which is transformed into a global feature that guides the learning process in the 3D fine segmentation stage. Experiments on the NIH pancreas segmentation dataset demonstrate that the proposed network achieves state-of-the-art accuracy on small-organ segmentation, outperforming the previous best method. High accuracy is also reported for multi-organ segmentation on a dataset we collected ourselves.
Automated liver tissues delineation based on machine learning techniques: A survey, current trends and future orientations
There is no denying how much machine learning and computer vision have grown in recent years. Their main advantages lie in their automation, suitability, and ability to generate impressive results in a matter of seconds in a reproducible manner. This is aided by the ubiquitous advances in the computing capabilities of current graphics processing units and the highly efficient implementation of such techniques. Hence, in this paper, we
survey the key studies that are published between 2014 and 2020, showcasing the
different machine learning algorithms researchers have used to segment the
liver, hepatic-tumors, and hepatic-vasculature structures. We divide the
surveyed studies based on the tissue of interest (hepatic-parenchyma,
hepatic-tumors, or hepatic-vessels), highlighting the studies that tackle more
than one task simultaneously. Additionally, the machine learning algorithms are
classified as either supervised or unsupervised, and further partitioned when the number of works that fall under a certain scheme is significant. Moreover, the different datasets and challenges found in the literature and on websites, containing masks of the aforementioned tissues, are thoroughly discussed, highlighting the organizers' original contributions and those of other researchers. The metrics used extensively in the literature are also reviewed, stressing their relevance to the task at hand. Finally, critical challenges and
future directions are emphasized for innovative researchers to tackle, exposing
gaps that need addressing, such as the scarcity of studies on the vessel segmentation challenge, and why their absence needs to be dealt with in an accelerated manner.
Comment: 41 pages, 4 figures, 13 equations, 1 table. A review paper on liver tissue segmentation based on automated ML-based techniques.
Improved Abdominal Multi-Organ Segmentation via 3D Boundary-Constrained Deep Neural Networks
Quantitative assessment of the abdominal region from clinically acquired CT
scans requires the simultaneous segmentation of abdominal organs. Thanks to the
availability of high-performance computational resources, deep learning-based
methods have resulted in state-of-the-art performance for the segmentation of
3D abdominal CT scans. However, the complex characterization of organs with
fuzzy boundaries prevents the deep learning methods from accurately segmenting
these anatomical organs. Specifically, the voxels on the boundary of organs are
more vulnerable to misprediction due to the highly-varying intensity of
inter-organ boundaries. This paper investigates the possibility of improving
the abdominal image segmentation performance of the existing 3D encoder-decoder
networks by leveraging organ-boundary prediction as a complementary task. To
address the problem of abdominal multi-organ segmentation, we train the 3D
encoder-decoder network to simultaneously segment the abdominal organs and
their corresponding boundaries in CT scans via multi-task learning. The network
is trained end-to-end using a loss function that combines two task-specific
losses, i.e., complete organ segmentation loss and boundary prediction loss. We
explore two different network topologies based on the extent of weights shared
between the two tasks within a unified multi-task framework. To evaluate the
utilization of complementary boundary prediction task in improving the
abdominal multi-organ segmentation, we use three state-of-the-art
encoder-decoder networks: 3D UNet, 3D UNet++, and 3D Attention-UNet. The
effectiveness of utilizing the organs' boundary information for abdominal
multi-organ segmentation is evaluated on two publicly available abdominal CT datasets. A maximum relative improvement of 3.5% and 3.6% is observed in mean Dice score for the Pancreas-CT and BTCV datasets, respectively.
Comment: 15 pages, 16 figures, journal paper
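The multi-task training described above combines an organ-segmentation loss with a boundary-prediction loss. The sketch below is an illustrative assumption, not the paper's implementation: it uses soft-Dice terms for both losses, a simple cross-shaped erosion to derive boundary targets (note `np.roll` wraps at the array border, so masks should not touch it), and hypothetical names and weights throughout:

```python
import numpy as np

def soft_dice_loss(prob, target, eps=1e-7):
    """1 - soft Dice overlap between a probability map and a target map."""
    inter = (prob * target).sum()
    return 1.0 - 2.0 * inter / (prob.sum() + target.sum() + eps)

def boundary_mask(seg):
    """Organ boundary as the difference between a boolean mask and its erosion."""
    eroded = seg.copy()
    for axis in range(seg.ndim):
        eroded = eroded & np.roll(seg, 1, axis) & np.roll(seg, -1, axis)
    return seg & ~eroded

def multitask_loss(seg_prob, bnd_prob, seg_gt, w_seg=1.0, w_bnd=1.0):
    """Weighted sum of the segmentation loss and the boundary loss."""
    bnd_gt = boundary_mask(seg_gt.astype(bool)).astype(float)
    return (w_seg * soft_dice_loss(seg_prob, seg_gt.astype(float))
            + w_bnd * soft_dice_loss(bnd_prob, bnd_gt))

# Toy check: a perfect prediction of both tasks drives the combined loss to ~0.
seg_gt = np.zeros((10, 10), dtype=bool); seg_gt[2:8, 2:8] = True
perfect_seg = seg_gt.astype(float)
perfect_bnd = boundary_mask(seg_gt).astype(float)
print(multitask_loss(perfect_seg, perfect_bnd, seg_gt))  # approximately 0
```

In the paper's setting both heads would be network outputs, and the extent of weight sharing between the two tasks is the design choice the authors explore.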
Attention and Pooling based Sigmoid Colon Segmentation in 3D CT images
Segmentation of the sigmoid colon is a crucial aspect of treating
diverticulitis. It enables accurate identification and localisation of
inflammation, which in turn helps healthcare professionals make informed
decisions about the most appropriate treatment options. This research presents
a novel deep learning architecture for segmenting the sigmoid colon from
Computed Tomography (CT) images using a modified 3D U-Net architecture. Several
variations of the 3D U-Net model with modified hyper-parameters were examined
in this study. Pyramid pooling (PyP) and channel-spatial Squeeze and Excitation
(csSE) were also used to improve the model performance. The networks were
trained using manually annotated sigmoid colon data. A five-fold cross-validation
procedure was used on a test dataset to evaluate the network's performance. As
indicated by the maximum Dice similarity coefficient (DSC) of 56.92+/-1.42%,
the application of PyP and csSE techniques improves segmentation precision. We
explored ensemble methods including averaging, weighted averaging, majority
voting, and max ensemble. The results show that average and majority voting
approaches with a threshold value of 0.5 and consistent weight distribution
among the top three models produced comparable and optimal results with DSC of
88.11+/-3.52%. The results indicate that the application of a modified 3D U-Net
architecture is effective for segmenting the sigmoid colon in Computed
Tomography (CT) images. In addition, the study highlights the potential
benefits of integrating ensemble methods to improve segmentation precision.
Comment: 8 pages, 6 figures, accepted at IEEE DICTA 2023
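The averaging and majority-voting ensembles with a 0.5 threshold described above can be sketched as follows; the function names are illustrative, not the study's code:

```python
import numpy as np

def majority_vote(prob_maps, threshold=0.5):
    """Majority vote over per-model probability maps.

    Each map is binarised at `threshold`; a voxel is foreground when more
    than half of the models agree.
    """
    votes = np.stack([p >= threshold for p in prob_maps])
    return votes.sum(axis=0) > len(prob_maps) / 2

def average_ensemble(prob_maps, threshold=0.5):
    """Average the probability maps across models, then binarise."""
    return np.mean(np.stack(prob_maps), axis=0) >= threshold

# Three toy 2x2 model outputs.
p1 = np.array([[0.9, 0.2], [0.6, 0.1]])
p2 = np.array([[0.8, 0.6], [0.4, 0.2]])
p3 = np.array([[0.7, 0.3], [0.7, 0.3]])
print(majority_vote([p1, p2, p3]).astype(int))
# [[1 0]
#  [1 0]]
```

A weighted average would replace `np.mean` with a weighted sum; with the consistent weights reported above, it coincides with plain averaging.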
Automatic acute ischemic stroke lesion segmentation using semi-supervised learning
Ischemic stroke is a common disease in the elderly population, which can
cause long-term disability and even death. However, the time window for
treatment of ischemic stroke in its acute stage is very short. To rapidly localize
and quantitatively evaluate the acute ischemic stroke (AIS) lesions, many
deep-learning-based lesion segmentation methods have been proposed in the
literature, where a deep convolutional neural network (CNN) was trained on
hundreds of fully labeled subjects with accurate annotations of AIS lesions.
Although high segmentation accuracy can be achieved, the accurate labels
must be annotated by experienced clinicians, and it is therefore very
time-consuming to obtain a large number of fully labeled subjects. In this
paper, we propose a semi-supervised method to automatically segment AIS lesions
in diffusion weighted images and apparent diffusion coefficient maps. By using
a large number of weakly labeled subjects and a small number of fully labeled
subjects, our proposed method is able to accurately detect and segment the AIS
lesions. In particular, our proposed method consists of three parts: 1) a
double-path classification net (DPC-Net) trained in a weakly-supervised way is
used to detect the suspicious regions of AIS lesions; 2) a pixel-level K-Means
clustering algorithm is used to identify the hyperintense regions on the
DWIs; and 3) a region-growing algorithm combines the outputs of the DPC-Net and
the K-Means to obtain the final precise lesion segmentation. In our experiment,
we use 460 weakly labeled subjects and 15 fully labeled subjects to train and
fine-tune the proposed method. By evaluating on a clinical dataset with 150
fully labeled subjects, our proposed method achieves a mean dice coefficient of
0.642 and a lesion-wise F1 score of 0.822.
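The third stage above, where a region-growing step combines the classifier's suspicious regions with the K-Means hyperintense mask, can be sketched as a flood fill seeded at their intersection. This is a minimal 2D sketch under stated assumptions (4-connectivity, boolean masks); the function name is hypothetical and this is not the authors' implementation:

```python
from collections import deque

import numpy as np

def region_grow(seed_mask, allowed_mask):
    """Grow 4-connected regions from seed pixels, staying inside allowed_mask.

    Seeds play the role of the DPC-Net's suspicious regions; allowed_mask
    plays the role of the K-Means hyperintense regions.
    """
    grown = np.zeros_like(allowed_mask, dtype=bool)
    q = deque(map(tuple, np.argwhere(seed_mask & allowed_mask)))
    for s in q:
        grown[s] = True
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < grown.shape[0] and 0 <= nx < grown.shape[1]
                    and allowed_mask[ny, nx] and not grown[ny, nx]):
                grown[ny, nx] = True
                q.append((ny, nx))
    return grown

# Toy example: two bright blobs, but the classifier flags only the first one.
hyper = np.zeros((8, 8), dtype=bool)
hyper[1:4, 1:4] = True   # lesion blob
hyper[5:7, 5:7] = True   # bright but unflagged region
seeds = np.zeros_like(hyper)
seeds[2, 2] = True
lesion = region_grow(seeds, hyper)
print(lesion.sum())  # 9: only the seeded 3x3 blob survives
```

The same logic extends to 3D by adding the two out-of-plane neighbour offsets.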
Improving CT image tumor segmentation through deep supervision and attentional gates
Computed Tomography (CT) is an imaging procedure that combines many X-ray measurements taken from different angles. The segmentation of areas in CT images provides valuable aid to physicians and radiologists in establishing a patient diagnosis. CT scans of a body torso usually include several neighboring internal organs. Deep learning has become the state of the art in medical image segmentation. For such techniques to perform a successful segmentation, it is of great importance that the network learns to focus on the organ of interest and surrounding structures, and that it can detect target regions of different sizes. In this paper, we propose the extension of a popular deep learning methodology, Convolutional Neural Networks (CNN), by including deep supervision and attention gates. Our experimental evaluation shows that the inclusion of attention and deep supervision results in consistent improvement of the tumor prediction accuracy across the different datasets and training sizes while adding minimal computational overhead. © 2020 Turečková, Tureček, Komínková Oplatková and Rodríguez-Sánchez. Internal Grant Agency of Tomas Bata University [IGA/CebiaTech/2020/001]; COST (European Cooperation in Science and Technology) [CA15140]; program Projects of Large Research, Development, and Innovations Infrastructures [e-INFRA LM2018140].
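The attention gates mentioned above rescale skip-connection features by learned coefficients in [0, 1]. A minimal numpy sketch of an additive gate in the style of Attention U-Net, with dense toy features and random weights standing in for learned parameters (all names are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate.

    x: skip-connection features (N, Fx); g: coarser gating features (N, Fg).
    A shared ReLU stage feeds a sigmoid that yields per-position attention
    coefficients, which rescale the skip features.
    """
    q = np.maximum(x @ Wx + g @ Wg, 0.0)   # ReLU(Wx.x + Wg.g)
    alpha = sigmoid(q @ psi)               # (N, 1) coefficients in (0, 1)
    return x * alpha

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8)); g = rng.normal(size=(4, 8))
Wx = rng.normal(size=(8, 8)); Wg = rng.normal(size=(8, 8)); psi = rng.normal(size=(8, 1))
out = attention_gate(x, g, Wx, Wg, psi)
print(out.shape)  # (4, 8)
```

In a real network the weights are learned, the features are convolutional maps, and deep supervision adds auxiliary losses at intermediate decoder resolutions.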