3D Deep Learning for Anatomical Structure Segmentation in Multiple Imaging Modalities
Accurate, automated quantitative segmentation of anatomical structures in radiological scans, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can produce significant biomarkers and can be integrated into computer-aided diagnosis (CADx) systems to support the interpretation of medical images from multi-protocol scanners. However, there are serious challenges towards developing robust automated segmentation techniques, including high variations in anatomical structure and size, varying image spatial resolutions resulting from different scanner protocols, and the presence of blurring artefacts. This paper presents a novel computing approach for automated organ and muscle segmentation in medical images from multiple modalities by harnessing the advantages of deep learning techniques in a two-part process: (1) a 3D encoder-decoder, Rb-UNet, builds a localisation model and a 3D Tiramisu network generates a boundary-preserving segmentation model for each target structure; (2) the fully trained Rb-UNet predicts a 3D bounding box encapsulating the target structure of interest, after which the fully trained Tiramisu model performs segmentation to reveal organ or muscle boundaries for every protrusion and indentation. The proposed approach is evaluated on six different datasets, including MRI, Dynamic Contrast-Enhanced (DCE) MRI and CT scans targeting the pancreas, liver, kidneys and iliopsoas muscles. We achieve quantitative measures of mean Dice similarity coefficient (DSC) that surpass or are comparable with the state-of-the-art, and demonstrate statistical stability. A qualitative evaluation performed by two independent experts in radiology and radiography verified the preservation of detailed organ and muscle boundaries.
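The mean Dice similarity coefficient (DSC) used as the quantitative measure above can be sketched as follows. This is a generic illustration of the metric itself, not the authors' evaluation code:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy 3D mask: perfect overlap gives DSC ~ 1, disjoint masks give 0.
a = np.zeros((4, 4, 4), dtype=bool)
a[1:3, 1:3, 1:3] = True
score = dice_coefficient(a, a)
```

A mean DSC over a dataset is then just the average of per-volume scores.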
Improved Abdominal Multi-Organ Segmentation via 3D Boundary-Constrained Deep Neural Networks
Quantitative assessment of the abdominal region from clinically acquired CT
scans requires the simultaneous segmentation of abdominal organs. Thanks to the
availability of high-performance computational resources, deep learning-based
methods have resulted in state-of-the-art performance for the segmentation of
3D abdominal CT scans. However, the complex characterization of organs with
fuzzy boundaries prevents the deep learning methods from accurately segmenting
these anatomical organs. Specifically, the voxels on the boundary of organs are
more vulnerable to misprediction due to the highly-varying intensity of
inter-organ boundaries. This paper investigates the possibility of improving
the abdominal image segmentation performance of the existing 3D encoder-decoder
networks by leveraging organ-boundary prediction as a complementary task. To
address the problem of abdominal multi-organ segmentation, we train the 3D
encoder-decoder network to simultaneously segment the abdominal organs and
their corresponding boundaries in CT scans via multi-task learning. The network
is trained end-to-end using a loss function that combines two task-specific
losses, i.e., complete organ segmentation loss and boundary prediction loss. We
explore two different network topologies based on the extent of weights shared
between the two tasks within a unified multi-task framework. To evaluate the
utilization of complementary boundary prediction task in improving the
abdominal multi-organ segmentation, we use three state-of-the-art
encoder-decoder networks: 3D UNet, 3D UNet++, and 3D Attention-UNet. The
effectiveness of utilizing the organs' boundary information for abdominal
multi-organ segmentation is evaluated on two publicly available abdominal CT
datasets. A maximum relative improvement of 3.5% and 3.6% is observed in Mean
Dice Score for Pancreas-CT and BTCV datasets, respectively. Comment: 15 pages, 16 figures, journal paper
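The multi-task objective described above can be sketched as a weighted sum of the two task losses. Binary cross-entropy for both tasks and the weight `lam` are illustrative assumptions here; the paper's exact task losses and weighting may differ:

```python
import numpy as np

def bce(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Binary cross-entropy between predicted probabilities and a binary target."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def multitask_loss(seg_pred, seg_true, bnd_pred, bnd_true, lam: float = 0.5) -> float:
    """Combined objective: organ-segmentation loss plus weighted boundary-prediction loss."""
    return bce(seg_pred, seg_true) + lam * bce(bnd_pred, bnd_true)

seg_true = np.array([0.0, 1.0, 1.0, 0.0])   # toy organ labels
bnd_true = np.array([0.0, 1.0, 0.0, 0.0])   # toy boundary labels
loss = multitask_loss(seg_true, seg_true, bnd_true, bnd_true)  # perfect predictions
```

Both heads share the encoder (fully or partially, depending on the topology), so the boundary gradient regularises the shared features.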
A Coarse-to-fine Framework for Automated Kidney and Kidney Tumor Segmentation from Volumetric CT Images
Automatic semantic segmentation of the kidney and kidney tumors is a promising tool for the treatment of kidney cancer. Due to the wide variety in kidney and kidney tumor morphology, accurate segmentation of the kidney and kidney tumors remains a great challenge. We propose a new framework based on our previous work accepted at MICCAI 2019: a coarse-to-fine segmentation framework that realizes accurate and fast segmentation of the kidney and kidney tumors.
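The coarse-to-fine idea can be illustrated by its cropping step: a coarse-stage mask localises the kidney, and the fine stage then segments only a padded bounding box around it. This is a generic sketch under that assumption, not the authors' implementation:

```python
import numpy as np

def roi_from_coarse_mask(mask: np.ndarray, margin: int = 2):
    """Padded bounding-box slices around the foreground voxels of a coarse mask."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, mask.shape)
    return tuple(slice(a, b) for a, b in zip(lo, hi))

volume = np.zeros((32, 32, 32))
coarse = np.zeros_like(volume, dtype=bool)
coarse[10:14, 8:12, 20:24] = True    # toy coarse-stage kidney prediction
roi = roi_from_coarse_mask(coarse)   # the fine stage would segment only volume[roi]
```

Restricting the fine model to the cropped region keeps its input small, which is what makes the second stage both more accurate and fast.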
Deep learning for cardiac image segmentation: A review
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, covering common imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT) and ultrasound (US), and major anatomical structures of interest (ventricles, atria and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
Multi-organ Segmentation via Co-training Weight-averaged Models from Few-organ Datasets
Multi-organ segmentation has extensive applications in clinical practice. To
segment multiple organs of interest, it is generally quite
difficult to collect full annotations of all the organs on the same images, as
some medical centers might only annotate a portion of the organs due to their
own clinical practice. In most scenarios, one might obtain annotations of a
single or a few organs from one training set, and obtain annotations of the
other organs from another set of training images. Existing approaches mostly
train and deploy a single model for each subset of organs, which are memory
intensive and also time inefficient. In this paper, we propose to co-train
weight-averaged models for learning a unified multi-organ segmentation network
from few-organ datasets. We collaboratively train two networks and let the
coupled networks teach each other on un-annotated organs. To alleviate the
noisy teaching supervision between the networks, weight-averaged models
are adopted to produce more reliable soft labels. In addition, a novel region
mask is utilized to selectively apply the consistent constraint on the
un-annotated organ regions that require collaborative teaching, which further
boosts the performance. Extensive experiments on three publicly available
single-organ datasets (LiTS, KiTS and Pancreas) and manually constructed
single-organ datasets from MOBA show that our method better utilizes the
few-organ datasets and achieves superior performance with lower inference
cost. Comment: Accepted by MICCAI 2020
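The two mechanisms described above, weight-averaged (EMA-style) teacher models and a region mask that restricts the consistency constraint to un-annotated organs, can be sketched as follows. The MSE consistency term and all names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def ema_update(teacher, student, alpha: float = 0.99):
    """Weight-averaged teacher: teacher <- alpha * teacher + (1 - alpha) * student."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]

def masked_consistency(p_student: np.ndarray, p_teacher: np.ndarray,
                       region_mask: np.ndarray) -> float:
    """Consistency penalty applied only where region_mask marks un-annotated organs."""
    sq = (p_student - p_teacher) ** 2
    return float(sq[region_mask].mean())

# Toy single-layer "weights": one EMA step moves the teacher towards the student.
teacher = [np.ones((2, 2))]
student = [np.zeros((2, 2))]
teacher = ema_update(teacher, student, alpha=0.9)
```

Because the teacher averages the student's weights over time, its soft labels are smoother and less noisy than any single training snapshot, which is the rationale for using it as the teaching signal.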
Towards Robust Deep Learning for Medical Image Analysis
Multi-dimensional medical data are rapidly collected to enhance healthcare. With recent advances in artificial intelligence, deep learning techniques have been widely applied to medical images, which constitute a significant proportion of medical data. Automated medical image analysis has the potential to benefit general clinical procedures, e.g., disease screening, malignancy diagnosis, patient risk prediction, and surgical planning. Despite this preliminary success, the robustness of these approaches must be carefully validated and sufficiently guaranteed before they are applied to real-world clinical problems.
In this thesis, we propose different approaches to improve the robustness of deep learning algorithms for automated medical image analysis. (i) In terms of network architecture, we leverage the advantages of both 2D and 3D networks and propose an alternative 2.5D approach for 3D organ segmentation. (ii) To improve data efficiency and utilize large-scale unlabeled medical data, we propose a unified framework for semi-supervised medical image segmentation and domain adaptation. (iii) For safety-critical applications, we design a unified approach for failure detection and anomaly segmentation. (iv) We study the problem of federated learning, which enables collaborative learning and preserves data privacy, and improve the robustness of the algorithm in the non-i.i.d. setting. (v) We incorporate multi-phase information for more accurate pancreatic tumor detection. (vi) Finally, we present our findings on potential pancreatic cancer screening from non-contrast CT scans, which outperforms expert radiologists.
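As one concrete example, the 2.5D input in (i) is commonly built by feeding a target slice plus its neighbours to a 2D network as channels, trading 2D efficiency against 3D context. This is a generic sketch of that construction (edge replication at the volume border is an assumption; the thesis's exact formulation may differ):

```python
import numpy as np

def slices_25d(volume: np.ndarray, index: int, context: int = 1) -> np.ndarray:
    """Return the target slice plus `context` neighbours per side as input channels.

    Out-of-range neighbour indices are clamped to the border (edge replication).
    """
    depth = volume.shape[0]
    idxs = np.clip(np.arange(index - context, index + context + 1), 0, depth - 1)
    return volume[idxs]  # shape: (2 * context + 1, H, W)

vol = np.arange(160, dtype=float).reshape(10, 4, 4)  # toy (D, H, W) volume
stack = slices_25d(vol, 5)  # 3-channel input for a 2D network
```

A 2D network over such stacks sees local through-plane context at a fraction of the memory cost of a full 3D network.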
Automatic Pancreas Segmentation and 3D Reconstruction for Morphological Feature Extraction in Medical Image Analysis
The development of highly accurate, quantitative automatic medical image segmentation techniques, in comparison to manual techniques, remains a constant challenge for medical image analysis. In particular, segmenting the pancreas from an abdominal scan presents additional difficulties: this organ has very high anatomical variability, and a full inspection is problematic due to the location of the pancreas behind the stomach. Accurate, automatic pancreas segmentation can therefore yield quantitative morphological measures such as volume and curvature, supporting biomedical research to establish the severity and progression of a condition such as type 2 diabetes mellitus. Furthermore, it can also guide subject stratification after diagnosis or before clinical trials, and help shed additional light on detecting early signs of pancreatic cancer. This PhD thesis delivers a novel approach for automatic, accurate quantitative pancreas segmentation, mostly but not exclusively in Magnetic Resonance Imaging (MRI), by harnessing the advantages of machine learning and classical image processing in computer vision. The proposed approach is evaluated on two MRI datasets containing 216 and 132 image volumes, achieving a mean Dice similarity coefficient (DSC) of 84.1 ± 4.6% and 85.7 ± 2.3% respectively. In order to demonstrate the universality of the approach, a dataset containing 82 Computed Tomography (CT) image volumes is also evaluated, achieving a mean DSC of 83.1 ± 5.3%. The proposed approach delivers a contribution to computer science (computer vision) in medical image analysis, reporting better quantitative pancreas segmentation results in comparison to other state-of-the-art techniques, and also captures detailed pancreas boundaries as verified by two independent experts in radiology and radiography.
The contributions’ impact can support the use of computational methods in biomedical research with a clinical translation; for example, pancreas volume provides a prognostic biomarker for the severity of type 2 diabetes mellitus. Furthermore, a generalisation of the proposed segmentation approach successfully extends to other anatomical structures, including the kidneys, liver and iliopsoas muscles, using different MRI sequences. Thus, the proposed approach can be incorporated into the development of a computational tool to support radiological interpretation of MRI scans obtained using different sequences by providing a “second opinion”, helping reduce possible misdiagnosis and, consequently, providing enhanced guidance towards targeted treatment planning.