Cloud-Based Benchmarking of Medical Image Analysis
Medical imaging …
Combining Shape and Learning for Medical Image Analysis
Automatic methods that can make accurate, fast and robust assessments of medical images are in high demand in medical research and clinical care. An excellent automatic algorithm is characterized by speed, allowing for scalability, and by an accuracy comparable to that of an expert radiologist. It should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods meet these requirements.

The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is an important sub-routine in many image analysis tools as well as in image fusion, disease progression tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. computer-aided diagnosis and surgery.

The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities: pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound, and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.
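The abstract above mentions multi-atlas segmentation among the shape-modelling techniques. As a reference point, its simplest fusion rule is per-voxel majority voting over label maps that have already been registered to the target image. The sketch below illustrates that rule only; the function name and interface are illustrative, not taken from the thesis.

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Per-voxel majority voting, the simplest multi-atlas fusion rule.

    atlas_labels: shape (n_atlases, *volume_shape), integer label maps
    assumed to be already registered (warped) into the target space.
    Illustrative sketch; name and interface are not from the thesis.
    """
    atlas_labels = np.asarray(atlas_labels)
    n_labels = int(atlas_labels.max()) + 1
    # Count, for every voxel, how many atlases vote for each label,
    # then pick the label with the most votes.
    votes = np.stack([(atlas_labels == lab).sum(axis=0)
                      for lab in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 2x2 "atlases" voting on a binary label map.
atlases = [[[0, 1], [1, 1]],
           [[0, 1], [0, 1]],
           [[1, 1], [0, 0]]]
fused = majority_vote_fusion(atlases)  # majority label at each voxel
```

More refined variants weight each atlas vote by local image similarity, but the voting core stays the same.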
3D Medical Image Segmentation based on multi-scale MPU-Net
The high cure rate of cancer is inextricably linked to physicians' accuracy
in diagnosis and treatment; a model that can achieve high-precision tumor
segmentation has therefore become a necessity in many medical applications.
Such a model can effectively lower the rate of misdiagnosis while
considerably lessening the burden on clinicians. However, fully automated
segmentation of target organs is challenging due to the irregular
three-dimensional structure of organs in volumetric images. U-Net excels as
a basic model for this class of applications: it can learn certain global
and local features, but it still lacks the capacity to grasp spatial
long-range relationships and contextual information at multiple scales.
This paper proposes MPU-Net, a tumor segmentation model for volumetric
patient CT images, inspired by the Transformer's global attention mechanism.
By combining image serialization with a Position Attention Module, the model
attempts to capture deeper contextual dependencies and achieve precise
localization. Each layer of the decoder is also equipped with a multi-scale
module and a cross-attention mechanism, enhancing feature extraction and
integration at different levels, and the hybrid loss function developed in
this study better exploits high-resolution feature information. The proposed
architecture is evaluated on the Liver Tumor Segmentation Challenge 2017
(LiTS 2017) dataset. Compared with the benchmark U-Net, MPU-Net shows
excellent segmentation results: the Dice, accuracy, precision, specificity,
IoU, and MCC scores of the best model are 92.17%, 99.08%, 91.91%, 99.52%,
85.91%, and 91.74%, respectively. These strong results across all metrics
illustrate the framework's exceptional performance in automatic medical
image segmentation. Comment: 37 pages
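The metrics reported above (Dice, accuracy, precision, specificity, IoU, MCC) all have standard definitions in terms of the confusion-matrix counts of a binary mask. As a reference, a minimal sketch computing them from predicted and ground-truth masks (not code from the paper):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Standard overlap metrics for binary segmentation masks.

    pred, target: array-likes of the same shape, nonzero = foreground.
    Returns Dice, IoU, accuracy, precision, specificity and MCC.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    # Confusion-matrix counts (cast to float to avoid integer overflow
    # in the MCC denominator on large volumes).
    tp = float(np.sum(pred & target))
    tn = float(np.sum(~pred & ~target))
    fp = float(np.sum(pred & ~target))
    fn = float(np.sum(~pred & target))
    eps = 1e-12  # guards against division by zero on empty masks
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "iou": tp / (tp + fp + fn + eps),
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "specificity": tn / (tn + fp + eps),
        "mcc": (tp * tn - fp * fn)
               / (np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) + eps),
    }

# Toy example: flattened 4-voxel masks.
m = segmentation_metrics([1, 1, 0, 0], [1, 0, 1, 0])
```

Note that Dice is always at least as large as IoU on the same masks, which is why papers commonly report both.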
CancerUniT: Towards a Single Unified Model for Effective Detection, Segmentation, and Diagnosis of Eight Major Cancers Using a Large Collection of CT Scans
Human readers or radiologists routinely perform full-body, multi-organ,
multi-disease detection and diagnosis in clinical practice, while most
medical AI systems are built to focus on a single organ with a narrow list
of a few diseases. This might severely limit AI's clinical adoption: a
considerable number of AI models would have to be assembled, non-trivially,
to match the diagnostic process of a human reading a CT scan. In this paper,
we construct a Unified Tumor Transformer (CancerUniT) model to jointly
detect tumor existence and location and diagnose tumor characteristics for
eight major cancers in CT scans. CancerUniT is a query-based Mask
Transformer model that outputs multi-tumor predictions. We decouple the
object queries into organ queries, tumor detection queries and tumor
diagnosis queries, and further establish hierarchical relationships among
the three groups. This clinically inspired architecture effectively assists
inter- and intra-organ representation learning of tumors and facilitates
these complex, anatomically related multi-organ cancer image reading tasks.
CancerUniT is trained end-to-end on a curated large-scale set of CT images
from 10,042 patients covering eight major cancer types as well as non-cancer
tumors (all pathology-confirmed, with 3D tumor masks annotated by
radiologists). On a test set of 631 patients, CancerUniT demonstrates strong
performance under a set of clinically relevant evaluation metrics,
substantially outperforming both multi-disease methods and an assembly of
eight single-organ expert models in tumor detection, segmentation, and
diagnosis. This moves one step closer towards a universal high-performance
cancer screening tool. Comment: ICCV 2023 Camera-Ready Version
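The decoupling of object queries into organ, detection, and diagnosis groups with hierarchical links can be pictured as three query tensors plus parent-index arrays. The sketch below is purely illustrative: the query counts, embedding dimension, and the additive parent-conditioning are assumptions for exposition, not details from the paper.

```python
import numpy as np

# Illustrative sketch only: sizes, names, and the conditioning scheme
# are assumptions, not taken from the CancerUniT paper.
N_ORGANS = 8        # one organ query per major cancer site
DETS_PER_ORGAN = 1  # tumor-detection queries attached to each organ
DIAGS_PER_DET = 1   # diagnosis queries attached to each detection
DIM = 256           # query embedding dimension (assumed)

rng = np.random.default_rng(0)
organ_q = rng.standard_normal((N_ORGANS, DIM))
detect_q = rng.standard_normal((N_ORGANS * DETS_PER_ORGAN, DIM))
diagn_q = rng.standard_normal((len(detect_q) * DIAGS_PER_DET, DIM))

# Encode the hierarchy as parent indices: each detection query points
# to its organ query, each diagnosis query to its detection query.
det_parent = np.repeat(np.arange(N_ORGANS), DETS_PER_ORGAN)
diag_parent = np.repeat(np.arange(len(detect_q)), DIAGS_PER_DET)

# One simple way to use the hierarchy: condition children on parents
# by adding the parent embedding before decoding (an assumption here).
detect_q = detect_q + organ_q[det_parent]
diagn_q = diagn_q + detect_q[diag_parent]
queries = np.concatenate([organ_q, detect_q, diagn_q])  # to the decoder
```

The point of the sketch is only the structure: child queries carry an explicit link to their parent, so predictions stay anatomically grouped per organ.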