
    RibSeg v2: A Large-scale Benchmark for Rib Labeling and Anatomical Centerline Extraction

    Automatic rib labeling and anatomical centerline extraction are common prerequisites for various clinical applications. Prior studies either use in-house datasets that are inaccessible to the community, or focus on rib segmentation while neglecting the clinical significance of rib labeling. To address these issues, we extend our prior dataset (RibSeg) for binary rib segmentation into a comprehensive benchmark, named RibSeg v2, with 660 CT scans (15,466 individual ribs in total) and annotations manually inspected by experts for rib labeling and anatomical centerline extraction. Based on RibSeg v2, we develop a pipeline that includes deep learning-based methods for rib labeling and a skeletonization-based method for centerline extraction. To improve computational efficiency, we propose a sparse point cloud representation of CT scans and compare it with standard dense voxel grids. Moreover, we design and analyze evaluation metrics to address the key challenges of each task. Our dataset, code, and models are available online to facilitate open research at https://github.com/M3DV/RibSeg
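The sparse point-cloud representation described above can be sketched in a few lines: voxels above a bone-density threshold become coordinates, and the cloud is subsampled to a fixed budget. The 200 HU threshold and the subsampling budget below are illustrative assumptions, not the values used by RibSeg v2.

```python
import numpy as np

def ct_to_point_cloud(volume, spacing=(1.0, 1.0, 1.0),
                      hu_threshold=200.0, max_points=30000, seed=0):
    """Convert a dense CT volume to a sparse point cloud of bone-like voxels.

    volume  : 3D array of Hounsfield units, axis order (z, y, x)
    spacing : physical voxel spacing in mm, used to scale indices to world coords
    """
    idx = np.argwhere(volume > hu_threshold)               # (N, 3) voxel indices
    points = idx.astype(np.float64) * np.asarray(spacing)  # world coordinates
    if len(points) > max_points:                           # uniform subsampling
        rng = np.random.default_rng(seed)
        points = points[rng.choice(len(points), max_points, replace=False)]
    return points

# toy volume: an 8-voxel "bone" cube embedded in soft tissue
vol = np.full((16, 16, 16), 40.0)
vol[4:6, 4:6, 4:6] = 400.0
pts = ct_to_point_cloud(vol)
```

Compared with a dense 16^3 grid, only the 8 bone voxels survive, which is the efficiency argument the abstract makes at CT scale.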

    Med-Query: Steerable Parsing of 9-DoF Medical Anatomies with Query Embedding

    Automatic instance-level parsing of human anatomies from 3D computed tomography (CT) scans is a prerequisite step for many clinical applications. The presence of pathologies, broken structures, or a limited field-of-view (FOV) can all make anatomy parsing algorithms vulnerable. In this work, we explore how to exploit the successful detection-then-segmentation paradigm in 3D medical data, and propose a steerable, robust, and efficient computing framework for detection, identification, and segmentation of anatomies in CT scans. Considering the complicated shapes, sizes, and orientations of anatomies, and without loss of generality, we present a nine degrees-of-freedom (9-DoF) pose estimation solution in full 3D space using a novel single-stage, non-hierarchical forward representation. The whole framework is executed in a steerable manner: any anatomy of interest can be retrieved directly, further boosting inference efficiency. We have validated the proposed method on three medical imaging parsing tasks covering ribs, the spine, and abdominal organs. For rib parsing, CT scans have been annotated at the rib instance level for quantitative evaluation, and similarly for spine vertebrae and abdominal organs. Extensive experiments on 9-DoF box detection and rib instance segmentation demonstrate the effectiveness and high efficiency of our framework (identification rate of 97.0% and segmentation Dice score of 90.9%), which compares favorably against several strong baselines (e.g., CenterNet, FCOS, and nnU-Net). For spine identification and segmentation, our method achieves a new state-of-the-art result on the public CTSpine1K dataset. Finally, we report highly competitive results in multi-organ segmentation in the FLARE22 competition. Our annotations, code, and models will be made publicly available at https://github.com/alibaba-damo-academy/Med_Query
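A 9-DoF box means three translations, three rotation angles, and three per-axis scales. As a hedged illustration of the parameterization (not Med-Query's exact convention: the ZYX Euler order and parameter ordering below are assumptions), the nine numbers compose into a single affine transform mapping box-local coordinates to world coordinates:

```python
import numpy as np

def pose_to_affine(t, angles, s):
    """Compose a 9-DoF pose (translation t, Euler angles, per-axis scales s)
    into a 4x4 affine mapping box-local coordinates to world coordinates."""
    ax, ay, az = angles
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx                       # ZYX Euler rotation
    A = np.eye(4)
    A[:3, :3] = R * np.asarray(s)          # scale each local axis, then rotate
    A[:3, 3] = t                           # translate box center
    return A

# identity rotation: a unit box corner (1, 1, 1) is scaled then translated
A = pose_to_affine(t=[10, 20, 30], angles=[0, 0, 0], s=[2, 3, 4])
corner = A @ np.array([1.0, 1.0, 1.0, 1.0])
```

Predicting these nine numbers per anatomy is what lets the framework retrieve any structure of interest directly, rather than exhaustively segmenting everything.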

    Adjusting the Ground Truth Annotations for Connectivity-Based Learning to Delineate

    Deep learning-based approaches to delineating 3D structures depend on accurate annotations to train the networks. Yet, in practice, people, no matter how conscientious, have trouble delineating precisely in 3D and at large scale, in part because the data are often hard to interpret visually and in part because 3D interfaces are awkward to use. In this paper, we introduce a method that explicitly accounts for annotation inaccuracies. To this end, we treat the annotations as active contour models that can deform themselves while preserving their topology. This enables us to jointly train the network and correct potential errors in the original annotations. The result is an approach that boosts the performance of deep networks trained with potentially inaccurate annotations.

    Automatic Segmentation of the Heart from Cardiac Computed Tomography Images Using a Gradient-Assisted Localized Active Contour Model

    Doctoral dissertation, Department of Electrical and Computer Engineering, Seoul National University Graduate School, February 2015. Advisor: Yeong-Gil Shin. The heart is one of the most important human organs and is composed of complex structures. Computed tomography angiography (CTA), magnetic resonance imaging (MRI), and single photon emission computed tomography are widely used, non-invasive cardiac imaging modalities. Compared with other modalities, CTA can provide more detailed anatomic information about the heart chambers, vessels, and coronary arteries due to its higher spatial resolution. To obtain important morphological information about the heart, whole-heart segmentation is necessary, and it can be used for clinical diagnosis. In this paper, we propose a novel framework to segment the four chambers of the heart automatically. First, the whole heart is coarsely extracted. It is then separated into left and right parts using a geometric analysis based on anatomical information and a subsequent power watershed. Next, the proposed gradient-assisted localized active contour model (GLACM) accurately refines the segmentation of the left and right sides of the heart. Finally, the left and right sides of the heart are each separated into atrium and ventricle by minimizing the proposed split energy function, which determines the boundary between atrium and ventricle based on the shape and intensity of the heart. The main challenge of heart segmentation is to extract the four chambers from cardiac CTA, in which edges and separators are weak. To enhance accuracy, we use both region-based and edge-based information for robustness in heterogeneous regions. Model-based methods, which require a large amount of training data and a proper template model, have been widely used for heart segmentation. Modeling such data is difficult, since the training data must describe precise heart regions and must be numerous in order to produce accurate segmentation results.
Besides, the training data must be represented by distinctive features, which are generated by manual setting, and these features must correspond to one another. In our proposed method, however, neither training data nor a template model is necessary. Instead, we use edge, intensity, and shape information from cardiac CTA for each chamber segmentation; the intensity information of CTA can substitute for the shape information of a template model. In addition, we devised an adaptive radius function and a Gaussian-pyramid edge map for GLACM in order to utilize edge information effectively and improve segmentation accuracy compared with the original localizing region-based active contour model (LACM). Since the radius of LACM affects overall segmentation performance, we propose an energy function that changes the radius adaptively depending on whether the region is homogeneous or heterogeneous. We also propose a split energy function that separates the four chambers of the heart in cardiac CT images and detects the valve between atrium and ventricle. In experiments on twenty clinical datasets, the proposed method identified the four chambers accurately and efficiently.
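The idea behind the split energy can be illustrated with a 1D toy: slices are sampled along a principal axis of the heart, and the separating plane is placed where the mean intensity dips, since the valve plane is darker than the contrast-filled chambers. The energy below is a deliberately simplified stand-in for the dissertation's shape-and-intensity term.

```python
import numpy as np

def split_plane_index(mean_intensity, smooth=3):
    """Pick the slice index minimizing a smoothed mean-intensity profile.

    mean_intensity: 1D array, average CT value of each slice along the axis.
    """
    kernel = np.ones(smooth) / smooth
    profile = np.convolve(mean_intensity, kernel, mode="same")  # denoise
    interior = slice(smooth, len(profile) - smooth)  # skip boundary effects
    return smooth + int(np.argmin(profile[interior]))

# two bright chambers (~300 HU with contrast) with a darker valve region
profile = np.concatenate([np.full(10, 300.0),
                          np.full(3, 120.0),
                          np.full(10, 280.0)])
k = split_plane_index(profile)
```

The returned index sits inside the dark gap, i.e. where the atrium/ventricle boundary plane would be placed in this toy.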
We also demonstrated that this approach can assist cardiologists in clinical investigations and functional analysis.
Contents:
Chapter 1 Introduction: 1.1 Background and Motivation; 1.2 Dissertation Goal; 1.3 Main Contributions; 1.4 Organization of the Dissertation
Chapter 2 Related Works: 2.1 Medical Image Segmentation (2.1.1 Classic Methods; 2.1.2 Variational Methods; 2.1.3 Image Features of the Curve; 2.1.4 Combinatorial Methods; 2.1.5 Difficulty of Segmentation); 2.2 Heart Segmentation (2.2.1 Non-Model-Based Segmentation; 2.2.2 Unstatistical Model-Based Segmentation; 2.2.3 Statistical Model-Based Segmentation)
Chapter 3 Gradient-assisted Localized Active Contour Model: 3.1 LACM; 3.2 Gaussian-pyramid Edge Map; 3.3 Adaptive Radius Function; 3.4 LACM with Gaussian-pyramid Edge Map and Adaptive Radius Function
Chapter 4 Segmentation of Four Chambers of Heart: 4.1 Overview; 4.2 Segmentation of Whole Heart; 4.3 Separation of Left and Right Sides of Heart (4.3.1 Extraction of Candidate Regions of LV and RV; 4.3.2 Detection of Left and Right Sides of Heart); 4.4 Segmentation of Left and Right Sides of Heart; 4.5 Separation of Atrium and Ventricle from Heart (4.5.1 Calculation of Principal Axes of Left and Right Sides of Heart; 4.5.2 Detection of Separation Plane Using Split Energy Function)
Chapter 5 Experiments: 5.1 Performance Evaluation; 5.2 Comparison with Conventional Method; 5.3 Parametric Study; 5.4 Computational Performance
Chapter 6 Conclusion
Bibliography

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring, as well as for assessing treatment effectiveness. Medical images provide a very large amount of valuable information, too large to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools by the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA: in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy.
This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases, used to estimate elasticity, ventilation, and texture features, provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also supports the evaluation and gated control of radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, radiation-induced lung injury detection is introduced, which combines the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality feature calculations are based on the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage, enabling earlier intervention.
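The ventilation and elasticity descriptors reduce to standard calculus on the deformation field: the Jacobian determinant of the mapping phi(x) = x + u(x) measures local volume change (values above 1 mean expansion), and the symmetric part of the displacement gradient gives the small-strain tensor. A minimal numpy sketch, assuming unit voxel spacing:

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of phi(x) = x + u(x).

    disp: displacement field of shape (3, Z, Y, X).
    Values > 1 indicate local expansion (inhalation), < 1 compression.
    """
    # grads[c, d] = d u_c / d x_d at every voxel
    grads = np.stack([np.stack(np.gradient(disp[c]), axis=0) for c in range(3)])
    J = grads + np.eye(3)[:, :, None, None, None]   # d(phi)/dx = I + du/dx
    return (J[0, 0] * (J[1, 1] * J[2, 2] - J[1, 2] * J[2, 1])
          - J[0, 1] * (J[1, 0] * J[2, 2] - J[1, 2] * J[2, 0])
          + J[0, 2] * (J[1, 0] * J[2, 1] - J[1, 1] * J[2, 0]))

def strain_tensor(disp):
    """Small-strain tensor e = (grad u + grad u^T) / 2, shape (3, 3, Z, Y, X)."""
    g = np.stack([np.stack(np.gradient(disp[c]), axis=0) for c in range(3)])
    return 0.5 * (g + np.swapaxes(g, 0, 1))

# sanity check: zero displacement means no volume change and no strain
disp = np.zeros((3, 4, 4, 4))
det = jacobian_determinant(disp)
```

In the dissertation's setting, `disp` would come from the 4D-CT registration between respiratory phases; here it is a zero field purely to keep the sketch self-contained.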

    Beyond the Pixel-Wise Loss for Topology-Aware Delineation

    Delineation of curvilinear structures is an important problem in Computer Vision with multiple practical applications. With the advent of Deep Learning, many current approaches to automatic delineation have focused on finding more powerful deep architectures, but have continued to use conventional pixel-wise losses such as binary cross-entropy. In this paper we argue that pixel-wise losses alone are unsuitable for this problem because of their inability to reflect the topological impact of mistakes in the final prediction. We propose a new loss term that is aware of the higher-order topological features of linear structures. We also exploit a refinement pipeline that iteratively applies the same model to the previous delineation to refine the predictions at each step, while keeping the number of parameters and the complexity of the model constant. When combined with the standard pixel-wise loss, both our new loss term and the iterative refinement boost the quality of the predicted delineations, in some cases almost doubling the accuracy compared with the same classifier trained with binary cross-entropy alone. We show that our approach outperforms state-of-the-art methods on a wide range of data, from microscopy to aerial images.
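The structure of such a topology-aware loss can be sketched without any deep learning machinery: pixel-wise BCE plus an L2 penalty between filter responses of the prediction and the ground truth. The paper compares pretrained CNN feature maps; the tiny hand-made filter bank below is a stand-in for those features, so this is only an illustration of the loss's shape, not the authors' implementation.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy over all pixels."""
    p = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def feature_maps(img, filters):
    """Valid-mode convolution of img with each filter (naive loops)."""
    out = []
    for f in filters:
        fh, fw = f.shape
        resp = np.zeros((img.shape[0] - fh + 1, img.shape[1] - fw + 1))
        for i in range(resp.shape[0]):
            for j in range(resp.shape[1]):
                resp[i, j] = np.sum(img[i:i+fh, j:j+fw] * f)
        out.append(resp)
    return out

def topology_aware_loss(pred, target, filters, mu=0.1):
    """Pixel-wise BCE plus an L2 term over filter responses, standing in
    for the higher-order topology term of the paper."""
    topo = sum(np.mean((fp - ft) ** 2)
               for fp, ft in zip(feature_maps(pred, filters),
                                 feature_maps(target, filters)))
    return bce(pred, target) + mu * topo

# oriented line detectors as a toy filter bank
filters = [np.array([[1., 1., 1.], [0., 0., 0.], [-1., -1., -1.]]),
           np.array([[1., 0., -1.], [1., 0., -1.], [1., 0., -1.]])]
target = np.zeros((8, 8)); target[4, :] = 1.0     # a horizontal line
perfect = topology_aware_loss(target, target, filters)
broken = target.copy(); broken[4, 4] = 0.0        # one-pixel gap in the line
worse = topology_aware_loss(broken, target, filters)
```

A one-pixel gap is penalized both by BCE and by the response-map term, which is the intuition behind weighting topological mistakes more than isolated pixel errors.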

    Coronary motion modelling for CTA to X-ray angiography registration
