
    Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review

    The medical image analysis field has traditionally focused on the development of organ- and disease-specific methods. Recently, interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine learning-based models. In this paper, we present a review of the state of the art in multi-organ analysis and the associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multiple organs and anatomical structures, from techniques using point distribution models to the most recent deep learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare. Comment: Paper under review
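
    To make the point-distribution-model end of this methodological spectrum concrete, the sketch below builds a toy statistical shape model by running PCA on corresponded, pre-aligned landmark sets; the data, landmark counts, and function names are illustrative placeholders rather than any method from the reviewed literature.

```python
# Toy point-distribution model (PDM): PCA over corresponded 3D landmark sets.
# Illustrative only -- shapes, landmark counts, and names are hypothetical.
import numpy as np

def build_pdm(shapes):
    """shapes: (n_samples, n_landmarks, 3) corresponded, aligned landmarks."""
    n_samples, n_landmarks, _ = shapes.shape
    X = shapes.reshape(n_samples, n_landmarks * 3)   # flatten each shape to a vector
    mean_shape = X.mean(axis=0)
    Xc = X - mean_shape
    # Eigen-decomposition of the sample covariance via SVD.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    eigenvalues = (s ** 2) / (n_samples - 1)
    return mean_shape, Vt, eigenvalues               # rows of Vt are modes of variation

def synthesize(mean_shape, modes, eigenvalues, b):
    """Generate a new shape from mode weights b (in standard deviations)."""
    k = len(b)
    offsets = (b * np.sqrt(eigenvalues[:k])) @ modes[:k]
    return (mean_shape + offsets).reshape(-1, 3)

# Example: 20 training shapes with 50 landmarks each (random stand-ins here).
shapes = np.random.rand(20, 50, 3)
mean_shape, modes, eigenvalues = build_pdm(shapes)
new_shape = synthesize(mean_shape, modes, eigenvalues, b=np.array([1.5, -0.5]))
```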

    Evaluating and Improving 4D-CT Image Segmentation for Lung Cancer Radiotherapy

    Lung cancer is a high-incidence disease with low survival despite surgical advances and concurrent chemo-radiotherapy strategies. Image-guided radiotherapy provides a means of treatment; however, significant challenges exist for imaging, treatment planning, and delivery of radiation due to the influence of respiratory motion. 4D-CT imaging is capable of improving the image quality of thoracic target volumes influenced by respiratory motion. 4D-CT-based treatment planning strategies require highly accurate anatomical segmentation of tumour volumes for radiotherapy treatment plan optimization. Variable segmentation of tumour volumes contributes significantly to uncertainty in radiotherapy planning, owing to a lack of knowledge regarding the exact shape of the lesion and the difficulty of quantifying this variability. As image segmentation is one of the earliest tasks in the radiotherapy process, inherent geometric uncertainties affect subsequent stages, potentially jeopardizing patient outcomes. This work therefore assesses segmentation-related geometric uncertainties in 4D-CT-based lung cancer radiotherapy and suggests strategies for their mitigation at the pre- and post-treatment planning stages.
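
    One common way to quantify the segmentation variability discussed above is pairwise Dice overlap between competing delineations of the same target volume; the minimal sketch below assumes binary masks on a common grid and uses random stand-ins for the contours.

```python
# Quantifying contour variability with pairwise Dice overlap between binary masks.
# A minimal sketch; the masks are random stand-ins for observer/algorithm volumes.
import numpy as np
from itertools import combinations

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

masks = [np.random.rand(64, 64, 32) > 0.5 for _ in range(4)]  # four delineations
pairwise = [dice(a, b) for a, b in combinations(masks, 2)]
print(f"mean pairwise Dice: {np.mean(pairwise):.3f} +/- {np.std(pairwise):.3f}")
```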

    Developments in PET-MRI for Radiotherapy Planning Applications

    The hybridization of magnetic resonance imaging (MRI) and positron emission tomography (PET) provides the benefit of soft-tissue contrast and specific molecular information in a simultaneous acquisition. The applications of PET-MRI in radiotherapy are only starting to be realised. However, the quantitative accuracy of PET relies on accurate attenuation correction (AC) of not only the patient anatomy but also the MRI hardware, and current methods are prone to artefacts caused by dense materials. The quantitative accuracy of PET also relies on full characterization of patient motion during the scan. The simultaneity of PET-MRI makes it especially suited for motion correction; however, quality assurance (QA) procedures for such corrections are lacking. Therefore, a dynamic phantom that is PET and MR compatible is required. Additionally, respiratory motion characterization is needed for conformal radiotherapy of the lung. 4D-CT can provide 3D motion characterization but suffers from poor soft-tissue contrast. In this thesis, I examine these problems and present solutions in the form of improved MR-hardware AC techniques, a PET/MRI/CT-compatible tumour respiratory motion phantom for QA measurements, and a retrospective 4D-PET-MRI technique to characterise respiratory motion.

    Chapter 2 presents two techniques to improve upon current AC methods that use a standard helical CT scan for MRI hardware in PET-MRI. One technique uses a dual-energy computed tomography (DECT) scan to construct virtual monoenergetic image volumes, and the other uses a tomotherapy linear accelerator to create CT images of the RF coil at megavoltage energies (1.0 MV). The DECT-based technique reduced artefacts in the images, translating to improved μ-maps. The MVCT-based technique provided further improvements in artefact reduction, resulting in artefact-free μ-maps. This led to more accurate AC of the breast coil.

    In Chapter 3, I present a PET-MR-CT motion phantom for QA of motion-correction protocols. This phantom is used to evaluate clinically available real-time dynamic MR imaging and a respiratory-triggered PET-MRI protocol. The results show that the protocol performs well under motion conditions. Additionally, the phantom provided a good model for performing QA of respiratory-triggered PET-MRI.

    Chapter 4 presents a 4D-PET-MRI technique using MR sequences and PET acquisition methods currently available on hybrid PET-MRI systems. This technique is validated using the motion phantom presented in Chapter 3 with three motion profiles. I conclude that our 4D-PET-MRI technique provides information to characterise tumour respiratory motion while using a clinically available pulse sequence and PET acquisition method.
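
    For context on CT-derived attenuation correction, the sketch below shows the widely used bilinear conversion from CT numbers to linear attenuation coefficients at 511 keV; the breakpoint and slopes are illustrative approximations and are not the values or methods used in this thesis.

```python
# Bilinear CT-number-to-mu conversion commonly used to build 511 keV attenuation
# maps from CT. The exact breakpoint and slopes are scanner/kVp dependent, so the
# coefficients below are illustrative approximations only.
import numpy as np

MU_WATER_511 = 0.096   # cm^-1, approximate linear attenuation of water at 511 keV
BONE_SLOPE = 0.000040  # cm^-1 per HU above the breakpoint (illustrative)

def hu_to_mu511(hu):
    """Map a CT image in Hounsfield units to linear attenuation at 511 keV."""
    hu = np.asarray(hu, dtype=float)
    mu = np.where(
        hu <= 0,
        MU_WATER_511 * (1.0 + hu / 1000.0),   # air/soft-tissue segment
        MU_WATER_511 + BONE_SLOPE * hu,       # bone-like segment, reduced slope
    )
    return np.clip(mu, 0.0, None)

mu_map = hu_to_mu511(np.array([-1000.0, -50.0, 0.0, 800.0]))  # air, fat, water, bone
```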

    Segmentation of kidney and renal collecting system on 3D computed tomography images

    Surgical training for minimally invasive kidney interventions (MIKI) has huge importance within the urology field. Simulating MIKI in a patient-specific virtual environment can be used for pre-operative planning with the real patient's anatomy, possibly resulting in a reduction of intra-operative medical complications. However, the validated VR simulators perform training on a set of standard models and do not allow patient-specific training. For patient-specific training, the standard simulator would need to be adapted with personalized models, which can be extracted from pre-operative images using segmentation strategies. To date, several methods have been proposed to accurately segment the kidney in computed tomography (CT) images. However, most of these works focused on kidney segmentation only, neglecting the extraction of its internal compartments. In this work, we propose to adapt a coupled formulation of the B-Spline Explicit Active Surfaces (BEAS) framework to simultaneously segment the kidney and the renal collecting system (CS) from CT images. Moreover, from the difference between the kidney and CS segmentations, the renal parenchyma can also be extracted. The segmentation process is guided by a new energy functional that combines both gradient- and region-based energies. The method was evaluated on 10 kidneys from 5 CT datasets with different image properties. Overall, the results demonstrate the accuracy of the proposed strategy, with Dice overlaps of 92.5%, 86.9% and 63.5%, and point-to-surface errors of around 1.6 mm, 1.9 mm and 4 mm for the kidney, renal parenchyma and CS, respectively.

    Funding: NORTE-01-0145-FEDER0000I3 and NORTE-01-0145-FEDER-024300, supported by the Northern Portugal Regional Operational Programme (Norte2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (FEDER); FEDER funds through the Competitiveness Factors Operational Programme (COMPETE); and national funds through FCT - Fundação para a Ciência e a Tecnologia, under the scope of project POCI-01-0145-FEDER-007038. The authors acknowledge FCT - Fundação para a Ciência e a Tecnologia, Portugal, and the European Social Fund, European Union, for funding support through the Programa Operacional Capital Humano (POCH).
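
    As a rough illustration of an energy that mixes region- and gradient-based terms, the sketch below scores a candidate binary mask with a Chan-Vese-style region term plus an edge term along the mask boundary; it is a simplified stand-in, not the coupled BEAS formulation proposed in the paper, and the weights and images are illustrative.

```python
# A simplified hybrid energy mixing a Chan-Vese-style region term with a
# gradient-based edge term, loosely in the spirit of combined energies used in
# active-surface segmentation. All weights and images are illustrative.
import numpy as np
from scipy import ndimage

def hybrid_energy(image, mask, alpha=0.5):
    """Lower is better: alpha * region term + (1 - alpha) * edge term."""
    inside, outside = image[mask], image[~mask]
    region = ((inside - inside.mean()) ** 2).sum() + ((outside - outside.mean()) ** 2).sum()

    # Edge term: reward boundaries sitting on strong image gradients.
    gx, gy = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    boundary = mask ^ ndimage.binary_erosion(mask)   # one-pixel contour band
    edge = -grad_mag[boundary].mean() if boundary.any() else 0.0

    return alpha * region + (1.0 - alpha) * edge

image = np.random.rand(128, 128)
mask = np.zeros_like(image, dtype=bool)
mask[40:90, 40:90] = True
print(hybrid_energy(image, mask))
```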

    A Study on Statistical Methods for the Segmentation of Multiple Objects in Abdominal CT Images

    Computer-aided diagnosis (CAD) is the use of computer-generated output as an auxiliary tool to assist efficient interpretation and accurate diagnosis. Medical image segmentation plays an essential role in CAD in clinical applications. Generally, the task of medical image segmentation involves multiple objects, such as organs or diffuse tumor regions. Moreover, it is very challenging to segment these regions from abdominal computed tomography (CT) images because of the overlap in intensity and the variability in position and shape of soft tissues. In this thesis, a progressive segmentation framework is proposed to extract liver and tumor regions from CT images more efficiently, comprising the steps of coarse multi-organ segmentation, fine segmentation, and liver tumor segmentation.

    Benefiting from prior knowledge of the shape and its deformation, a statistical shape model (SSM) is first used to segment multiple organ regions robustly. In the process of building an SSM, the correspondence of landmarks is crucial to the quality of the model. To generate a more representative prototype of the organ surface, a k-means clustering method is proposed. The quality of the SSMs, measured by generalization ability, specificity, and compactness, was improved. We further extend the shape correspondence to multiple objects. A non-rigid iterative closest point surface registration process is proposed to find more properly corresponded landmarks across the multi-organ surfaces, improving the accuracy of surface registration as well as the model quality. Moreover, to localize the abdominal organs simultaneously, we propose a random forest regressor operating on intensity features to predict the positions of multiple organs in the CT image. The regions of the organs are substantially constrained using the trained shape models, and the accuracy of coarse segmentation using SSMs was increased by this initial information on organ positions.

    Subsequently, a pixel-wise segmentation based on the classification of supervoxels is applied for the fine segmentation of multiple organs. Intensity and spatial features are extracted from each supervoxel and classified by a trained random forest. The resulting boundaries are closer to the real organs than those of the preceding coarse segmentation. Finally, we developed a hybrid framework for liver tumor segmentation in multiphase images. To distinguish and delineate tumor regions from peripheral tissues, this task is accomplished in two steps: a cascade region-based convolutional neural network (R-CNN) with a refined head is trained to locate the bounding boxes that contain tumors, and phase-sensitive noise filtering is introduced to refine the subsequent segmentation of tumor regions, conducted by a level-set-based framework. The tumor detection results show that adjacent tumors are successfully separated by the improved cascade R-CNN, and the accuracy of tumor segmentation is also improved by the proposed method. Twenty-six cases of multi-phase CT images were used to validate the proposed method for the segmentation of liver tumors. The average precision and recall rates for tumor detection are 76.8% and 84.4%, respectively. The intersection over union, true positive rate, and false positive rate for tumor segmentation are 72.7%, 76.2%, and 4.75%, respectively.

    Doctoral dissertation, Kyushu Institute of Technology (degree no. 工博甲第546号, conferred 25 March 2022). Contents: 1 Introduction | 2 Literature Review | 3 Statistical Shape Model Building | 4 Multi-organ Segmentation | 5 Liver Tumors Segmentation | 6 Summary and Outlook.
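
    As a minimal illustration of the organ-localization step, the sketch below trains a random forest regressor on crude global intensity features to predict an organ centroid; the features, array sizes, and training data are hypothetical placeholders, not the pipeline described in the thesis.

```python
# Sketch of organ localization with a random forest regressor: simple intensity
# features per CT volume predict a 3D organ centroid. Feature choices and data
# here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def intensity_features(volume, n_bins=16):
    """Crude global features: an intensity histogram plus mean/std."""
    hist, _ = np.histogram(volume, bins=n_bins, range=(-1000, 1000), density=True)
    return np.concatenate([hist, [volume.mean(), volume.std()]])

rng = np.random.default_rng(0)
volumes = [rng.normal(0, 300, size=(32, 32, 32)) for _ in range(20)]   # stand-in CTs
centroids = rng.uniform(0, 32, size=(20, 3))                           # stand-in labels

X = np.stack([intensity_features(v) for v in volumes])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, centroids)
predicted_centre = model.predict(X[:1])   # (1, 3) predicted organ centre in voxels
```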

    Multi-Atlas Segmentation of Biomedical Images: A Survey

    Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing…
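
    For readers unfamiliar with MAS, the simplest label-fusion rule it builds on is per-voxel majority voting over registered atlas label maps, sketched below with random stand-in data.

```python
# Majority-vote label fusion, the simplest MAS combination rule: each registered
# atlas casts a per-voxel vote for a label, and the most frequent label wins.
import numpy as np

def majority_vote(atlas_labels):
    """atlas_labels: list of integer label maps already warped into the target
    space; returns the per-voxel modal label."""
    stacked = np.stack(atlas_labels)                    # (n_atlases, X, Y, Z)
    n_labels = int(stacked.max()) + 1
    votes = np.stack([(stacked == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)

atlases = [np.random.randint(0, 3, size=(16, 16, 16)) for _ in range(5)]
fused = majority_vote(atlases)
```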

    Improving the Accuracy of CT-derived Attenuation Correction in Respiratory-Gated PET/CT Imaging

    The effect of respiratory motion on attenuation correction in Fludeoxyglucose (18F) positron emission tomography (FDG-PET) was investigated. Improvements to the accuracy of computed tomography (CT) derived attenuation correction were obtained through the alignment of the attenuation map to each emission image in a respiratory-gated PET scan. Attenuation misalignment leads to artefacts in the reconstructed PET image, and several methods were devised for evaluating the attenuation inaccuracies caused by this. These methods of evaluation were extended to finding the frame in the respiratory-gated PET that best matched the CT. This frame was then used as a reference frame in mono-modality compensation for misalignment. Attenuation correction was found to affect the quantification of tumour volumes; thus a regional analysis was used to evaluate the impact of mismatch and the benefits of compensating for misalignment. Deformable image registration was used to compensate for misalignment; however, there were inaccuracies caused by the poor signal-to-noise ratio (SNR) in PET images. Two models were developed that were robust to a poor SNR, allowing for the estimation of deformation from very noisy images. Firstly, a cross-population model was developed by statistically analysing the respiratory motion in 10 4D-CT scans. Secondly, a 1D model of respiration was developed based on the physiological function of respiration. The 1D approach correctly modelled the expansion and contraction of the lungs and the differences in the compressibility of the lungs and surrounding tissues. Several additional models were considered but were ruled out based on their poor goodness of fit to 4D-CT scans. Approaches to evaluating the developed models were also used to assist in optimising for the most accurate attenuation correction. It was found that multimodality registration of the CT image to the PET image was the most accurate approach to compensating for attenuation correction mismatch. Mono-modality image registration was found to be the least accurate approach; however, incorporating a motion model improved the accuracy of image registration. The significance of these findings is twofold. Firstly, it was found that motion models are required to improve the accuracy in compensating for attenuation correction mismatch; secondly, a validation method was found for comparing approaches to compensating for attenuation mismatch.
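
    As an illustration of matching a gated PET frame to the CT, the sketch below scores each frame against a reference volume with normalized cross-correlation and keeps the best match; the metric and data are illustrative, not the evaluation measures developed in the thesis.

```python
# Picking the respiratory-gated frame that best matches a reference image by
# normalized cross-correlation. The metric and random volumes are illustrative.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two volumes of equal shape."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

reference = np.random.rand(32, 32, 32)                  # e.g. CT-derived map
gated_frames = [np.random.rand(32, 32, 32) for _ in range(8)]
scores = [ncc(reference, frame) for frame in gated_frames]
best_frame_index = int(np.argmax(scores))
```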