
    An Adaptive Sampling Scheme to Efficiently Train Fully Convolutional Networks for Semantic Segmentation

    Full text link
    Deep convolutional neural networks (CNNs) have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks on large and sparse datasets is still challenging and can require large amounts of computation and memory. In this work, we address the task of performing semantic segmentation on large datasets, such as three-dimensional medical images. We propose an adaptive sampling scheme that uses a posteriori error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. Our contribution is threefold: 1) we give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images; 2) we propose a deep dual-path CNN that captures information at fine and coarse scales, resulting in a network with a large field of view and high-resolution outputs; 3) we show that our method is able to attain new state-of-the-art results on the VISCERAL Anatomy benchmark
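    The error-map-driven sampling described in this abstract can be sketched as follows. This is a minimal illustration under assumed interfaces (the function name, the flat weighted-sampling strategy, and the toy error map are assumptions, not the paper's implementation):

```python
import numpy as np

def sample_patch_centers(error_map, n_samples, rng=None):
    """Draw patch-center indices with probability proportional to the
    current error, so training focuses on difficult regions."""
    rng = np.random.default_rng(rng)
    # Flatten the error map into a sampling distribution.
    weights = error_map.ravel().astype(np.float64)
    weights = np.clip(weights, 1e-8, None)   # keep every voxel reachable
    probs = weights / weights.sum()
    flat_idx = rng.choice(probs.size, size=n_samples, p=probs)
    # Convert flat indices back to (z, y, x) coordinates.
    return np.unravel_index(flat_idx, error_map.shape)

# Toy 3-D error map: one high-error voxel dominates the sampling.
err = np.full((4, 4, 4), 0.01)
err[2, 3, 1] = 10.0
z, y, x = sample_patch_centers(err, n_samples=100, rng=0)
```

    Because the sampling distribution is re-derived from the error map, regenerating the map throughout training automatically shifts patches toward regions the network still gets wrong.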

    Keypoint Transfer for Fast Whole-Body Segmentation

    Full text link
    We introduce an approach for image segmentation based on sparse correspondences between keypoints in testing and training images. Keypoints represent automatically identified distinctive image locations, where each keypoint correspondence suggests a transformation between images. We use these correspondences to transfer label maps of entire organs from the training images to the test image. The keypoint transfer algorithm includes three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ segmentations. We report segmentation results for abdominal organs in whole-body CT and MRI, as well as in contrast-enhanced CT and MRI. Our method offers a speed-up of about three orders of magnitude in comparison to common multi-atlas segmentation, while achieving an accuracy that compares favorably. Moreover, keypoint transfer does not require registration to an atlas or a training phase. Finally, the method allows for the segmentation of scans with a highly variable field of view. Comment: Accepted for publication at IEEE Transactions on Medical Imaging
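    Step (ii), voting-based keypoint labeling, can be sketched as a similarity-weighted majority vote. This is a hypothetical sketch (the function name, the `matches` data layout, and the weighting are assumptions, not the authors' code):

```python
from collections import defaultdict

def label_keypoints_by_voting(matches):
    """Each test keypoint receives the organ label that accumulates the
    most similarity-weighted votes from its matched training keypoints.
    `matches` maps test-keypoint id -> list of (train_label, similarity)."""
    labels = {}
    for kp_id, votes in matches.items():
        tally = defaultdict(float)
        for train_label, similarity in votes:
            tally[train_label] += similarity
        labels[kp_id] = max(tally, key=tally.get)
    return labels

# Toy example: two test keypoints with matched training keypoints.
matches = {
    0: [("liver", 0.9), ("liver", 0.7), ("spleen", 0.4)],
    1: [("spleen", 0.8), ("liver", 0.3)],
}
labels = label_keypoints_by_voting(matches)
```

    The labeled keypoints would then anchor step (iii), the probabilistic transfer of whole-organ label maps.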

    Computational methods for the analysis of functional 4D-CT chest images.

    Get PDF
    Medical imaging is an important emerging technology that has been used intensively over the last few decades for disease diagnosis and monitoring, as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too large to be exploited by radiologists and physicians alone. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA: in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer, which is diagnosed through the detection of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of lung cancer is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment morbidity and mortality. Finding ways to accurately detect, at an early stage, and hence prevent lung injury will have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypotheses that irradiated lung tissues may be affected and suffer decreased functionality as a side effect of radiation therapy.
    These hypotheses have been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases, used to estimate elasticity, ventilation, and texture features, provide discriminatory descriptors for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that functionality features can be accurately extracted for the lung fields. The developed registration framework also helps in the evaluation and gated control of radiotherapy through motion estimation analysis before and after the therapy dose.
    Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with feature estimation and classification. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov–Gibbs random field (MGRF) model that accurately captures the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage, enabling earlier intervention.
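    The Jacobian-based ventilation measure mentioned above can be sketched as follows: for a displacement field u(x), the determinant of I + du/dx gives the local volume change between respiratory phases. This is a minimal sketch under assumed conventions (the function name, array layout, and unit voxel spacing are assumptions, not the dissertation's implementation):

```python
import numpy as np

def ventilation_map(disp):
    """Voxel-wise ventilation surrogate from a 3-D displacement field.
    `disp` has shape (3, Z, Y, X); the Jacobian determinant of the
    mapping x -> x + u(x) measures local volume change
    (det > 1: expansion, det < 1: compression)."""
    grads = np.empty((3, 3) + disp.shape[1:])
    for i in range(3):              # gradient of each displacement component
        for j in range(3):
            grads[i, j] = np.gradient(disp[i], axis=j)
    # d(x + u)/dx = I + du/dx at every voxel
    jac = grads + np.eye(3).reshape(3, 3, 1, 1, 1)
    # Determinant of the 3x3 Jacobian, broadcast over all voxels.
    return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

# Uniform 1% expansion along every axis -> det = 1.01**3 everywhere.
u = np.stack(np.meshgrid(*(0.01 * np.arange(8),) * 3, indexing="ij"))
vent = ventilation_map(u)
```

    The strain components used for the elasticity descriptors would come from the symmetric part of the same gradient tensor.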

    A SURVEY OF AI IMAGING TECHNIQUES FOR COVID-19 DIAGNOSIS AND PROGNOSIS

    Get PDF
    The Coronavirus Disease 2019 (COVID-19) has caused massive numbers of infections and deaths. Radiological chest imaging such as computed tomography (CT) has been instrumental in the diagnosis and evaluation of lung infection, which is the common indication in patients infected with COVID-19. Technological advances in artificial intelligence (AI) further increase the performance of imaging tools and support health professionals. CT, Positron Emission Tomography–CT (PET/CT), X-ray, Magnetic Resonance Imaging (MRI), and Lung Ultrasound (LUS) are used for the diagnosis and treatment of COVID-19. Applying AI to image acquisition will help automate the scanning process and protect lab technicians. AI-empowered models help radiologists and health experts make better clinical decisions. We review AI-empowered medical imaging characteristics, image acquisition, and computer-aided models that help in COVID-19 diagnosis, management, and follow-up. Much emphasis is on CT and X-ray with integrated AI, as they are the first choice in many hospitals

    Accuracy of Patient-Specific Organ Dose Estimates Obtained Using an Automated Image Segmentation Algorithm

    Get PDF
    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors in delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was -7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multi-atlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
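    The comparison described above reduces to averaging a dose map over each segmentation mask and reporting the percent difference. A minimal sketch with hypothetical data (the function names, the toy dose map, and the one-column boundary error are assumptions for illustration):

```python
import numpy as np

def mean_organ_dose(dose_map, organ_mask):
    """Mean dose over an organ: average the Monte Carlo dose map over
    the voxels the segmentation assigns to that organ.  Small boundary
    errors change few voxels, so the mean is relatively robust."""
    return float(dose_map[organ_mask].mean())

def percent_error(auto_dose, expert_dose):
    return 100.0 * (auto_dose - expert_dose) / expert_dose

# Hypothetical 2-D dose map: 2.0 (arbitrary units) inside a square field.
dose = np.zeros((10, 10))
dose[2:8, 2:8] = 2.0

expert = np.zeros((10, 10), dtype=bool)
expert[3:7, 3:7] = True                  # expert organ contour

auto = np.zeros((10, 10), dtype=bool)
auto[3:7, 3:9] = True                    # auto contour overshoots two columns
```

    Here the automated contour spills into the low-dose region, so its mean organ dose is pulled below the expert value, mimicking how boundary errors near steep dose gradients (as in the spinal canal) produce the largest discrepancies.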

    Novel Computer-Aided Detection of Respiratory Misregistration Artifacts in PET/CT

    Get PDF

    Segmentation of Lung Structures in CT

    Get PDF

    Lung segmentation and characterization in covid-19 patients for assessing pulmonary thromboembolism: An approach based on deep learning and radiomics

    Get PDF
    The COVID-19 pandemic is inevitably changing the world in a dramatic way, and the role of computed tomography (CT) scans can be pivotal for the prognosis of COVID-19 patients. Since the start of the pandemic, great attention has been paid to the relationship between the interstitial pneumonia caused by the infection and the onset of thromboembolic phenomena. In this preliminary study, we collected n = 20 CT scans from the Polyclinic of Bari, all from patients positive for COVID-19, nine of whom developed pulmonary thromboembolism (PTE). For eight CT scans, we obtained masks of the lesions caused by the infection, annotated by expert radiologists; for the other four CT scans, we obtained masks of the lungs (including both healthy parenchyma and lesions). We developed a deep learning-based segmentation model that uses convolutional neural networks (CNNs) to accurately segment the lung and lesions. By adding images from publicly available datasets, we assembled a training set of 32 CT scans and a validation set of 10 CT scans. The results obtained on the segmentation task are promising, reaching a Dice coefficient higher than 97% and laying the basis for an analysis of PTE onset. We characterized the segmented regions to identify radiomic features that can be useful for the prognosis of PTE. Out of 919 extracted radiomic features, we found that 109 have different distributions between the two groups according to the Mann–Whitney U test with corrected p-values less than 0.01. Lastly, nine uncorrelated features were retained that can be exploited to build a prognostic signature.
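    The Dice coefficient used to score the segmentation above is 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal sketch with toy masks (the function name and the empty-mask convention are assumptions, not the paper's code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks:
    2|A & B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0      # both masks empty: defined here as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2-D masks: prediction shifted one column from the ground truth.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True          # 16 voxels
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 3:7] = True           # 16 voxels, 12 overlapping
```

    On 3-D CT volumes the same expression applies voxel-wise; a Dice above 97% means the predicted and expert masks are nearly coincident.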