
    Automatic calcium scoring in low-dose chest CT using deep neural networks with dilated convolutions

    Heavy smokers undergoing screening with low-dose chest CT are affected by cardiovascular disease as much as by lung cancer. Low-dose chest CT scans acquired in screening enable quantification of atherosclerotic calcifications and thus enable identification of subjects at increased cardiovascular risk. This paper presents a method for automatic detection of coronary artery, thoracic aorta and cardiac valve calcifications in low-dose chest CT using two consecutive convolutional neural networks. The first network identifies and labels potential calcifications according to their anatomical location and the second network identifies true calcifications among the detected candidates. This method was trained and evaluated on a set of 1744 CT scans from the National Lung Screening Trial. To determine whether any reconstruction or only images reconstructed with soft tissue filters can be used for calcification detection, we evaluated the method on soft and medium/sharp filter reconstructions separately. On soft filter reconstructions, the method achieved F1 scores of 0.89, 0.89, 0.67, and 0.55 for coronary artery, thoracic aorta, aortic valve and mitral valve calcifications, respectively. On sharp filter reconstructions, the F1 scores were 0.84, 0.81, 0.64, and 0.66, respectively. Linearly weighted kappa coefficients for risk category assignment based on per subject coronary artery calcium were 0.91 and 0.90 for soft and sharp filter reconstructions, respectively. These results demonstrate that the presented method enables reliable automatic cardiovascular risk assessment in all low-dose chest CT scans acquired for lung cancer screening.
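The two evaluation metrics used above can be sketched in a few lines. The functions below are illustrative, not the authors' code: a per-lesion F1 score from detection counts, and a linearly weighted Cohen's kappa for ordinal risk categories (as used here for coronary calcium risk assignment).

```python
import numpy as np

def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall from detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def linearly_weighted_kappa(y_true, y_pred, n_cat):
    """Linearly weighted Cohen's kappa for ordinal categories,
    e.g. coronary-calcium-based cardiovascular risk categories."""
    conf = np.zeros((n_cat, n_cat))
    for t, p in zip(y_true, y_pred):
        conf[t, p] += 1
    conf /= conf.sum()
    # linear disagreement weights: |i - j| / (n_cat - 1)
    idx = np.arange(n_cat)
    w = np.abs(idx[:, None] - idx[None, :]) / (n_cat - 1)
    # expected agreement matrix from the marginal distributions
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))
    return 1 - (w * conf).sum() / (w * expected).sum()
```

With linear weights, near-miss category assignments are penalized less than distant ones, which is why weighted kappa is the conventional choice for ordinal risk categories.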

    Fully Automated Deep Learning-enabled Detection for Hepatic Steatosis on Computed Tomography: A Multicenter International Validation Study

    Despite the high global prevalence of hepatic steatosis, no automated diagnostic tool has demonstrated generalizability in detecting steatosis across multiple international datasets. Traditionally, hepatic steatosis detection relies on clinicians selecting a region of interest (ROI) on computed tomography (CT) to measure liver attenuation. ROI selection demands time and expertise, and is therefore not routinely performed at the population level. To automate the process, we validated an existing artificial intelligence (AI) system for 3D liver segmentation and used it to propose a novel method, AI-ROI, which automatically selects the ROI for attenuation measurements. The AI segmentation and AI-ROI method were evaluated on 1,014 non-contrast-enhanced chest CT images from eight international datasets: LIDC-IDRI, NSCLC-Lung1, RIDER, VESSEL12, RICORD-1A, RICORD-1B, COVID-19-Italy, and COVID-19-China. AI segmentation achieved a mean Dice coefficient of 0.957. Attenuations measured by AI-ROI showed no significant differences from expert measurements (p = 0.545) and a 71% reduction in measurement time. The area under the curve (AUC) for steatosis classification with AI-ROI was 0.921 (95% CI: 0.883 - 0.959). If performed as a routine screening method, our AI protocol could potentially allow early non-invasive, non-pharmacological preventative interventions for hepatic steatosis. The 1,014 expert-annotated liver segmentations with hepatic steatosis labels can be downloaded here: https://drive.google.com/drive/folders/1-g_zJeAaZXYXGqL1OeF6pUjr6KB0igJX
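The core measurement is simple once a liver segmentation exists: average the Hounsfield units inside a segmentation-derived ROI and compare against a cut-off. The sketch below is a stand-in, not the paper's AI-ROI placement logic, and the 40 HU steatosis threshold is an assumption (a commonly cited non-contrast CT cut-off, not stated in the abstract).

```python
import numpy as np

def roi_mean_attenuation(hu_volume, liver_mask):
    """Mean liver attenuation (HU) over a segmentation-derived ROI.
    The paper's AI-ROI places the ROI automatically inside the AI liver
    segmentation; here we simply average over all mask voxels."""
    return float(hu_volume[liver_mask].mean())

def is_steatotic(mean_hu, threshold_hu=40.0):
    """Flag steatosis when mean liver attenuation falls below a threshold.
    40 HU is an assumed, commonly used non-contrast cut-off."""
    return mean_hu < threshold_hu
```

In practice the ROI would also be eroded away from the liver boundary and vessels before averaging, which is part of what makes automatic placement non-trivial.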

    Artificial intelligence based automatic quantification of epicardial adipose tissue suitable for large scale population studies

    To develop a fully automatic model capable of reliably quantifying epicardial adipose tissue (EAT) volumes and attenuation in large scale population studies to investigate their relation to markers of cardiometabolic risk. Non-contrast cardiac CT images from the SCAPIS study were used to train and test a convolutional neural network based model to quantify EAT by: segmenting the pericardium, suppressing noise-induced artifacts in the heart chambers, and, if image sets were incomplete, imputing missing EAT volumes. The model achieved a mean Dice coefficient of 0.90 when tested against expert manual segmentations on 25 image sets. Tested on 1400 image sets, the model successfully segmented 99.4% of the cases. Automatic imputation of missing EAT volumes had an error of less than 3.1% with up to 20% of the slices in image sets missing. The most important predictors of EAT volume were weight and waist circumference, while EAT attenuation was predicted mainly by EAT volume. A model with excellent performance, capable of fully automatic handling of the most common challenges in large scale EAT quantification, has been developed. In studies of the importance of EAT in disease development, the strong co-variation with anthropometric measures needs to be carefully considered.
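The abstract does not specify the imputation scheme, so the following is only a plausible sketch: per-slice EAT areas with missing slices filled by linear interpolation along the craniocaudal axis, then summed into a volume.

```python
import numpy as np

def impute_eat_areas(slice_areas):
    """Impute missing per-slice EAT areas (marked NaN) by linear
    interpolation along the slice axis — an assumed stand-in for the
    paper's imputation of incomplete image sets."""
    areas = np.asarray(slice_areas, dtype=float)
    idx = np.arange(len(areas))
    known = ~np.isnan(areas)
    return np.interp(idx, idx[known], areas[known])

def eat_volume_ml(slice_areas_mm2, slice_thickness_mm):
    """EAT volume (ml) as summed imputed slice area x slice thickness."""
    areas = impute_eat_areas(slice_areas_mm2)
    return areas.sum() * slice_thickness_mm / 1000.0
```

Because EAT area varies smoothly between adjacent slices, interpolation of a moderate fraction of missing slices introduces only a small volume error, consistent with the sub-3.1% figure reported above.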

    Combining Shape and Learning for Medical Image Analysis

    Automatic methods with the ability to make accurate, fast and robust assessments of medical images are in high demand in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to that of an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods succeed in meeting these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is used as an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. in computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities, i.e. pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.

    First PACS‐integrated artificial intelligence‐based software tool for rapid and fully automatic analysis of body composition from CT in clinical routine

    Background: To externally evaluate the first picture archiving and communication system (PACS)-integrated artificial intelligence (AI)-based workflow, trained to automatically detect a predefined computed tomography (CT) slice at the third lumbar vertebra (L3) and automatically perform complete image segmentation for analysis of CT body composition, and to compare its performance with that of an established semi-automatic segmentation tool regarding speed and accuracy of tissue area calculation. Methods: For fully automatic analysis of body composition with L3 recognition, U-Nets were trained (Visage) and compared with a conventional image segmentation software (TomoVision). Tissue was differentiated into psoas muscle, skeletal muscle, visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT). Mid-L3 level images from randomly selected DICOM slice files of 20 CT scans acquired with various imaging protocols were segmented with both methods. Results: Success rate of AI-based L3 recognition was 100%. Compared with semi-automatic segmentation, fully automatic AI-based image segmentation yielded relative differences of 0.22% and 0.16% for skeletal muscle, 0.47% and 0.49% for psoas muscle, 0.42% and 0.42% for VAT and 0.18% and 0.18% for SAT. AI-based fully automatic segmentation was significantly faster than semi-automatic segmentation (3 ± 0 s vs. 170 ± 40 s, P < 0.001, for User 1 and 152 ± 40 s, P < 0.001, for User 2). Conclusion: Rapid fully automatic AI-based, PACS-integrated assessment of body composition yields identical results without transfer of critical patient data. Additional metabolic information can be inserted into the patient's image report and offered to the referring clinicians.
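The tissue area calculation itself is mechanical once the L3 slice and region masks exist: count pixels falling in a tissue-specific Hounsfield window and scale by pixel area. The HU ranges below are commonly used body-composition values assumed for illustration; they are not taken from this study.

```python
import numpy as np

# Assumed, commonly used HU windows for body composition at L3
# (not taken from the study itself).
HU_RANGES = {
    "skeletal_muscle": (-29, 150),
    "vat": (-150, -50),
    "sat": (-190, -30),
}

def tissue_area_cm2(hu_slice, region_mask, tissue, pixel_spacing_mm):
    """Cross-sectional tissue area at the L3 slice: pixels inside both
    the anatomical region mask and the tissue's HU window, scaled by
    the pixel area from the DICOM pixel spacing."""
    lo, hi = HU_RANGES[tissue]
    in_window = (hu_slice >= lo) & (hu_slice <= hi)
    n_pixels = int(np.count_nonzero(in_window & region_mask))
    px_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return n_pixels * px_area_mm2 / 100.0  # mm^2 -> cm^2
```

The hard part, and what the U-Nets above automate, is producing the region masks and finding the L3 slice; the area computation is then deterministic, which is why the two methods agree to within half a percent.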

    Machine learning applications in cardiac computed tomography: a composite systematic review

    Artificial intelligence and machine learning (ML) models are rapidly being applied to the analysis of cardiac computed tomography (CT). We sought to provide an overview of the contemporary advances brought about by the combination of ML and cardiac CT. Six searches were performed in Medline, Embase, and the Cochrane Library up to November 2021 for (i) CT-fractional flow reserve (CT-FFR), (ii) atrial fibrillation (AF), (iii) aortic stenosis, (iv) plaque characterization, (v) fat quantification, and (vi) coronary artery calcium score. We included 57 studies pertaining to the aforementioned topics. Non-invasive CT-FFR can accurately be estimated using ML algorithms and has the potential to reduce the requirement for invasive angiography. Coronary artery calcification and non-calcified coronary lesions can now be automatically and accurately calculated. Epicardial adipose tissue can also be automatically, accurately, and rapidly quantified. Effective ML algorithms have been developed to streamline and optimize the safety of aortic annular measurements to facilitate pre-transcatheter aortic valve replacement valve selection. Within electrophysiology, the left atrium (LA) can be segmented and resultant LA volumes have contributed to accurate predictions of post-ablation recurrence of AF. In this review, we discuss the latest studies and evolving techniques of ML and cardiac CT

    Development of pericardial fat count images using a combination of three different deep-learning models

    Rationale and Objectives: Pericardial fat (PF), the thoracic visceral fat surrounding the heart, promotes the development of coronary artery disease by inducing inflammation of the coronary arteries. For evaluating PF, this study aimed to generate pericardial fat count images (PFCIs) from chest radiographs (CXRs) using a dedicated deep-learning model. Materials and Methods: The data of 269 consecutive patients who underwent coronary computed tomography (CT) were reviewed. Patients with metal implants, pleural effusion, history of thoracic surgery, or malignancy were excluded. Thus, the data of 191 patients were used. PFCIs were generated from the projection of three-dimensional CT images, where fat accumulation was represented by a high pixel value. Three different deep-learning models, including CycleGAN, were combined in the proposed method to generate PFCIs from CXRs. A single CycleGAN-based model was used to generate PFCIs from CXRs for comparison with the proposed method. To evaluate the image quality of the generated PFCIs, the structural similarity index measure (SSIM), mean squared error (MSE), and mean absolute error (MAE) of (i) the PFCI generated using the proposed method and (ii) the PFCI generated using the single model were compared. Results: The mean SSIM, MSE, and MAE were as follows: 0.856, 0.0128, and 0.0357, respectively, for the proposed model; and 0.762, 0.0198, and 0.0504, respectively, for the single CycleGAN-based model. Conclusion: PFCIs generated from CXRs with the proposed model showed better performance than those with the single model. PFCI evaluation without CT may be possible with the proposed method.
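The three image-quality metrics compared above can be sketched as follows. MSE and MAE are straightforward; for SSIM, this is a simplified single-window (global) variant rather than the sliding-window SSIM typically used in practice, with the standard constants c1 and c2.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

def mae(a, b):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(a - b)))

def global_ssim(a, b, data_range=1.0):
    """Single-window (global) SSIM — a simplified stand-in for the
    windowed SSIM usually used to compare generated and reference
    images such as PFCIs."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

SSIM rewards preserved luminance, contrast and structure (it is 1 for identical images), whereas MSE and MAE only measure pixelwise deviation, which is why all three are reported together.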