
    Automatic calcium scoring in low-dose chest CT using deep neural networks with dilated convolutions

    Heavy smokers undergoing screening with low-dose chest CT are affected by cardiovascular disease as much as by lung cancer. Low-dose chest CT scans acquired in screening enable quantification of atherosclerotic calcifications and thus identification of subjects at increased cardiovascular risk. This paper presents a method for automatic detection of coronary artery, thoracic aorta, and cardiac valve calcifications in low-dose chest CT using two consecutive convolutional neural networks. The first network identifies potential calcifications and labels them according to their anatomical location, and the second network identifies true calcifications among the detected candidates. The method was trained and evaluated on a set of 1744 CT scans from the National Lung Screening Trial. To determine whether any reconstruction, or only images reconstructed with soft-tissue filters, can be used for calcification detection, we evaluated the method on soft and medium/sharp filter reconstructions separately. On soft filter reconstructions, the method achieved F1 scores of 0.89, 0.89, 0.67, and 0.55 for coronary artery, thoracic aorta, aortic valve, and mitral valve calcifications, respectively. On sharp filter reconstructions, the F1 scores were 0.84, 0.81, 0.64, and 0.66, respectively. Linearly weighted kappa coefficients for risk category assignment based on per-subject coronary artery calcium were 0.91 and 0.90 for soft and sharp filter reconstructions, respectively. These results demonstrate that the presented method enables reliable automatic cardiovascular risk assessment in all low-dose chest CT scans acquired for lung cancer screening.
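
    As a minimal illustration of how the reported numbers could be reproduced (an assumption, not the authors' code), the per-lesion F1 scores and the linearly weighted kappa for risk-category agreement can be computed with scikit-learn, given per-candidate detection labels and per-subject risk categories:

        # Sketch only: labels below are hypothetical; scikit-learn assumed installed.
        from sklearn.metrics import f1_score, cohen_kappa_score

        # Hypothetical per-candidate labels (1 = true calcification)
        y_true = [1, 0, 1, 1, 0, 1]
        y_pred = [1, 0, 1, 0, 0, 1]
        print("F1:", f1_score(y_true, y_pred))

        # Hypothetical per-subject cardiovascular risk categories (0-3)
        ref_risk = [0, 1, 2, 3, 2, 1]
        auto_risk = [0, 1, 2, 2, 2, 1]
        print("Linearly weighted kappa:",
              cohen_kappa_score(ref_risk, auto_risk, weights="linear"))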

    Automatic coronary calcium scoring in chest CT using a deep neural network in direct comparison with non-contrast cardiac CT: A validation study

    Purpose: To evaluate deep learning-based calcium quantification on chest CT scans compared with manual evaluation, and to enable interpretation in terms of the traditional Agatston score on dedicated cardiac CT. Methods: Automated calcium quantification was performed using a combination of deep-learning convolutional neural networks with a ResNet architecture for image features and a fully connected neural network for spatial coordinate features. Calcifications were identified automatically, after which the algorithm excluded all non-coronary calcifications using coronary probability maps and aortic segmentation. The algorithm was first trained on cardiac CTs and refined on non-triggered chest CTs. This study included 95 patients (cohort 1) who underwent both dedicated calcium scoring and chest CT acquisitions, with the Agatston score as reference standard, and 168 patients (cohort 2) who underwent chest CT only, with qualitative expert assessment for external validation. Results from the deep-learning model were compared to Agatston scores (cardiac CT), manually determined calcium volumes (chest CT), and risk classifications. Results: In cohort 1, the Agatston score and the AI-determined calcium volume showed high correlation, with a correlation coefficient of 0.921 (p < 0.001) and R² of 0.91. According to the Agatston categories, a total of 67 (70%) were correctly classified, with a sensitivity of 91% and a specificity of 92% in detecting the presence of coronary calcifications. Manually determined calcium volume on chest CT showed excellent correlation with the AI volumes, with a correlation coefficient of 0.923 (p < 0.001) and R² of 0.96; no significant difference was found (p = 0.247). According to the qualitative risk classifications in cohort 2, 138 (82%) cases were correctly classified, with a kappa coefficient of 0.74, representing good agreement. All wrongly classified scans (30; 18%) were attributed to an adjacent category. Conclusion: Artificial intelligence-based calcium quantification on chest CT shows good correlation with reference standards. Fully automating this process may reduce evaluation time and potentially optimize clinical calcium scoring without additional acquisitions.
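
    For readers unfamiliar with the Agatston reference standard used above, the sketch below shows the conventional per-lesion weighting and risk categories; it is a generic illustration, not the study's implementation, and the example lesion values are made up:

        # Conventional Agatston scoring: sum over lesions of area (mm^2) times a
        # weight derived from the lesion's peak attenuation in HU.
        def agatston_weight(peak_hu):
            if peak_hu >= 400:
                return 4
            if peak_hu >= 300:
                return 3
            if peak_hu >= 200:
                return 2
            if peak_hu >= 130:
                return 1
            return 0

        def agatston_score(lesions):
            """lesions: iterable of (area_mm2, peak_hu) for lesions >= 130 HU."""
            return sum(area * agatston_weight(hu) for area, hu in lesions)

        def risk_category(score):
            # Commonly used categories: 0, 1-10, 11-100, 101-400, >400
            for upper, label in [(0, "none"), (10, "minimal"),
                                 (100, "mild"), (400, "moderate")]:
                if score <= upper:
                    return label
            return "severe"

        score = agatston_score([(12.0, 250), (4.5, 410)])  # hypothetical lesions
        print(score, risk_category(score))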

    Standards for quantitative assessments by coronary computed tomography angiography (CCTA)

    In current clinical practice, qualitative or semi-quantitative measures are primarily used to report coronary artery disease on cardiac CT. With advancements in cardiac CT technology and automated post-processing tools, quantitative measures of coronary disease severity have become more broadly available. Quantitative coronary CT angiography has great potential value for clinical management of patients, but also for research. This document aims to provide definitions and standards for the performance and reporting of quantitative measures of coronary artery disease by cardiac CT.

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use.
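
    As a toy illustration of the "layers that transform the data non-linearly" described above (PyTorch assumed; the architecture and sizes are hypothetical and not taken from any surveyed paper), a small stacked model for a single-lead cardiology signal could look like this:

        import torch
        import torch.nn as nn

        # Each Conv1d + ReLU pair is one non-linear transformation; stacking them
        # builds the hierarchical representation the review refers to.
        model = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),   # low-level waveform features
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),  # higher-level features
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, 2),                             # e.g. normal vs. abnormal
        )

        x = torch.randn(8, 1, 1000)   # batch of 8 signals, 1000 samples each
        print(model(x).shape)         # torch.Size([8, 2])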

    Systems Radiology and Personalized Medicine

    Medicine has evolved into a high level of specialization built on very detailed imaging of individual organs. This has impressively solved a multitude of acute health-related problems linked to single-organ diseases. Many diseases and pathophysiological processes, however, involve more than one organ. An organ-based approach is challenging when considering disease prevention and caring for elderly patients, or those with systemic chronic diseases or multiple co-morbidities. In addition, medical imaging provides more than a pretty picture. Much of the data is now revealed by quantitative algorithms, with or without artificial intelligence. This Special Issue on “Systems Radiology and Personalized Medicine” includes reviews and original studies that show the strengths and weaknesses of structural and functional whole-body imaging for personalized medicine.

    Innovations in Medical Image Analysis and Explainable AI for Transparent Clinical Decision Support Systems

    This thesis explores innovative methods designed to assist clinicians in their everyday practice, with a particular emphasis on medical image analysis and explainability. The main challenge lies in interpreting the knowledge gained from machine learning algorithms, often called black boxes, to provide transparent clinical decision support systems suitable for real integration into clinical practice. For this reason, all of the work exploits Explainable AI techniques to study and interpret the trained models. Given the countless open problems in the development of clinical decision support systems, the project includes the analysis of various data types and pathologies. The main works focus on the most threatening disease afflicting the female population: breast cancer. They aim to diagnose and classify breast cancer from medical images, taking advantage of first-level examinations such as mammography screening and ultrasound, as well as more advanced examinations such as MRI. Papers on breast cancer and microcalcification classification demonstrated the potential of shallow learning algorithms in terms of explainability and accuracy when intelligible radiomic features are used. Conversely, the union of deep learning and Explainable AI methods showed impressive results for breast cancer detection. The local explanations provided via saliency maps were critical for model introspection, as well as for increasing performance. To increase trust in these systems and to aspire to their real use, a multi-level explanation was proposed. Three main stakeholders who need transparent models were identified: developers, physicians, and patients. For this reason, guided by the enormous impact of COVID-19 on the world population, a fully explainable machine learning model was proposed for COVID-19 prognosis prediction exploiting the proposed multi-level explanation. Such a system is assumed to primarily require two components: 1) inherently explainable inputs such as clinical, laboratory, and radiomic features; 2) explainable methods capable of explaining the trained model both globally and locally. The union of these two requirements allows the developer to detect any model bias, allows the doctor to verify the model's findings against clinical evidence, and allows decisions to be justified to patients. These results were also confirmed in the study of coronary artery disease: machine learning algorithms were trained using intelligible clinical and radiomic features extracted from pericoronary adipose tissue to assess the condition of the coronary arteries. Finally, important national and international collaborations led to the analysis of data for the development of predictive models for neurological disorders; in particular, the predictive value of handwriting features for identifying depressed patients was explored. By training neural networks constrained by first-order logic, it was possible to provide high-performance, explainable models, going beyond the trade-off between explainability and accuracy.
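
    As a hedged sketch of the saliency-map idea mentioned above (PyTorch assumed; the toy classifier and input size are hypothetical and not taken from the thesis), a basic gradient saliency map for a trained image classifier can be obtained as follows:

        import torch

        def gradient_saliency(model, image, target_class):
            """image: tensor of shape (1, C, H, W); returns a per-pixel heat map."""
            model.eval()
            image = image.clone().requires_grad_(True)
            score = model(image)[0, target_class]
            score.backward()
            # Max of absolute gradients over channels gives one importance map
            return image.grad.abs().max(dim=1)[0].squeeze(0)

        # Toy usage with a made-up classifier
        toy = torch.nn.Sequential(
            torch.nn.Conv2d(1, 4, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Flatten(), torch.nn.Linear(4 * 28 * 28, 2))
        heat = gradient_saliency(toy, torch.randn(1, 1, 28, 28), target_class=1)
        print(heat.shape)  # torch.Size([28, 28])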

    AI-based Aortic Vessel Tree Segmentation for Cardiovascular Diseases Treatment:Status Quo

    The aortic vessel tree is composed of the aorta and its branching arteries and plays a key role in supplying the whole body with blood. Aortic diseases, like aneurysms or dissections, can lead to an aortic rupture, whose treatment with open surgery is highly risky. Therefore, patients commonly undergo drug treatment under constant monitoring, which requires regular inspection of the vessels through imaging. The standard imaging modality for diagnosis and monitoring is computed tomography (CT), which can provide a detailed picture of the aorta and its branching vessels when combined with a contrast agent, in CT angiography (CTA). Optimally, the whole aortic vessel tree geometry from consecutive CTAs is overlaid and compared. This allows detection of changes not only in the aorta but also in its branches, whether caused by the primary pathology or newly developed. When performed manually, this reconstruction requires slice-by-slice contouring, which can easily take a whole day for a single aortic vessel tree and is therefore not feasible in clinical practice. Automatic or semi-automatic vessel tree segmentation algorithms, however, can complete this task in a fraction of the manual execution time and run in parallel with the clinicians' routine. In this paper, we systematically review computing techniques for the automatic and semi-automatic segmentation of the aortic vessel tree. The review concludes with an in-depth discussion of how close these state-of-the-art approaches are to application in clinical practice and how active this research field is, taking into account the number of publications, datasets, and challenges.
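
    To make the class of semi-automatic methods concrete (this is a generic textbook technique, not one of the reviewed algorithms; the HU window and toy volume are illustrative only), a simple 3-D region growing from a user-placed seed inside the contrast-enhanced lumen could be written as:

        import numpy as np
        from collections import deque

        def region_grow(volume, seed, lo=200, hi=600):
            """volume: 3-D array of HU values; seed: (z, y, x) inside the lumen;
            lo/hi: HU window assumed to capture contrast-enhanced blood."""
            mask = np.zeros(volume.shape, dtype=bool)
            queue = deque([seed])
            while queue:
                z, y, x = queue.popleft()
                if mask[z, y, x] or not (lo <= volume[z, y, x] <= hi):
                    continue
                mask[z, y, x] = True  # voxel accepted; visit its 6 neighbours
                for dz, dy, dx in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
                    nz, ny, nx = z + dz, y + dy, x + dx
                    if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                            and 0 <= nx < volume.shape[2]):
                        queue.append((nz, ny, nx))
            return mask

        # Toy volume: a synthetic enhanced "vessel" inside soft-tissue background
        vol = np.full((20, 20, 20), 50.0)
        vol[5:15, 9:11, 9:11] = 350.0
        print(region_grow(vol, (10, 10, 10)).sum())  # 40 voxels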

    Spectral Photon-Counting Computed Tomography: Technical Principles and Applications in the Assessment of Cardiovascular Diseases

    Spectral Photon-Counting Computed Tomography (SPCCT) represents a groundbreaking advancement in X-ray imaging technology. The core innovation of SPCCT lies in its photon-counting detectors, which count the exact number of incoming X-ray photons and individually measure their energy. The first part of this review summarizes the key elements of SPCCT technology, such as energy binning, energy weighting, and material decomposition. Its energy-discriminating ability is the key to increased contrast between different tissues, elimination of electronic noise, and correction of beam-hardening artifacts. Material decomposition provides valuable insights into the composition, concentration, and distribution of specific elements. The capability of SPCCT to operate in three or more energy regimes allows for the differentiation of several contrast agents, facilitating quantitative assessment of elements with specific energy thresholds within the diagnostic energy range. The second part of this review provides a brief overview of the applications of SPCCT in the assessment of various cardiovascular disease processes. SPCCT can support the study of myocardial blood perfusion and enable enhanced tissue characterization and identification of contrast agents in a manner that was previously unattainable.
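
    As a toy sketch of the material-decomposition idea described above (the basis attenuation values are made up for the demo, not calibrated physics), per-voxel measurements in several energy bins can be modelled as a linear mix of basis materials and the mixing coefficients recovered by least squares:

        import numpy as np

        # Assumed basis attenuation of water, calcium and iodine in 4 energy bins
        # (rows = energy bins, columns = materials); illustrative numbers only.
        A = np.array([[0.20, 1.10, 2.50],
                      [0.19, 0.80, 1.60],
                      [0.18, 0.55, 0.90],
                      [0.17, 0.40, 0.55]])

        true_mix = np.array([0.90, 0.05, 0.02])                   # mostly water
        measured = A @ true_mix + np.random.normal(0, 0.001, 4)   # noisy bin readings

        estimated, *_ = np.linalg.lstsq(A, measured, rcond=None)
        print(estimated)  # approximately [0.90, 0.05, 0.02]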