8 research outputs found

    A Deep Learning Approach to Evaluating Disease Risk in Coronary Bifurcations

    Full text link
    Cardiovascular disease represents a large burden on modern healthcare systems, requiring significant resources for patient monitoring and clinical interventions. It has been shown that the blood flow through the coronary arteries, shaped by the artery geometry unique to each patient, plays a critical role in the development and progression of heart disease. However, popular and well-tested cardiovascular disease risk models such as Framingham and QRISK3 are unable to take these differences into account when predicting disease risk. Over the last decade, medical imaging and image processing have advanced to the point that non-invasive, high-resolution 3D imaging is routinely performed for any patient suspected of coronary artery disease. This allows for the construction of virtual 3D models of the coronary anatomy and in-silico analysis of blood flow within the coronaries. However, several challenges still preclude the large-scale patient-specific simulations necessary for incorporating haemodynamic risk metrics into disease risk prediction. In particular, despite the large amount of available coronary medical imaging, extraction of the structures of interest from medical images remains a manual and laborious task. There is significant variation in how geometric features of the coronary arteries are measured, which makes comparisons between different studies difficult. Modelling blood flow conditions in the coronary arteries likewise requires manual preparation of the simulations and significant computational cost. This thesis aims to address these challenges. The "Automated Segmentation of Coronary Arteries (ASOCA)" challenge establishes a benchmark dataset of coronary arteries and their associated 3D reconstructions, which is currently the largest openly available dataset of coronary artery models and offers a wide range of applications, such as computational modelling, 3D printing for experiments, developing and testing medical devices such as stents, and Virtual Reality applications for education and training. An automated computational modelling workflow is developed to set up, run and postprocess simulations on the Left Main bifurcation and calculate relevant shape metrics. A convolutional neural network model is developed to replace the computational fluid dynamics process, predicting haemodynamic metrics such as wall shear stress in minutes rather than the several hours required by traditional computational modelling, reducing the computation and labour cost involved in performing such simulations.
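
    To illustrate the surrogate-modelling idea described above, the sketch below shows a hypothetical 3D encoder-decoder CNN (in PyTorch) that maps a voxelised bifurcation geometry to a per-voxel wall shear stress estimate. The architecture, input representation, and names are assumptions for illustration only, not the network developed in the thesis.

        # Minimal sketch (hypothetical): a small 3D encoder-decoder CNN that maps a
        # voxelised bifurcation geometry (e.g. a signed-distance field) to a per-voxel
        # wall shear stress estimate, showing how a learned surrogate could stand in
        # for a CFD solve.
        import torch
        import torch.nn as nn

        class WSSSurrogate(nn.Module):
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),  # WSS field (Pa)
                )

            def forward(self, geometry):
                return self.decoder(self.encoder(geometry))

        # Example: a 64^3 voxel grid standing in for a Left Main bifurcation geometry.
        model = WSSSurrogate()
        geometry = torch.randn(1, 1, 64, 64, 64)   # placeholder input
        wss_prediction = model(geometry)            # shape (1, 1, 64, 64, 64)

    Once trained against CFD results, such a network can be evaluated in a single forward pass, which is the source of the minutes-versus-hours speed-up mentioned above.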

    Sex-Specific Variances in Anatomy and Blood Flow of the Left Main Coronary Bifurcation: Implications for Coronary Artery Disease Risk

    Full text link
    Studies have shown marked sex disparities in Coronary Artery Disease (CAD) epidemiology, yet the underlying mechanisms remain unclear. We explored sex disparities in the coronary anatomy and the resulting haemodynamics in patients with suspected, but no significant, CAD. Left Main (LM) bifurcations were reconstructed from CTCA images of 127 cases (42 males and 85 females, aged 38 to 81). Detailed shape parameters were measured for comparison, including bifurcation angles, curvature, and diameters, before solving the haemodynamic metrics using CFD. The severity and location of the normalised vascular area exposed to physiologically adverse haemodynamics were statistically compared between sexes for all branches. We found significant differences between sexes in potentially adverse haemodynamics. Females were more likely than males to exhibit adversely low Time-Averaged Endothelial Shear Stress along the inner wall of a bifurcation (16.8% vs 10.7%). Males had a higher percentage of areas exposed to both adversely high Relative Residence Time (6.1% vs 4.2%, p=0.001) and high Oscillatory Shear Index (4.6% vs 2.3%, p<0.001). However, the OSI values were generally small and should be interpreted cautiously. Males had larger arteries (M vs F, LM: 4.0 mm vs 3.3 mm; LAD: 3.6 mm vs 3.0 mm; LCx: 3.5 mm vs 2.9 mm), and females exhibited higher curvatures in all three branches (M vs F, LM: 0.40 vs 0.46; LAD: 0.45 vs 0.51; LCx: 0.47 vs 0.55, p<0.001) and a larger inflow angle of the LM trunk (M: 12.9° vs F: 18.5°, p=0.025). Haemodynamic differences were found between male and female patients, which may contribute, at least in part, to differences in CAD risk. This work may facilitate a better understanding of sex differences in the clinical presentation of CAD, contributing to improved sex-specific screening, especially relevant for women with CAD, who currently have worse predictive outcomes.
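
    The haemodynamic metrics compared above have standard definitions: TAWSS is the time-averaged magnitude of the wall shear stress vector, OSI measures how strongly the shear direction oscillates over the cardiac cycle, and RRT combines the two. The sketch below computes them with NumPy from a time series of wall shear stress vectors at each surface node; the array names and shapes are illustrative assumptions, not the study's actual pipeline.

        # Minimal sketch of the standard TAWSS, OSI, and RRT definitions, computed from
        # a time series of wall shear stress vectors at each surface node.
        import numpy as np

        def haemodynamic_metrics(wss, dt):
            """wss: array of shape (timesteps, nodes, 3), WSS vectors in Pa; dt in s."""
            vector_integral = np.trapz(wss, dx=dt, axis=0)                       # integral of tau over the cycle
            magnitude_integral = np.trapz(np.linalg.norm(wss, axis=2), dx=dt, axis=0)
            period = dt * (wss.shape[0] - 1)

            tawss = magnitude_integral / period
            osi = 0.5 * (1.0 - np.linalg.norm(vector_integral, axis=1) / magnitude_integral)
            rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)
            return tawss, osi, rrt

        # Example with synthetic data: 100 timesteps, 5000 surface nodes.
        wss = np.random.rand(100, 5000, 3)
        tawss, osi, rrt = haemodynamic_metrics(wss, dt=0.01)

    Thresholding these per-node values and summing the affected surface area is one way to obtain the normalised "area exposed to adverse haemodynamics" percentages reported above.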

    Automated segmentation of normal and diseased coronary arteries – The ASOCA challenge

    No full text
    Cardiovascular disease is a major cause of death worldwide. Computed Tomography Coronary Angiography (CTCA) is a non-invasive method used to evaluate coronary artery disease, as well as to evaluate and reconstruct heart and coronary vessel structures. Reconstructed models have a wide array of educational, training and research applications, such as the study of diseased and non-diseased coronary anatomy, machine-learning-based disease risk prediction, and in-silico and in-vitro testing of medical devices. However, coronary arteries are difficult to image due to their small size, location, and movement, causing poor resolution and artefacts. Segmentation of coronary arteries has traditionally focused on semi-automatic methods, where a human expert guides the algorithm and corrects errors, which severely limits large-scale applications and integration within clinical systems. International challenges aiming to overcome this barrier have focused on specific tasks such as centreline extraction, stenosis quantification, and segmentation of specific artery segments only. Here we present the results of the first challenge to develop fully automatic segmentation methods for full coronary artery trees and establish the first large standardised dataset of normal and diseased arteries. This forms a new automated segmentation benchmark, allowing the automated processing of CTCAs directly relevant for large-scale and personalised clinical applications.
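
    For context, automatic segmentations in challenges of this kind are typically scored against expert annotations with overlap metrics such as the Dice similarity coefficient. A minimal sketch follows; the function and synthetic masks are illustrative only, not the challenge's evaluation code.

        # Minimal sketch of the Dice similarity coefficient, a standard metric for
        # comparing an automatic coronary segmentation with an expert annotation.
        import numpy as np

        def dice_coefficient(prediction, ground_truth):
            """Both inputs are boolean voxel masks of the same shape."""
            intersection = np.logical_and(prediction, ground_truth).sum()
            total = prediction.sum() + ground_truth.sum()
            return 2.0 * intersection / total if total > 0 else 1.0

        # Example with synthetic 3D masks.
        pred = np.zeros((64, 64, 64), dtype=bool); pred[20:40, 20:40, 20:40] = True
        gt = np.zeros((64, 64, 64), dtype=bool); gt[22:42, 20:40, 20:40] = True
        print(f"Dice: {dice_coefficient(pred, gt):.3f}")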

    MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision

    No full text
    Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen from numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks, as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the classification of brain tumors, facial and skull reconstructions, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedbac
