
    Multi-Resolution 3D Convolutional Neural Networks for Automatic Coronary Centerline Extraction in Cardiac CT Angiography Scans

    We propose a deep learning-based automatic coronary artery tree centerline tracker (AuCoTrack) that extends the vessel tracker of Wolterink et al. (arXiv:1810.03143). A dual-pathway convolutional neural network (CNN) operating on multi-scale 3D inputs predicts the direction of the coronary arteries as well as the presence of a bifurcation. A similar multi-scale dual-pathway 3D CNN is trained to identify coronary artery endpoints and terminate the tracking process. When a bifurcation is detected, two or more continuation directions are derived. The iterative tracker detects the entire left and right coronary artery trees from only two ostium landmarks derived from a model-based segmentation of the heart. The 3D CNNs were trained on a proprietary dataset of 43 CCTA scans. An average sensitivity of 87.1% and a clinically relevant overlap of 89.1% were obtained relative to a refined manual segmentation. In addition, the MICCAI 2008 Coronary Artery Tracking Challenge (CAT08) training and test datasets were used to benchmark the algorithm and assess its generalization. An average overlap of 93.6% and a clinically relevant overlap of 96.4% were obtained. The proposed method achieved better overlap scores than current state-of-the-art automatic centerline extraction techniques on the CAT08 dataset, with a vessel detection rate of 95%.
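
    The iterative tracking loop described above can be pictured as follows. This is a minimal sketch, not the authors' implementation: `direction_net`, `endpoint_net`, and `extract_patch` are hypothetical callables standing in for the two multi-scale CNNs and the patch sampler, and the step size and probability thresholds are illustrative.

```python
import numpy as np

def track_tree(seeds, direction_net, endpoint_net, extract_patch,
               step_mm=0.5, max_steps=2000):
    """Iterative coronary centerline tracking from ostium seed points.

    direction_net(patch) -> (list of unit direction vectors, bifurcation prob.)
    endpoint_net(patch)  -> endpoint probability
    extract_patch(point) -> multi-scale 3D input around `point`
    All three are hypothetical stand-ins for the paper's CNNs.
    """
    branches, queue = [], [(np.asarray(s, float), None) for s in seeds]
    while queue:
        point, prev_dir = queue.pop()
        branch = [point]
        for _ in range(max_steps):
            patch = extract_patch(point)
            if endpoint_net(patch) > 0.5:          # endpoint reached: stop branch
                break
            directions, p_bif = direction_net(patch)
            if prev_dir is not None:               # forbid backtracking
                directions = [d for d in directions if np.dot(d, prev_dir) > 0]
            if not directions:
                break
            if p_bif > 0.5:                        # bifurcation: queue extra branches
                for d in directions[1:]:
                    queue.append((point + step_mm * d, d))
            prev_dir = directions[0]
            point = point + step_mm * prev_dir
            branch.append(point)
        branches.append(np.stack(branch))
    return branches
```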

    HNT-AI: An Automatic Segmentation Framework for Head and Neck Primary Tumors and Lymph Nodes in FDG-PET/CT Images

    Head and neck cancer is one of the most prevalent cancers in the world. Automatic delineation of primary tumors and lymph nodes is important for cancer diagnosis and treatment. In this paper, we develop a deep learning-based model for automatic tumor segmentation, HNT-AI, using PET/CT images provided by the MICCAI 2022 Head and Neck Tumor segmentation challenge (HECKTOR). We investigate the effect of residual blocks, squeeze-and-excitation normalization, and grid-attention gates on the performance of 3D U-Net. We project the predicted masks onto the z-axis and apply k-means clustering to reduce the number of false-positive predictions. Our proposed HNT-AI segmentation framework achieves an aggregated Dice score of 0.774 and 0.759 for primary tumors and lymph nodes, respectively, on the unseen external test set. Qualitative analysis shows that the predicted segmentation masks tend to follow the high standardized uptake value (SUV) areas on the PET scans more closely than the ground-truth masks do. The largest tumor volume, the largest lymph node volume, and the total number of lymph nodes derived from the segmentation proved to be potential biomarkers for recurrence-free survival, with a C-index of 0.627 on the test set.
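
    One plausible reading of the z-axis/k-means post-processing step is sketched below: label the connected components of the predicted mask, cluster their z-centroids with k-means, and keep the cluster containing the largest component. The function name, the choice of two clusters, and the z-first axis convention are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def reduce_false_positives(mask, n_clusters=2):
    """Drop predicted components far from the main lesion along z.

    `mask` is a binary 3D array with the z-axis first. Components whose
    z-centroid falls in a different k-means cluster than the largest
    component are treated as false positives and removed.
    """
    labeled, n = ndimage.label(mask)
    if n <= 1:
        return mask                                # nothing to filter
    idx = range(1, n + 1)
    sizes = ndimage.sum(mask, labeled, index=idx)  # voxel count per component
    z_centroids = np.array([c[0] for c in
                            ndimage.center_of_mass(mask, labeled, idx)])
    k = min(n_clusters, n)
    assign = KMeans(n_clusters=k, n_init=10).fit_predict(z_centroids.reshape(-1, 1))
    keep_cluster = assign[np.argmax(sizes)]        # cluster of the largest component
    keep_labels = np.flatnonzero(assign == keep_cluster) + 1
    return np.isin(labeled, keep_labels).astype(mask.dtype)
```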

    Precision-medicine-toolbox: An open-source python package for facilitation of quantitative medical imaging and radiomics analysis

    Medical image analysis plays a key role in precision medicine, as it allows clinicians to identify anatomical abnormalities, and it is routinely used in clinical assessment. Data curation and pre-processing of medical images are critical steps in quantitative medical image analysis that can have a significant impact on the resulting model performance. In this paper, we introduce a precision-medicine-toolbox that allows researchers to perform data curation, image pre-processing, handcrafted radiomics extraction (via PyRadiomics), and feature exploration tasks with Python. With this open-source solution, we aim to address the data preparation and exploration problem, bridge the gap between the currently existing packages, and improve the reproducibility of quantitative medical imaging research.
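
    As a concrete picture of the handcrafted-radiomics step, the sketch below calls PyRadiomics directly, the engine the toolbox uses for feature extraction. The file paths and extractor settings are placeholders, not values prescribed by the toolbox.

```python
# Minimal handcrafted-radiomics extraction with PyRadiomics.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor(
    binWidth=25,                        # intensity discretization (common default)
    resampledPixelSpacing=[1, 1, 1],    # resample to isotropic 1 mm voxels
    interpolator="sitkBSpline",
)
# Placeholder paths: a CT volume and its segmentation mask in NIfTI format.
features = extractor.execute("patient001_ct.nii.gz", "patient001_mask.nii.gz")
for name, value in features.items():
    if not name.startswith("diagnostics_"):   # skip provenance metadata
        print(name, value)
```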

    CT Reconstruction Kernels and the Effect of Pre- and Post-Processing on the Reproducibility of Handcrafted Radiomic Features

    Handcrafted radiomics features (HRFs) are quantitative features extracted from medical images to decode biological information and improve clinical decision making. Despite the potential of the field, limitations have been identified. The most important limitation identified to date is the sensitivity of HRFs to variations in image acquisition and reconstruction parameters. In this study, we investigated the use of Reconstruction Kernel Normalization (RKN) and ComBat harmonization to improve the reproducibility of HRFs across scans acquired with different reconstruction kernels. A set of phantom scans (n = 28) acquired on five different scanner models was analyzed. HRFs were extracted from the original scans, and the scans were harmonized using the RKN method. ComBat harmonization was applied to both sets of HRFs. The reproducibility of HRFs was assessed using the concordance correlation coefficient. The difference in the number of reproducible HRFs in each scenario was assessed using McNemar's test. The majority of HRFs were found to be sensitive to variations in the reconstruction kernels, and only six HRFs were found to be robust with respect to such variations. The use of RKN resulted in a significant increase in the number of reproducible HRFs in 19 out of the 67 investigated scenarios (28.4%), while the ComBat technique resulted in a significant increase in 36 (53.7%) scenarios. The combination of the two methods resulted in a significant increase in 53 (79.1%) scenarios compared to the HRFs extracted from the original images. Since the benefit of applying the harmonization methods depended on the data being harmonized, a reproducibility analysis is recommended before performing radiomics analysis. For future radiomics studies incorporating images acquired with similar image acquisition and reconstruction parameters, except for the reconstruction kernels, we recommend the systematic use of the pre- and post-processing approaches (RKN and ComBat, respectively).
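
    The reproducibility criterion used here, the concordance correlation coefficient, is compact enough to state in code. The sketch below implements Lin's CCC from its standard definition; the toy feature values and the 0.9 cut-off are illustrative, not the study's data or its exact threshold.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurements.

    CCC = 2 * cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    Equals 1 only for perfect agreement on the identity line, unlike
    Pearson's r, which ignores location and scale shifts.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))   # population covariance
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# A feature is often called reproducible when CCC exceeds a fixed cut-off
# (0.9 is a common choice; the study's exact threshold is not restated here).
kernel_a = [0.81, 0.77, 0.92, 0.64, 0.70]   # toy HRF values, smooth kernel
kernel_b = [0.79, 0.80, 0.90, 0.66, 0.71]   # same feature, sharp kernel
print(f"CCC = {lin_ccc(kernel_a, kernel_b):.3f}")
```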

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical, and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations, and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness, and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal, and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development, and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline that provides a structured approach for constructing medical AI tools that will be trusted, deployed, and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at the proof-of-concept stage to facilitate future translation of medical AI towards clinical practice.

    Transparency of deep neural networks for medical image analysis: A review of interpretability methods

    Artificial intelligence (AI) has emerged as a useful aid in numerous clinical applications for diagnosis and treatment decisions. Deep neural networks have shown the same or better performance than clinicians in many tasks, owing to the rapid increase in available data and computational power. In order to conform to the principles of trustworthy AI, it is essential that an AI system be transparent, robust, fair, and accountable. Current deep neural solutions are referred to as black boxes due to a lack of understanding of the specifics of the decision-making process. Therefore, there is a need to ensure the interpretability of deep neural networks before they can be incorporated into the routine clinical workflow. In this narrative review, we used systematic keyword searches and domain expertise to identify nine different types of interpretability methods that have been applied to deep learning models for medical image analysis, grouped by the type of explanation they generate and by technical similarity. Furthermore, we report the progress made towards evaluating the explanations produced by the various interpretability methods. Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions concerning the interpretability of deep neural networks for medical image analysis.
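
    To make the object of the review concrete, the sketch below implements one of the simplest interpretability methods it covers, vanilla gradient saliency, in PyTorch. It is a generic illustration for any differentiable classifier, not a method proposed by the review itself.

```python
import torch

def gradient_saliency(model, image, target_class):
    """Vanilla gradient saliency: |d score(target) / d input| per pixel/voxel.

    `model` is any differentiable classifier returning class logits;
    `image` is a single input tensor without a batch dimension.
    """
    model.eval()
    x = image.unsqueeze(0).clone().requires_grad_(True)   # add batch dim
    score = model(x)[0, target_class]                     # target-class logit
    score.backward()                                      # d score / d input
    # Aggregate gradient magnitude over channels -> one heat map
    return x.grad.detach().abs().amax(dim=1).squeeze(0)
```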

    Precision-medicine-toolbox: An open-source python package for the quantitative medical image analysis

    Medical image analysis plays a key role in precision medicine. Data curation and pre-processing are critical steps in quantitative medical image analysis that can have a significant impact on the resulting performance of machine learning models. In this work, we introduce the Precision-medicine-toolbox, which allows clinical and junior researchers to perform data curation, image pre-processing, radiomics extraction, and feature exploration tasks with a customizable Python package. With this open-source tool, we aim to facilitate the crucial data preparation and exploration steps, bridge the gap between the currently existing packages, and improve the reproducibility of quantitative medical imaging research.
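
    A typical example of the pre-processing such a toolbox automates is resampling scans to a common voxel spacing before feature extraction. The sketch below does this with plain SimpleITK; it illustrates the step, not the toolbox's own API, and the 1 mm spacing is an arbitrary choice.

```python
import SimpleITK as sitk

def resample_isotropic(image, spacing=(1.0, 1.0, 1.0)):
    """Resample a CT/PET volume to isotropic voxel spacing.

    Computes the output grid size that preserves the physical extent,
    then resamples with a B-spline interpolator (nearest-neighbour would
    be the right choice for label masks instead).
    """
    old_spacing = image.GetSpacing()
    old_size = image.GetSize()
    new_size = [int(round(osz * osp / nsp))
                for osz, osp, nsp in zip(old_size, old_spacing, spacing)]
    return sitk.Resample(
        image, new_size, sitk.Transform(),   # identity transform
        sitk.sitkBSpline,
        image.GetOrigin(), spacing, image.GetDirection(),
        0, image.GetPixelID(),
    )

# img = sitk.ReadImage("scan.nii.gz")       # placeholder path
# iso = resample_isotropic(img)
```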

    From Head and Neck Tumour and Lymph Node Segmentation to Survival Prediction on PET/CT: An End-to-End Framework Featuring Uncertainty, Fairness, and Multi-Region Multi-Modal Radiomics

    Automatic detection and delineation of the primary tumour (GTVp) and lymph nodes (GTVn) in head and neck cancer from PET and CT, together with recurrence-free survival prediction, can be useful for diagnosis and patient risk stratification. We used data from nine different centres, with 524 and 359 cases used for training and testing, respectively. We utilised posterior sampling of the weight space in the proposed segmentation model to estimate the uncertainty for false-positive reduction. We explored the prognostic potential of radiomics features extracted from the predicted GTVp and GTVn in PET and CT for recurrence-free survival prediction and used SHAP analysis for explainability. We evaluated the bias of the models with respect to age, gender, chemotherapy, HPV status, and lesion size. We achieved an aggregated Dice score of 0.774 and 0.760 on the test set for GTVp and GTVn, respectively. We observed a per-image false-positive reduction of 19.5% and 7.14% using the uncertainty threshold for GTVp and GTVn, respectively. Radiomics features extracted from GTVn in PET and from both GTVp and GTVn in CT were the most prognostic, and our model achieves a C-index of 0.672 on the test set. Our framework incorporates uncertainty estimation, fairness, and explainability, demonstrating its potential for accurate detection and risk stratification.
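
    The uncertainty-based false-positive reduction can be approximated as follows. The sketch uses Monte Carlo dropout as a stand-in for posterior sampling of the weight space; the number of samples, the 0.5 foreground cut-off, and the standard-deviation threshold `tau` are illustrative, and the paper's exact sampling scheme may differ.

```python
import torch

@torch.no_grad()
def mc_dropout_mask(model, volume, n_samples=20, tau=0.2):
    """Segmentation with MC-dropout uncertainty filtering.

    Runs `n_samples` stochastic forward passes with dropout kept active,
    then keeps only voxels that are predicted foreground on average AND
    have low predictive standard deviation, approximating the
    posterior-sampling idea described in the abstract.
    """
    model.eval()
    for m in model.modules():                      # re-enable dropout only
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
            m.train()
    probs = torch.stack([torch.sigmoid(model(volume)) for _ in range(n_samples)])
    mean, std = probs.mean(dim=0), probs.std(dim=0)
    return (mean > 0.5) & (std < tau)              # confident foreground voxels
```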