17 research outputs found
Multi-Resolution 3D Convolutional Neural Networks for Automatic Coronary Centerline Extraction in Cardiac CT Angiography Scans
We propose a deep learning-based automatic coronary artery tree centerline
tracker (AuCoTrack) extending the vessel tracker by Wolterink et al.
(arXiv:1810.03143). A dual pathway Convolutional Neural Network (CNN) operating
on multi-scale 3D inputs predicts the direction of the coronary arteries as
well as the presence of a bifurcation. A similar multi-scale dual pathway 3D
CNN is trained to identify coronary artery endpoints for terminating the
tracking process. Two or more continuation directions are derived based on the
bifurcation detection. The iterative tracker detects the entire left and right
coronary artery trees based on only two ostium landmarks derived from a
model-based segmentation of the heart.
The 3D CNNs were trained on a proprietary dataset consisting of 43 cardiac CT angiography (CCTA) scans. An average sensitivity of 87.1% and a clinically relevant overlap of 89.1% were obtained relative to a refined manual segmentation. In addition, the MICCAI 2008 Coronary Artery Tracking Challenge (CAT08) training and test datasets were used to benchmark the algorithm and to assess its generalization. An average overlap of 93.6% and a clinically relevant overlap of 96.4% were obtained. The proposed method achieved better overlap scores than the current state-of-the-art automatic centerline extraction techniques on the CAT08 dataset, with a vessel detection rate of 95%.
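The abstract describes an iterative, CNN-driven tracking loop. A minimal sketch of such a loop is given below; the `sample_patch`, `direction_net` and `endpoint_net` callables are hypothetical stand-ins for the multi-scale dual-pathway CNNs and patch extraction, and the step size, thresholds and bifurcation handling are illustrative rather than the AuCoTrack configuration.

```python
import numpy as np

def track_tree(ostium, sample_patch, direction_net, endpoint_net,
               step_mm=0.5, max_steps=2000):
    """Trace one coronary tree starting from a single ostium landmark.

    sample_patch(pos)   -> multi-scale image patch around pos (placeholder)
    direction_net(patch) -> (list of unit direction vectors, p_bifurcation)
    endpoint_net(patch)  -> probability that pos is a vessel endpoint
    """
    frontier = [(np.asarray(ostium, dtype=float), None)]  # (position, previous direction)
    centerlines = []
    while frontier:
        pos, prev_dir = frontier.pop()
        branch = [pos.copy()]
        for _ in range(max_steps):
            patch = sample_patch(pos)
            if endpoint_net(patch) > 0.5:            # terminate this branch
                break
            directions, p_bif = direction_net(patch)
            if prev_dir is not None:                 # keep directions consistent with travel
                directions = [d for d in directions if np.dot(d, prev_dir) > 0]
            if not directions:
                break
            if p_bif > 0.5 and len(directions) > 1:
                # spawn one new branch per additional continuation direction
                for d in directions[1:]:
                    frontier.append((pos + step_mm * d, d))
            prev_dir = directions[0]
            pos = pos + step_mm * prev_dir
            branch.append(pos.copy())
        centerlines.append(np.stack(branch))
    return centerlines
```

Running the function once per ostium (left and right) would yield the two coronary trees as lists of centerline point arrays.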
HNT-AI: An Automatic Segmentation Framework for Head and Neck Primary Tumors and Lymph Nodes in FDG-PET/CT Images
Head and neck cancer is one of the most prevalent cancers in the world. Automatic delineation of primary tumors and lymph nodes is important for cancer diagnosis and treatment. In this paper, we develop a deep learning-based model for automatic tumor segmentation, HNT-AI, using PET/CT images provided by the MICCAI 2022 Head and Neck Tumor Segmentation Challenge (HECKTOR). We investigate the effect of residual blocks, squeeze-and-excitation normalization, and grid-attention gates on the performance of a 3D U-Net. We project the predicted masks onto the z-axis and apply k-means clustering to reduce the number of false-positive predictions. Our proposed HNT-AI segmentation framework achieves an aggregated Dice score of 0.774 and 0.759 for primary tumors and lymph nodes, respectively, on the unseen external test set. Qualitative analysis shows that the predicted segmentation masks tend to follow the high standardized uptake value (SUV) areas on the PET scans more closely than the ground-truth masks. The largest tumor volume, the largest lymph node volume, and the total number of lymph nodes derived from the segmentation proved to be potential biomarkers for recurrence-free survival, with a C-index of 0.627 on the test set.
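For illustration, a rough sketch of the k-means false-positive reduction step follows. The abstract only states that the predicted masks are projected along the z-axis and clustered with k-means; the specific criterion below (k=2 on connected-component z-centroids, keeping the cluster with the most foreground voxels) is an assumption, not the exact HNT-AI post-processing.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def reduce_false_positives(mask, n_clusters=2):
    """Drop predicted components that lie far from the main head-and-neck
    region along the z-axis. `mask` is a binary 3D array ordered (z, y, x)."""
    labeled, n = ndimage.label(mask)
    if n < 2:
        return mask
    centroids = ndimage.center_of_mass(mask, labeled, range(1, n + 1))
    z = np.array([c[0] for c in centroids]).reshape(-1, 1)
    if len(np.unique(z)) < n_clusters:
        return mask
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z)
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    # keep the cluster containing the most foreground voxels
    keep = max(range(n_clusters), key=lambda c: sizes[labels == c].sum())
    kept_ids = [i + 1 for i in range(n) if labels[i] == keep]
    return np.isin(labeled, kept_ids).astype(mask.dtype)
```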
Precision-medicine-toolbox: An open-source python package for facilitation of quantitative medical imaging and radiomics analysis
Medical image analysis plays a key role in precision medicine as it allows
clinicians to identify anatomical abnormalities, and it is routinely used in
clinical assessment. Data curation and pre-processing of medical images are
critical steps in the quantitative medical image analysis that can have a
significant impact on the resulting model performance. In this paper, we
introduce the precision-medicine-toolbox, which allows researchers to perform data
curation, image pre-processing, handcrafted radiomics extraction (via
PyRadiomics), and feature exploration tasks in Python. With this open-source
solution, we aim to address the data preparation and exploration problem,
bridge the gap between the currently existing packages, and improve the
reproducibility of quantitative medical imaging research.
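As a concrete illustration of the handcrafted radiomics extraction that the toolbox wraps, a minimal direct PyRadiomics call is sketched below; the file paths are placeholders and this is not the toolbox's own API.

```python
from radiomics import featureextractor  # pip install pyradiomics

# Start from default settings and restrict to two feature classes; a YAML
# parameter file could be passed instead to control bin width, resampling, etc.
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName('firstorder')
extractor.enableFeatureClassByName('glcm')

# 'image.nrrd' / 'mask.nrrd' are placeholder paths to a scan and its ROI mask.
features = extractor.execute('image.nrrd', 'mask.nrrd')
for name, value in features.items():
    if not name.startswith('diagnostics_'):
        print(name, value)
```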
UR-CarA-Net: A Cascaded Framework with Uncertainty Regularization for Automated Segmentation of Carotid Arteries on Black Blood MR Images
We present a fully automated method for carotid artery (CA) outer wall segmentation in black-blood MRI using partially annotated data and compare it to the state-of-the-art reference model. Our model was trained and tested on multicentric data from patients with a carotid plaque (106 and 23 patients, respectively) and was validated on different MR sequences (24 patients) as well as on data acquired with MRI systems of a different vendor (34 patients). A 3D nnU-Net was trained on pre-contrast T1w turbo spin echo (TSE) MR images. A CA centerline sliding-window approach was chosen to refine the nnU-Net segmentation using an additionally trained 2D U-Net, increasing agreement with manual annotations. To improve segmentation performance in areas with semantically and visually challenging voxels, Monte Carlo dropout was used. To increase generalizability, data were augmented with intensity transformations. Our method achieves state-of-the-art results, yielding a Dice similarity coefficient (DSC) of 91.7% (interquartile range (IQR) 3.3%) and a volumetric intraclass correlation (ICC) with ground truth of 0.90 on the development domain data, and a DSC of 91.1% (IQR 7.2%) and a volumetric ICC with ground truth of 0.83 on the external domain data, outperforming top-ranked methods for open-source CA segmentation. The uncertainty-based approach increases the interpretability of the proposed method by providing an uncertainty map together with the segmentation.
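A minimal PyTorch sketch of the Monte Carlo dropout step used to produce an uncertainty map alongside the segmentation is given below. It assumes a generic segmentation `model` containing dropout layers and a single foreground channel; the number of samples and the variance-based uncertainty measure are illustrative rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, image, n_samples=20):
    """Run repeated stochastic forward passes with dropout kept active and
    return the mean foreground probability plus a per-voxel uncertainty map
    (predictive variance). `image` is a (1, C, D, H, W) tensor."""
    model.eval()
    # re-enable dropout layers only, leaving normalization layers in eval mode
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()
    probs = []
    with torch.no_grad():
        for _ in range(n_samples):
            probs.append(torch.sigmoid(model(image)))
    probs = torch.stack(probs)            # (n_samples, 1, 1, D, H, W)
    return probs.mean(dim=0), probs.var(dim=0)
```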
FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on six guiding principles for trustworthy AI in healthcare, i.e. Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at the proof-of-concept stage to facilitate the future translation of medical AI towards clinical practice.
Transparency of deep neural networks for medical image analysis: A review of interpretability methods
Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for diagnosis and treatment decisions. Deep neural networks have shown the same or better performance than clinicians in many tasks owing to the rapid increase in the available data and computational power. In order to conform to the principles of trustworthy AI, it is essential that the AI system be transparent, robust, fair, and ensure accountability. Current deep neural solutions are referred to as black boxes due to a lack of understanding of the specifics concerning the decision-making process. Therefore, there is a need to ensure the interpretability of deep neural networks before they can be incorporated into the routine clinical workflow. In this narrative review, we used systematic keyword searches and domain expertise to identify nine different types of interpretability methods that have been used for understanding deep learning models for medical image analysis applications, based on the type of generated explanations and technical similarities. Furthermore, we report the progress made towards evaluating the explanations produced by various interpretability methods. Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions concerning the interpretability of deep neural networks for medical image analysis.
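As one concrete instance of the gradient-based explanation methods covered by such reviews, the short PyTorch sketch below computes a vanilla saliency map (the absolute gradient of the predicted class score with respect to the input image); it is illustrative only and not tied to any particular model discussed in the review.

```python
import torch

def saliency_map(model, image, target_class=None):
    """Vanilla gradient saliency: |d score / d input|, reduced over channels.
    `image` is a (1, C, H, W) tensor; `model` returns class logits."""
    model.eval()
    image = image.clone().requires_grad_(True)
    logits = model(image)
    if target_class is None:
        target_class = logits.argmax(dim=1)          # explain the predicted class
    else:
        target_class = torch.tensor([target_class], device=logits.device)
    score = logits.gather(1, target_class.view(-1, 1)).sum()
    score.backward()
    return image.grad.abs().amax(dim=1)              # (1, H, W) saliency map
```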