99 research outputs found
Recommended from our members
Large-scale Quality Control of Cardiac Imaging in Population Studies: Application to UK Biobank
In large population studies such as the UK Biobank (UKBB), quality control of the acquired images by visual assessment is infeasible. In this paper, we apply a recently developed fully-automated quality control pipeline for cardiac MR (CMR) images to the first 19,265 short-axis (SA) cine stacks from the UKBB. We present the results for the three estimated quality metrics (heart coverage, inter-slice motion and image contrast in the cardiac region) as well as their potential associations with factors including acquisition details and subject-related phenotypes. Up to 14.2% of the analysed SA stacks had sub-optimal coverage (i.e. missing basal and/or apical slices); however, most of these were limited to the first year of acquisition. Up to 16% of the stacks were affected by noticeable inter-slice motion (i.e. average inter-slice misalignment greater than 3.4 mm). Inter-slice motion was positively correlated with weight and body surface area. Only 2.1% of the stacks had an average end-diastolic cardiac image contrast below 30% of the dynamic range. These findings will be highly valuable both for the scientists involved in UKBB CMR acquisition and for those who use the dataset for research purposes.
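The contrast check described above can be sketched in a few lines. As a caveat: the abstract does not give the exact formula, so this sketch assumes contrast is measured as the intensity range inside an estimated cardiac mask, expressed as a fraction of the slice's full dynamic range; the mask and function names are illustrative, not the pipeline's actual implementation.

```python
import numpy as np

def cardiac_contrast(image, cardiac_mask):
    """Contrast in the cardiac region as a fraction of the slice's dynamic range.

    Hypothetical definition: (max - min intensity inside the mask) divided by
    the full intensity range of the slice.
    """
    region = image[cardiac_mask]
    dynamic_range = image.max() - image.min()
    if dynamic_range == 0:
        return 0.0
    return float((region.max() - region.min()) / dynamic_range)

def is_low_contrast(slice_contrasts, threshold=0.30):
    # Flag a stack whose average end-diastolic contrast falls below 30%.
    return float(np.mean(slice_contrasts)) < threshold
```

A stack flagged this way would be the kind counted in the 2.1% figure above.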
Self-Supervised Learning for Cardiac MR Image Segmentation by Anatomical Position Prediction
In recent years, convolutional neural networks have transformed the field of medical image analysis due to their capacity to learn discriminative image features for a variety of classification and regression tasks. However, successfully learning these features requires a large amount of manually annotated data, which is expensive to acquire and limited by the available resources of expert image analysts. Therefore, unsupervised, weakly-supervised and self-supervised feature learning techniques, which aim to utilise the vast amount of available data while avoiding or substantially reducing the effort of manual annotation, have received a lot of attention. In this paper, we propose a novel way of training a cardiac MR image segmentation network, in which features are learnt in a self-supervised manner by predicting anatomical positions. The anatomical positions serve as a supervisory signal and do not require extra manual annotation. We demonstrate that this seemingly simple task provides a strong signal for feature learning, and that with self-supervised learning we achieve a segmentation accuracy that is better than or comparable to a U-net trained from scratch, especially in a small-data setting. When only five annotated subjects are available, the proposed method improves the mean Dice metric from 0.811 to 0.852 for short-axis image segmentation, compared to the baseline U-net.
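As an illustration of the idea, a pretext task of this kind can be set up with no manual labels at all: each patch is tagged with the position it was sampled from, and that position becomes the classification target for pre-training. The grid formulation below is a simplified stand-in for the paper's anatomical position prediction, not its actual definition.

```python
import numpy as np

def position_pretext_pairs(image, patch_size, grid=(3, 3)):
    """Cut an image into a grid and label each patch with its grid position.

    Illustrative pretext task only: the patch's (row, col) cell index is a
    free supervisory signal -- no manual annotation is needed. A network
    trained to classify these labels learns spatially discriminative features.
    """
    h, w = image.shape
    pairs = []
    for gi in range(grid[0]):
        for gj in range(grid[1]):
            # Centre of the (gi, gj) cell, then a patch around it.
            cy = int((gi + 0.5) * h / grid[0])
            cx = int((gj + 0.5) * w / grid[1])
            y0, x0 = cy - patch_size // 2, cx - patch_size // 2
            patch = image[y0:y0 + patch_size, x0:x0 + patch_size]
            label = gi * grid[1] + gj  # class id in [0, grid[0]*grid[1])
            pairs.append((patch, label))
    return pairs
```

The pre-trained encoder would then be fine-tuned on the few annotated subjects for the actual segmentation task.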
Artificial intelligence education for radiographers, an evaluation of a UK postgraduate educational intervention using participatory action research: a pilot study.
Artificial intelligence (AI)-enabled applications are increasingly being used in providing healthcare services, such as medical imaging support. Sufficient and appropriate education for medical imaging professionals is required for successful AI adoption. Although there are currently AI training programmes for radiologists, formal AI education for radiographers is lacking. Therefore, this study aimed to evaluate and discuss a postgraduate-level module on AI developed in the UK for radiographers. A participatory action research methodology was applied, with participants recruited from the first cohort of students enrolled in this module and from faculty members. Data were collected using online, semi-structured, individual interviews and focus group discussions. Textual data were processed using data-driven thematic analysis. Seven students and six faculty members participated in this evaluation. The results can be summarised in four themes: a. participants' professional and educational backgrounds influenced their experiences; b. participants found the learning experience meaningful with respect to module design, organisation and pedagogical approaches; c. some module design and delivery aspects were identified as barriers to learning; and d. participants suggested what the ideal AI course could look like based on their experiences. The findings of our work show that such an AI module can assist educators/academics in developing similar AI education provisions for radiographers and other medical imaging and radiation sciences professionals. A blended learning delivery format, combined with customisable and contextualised content and an interprofessional faculty approach, is recommended for future similar courses.
3D High-Resolution Cardiac Segmentation Reconstruction From 2D Views Using Conditional Variational Autoencoders
Accurate segmentation of heart structures imaged by cardiac MR is key for the quantitative analysis of pathology. High-resolution 3D MR sequences enable whole-heart structural imaging but are time-consuming and expensive to acquire, and they often require long breath holds that are not suitable for patients. Consequently, multiplanar breath-hold 2D cine sequences are standard practice but are disadvantaged by a lack of whole-heart coverage and low through-plane resolution. To address this, we propose a conditional variational autoencoder architecture able to learn a generative model of 3D high-resolution left ventricular (LV) segmentations which is conditioned on three 2D LV segmentations of one short-axis and two long-axis images. By employing only these three 2D segmentations, our model can efficiently reconstruct the 3D high-resolution LV segmentation of a subject. When evaluated on 400 unseen healthy volunteers, our model yielded an average Dice score of 87.92 ± 0.15 and outperformed competing architectures (TL-net, Dice score = 82.60 ± 0.23, p = 2.2 × 10⁻¹⁶).
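A minimal sketch of the conditioning pathway may help: the three 2D segmentations are encoded to a Gaussian posterior, a latent code is sampled, and a decoder emits a 3D volume. The linear maps below are toy stand-ins for the deep convolutional encoder and decoder of the actual model; all sizes, weights and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; the real model operates on full-resolution segmentation images.
VIEW_DIM, LATENT_DIM, VOL_SHAPE = 64, 8, (4, 4, 4)

# Hypothetical linear "encoder"/"decoder" standing in for conv networks.
W_enc = 0.05 * rng.standard_normal((3 * VIEW_DIM, 2 * LATENT_DIM))
W_dec = rng.standard_normal((LATENT_DIM, int(np.prod(VOL_SHAPE))))

def reconstruct_3d(sax_view, lax_2ch_view, lax_4ch_view):
    """Map three flattened 2D segmentations to a binary 3D segmentation volume.

    Mirrors the data flow only: the concatenated views parameterise a Gaussian
    posterior, a latent is sampled via the reparameterisation trick, and the
    decoder's thresholded output is the reconstructed volume.
    """
    views = np.concatenate([sax_view, lax_2ch_view, lax_4ch_view])
    stats = views @ W_enc
    mu, log_var = stats[:LATENT_DIM], stats[LATENT_DIM:]
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(LATENT_DIM)
    logits = z @ W_dec
    return (logits > 0).astype(np.uint8).reshape(VOL_SHAPE)
```

In the trained model the posterior is fitted so that the decoded volume matches the subject's high-resolution segmentation; here the weights are random and only the shapes are meaningful.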
Explainable Anatomical Shape Analysis through Deep Hierarchical Generative Models
Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer's disease when tested on ADNI data. More importantly, it enabled the visualisation in three dimensions of both global and regional anatomical features which better discriminate between the conditions under examination. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.
Learning-based quality control for cardiac MR images
The effectiveness of a cardiovascular magnetic resonance (CMR) scan depends on the ability of the operator to correctly tune the acquisition parameters to the subject being scanned and on the potential occurrence of imaging artifacts, such as cardiac and respiratory motion. In clinical practice, a quality control step is performed by visual assessment of the acquired images; however, this procedure is strongly operator-dependent, cumbersome, and sometimes incompatible with the time constraints of clinical settings and large-scale studies. We propose a fast, fully automated, learning-based quality control pipeline for CMR images, specifically for short-axis image stacks. Our pipeline performs three important quality checks: 1) heart coverage estimation; 2) inter-slice motion detection; 3) image contrast estimation in the cardiac region. The pipeline uses a hybrid decision forest method—integrating both regression and structured classification models—to extract landmarks and probabilistic segmentation maps from both long- and short-axis images as a basis to perform the quality checks. The technique was tested on up to 3000 cases from the UK Biobank and on 100 cases from the UK Digital Heart Project and validated against manual annotations and visual inspections performed by expert interpreters. The results show the capability of the proposed pipeline to correctly detect incomplete or corrupted scans (e.g., on UK Biobank, sensitivity and specificity of, respectively, 88% and 99% for heart coverage estimation and 85% and 95% for motion detection), allowing their exclusion from the analyzed dataset or the triggering of a new acquisition.
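The inter-slice motion check (2) can be sketched as follows, assuming a reference landmark (e.g. an estimated LV centre) has already been extracted for each slice in millimetres. The landmark definition and function names are illustrative; the 3.4 mm threshold is the one reported for "noticeable" motion in the companion UK Biobank study above.

```python
import numpy as np

def interslice_motion(landmarks):
    """Average in-plane misalignment (mm) between adjacent short-axis slices.

    `landmarks` is an (n_slices, 2) array giving a reference point per slice,
    in mm. The mean displacement between consecutive slices is the motion
    score; the published pipeline's exact landmark choice may differ.
    """
    shifts = np.diff(np.asarray(landmarks, dtype=float), axis=0)
    return float(np.mean(np.linalg.norm(shifts, axis=1)))

def has_motion_artifact(landmarks, threshold_mm=3.4):
    # Flag a stack whose mean inter-slice misalignment exceeds the threshold.
    return interslice_motion(landmarks) > threshold_mm
```

A flagged stack could then be excluded from analysis or queued for re-acquisition, as the pipeline intends.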
Expression of the α7 nicotinic acetylcholine receptor in human lung cells
BACKGROUND: We and others have shown that one of the mechanisms of growth regulation of small cell lung cancer cell lines and cultured pulmonary neuroendocrine cells is the binding of agonists to the α7 neuronal nicotinic acetylcholine receptor. In addition, we have shown that the nicotine-derived carcinogenic nitrosamine, 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), is a high-affinity agonist for the α7 nicotinic acetylcholine receptor. In the present study, our goal was to determine the extent of α7 mRNA and protein expression in the human lung. METHODS: Experiments were done using reverse transcription polymerase chain reaction (RT-PCR), a nuclease protection assay and western blotting using membrane proteins. RESULTS: We detected mRNA for the neuronal nicotinic acetylcholine receptor α7 in seven small cell lung cancer (SCLC) cell lines, two pulmonary adenocarcinoma cell lines, cultured normal human small airway epithelial cells (SAEC), one carcinoid cell line, three squamous cell lines, and tissue samples from nine patients with various types of lung cancer. A nuclease protection assay showed prominent levels of α7 in the NCI-H82 SCLC cell line while α7 was not detected in SAEC, suggesting that α7 mRNA levels may be higher in SCLC compared to normal cells. Using a specific antibody to the α7 nicotinic receptor, protein expression of α7 was determined. All SCLC cell lines except NCI-H187 expressed protein for the α7 receptor. Among the non-SCLC and normal cells that express the α7 nAChR mRNA, expression of the α7 nicotinic receptor protein was shown only in SAEC, A549 and NCI-H226. When the NCI-H69 SCLC cell line was exposed to 100 pM NNK, protein expression of the α7 receptor was increased at 60 and 150 min. CONCLUSION: mRNA for the neuronal nicotinic acetylcholine receptor α7 seems to be ubiquitously expressed in all human lung cancer cell lines tested (except for NCI-H441) as well as in normal lung cells. The α7 nicotinic receptor protein is expressed in fewer cell lines, and the tobacco carcinogen NNK increases α7 nicotinic receptor protein levels.
Recommended from our members
MOOD 2020: A Public Benchmark for Out-of-Distribution Detection and Localization on Medical Images
Detecting Out-of-Distribution (OoD) data is one of the greatest challenges in the safe and robust deployment of machine learning algorithms in medicine. When the algorithms encounter cases that deviate from the distribution of the training data, they often produce incorrect and over-confident predictions. OoD detection algorithms aim to catch erroneous predictions in advance by analysing the data distribution and detecting potential instances of failure. Moreover, flagging OoD cases may support human readers in identifying incidental findings. Due to the increased interest in OoD algorithms, benchmarks for different domains have recently been established. In the medical imaging domain, for which reliable predictions are often essential, an open benchmark has been missing. We introduce the Medical-Out-Of-Distribution-Analysis-Challenge (MOOD) as an open, fair, and unbiased benchmark for OoD methods in the medical imaging domain. The analysis of the submitted algorithms shows that performance has a strong positive correlation with the perceived difficulty, and that all algorithms show a high variance across different anomalies, making it hard, as yet, to recommend them for clinical practice. We also see a strong correlation between challenge ranking and performance on a simple toy test set, indicating that this might be a valuable addition as a proxy dataset during anomaly detection algorithm development.
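To make the task concrete, here is a deliberately simple OoD scorer of the kind such a benchmark evaluates: fit a low-dimensional subspace to in-distribution data and score each sample by its reconstruction error, so samples far from the training distribution get high anomaly scores. This PCA-style sketch is illustrative only and far weaker than the deep methods submitted to the challenge.

```python
import numpy as np

def fit_scorer(train, n_components=2):
    """Fit a PCA subspace to in-distribution samples (rows of `train`).

    Returns a scoring function: the distance between a sample and its
    projection onto the fitted subspace. In-distribution samples reconstruct
    well (low score); OoD samples reconstruct poorly (high score).
    """
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    basis = vt[:n_components]  # principal directions

    def score(x):
        r = x - mean
        recon = (r @ basis.T) @ basis  # projection onto the subspace
        return float(np.linalg.norm(r - recon))

    return score
```

Thresholding the score then yields the binary OoD flag that sample-level challenge metrics are computed from.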