A systematic review of machine learning models for predicting outcomes of stroke with structured data
Background and purpose: Machine learning (ML) has attracted much attention with the hope that it could make use of large, routinely collected datasets and deliver accurate personalised prognoses. The aim of this systematic review is to identify and critically appraise the reporting and development of ML models for predicting outcomes after stroke.
Methods: We searched PubMed and Web of Science from 1990 to March 2019, using previously published search filters for stroke, ML, and prediction models. We focused on structured clinical data, excluding image and text analysis. This review was registered with PROSPERO (CRD42019127154).
Results: Eighteen studies were eligible for inclusion. Most studies reported fewer than half of the terms in the reporting quality checklist. The most frequently predicted stroke outcomes were mortality (7 studies) and functional outcome (5 studies). The most commonly used ML methods were random forests (9 studies), support vector machines (8 studies), decision trees (6 studies), and neural networks (6 studies). The median sample size was 475 (range 70-3184), with a median of 22 predictors (range 4-152) considered. All studies evaluated discrimination, with thirteen using the area under the ROC curve, whilst calibration was assessed in three. Two studies performed external validation. None described the final model sufficiently well to reproduce it.
Conclusions: The use of ML for predicting stroke outcomes is increasing. However, few studies met basic reporting standards for clinical prediction tools and none made their models available in a way that could be used or evaluated. Major improvements in ML study conduct and reporting are needed before it can meaningfully be considered for practice.
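The two evaluation concepts the review contrasts, discrimination (assessed in all studies, typically via AUC-ROC) and calibration (assessed in only three), can be illustrated with a minimal sketch. Everything below is synthetic and generic: the data are randomly generated to match the review's median sample size (475) and predictor count (22), and the model is an ordinary logistic regression, not any of the reviewed studies' models.

```python
# Illustrative only: synthetic data and a generic model, not any reviewed study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
n, p = 475, 22                        # median sample size and predictor count from the review
X = rng.normal(size=(n, p))
logit = X[:, :3].sum(axis=1)          # outcome depends on a few of the predictors
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]

auc = roc_auc_score(y_te, prob)                                  # discrimination
frac_pos, mean_pred = calibration_curve(y_te, prob, n_bins=5)    # calibration
print(f"AUC-ROC: {auc:.2f}")
```

Discrimination asks whether patients with the outcome receive higher predicted risks than those without; calibration asks whether a predicted risk of, say, 0.3 corresponds to roughly 30% observed events, which is why reporting only the AUC leaves a model's clinical usefulness unverified.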
Towards a co-crediting system for carbon and biodiversity
Funder: Royal Botanical Gardens, Kew; doi: http://dx.doi.org/10.13039/501100001296. Funder: SPUN. Funder: NWO Gravity grant MICROP.
Societal Impact Statement: Humankind is facing both climate and biodiversity crises. This article proposes the foundations of a scheme that offers tradable credits for combined aboveground and soil carbon and biodiversity. Multidiversity, as estimated based on high-throughput molecular identification of soil meiofauna, fungi, bacteria, protists, plants and other organisms shedding DNA into soil, complemented by acoustic and video analyses of aboveground macrobiota, offers a cost-effective method that captures much of the terrestrial biodiversity. Such a voluntary crediting system would increase the quality of carbon projects and contribute funding for delivering the Kunming-Montreal Global Biodiversity Framework.
Summary: Carbon crediting and land offsets for biodiversity protection have been developed to tackle the challenges of increasing greenhouse gas emissions and the loss of global biodiversity. Unfortunately, these two mechanisms are not optimal when considered separately. Focusing solely on carbon capture, the primary goal of most carbon-focused crediting and offsetting commitments, often results in the establishment of non-native, fast-growing monocultures that negatively affect biodiversity and soil-related ecosystem services. Soil contributes a vast proportion of global biodiversity and contains traces of aboveground organisms. Here, we outline a carbon and biodiversity co-crediting scheme based on multi-kingdom molecular and carbon analyses of soil samples, along with remote sensing estimation of aboveground carbon as well as video- and acoustic-analysis-based monitoring of aboveground macroorganisms. Combined, such a co-crediting scheme could help halt biodiversity loss by incentivising industry and governments to account for biodiversity in carbon sequestration projects more rigorously, explicitly and equitably than they currently do. In most cases, this would help prioritise protection before restoration and help promote more socially and environmentally sustainable land stewardship towards a 'nature positive' future.
Deep learning to automate the labelling of head MRI datasets for computer vision applications
OBJECTIVES: The purpose of this study was to build a deep learning model to derive labels from neuroradiology reports and assign these to the corresponding examinations, overcoming a bottleneck to computer vision model development.
METHODS: Reference-standard labels were generated by a team of neuroradiologists for model training and evaluation. Three thousand examinations were labelled for the presence or absence of any abnormality by manually scrutinising the corresponding radiology reports ('reference-standard report labels'); a subset of these examinations (n = 250) were assigned 'reference-standard image labels' by interrogating the actual images. Separately, 2000 reports were labelled for the presence or absence of 7 specialised categories of abnormality (acute stroke, mass, atrophy, vascular abnormality, small vessel disease, white matter inflammation, encephalomalacia), with a subset of these examinations (n = 700) also assigned reference-standard image labels. A deep learning model was trained using labelled reports and validated in two ways: comparing predicted labels to (i) reference-standard report labels and (ii) reference-standard image labels. The area under the receiver operating characteristic curve (AUC-ROC) was used to quantify model performance. Accuracy, sensitivity, specificity, and F1 score were also calculated.
RESULTS: Accurate classification (AUC-ROC > 0.95) was achieved for all categories when tested against reference-standard report labels. A drop in performance (ΔAUC-ROC > 0.02) was seen for three categories (atrophy, encephalomalacia, vascular) when tested against reference-standard image labels, highlighting discrepancies in the original reports. Once trained, the model assigned labels to 121,556 examinations in under 30 min.
CONCLUSIONS: Our model accurately classifies head MRI examinations, enabling automated dataset labelling for downstream computer vision applications.
KEY POINTS:
• Deep learning is poised to revolutionise image recognition tasks in radiology; however, a barrier to clinical adoption is the difficulty of obtaining large labelled datasets for model training.
• We demonstrate a deep learning model which can derive labels from neuroradiology reports and assign these to the corresponding examinations at scale, facilitating the development of downstream computer vision models.
• We rigorously tested our model by comparing labels predicted on the basis of neuroradiology reports with two sets of reference-standard labels: (1) labels derived by manually scrutinising each radiology report and (2) labels derived by interrogating the actual images.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00330-021-08132-0.
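The validation step described above, comparing model-predicted labels against reference-standard labels and reporting accuracy, sensitivity, specificity, and F1, reduces to standard confusion-matrix arithmetic. A minimal sketch follows; the two label vectors are invented for illustration and do not come from the study's data.

```python
# Hypothetical label vectors for illustration; the metric definitions match those
# reported in the study (accuracy, sensitivity, specificity, F1).
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

reference = [1, 0, 1, 1, 0, 0, 1, 0]   # reference-standard labels (made up)
predicted = [1, 0, 1, 0, 0, 1, 1, 0]   # model-assigned labels (made up)

tn, fp, fn, tp = confusion_matrix(reference, predicted).ravel()
accuracy = accuracy_score(reference, predicted)
sensitivity = tp / (tp + fn)           # recall on the abnormal class
specificity = tn / (tn + fp)           # recall on the normal class
f1 = f1_score(reference, predicted)
print(accuracy, sensitivity, specificity, f1)
```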
Automated Labelling using an Attention model for Radiology reports of MRI scans (ALARM)
Labelling large datasets for training high-capacity neural networks is a major obstacle to the development of deep learning-based medical imaging applications. Here we present a transformer-based network for magnetic resonance imaging (MRI) radiology report classification which automates this task by assigning image labels on the basis of free-text expert radiology reports. Our model's performance is comparable to that of an expert radiologist, and better than that of an expert physician, demonstrating the feasibility of this approach. We make code available online for researchers to label their own MRI datasets for medical imaging applications.
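The core task here, mapping free-text radiology reports to image labels, is text classification. ALARM itself uses an attention-based transformer; as a deliberately simpler stand-in to show the shape of the problem, the sketch below trains a TF-IDF plus logistic regression classifier on four invented report snippets. The snippets and labels are hypothetical, and this baseline is nothing like the paper's model in capacity.

```python
# Stand-in sketch: ALARM is transformer-based, but the report-to-label task can be
# shown with a much simpler TF-IDF + logistic regression pipeline on toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [  # hypothetical free-text report snippets, not real clinical data
    "no acute intracranial abnormality",
    "normal study for age",
    "large right MCA territory infarct",
    "mass lesion with surrounding oedema",
]
labels = [0, 0, 1, 1]  # 0 = normal, 1 = abnormal

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(reports, labels)
print(clf.predict(["mass lesion with surrounding oedema"]))
```

A transformer earns its keep over such a baseline on real reports, where negation, hedging, and long-range context ("no evidence of the previously seen infarct") defeat bag-of-words features.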