25 research outputs found
High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.
The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
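The Youden Index used above to set the binary cutoff is J = sensitivity + specificity - 1, maximized over candidate thresholds on the classifier's output scores. A minimal sketch, with invented toy scores and labels rather than the study's data:

```python
# Minimal sketch: choosing a binary cutoff with the Youden Index
# (J = sensitivity + specificity - 1). Scores/labels below are toy values,
# not from the study.

def youden_cutoff(scores, labels):
    """Return (threshold, J) maximizing sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy example: scores for "frontal" vs. labels (1 = frontal, 0 = lateral)
scores = [0.95, 0.90, 0.80, 0.30, 0.20, 0.10]
labels = [1, 1, 1, 0, 0, 0]
print(youden_cutoff(scores, labels))  # threshold that separates the classes
```

In practice the threshold search would run over the ROC operating points of the validation set rather than a handful of scores, but the maximization is the same.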
SDOH-NLI: a Dataset for Inferring Social Determinants of Health from Clinical Notes
Social and behavioral determinants of health (SDOH) play a significant role
in shaping health outcomes, and extracting these determinants from clinical
notes is a first step to help healthcare providers systematically identify
opportunities to provide appropriate care and address disparities. Progress on
using NLP methods for this task has been hindered by the lack of high-quality
publicly available labeled data, largely due to the privacy and regulatory
constraints on the use of real patients' information. This paper introduces a
new dataset, SDOH-NLI, that is based on publicly available notes and which we
release publicly. We formulate SDOH extraction as a natural language inference
(NLI) task, and provide binary textual entailment labels obtained from human
raters for a cross product of a set of social history snippets as premises and
SDOH factors as hypotheses. Our dataset differs from standard NLI benchmarks in
that our premises and hypotheses are obtained independently. We evaluate both
"off-the-shelf" entailment models and models fine-tuned on our data, and
highlight the ways in which our dataset appears more challenging than commonly
used NLI datasets.
Comment: Findings of EMNLP 202
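The cross-product construction described above pairs every social-history snippet (premise) with every SDOH factor (hypothesis) to yield one binary entailment item per pair. A small illustration, with invented snippets and factors rather than actual SDOH-NLI entries:

```python
# Illustrative sketch of the cross-product NLI formulation: each
# social-history snippet (premise) is paired with each SDOH factor
# (hypothesis) to form one binary entailment item. Texts below are
# invented for illustration, not taken from SDOH-NLI.

from itertools import product

premises = [
    "Patient lives alone and reports difficulty affording groceries.",
    "Former smoker, quit 10 years ago; drinks socially.",
]
hypotheses = [
    "The patient has food insecurity.",
    "The patient currently smokes.",
]

# Each pair becomes one item for human raters (or an entailment model).
pairs = [{"premise": p, "hypothesis": h} for p, h in product(premises, hypotheses)]
print(len(pairs))  # 2 premises x 2 hypotheses = 4 items
```

Because premises and hypotheses are drawn independently, most pairs are non-entailments, which is part of what makes the dataset harder than standard NLI benchmarks whose pairs are written together.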
Large Language Models Encode Clinical Knowledge
Large language models (LLMs) have demonstrated impressive capabilities in
natural language understanding and generation, but the quality bar for medical
and clinical applications is high. Today, attempts to assess models' clinical
knowledge typically rely on automated evaluations on limited benchmarks. There
is no standard to evaluate model predictions and reasoning across a breadth of
tasks. To address this, we present MultiMedQA, a benchmark combining six
existing open question answering datasets spanning professional medical exams,
research, and consumer queries; and HealthSearchQA, a new free-response dataset
of medical questions searched online. We propose a framework for human
evaluation of model answers along multiple axes including factuality,
precision, possible harm, and bias. In addition, we evaluate PaLM (a
540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on
MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves
state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA,
MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US
Medical License Exam questions), surpassing prior state-of-the-art by over 17%.
However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve
this we introduce instruction prompt tuning, a parameter-efficient approach for
aligning LLMs to new domains using a few exemplars. The resulting model,
Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show
that comprehension, recall of knowledge, and medical reasoning improve with
model scale and instruction prompt tuning, suggesting the potential utility of
LLMs in medicine. Our human evaluations reveal important limitations of today's
models, reinforcing the importance of both evaluation frameworks and method
development in creating safe, helpful LLMs for clinical applications.
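One of the "combination of prompting strategies" the abstract alludes to is commonly self-consistency: sample several chain-of-thought answers and take a majority vote on the final choice. A hedged sketch of the voting step, with `sample_answer` as a deterministic stand-in for an actual LLM call:

```python
# Sketch of self-consistency voting over multiple-choice answers.
# `sample_answer` is a placeholder that cycles through a fixed pool to
# stand in for stochastic chain-of-thought samples from a real model.

from collections import Counter

def sample_answer(question, i):
    # A real implementation would sample a chain-of-thought completion
    # from the model and parse out the final option letter.
    pool = ["B", "C", "B", "B"]
    return pool[i % len(pool)]

def self_consistency(question, n_samples=11):
    votes = Counter(sample_answer(question, i) for i in range(n_samples))
    answer, _ = votes.most_common(1)[0]
    return answer

print(self_consistency("Which option is correct?"))  # majority vote: "B"
```

The number of samples trades compute for stability of the vote; the per-sample prompt (few-shot exemplars, chain-of-thought instructions) is where the other strategies enter.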
Scalable and accurate deep learning for electronic health records
Predictive modeling with electronic health record (EHR) data is anticipated
to drive personalized medicine and improve healthcare quality. Constructing
predictive statistical models typically requires extraction of curated
predictor variables from normalized EHR data, a labor-intensive process that
discards the vast majority of information in each patient's record. We propose
a representation of patients' entire, raw EHR records based on the Fast
Healthcare Interoperability Resources (FHIR) format. We demonstrate that deep
learning methods using this representation are capable of accurately predicting
multiple medical events from multiple centers without site-specific data
harmonization. We validated our approach using de-identified EHR data from two
U.S. academic medical centers with 216,221 adult patients hospitalized for at
least 24 hours. In the sequential format we propose, this volume of EHR data
unrolled into a total of 46,864,534,945 data points, including clinical notes.
Deep learning models achieved high accuracy for tasks such as predicting
in-hospital mortality (AUROC across sites 0.93-0.94), 30-day unplanned
readmission (AUROC 0.75-0.76), prolonged length of stay (AUROC 0.85-0.86), and
all of a patient's final discharge diagnoses (frequency-weighted AUROC 0.90).
These models outperformed state-of-the-art traditional predictive models in all
cases. We also present a case-study of a neural-network attribution system,
which illustrates how clinicians can gain some transparency into the
predictions. We believe that this approach can be used to create accurate and
scalable predictions for a variety of clinical scenarios, complete with
explanations that directly highlight evidence in the patient's chart.
Comment: Published version from
https://www.nature.com/articles/s41746-018-0029-
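The sequential representation described above can be illustrated, in spirit rather than as the authors' actual pipeline, as flattening raw FHIR-style resources into one time-ordered event stream per patient. Field names below are simplified stand-ins, not the real FHIR schema:

```python
# Toy illustration of flattening FHIR-style resources into a single
# time-ordered event sequence per patient, in the spirit of the
# representation described above. Field names are simplified, not the
# actual FHIR schema or the paper's pipeline.

from datetime import datetime

resources = [
    {"resourceType": "Observation", "patient": "p1",
     "time": "2015-03-02T08:00:00", "code": "heart-rate", "value": 92},
    {"resourceType": "MedicationRequest", "patient": "p1",
     "time": "2015-03-01T21:30:00", "code": "ceftriaxone"},
    {"resourceType": "Encounter", "patient": "p1",
     "time": "2015-03-01T20:00:00", "code": "inpatient-admission"},
]

def to_sequence(resources, patient_id):
    """Sort one patient's resources by timestamp into an event sequence."""
    events = [r for r in resources if r["patient"] == patient_id]
    events.sort(key=lambda r: datetime.fromisoformat(r["time"]))
    return [(r["time"], r["resourceType"], r["code"]) for r in events]

for event in to_sequence(resources, "p1"):
    print(event)
```

Keeping every resource in the sequence, rather than hand-curating predictor variables, is what lets the downstream models use the full record without site-specific harmonization.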