Creating Fair Models of Atherosclerotic Cardiovascular Disease Risk
Guidelines for the management of atherosclerotic cardiovascular disease
(ASCVD) recommend the use of risk stratification models to identify patients
most likely to benefit from cholesterol-lowering and other therapies. These
models have differential performance across race and gender groups with
inconsistent behavior across studies, potentially resulting in an inequitable
distribution of beneficial therapy. In this work, we leverage adversarial
learning and a large observational cohort extracted from electronic health
records (EHRs) to develop a "fair" ASCVD risk prediction model with reduced
variability in error rates across groups. We empirically demonstrate that our
approach is capable of aligning the distribution of risk predictions
conditioned on the outcome across several groups simultaneously for models
built from high-dimensional EHR data. We also discuss the relevance of these
results in the context of the empirical trade-off between fairness and model
performance.
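The adversarial setup described above can be sketched in miniature. The code below is an illustrative toy, not the paper's implementation: the synthetic cohort, the logistic predictor and adversary, and the penalty weight `lam` are all assumptions. The adversary tries to recover the group label from the risk score together with the outcome, so penalizing its success pushes the score distributions, conditioned on the outcome, toward alignment across groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic cohort standing in for EHR data:
# features X, binary outcome y, binary group g (confounds the features).
n, d = 2000, 5
g = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, d)) + 0.5 * g[:, None]
true_w = rng.normal(0, 1, d)
y = (X @ true_w + rng.normal(0, 1, n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)        # predictor weights
a = np.zeros(2)        # adversary weights on (score, outcome)
lam, lr = 1.0, 0.1     # fairness penalty weight and step size (assumed)

for _ in range(200):
    s = sigmoid(X @ w)                  # risk scores
    Z = np.column_stack([s, y])         # adversary sees score AND outcome
    p_g = sigmoid(Z @ a)                # adversary's guess of the group
    # Adversary ascends the log-likelihood of the group label.
    a += lr * Z.T @ (g - p_g) / n
    # Predictor descends outcome loss and ascends the adversary's loss.
    grad_pred = X.T @ (s - y) / n
    ds = s * (1 - s)
    grad_adv = X.T @ ((p_g - g) * a[0] * ds) / n
    w -= lr * (grad_pred - lam * grad_adv)

s = sigmoid(X @ w)
acc = float(((s > 0.5) == (y > 0.5)).mean())
```

In practice the predictor and adversary would both be deep networks trained on high-dimensional records, and the adversary would be evaluated against several group attributes simultaneously; the alternating updates above capture only the basic min-max structure.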
Adapting to Latent Subgroup Shifts via Concepts and Proxies
We address the problem of unsupervised domain adaptation when the source domain differs from the target domain because of a shift in the distribution of a latent subgroup. When this subgroup confounds all observed data, neither the covariate shift nor the label shift assumption applies. We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain, together with unlabeled data from the target. The identification results are constructive, immediately suggesting an algorithm for estimating the optimal target predictor. For continuous observations, where this algorithm becomes impractical, we propose a latent variable model tailored to the data-generating process at hand. We show how the approach degrades as the size of the shift grows, and verify that it outperforms both covariate shift and label shift adjustment.
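The failure of both standard assumptions can be seen in a small discrete example. The sketch below is a toy illustration, not the paper's identification procedure: all probability tables are invented. A latent subgroup U confounds both X and Y; the mechanisms p(x|u) and p(y|u,x) are shared across domains, and only the subgroup marginal p(u) shifts, which changes p(y|x) (breaking covariate shift) and p(x|y) (breaking label shift) at once.

```python
import numpy as np

# Hypothetical discrete model: latent subgroup U confounds X and Y.
p_u_src = np.array([0.7, 0.3])       # source subgroup mix (assumed)
p_u_tgt = np.array([0.2, 0.8])       # target subgroup mix (assumed)
p_x_given_u = np.array([[0.8, 0.2],   # p(x | u=0)
                        [0.3, 0.7]])  # p(x | u=1)
p_y_given_ux = np.array([[[0.9, 0.1], [0.6, 0.4]],   # u=0: p(y|x=0), p(y|x=1)
                         [[0.2, 0.8], [0.1, 0.9]]])  # u=1

def p_y_given_x(p_u):
    """Marginalize the latent subgroup: p(y|x) = sum_u p(y|u,x) p(u|x)."""
    joint_ux = p_x_given_u * p_u[:, None]     # p(u, x)
    p_u_given_x = joint_ux / joint_ux.sum(0)  # normalize over u, per x
    return np.einsum('ux,uxy->xy', p_u_given_x, p_y_given_ux)

src = p_y_given_x(p_u_src)  # source-optimal predictor's targets
tgt = p_y_given_x(p_u_tgt)  # target-optimal predictor's targets
```

Here the target marginal p(u) is handed to the function directly, as an oracle stand-in for what the paper recovers from concepts, proxies, and unlabeled target data; because `src` and `tgt` differ, a predictor fit on the source is no longer optimal on the target.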
Large Language Models Encode Clinical Knowledge
Large language models (LLMs) have demonstrated impressive capabilities in
natural language understanding and generation, but the quality bar for medical
and clinical applications is high. Today, attempts to assess models' clinical
knowledge typically rely on automated evaluations on limited benchmarks. There
is no standard to evaluate model predictions and reasoning across a breadth of
tasks. To address this, we present MultiMedQA, a benchmark combining six
existing open question answering datasets spanning professional medical exams,
research, and consumer queries; and HealthSearchQA, a new free-response dataset
of medical questions searched online. We propose a framework for human
evaluation of model answers along multiple axes including factuality,
precision, possible harm, and bias. In addition, we evaluate PaLM (a
540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on
MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves
state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA,
MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US
Medical License Exam questions), surpassing prior state-of-the-art by over 17%.
However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve
this we introduce instruction prompt tuning, a parameter-efficient approach for
aligning LLMs to new domains using a few exemplars. The resulting model,
Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show
that comprehension, recall of knowledge, and medical reasoning improve with
model scale and instruction prompt tuning, suggesting the potential utility of
LLMs in medicine. Our human evaluations reveal important limitations of today's
models, reinforcing the importance of both evaluation frameworks and method
development in creating safe, helpful LLMs for clinical applications.
Norbert Wiener's Cybernetic Delirium
This text addresses the condition of a world increasingly mediated by a kind of delirious cyber-hyphenation of reality, a tendency that seems to have begun with the thought of Norbert Wiener.