Length of Stay prediction for Hospital Management using Domain Adaptation
Inpatient length of stay (LoS) is an important managerial metric which, if
known in advance, can be used to efficiently plan admissions, allocate resources,
and improve care. Using historical patient data and machine learning
techniques, LoS prediction models can be developed. Ethically, these models
cannot replace unit heads in patient discharge decisions, but they are of utmost
necessity for hospital management systems in charge of effective hospital
planning. Therefore, the design of the prediction system should be adapted to
work in a true hospital setting. In this study, we predict hospital LoS early
at the granular level of admission units by applying domain adaptation to
leverage information learned from a potential source domain. Time-varying data
from 110,079 and 60,492 patient stays to 8 and 9 intensive care units were
respectively extracted from eICU-CRD and MIMIC-IV. These were fed into a
Long Short-Term Memory (LSTM) network and a fully connected network to train a
source-domain model, the weights of which were transferred either partially or
fully to initialize training in the target domains. Shapley Additive exPlanations (SHAP)
algorithms were used to study the effect of weight transfer on model
explainability. Compared to the benchmark, the proposed weight-transfer model
showed statistically significant gains in prediction accuracy (between 1% and
5%) as well as in computation time (savings of up to 2 hours) for some target
domains. The proposed method thus provides an adapted clinical decision support
system for hospital management that can ease the processes of ethics-committee
data access and reduce demands on computation infrastructure and time.
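As a rough illustration of the partial versus full weight transfer described above, the following Python sketch represents a model as a dictionary of layer weights; the layer names, shapes, and random initialisation are hypothetical stand-ins for the LSTM and fully connected networks, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model():
    # Toy stand-in for an LSTM + fully connected network: each layer is
    # just a randomly initialised weight matrix (names are illustrative).
    return {name: rng.standard_normal((4, 4)) for name in ("lstm", "fc1", "fc2")}

def transfer_weights(source, target, layers=None):
    # Copy weights from a trained source-domain model into a target model.
    # layers=None copies every layer (full transfer); a subset of layer
    # names gives partial transfer, leaving the rest at their random init.
    layers = source.keys() if layers is None else layers
    for name in layers:
        target[name] = source[name].copy()
    return target

source_model = init_model()  # pretend this was trained on the source domain
target_full = transfer_weights(source_model, init_model())
target_part = transfer_weights(source_model, init_model(), layers=["lstm"])
```

After the transfer, the target model is fine-tuned on the (smaller) target-domain data; partial transfer keeps only the recurrent representation and relearns the head.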
Diversity and Inclusion Metrics in Subset Selection
The ethical concept of fairness has recently been applied in machine learning
(ML) settings to describe a wide range of constraints and objectives. When
considering the relevance of ethical concepts to subset selection problems, the
concepts of diversity and inclusion are additionally applicable in order to
create outputs that account for social power and access differentials. We
introduce metrics based on these concepts, which can be applied together,
separately, and in tandem with additional fairness constraints. Results from
human subject experiments lend support to the proposed criteria. Social choice
methods can additionally be leveraged to aggregate and choose preferable sets,
and we detail how these may be applied.
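To give a concrete flavour of a diversity measure in subset selection, the toy sketch below scores a selected subset by how many of the groups present in the candidate pool it covers; this generic coverage measure is an assumption for illustration, not the exact metrics proposed in the paper.

```python
def diversity_score(subset_groups, pool_groups):
    # Fraction of the distinct groups in the candidate pool that appear
    # at least once in the selected subset (1.0 = every group represented).
    return len(set(subset_groups)) / len(set(pool_groups))

# Hypothetical pool with three groups; the selection covers two of them.
pool = ["a", "a", "a", "b", "b", "c"]
selected = ["a", "a", "c"]

score = diversity_score(selected, pool)
```

A metric like this can be combined with a relevance objective or additional fairness constraints when choosing among candidate subsets.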
Fair Wrapping for Black-box Predictions
We introduce a new family of techniques to post-process ("wrap") a black-box
classifier in order to reduce its bias. Our technique builds on the recent
analysis of improper loss functions whose optimization can correct any twist in
prediction, unfairness being treated as a twist. In the post-processing, we
learn a wrapper function which we define as an α-tree, which modifies
the prediction. We provide two generic boosting algorithms to learn
α-trees. We show that our modification has appealing properties in terms
of composition of α-trees, generalization, interpretability, and KL
divergence between modified and original predictions. We exemplify the use of
our technique in three fairness notions: conditional value-at-risk, equality of
opportunity, and statistical parity; and provide experiments on several readily
available datasets.
Comment: Published in Advances in Neural Information Processing Systems 35
(NeurIPS 2022).
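To illustrate the general idea of post-processing a black-box classifier for a fairness notion such as statistical parity, the sketch below wraps fixed scores with per-group decision thresholds chosen to equalise positive-prediction rates. This simple thresholding stand-in is only a minimal example of post-processing, not the wrapper construction or boosting algorithms of the paper.

```python
def fit_group_thresholds(scores, groups, target_rate):
    # For each group, pick the candidate threshold whose resulting
    # positive-prediction rate is closest to the shared target rate.
    thresholds = {}
    for g in set(groups):
        g_scores = sorted(s for s, gg in zip(scores, groups) if gg == g)
        n = len(g_scores)
        candidates = g_scores + [max(g_scores) + 1.0]  # +1.0 => predict no positives
        best = min(
            (abs(sum(s >= t for s in g_scores) / n - target_rate), t)
            for t in candidates
        )
        thresholds[g] = best[1]
    return thresholds

def wrapped_predict(score, group, thresholds):
    # The "wrapper": post-process a black-box score into a decision.
    return int(score >= thresholds[group])

# Hypothetical black-box scores for two groups "x" and "y".
scores = [0.9, 0.8, 0.3, 0.7, 0.4, 0.2]
groups = ["x", "x", "x", "y", "y", "y"]
thr = fit_group_thresholds(scores, groups, target_rate=1 / 3)
preds = [wrapped_predict(s, g, thr) for s, g in zip(scores, groups)]
```

Here the wrapper leaves the black-box scores untouched and only reshapes the final decisions, which is the defining trait of post-processing approaches.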