DropIn: Making Reservoir Computing Neural Networks Robust to Missing Inputs by Dropout
The paper presents a novel, principled approach to training recurrent neural
networks from the Reservoir Computing family that are robust to missing parts of
the input features at prediction time. By building on the ensembling properties
of Dropout regularization, we propose a methodology, named DropIn, which
efficiently trains a neural model as a committee machine of subnetworks, each
capable of predicting with a subset of the original input features. We discuss
the application of the DropIn methodology to Reservoir Computing models,
targeting applications characterized by input sources that are unreliable or
prone to disconnection, such as pervasive wireless sensor networks and ambient
intelligence. We provide an experimental assessment using real-world data from
such application domains, showing how the DropIn methodology maintains
predictive performance comparable to that of a model with no missing features,
even when 20%-50% of the inputs are unavailable.
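To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of DropIn-style training for an echo state network: input features are randomly zeroed while reservoir states are collected, so the trained readout effectively acts as a committee over feature subsets. All names, sizes, and hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch of DropIn-style training for an echo state network (ESN).
# NOT the authors' code; all names and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 8, 200          # input features, reservoir units (assumed sizes)
drop_prob = 0.3               # probability of dropping each input feature per step

# Random, fixed reservoir and input weights (standard ESN setup).
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9

def collect_states(U, dropin=False):
    """Run the reservoir over inputs U (T x n_in), optionally dropping input features."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        if dropin:
            mask = rng.random(n_in) >= drop_prob   # 1 = keep feature, 0 = drop it
            u = u * mask
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the mean of the current inputs (purely illustrative).
U = rng.normal(size=(1000, n_in))
y = U.mean(axis=1)

# Train the linear readout by ridge regression on states collected WITH input
# dropout, so the readout must cope with only subsets of features being present.
X = collect_states(U, dropin=True)
ridge = 1e-4
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

# At prediction time, simulate half of the input sources being disconnected.
U_test = U.copy()
U_test[:, : n_in // 2] = 0.0
y_hat = collect_states(U_test) @ W_out
print("MSE with half the inputs missing:", np.mean((y_hat - y) ** 2))
```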
Benchmarking Deep Learning Architectures for Predicting Readmission to the ICU and Describing Patients-at-Risk
Objective: To compare different deep learning architectures for predicting
the risk of readmission within 30 days of discharge from the intensive care
unit (ICU). The interpretability of attention-based models is leveraged to
describe patients at risk. Methods: Several deep learning architectures making
use of attention mechanisms, recurrent layers, neural ordinary differential
equations (ODEs), and medical concept embeddings with time-aware attention were
trained using publicly available electronic medical record data (MIMIC-III)
associated with 45,298 ICU stays for 33,150 patients. Bayesian inference was
used to compute the posterior over weights of an attention-based model. Odds
ratios associated with an increased risk of readmission were computed for
static variables. Diagnoses, procedures, medications, and vital signs were
ranked according to the associated risk of readmission. Results: A recurrent
neural network, with time dynamics of code embeddings computed by neural ODEs,
achieved the highest average precision of 0.331 (AUROC: 0.739, F1-Score:
0.372). Predictive accuracy was comparable across neural network architectures.
Groups of patients at risk included those suffering from infectious
complications, with chronic or progressive conditions, and for whom standard
medical care was not suitable. Conclusions: Attention-based networks may be
preferable to recurrent networks if an interpretable model is required, at only
marginal cost in predictive accuracy.
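As a rough illustration of the interpretability argument, here is a minimal sketch (not one of the benchmarked architectures) of an attention-based classifier over medical-code embeddings, whose attention weights can be inspected to see which codes drove a readmission prediction. The vocabulary size, embedding dimension, and layer names are assumptions, and PyTorch is assumed as the framework.

```python
# Illustrative attention-pooling readmission classifier (NOT the benchmarked models).
# All sizes and names are assumptions made for the sketch.
import torch
import torch.nn as nn

class AttentionReadmissionModel(nn.Module):
    def __init__(self, n_codes=2000, emb_dim=64, n_static=10):
        super().__init__()
        self.embed = nn.Embedding(n_codes, emb_dim, padding_idx=0)
        self.attn = nn.Linear(emb_dim, 1)            # scalar relevance score per code
        self.out = nn.Linear(emb_dim + n_static, 1)  # logit for 30-day readmission

    def forward(self, codes, static):
        # codes: (batch, seq) integer code ids; static: (batch, n_static) variables
        e = self.embed(codes)                          # (batch, seq, emb_dim)
        scores = self.attn(e).squeeze(-1)              # (batch, seq)
        scores = scores.masked_fill(codes == 0, -1e9)  # ignore padding positions
        alpha = torch.softmax(scores, dim=-1)          # attention weights
        pooled = (alpha.unsqueeze(-1) * e).sum(dim=1)  # (batch, emb_dim)
        logit = self.out(torch.cat([pooled, static], dim=-1)).squeeze(-1)
        return logit, alpha   # alpha shows which codes contributed most

# Toy usage: 4 ICU stays, 30 codes each, 10 static variables.
model = AttentionReadmissionModel()
codes = torch.randint(1, 2000, (4, 30))
static = torch.randn(4, 10)
logit, alpha = model(codes, static)
prob = torch.sigmoid(logit)   # predicted readmission probability per stay
```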
- …