Provenance-Centered Dataset of Drug-Drug Interactions
Over the years, several studies have demonstrated the ability to identify
potential drug-drug interactions via data mining from the literature (MEDLINE),
electronic health records, public databases (DrugBank), and other sources.
While each of these approaches is properly statistically validated, none of
them takes the overlap between them into consideration as one of its
decision-making variables. In this paper we present LInked Drug-Drug
Interactions (LIDDI), a public nanopublication-based RDF dataset with trusty
URIs that encompasses some of the most cited prediction methods and sources,
providing researchers a resource for leveraging the work of others in their own
prediction methods. Since one of the main obstacles to using external resources
is the mapping between the drug names and identifiers they use, we also provide
the set of mappings we curated in order to compare the multiple sources
aggregated in our dataset.

Comment: In Proceedings of the 14th International Semantic Web Conference
(ISWC) 201
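The core idea above, using cross-source overlap as a signal, depends on first unifying drug identifiers. A minimal sketch of that normalize-then-count logic follows; the drug names, DrugBank-style IDs, and per-source predictions are all illustrative stand-ins, and this plain-Python version only mirrors the mapping step, not the RDF/nanopublication representation LIDDI actually uses.

```python
# Hypothetical sketch: count how many sources report each candidate
# drug-drug interaction once drug identifiers are normalized.
from collections import Counter

# Curated mapping from source-specific drug names to a shared identifier
# (IDs below are made-up stand-ins for DrugBank-style identifiers).
ID_MAP = {
    "warfarin": "DB00682",
    "Warfarin Sodium": "DB00682",
    "aspirin": "DB00945",
    "acetylsalicylic acid": "DB00945",
    "ibuprofen": "DB01050",
}

def normalize(pair):
    """Map a (drugA, drugB) name pair to an order-independent ID pair."""
    a, b = (ID_MAP[d] for d in pair)
    return tuple(sorted((a, b)))

# Predictions from three hypothetical sources using different naming schemes.
sources = {
    "literature": [("warfarin", "aspirin")],
    "ehr":        [("Warfarin Sodium", "acetylsalicylic acid"),
                   ("ibuprofen", "aspirin")],
    "database":   [("aspirin", "warfarin")],
}

support = Counter()
for preds in sources.values():
    for pair in set(map(normalize, preds)):  # dedupe within a source
        support[pair] += 1

# The warfarin-aspirin pair is reported by all three sources once
# names are unified, so its overlap count is 3.
print(support[("DB00682", "DB00945")])  # 3
```

The overlap count per interaction pair is exactly the kind of decision-making variable the abstract argues the individual approaches ignore.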
Multimodal Machine Learning for Automated ICD Coding
This study presents a multimodal machine learning model to predict ICD-10
diagnostic codes. We developed separate machine learning models that can handle
data from different modalities, including unstructured text, semi-structured
text and structured tabular data. We further employed an ensemble method to
integrate all modality-specific models to generate ICD-10 codes. Key evidence
was also extracted to make our prediction more convincing and explainable. We
used the Medical Information Mart for Intensive Care III (MIMIC-III) dataset
to validate our approach. For ICD code prediction, our best-performing model
(micro-F1 = 0.7633, micro-AUC = 0.9541) significantly outperforms other
baseline models including TF-IDF (micro-F1 = 0.6721, micro-AUC = 0.7879) and
Text-CNN model (micro-F1 = 0.6569, micro-AUC = 0.9235). For interpretability,
our approach achieves a Jaccard Similarity Coefficient (JSC) of 0.1806 on text
data and 0.3105 on tabular data, where well-trained physicians achieve 0.2780
and 0.5002, respectively.

Comment: Machine Learning for Healthcare 201
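The two evaluation measures quoted above, micro-F1 for code prediction and the Jaccard Similarity Coefficient for evidence overlap, are simple to compute over sets of ICD codes. A minimal sketch follows; the codes and label sets are illustrative and not drawn from MIMIC-III.

```python
# Micro-averaged F1 pools true-positive/false-positive/false-negative
# counts over all samples and all codes before computing precision/recall.
def micro_f1(true_sets, pred_sets):
    tp = sum(len(t & p) for t, p in zip(true_sets, pred_sets))
    fp = sum(len(p - t) for t, p in zip(true_sets, pred_sets))
    fn = sum(len(t - p) for t, p in zip(true_sets, pred_sets))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Jaccard Similarity Coefficient between two evidence/label sets.
def jaccard(a, b):
    return len(a & b) / len(a | b)

# Illustrative gold and predicted ICD-10 code sets for two admissions.
y_true = [{"I10", "E11.9"}, {"J18.9"}]
y_pred = [{"I10"}, {"J18.9", "E11.9"}]

print(round(micro_f1(y_true, y_pred), 4))        # 0.6667
print(jaccard(y_true[0], y_pred[0]))             # 0.5
```

Micro-averaging weights every code occurrence equally, which is why it is the standard choice when the ICD label distribution is highly skewed.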
Uncertainty-Aware Attention for Reliable Interpretation and Prediction
The attention mechanism is effective both in focusing deep learning models on relevant features and in
interpreting them. However, attention weights may be unreliable, since the networks that generate them are
often trained in a weakly-supervised manner. To overcome this limitation, we introduce the notion of
input-dependent uncertainty to the attention mechanism, such that it generates attention for each
feature with varying degrees of noise based on the given input, to learn larger variance on instances it
is uncertain about. We learn this Uncertainty-aware Attention (UA) mechanism using variational
inference, and validate it on various risk prediction tasks from electronic health records on which our
model significantly outperforms existing attention models. The analysis of the learned attentions
shows that our model generates attentions that comply with clinicians' interpretation, and provide
richer interpretation via learned variance. Further evaluation of both the accuracy of the uncertainty
calibration and the prediction performance with "I don't know" decisions shows that UA yields networks
with high reliability as well.
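The notion of input-dependent uncertainty can be sketched as attention weights drawn from a Gaussian whose mean and variance are both functions of the input, sampled with the reparameterization trick used in variational inference. The sketch below is a simplified NumPy illustration of that idea, not the paper's exact architecture; the weight matrices are random stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                      # number of input features
x = rng.normal(size=d)     # one input instance

# Random stand-ins for learned projections (assumption, not the paper's).
W_mu = rng.normal(size=(d, d))
W_sigma = rng.normal(size=(d, d))

mu = W_mu @ x                          # input-dependent attention mean
sigma = np.log1p(np.exp(W_sigma @ x))  # softplus keeps std dev positive

# Reparameterization trick: sample as mu + sigma * eps so the sample
# stays differentiable w.r.t. mu and sigma during training.
eps = rng.normal(size=d)
alpha = 1 / (1 + np.exp(-(mu + sigma * eps)))  # squash weights into (0, 1)

attended = alpha * x   # reweighted input features
```

A large learned sigma on a feature signals that the model is uncertain about that feature's relevance, which is the extra interpretive channel the abstract describes.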
Predicting diabetes-related hospitalizations based on electronic health records
OBJECTIVE: To derive a predictive model to identify patients likely to be hospitalized during the following year due to complications attributed to Type II diabetes.

METHODS: A variety of supervised machine learning classification methods were tested, and a new method was developed that discovers hidden patient clusters in the positive class (hospitalized) while, at the same time, deriving sparse linear support vector machine classifiers to separate positive samples from negative ones (non-hospitalized). The convergence of the new method was established, and theoretical guarantees were proved on how the classifiers it produces generalize to a test set not seen during training.

RESULTS: The methods were tested on a large set of patients from the Boston Medical Center, the largest safety-net hospital in New England. Our new joint clustering/classification method achieves an accuracy of 89% (measured as area under the ROC curve) and yields informative clusters that help interpret the classification results, thus increasing physicians' trust in the algorithmic output and providing some guidance towards preventive measures. While it is possible to increase accuracy to 92% with other methods, this comes at increased computational cost and with a loss of interpretability. The analysis shows that even a modest probability of preventive actions being effective (more than 19%) suffices to generate significant hospital care savings.

CONCLUSIONS: The proposed predictive models can help avert hospitalizations, improve health outcomes, and drastically reduce hospital expenditures. The scope for savings is significant, as it has been estimated that in the USA alone about $5.8 billion is spent each year on diabetes-related hospitalizations that could be prevented.
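The accuracy figure above is the area under the ROC curve, which can be read as the probability that a randomly chosen hospitalized patient receives a higher risk score than a randomly chosen non-hospitalized one. A minimal sketch of that rank-based computation follows; the labels and scores are illustrative, not patient data.

```python
# AUC via its probabilistic interpretation: fraction of (positive, negative)
# score pairs where the positive outranks the negative (ties count half).
def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative risk scores: 1 = hospitalized, 0 = not hospitalized.
y = [1, 1, 0, 0, 1, 0]
s = [0.9, 0.5, 0.6, 0.2, 0.8, 0.4]

print(round(auc(y, s), 4))  # 0.8889
```

An AUC of 0.89, as reported for the joint clustering/classification method, means a hospitalized patient outranks a non-hospitalized one in risk score about 89% of the time.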