Early hospital mortality prediction using vital signals
Early hospital mortality prediction is critical, as intensivists strive to
make timely decisions about severely ill patients staying in intensive care
units (ICUs). Various methods based on clinical records have been developed
to address this problem; however, some of the laboratory test results they
rely on are time-consuming to obtain and process. In this paper, we
propose a novel method to predict mortality using features extracted from the
heart rate signals of patients within the first hour of ICU admission. To
quantify the risk, each patient's heart rate signal is described in terms of
12 statistical and signal-based features. The extracted features are fed into
eight classifiers: decision tree, linear discriminant, logistic regression,
support vector machine (SVM), random forest, boosted trees, Gaussian SVM, and
k-nearest neighbors (k-NN). To derive insight into the performance of the
proposed method, several experiments have been conducted using the well-known
clinical dataset named Medical Information Mart for Intensive Care III
(MIMIC-III). The experimental results demonstrate the capability of the
proposed method in terms of precision, recall, F1-score, and area under the
receiver operating characteristic curve (AUC). The decision tree classifier
balances accuracy and interpretability better than the other classifiers,
producing an F1-score of 0.91 and an AUC of 0.93. These results indicate
that heart rate signals can be used to predict mortality in ICU patients,
achieving performance comparable to existing predictors that rely on
high-dimensional features from clinical records, which require processing
and may contain missing information.
Comment: 11 pages, 5 figures, preprint of accepted paper in IEEE/ACM CHASE
2018, published in the Smart Health journal
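A minimal sketch of the pipeline the abstract describes, using scikit-learn: summary features are extracted from each first-hour heart rate series and fed to a decision tree. The paper's exact 12 features are not listed here, so the statistics below (and the synthetic signals) are illustrative stand-ins.

```python
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

def extract_features(hr: np.ndarray) -> np.ndarray:
    """Twelve illustrative summary features of one heart rate series."""
    diffs = np.diff(hr)
    return np.array([
        hr.mean(), hr.std(), hr.min(), hr.max(), np.median(hr),
        stats.skew(hr), stats.kurtosis(hr),
        np.percentile(hr, 25), np.percentile(hr, 75),
        diffs.mean(), diffs.std(),
        np.sqrt(np.mean(diffs ** 2)),  # RMSSD-like variability measure
    ])

# Synthetic stand-ins for per-patient signals and outcome labels
# (1 = in-hospital death); the real inputs would come from MIMIC-III.
rng = np.random.default_rng(0)
signals = [rng.normal(80, 10, size=3600) for _ in range(200)]
y = rng.integers(0, 2, size=200)

X = np.vstack([extract_features(s) for s in signals])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]
print("F1:", f1_score(y_te, probs > 0.5), "AUC:", roc_auc_score(y_te, probs))
```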
Machine Learning and Integrative Analysis of Biomedical Big Data
Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges as well as exacerbating those associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
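As a rough illustration of how several of these challenges surface in practice, the sketch below concatenates two hypothetical omics matrices ("early" integration) and addresses missing data, dimensionality, and class imbalance with standard scikit-learn components; the variable names and synthetic data are invented for the example.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for two omics modalities measured on the same
# 100 samples; real matrices would come from the respective assays.
rng = np.random.default_rng(0)
X_rna = rng.normal(size=(100, 500))
X_methyl = rng.normal(size=(100, 300))
X_methyl[rng.random(X_methyl.shape) < 0.05] = np.nan  # missing values
y = rng.integers(0, 2, size=100)

X = np.hstack([X_rna, X_methyl])          # "early" integration: concatenate

model = make_pipeline(
    SimpleImputer(strategy="median"),     # missing data
    StandardScaler(),                     # put modalities on one scale
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1,
                       class_weight="balanced"),  # sparsity + imbalance
)
model.fit(X, y)
```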
Avoiding disclosure of individually identifiable health information: a literature review
Achieving data and information dissemination without harming anyone is a central task of any entity in charge of collecting data. In this article, the authors examine the literature on data and statistical confidentiality. Rather than comparing the theoretical properties of specific methods, they emphasize the main themes that emerge from the ongoing discussion among scientists regarding how best to achieve the appropriate balance between data protection, data utility, and data dissemination. They cover the literature on de-identification and reidentification methods, with emphasis on health care data. The authors also discuss the benefits and limitations of the most common access methods. Although there is abundant theoretical and empirical research, their review reveals a lack of consensus on fundamental questions for empirical practice: how to assess disclosure risk, how to choose among disclosure methods, how to assess reidentification risk, and how to measure utility loss.
Keywords: public use files, disclosure avoidance, reidentification, de-identification, data utility
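One concrete disclosure-risk measure that recurs in this literature is k-anonymity: every combination of quasi-identifier values must be shared by at least k records. A minimal check, with hypothetical column names and toy data:

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Smallest equivalence-class size over the quasi-identifiers."""
    return int(df.groupby(quasi_identifiers).size().min())

records = pd.DataFrame({
    "zip": ["02139", "02139", "02139", "94305"],
    "age_band": ["30-39", "30-39", "30-39", "40-49"],
    "diagnosis": ["flu", "asthma", "flu", "flu"],
})
k = k_anonymity(records, ["zip", "age_band"])
print(f"dataset is {k}-anonymous")  # here: 1-anonymous (one unique record)
```

A dataset is only as anonymous as its smallest equivalence class, so a single unique record drives the reidentification risk.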
An Automated Social Graph De-anonymization Technique
We present a generic and automated approach to re-identifying nodes in
anonymized social networks, which enables novel anonymization techniques to
be evaluated quickly. It uses machine learning (decision forests) to match
pairs of nodes in disparate anonymized sub-graphs. The technique uncovers
artefacts and invariants of any black-box anonymization scheme from a small set
of examples. Despite a high degree of automation, classification succeeds with
significant true positive rates even when small false positive rates are
sought. Our evaluation uses publicly available real world datasets to study the
performance of our approach against real-world anonymization strategies, namely
the schemes used to protect the datasets of the Data for Development (D4D)
Challenge. We show that the technique is effective even when only small numbers
of samples are used for training. Further, since it detects weaknesses in the
black-box anonymization scheme, it can re-identify nodes in one social network
when trained on another.
Comment: 12 pages
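A minimal sketch of the pair-classification idea, assuming networkx graphs: candidate node pairs from two anonymized sub-graphs are described by structural features and scored by a random forest. The paper's actual features are not given in the abstract, so simple degree statistics stand in for them.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def pair_features(g1: nx.Graph, u, g2: nx.Graph, v) -> list:
    """Structural features comparing node u in g1 with node v in g2."""
    d1, d2 = g1.degree(u), g2.degree(v)
    n1 = sorted((g1.degree(w) for w in g1[u]), reverse=True)[:3]
    n2 = sorted((g2.degree(w) for w in g2[v]), reverse=True)[:3]
    n1 += [0] * (3 - len(n1))  # pad top-3 neighbor degrees
    n2 += [0] * (3 - len(n2))
    return [d1, d2, abs(d1 - d2)] + [abs(a - b) for a, b in zip(n1, n2)]

# Two synthetic "anonymized" sub-graphs sharing node identity; a candidate
# pair is labeled 1 when u and v are the same underlying individual.
g1 = nx.erdos_renyi_graph(50, 0.1, seed=1)
g2 = nx.erdos_renyi_graph(50, 0.1, seed=1)
pairs = [(u, v) for u in g1 for v in g2 if abs(u - v) <= 1]
labels = [int(u == v) for u, v in pairs]

X = np.array([pair_features(g1, u, g2, v) for u, v in pairs])
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X, labels)
scores = forest.predict_proba(X)[:, 1]  # rank candidate pairs by match score
```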