Privacy-preserving model learning on a blockchain network-of-networks.
Objective: To facilitate clinical/genomic/biomedical research, it is imperative to construct generalizable predictive models using cross-institutional methods while protecting privacy. However, state-of-the-art methods assume a "flattened" topology, while real-world research networks may consist of a "network-of-networks," which can imply practical issues including training on small data for rare diseases/conditions, prioritizing locally trained models, and maintaining models for each level of the hierarchy. In this study, we focus on developing a hierarchical approach that inherits the benefits of privacy-preserving methods, retains the advantages of adopting blockchain, and addresses practical concerns on a research network-of-networks. Materials and Methods: We propose a framework that combines level-wise model learning, blockchain-based model dissemination, and a novel hierarchical consensus algorithm for model ensemble. We developed an example implementation, HierarchicalChain (hierarchical privacy-preserving modeling on blockchain), evaluated it on 3 healthcare/genomic datasets, and compared its predictive correctness, learning iterations, and execution time with a state-of-the-art method designed for a flattened network topology. Results: HierarchicalChain improves predictive correctness for small training datasets and provides correctness comparable to the competing method, albeit with more learning iterations and similar per-iteration execution time; it inherits the benefits of privacy-preserving learning and the advantages of blockchain technology, and immutably records models for each level. Discussion: HierarchicalChain is independent of the core privacy-preserving learning method, as well as of the underlying blockchain platform. Further studies are warranted for various types of network topology, complex data, and privacy concerns. Conclusion: We demonstrated the potential of utilizing information from the hierarchical network-of-networks topology to improve prediction correctness.
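Where the abstract describes a hierarchical consensus for model ensemble, the core idea can be read as a weighted combination of predictions from models trained at each level of the hierarchy. Below is a minimal Python sketch under that reading; the weighting scheme, function names, and numbers are illustrative assumptions, not HierarchicalChain's actual consensus algorithm.

```python
import numpy as np

def hierarchical_ensemble(level_probs, level_weights):
    """Combine per-level predicted probabilities with per-level weights.

    level_probs   : list of 1-D arrays (n_samples,), one per hierarchy level
    level_weights : list of floats, e.g. each level's validation performance
    """
    probs = np.stack(level_probs)            # (n_levels, n_samples)
    w = np.asarray(level_weights, dtype=float)
    w = w / w.sum()                          # normalize to a convex combination
    return w @ probs                         # weighted average across levels

# Hypothetical usage: models from three hierarchy levels vote on 4 samples.
site_p   = np.array([0.90, 0.20, 0.60, 0.40])  # trained on one site's data
subnet_p = np.array([0.80, 0.30, 0.50, 0.50])  # trained across a sub-network
global_p = np.array([0.70, 0.40, 0.55, 0.45])  # trained network-wide
print(hierarchical_ensemble([site_p, subnet_p, global_p], [0.72, 0.78, 0.81]))
```

Weighting by validation performance lets locally trained models dominate where they are strong (for example, on a site rich in a rare condition) while still drawing on the broader network; other consensus rules would slot into the same structure.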
Beyond Volume: The Impact of Complex Healthcare Data on the Machine Learning Pipeline
From medical charts to national census, healthcare has traditionally operated
under a paper-based paradigm. However, the past decade has marked a long and
arduous transformation bringing healthcare into the digital age. Ranging from
electronic health records, to digitized imaging and laboratory reports, to
public health datasets, healthcare now generates an incredible amount of
digital information. Such a wealth of data presents an exciting opportunity for
integrated machine learning solutions to address problems across multiple
facets of healthcare practice and administration. Unfortunately, the ability to
derive accurate and informative insights requires more than the ability to
execute machine learning models. Rather, a deeper understanding of the data on
which the models are run is imperative for their success. While a significant
effort has been undertaken to develop models able to process the volume of data
obtained during the analysis of millions of digitized patient records, it is
important to remember that volume represents only one aspect of the data. In
fact, drawing on data from an increasingly diverse set of sources, healthcare
data presents an incredibly complex set of attributes that must be accounted
for throughout the machine learning pipeline. This chapter focuses on
highlighting such challenges, and is broken down into three distinct
components, each representing a phase of the pipeline. We begin with attributes
of the data accounted for during preprocessing, then move to considerations
during model building, and end with challenges to the interpretation of model
output. For each component, we present a discussion around data as it relates
to the healthcare domain and offer insight into the challenges each may impose
on the efficiency of machine learning techniques.

Comment: Healthcare Informatics, Machine Learning, Knowledge Discovery; 20 pages, 1 figure.
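As a concrete illustration of the preprocessing-phase concerns the chapter raises (missing values, mixed numeric and categorical attributes), here is a minimal scikit-learn sketch; the column names and the imputation/encoding choices are hypothetical assumptions, not prescriptions from the chapter itself.

```python
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_cols = ["age", "heart_rate", "lab_glucose"]   # hypothetical EHR fields
categorical_cols = ["sex", "admission_type"]          # hypothetical EHR fields

# Impute and scale numeric attributes; impute and one-hot encode categorical
# ones, tolerating categories unseen at training time.
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]),
     categorical_cols),
])
# preprocess.fit_transform(records_df) would then feed any downstream model.
```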
AutoDiscern: Rating the Quality of Online Health Information with Hierarchical Encoder Attention-based Neural Networks
Patients increasingly turn to search engines and online content before, or in
place of, talking with a health professional. Low quality health information,
which is common on the internet, presents risks to the patient in the form of
misinformation and a possibly poorer relationship with their physician. To
address this, the DISCERN criteria (developed at the University of Oxford) are used
to evaluate the quality of online health information. However, patients are
unlikely to take the time to apply these criteria to the health websites they
visit. We built an automated implementation of the DISCERN instrument (Brief
version) using machine learning models. We compared the performance of a
traditional model (Random Forest) with that of a hierarchical encoder
attention-based neural network (HEA) model using two language embeddings, BERT
and BioBERT. The HEA BERT and BioBERT models achieved average F1-macro scores
across all criteria of 0.75 and 0.74, respectively, outperforming the Random
Forest model (average F1-macro = 0.69). Overall, the neural network-based
models achieved 81% and 86% average accuracy at 100% and 80% coverage,
respectively, compared to 94% manual rating accuracy. The attention mechanism
implemented in the HEA architectures not only provided 'model explainability'
by identifying reasonable supporting sentences for the documents fulfilling the
Brief DISCERN criteria, but also boosted F1 performance by 0.05 compared to the
same architecture without an attention mechanism. Our research suggests that it
is feasible to automate online health information quality assessment, which is
an important step towards empowering patients to become informed partners in
the healthcare process.
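To make the attention mechanism concrete: a hierarchical encoder attention model typically scores each sentence vector, normalizes the scores with a softmax, and pools the weighted sum into a document representation, with the weights doubling as the "supporting sentence" explanation. The PyTorch sketch below illustrates that pooling step under these assumptions; the layer sizes and names are illustrative, not the authors' exact HEA architecture.

```python
import torch
import torch.nn as nn

class SentenceAttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)       # one attention score per sentence

    def forward(self, sent_vecs):            # sent_vecs: (n_sentences, dim)
        weights = torch.softmax(self.score(sent_vecs).squeeze(-1), dim=0)
        doc_vec = weights @ sent_vecs        # weighted sum -> (dim,)
        return doc_vec, weights              # weights flag supporting sentences

# Hypothetical usage with 5 sentence embeddings of size 768 (e.g. from BERT).
pool = SentenceAttentionPool(768)
doc, attn = pool(torch.randn(5, 768))
print(attn)   # the highest-weighted sentences act as the model's 'explanation'
```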
The Parameter Houlihan: a solution to high-throughput identifiability indeterminacy for brutally ill-posed problems
One way to interject knowledge into clinically impactful forecasting is to
use data assimilation, a nonlinear regression that projects data onto a
mechanistic physiologic model rather than onto a generic set of functions,
such as neural networks. Such regressions have the advantage of being useful with particularly
sparse, non-stationary clinical data. However, physiological models are often
nonlinear and can have many parameters, leading to potential problems with
parameter identifiability, or the ability to find a unique set of parameters
that minimize forecasting error. The identifiability problems can be minimized
or eliminated by reducing the number of parameters estimated, but reducing the
number of estimated parameters also reduces the flexibility of the model and
hence increases forecasting error. We propose a method, the parameter Houlihan,
that combines traditional machine learning techniques with data assimilation,
to select the right set of model parameters to minimize forecasting error while
reducing identifiability problems. The method worked well: for our cohort, the
data assimilation-based glucose forecasts and estimates produced with the
Houlihan-selected parameter sets generally minimized forecasting error
compared to other parameter selection methods, such as by-hand selection.
Nevertheless, the forecast with the lowest error does not always represent
physiology accurately; further refinement of the algorithm offers a path
toward improving physiologic fidelity as well. Our hope
is that this methodology represents a first step toward combining machine
learning with data assimilation and provides a lower-threshold entry point for
using data assimilation with clinical data by helping select the right
parameters to estimate.
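The parameter-selection idea can be sketched independently of the glucose model: estimate only a candidate subset of a mechanistic model's parameters (holding the rest at nominal values), score each subset by held-out forecast error, and keep the best one. The toy exponential-decay "physiology," the candidate subsets, and the scoring below are illustrative assumptions, not the authors' physiologic model or the actual Houlihan procedure.

```python
from itertools import combinations

import numpy as np
from scipy.optimize import least_squares

def model(t, theta):                        # toy stand-in for a physiologic model
    baseline, amp, rate = theta
    return baseline + amp * np.exp(-rate * t)

rng = np.random.default_rng(0)
t_train, t_test = np.linspace(0, 5, 30), np.linspace(5, 8, 15)
true_theta = np.array([5.0, 3.0, 0.8])
y_train = model(t_train, true_theta) + 0.1 * rng.standard_normal(t_train.size)
y_test = model(t_test, true_theta)          # noise-free "future" for scoring

nominal = np.array([4.0, 2.0, 1.0])         # a priori (population) values

def fit_subset(idx):
    """Estimate only the parameters at positions idx; others stay nominal."""
    def residuals(free):
        theta = nominal.copy()
        theta[list(idx)] = free
        return model(t_train, theta) - y_train
    sol = least_squares(residuals, nominal[list(idx)])
    theta = nominal.copy()
    theta[list(idx)] = sol.x
    return theta

# Score every candidate subset by held-out forecast error; estimating fewer
# parameters improves identifiability, so ties should favor smaller subsets.
subsets = [s for k in (1, 2, 3) for s in combinations(range(3), k)]
errors = {s: float(np.mean((model(t_test, fit_subset(s)) - y_test) ** 2))
          for s in subsets}
best = min(errors, key=errors.get)
print("best subset:", best, "forecast MSE:", errors[best])
```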