Decentralized Machine Learning for Intelligent Health Care Systems on the Computing Continuum
The introduction of electronic personal health records (EHR) enables
nationwide information exchange and curation among different health care
systems. However, the current EHR systems neither provide transparent means for
diagnosis support and medical research nor utilize the omnipresent data
produced by personal medical devices. Moreover, the EHR systems are
centrally orchestrated, which could lead to a single point of
failure. Therefore, in this article, we explore novel approaches for
decentralizing machine learning over distributed ledgers to create intelligent
EHR systems that can utilize information from personal medical devices for
improved knowledge extraction. Consequently, we propose and evaluate a
conceptual EHR system that enables anonymous predictive analysis across multiple
medical institutions. The evaluation results indicate that the decentralized EHR
can be deployed over the computing continuum, reducing machine learning time by
up to 60% while keeping consensus latency below 8 seconds.
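As an illustration of the decentralization idea above, the sketch below (not the authors' implementation; all names and structures are hypothetical) records each institution's model update in a hash-chained log, a toy stand-in for a distributed ledger, and then averages the recorded weights:

```python
import hashlib
import json

def record_update(ledger, institution, weights):
    """Append a model update to a hash-chained log (toy stand-in for a
    distributed ledger); each entry commits to its predecessor's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"institution": institution, "weights": weights, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

def aggregate(ledger):
    """Element-wise average of all recorded weight vectors (federated averaging)."""
    updates = [e["weights"] for e in ledger]
    return [sum(ws) / len(ws) for ws in zip(*updates)]

ledger = []
record_update(ledger, "hospital_a", [0.2, 0.4])
record_update(ledger, "hospital_b", [0.6, 0.8])
print(aggregate(ledger))  # element-wise mean of the two updates
```

In a real deployment the ledger would be replicated under a consensus protocol; here the chain of hashes only illustrates that recorded updates are tamper-evident.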
A Learning Health System for Radiation Oncology
The proposed research aims to address the challenges faced by clinical data science researchers in radiation oncology when accessing, integrating, and analyzing heterogeneous data from various sources. The research presents a scalable intelligent infrastructure, called the Health Information Gateway and Exchange (HINGE), which captures and structures data from multiple sources into a knowledge base with semantically interlinked entities. This infrastructure enables researchers to mine novel associations and gather relevant knowledge for personalized clinical outcomes.
The dissertation discusses the design framework and implementation of HINGE, which abstracts structured data from treatment planning systems, treatment management systems, and electronic health records. It utilizes disease-specific smart templates for capturing clinical information in a discrete manner. HINGE performs data extraction, aggregation, and quality and outcome assessment functions automatically, connecting seamlessly with local IT/medical infrastructure.
Furthermore, the research presents a knowledge graph-based approach to map radiotherapy data to an ontology-based data repository using FAIR (Findable, Accessible, Interoperable, Reusable) concepts. This approach ensures that the data is easily discoverable and accessible for clinical decision support systems. The dissertation explores the ETL (Extract, Transform, Load) process, data model frameworks, ontologies, and provides a real-world clinical use case for this data mapping.
To improve the efficiency of retrieving information from large clinical datasets, a search engine combining ontology-based keyword search with synonym-based term matching was developed. The hierarchical nature of ontologies is leveraged to retrieve patient records based on parent and child classes. Additionally, patient similarity analysis is conducted using vector embedding models (Word2Vec, Doc2Vec, GloVe, and FastText) to identify similar patients based on text corpus creation methods. Results from the analysis using these models are presented.
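The similarity analysis above relies on learned embeddings (Word2Vec, Doc2Vec, GloVe, FastText); as a minimal self-contained sketch of the underlying idea, the example below ranks patients by cosine similarity over simple term-frequency vectors (all record contents are invented for illustration, and term frequencies stand in for the learned vectors):

```python
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector from a whitespace-tokenised record text."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy patient records (invented free-text summaries).
records = {
    "patient_1": "lung sbrt 50 gy pneumonitis grade 2",
    "patient_2": "lung sbrt 48 gy no toxicity",
    "patient_3": "prostate imrt 78 gy",
}
query = tf_vector(records["patient_1"])
ranked = sorted(
    ((pid, cosine_similarity(query, tf_vector(txt)))
     for pid, txt in records.items() if pid != "patient_1"),
    key=lambda kv: kv[1], reverse=True,
)
print(ranked[0][0])  # patient_2 shares the most terms with patient_1
```

Swapping the term-frequency vectors for dense document embeddings (e.g. Doc2Vec vectors) leaves the ranking logic unchanged; only `tf_vector` would be replaced.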
The implementation of a learning health system for predicting radiation pneumonitis following stereotactic body radiotherapy is also discussed. 3D convolutional neural networks (CNNs) are utilized with radiographic and dosimetric datasets to predict the likelihood of radiation pneumonitis. DenseNet-121 and ResNet-50 models are employed for this study, along with integrated gradient techniques to identify salient regions within the input 3D image dataset. The predictive performance of the 3D CNN models is evaluated based on clinical outcomes.
Overall, the proposed Learning Health System provides a comprehensive solution for capturing, integrating, and analyzing heterogeneous data in a knowledge base. It offers researchers the ability to extract valuable insights and associations from diverse sources, ultimately leading to improved clinical outcomes. This work can serve as a model for implementing learning health systems (LHS) in other medical specialties, advancing personalized and data-driven medicine.
Clinical foundations and information architecture for the implementation of a federated health record service
Clinical care increasingly requires healthcare professionals to access patient record information that
may be distributed across multiple sites, held in a variety of paper and electronic formats, and
represented as mixtures of narrative, structured, coded and multi-media entries. A longitudinal
person-centred electronic health record (EHR) is a much-anticipated solution to this problem, but
its realisation is proving to be a long and complex journey.
This Thesis explores the history and evolution of clinical information systems, and establishes a set
of clinical and ethico-legal requirements for a generic EHR server. A federated health record (FHR)
approach to harmonising distributed heterogeneous electronic clinical databases is advocated as the
basis for meeting these requirements.
A set of information models and middleware services, needed to implement a Federated Health
Record server, are then described, thereby supporting access by clinical applications to a distributed
set of feeder systems holding patient record information. The overall information architecture thus
defined provides a generic means of combining such feeder system data to create a virtual
electronic health record. Active collaboration in a wide range of clinical contexts, across the whole
of Europe, has been central to the evolution of the approach taken.
A federated health record server based on this architecture has been implemented by the author
and colleagues and deployed in a live clinical environment in the Department of Cardiovascular
Medicine at the Whittington Hospital in North London. This implementation experience has fed
back into the conceptual development of the approach and has provided "proof-of-concept"
verification of its completeness and practical utility.
This research has benefited from collaboration with a wide range of healthcare sites, informatics
organisations and industry across Europe through several EU Health Telematics projects: GEHR,
Synapses, EHCR-SupA, SynEx, Medicate and 6WINIT.
The information models published here have been placed in the public domain and have
substantially contributed to two generations of CEN health informatics standards, including CEN
TC/251 ENV 13606.
Unimodal Training-Multimodal Prediction: Cross-modal Federated Learning with Hierarchical Aggregation
Multimodal learning has seen great success mining data features from multiple
modalities, with remarkable model performance improvements. Meanwhile, federated
learning (FL) addresses the data-sharing problem, enabling privacy-preserving
collaborative training over otherwise inaccessible data. Great potential
therefore arises at the confluence of the two, known as multimodal federated
learning. However, predominant approaches are limited in that they often
assume that each local dataset records samples from all modalities. In this
paper, we aim to bridge this gap by proposing an Unimodal Training - Multimodal
Prediction (UTMP) framework under the context of multimodal federated learning.
We design HA-Fedformer, a novel transformer-based model that empowers unimodal
training with only a unimodal dataset at the client and multimodal testing by
aggregating multiple clients' knowledge for better accuracy. The key advantages
are twofold. Firstly, to alleviate the impact of data non-IID, we develop an
uncertainty-aware aggregation method for the local encoders with layer-wise
Markov Chain Monte Carlo sampling. Secondly, to overcome the challenge of
unaligned language sequences, we implement a cross-modal decoder aggregation to
capture the hidden signal correlation between decoders trained by data from
different modalities. Our experiments on popular sentiment analysis benchmarks,
CMU-MOSI and CMU-MOSEI, demonstrate that HA-Fedformer significantly outperforms
state-of-the-art multimodal models under the UTMP federated learning
framework, with 15%-20% improvement on most attributes.
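The uncertainty-aware aggregation can be illustrated, in much simplified form, by precision-weighted averaging: each client's parameter contributes in inverse proportion to its estimated variance. In HA-Fedformer the uncertainty estimates come from layer-wise MCMC sampling; in this hedged sketch the variances are simply given, so it is not the paper's method:

```python
def precision_weighted_aggregate(client_params, client_variances):
    """Aggregate one layer's parameters across clients, weighting each
    client's value by the inverse of its estimated variance.  A simplified
    stand-in for HA-Fedformer's layer-wise MCMC-based uncertainty-aware
    aggregation; real variances would come from posterior samples."""
    agg = []
    for values, variances in zip(zip(*client_params), zip(*client_variances)):
        weights = [1.0 / max(v, 1e-8) for v in variances]  # precision = 1/variance
        total = sum(weights)
        agg.append(sum(w * x for w, x in zip(weights, values)) / total)
    return agg

# Two clients, one layer with two parameters; client 0 is more certain
# about the first parameter, client 1 about the second.
params = [[1.0, 0.0], [3.0, 2.0]]
variances = [[0.1, 1.0], [1.0, 0.1]]
print(precision_weighted_aggregate(params, variances))
```

The aggregate is pulled toward whichever client reports lower uncertainty for each coordinate, which is the intuition behind down-weighting non-IID clients whose local posteriors are diffuse.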
Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine
Artificial intelligence (AI) continues to transform data analysis in many
domains. Progress in each domain is driven by a growing body of annotated data,
increased computational resources, and technological innovations. In medicine,
the sensitivity of the data, the complexity of the tasks, the potentially high
stakes, and a requirement of accountability give rise to a particular set of
challenges. In this review, we focus on three key methodological approaches
that address some of the particular challenges in AI-driven medical decision
making. (1) Explainable AI aims to produce a human-interpretable justification
for each output. Such models increase confidence if the results appear
plausible and match the clinicians' expectations. However, the absence of a
plausible explanation does not imply an inaccurate model. Especially in highly
non-linear, complex models that are tuned to maximize accuracy, such
interpretable representations only reflect a small portion of the
justification. (2) Domain adaptation and transfer learning enable AI models to
be trained and applied across multiple domains, for example, a classification
task based on images acquired on different acquisition hardware. (3) Federated
learning enables learning large-scale models without exposing sensitive
personal health information. Unlike centralized AI learning, where the
centralized learning machine has access to the entire training data, the
federated learning process iteratively updates models across multiple sites by
exchanging only parameter updates, not personal health data. This narrative
review covers the basic concepts, highlights relevant corner-stone and
state-of-the-art research in the field, and discusses perspectives.Comment: This paper is accepted in IEEE CAA Journal of Automatica Sinica, Nov.
10 202
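The iterative parameter-update exchange described above can be sketched with a one-parameter least-squares model: each site computes an update on its private data, and the server averages only the updates, never the data. The site data and the model here are invented purely for illustration:

```python
def local_delta(w, data, lr=0.05):
    """Compute a weight update on one site's private (x, y) pairs by one
    gradient step on squared error; only this delta leaves the site."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return -lr * grad

def federated_round(w, sites):
    """Server averages the deltas from all sites and updates the global model."""
    deltas = [local_delta(w, data) for data in sites]
    return w + sum(deltas) / len(deltas)

# Two hospitals whose private data follow y = 2x; the raw pairs are never pooled.
sites = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (0.5, 1.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, sites)
print(round(w, 2))  # converges toward 2.0
```

Production systems add secure aggregation and differential privacy on top of this loop, since even parameter updates can leak information about the underlying records.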
Proposal of a learning health system to transform the National Health System of Spain
This article identifies the main challenges of the National Health Service of Spain and proposes its transformation into a Learning Health System. For this purpose, the main indicators and reports published by the Spanish Ministries of Health and Finance, the Organisation for Economic Co-operation and Development (OECD), and the World Health Organization (WHO) were reviewed. The Learning Health System proposal is based on sections of an unpublished report on Big Data for the National Health System, written by two of the authors at the request of the Ministry of Health of Spain.
The main challenges identified are the rising old-age dependency ratio; health expenditure pressures and the likely increase of out-of-pocket expenditure; drug expenditures, both retail and consumed in hospitals; waiting lists for surgery; potentially preventable hospital admissions; and the use of electronic health record (EHR) data to fulfil national health information and research objectives.
To improve its efficacy, efficiency, and quality, the National Health Service of Spain should be transformed into a Learning Health System. Information and communication technology (ICT) enablers are a fundamental tool to address the complexity and vastness of health data, as well as the urgency that clinical and management decisions require. Big Data solutions are a perfect match for that problem in health systems.