114 research outputs found

    Flexible Models for Heterogeneous Biomedical Data

    With the development of biomedical sensing techniques and data storage, machine learning has been widely applied to many healthcare applications thanks to the abundance of data resources. However, biomedical data from real-world applications is inherently heterogeneous, and this heterogeneity has not been comprehensively considered or successfully addressed. Heterogeneity in biomedical data includes varying data distributions, irregularly sampled time-series data, variation in the time domain, and other factors such as uncertain labeling. These types of heterogeneity can occur individually or simultaneously, and one type can trigger another: for instance, when a patient's health condition changes over time, doctors adjust the measurements and treatments, which causes irregular feature sampling. Facing such heterogeneous data, a generalized model may perform decently on average but fail in certain cases, which cannot be ignored in the clinic. On the other hand, when building individual models for each group of homogeneous data, the training data can become limited even when the total data size is large; for example, there are a great number of medications, but each of them may not have enough data. The limitations of generalized models and the possible shortage of training data make data heterogeneity a very challenging problem to address. Therefore, flexible models are needed for the various types of heterogeneous biomedical data in real-world applications. This dissertation investigates data heterogeneity and builds flexible models for biomedical data by focusing on different levels of heterogeneity: individual types of heterogeneity occurring on their own, multi-source simultaneous heterogeneity, multiple data modalities on the same task, and clinical translation of data heterogeneity.
We start by building adaptive models for each individual type of heterogeneity on a particular kind of biomedical data, focusing on time series, and then address the more complex situation of simultaneous heterogeneity. Next, the problem setting is extended from time-series data alone to multiple data modalities, and finally we introduce a clinical translation model that seeks to understand the data heterogeneity. Guided by the heterogeneity present in each type of data, transfer learning, adversarial training, and meta-learning techniques are proposed and applied to build adaptive models.
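    A common way to make a model flexible to irregularly sampled time series, one of the heterogeneity types named above, is to pair each observation with a mask channel and a time-since-last-observation delta. This is a minimal sketch of that input encoding; the function and channel names are illustrative, not the dissertation's actual method.

```python
# Sketch: encoding one irregularly sampled clinical channel as
# (value, mask, delta) triples. The mask marks whether a measurement
# was observed; delta is the time elapsed since the last observation.

def encode_irregular_series(timestamps, values):
    """Return (value, mask, delta) triples for one feature channel.

    timestamps : sorted measurement times (e.g., hours since admission)
    values     : observed values; None marks a missing measurement
    """
    encoded = []
    last_observed_t = timestamps[0]
    for t, v in zip(timestamps, values):
        mask = 0 if v is None else 1
        delta = t - last_observed_t          # time since last observation
        encoded.append((0.0 if v is None else v, mask, delta))
        if mask:
            last_observed_t = t
    return encoded

# A heart-rate channel sampled at uneven intervals with gaps:
series = encode_irregular_series(
    timestamps=[0.0, 0.5, 2.0, 2.25, 6.0],
    values=[72.0, None, 75.0, None, 80.0],
)
```

    A downstream model (recurrent or attention-based) can then consume all three channels, letting it learn how much to trust stale measurements.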

    Survey and evaluation of hypertension machine learning research

    Background: Machine learning (ML) is pervasive in all fields of research, from automating tasks to complex decision‐making. However, applications in different specialities are variable and generally limited. Like other conditions, the number of studies employing ML in hypertension research is growing rapidly. In this study, we aimed to survey hypertension research using ML, evaluate its reporting quality, and identify barriers to ML's potential to transform hypertension care. Methods and Results: The Harmonious Understanding of Machine Learning Analytics Network survey questionnaire was applied to 63 hypertension‐related ML research articles published between January 2019 and September 2021. The most common research topics were blood pressure prediction (38%), hypertension (22%), cardiovascular outcomes (6%), blood pressure variability (5%), treatment response (5%), and real‐time blood pressure estimation (5%). The reporting quality of the articles was variable. Only 46% of articles described the study population or derivation cohort. Most articles (81%) reported at least one performance measure, but only 40% presented any measure of calibration. Compliance with ethics, patient privacy, and data security regulations was mentioned in 30 (48%) of the articles. Only 14% used geographically or temporally distinct validation data sets. Algorithmic bias was not addressed in any of the articles, with only 6 of them acknowledging a risk of bias. Conclusions: Recent ML research on hypertension is limited to exploratory work and has significant shortcomings in reporting quality, model validation, and algorithmic bias. Our analysis identifies areas for improvement that will help pave the way for realizing the potential of ML in hypertension and facilitate its adoption.
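    The survey found that only 40% of articles reported any measure of calibration. As a minimal sketch of what such a check involves, the following bins predicted probabilities and compares each bin's mean prediction with the observed event rate (a binned reliability check); the function name and bin count are illustrative.

```python
# Sketch: a binned calibration (reliability) check of the kind the
# survey found missing from most articles. A well-calibrated model has
# mean predicted probability close to the observed rate in every bin.

def calibration_bins(probs, labels, n_bins=5):
    """Return per-bin (mean predicted prob, observed event rate, count)."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)   # p == 1.0 goes in last bin
        bins[idx].append((p, y))
    summary = []
    for pairs in bins:
        if pairs:
            mean_p = sum(p for p, _ in pairs) / len(pairs)
            rate = sum(y for _, y in pairs) / len(pairs)
            summary.append((mean_p, rate, len(pairs)))
        else:
            summary.append(None)                 # empty bin
    return summary

# Toy predictions for a binary hypertension outcome:
report = calibration_bins(
    probs=[0.1, 0.15, 0.8, 0.9, 0.55],
    labels=[0, 0, 1, 1, 1],
)
```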

    Learning Sensory Representations with Minimal Supervision


    Networking Architecture and Key Technologies for Human Digital Twin in Personalized Healthcare: A Comprehensive Survey

    Digital twin (DT) refers to a promising technique to digitally and accurately represent actual physical entities. One typical advantage of DT is that it can be used not only to virtually replicate a system's detailed operations but also to analyze its current condition, predict future behaviour, and refine control optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embodying non-living entities, e.g., robots and vehicles. For human-centric systems, a novel concept called the human digital twin (HDT) has thus been proposed. In particular, HDT allows in silico representation of an individual human body with the ability to dynamically reflect molecular, physiological, emotional, and psychological status, as well as lifestyle evolution. This motivates the expected application of HDT in personalized healthcare (PH), where it can facilitate remote monitoring, diagnosis, prescription, surgery, and rehabilitation. Despite this large potential, however, HDT faces substantial research challenges in different aspects and has recently become an increasingly popular topic. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and conventional DTs, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, including the data acquisition layer, data communication layer, computation layer, data management layer, and data analysis and decision making layer. Besides reviewing the key technologies for implementing such a networking architecture in detail, we conclude this survey by presenting future research directions for HDT.
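    The five-layer networking architecture named in the abstract can be pictured as an ordered pipeline that every sensed sample traverses. This is only a structural sketch under that reading; the layer identifiers and record format are illustrative, not the survey's notation.

```python
# Sketch: the five HDT networking layers from the survey, modeled as an
# ordered pipeline. Each layer here only records its visit; a real
# system performs sensing, transport, computation, storage, or analysis.

LAYERS = [
    "data_acquisition",
    "data_communication",
    "computation",
    "data_management",
    "data_analysis_and_decision_making",
]

def run_pipeline(raw_sample):
    """Pass a sensed sample through each layer in order."""
    record = {"payload": raw_sample, "trace": []}
    for layer in LAYERS:
        record["trace"].append(layer)   # placeholder for per-layer work
    return record

result = run_pipeline({"heart_rate": 72})
```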

    Health State Estimation

    Life's most valuable asset is health. Continuously understanding the state of our health and modeling how it evolves are essential if we wish to improve it. Given that people live with more data about their lives today than at any other time in history, the challenge rests in interweaving this data with the growing body of knowledge to continually compute and model the health state of an individual. This dissertation presents an approach to building a personal model and dynamically estimating the health state of an individual by fusing multi-modal data and domain knowledge. The system is stitched together from four essential abstraction elements: 1. the events in our life, 2. the layers of our biological systems (from the molecular to the organism), 3. the functional utilities that arise from biological underpinnings, and 4. how we interact with these utilities in the reality of daily life. Connecting these four elements via graph network blocks forms the backbone by which we instantiate a digital twin of an individual. Edges and nodes in this graph structure are then regularly updated with learning techniques as data is continuously digested. Experiments demonstrate the use of dense and heterogeneous real-world data from a variety of personal and environmental sensors to monitor individual cardiovascular health state. State estimation and individual modeling are the fundamental basis for departing from disease-oriented approaches towards a total health continuum paradigm. Precision in predicting health requires understanding the state trajectory. By encasing this estimation within a navigational approach, a systematic guidance framework can plan actions to transition a current state towards a desired one. This work concludes by presenting this framework of combining the health state and the personal graph model to perpetually plan and assist us in living life towards our goals. (Ph.D. dissertation, University of California, Irvine.)
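    The four abstraction elements connected via a graph can be pictured with a toy structure like the one below. The node names, layer labels, and weights are purely illustrative, not the dissertation's actual schema.

```python
# Toy sketch of the four abstraction elements (events, biological
# layers, functional utilities, daily interactions) connected as a
# weighted graph; learning would update edge weights as data arrives.

personal_graph = {
    "nodes": {
        "event:morning_run":       {"layer": "events"},
        "bio:heart":               {"layer": "biological_systems"},
        "function:cardio_fitness": {"layer": "functional_utilities"},
        "daily:commute_by_bike":   {"layer": "daily_interactions"},
    },
    "edges": [],
}

def connect(graph, src, dst, weight=1.0):
    """Add a weighted edge between two nodes of the personal graph."""
    graph["edges"].append({"src": src, "dst": dst, "weight": weight})

connect(personal_graph, "event:morning_run", "bio:heart", 0.8)
connect(personal_graph, "bio:heart", "function:cardio_fitness", 0.9)
connect(personal_graph, "function:cardio_fitness", "daily:commute_by_bike", 0.6)
```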

    Assessment of Non-Invasive Blood Pressure Prediction from PPG and rPPG Signals Using Deep Learning

    Exploiting photoplethysmography (PPG) signals for non-invasive blood pressure (BP) measurement is attractive for various reasons. First, PPG can easily be measured using finger-clip sensors. Second, camera-based approaches allow deriving remote PPG (rPPG) signals similar to PPG and therefore provide the opportunity for non-invasive BP measurement. Various methods relying on machine learning techniques have recently been published. Performance is often reported as the mean absolute error (MAE) over the whole dataset, which is problematic. This work analyzes the PPG- and rPPG-based BP prediction error with respect to the underlying data distribution. First, we train established neural network (NN) architectures and derive an appropriate parameterization of input segments drawn from continuous PPG signals. Second, we use this parameterization to train NNs on a larger PPG dataset and carry out a systematic evaluation of the predicted blood pressure. The analysis revealed a strong systematic increase in prediction error towards less frequent BP values across NN architectures. Moreover, we tested different train/test split configurations, which underpins the importance of careful subject-aware dataset assignment to prevent overly optimistic results. Third, we use transfer learning to train the NNs for rPPG-based BP prediction; the resulting performance is similar to the PPG-only case. Finally, we apply different personalization techniques and retrain our NNs with subject-specific data for both the PPG-only and rPPG cases. While the particular technique matters less, personalization reduces the prediction errors significantly.
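    The subject-aware dataset assignment the abstract stresses means splitting by subject, not by segment, so that no person contributes PPG segments to both training and test sets. A minimal sketch of such a split, with synthetic segment identifiers (not the paper's actual data or code):

```python
# Sketch: subject-aware train/test assignment for PPG segments. Splitting
# at the segment level lets near-duplicate segments from one subject leak
# across sets, producing the overly optimistic results the abstract warns
# about; splitting at the subject level prevents that.

import random

def subject_aware_split(segments, test_fraction=0.25, seed=0):
    """Split (subject_id, segment) pairs by subject, not by segment."""
    subjects = sorted({sid for sid, _ in segments})
    rng = random.Random(seed)               # fixed seed for reproducibility
    rng.shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_fraction))
    test_subjects = set(subjects[:n_test])
    train = [s for s in segments if s[0] not in test_subjects]
    test = [s for s in segments if s[0] in test_subjects]
    return train, test

# Four hypothetical subjects, three PPG segments each:
segments = [(sid, f"ppg_seg_{sid}_{i}") for sid in "ABCD" for i in range(3)]
train, test = subject_aware_split(segments)
train_subjects = {sid for sid, _ in train}
test_subjects = {sid for sid, _ in test}
assert train_subjects.isdisjoint(test_subjects)  # no subject overlap
```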

    Physical Diagnosis and Rehabilitation Technologies

    The book focuses on the diagnosis, evaluation, and assistance of gait disorders; all the papers were contributed by research groups working on assistive robotics, instrumentation, and augmentative devices.

    Roadmap on signal processing for next generation measurement systems

    Signal processing is a fundamental component of almost any sensor-enabled system, with a wide range of applications across different scientific disciplines. Time series data, images, and video sequences are representative forms of signals that can be enhanced and analysed for information extraction and quantification. Recent advances in artificial intelligence and machine learning are shifting research attention towards intelligent, data-driven signal processing. This roadmap presents a critical overview of state-of-the-art methods and applications, aiming to highlight future challenges and research opportunities for next generation measurement systems. It covers a broad spectrum of topics ranging from basic to industrial research, organized in concise thematic sections that reflect the trends and impacts of current and future developments in each research field. Furthermore, it offers guidance to researchers and funding agencies in identifying new prospects.
