Machine Learning Methods for Personalized Health Monitoring Using Wearable Sensors
Mobile health is an emerging field that allows for real-time monitoring of individuals between routine clinical visits. Among other capabilities, it makes it possible to remotely gather health signals, track disease progression, and provide just-in-time interventions. Consumer-grade wearable sensors can be readily deployed on individuals to collect such signals and other time-series data, but there are significant challenges in converting raw sensor data into actionable insights. In this dissertation, we develop machine learning methods and models for personalized health monitoring using wearables. Specifically, we address three challenges that arise in these settings. First, data gathered from wearable sensors is noisy, making it challenging to extract relevant but nuanced features. We develop probabilistic graphical models to effectively encode domain knowledge when extracting features from noisy wearable sensor data. Second, prediction models developed on one population in lab settings may not generalize to other populations in field settings. We develop domain adaptation techniques to improve lab-to-field generalizability. Third, collecting ground truth labels for health monitoring applications is expensive and burdensome. We develop active learning methods to minimize the effort involved in collecting ground truth labels. We evaluate these methods and models on two case studies: cocaine use detection and human activity recognition.
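The abstract names active learning as the route to cheaper ground-truth labels but does not describe the query strategy; a minimal uncertainty-sampling sketch (the function name and toy probabilities are illustrative, not from the dissertation):

```python
import numpy as np

def uncertainty_sample(probs, k):
    """Return indices of the k unlabeled examples whose predicted
    class probabilities are least confident (top class closest to 0.5),
    i.e., the examples most worth sending to a human annotator."""
    confidence = probs.max(axis=1)     # probability of the top class
    return np.argsort(confidence)[:k]  # least confident first

# Toy pool of 4 unlabeled sensor windows with binary class probabilities.
pool = np.array([[0.95, 0.05],
                 [0.55, 0.45],
                 [0.80, 0.20],
                 [0.51, 0.49]])
queried = uncertainty_sample(pool, 2)  # indices 3 and 1: nearest to 50/50
```

In a real labeling loop, the queried windows would be annotated, added to the training set, and the model retrained before the next round.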
Wearable devices for remote vital signs monitoring in the outpatient setting: an overview of the field
Early detection of physiological deterioration has been shown to improve patient outcomes. Due to recent improvements in technology, comprehensive outpatient vital signs monitoring is now possible. This is the first review to collate information on all wearable devices on the market for outpatient physiological monitoring.
A scoping review was undertaken. The monitors reviewed were limited to those that can function in the outpatient setting with minimal restrictions on the patient’s normal lifestyle, while measuring any or all of the vital signs: heart rate, ECG, oxygen saturation, respiration rate, blood pressure and temperature.
A total of 270 papers were included in the review. Thirty wearable monitors were examined: 6 patches, 3 clothing-based monitors, 4 chest straps, 2 upper arm bands and 15 wristbands. The monitoring of vital signs in the outpatient setting is a developing field with differing levels of evidence for each monitor. The most common clinical application was heart rate monitoring. Blood pressure and oxygen saturation measurements were the least common applications. There is a need for clinical validation studies in the outpatient setting to prove the potential of many of the monitors identified.
Research in this area is in its infancy. Future research should aggregate the results of validity, reliability, and patient-outcome studies for each monitor and compare across devices. This would provide a more holistic overview of the clinical potential of each device.
Developing Transferable Deep Models for Mobile Health
Human behavior is one of the key facets of health. A major portion of healthcare spending in the US is attributed to chronic diseases, which are linked to behavioral risk factors such as smoking, drinking, and unhealthy eating.
Mobile devices that are integrated into people's everyday lives make it possible for us to get a closer look into behavior. Two of the most commonly used sensing modalities are Ecological Momentary Assessments (EMAs), which are surveys about mental states, environment, and other factors, and wearable sensors, which capture high-frequency contextual and physiological signals.
One of the main visions of mobile health (mHealth) is sensor-based behavior modification. Contextual data collected from participants is typically used to train a risk prediction model for adverse events such as smoking, which can then inform intervention design. However, an mHealth study involves several design choices, such as the demographics of the participants, the type of sensors used, and the questions included in the EMA. This results in two technical challenges to using machine learning models effectively across mHealth studies. The first is domain shift, where the data distribution varies across studies, causing models trained on one study to perform sub-optimally on a different study. Domain shift is common in wearable sensor data because of the many sources of variability: sensor design, the placement of the sensor on the body, demographics of the users, and so on. The second challenge is covariate-space shift, where the input space itself changes across datasets. This is common across EMA datasets, since the questions can vary from study to study.
This thesis studies the problem of covariate-space shift and domain shift in mHealth data.
First, I study the problem of domain shift caused by differences in the sensor type and placement in ECG and PPG signals. I propose a self-supervised learning based domain adaptation method that captures the physiological structure of these signals to improve transfer performance of predictive models.
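The thesis's actual pretext tasks are not given in the abstract; one common self-supervised setup for 1-D physiological signals is transformation prediction, sketched here with synthetic windows and a linear classifier (all data and names are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def pretext_dataset(signals):
    """Label each window by the transformation applied to it:
    0 = identity, 1 = time-reversal, 2 = negation."""
    X, y = [], []
    for s in signals:
        for label, t in enumerate((s, s[::-1], -s)):
            X.append(t)
            y.append(label)
    return np.array(X), np.array(y)

# Toy 'physiological' windows: noisy ramps stand in for ECG/PPG segments.
signals = [np.linspace(0, 1, 32) + 0.01 * rng.standard_normal(32)
           for _ in range(50)]
X, y = pretext_dataset(signals)
clf = LogisticRegression(max_iter=1000).fit(X, y)
```

An encoder pretrained on a pretext task like this over unlabeled target-domain signals can then be fine-tuned on the labeled prediction task, which is one standard way self-supervision aids domain transfer.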
Second, I present a method to find a common input representation irrespective of the fine-grained questions in EMA datasets to overcome the problem of covariate-space shift.
The next challenge to the deployment of ML models in health is explainability. I explore the problem of bridging the gap between explainability methods and domain experts and present a method to generate plausible, relevant, and convincing explanations.
Continuous Estimation of Smoking Lapse Risk from Noisy Wrist Sensor Data Using Sparse and Positive-Only Labels
Estimating the imminent risk of adverse health behaviors provides opportunities for developing effective behavioral intervention mechanisms to prevent the occurrence of the target behavior. One of the key goals is to find opportune moments for intervention by passively detecting the rising risk of an imminent adverse behavior. Significant progress in mobile health research and the ability to continuously sense internal and external states of individual health and behavior have paved the way for detecting diverse risk factors from mobile sensor data. The next frontier in this research is to account for the combined effects of these risk factors to produce a composite risk score of adverse behaviors using wearable sensors convenient for daily use. Developing a machine learning-based model for assessing the risk of smoking lapse in the natural environment faces significant outstanding challenges, each requiring novel methodology. The first challenge is constructing an accurate representation of noisy and incomplete sensor data that encodes the present and historical influence of behavioral cues, mental states, and the interactions of individuals with their ever-changing environment. The next notable challenge is the absence of confirmed negative labels of low-risk states and of adequately precise annotations of high-risk states. Finally, the model should work on convenient wearable devices to facilitate widespread adoption in research and practice. In this dissertation, we develop methods that account for the multi-faceted nature of smoking lapse behavior to train and evaluate a machine learning model capable of estimating composite risk scores in the natural environment. We first develop mRisk, which combines the effects of various mHealth biomarkers such as stress, physical activity, and location history in producing the risk of smoking lapse using sequential deep neural networks.
We propose an event-based encoding of sensor data to reduce the effect of noise and then present an approach to efficiently model the historical influence of recent and past sensor-derived contexts on the likelihood of smoking lapse. To circumvent the lack of confirmed negative labels (i.e., annotated low-risk moments) and the scarcity of positive labels (i.e., sensor-based detections of smoking lapse corroborated by self-reports), we propose a new loss function to optimize the models accurately. We build the mRisk models using biomarker (stress, physical activity) streams derived from chest-worn sensors. Adapting the models to work with less invasive and more convenient wrist-based sensors requires adapting the biomarker detection models to work with wrist-worn sensor data. To that end, we develop robust stress and activity inference methodologies from noisy wrist-sensor data. We first propose CQP, which quantifies the quality of PPG data collected by wrist sensors. Next, we show that integrating CQP within the inference pipeline improves accuracy-yield trade-offs associated with stress detection from wrist-worn PPG sensors in the natural environment. mRisk also requires sensor-based precise detection of smoking events and confirmation through self-reports to extract positive labels. Hence, we develop rSmoke, an orientation-invariant smoking detection model that is robust to the variations in sensor data resulting from orientation switches in the field. We train the proposed mRisk risk estimation models using the wrist-based inferences of lapse risk factors. To evaluate the utility of the risk models, we simulate the delivery of intelligent smoking interventions to at-risk participants as informed by the composite risk scores. Our results demonstrate the envisaged impact of machine learning-based models operating on wrist-worn wearable sensor data to output continuous smoking lapse risk scores.
The novel methodologies we propose throughout this dissertation help open a new frontier in smoking research that can potentially improve smoking abstinence rates among participants willing to quit.
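The dissertation's loss for sparse, positive-only labels is not spelled out in the abstract; for illustration, here is a standard non-negative positive-unlabeled (PU) risk in the style of Kiryo et al., which treats unlabeled moments as a prior-weighted mix of positives and negatives (the class prior `pi` and all scores below are made up):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_pu_risk(scores_pos, scores_unl, pi=0.3):
    """Non-negative PU risk with logistic loss: unlabeled data is a
    mixture of positives (fraction pi) and negatives, so the negative
    risk is estimated by subtracting the positives' contribution."""
    loss_pos = -np.log(sigmoid(scores_pos)).mean()          # positives as positive
    loss_pos_as_neg = -np.log(sigmoid(-scores_pos)).mean()  # positives as negative
    loss_unl_as_neg = -np.log(sigmoid(-scores_unl)).mean()  # unlabeled as negative
    neg_risk = loss_unl_as_neg - pi * loss_pos_as_neg
    return pi * loss_pos + max(neg_risk, 0.0)               # clamp at zero

risk = nn_pu_risk(np.array([2.0, 1.5]), np.array([-1.0, 0.5, -2.0]))
```

Clamping the estimated negative risk at zero keeps the objective from going negative when the model overfits the few positives, which is the practical point of the non-negative variant.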
Low-power Wearable Healthcare Sensors
Advances in technology have produced a range of on-body sensors and smartwatches that can be used to monitor a wearer's health with the objective of keeping the user healthy. However, the real potential of such devices lies not only in monitoring but also in interactive communication with expert-system-based cloud services that offer personalized, real-time healthcare advice, enabling users to manage their health and, over time, reduce expensive hospital admissions. To meet this goal, the research challenges for the next generation of wearable healthcare devices include the need to offer a wide range of sensing, computing, communication, and human–computer interaction methods, all within a tiny device with limited resources and electrical power. This Special Issue presents a collection of six papers on a wide range of research developments that highlight the specific challenges in creating the next generation of low-power wearable healthcare sensors.
Designing Efficient and Accurate Behavior-Aware Mobile Systems
The proliferation of sensors on smartphones, tablets and wearables has led to a plethora of behavior classification algorithms designed to sense various aspects of an individual user's behavior, such as daily habits, activity, physiology, mobility, sleep, and emotional and social contexts. This ability to sense and understand the behaviors of mobile users will drive the next generation of mobile applications providing services based on the users' behavioral patterns. In this thesis, we investigate ways in which we can enhance and utilize the understanding of user behaviors in such applications. In particular, we focus on identifying the key challenges in the following three aspects of behavior-aware applications: detection, understanding, and prediction of user behaviors; and present systems and techniques developed to address these challenges. In this thesis, we first demonstrate the utility of wristbands equipped with inertial sensors in real-time detection of health-related behaviors such as smoking and eating. Our approach detects these behaviors in a passive manner without any explicit user interaction and does not require the use of any cumbersome device. Our results show that we can detect smoking with 95% accuracy, 91% precision and 81% recall in the natural environment. Second, we design a context-query engine for sensing multiple user contexts continuously, accurately and efficiently on mobile devices; a key requirement for understanding and analyzing behaviors. Our context-query engine performs information fusion of contexts for an individual user to enable optimizations such as (i) energy-efficient sensing and (ii) accurate context inference. Our results show that we can improve the accuracy of a context classifier by up to 42% and reduce the number of classifiers required to observe the user state by 33%.
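The abstract describes the context-query engine only at a high level; the toy function below illustrates the general idea of fusing a cheap context inference to skip a more expensive classifier (the thresholds, context names, and skipping rule are all hypothetical, not the thesis's actual engine):

```python
def infer_contexts(speed_mps, accel_variance):
    """Toy context fusion: evaluate cheap signals first and only run
    the expensive activity classifier when the answer is still needed."""
    contexts = {}
    contexts["driving"] = speed_mps > 8.0   # cheap GPS-speed check
    if contexts["driving"]:
        # Driving rules out walking/running: skip the IMU classifier.
        contexts["activity"] = "stationary"
        contexts["classifiers_run"] = 1
    else:
        # Stand-in for an expensive accelerometer-based classifier.
        contexts["activity"] = "walking" if accel_variance > 0.5 else "still"
        contexts["classifiers_run"] = 2
    return contexts
```

Skipping the second classifier whenever the first already determines the user state is one way such fusion yields the energy savings the thesis reports.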
Finally, we demonstrate the utility of predicting app usage behavior in improving the freshness of mobile apps, such as Facebook, that present users with the latest content fetched from remote servers. We present an app prediction algorithm that utilizes user contexts to predict the app a user is likely to use and pre-fetches the data over the network for the predicted app. We show that our proposed algorithm delivers application content to the user that is, on average, fresh to within 3 minutes.
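The prediction algorithm itself is not detailed in the abstract; a simple per-context frequency baseline shows the shape of such a context-conditioned app predictor (the class and method names are invented for illustration):

```python
from collections import Counter, defaultdict

class AppPredictor:
    """Predict the next app from the current context by keeping
    per-context usage counts (a frequency baseline, not the thesis's
    actual algorithm)."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def record(self, context, app):
        """Log one app launch observed in the given context."""
        self.counts[context][app] += 1

    def predict(self, context):
        """Return the most frequent app for this context, or None."""
        if not self.counts[context]:
            return None
        return self.counts[context].most_common(1)[0][0]

p = AppPredictor()
p.record(("morning", "home"), "news")
p.record(("morning", "home"), "news")
p.record(("morning", "home"), "mail")
p.record(("evening", "gym"), "music")
```

A prefetcher would call `predict` when the context changes and fetch content for the returned app before the user opens it.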
WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM
Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals without demanding the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework for 12 activities in three different spatial environments using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments have demonstrated that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves an overall accuracy of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches an accuracy of 98.54%, 94.25%, and 95.09% across those same environments.
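The paper's models are full deep networks; the core attention step, pooling per-timestep features into one vector before classification, can be sketched in a few lines of NumPy (here the hidden states are random stand-ins for BiLSTM outputs over a CSI time series):

```python
import numpy as np

def attention_pool(H, w):
    """Attention pooling over per-timestep features H (T x D):
    score each timestep with vector w, softmax the scores, and
    return the weighted sum of the timesteps."""
    scores = H @ w                       # one scalar score per timestep
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # attention weights, sum to 1
    return alpha @ H, alpha              # pooled vector (D,), weights (T,)

rng = np.random.default_rng(1)
H = rng.standard_normal((20, 8))         # 20 timesteps, 8-dim features
context, alpha = attention_pool(H, rng.standard_normal(8))
```

In an attention-based BiLSTM the learned weights alpha also indicate which timesteps of the CSI sequence drove the prediction, which helps when inspecting what the model keys on.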