73 research outputs found
Understanding Ambulatory and Wearable Data for Health and Wellness
In our research, we aim (1) to recognize human internal states and behaviors (e.g., stress level, mood, and sleep behaviors), (2) to reveal which features in which data can serve as predictors, and (3) to use them for intervention. We collect multi-modal (physiological, behavioral, environmental, and social) ambulatory data using wearable sensors and mobile phones, combined with standardized questionnaires and data measured in the laboratory. In this paper, we introduce our approach and some of our projects.
Recognition of Sleep Dependent Memory Consolidation with Multi-modal Sensor Data
This paper presents the possibility of recognizing sleep-dependent memory consolidation using multi-modal sensor data. We collected visual discrimination task (VDT) performance before and after sleep in laboratory, hospital, and home settings for N=24 participants while recording EEG (electroencephalogram), EDA (electrodermal activity), and ACC (accelerometer) or actigraphy data during sleep. We extracted features from the sleep data and applied machine learning techniques (discriminant analysis, support vector machine, and k-nearest neighbor) to classify whether the participants showed improvement in the memory task. Our results showed 60–70% accuracy in a binary classification of task performance using EDA or EDA+ACC features, which provided an improvement over the more traditional use of sleep stages (the percentages of slow wave sleep (SWS) in the 1st quarter and rapid eye movement (REM) in the 4th quarter of the night) to predict VDT improvement.
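The pipeline above (features from sleep data, then a binary classifier) can be sketched with scikit-learn. Everything below is an illustrative assumption, not the study's data: the features are synthetic stand-ins for EDA/ACC measures, and the labels are invented.

```python
# Hypothetical sketch of the paper's pipeline: per-participant sleep
# features -> binary "memory improved / not improved" classification with
# the three classifier families named in the abstract. Data are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 24  # participants, as in the study
# Invented stand-ins for EDA/ACC features (e.g. SCR rate, movement count).
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # improved or not

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf")),
                  ("kNN", KNeighborsClassifier(n_neighbors=3))]:
    acc = cross_val_score(clf, X, y, cv=4).mean()
    print(f"{name}: {acc:.2f}")
```

Cross-validation matters here because with only 24 participants a single train/test split would give a very noisy accuracy estimate.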
Pink nodule accompanied with multiple yellow globules at the periphery
Article. JAAD Case Reports, 3(4): 351–353 (2017).
PiRL: Participant-Invariant Representation Learning for Healthcare
Due to individual heterogeneity, performance gaps are observed between
generic (one-size-fits-all) models and person-specific models in data-driven
health applications. However, in real-world applications, generic models are
usually more favorable due to new-user-adaptation issues and system
complexities. To improve the performance of the generic model, we propose
a representation learning framework that learns participant-invariant
representations, named PiRL. The proposed framework utilizes maximum mean
discrepancy (MMD) loss and domain-adversarial training to encourage the model
to learn participant-invariant representations. Further, a triplet loss, which
constrains the model for inter-class alignment of the representations, is
utilized to optimize the learned representations for downstream health
applications. We evaluated our framework on two public datasets related to
physical and mental health, for detecting sleep apnea and stress, respectively.
In preliminary results, the proposed approach yields around a 5% increase in
accuracy over the baseline.
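The MMD term that PiRL-style training minimizes between participants' feature batches can be illustrated with a small numpy sketch. The RBF bandwidth, batch sizes, and toy data here are assumptions for illustration, not the paper's setup.

```python
# Minimal sketch of the (biased) RBF-kernel MMD statistic: it is near
# zero for two batches drawn from the same distribution and grows as the
# distributions diverge, which is why minimizing it encourages
# participant-invariant representations.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise squared distances, then a Gaussian (RBF) kernel.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of squared maximum mean discrepancy."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(100, 4)), rng.normal(size=(100, 4)))
shifted = mmd2(rng.normal(size=(100, 4)), rng.normal(loc=2.0, size=(100, 4)))
print(same, shifted)  # the shifted pair should score much higher
```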
Routine Clustering of Mobile Sensor Data Facilitates Psychotic Relapse Prediction in Schizophrenia Patients
We aim to develop clustering models to obtain behavioral representations from
continuous multimodal mobile sensing data towards relapse prediction tasks. The
identified clusters could represent different routine behavioral trends related
to daily living of patients as well as atypical behavioral trends associated
with impending relapse.
We used the mobile sensing data obtained in the CrossCheck project for our
analysis. Continuous data from six different mobile sensing-based modalities
(e.g., ambient light, sound/conversation, and acceleration) obtained from a
total of 63 schizophrenia patients, each monitored for up to a year, were used
for the clustering models and relapse prediction evaluation. Two clustering
models, Gaussian Mixture Model (GMM) and Partition Around Medoids (PAM), were
used to obtain behavioral representations from the mobile sensing data. The
features obtained from the clustering models were used to train and evaluate a
personalized relapse prediction model using Balanced Random Forest. The
personalization was done by identifying optimal features for a given patient
based on a personalization subset consisting of other patients who are of
similar age.
The clusters identified using the GMM and PAM models were found to represent
different behavioral patterns (such as clusters representing sedentary days,
or days that are active but low in communication). Significant changes near the
relapse periods were seen in the obtained behavioral representation features
from the clustering models. The clustering-model-based features, together with
other features characterizing the mobile sensing data, resulted in an F2 score
of 0.24 for the relapse prediction task in a leave-one-patient-out evaluation
setting. The obtained F2 score is significantly higher than that of a random
classification baseline, which has an average F2 score of 0.042.
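The F2 metric reported above weights recall four times as heavily as precision (beta = 2), which suits relapse prediction, where missing a relapse is costlier than a false alarm. A toy computation with invented labels:

```python
# Illustrative F2 computation; the labels are a toy example, not the
# CrossCheck data. Here 1 = relapse period, 0 = stable.
from sklearn.metrics import fbeta_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0, 1, 0]
# F_beta = (1 + b^2) * P * R / (b^2 * P + R); here P = R = 0.75.
score = fbeta_score(y_true, y_pred, beta=2)
print(score)  # 0.75
```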
QuantifyMe: An Open-Source Automated Single-Case Experimental Design Platform
Smartphones and wearable sensors have enabled unprecedented data collection, with many products now providing feedback to users about recommended step counts or sleep durations. However, these recommendations do not provide the personalized insights that have been shown to be best suited for a specific individual. A scientific way to find individualized recommendations and causal links is to conduct experiments using single-case experimental design; however, properly designed single-case experiments are not easy to conduct on oneself. We designed, developed, and evaluated a novel platform, QuantifyMe, for novice self-experimenters to conduct proper-methodology single-case self-experiments in an automated and scientific manner using their smartphones. We provide the software for the platform (available for free on GitHub), which provides the methodological elements to run many kinds of customized studies. In this work, we evaluate its use with four different kinds of personalized investigations, examining how variables such as sleep duration and regularity, activity, and leisure time affect personal happiness, stress, productivity, and sleep efficiency. We conducted a six-week pilot study (N=13) to evaluate QuantifyMe. We describe the lessons learned developing the platform and recommendations for its improvement, as well as its potential for enabling personalized insights to be scientifically evaluated in many individuals, reducing the high administrative cost of advancing human health and wellbeing.
Keywords: single-case experimental design; mobile health; wearable sensors; self-experiment; self-tracking
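The single-case (n-of-1) logic that a platform like QuantifyMe automates can be illustrated with a toy alternating-phase comparison. The phase design and all numbers below are invented for illustration and are not from the pilot study.

```python
# Toy ABAB single-case design: alternate phases that set a variable
# (A = usual sleep, B = extended sleep) and compare a daily outcome
# (self-reported stress, 1-10) across phases.
import statistics

phases = ["A"] * 5 + ["B"] * 5 + ["A"] * 5 + ["B"] * 5
stress = [6, 7, 6, 5, 7,  4, 3, 4, 3, 4,  6, 6, 7, 5, 6,  3, 4, 3, 4, 3]

a = [s for p, s in zip(phases, stress) if p == "A"]
b = [s for p, s in zip(phases, stress) if p == "B"]
effect = statistics.mean(a) - statistics.mean(b)
print(f"mean stress drop in B phases: {effect:.2f}")  # 2.60
```

Repeating each phase (ABAB rather than just AB) is what lets a single-case design argue the change tracks the manipulation rather than a coincident trend.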
SleepNet: Attention-Enhanced Robust Sleep Prediction using Dynamic Social Networks
Sleep behavior significantly impacts health and acts as an indicator of
physical and mental well-being. Monitoring and predicting sleep behavior with
ubiquitous sensors may therefore assist in both sleep management and tracking
of related health conditions. While sleep behavior depends on, and is reflected
in the physiology of a person, it is also impacted by external factors such as
digital media usage, social network contagion, and the surrounding weather. In
this work, we propose SleepNet, a system that exploits social contagion in
sleep behavior through graph networks and integrates it with physiological and
phone data extracted from ubiquitous mobile and wearable devices for predicting
next-day sleep duration labels. Our architecture overcomes the
limitations of large-scale graphs containing connections irrelevant to sleep
behavior by devising an attention mechanism. The extensive experimental
evaluation highlights the improvement provided by incorporating social networks
in the model. Additionally, we conduct robustness analysis to demonstrate the
system's performance in real-life conditions. The outcomes affirm the stability
of SleepNet against perturbations in input data. Further analyses emphasize the
significance of network topology in prediction performance, revealing that users
with higher eigenvector centrality are more vulnerable to data perturbations.
Comment: Accepted for publication in Proceedings of the ACM on Interactive,
Mobile, Wearable and Ubiquitous Technologies (IMWUT), 8 (March 2024).
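The attention idea described above, down-weighting social connections irrelevant to sleep before aggregating their signals, can be sketched in numpy. The dot-product scoring below is a simplified stand-in for the model's learned attention, and all embeddings are toy values.

```python
# Minimal sketch of attention-based neighbor aggregation: each neighbor's
# embedding is weighted by a softmax over relevance scores, so irrelevant
# connections contribute little to the pooled message.
import numpy as np

def attention_aggregate(h_self, h_neighbors):
    """Softmax-weighted average of neighbor embeddings."""
    scores = h_neighbors @ h_self           # relevance of each neighbor
    w = np.exp(scores - scores.max())       # numerically stable softmax
    w /= w.sum()
    return w @ h_neighbors                  # attention-pooled message

rng = np.random.default_rng(0)
h_self = rng.normal(size=4)        # the user's own embedding
h_neis = rng.normal(size=(5, 4))   # embeddings of 5 social connections
msg = attention_aggregate(h_self, h_neis)
print(msg.shape)  # (4,)
```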
Wavelet-based motion artifact removal for electrodermal activity
Electrodermal activity (EDA) recording is a powerful, widely used tool for monitoring psychological or physiological arousal. However, analysis of EDA is hampered by its sensitivity to motion artifacts. We propose a method for removing motion artifacts from EDA, measured as skin conductance (SC), using a stationary wavelet transform (SWT). We modeled the wavelet coefficients as a Gaussian mixture distribution corresponding to the underlying skin conductance level (SCL) and skin conductance responses (SCRs). The goodness-of-fit of the model was validated on ambulatory SC data. We evaluated the proposed method in comparison with three previous approaches. Our method achieved a greater reduction of artifacts while retaining motion-artifact-free data.
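A simplified numpy sketch of the stationary-wavelet idea: a one-level undecimated Haar transform separates a slow SCL-like trend from fast artifact-like spikes, which can then be attenuated. This is not the paper's method: the real approach models the coefficients with a Gaussian mixture, whereas the robust threshold below is a stand-in assumption, and the signal is synthetic.

```python
# One-level undecimated (stationary) Haar transform with hard
# thresholding of detail coefficients, as a toy analogue of SWT-based
# motion-artifact suppression in skin conductance data.
import numpy as np

def haar_swt(x):
    """One-level undecimated Haar transform (no downsampling)."""
    shifted = np.roll(x, -1)
    approx = (x + shifted) / 2.0   # slow trend (SCL-like)
    detail = (x - shifted) / 2.0   # fast changes (SCRs and artifacts)
    return approx, detail

def suppress_spikes(x, k=3.0):
    """Zero detail coefficients that look like motion artifacts."""
    approx, detail = haar_swt(x)
    sigma = np.median(np.abs(detail)) / 0.6745  # robust noise estimate
    detail = np.where(np.abs(detail) > k * sigma, 0.0, detail)
    return approx + detail  # exact inverse when coefficients are unchanged

base = np.sin(np.linspace(0, 4 * np.pi, 200))  # smooth SC-like signal
x = base.copy()
x[50] += 5.0  # injected motion artifact
cleaned = suppress_spikes(x)
print(abs(x[50] - base[50]), abs(cleaned[50] - base[50]))
```

Because the transform is undecimated, it is shift-invariant, which is one reason the SWT is preferred over the decimated wavelet transform for artifact localization.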
Multimodal annotation tool for challenging behaviors in people with Autism spectrum disorders
Individuals diagnosed with Autism Spectrum Disorders (ASD) often have challenging behaviors (CBs), such as self-injury or emotional outbursts, which can negatively impact the quality of life of themselves and those around them. Recent advances in mobile and ubiquitous technologies provide an opportunity to efficiently and accurately capture important information preceding and associated with these CBs. The ability to obtain this type of data will help with both intervention and behavioral phenotyping efforts. Through collaboration with behavioral scientists and therapists, we identified relevant design requirements and created an easy-to-use mobile application for collecting, labeling, and sharing in-situ behavior data in individuals diagnosed with ASD. Furthermore, we have released the application to the community as an open-source project so it can be validated and extended by other researchers.
Funding: National Science Foundation (U.S.) (Grant NSF CCF-1029585); MIT Media Lab Consortium; Autism Speaks (Innovative Technology for Autism Initiative Grant)