6 research outputs found
A Large-Scale Study of a Sleep Tracking and Improving Device with Closed-loop and Personalized Real-time Acoustic Stimulation
Various intervention therapies, ranging from pharmaceutical treatments to
hi-tech tailored solutions, are available to treat the difficulty falling
asleep commonly caused by insomnia in modern life. However, current techniques
largely remain ill-suited, ineffective, and unreliable because they lack
precise real-time sleep tracking, timely feedback on the therapies, the
ability to keep people asleep through the night, and large-scale effectiveness
evaluation.
Here, we introduce a novel sleep aid system, called Earable, that can
continuously sense multiple head-based physiological signals and simultaneously
enable closed-loop auditory stimulation to entrain brain activities in time for
effective sleep promotion. We implement the system as a lightweight,
comfortable, and user-friendly headband with a comprehensive set of algorithms
and dedicated, custom-designed audio stimuli. We conducted multiple protocols
across 883 sleep
studies on 377 subjects (241 women, 119 men) wearing either a gold-standard
device (PSG), Earable, or both concurrently. We demonstrate that our system
achieves (1) a strong correlation (0.89 +/- 0.03) between the physiological
signals acquired by Earable and those from the gold-standard PSG, (2) an 87.8
+/- 5.3% agreement on sleep scoring using our automatic real-time sleep staging
algorithm with the consensus scored by three sleep technicians, and (3) a
successful non-pharmacological stimulation alternative that shortens
sleep-onset duration by 24.1 +/- 0.1 minutes. These results show that Earable
exceeds existing techniques in promoting fast sleep onset, tracking sleep
state accurately, and achieving high social acceptance for real-time,
closed-loop, personalized neuromodulation-based home sleep care.
Comment: 33 pages, 8 figures
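The headband-vs-PSG agreement above is reported as a correlation coefficient between the two devices' signals. A minimal sketch of how such a figure might be computed per channel (Pearson's r, with toy sample values standing in for real recordings) is:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy stand-ins for simultaneous samples of one channel from each device.
earable = [0.10, 0.40, 0.35, 0.80, 0.60, 0.90, 0.70, 1.10]
psg     = [0.12, 0.38, 0.40, 0.75, 0.65, 0.95, 0.68, 1.05]

r = pearson_r(earable, psg)  # close to 1.0 for these near-identical traces
```

In practice such a coefficient would be computed per channel and per recording, then aggregated across the 883 studies to give the mean ± SD reported above.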
A comprehensive study on the efficacy of a wearable sleep aid device featuring closed-loop real-time acoustic stimulation
Difficulty falling asleep is one of the typical symptoms of insomnia. However, the intervention therapies available today, ranging from pharmaceutical treatments to hi-tech tailored solutions, remain ineffective because they lack precise real-time sleep tracking, timely feedback on the therapies, and the ability to keep people asleep during the night. This paper aims to enhance the efficacy of such interventions by proposing a novel sleep aid system that can sense multiple physiological signals continuously and simultaneously control auditory stimulation to evoke appropriate brain responses for fast sleep promotion. The system, a lightweight, comfortable, and user-friendly headband, employs a comprehensive set of algorithms and dedicated, custom-designed audio stimuli. Compared to the gold-standard device in 883 sleep studies on 377 subjects, the proposed system achieves (1) a strong correlation (0.89 ± 0.03) between the physiological signals acquired by our device and those from the gold-standard PSG, (2) an 87.8% agreement between our automatic sleep scoring and the consensus scored by sleep technicians, and (3) a successful non-pharmacological real-time stimulation that shortens sleep-onset duration by 24.1 min. In conclusion, our solution exceeds existing ones in promoting fast sleep onset, tracking sleep state accurately, and achieving high social acceptance through a reliable large-scale evaluation.
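The sleep-scoring agreement above is an epoch-level comparison between automatic labels and a technician consensus. A minimal sketch of percentage agreement over a toy hypnogram is shown below; Cohen's kappa is included as an assumption on our part (a common chance-corrected companion metric), since the abstract reports only percentage agreement:

```python
from collections import Counter

def agreement_and_kappa(auto, consensus):
    """Percentage agreement and Cohen's kappa between two label sequences."""
    n = len(auto)
    observed = sum(a == c for a, c in zip(auto, consensus)) / n
    pa = Counter(auto)
    pc = Counter(consensus)
    labels = set(auto) | set(consensus)
    # Chance agreement expected from each rater's marginal label frequencies.
    expected = sum((pa[l] / n) * (pc[l] / n) for l in labels)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Toy 30-second-epoch hypnograms: W = wake, N = NREM, R = REM.
auto      = list("WWNNNNRRNNWW")
consensus = list("WWNNNNRRNNNW")  # one epoch scored differently

obs, kappa = agreement_and_kappa(auto, consensus)  # obs = 11/12
```

Real sleep staging uses five stages (W, N1, N2, N3, REM) over full-night recordings, but the agreement arithmetic is the same.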
Explainable machine learning predictions of perceptual sensitivity for retinal prostheses.
Objective: Retinal prostheses evoke visual percepts by electrically stimulating functioning cells in the retina. Despite high variance in perceptual thresholds across subjects, among electrodes within a subject, and over time, retinal prosthesis users must undergo 'system fitting', a process performed to calibrate stimulation parameters according to the subject's perceptual thresholds. Although previous work has identified electrode-retina distance and impedance as key factors affecting thresholds, an accurate predictive model is still lacking. Approach: To address these challenges, we 1) fitted machine learning (ML) models to a large longitudinal dataset with the goal of predicting individual electrode thresholds and deactivation as a function of stimulus, electrode, and clinical parameters ('predictors') and 2) leveraged explainable artificial intelligence (XAI) to reveal which of these predictors were most important. Main results: Our models accounted for up to 76% of the perceptual threshold response variance and enabled predictions of whether an electrode was deactivated in a given trial with F1 and AUC scores of up to 0.730 and 0.910, respectively. Our models identified novel predictors of perceptual sensitivity, including subject age, time since blindness onset, and electrode-fovea distance. Significance: Our results demonstrate that routinely collected clinical measures and a single session of system fitting might be sufficient to inform an XAI-based threshold prediction strategy, which has the potential to transform clinical practice in predicting visual outcomes.
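The deactivation-prediction performance above is reported as F1 and AUC. A minimal from-scratch sketch of these two metrics, using toy labels and scores rather than the study's data, is:

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)

def auc(y_true, scores):
    """AUC as the probability a positive outranks a negative (ties = 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = electrode deactivated, scores from a hypothetical model.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]

f1 = f1_score(y_true, y_pred)   # 0.75 on this toy data
roc_auc = auc(y_true, scores)   # 0.9375 on this toy data
```

Note that F1 depends on the chosen decision threshold (0.5 here), while AUC summarizes ranking quality across all thresholds, which is why both are commonly reported together.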
A pilot study of the Earable device to measure facial muscle and eye movement tasks among healthy volunteers.
The Earable device is a behind-the-ear wearable originally developed to measure cognitive function. Since Earable measures electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), it may also have the potential to objectively quantify facial muscle and eye movement activities relevant in the assessment of neuromuscular disorders. As an initial step toward developing a digital assessment for neuromuscular disorders, a pilot study was conducted to determine whether the Earable device could be used to objectively measure facial muscle and eye movements intended to be representative of Performance Outcome Assessments (PerfOs), with tasks designed to model clinical PerfOs, referred to as mock-PerfO activities. The specific aims of this study were: to determine whether the raw Earable EMG, EOG, and EEG signals could be processed to extract features describing these waveforms; to determine the data quality, test-retest reliability, and statistical properties of Earable features; to determine whether features derived from Earable could be used to distinguish between various facial muscle and eye movement activities; and to determine which features and feature types are important for mock-PerfO activity classification. A total of N = 10 healthy volunteers participated in the study. Each participant performed 16 mock-PerfO activities, including talking, chewing, swallowing, eye closure, gazing in different directions, puffing cheeks, chewing an apple, and making various facial expressions. Each activity was repeated four times in the morning and four times at night. A total of 161 summary features were extracted from the EEG, EMG, and EOG bio-sensor data. Feature vectors were used as input to machine learning models to classify the mock-PerfO activities, and model performance was evaluated on a held-out test set.
Additionally, a convolutional neural network (CNN) was used to classify low-level representations of the raw bio-sensor data for each task, and model performance was correspondingly evaluated and compared directly to the feature-based classification performance. Model prediction accuracy was used to quantitatively assess the Earable device's classification ability. Study results indicate that Earable can potentially quantify different aspects of facial and eye movements and may be used to differentiate mock-PerfO activities. Specifically, Earable differentiated talking, chewing, and swallowing tasks from other tasks with observed F1 scores >0.9. While EMG features contribute to classification accuracy for all tasks, EOG features are important for classifying gaze tasks. Finally, we found that analysis with summary features outperformed a CNN for activity classification. We believe Earable may be used to measure cranial muscle activity relevant for neuromuscular disorder assessment. The classification performance of mock-PerfO activities with summary features enables a strategy for detecting disease-specific signals relative to controls, as well as for monitoring intra-subject treatment responses. Further testing is needed to evaluate the Earable device in clinical populations and clinical development settings.
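The feature-based pipeline described above (summary feature vectors in, activity label out, evaluation on a held-out split) can be sketched with a deliberately simple nearest-centroid classifier on synthetic feature vectors. The three feature names and all values here are illustrative assumptions, not the study's 161 features:

```python
import math
import random

random.seed(0)

# Hypothetical 3-feature summary vectors (e.g., EMG power, EOG range, EEG
# band ratio) for three mock-PerfO activities; values are synthetic.
def make_samples(center, n=20, spread=0.3):
    return [[c + random.gauss(0, spread) for c in center] for _ in range(n)]

data = {
    "talking":   make_samples([3.0, 0.5, 1.0]),
    "chewing":   make_samples([2.5, 0.4, 2.0]),
    "gaze_left": make_samples([0.3, 2.5, 1.0]),
}

# Split each activity into train and held-out test portions.
train, test = {}, []
for label, samples in data.items():
    train[label] = samples[:15]
    test += [(label, s) for s in samples[15:]]

# Nearest-centroid classifier: label = closest per-class feature mean.
centroids = {l: [sum(col) / len(col) for col in zip(*s)]
             for l, s in train.items()}

def predict(x):
    return min(centroids, key=lambda l: math.dist(x, centroids[l]))

accuracy = sum(predict(x) == label for label, x in test) / len(test)
```

The study compared such feature-vector models against a CNN on raw signal windows and found the summary-feature approach performed better, which is a common outcome at pilot-study sample sizes (N = 10).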
A pilot study of the Earable device to measure facial muscle and eye movement tasks among healthy volunteers
The Earable device is a behind-the-ear wearable originally developed to measure cognitive function. Since Earable measures electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), it may also have the potential to objectively quantify facial muscle and eye movement activities relevant in the assessment of neuromuscular disorders. As an initial step toward developing a digital assessment for neuromuscular disorders, a pilot study was conducted to determine whether the Earable device could be used to objectively measure facial muscle and eye movements intended to be representative of Performance Outcome Assessments (PerfOs), with tasks designed to model clinical PerfOs, referred to as mock-PerfO activities. The specific aims of this study were: to determine whether the raw Earable EMG, EOG, and EEG signals could be processed to extract features describing these waveforms; to determine the data quality, test-retest reliability, and statistical properties of Earable features; to determine whether features derived from Earable could be used to distinguish between various facial muscle and eye movement activities; and to determine which features and feature types are important for mock-PerfO activity classification. A total of N = 10 healthy volunteers participated in the study. Each participant performed 16 mock-PerfO activities, including talking, chewing, swallowing, eye closure, gazing in different directions, puffing cheeks, chewing an apple, and making various facial expressions. Each activity was repeated four times in the morning and four times at night. A total of 161 summary features were extracted from the EEG, EMG, and EOG bio-sensor data. Feature vectors were used as input to machine learning models to classify the mock-PerfO activities, and model performance was evaluated on a held-out test set.
Additionally, a convolutional neural network (CNN) was used to classify low-level representations of the raw bio-sensor data for each task, and model performance was correspondingly evaluated and compared directly to the feature-based classification performance. Model prediction accuracy was used to quantitatively assess the Earable device's classification ability. Study results indicate that Earable can potentially quantify different aspects of facial and eye movements and may be used to differentiate mock-PerfO activities. Specifically, Earable differentiated talking, chewing, and swallowing tasks from other tasks with observed F1 scores >0.9. While EMG features contribute to classification accuracy for all tasks, EOG features are important for classifying gaze tasks. Finally, we found that analysis with summary features outperformed a CNN for activity classification. We believe Earable may be used to measure cranial muscle activity relevant for neuromuscular disorder assessment. The classification performance of mock-PerfO activities with summary features enables a strategy for detecting disease-specific signals relative to controls, as well as for monitoring intra-subject treatment responses. Further testing is needed to evaluate the Earable device in clinical populations and clinical development settings.

Author summary: Many neuromuscular disorders impair the function of cranial nerve-innervated muscles. Clinical assessment of cranial muscle function has several limitations. Clinician ratings of symptoms suffer from inter-rater variation, qualitative or semi-quantitative scoring, and a limited ability to capture infrequent or fluctuating symptoms. Patient-reported outcomes are limited by recall bias and poor precision. Current tools to measure orofacial and oculomotor function are cumbersome, difficult to implement, and non-portable.
Here, we show how Earable, a wearable device, can discriminate certain cranial muscle activities such as chewing, talking, and swallowing. Using data from a pilot study, we demonstrate how Earable can measure features from EMG, EEG, and EOG waveforms from subjects wearing the device while performing mock Performance Outcome Assessments (PerfOs), which are utilized widely in clinical research. Our analysis pipeline provides a framework for how to computationally process and statistically rank features from the Earable device. Our results, from a pilot study of healthy participants, enable a more comprehensive strategy for the design, development, and analysis of wearable sensor data for investigating clinical populations. Understanding how to derive clinically meaningful quantitative metrics from wearable sensor devices is required for the development of novel digital endpoints, a hallmark goal of clinical research.
A pilot study of the Earable device to measure facial muscle and eye movement tasks among healthy volunteers
Many neuromuscular disorders impair the function of cranial nerve-innervated
muscles. Clinical assessment of cranial muscle function has several
limitations. Clinician rating of symptoms suffers from inter-rater variation,
qualitative or semi-quantitative scoring, and limited ability to capture
infrequent or fluctuating symptoms. Patient-reported outcomes are limited by
recall bias and poor precision. Current tools to measure orofacial and
oculomotor function are cumbersome, difficult to implement, and non-portable.
Here, we show how Earable, a wearable device, can discriminate certain cranial
muscle activities such as chewing, talking, and swallowing. We demonstrate
using data from a pilot study of 10 healthy participants how Earable can be
used to measure features from EMG, EEG, and EOG waveforms from subjects
performing mock Performance Outcome Assessments (mock-PerfOs), utilized widely
in clinical research. Our analysis pipeline provides a framework for how to
computationally process and statistically rank features from the Earable
device. Finally, we demonstrate that Earable data may be used to classify these
activities. Our results, from a pilot study of healthy participants,
enable a more comprehensive strategy for the design, development, and analysis
of wearable sensor data for investigating clinical populations. Additionally,
the results from this study support further evaluation of Earable or similar
devices as tools to objectively measure cranial muscle activity in the context
of a clinical research setting. Future work will be conducted in clinical
disease populations, with a focus on detecting disease signatures, as well as
monitoring intra-subject treatment responses. Readily available quantitative
metrics from wearable sensor devices like Earable support strategies for the
development of novel digital endpoints, a hallmark goal of clinical research.
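The "statistically rank features" step mentioned above can be illustrated with a one-way ANOVA F statistic per feature (higher F = stronger separation of the activity groups relative to within-group noise). The two feature names and all values here are hypothetical, not taken from the study:

```python
def anova_f(groups):
    """One-way ANOVA F statistic for one feature across activity groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group variability (how far group means sit from the grand mean).
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group variability (noise around each group's own mean).
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical per-activity values (3 activities x 3 repeats) for two features.
emg_power = [[5.1, 4.9, 5.3], [1.0, 1.2, 0.9], [1.1, 0.8, 1.0]]  # separates well
eeg_alpha = [[2.0, 2.4, 1.8], [2.1, 1.9, 2.3], [2.2, 2.0, 1.7]]  # barely varies

f_scores = {"emg_power": anova_f(emg_power), "eeg_alpha": anova_f(eeg_alpha)}
ranked = sorted(f_scores, key=f_scores.get, reverse=True)
```

Applied over all extracted features, such a ranking identifies which signal modalities (EMG, EOG, EEG) carry the discriminative information for each activity.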