10 research outputs found

    Efficient Personalized Learning for Wearable Health Applications using HyperDimensional Computing

    Health monitoring applications increasingly rely on machine learning techniques to learn end-user physiological and behavioral patterns in everyday settings. Given the significant role of wearable devices in monitoring human body parameters, on-device learning can be used to build personalized models of behavioral and physiological patterns while also providing data privacy for users. However, resource constraints on most wearable devices prevent them from performing online learning. To address this issue, machine learning models must be rethought from the algorithmic perspective so that they are suitable to run on wearable devices. Hyperdimensional computing (HDC) offers a well-suited on-device learning solution for resource-constrained devices and provides support for privacy-preserving personalization. Our HDC-based method offers flexibility, high efficiency, resilience, and performance while enabling on-device personalization and privacy protection. We evaluate the efficacy of our approach using three case studies and show that our system improves the energy efficiency of training by up to 45.8× compared with state-of-the-art deep neural network (DNN) algorithms while offering comparable accuracy.
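    The abstract above does not spell out the HDC encoder, but the general scheme can be sketched as follows. This is a minimal illustration assuming random bipolar hypervectors, a quantized feature representation, and simple bundling into class prototypes; the function names and the dimensionality D = 10,000 are illustrative choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

# Item memories: one hypervector per quantized feature level and per channel.
levels = {i: random_hv() for i in range(4)}
ids = [random_hv() for _ in range(3)]

def encode(sample):
    """Bind each channel id with its level HV, then bundle (sum + sign)."""
    bound = [ids[c] * levels[v] for c, v in enumerate(sample)]
    return np.sign(np.sum(bound, axis=0))

def train(samples, labels):
    """Class prototypes are bundles of the encoded training samples."""
    protos = {}
    for s, y in zip(samples, labels):
        protos[y] = protos.get(y, np.zeros(D)) + encode(s)
    return {y: np.sign(v) for y, v in protos.items()}

def classify(protos, sample):
    """Predict the class whose prototype is most similar to the query."""
    hv = encode(sample)
    return max(protos, key=lambda y: np.dot(protos[y], hv))
```

    Training and inference here reduce to elementwise multiplication, addition, and a dot product, which is what makes HDC attractive for resource-constrained wearables.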

    Edge-centric Optimization of Multi-modal ML-driven eHealth Applications

    Smart eHealth applications deliver personalized and preventive digital healthcare services to clients through remote sensing, continuous monitoring, and data analytics. Smart eHealth applications sense input data from multiple modalities, transmit the data to edge and/or cloud nodes, and process the data with compute-intensive machine learning (ML) algorithms. Run-time variations arising from the continuous stream of noisy input data, unreliable network connections, the computational requirements of ML algorithms, and the choice of compute placement among sensor-edge-cloud layers affect the efficiency of ML-driven eHealth applications. In this chapter, we present edge-centric techniques for optimized compute placement, exploration of accuracy-performance trade-offs, and cross-layered sense-compute co-optimization for ML-driven eHealth applications. We demonstrate practical use cases of smart eHealth applications in everyday settings through a sensor-edge-cloud framework for an objective pain assessment case study.
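    Compute placement among sensor, edge, and cloud layers can be framed as minimizing an estimated cost per layer. The following is a hypothetical, much-simplified cost model (latency only, with made-up per-MB compute factors), not the chapter's actual optimization:

```python
def placement_cost(layer, data_mb, net_mbps, compute_factor):
    """Estimated end-to-end latency (s): transfer time + compute time.
    Sensor-local processing incurs no network transfer."""
    transfer = 0.0 if layer == "sensor" else data_mb * 8 / net_mbps
    compute = compute_factor[layer] * data_mb
    return transfer + compute

def choose_placement(data_mb, net_mbps):
    """Pick the layer with the lowest estimated latency.
    Per-MB compute times below are assumed: sensor slowest, cloud fastest."""
    compute_factor = {"sensor": 2.0, "edge": 0.5, "cloud": 0.05}
    return min(("sensor", "edge", "cloud"),
               key=lambda l: placement_cost(l, data_mb, net_mbps, compute_factor))
```

    Under this toy model, a fast network favors offloading to the cloud, while a degraded connection pushes computation back toward the sensor, which is the run-time variation the chapter describes.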

    GSR Analysis for Stress: Development and Validation of an Open Source Tool for Noisy Naturalistic GSR Data

    Stress detection is receiving considerable attention in related research communities, owing to its essential role in behavioral studies of many serious health problems and physical illnesses. There are different methods and algorithms for stress detection using different physiological signals. Previous studies have shown that Galvanic Skin Response (GSR), also known as Electrodermal Activity (EDA), is one of the leading indicators of stress. However, the GSR signal itself is not trivial to analyze. Different features are extracted from GSR signals to detect stress, such as the number of peaks and the maximum peak amplitude. In this paper, we propose an open-source tool for GSR analysis, which uses deep learning algorithms alongside statistical algorithms to extract GSR features for stress detection. We then use different machine learning algorithms and the Wearable Stress and Affect Detection (WESAD) dataset to evaluate our results. The results show that we can detect stress with an accuracy of 92% using 10-fold cross-validation and the features extracted by our tool.
    Comment: 6 pages and 5 figures. Link to the GitHub repository of the tool: https://github.com/HealthSciTech/pyED
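    Peak-based GSR features such as those mentioned above (number of peaks, maximum peak amplitude) can be computed with a few lines of NumPy. This is a generic sketch with an assumed amplitude threshold, not the tool's actual feature extractor:

```python
import numpy as np

def gsr_peak_features(signal, min_amp=0.01):
    """Count local maxima above `min_amp` (assumed threshold) and return
    the largest peak amplitude relative to the signal minimum."""
    s = np.asarray(signal, dtype=float)
    # A sample is a peak if it exceeds both of its neighbors.
    is_peak = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])
    amps = s[1:-1][is_peak] - s.min()
    amps = amps[amps >= min_amp]
    return {"num_peaks": int(amps.size),
            "max_peak_amplitude": float(amps.max()) if amps.size else 0.0}
```

    A production tool would typically first separate the tonic and phasic components of the signal and apply smoothing, which this sketch omits.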

    Pain Recognition With Electrocardiographic Features in Postoperative Patients: Method Validation Study

    Background: There is a strong demand for an accurate and objective means of assessing acute pain among hospitalized patients to help clinicians provide pain medications at a proper dosage and in a timely manner. Heart rate variability (HRV) comprises changes in the time intervals between consecutive heartbeats, which can be measured through acquisition and interpretation of electrocardiography (ECG) captured from bedside monitors or wearable devices. Because increased sympathetic activity affects the HRV, an index of autonomic regulation of heart rate, ultra-short-term HRV analysis can provide a reliable source of information for acute pain monitoring. In this study, widely used HRV time and frequency domain measurements are used in acute pain assessment among postoperative patients. Existing approaches have focused only on stimulated pain in healthy subjects, whereas, to the best of our knowledge, no work in the literature builds models using real pain data from postoperative patients.
    Objective: The objective of our study was to develop and evaluate an automatic and adaptable pain assessment algorithm based on ECG features for assessing acute pain in postoperative patients likely experiencing mild to moderate pain.
    Methods: The study used a prospective observational design. The sample consisted of 25 patient participants aged 18 to 65 years. In part 1 of the study, a transcutaneous electrical nerve stimulation unit was employed to obtain baseline discomfort thresholds for the patients. In part 2, a multichannel biosignal acquisition device was used while patients were engaging in non-noxious activities. At all times, pain intensity was measured using patient self-reports based on the Numerical Rating Scale. A weak supervision framework was adopted for rapid training data creation. The collected labels were then transformed from 11 intensity levels to 5 intensity levels. Prediction models were developed using 5 different machine learning methods. Mean prediction accuracy was calculated using leave-one-out cross-validation. We compared the performance of these models with the results from a previously published research study.
    Results: Five different machine learning algorithms were applied to perform binary classification of baseline (BL) versus 4 distinct pain levels (PL1 through PL4). Using 3 time domain HRV features from the BioVid research paper, the highest validation accuracy for baseline versus any other pain level was achieved by a support vector machine (SVM), ranging from 62.72% (BL vs PL4) to 84.14% (BL vs PL2). Similar results were achieved for the top 8 features selected by the Gini index using the SVM method, with accuracy ranging from 63.86% (BL vs PL4) to 84.79% (BL vs PL2).
    Conclusions: We propose a novel pain assessment method for postoperative patients using the ECG signal. Weak supervision applied to labeling and feature extraction improves the robustness of the approach. Our results show the viability of using a machine learning algorithm to accurately and objectively assess acute pain among hospitalized patients.
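    The time domain HRV measurements referenced above are standard quantities computed from successive RR intervals. A minimal sketch (the feature names and units are conventional; the study's exact feature list is not reproduced here):

```python
import numpy as np

def hrv_time_features(rr_ms):
    """Common time-domain HRV measures from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_rr": rr.mean(),                       # mean interbeat interval
        "sdnn": rr.std(ddof=1),                     # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),       # short-term variability
        "pnn50": np.mean(np.abs(diff) > 50) * 100,  # % successive diffs > 50 ms
    }
```

    Ultra-short-term analysis, as used in the study, computes these same measures over windows of a few tens of seconds rather than the conventional 5-minute recordings.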

    Pain assessment tool with electrodermal activity for postoperative patients: Method validation study

    Background: Accurate, objective pain assessment is required in the health care domain and clinical settings for appropriate pain management. Automated, objective pain detection from physiological data provides valuable information to hospital staff and caregivers to better manage pain, particularly for patients who are unable to self-report. Galvanic skin response (GSR) is a physiological signal that reflects changes in sweat gland activity, which can reveal features of emotional states and anxiety induced by varying pain levels. This study used different statistical features extracted from GSR data collected from postoperative patients to detect their pain intensity. To the best of our knowledge, this is the first work building pain models using postoperative adult patients instead of healthy subjects.
    Objective: The goal of this study was to present an automatic pain assessment tool using GSR signals to predict different pain intensities in noncommunicative, postoperative patients.
    Methods: The study was designed to collect biomedical data from postoperative patients reporting moderate to high pain levels. We recruited 25 participants aged 23-89 years. First, a transcutaneous electrical nerve stimulation (TENS) unit was employed to obtain patients' baseline data. In the second part, the Empatica E4 wristband was worn by patients while they were performing low-intensity activities. Patient self-report based on the numeric rating scale (NRS) was used to record pain intensities, which were correlated with the objectively measured data. The labels were down-sampled from 11 pain levels to 5 different pain intensities, including the baseline. We used 2 different machine learning algorithms to construct the models. The mean decrease impurity method was used to find the most important features for pain prediction and improve the accuracy. We compared our results with a previously published research study to estimate the true performance of our models.
    Results: Four different binary classification models were constructed using each machine learning algorithm to classify the baseline and other pain intensities (Baseline [BL] vs Pain Level [PL] 1, BL vs PL2, BL vs PL3, and BL vs PL4). Our models achieved higher accuracy for the first 3 pain models than the BioVid paper approach, despite the challenges of analyzing real patient data. For BL vs PL1, BL vs PL2, and BL vs PL4, the highest prediction accuracies were achieved using a random forest classifier (86.0%, 70.0%, and 61.5%, respectively). For BL vs PL3, we achieved an accuracy of 72.1% using a k-nearest-neighbor classifier.
    Conclusions: We are the first to propose and validate a pain assessment tool that predicts different pain levels in real postoperative adult patients using GSR signals. We also exploited feature selection algorithms to find the most important features related to different pain intensities.
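    The mean decrease impurity method mentioned above scores each feature by how much it reduces Gini impurity when used for splitting. A toy illustration using a single-threshold "stump" per feature (real random forests average this over many trees; this is not the study's implementation):

```python
import numpy as np

def gini(y):
    """Gini impurity of a binary label array."""
    p = y.mean()
    return 2 * p * (1 - p)

def stump_importance(X, y):
    """Impurity-decrease score per feature from the best single
    threshold split (a one-node 'tree' for illustration)."""
    scores = []
    for j in range(X.shape[1]):
        best = 0.0
        for t in np.unique(X[:, j])[:-1]:
            left = y[X[:, j] <= t]
            right = y[X[:, j] > t]
            w = len(left) / len(y)
            drop = gini(y) - (w * gini(left) + (1 - w) * gini(right))
            best = max(best, drop)
        scores.append(best)
    return np.array(scores)
```

    A feature that cleanly separates the classes earns the maximum impurity drop, while an uninformative feature scores near zero, which is how the most pain-relevant GSR features would surface.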

    The 10th International Conference on Ambient Systems, Networks and Technologies (ANT 2019) / The 2nd International Conference on Emerging Data and Industry 4.0 (EDI40 2019) / Affiliated Workshops

    Photoplethysmography (PPG), as a non-invasive and low-cost technique, plays a significant role in wearable Internet-of-Things-based health monitoring systems, enabling continuous health and well-being data collection. As PPG monitoring is relatively simple, non-invasive, and convenient, it is widely used in a variety of wearable devices (e.g., smart bands, smart rings, smartphones) to acquire different vital signs such as heart rate and pulse rate variability. However, the accuracy of such vital signs highly depends on the quality of the signal and the presence of artifacts generated by other sources, such as motion. This unreliable performance is unacceptable in health monitoring systems. To tackle this issue, different studies have proposed motion artifact reduction and signal quality assessment methods. However, they merely focus on improving the results and signal quality, and are therefore unable to prevent erroneous decision making caused by invalid vital signs extracted from unreliable PPG signals. In this paper, we propose a novel PPG quality assessment approach for IoT-based health monitoring systems, by which the reliability of the vital signs extracted from the PPG signal is determined. Unreliable data can therefore be discarded to prevent inaccurate decision making and false alarms. Exploiting a convolutional neural network (CNN) approach, a hypothesis function is created by comparing the heart rate in the PPG with corresponding heart rate values extracted from the ECG signal. We implement a proof-of-concept IoT-based system to evaluate the accuracy of the proposed approach.
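    The core idea of checking PPG-derived vital signs against an ECG reference can be illustrated with a simple rule-based stand-in. The paper learns this mapping with a CNN; the tolerance value below is an assumed parameter, not from the paper:

```python
import numpy as np

def reliable_windows(hr_ppg, hr_ecg, tol_bpm=5.0):
    """Flag PPG windows whose heart rate deviates from the ECG reference
    by more than `tol_bpm` (assumed tolerance) as unreliable.
    Returns a boolean mask: True = reliable window."""
    hr_ppg = np.asarray(hr_ppg, dtype=float)
    hr_ecg = np.asarray(hr_ecg, dtype=float)
    return np.abs(hr_ppg - hr_ecg) <= tol_bpm
```

    Windows flagged False would be discarded before any downstream decision making, which is the false-alarm prevention the abstract describes; at inference time a learned model replaces the ECG reference, which is only available during training.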