4 research outputs found

    Wearable devices for remote vital signs monitoring in the outpatient setting: an overview of the field

    Get PDF
    Early detection of physiological deterioration has been shown to improve patient outcomes, and recent improvements in technology now make comprehensive outpatient vital signs monitoring possible. This is the first review to collate information on all wearable devices on the market for outpatient physiological monitoring. A scoping review was undertaken. The monitors reviewed were limited to those that can function in the outpatient setting with minimal restriction on the patient’s normal lifestyle while measuring any or all of the vital signs: heart rate, ECG, oxygen saturation, respiration rate, blood pressure, and temperature. A total of 270 papers were included in the review. Thirty wearable monitors were examined: 6 patches, 3 clothing-based monitors, 4 chest straps, 2 upper arm bands, and 15 wristbands. The monitoring of vital signs in the outpatient setting is a developing field, with differing levels of evidence for each monitor. The most common clinical application was heart rate monitoring; blood pressure and oxygen saturation were the least common. Clinical validation studies in the outpatient setting are needed to prove the potential of many of the monitors identified, as research in this area is in its infancy. Future research should aggregate the results of validity, reliability, and patient outcome studies for each monitor and compare across devices, providing a more holistic overview of the clinical potential of each device.

    Exploring Emotion Recognition for VR-EBT Using Deep Learning on a Multimodal Physiological Framework

    Get PDF
    Post-Traumatic Stress Disorder (PTSD) is a mental health condition that affects a growing number of people. A variety of PTSD treatment methods exist; however, current research indicates that virtual reality exposure-based treatment (VR-EBT) has become more prominent in its use. Yet the treatment method can be costly and time consuming for clinicians and, ultimately, for the healthcare system. PTSD treatment can be delivered in a more sustainable way using virtual reality by employing machine learning to autonomously adapt virtual reality scene changes. The use of machine learning will also support a more efficient way of inserting positive stimuli into virtual reality scenes. Machine learning has been used in medical areas such as rare diseases, oncology, medical data classification, and psychiatry. This research used a public dataset containing physiological recordings and emotional responses. The dataset was used to train a deep neural network and a convolutional neural network to predict an individual’s valence, arousal, and dominance. The results indicate that the deep neural network had the highest overall mean bounded regression accuracy and the lowest computational time.
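
    The abstract does not give architectural details, so the following is only a minimal sketch of the kind of model it describes: a small fully connected ("deep") network regressing valence, arousal, and dominance from physiological features. The layer sizes, feature count, and random stand-in data are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: a small fully connected network regressing
# valence, arousal, and dominance (V-A-D) from physiological features.
# Layer sizes and feature count are assumptions; the data is a random
# stand-in for the public dataset used in the paper.
import torch
import torch.nn as nn

N_FEATURES = 32  # assumed number of features per physiological window
N_TARGETS = 3    # valence, arousal, dominance

model = nn.Sequential(
    nn.Linear(N_FEATURES, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, N_TARGETS),  # one regression output per affect dimension
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random stand-ins for windowed physiological features and V-A-D ratings.
X = torch.randn(512, N_FEATURES)
y = torch.rand(512, N_TARGETS)  # ratings rescaled to [0, 1]

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```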

    A Globally Generalized Emotion Recognition System Involving Different Physiological Signals

    No full text
    Machine learning approaches for human emotion recognition have recently demonstrated high performance, but mostly for subject-dependent approaches, in a variety of applications such as advanced driver assistance systems, smart homes, and medical environments. The focus has therefore shifted towards subject-independent approaches, which are more universal: the emotion recognition system is trained using a specific group of subjects and then tested on entirely new persons, possibly using other sensors for the same physiological signals, in order to recognize their emotions. In this paper, we explore a novel, robust subject-independent human emotion recognition system consisting of two major models: an automatic feature calibration model and a classification model based on Cellular Neural Networks (CNN). The proposed system produces state-of-the-art results, with an accuracy rate between 80% and 89% when the same elicitation materials and physiological sensor brands are used for both training and testing, and an accuracy rate of 71.05% when the elicitation materials and physiological sensor brands used in training differ from those used in testing. The following physiological signals are involved: ECG (electrocardiogram), EDA (electrodermal activity), and ST (skin temperature).
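
    As a rough illustration of the two-stage pipeline described above, the sketch below uses per-subject z-scoring as a stand-in for the paper's automatic feature calibration model and a standard random forest as a stand-in for its Cellular Neural Network classifier; the feature dimensions, subject counts, and labels are hypothetical.

```python
# Hypothetical sketch of a subject-independent pipeline: per-subject
# z-scoring stands in for the paper's automatic feature calibration model,
# and a random forest stands in for its Cellular Neural Network classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def calibrate_per_subject(features, subject_ids):
    """Z-score each subject's features independently, removing
    subject- and sensor-specific offsets and scales."""
    out = np.empty_like(features, dtype=float)
    for sid in np.unique(subject_ids):
        mask = subject_ids == sid
        mu = features[mask].mean(axis=0)
        sigma = features[mask].std(axis=0) + 1e-8  # avoid division by zero
        out[mask] = (features[mask] - mu) / sigma
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))            # stand-in ECG/EDA/ST features
subjects = rng.integers(0, 10, size=600)  # subject id per sample
y = rng.integers(0, 4, size=600)          # stand-in emotion labels

X_cal = calibrate_per_subject(X, subjects)

# Leave-subjects-out evaluation: train on subjects 0-7, test on unseen 8-9.
train, test = subjects < 8, subjects >= 8
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_cal[train], y[train])
print("accuracy on unseen subjects:", clf.score(X_cal[test], y[test]))
```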

    CorrNet: Fine-grained emotion recognition for video watching using wearable physiological sensors

    Get PDF
    Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) to recognize the valence and arousal (V-A) of each instance (a fine-grained segment of signals) using only wearable physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA), which we collected using a smart wristband and a wearable eye tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show that: (1) instance segment lengths between 1–4 s result in the highest recognition accuracies; (2) accuracies between laboratory-grade and wearable sensors are comparable, even under low sampling rates (≤64 Hz); and (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.
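
    The following is a minimal numpy sketch of the correlation-based idea, not the CorrNet implementation: each short instance gets intra-instance statistics plus its Pearson correlations with the other instances of the same video stimulus, and a simple logistic-regression classifier (an assumption here) predicts high/low valence. Window lengths, sampling rate, and labels are stand-ins.

```python
# Hypothetical sketch of the correlation-based feature idea, not the
# CorrNet implementation: combine per-instance statistics with each
# instance's Pearson correlations to other instances of the same stimulus.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_instances, n_samples = 40, 128  # e.g., 2 s windows of a signal at 64 Hz
signals = rng.normal(size=(n_instances, n_samples))  # stand-in EDA/HR data

# Intra-instance features: simple statistics over each window.
intra = np.stack([signals.mean(1), signals.std(1),
                  signals.min(1), signals.max(1)], axis=1)

# Correlation-based features: each instance's correlation with every
# other instance (here all windows come from one video stimulus).
corr = np.corrcoef(signals)  # shape (n_instances, n_instances)

X = np.hstack([intra, corr])
y = rng.integers(0, 2, size=n_instances)  # stand-in high/low valence labels

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```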