WEARS: Wearable Emotion AI with Real-time Sensor data
Emotion prediction is the study of understanding human emotions. Existing
methods focus on modalities such as text, audio, and facial expressions,
which may be private to the user. Emotion can also be derived from the
subject's physiological data. Various approaches that employ combinations
of physiological sensors for emotion recognition have been proposed, yet
not all sensors are simple to use and handy for individuals in their daily
lives. We therefore propose a system to predict user emotion using
smartwatch sensors. We design a framework to collect ground truth in real
time, using a mix of English and regional-language videos to evoke emotions
in participants while their sensor data are collected. Owing to the limited
dataset size, we model the problem as binary classification and experiment
with multiple machine-learning models. We also conduct an ablation study to
understand the impact of features from the heart-rate, accelerometer, and
gyroscope sensors on mood. In our experiments, a Multi-Layer Perceptron
achieved the highest accuracy of 93.75 percent on pleasant-vs-unpleasant
(high/low valence) classification.
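The binary valence-classification setup described above can be sketched roughly as follows. The feature layout (mean heart rate plus per-axis accelerometer and gyroscope statistics), the synthetic labels, and the MLP layer sizes are all assumptions standing in for the authors' dataset and architecture, which the abstract does not specify; sklearn's MLPClassifier plays the role of their Multi-Layer Perceptron.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per time window, with hypothetical
# smartwatch features (mean HR, accel/gyro mean and std per axis).
n_windows = 400
X = rng.normal(size=(n_windows, 13))           # 1 HR + 6 accel + 6 gyro stats
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = pleasant, 0 = unpleasant

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

scaler = StandardScaler().fit(X_train)

# A small MLP for binary (high/low valence) classification; the layer
# sizes here are illustrative, not taken from the paper.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

acc = clf.score(scaler.transform(X_test), y_test)
print(f"valence accuracy: {acc:.3f}")
```

On real data the features would come from fixed-length sensor windows rather than random draws, and the labels from the video-elicited ground truth the authors collect.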
A novel Big Data analytics and intelligent technique to predict driver's intent
The modern age offers great potential for automatically predicting the driver's intent, thanks to the increasing miniaturization of computing technologies, rapid advances in communication technologies, and the continuous connectivity of heterogeneous smart objects. Inside the cabin and engine of modern cars, dedicated computer systems need to exploit the wealth of information generated by heterogeneous data sources with different contextual and conceptual representations. Processing and utilizing this diverse and voluminous data involves many challenges concerning the design of the computational technique used to perform this task. In this paper, we investigate the various data sources available in the car and the surrounding environment that can be utilized as inputs to predict the driver's intent and behavior. As part of investigating these potential data sources, we conducted experiments on e-calendars for a large number of employees and reviewed a number of available georeferencing systems. Through the results of a statistical analysis, and by computing location-recognition accuracy, we explored in detail the potential of calendar location data for detecting the driver's intentions. To exploit the numerous diverse data inputs available in modern vehicles, we investigate the suitability of different Computational Intelligence (CI) techniques and propose a novel fuzzy computational modelling methodology. Finally, we outline the impact of applying advanced CI and Big Data analytics techniques in modern vehicles on the driver and society in general, and discuss ethical and legal issues arising from the deployment of intelligent self-learning cars.
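The fuzzy computational modelling methodology itself is not detailed in the abstract; as a generic illustration of fuzzy inference over heterogeneous in-car signals, the toy Mamdani-style sketch below combines a speed reading with a distance derived from a calendar appointment location. The variable names, membership-function shapes, scales, and the single rule are all assumptions for illustration only.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical inputs: vehicle speed (km/h) and distance to a calendar
# appointment's location (km). Both scales are illustrative.
speed, distance = 95.0, 1.2

speed_is_high = tri(speed, 60, 120, 180)
dist_is_near = tri(distance, 0, 0.5, 5)

# One Mamdani-style rule: IF speed is high AND destination is near
# THEN intent "arriving soon" (min t-norm for the AND).
arriving_soon = min(speed_is_high, dist_is_near)
print(f"degree of 'arriving soon': {arriving_soon:.2f}")
```

A full fuzzy system would aggregate many such rules over many sensor and context inputs and defuzzify the result into a crisp intent estimate.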
Coverage of emotion recognition for common wearable biosensors
The present research proposes a novel emotion recognition framework for the computer prediction of human emotions using common wearable biosensors. Emotional perception promotes specific patterns of biological response in the human body, which can be sensed and used to predict emotions from biomedical measurements alone. Based on theoretical and empirical psychophysiological research, the foundation of autonomic specificity provides a strong basis for recognising human emotions by applying machine learning to physiological patterning. However, a systematic way of choosing the physiological data that covers the elicited emotional responses for recognising the target emotions is not obvious. The current study demonstrates, through experimental measurements, the coverage of emotion recognition achievable with common off-the-shelf wearable biosensors, based on the synchronisation between audiovisual stimuli and the corresponding physiological responses. The work forms the basis for validating the hypothesis of emotional-state recognition in the literature, and presents the coverage of common wearable biosensors coupled with a novel preprocessing algorithm to demonstrate practical prediction of the emotional states of wearers.
Automated Classification for Electrophysiological Data: Machine Learning Approaches for Disease Detection and Emotion Recognition
Smart healthcare is a health-service system that utilizes technologies, e.g., artificial intelligence and
big data, to alleviate the pressures on healthcare systems. Much recent research has focused on
automatic disease diagnosis and recognition; in particular, this research concentrates on automatic
classification of electrophysiological signals, which are measurements of electrical activity.
Specifically, for electrocardiogram (ECG) and electroencephalogram (EEG) data, we develop a
series of algorithms for automatic cardiovascular disease (CVD) classification, emotion recognition,
and seizure detection.
With the ECG signals obtained from wearable devices, we develop novel signal-processing
and machine-learning methods for continuous monitoring of heart conditions. Compared with
traditional methods based on devices in clinical settings, the method developed in this thesis
is much more convenient to use. To identify arrhythmia patterns in the noisy ECG signals obtained
through the wearable devices, CNN and LSTM models are used, and a wavelet-based CNN is proposed to
enhance the performance.
An emotion recognition method using a single-channel ECG is developed, in which a novel exploitative
and explorative GWO-SVM algorithm is proposed to achieve high-performance emotion
classification. Notably, the proposed algorithm can learn the SVM hyperparameters
automatically and prevents the search from falling into local optima,
thereby achieving better performance than existing algorithms.
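The core idea behind GWO-SVM, a Grey Wolf Optimizer searching the SVM hyperparameters (C, gamma) against cross-validated accuracy, can be sketched as below. The population size, iteration count, log-scale search bounds, and toy dataset are all assumptions, and this plain GWO deliberately omits the thesis's exploitative/explorative modifications, which are not specified in the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def fitness(pos):
    """Cross-validated accuracy of an SVM at log-scaled (C, gamma)."""
    C, gamma = 10.0 ** pos[0], 10.0 ** pos[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])  # log10 search bounds
wolves = rng.uniform(lo, hi, size=(6, 2))              # small wolf pack
scores = np.array([fitness(w) for w in wolves])

n_iter = 10
for t in range(n_iter):
    a = 2.0 * (1 - t / n_iter)              # coefficient shrinks 2 -> 0
    order = np.argsort(scores)[::-1]
    alpha, beta, delta = wolves[order[:3]]  # the three best wolves lead
    for i in range(len(wolves)):
        new = np.zeros(2)
        for leader in (alpha, beta, delta):
            A = a * (2 * rng.random(2) - 1)
            C = 2 * rng.random(2)
            D = np.abs(C * leader - wolves[i])
            new += leader - A * D
        wolves[i] = np.clip(new / 3.0, lo, hi)  # average of the three pulls
        scores[i] = fitness(wolves[i])

best = wolves[np.argmax(scores)]
print(f"best C=10^{best[0]:.2f}, gamma=10^{best[1]:.2f}, "
      f"cv accuracy={scores.max():.3f}")
```

The shrinking coefficient `a` is what trades exploration (large |A|, wolves roam) for exploitation (small |A|, wolves converge on the leaders); the thesis's contribution is a refined balance between the two.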
A novel EEG-signal-based seizure detector is developed, in which the EEG signals are transformed to
the spectral-temporal domain so that the dimension of the input features to the CNN is
significantly reduced, while the detector still achieves superior detection performance.
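The spectral-temporal transformation step can be sketched minimally as follows, using a synthetic single-channel EEG trace and scipy's STFT; the sampling rate, window length, and zero overlap are illustrative choices, not the thesis's settings, and the CNN itself is omitted.

```python
import numpy as np
from scipy.signal import stft

fs = 256                     # assumed EEG sampling rate (Hz)
t = np.arange(10 * fs) / fs  # 10 s synthetic single-channel trace
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.random.default_rng(0).normal(size=t.size))

# Spectral-temporal transform: STFT magnitude (a spectrogram image).
f, frames, Z = stft(eeg, fs=fs, nperseg=256, noverlap=0)
spectrogram = np.abs(Z)

# With non-overlapping windows, the 2-D time-frequency image handed to
# the CNN holds fewer values than the raw sample stream.
print("raw samples:", eeg.size)
print("spectrogram shape (freq bins, time frames):", spectrogram.shape)
```

In practice the frequency axis would also be cropped to the clinically relevant EEG bands, shrinking the CNN input further.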
Using non-invasive wearables for detecting emotions with intelligent agents
This paper proposes the use of intelligent wristbands for the automatic
detection of emotional states, in order to develop an application that
can extract, analyze, represent, and manage the social emotion of a group
of entities. Detecting the joint emotion of a heterogeneous group of
people is still an open issue: most existing approaches are centered
on the emotion detection and management of a single entity. Concretely,
the application tries to detect how music can influence individuals'
emotional states in a positive or negative way. The main goal of the proposed
system is to play music that encourages an increase in the overall happiness
of the patrons. This work is partially supported by MINECO/FEDER TIN2015-65515-C4-1-R
and the FPI grant AP2013-01276 awarded to Jaime-Andres Rincon. This work is also
supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT – Fundação para a
Ciência e Tecnologia within the projects UID/CEC/00319/2013 and Post-Doc scholarship
SFRH/BPD/102696/2014 (A. Cost
Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications
In an era when the market segment of the Internet of Things (IoT) tops the chart
in various business reports, it is envisioned that the field of
medicine stands to gain a large benefit from the explosion of wearables and
internet-connected sensors that surround us, acquiring and communicating
unprecedented data on symptoms, medication, food intake, and daily-life
activities impacting one's health and wellness. However, IoT-driven healthcare
would have to overcome many barriers, such as: 1) There is an increasing demand
for data storage on cloud servers where the analysis of the medical big data
becomes increasingly complex, 2) The data, when communicated, are vulnerable to
security and privacy issues, 3) The communication of the continuously collected
data is not only costly but also energy-hungry, and 4) Operating and maintaining
the sensors directly from the cloud servers are non-trivial tasks. This book
chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog
Computing is a service-oriented intermediate layer in IoT, providing the
interfaces between the sensors and cloud servers for facilitating connectivity,
data transfer, and queryable local database. The centerpiece of Fog computing
is a low-power, intelligent, wireless, embedded computing node that carries out
signal conditioning and data analytics on raw data collected from wearables or
other medical sensors and offers efficient means to serve telehealth
interventions. We implemented and tested a fog computing system using the
Intel Edison and Raspberry Pi that allows acquisition, computing, storage, and
communication of various medical data, such as pathological speech data of
individuals with speech disorders, phonocardiogram (PCG) signals for heart-rate
estimation, and electrocardiogram (ECG)-based Q, R, S detection.
Comment: 29 pages, 30 figures, 5 tables. Keywords: Big Data, Body Area
Network, Body Sensor Network, Edge Computing, Fog Computing, Medical
Cyberphysical Systems, Medical Internet-of-Things, Telecare, Tele-treatment,
Wearable Devices, Chapter in Handbook of Large-Scale Distributed Computing in
Smart Healthcare (2017), Springer
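As a rough illustration of the kind of on-node analytics such a fog layer might run, here ECG R-peak detection for heart-rate estimation, the sketch below processes a synthetic ECG-like trace with scipy's find_peaks. This is not the chapter's actual pipeline; the sampling rate, amplitude threshold, and refractory distance are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250                      # assumed sampling rate (Hz)
t = np.arange(10 * fs) / fs   # 10 s of signal

# Synthetic ECG-like trace: a sharp spike every second (60 bpm)
# plus baseline noise, standing in for real sensor data.
ecg = 0.05 * np.random.default_rng(0).normal(size=t.size)
ecg[np.arange(fs // 2, t.size, fs)] += 1.0

# R-peak detection: an amplitude threshold plus a refractory distance
# (no two beats closer than 0.4 s).
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))

rr = np.diff(peaks) / fs      # RR intervals in seconds
heart_rate = 60.0 / rr.mean()
print(f"detected beats: {peaks.size}, estimated HR: {heart_rate:.1f} bpm")
```

Running such detection on the fog node means only beat timestamps or HR summaries, not raw waveforms, need to travel to the cloud, which is exactly the bandwidth and energy saving the chapter argues for.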