Breathing Signature as Vitality Score Index Created by Exercises of Qigong: Implications of Artificial Intelligence Tools Used in Traditional Chinese Medicine.
Rising concerns about the short- and long-term adverse consequences of conventional pharmaceuticals are fueling the search for alternative, complementary, personalized, and comprehensive approaches to human healthcare. Qigong, a practice rooted in Traditional Chinese Medicine, represents one such alternative. Here, we start with the practical, philosophical, and psychological background of Ki (in Japanese) or Qi (in Chinese) and their relationship to Qigong theory and clinical application. Noting the drawbacks of current Qigong clinical practice, we propose that managing the unique aspects of the Eastern "non-linear", "holistic" approach requires integrating it with the Western "linear", "one-directional" approach. This is done by developing the concept of "Qigong breathing signatures," which can characterize an individual's breathing patterns and their association with disease using machine learning technology. We predict that this can be achieved by establishing an artificial intelligence (AI)-Medicine training database that links Qigong-like breathing patterns with pathologies unique to individuals. Such an integrated connection will allow the AI-Medicine algorithm to identify breathing patterns and guide medical intervention. This unique view of potentially connecting Eastern medicine and Western technology can add novel insight to our current understanding of both Western and Eastern medicine, thereby establishing a vitality score index (VSI) that can predict the outcomes of lifestyle behaviors and medical conditions.
A dataset of continuous affect annotations and physiological signals for emotion analysis
From a computational viewpoint, emotions continue to be intriguingly hard to understand. In research, direct, real-time inspection in realistic settings is not possible; discrete, indirect, post-hoc recordings are therefore the norm. As a result, proper emotion assessment remains a problematic issue. The Continuously Annotated Signals of Emotion (CASE) dataset provides a solution, as it focuses on real-time continuous annotation of emotions, as experienced by the participants, while watching various videos. For this purpose, a novel, intuitive joystick-based annotation interface was developed that allows simultaneous reporting of valence and arousal, which are otherwise often annotated independently. In parallel, eight high-quality, synchronized physiological recordings (1000 Hz, 16-bit ADC) were made of ECG, BVP, EMG (3x), GSR (or EDA), respiration, and skin temperature. The dataset consists of the physiological and annotation data from 30 participants, 15 male and 15 female, who watched several validated video stimuli. The validity of the emotion induction, as exemplified by the annotation and physiological data, is also presented.

Comment: Dataset available at:
https://rmc.dlr.de/download/CASE_dataset/CASE_dataset.zi
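The simultaneous valence-arousal reporting described above can be sketched as a mapping from a joystick's two axes onto a continuous rating scale. The [0.5, 9.5] output range and the axis-to-dimension assignment below are illustrative assumptions, not taken from the dataset documentation:

```python
def joystick_to_affect(x, y, lo=0.5, hi=9.5):
    """Map a joystick position (x, y), each axis in [-1, 1], to a
    simultaneous (valence, arousal) pair on a continuous rating scale.

    The [lo, hi] output range is an assumption for illustration; the
    dataset documentation defines the actual scale.
    """
    if not (-1.0 <= x <= 1.0 and -1.0 <= y <= 1.0):
        raise ValueError("joystick axes must lie in [-1, 1]")
    span = hi - lo
    valence = lo + (x + 1.0) / 2.0 * span  # horizontal axis -> valence
    arousal = lo + (y + 1.0) / 2.0 * span  # vertical axis -> arousal
    return valence, arousal

# Center position maps to the midpoint of both scales.
print(joystick_to_affect(0.0, 0.0))  # -> (5.0, 5.0)
```

Sampling this mapping at a fixed rate alongside the 1000 Hz physiological channels is what makes the annotation continuous rather than post-hoc.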
Exploring Emotion Recognition for VR-EBT Using Deep Learning on a Multimodal Physiological Framework
Post-Traumatic Stress Disorder (PTSD) is a mental health condition that affects a growing number of people. A variety of PTSD treatment methods exist; however, current research indicates that virtual reality exposure-based treatment (VR-EBT) has become more prominent in its use. Yet the treatment method can be costly and time-consuming for clinicians and, ultimately, for the healthcare system. PTSD treatment can be delivered in a more sustainable way using virtual reality. This is accomplished by using machine learning to autonomously adapt virtual reality scene changes. The use of machine learning will also support a more efficient way of inserting positive stimuli into virtual reality scenes. Machine learning has been used in medical areas such as rare diseases, oncology, medical data classification, and psychiatry. This research used a public dataset that contained physiological recordings and emotional responses. The dataset was used to train a deep neural network and a convolutional neural network to predict an individual's valence, arousal, and dominance. The results presented indicate that the deep neural network had the highest overall mean bounded regression accuracy and the lowest computational time.
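The prediction step described above, mapping physiological features to valence, arousal, and dominance, can be sketched as a forward pass through a small regression network. The feature dimensionality, layer size, and random weights below are illustrative assumptions; an actual model would be trained on the dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_features, hidden=32, n_outputs=3):
    """Random parameters for a one-hidden-layer regressor that maps
    physiological feature vectors to (valence, arousal, dominance)."""
    return {
        "W1": rng.normal(0.0, 0.1, (n_features, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0.0, 0.1, (hidden, n_outputs)),
        "b2": np.zeros(n_outputs),
    }

def predict(params, x):
    """Forward pass: ReLU hidden layer followed by a linear output head."""
    h = np.maximum(0.0, x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

# A batch of 4 hypothetical feature vectors (e.g. ECG/EDA statistics).
feats = rng.normal(size=(4, 10))
vad = predict(init_mlp(10), feats)  # shape (4, 3): one VAD triple per sample
```

Bounding the regression output to the annotation scale (as the "bounded regression accuracy" metric suggests) would add a clipping or squashing step after the linear head.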
How Does the Body Affect the Mind? Role of Cardiorespiratory Coherence in the Spectrum of Emotions
The brain is considered to be the primary generator and regulator of emotions; however, afferent signals originating throughout the body are detected by the autonomic nervous system (ANS) and brainstem and, in turn, can modulate emotional processes. During stress and negative emotional states, levels of cardiorespiratory coherence (CRC) decrease, and a shift occurs toward sympathetic dominance. In contrast, CRC levels increase during more positive emotional states, and a shift occurs toward parasympathetic dominance. The dynamic changes in CRC that accompany different emotions can provide insights into how the activity of the limbic system and afferent feedback manifest as emotions. The authors propose that the brainstem and CRC are involved in important feedback mechanisms that modulate emotions and higher cortical areas. That mechanism may be one of many mechanisms that underlie the physiological and neurological changes that are experienced during pranayama and meditation and may support the use of those techniques to treat various mood disorders and reduce stress.
Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring
How to fuse multi-channel neurophysiological signals for emotion recognition is emerging as a hot research topic in the community of Computational Psychophysiology. Nevertheless, prior feature-engineering-based approaches require extracting various domain-knowledge-related features at a high time cost. Moreover, traditional fusion methods cannot fully utilise the correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a Convolutional Neural Network (CNN) is utilised for extracting task-related features and mining inter-channel and inter-frequency correlations, while a Recurrent Neural Network (RNN) is concatenated to integrate contextual information from the frame cube sequence. Experiments are carried out on a trial-level emotion recognition task using the DEAP benchmark dataset. Experimental results demonstrate that the proposed framework outperforms classical methods on both of the emotional dimensions of Valence and Arousal.
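The frame-cube sequence fed to such a CNN-RNN pipeline can be illustrated by the temporal framing step alone: slicing a multi-channel recording into overlapping frames, each of which the CNN would encode before the RNN integrates them over time. The channel count, frame length, and hop size below are illustrative assumptions, and the per-frequency decomposition mentioned in the abstract is omitted:

```python
import numpy as np

def frame_cube_sequence(signal, frame_len, hop):
    """Split a (channels, samples) array into an overlapping frame
    sequence of shape (n_frames, channels, frame_len).

    In the fusion pipeline sketched above, the CNN extracts features
    from each frame and the RNN runs over the resulting sequence.
    """
    n_ch, n_samp = signal.shape
    n_frames = 1 + (n_samp - frame_len) // hop
    return np.stack(
        [signal[:, i * hop : i * hop + frame_len] for i in range(n_frames)]
    )

# 32 hypothetical channels, 1024 samples; 50%-overlapping frames of 128.
eeg = np.random.default_rng(1).normal(size=(32, 1024))
seq = frame_cube_sequence(eeg, frame_len=128, hop=64)  # shape (15, 32, 128)
```

Because each frame keeps all channels together, a convolution over the frame can mix channels directly, which is one way to capture the inter-channel correlations the abstract refers to.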
- …