Deep Learning on VR-Induced Attention
Some evidence suggests that virtual reality (VR) approaches may lead to greater attentional focus than experiencing the same scenarios presented on computer monitors. The aim of this study is to differentiate attention levels captured during a perceptual discrimination task presented on two different viewing platforms, a standard personal computer (PC) monitor and head-mounted-display (HMD)-VR, using a well-described electroencephalography (EEG)-based measure (parietal P3b latency) and a deep learning-based measure (EEG features extracted by a compact convolutional neural network, EEGNet, and visualized by a gradient-based relevance attribution method, DeepLIFT). Twenty healthy young adults participated in this perceptual discrimination task, in which, according to a spatial cue, they were required to discriminate either "Target" or "Distractor" stimuli on the screen of each viewing platform. Experimental results show that the EEGNet-based classification accuracies are highly correlated with the p-values from the statistical analysis of P3b latency. In addition, the visualized EEG features are neurophysiologically interpretable. This study provides the first visualized deep learning-based EEG features captured during an HMD-VR-based attentional task.
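The classification pipeline described in this abstract (temporal filtering, spatial filtering across EEG channels, then a compact readout) can be sketched in a few lines. The following is a minimal numpy stand-in with random, untrained weights; it illustrates the shape of an EEGNet-style forward pass, not the authors' actual EEGNet or DeepLIFT implementation, and all sizes (8 channels, 128 samples, kernel length 16) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def depthwise_temporal_conv(x, kernels):
    """Convolve each EEG channel with its own temporal kernel (valid mode)."""
    n_ch, n_t = x.shape
    k = kernels.shape[1]
    out = np.empty((n_ch, n_t - k + 1))
    for c in range(n_ch):
        out[c] = np.convolve(x[c], kernels[c], mode="valid")
    return out

# Toy EEG epoch: 8 channels x 128 samples (sizes are illustrative).
eeg = rng.standard_normal((8, 128))

# Stage 1: temporal filtering, one length-16 kernel per channel.
temporal = depthwise_temporal_conv(eeg, rng.standard_normal((8, 16)))

# Stage 2: spatial filtering, mixing 8 channels into 4 virtual channels.
spatial = rng.standard_normal((4, 8)) @ temporal

# Stage 3: nonlinearity + average pooling over time, then a linear
# readout to 2 classes ("Target" vs "Distractor").
features = np.maximum(spatial, 0).mean(axis=1)   # shape (4,)
logits = rng.standard_normal((2, 4)) @ features  # shape (2,)
probs = np.exp(logits) / np.exp(logits).sum()
print(probs.shape)
```

In a trained model the kernels would be learned by backpropagation; a relevance attribution method such as DeepLIFT then scores how much each channel and time point contributed to the class decision.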
A Palette of Deepened Emotions: Exploring Emotional Challenge in Virtual Reality Games
Recent work introduced the notion of ‘emotional challenge’ as promising for understanding more unique and diverse player experiences (PX). Although emotional challenge has quickly attracted HCI researchers’ attention, the concept has not been experimentally explored, especially in virtual reality (VR), one of the latest gaming environments. We conducted two experiments to investigate how emotional challenge affects PX when presented separately from, or jointly with, conventional challenge in VR and PC conditions. We found that relatively exclusive emotional challenge induced a wider range of emotions in both conditions, while adding emotional challenge broadened emotional responses only in VR. In both experiments, VR significantly enhanced the measured PX of emotional responses, appreciation, immersion and presence. Our findings indicate that VR may be an ideal medium for presenting emotional challenge and also extend the understanding of emotional (and conventional) challenge in video games.
An interoceptive predictive coding model of conscious presence
We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness.
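The core mechanism in this abstract (top-down predictions suppressing interoceptive prediction error) can be illustrated with a toy update rule. This is a hedged sketch of generic prediction-error minimization, not the paper's model; the variable names (s, mu, lr) and the simulated signal are illustrative assumptions.

```python
import numpy as np

# A top-down prediction mu is iteratively adjusted to suppress the
# prediction error between it and an interoceptive signal s.
s = np.array([0.8, -0.3, 0.5])   # simulated interoceptive signal
mu = np.zeros(3)                 # initial top-down prediction
lr = 0.2                         # update rate (illustrative)

errors = []
for _ in range(50):
    eps = s - mu                 # interoceptive prediction error
    mu += lr * eps               # revise prediction to reduce error
    errors.append(float(eps @ eps))

# The squared error shrinks toward zero: in the model's terms, the
# signal is successfully "explained away" by the top-down prediction.
print(errors[0], errors[-1])
```

In the model's terms, sustained failure of this suppression (a persistently large error) is what would be associated with disturbances of presence such as depersonalization.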
Learning deep dynamical models from image pixels
Modeling dynamical systems is important in many disciplines, e.g., control,
robotics, or neurotechnology. Commonly the state of these systems is not
directly observed, but only available through noisy and potentially
high-dimensional observations. In these cases, system identification, i.e.,
finding the measurement mapping and the transition mapping (system dynamics) in
latent space can be challenging. For linear system dynamics and measurement
mappings efficient solutions for system identification are available. However,
in practical applications, the linearity assumption does not hold, requiring
non-linear system identification techniques. If additionally the observations
are high-dimensional (e.g., images), non-linear system identification is
inherently hard. To address the problem of non-linear system identification
from high-dimensional observations, we combine recent advances in deep learning
and system identification. In particular, we jointly learn a low-dimensional
embedding of the observation by means of deep auto-encoders and a predictive
transition model in this low-dimensional space. We demonstrate that our model
enables learning good predictive models of dynamical systems from pixel
information only.

Comment: 10 pages, 11 figures
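The idea in this abstract (a learned low-dimensional embedding of high-dimensional observations plus a predictive transition model in that latent space) can be sketched with linear stand-ins. Below, PCA plays the role of the deep auto-encoder and an ordinary least-squares fit plays the role of the learned transition model; all system sizes and parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: a 2-D latent linear system rendered into 100-D "pixel"
# observations with a fixed random decoder plus observation noise.
A_true = np.array([[0.9, -0.2], [0.2, 0.9]])   # latent dynamics
decode = rng.standard_normal((100, 2))         # latent -> observations

z = np.zeros((200, 2))
z[0] = [1.0, 0.5]
for t in range(199):
    z[t + 1] = A_true @ z[t]
X = z @ decode.T + 0.01 * rng.standard_normal((200, 100))

# "Encoder": project observations onto their top-2 principal components
# (a linear stand-in for the deep auto-encoder's embedding).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
E = Xc @ Vt[:2].T                              # learned embeddings

# Transition model: least-squares fit of E[t+1] from E[t].
A_hat, *_ = np.linalg.lstsq(E[:-1], E[1:], rcond=None)

# One-step prediction error in the learned latent space.
pred = E[:-1] @ A_hat
err = np.mean((pred - E[1:]) ** 2)
print(err)
```

The joint training in the paper goes further: the (nonlinear) encoder and transition model are optimized together, so the embedding is shaped to be predictable, rather than fixed in advance as PCA is here.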