
    Linked networks for learning and expressing location-specific threat

    Learning locations of danger within our environment is a vital adaptive ability whose neural bases are only partially understood. We examined fMRI brain activity while participants navigated a virtual environment in which flowers appeared and were “picked.” Picking flowers in the danger zone (one-half of the environment) predicted an electric shock to the wrist (or “bee sting”); flowers in the safe zone never predicted shock; and household objects served as controls for neutral spatial memory. Participants demonstrated learning with shock expectancy ratings and skin conductance increases for flowers in the danger zone. Patterns of brain activity shifted between overlapping networks during different task stages. Learning about environmental threats, during flower approach in either zone, engaged the anterior hippocampus, amygdala, and ventromedial prefrontal cortex (vmPFC), with vmPFC–hippocampal functional connectivity increasing with experience. Threat appraisal, during approach in the danger zone, engaged the insula and dorsal anterior cingulate (dACC), with insula–hippocampal functional connectivity. During imminent threat, after picking a flower, this pattern was supplemented by activity in periaqueductal gray (PAG), insula–dACC coupling, and posterior hippocampal activity that increased with experience. We interpret these patterns in terms of multiple representations of spatial context (anterior hippocampus); specific locations (posterior hippocampus); stimuli (amygdala); value (vmPFC); threat, both visceral (insula) and cognitive (dACC); and defensive behaviors (PAG), interacting in different combinations to perform the functions required at each task stage. Our findings illuminate how we learn about location-specific threats and suggest how they might break down into overgeneralization or hypervigilance in anxiety disorders.

    Virtual Reality for Enhanced Ecological Validity and Experimental Control in the Clinical, Affective and Social Neurosciences

    This article highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences.

    Multisensory mechanisms of body ownership and self-location

    Having an accurate sense of the spatial boundaries of the body is a prerequisite for interacting with the environment and is thus essential for the survival of any organism with a central nervous system. Every second, our brain receives a staggering amount of information from the body across different sensory channels, each of which features a certain degree of noise. Despite the complexity of the incoming multisensory signals, the brain manages to construct and maintain a stable representation of our own body and its spatial relationships to the external environment. This natural “in-body” experience is such a fundamental subjective feeling that most of us take it for granted. However, patients with lesions in particular brain areas can experience profound disturbances in their normal sense of ownership over their body (somatoparaphrenia) or lose the feeling of being located inside their physical body (out-of-body experiences), suggesting that our “in-body” experience depends on intact neural circuitry in the temporal, frontal, and parietal brain regions. The question at the heart of this thesis relates to how the brain combines visual, tactile, and proprioceptive signals to build an internal representation of the bodily self in space. Over the past two decades, perceptual body illusions have become an important tool for studying the mechanisms underlying our sense of body ownership and self-location. The most influential of these illusions is the rubber hand illusion, in which ownership of an artificial limb is induced via the synchronous stroking of a rubber hand and an individual’s hidden real hand. Studies of this illusion have shown that multisensory integration within the peripersonal space is a key mechanism for bodily self-attribution. 
In Study I, we showed that the default sense of ownership of one’s real hand, not just the sense of rubber hand ownership, also depends on spatial and temporal multisensory congruence principles implemented in fronto-parietal brain regions. In Studies II and III, we characterized two novel perceptual illusions that provide strong support for the notion that multisensory integration within the peripersonal space is intimately related to the sense of limb ownership, and we examined the role of vision in this process. In Study IV, we investigated a full-body version of the rubber hand illusion—the “out-of-body illusion”—and showed that it can be used to induce predictable changes in one’s sense of self-location and body ownership. Finally, in Study V, we used the out-of-body illusion to “perceptually teleport” participants during brain imaging and identify activity patterns specific to the sense of self-location in a given position in space. Together, these findings shed light on the role of multisensory integration in building the experience of the bodily self in space and provide initial evidence for how representations of body ownership and self-location interact in the brain.

    Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses

    In recent years the scientific community has significantly increased its use of virtual reality (VR) technologies in human behaviour research. In particular, the use of immersive VR has grown due to the introduction of affordable, high-performance head-mounted displays (HMDs). Among the fields that have strongly emerged in the last decade is affective computing, which combines psychophysiology, computer science, biomedical engineering and artificial intelligence in the development of systems that can automatically recognize emotions. The progress of affective computing is especially important in human behaviour research due to the central role that emotions play in many background processes, such as perception, decision-making, creativity, memory and social interaction. Several studies have tried to develop a reliable methodology to evoke and automatically identify emotional states using objective physiological measures and machine learning methods. However, the majority of previous studies used images, audio or video to elicit emotional states; to the best of our knowledge, no previous research has developed an emotion recognition system using immersive VR. Although some previous studies analysed physiological responses in immersive VR, they did not use machine learning techniques for biosignal processing and classification. Moreover, a crucial concept when using VR for human behaviour research is validity: the capacity to evoke a response from the user in a simulated environment similar to the response that might be evoked in a physical environment.
Although some previous studies have used psychological and cognitive dimensions to compare responses in real and virtual environments, few have extended this research to analyse physiological or behavioural responses. Moreover, to our knowledge, this is the first study to compare VR scenarios with their real-world equivalents using physiological measures coupled with machine learning algorithms, and to analyse the ability of VR to transfer and extrapolate insights obtained from VR environments to real environments. The main objective of this thesis is to validate immersive VR as an emotion elicitation tool by using psychophysiological and behavioural responses in combination with machine learning methods, and by performing a direct comparison between a real and a virtual environment. To do so we developed an experimental protocol involving emotional 360º environments, an art exhibition in a real museum, and a highly realistic 3D virtualization of the same art exhibition. This thesis provides novel contributions to the use of immersive VR in human behaviour research, particularly in relation to emotions. VR can help in the application of methodologies designed to present more realistic stimuli in the assessment of daily-life environments and situations, thus overcoming the current limitations of affective elicitation, which classically uses images, audio and video. Moreover, the thesis analyses the validity of VR by performing a direct comparison using a highly realistic simulation. We believe that immersive VR will revolutionize laboratory-based emotion elicitation methods, and that its synergy with physiological measurement and machine learning techniques will have an impact across many other research areas, such as architecture, health, assessment, training, education, driving and marketing, opening new opportunities for the scientific community. The present dissertation aims to contribute to this progress.
Marín Morales, J. (2020). Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/148717
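The emotion-recognition pipeline the thesis describes — mapping objective physiological measures onto emotional states with a machine-learning model — can be sketched in miniature. This is an illustrative example only: the feature choices (mean heart rate, skin conductance level), the numbers, and the nearest-centroid classifier are assumptions for demonstration, not the study's actual features or model.

```python
# Minimal sketch of classifying emotional state (high vs. low arousal) from
# physiological features. All feature names and values are illustrative.

from statistics import mean


def nearest_centroid_fit(samples, labels):
    """Compute one centroid (mean feature vector) per class label."""
    centroids = {}
    for lab in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == lab]
        centroids[lab] = [mean(col) for col in zip(*rows)]
    return centroids


def nearest_centroid_predict(centroids, sample):
    """Assign the label of the closest centroid (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda lab: dist(centroids[lab], sample))


# Hypothetical features per trial: [mean heart rate (bpm), skin conductance (uS)]
train_x = [[62, 2.1], [65, 2.4], [88, 6.0], [92, 6.8]]
train_y = ["low_arousal", "low_arousal", "high_arousal", "high_arousal"]

model = nearest_centroid_fit(train_x, train_y)
print(nearest_centroid_predict(model, [90, 6.2]))  # high_arousal
```

In practice such pipelines use richer feature sets (heart rate variability, EEG band power) and stronger classifiers, but the structure — extract physiological features, fit a supervised model, predict the affective label of unseen trials — is the same.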

    Predicting Inattentional Blindness with Pupillary Response in a Simulated Flight Task

    Inattentional blindness (IB) is the failure of observers to notice the presence of a clearly viewable but unexpected visual event when attentional resources are diverted elsewhere. Knowing when an operator is unable to respond to or detect an unexpected event may help improve safety during task performance. Unfortunately, it is difficult to predict when such failures might occur. The current study was a secondary data analysis of data collected in the Human and Autonomous Vehicle Systems Laboratory at NASA Langley Research Center. Specifically, 60 subjects (29 male; normal or corrected-to-normal vision; mean age 34.5 years, SD = 13.3) were randomly assigned to one of three automation conditions (full automation, partial automation, and full manual) and took part in a simulated flight landing task. The dependent variable was the detection/non-detection of an IB occurrence (a truck on the landing runway). Scores on the NASA-TLX workload rating scale varied significantly by automation condition. The full automation condition reported the lowest subjective task load, followed by partial automation and then the manual condition. IB detection varied significantly across automation conditions. The moderate-workload partial automation condition exhibited the lowest likelihood of IB occurrence. The low-workload full automation condition did not differ significantly from the manual condition. Subjects who reported higher task demand had increased pupil dilation, and subjects with larger pupil dilation were more likely to detect the runway incursion. These results show that eye tracking may be used to identify periods of reduced unexpected visual stimulus detection for possible real-time IB mitigation.
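The reported relationship — larger pupil dilation predicting a higher chance of detecting the runway incursion — is the kind of effect typically modelled with logistic regression. The sketch below fits such a model by gradient descent on synthetic data; the dilation values and outcomes are invented for illustration, not taken from the NASA dataset.

```python
# Minimal sketch: logistic model relating pupil dilation to the probability
# of detecting an unexpected event. Data are synthetic, not from the study.

import math


def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """One-feature logistic regression fitted by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted detection probability
            w += lr * (y - p) * x                 # gradient step on the weight
            b += lr * (y - p)                     # gradient step on the bias
    return w, b


def predict_prob(w, b, x):
    """Probability of detection given pupil-dilation change x."""
    return 1 / (1 + math.exp(-(w * x + b)))


# Synthetic pupil-dilation change (mm) and detection outcome (1 = detected)
dilation = [0.1, 0.2, 0.3, 0.5, 0.6, 0.8]
detected = [0, 0, 0, 1, 1, 1]

w, b = fit_logistic(dilation, detected)
# Larger dilation should yield a higher predicted detection probability
print(predict_prob(w, b, 0.7) > predict_prob(w, b, 0.2))
```

A positive fitted weight captures the abstract's finding qualitatively: detection probability increases monotonically with pupil dilation, which is what would let a real-time monitor flag low-dilation periods as IB-prone.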

    Location-dependent threat and associated neural abnormalities in clinical anxiety

    Anxiety disorders are characterized by maladaptive defensive responses to distal or uncertain threats. Elucidating neural mechanisms of anxiety is essential to understand the development and maintenance of anxiety disorders. During fMRI scanning, patients with pathological anxiety (ANX, n = 23) and healthy controls (HC, n = 28) completed a contextual threat learning paradigm in which they picked flowers in a virtual environment comprising a danger zone, in which flowers were paired with shock, and a safe zone (no shock). ANX compared with HC showed 1) decreased ventromedial prefrontal cortex and anterior hippocampus activation during the task, particularly in the safe zone, 2) increased insula and dorsomedial prefrontal cortex activation during the task, particularly in the danger zone, and 3) increased amygdala and midbrain/periaqueductal gray activation in the danger zone prior to potential shock delivery. Findings suggest that ANX engage brain areas differently to modulate context-appropriate emotional responses when learning to discriminate cues within an environment.

    The Role of Spatial Location in Threat Memory: Modulation of Learning and Discrimination

    Learning about dangers in our environment is a vital adaptive behavior, and many have studied the association of environmental cues with danger or safety. However, the outcome associated with a specific environmental cue can depend on where it is encountered, and relatively little is known about the neural mechanisms behind location-specific threat learning within a single environment. Through a series of experiments, I developed a novel virtual reality task comprising safe and dangerous zones within a single environment. Healthy volunteers explored this environment while ‘picking flowers’, which they were told might contain bees. On contacting a flower, participants were frozen for a short period of time and, if ‘stung,’ received a mild electric shock at the end of this period. Participants had the opportunity to learn that bees only inhabited flowers in one ‘dangerous’ half of the environment. Participants were able to discriminate zones that predicted safety and threat within a single environment, with galvanic skin responses and subjective reports increasing as they approached and picked flowers in the dangerous half of the environment. Using functional magnetic resonance imaging, I found posterior medial temporal lobe structures (parahippocampus, posterior hippocampus) to be involved in memory for object locations. In contrast, the anterior hippocampus, amygdala, and ventromedial prefrontal cortex showed greater activity when approaching flowers, but this activity did not differentiate between safe and dangerous zones. However, once participants reached a flower in the dangerous zone, increased activity was seen in areas associated with imminent threat, such as the midbrain/periaqueductal gray, dorsal anterior cingulate, and insula cortices. These results are the first to reveal mechanisms of location-specific threat learning in humans, in the absence of obvious boundaries delineating safety and danger zones.
In the future, I hope this new paradigm will be used to understand the overgeneralization of threat in anxiety disorders and post-traumatic stress disorder.

    Psychophysiology, task complexity, and team factors determine emergency response teams' shared beliefs

    In field settings where the objective truth is not known, the extent to which you have the same understanding of the situation as your team leader may be used as an indicator for a team’s situation awareness. Two experiments showed emergency response team members’ degree of shared beliefs (measured as a ‘similarity index’) to be associated with the team they are in, but not with the position they hold in the team. This indicates that factors specific to the teams, e.g. the leader’s behavior, the team’s shared experience, or communication patterns, are important for a team’s situation awareness. In the second experiment, task complexity was manipulated with a scripted scenario design and heart rate variability was measured as an indicator of executive function. Shared beliefs were shown to be associated with the degree of high frequency modulation of heart rate variability. Further, shared beliefs were associated with the designed task complexity for some teams. The experiments showed no association between the measure of shared beliefs and subjective reports of situation awareness.
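The "high frequency modulation of heart rate variability" used in the second experiment is conventionally the spectral power of the heart rate series in the 0.15–0.4 Hz band. The sketch below computes band power from an evenly resampled RR-interval series via a plain DFT periodogram; the sampling rate and the synthetic signal are illustrative assumptions, not the study's data or exact analysis pipeline.

```python
# Minimal sketch: high-frequency (HF, 0.15-0.4 Hz) power of heart rate
# variability from an evenly resampled RR-interval series. Synthetic data.

import cmath
import math


def band_power(signal, fs, lo, hi):
    """Periodogram power of `signal` in the [lo, hi] Hz band (DFT-based)."""
    n = len(signal)
    m = sum(signal) / n
    x = [s - m for s in signal]  # remove the mean (DC component)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += (abs(coeff) ** 2) / (n * n)
    return power


# Synthetic RR series (s) resampled at 4 Hz, with a 0.25 Hz oscillation
# mimicking respiratory (vagally mediated) HF modulation
fs = 4.0
rr = [0.8 + 0.05 * math.sin(2 * math.pi * 0.25 * t / fs) for t in range(256)]

hf = band_power(rr, fs, 0.15, 0.40)   # high-frequency band
lf = band_power(rr, fs, 0.04, 0.15)   # low-frequency band, for comparison
print(hf > lf)  # the HF band dominates for this synthetic series
```

Real HRV analysis would first detect R-peaks, interpolate the irregularly spaced RR intervals onto an even grid, and typically use a windowed estimator such as Welch's method; the band definitions above follow the common convention.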