45 research outputs found

    Where is my pain? : a neurocognitive investigation of the spatial perception of pain

    Get PDF

    Task-irrelevant perceptual learning of crossmodal links: specificity and mechanisms

    Full text link
    It is clear that in order to perceive the external environment in its entirety, inputs from multiple sensory systems (i.e. modalities) must be combined with regard to each object in the environment. Humans are highly vision-dependent creatures, with a large portion of the human cortex dedicated to visual perception and many multimodal areas proposed to integrate vision with other modalities. Recent studies of multimodal integration have shown crossmodal facilitation (increased performance at short stimulus onset asynchronies, SOAs) and/or inhibition of return (IOR; decreased performance at long SOAs) for detection of a target stimulus in one modality following a location-specific cue in a different modality. It has also been shown that unimodal systems maintain some level of plasticity through adulthood, as revealed through studies of sensory deprivation (i.e. unimodal areas respond to multimodal stimuli), and especially through perceptual learning (PL), a well-defined type of cortical plasticity. Few studies have attempted to investigate the specificity and plasticity of crossmodal effects or the contexts in which multimodal processing is necessary for accurate visual perception. 
This dissertation addresses these unanswered questions of audiovisual (AV) crossmodal cuing effects by combining findings from unimodal perceptual learning with those of multimodal cuing effects as follows: (1) the short- and long-term effects of audiovisual crossmodal cuing, as well as the plasticity of these effects, were systematically examined using spatially specific audiovisual training to manipulate crossmodal associations through perceptual learning; (2) neural correlates of these plastic crossmodal effects were deduced using monocular viewing tests (discriminating simple and complex stimuli) following monocular and orientation-specific crossmodal perceptual training; and (3) psychophysical boundaries of plasticity within and among these mechanisms, as dependent on task/training type and difficulty, were determined by varying stimulus salience and examining post-PL changes in response operating characteristics.

    Divisions Within the Posterior Parietal Cortex Help Touch Meet Vision

    Get PDF
    The parietal cortex is divided into two major functional regions: the anterior parietal cortex that includes primary somatosensory cortex, and the posterior parietal cortex (PPC) that includes the rest of the parietal lobe. The PPC contains multiple representations of space. In Dijkerman and de Haan’s (see record 2007-13802-022) model, higher spatial representations are separate from PPC functions. This model should be developed further so that the functions of the somatosensory system are integrated with specific functions within the PPC and higher spatial representations. Through this further specification of the model, one can make better predictions regarding functional interactions between somatosensory and visual systems.

    Central role of somatosensory processes in sexual arousal as identified by neuroimaging techniques

    Get PDF
    Research on the neural correlates of sexual arousal is a growing field within affective neuroscience. A new approach studying the correlation between the hemodynamic cerebral response and the autonomic genital response has enabled distinct brain areas to be identified according to their role in inducing penile erection, on the one hand, and in representing penile sensation, on the other.

    Sensor Fusion in the Perception of Self-Motion

    No full text
    This dissertation was written at the Max Planck Institute for Biological Cybernetics (Max-Planck-Institut für Biologische Kybernetik) in Tübingen, in the department of Prof. Dr. Heinrich H. Bülthoff. The work was supported on the university side by Prof. Dr. Günther Palm (University of Ulm, Abteilung Neuroinformatik). The main evaluators are Prof. Dr. Günther Palm, Prof. Dr. Wolfgang Becker (University of Ulm, Sektion Neurophysiologie), and Prof. Dr. Heinrich Bülthoff.
The goal of this thesis was to investigate the integration of different sensory modalities in the perception of self-motion, using psychophysical methods. Experiments with healthy human participants were to be designed for and performed in the Motion Lab, which is equipped with a simulator platform and a projection screen. Results from the psychophysical experiments should be used to refine models of the multisensory integration process, with an emphasis on Bayesian (maximum-likelihood) integration mechanisms.
To put the psychophysical experiments into the larger framework of research on multisensory integration in the brain, results of neuroanatomical and neurophysiological experiments on multisensory integration are also reviewed.
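The maximum-likelihood integration scheme referred to in this abstract follows a standard form in cue-combination research: each sensory cue contributes a Gaussian estimate, and the optimal combined estimate weights each cue by its reliability (inverse variance). A minimal generic sketch (an illustration of the general technique, not the thesis's specific model; the function name and example numbers are hypothetical):

```python
import numpy as np

def ml_integrate(estimates, variances):
    """Combine independent Gaussian cue estimates by maximum likelihood.

    Each cue i contributes an estimate x_i with variance s_i^2; the
    minimum-variance (ML-optimal) combination weights each cue by its
    reliability 1 / s_i^2, and the fused variance is the harmonic
    combination of the cue variances (always <= the smallest one).
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    fused = float(np.dot(weights, estimates))
    fused_variance = float(1.0 / reliabilities.sum())
    return fused, fused_variance

# Hypothetical example: a visual heading estimate of 10 deg (variance 1)
# combined with a vestibular estimate of 14 deg (variance 4).
fused, var = ml_integrate([10.0, 14.0], [1.0, 4.0])
# fused = 0.8 * 10 + 0.2 * 14 = 10.8; var = 1 / (1 + 0.25) = 0.8
```

The key empirical prediction, which motivates psychophysical tests of this model, is that the fused variance is lower than that of either single cue.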

    Investigating the Inhibition of the Return of Attention in the Tactile Domain

    Get PDF
    Purpose: The time-course needed to elicit tactile inhibition of return (IOR) has not been well defined, due to the paucity of research in this area, particularly studies investigating spatial discrimination. Reportedly, tactile IOR uses higher-order mental representations to orient attention spatially, yet the properties of low-level dermatomal maps may better account for how IOR orients tactile attention in space, although their contribution is unclear. The present study sought to establish a time-course that evokes IOR in a unimodal tactile spatial discrimination task and decouples the contribution of the dermatome from higher-order representations. Methods: Two conditions containing distinct tactile cue-target paradigms, designed to tap into either the whole-finger representation (Finger trial) and its response gradient or the dermatomal representation (Location trial), were applied to the index and middle fingertips of both hands of 17 participants. Targets appeared at a cued or uncued finger following an inter-stimulus interval (ISI; 150, 600, or 1200 ms) for Finger trials, and at cued or uncued locations within a single fingertip after an ISI for Location trials. Results: At ISIs of 1200 ms, IOR and facilitation of response times (RTs) were elicited for cued and uncued homologous Finger trials, respectively. As ISIs increased, RTs for uncued homologous and adjacent Finger trials linearly decreased and increased, respectively. Thus, Finger trial trends exhibited a non-linear response gradient, but they were not different from those of Location trials; specifically, cued and uncued Location trials mirrored cued and uncued homologous Finger trials. While no facilitation or IOR occurred between Location trials, cued and uncued trials showed trends typical of IOR. 
Conclusion: We showed that tactile IOR can be elicited in a unimodal spatial discrimination task and that tactile spatial attention, oriented via IOR, is likely driven by low-level dermatomal maps.

    Putting It Into Words: The Impact of Visual Impairment on Perception, Experience and Presence

    Get PDF
    The experience of being “present” in a mediated environment, such that it appears real, is known to be affected by deficits in perception, yet little research has been devoted to disabled audiences. People with a visual impairment access audiovisual media by means of Audio Description, which gives visual information in verbal form. The AD user plays an active role, engaging their own perceptual processing systems and bringing real-world experiences to the mediated environment. In exploring visual impairment and presence, this thesis concerns a question fundamental to psychology: whether propositional and experiential knowledge equate. It casts doubt on current models of sensory compensation in the blind and puts forward an alternative hypothesis of linguistic compensation. Qualitative evidence from Study 1 suggests that, in the absence of bimodal (audio-visual) cues, words can compensate for missing visual information. The role of vision in multisensory integration is explored experimentally in Studies 2 and 3. Crossmodal associations arising both from direct perception and from imagery are shown to be altered by visual experience. Study 4 tests presence in an auditory environment. Non-verbal sound is shown to enhance presence in the sighted but not the blind. Both Studies 3 and 4 support neuroimaging evidence that words are processed differently in the absence of sight. Study 5, comparing mental spatial models, suggests this is explained by explicit verbal encoding in people with a visual impairment. Study 6 tests the effect of words on presence and emotion elicitation in an audiovisual environment. In the absence of coherent information from the dialogue, additional verbal information significantly improves understanding. Moreover, in certain circumstances, Audio Description significantly enhances presence and successfully elicits a target emotion. A model of Audio Description is presented. 
Implications are discussed for theoretical models of perceptual processing and presence in those with and without sight.