Sustained Spatial Attention in Touch: Modality-Specific and Multimodal Mechanisms
Sustained attention to a body location results in enhanced processing of tactile stimuli presented at that location compared to another, unattended location. In this paper, we review studies investigating the neural correlates of sustained spatial attention in touch. These studies consistently show that activity within modality-specific somatosensory areas (SI and SII) is modulated by sustained tactile-spatial attention. Recent evidence suggests that these somatosensory areas may be recruited as part of a larger cortical network, also including higher-level multimodal regions involved in spatial selection across modalities. We discuss, in turn, the following multimodal effects in sustained tactile-spatial attention tasks. First, cross-modal attentional links between touch and vision, reflected in enhanced processing of task-irrelevant visual stimuli at tactually attended locations, are mediated by common (multimodal) representations of external space. Second, vision of the body modulates activity underlying sustained tactile-spatial attention, facilitating attentional modulation of tactile processing in between-hand selection tasks (when the hands are sufficiently far apart) and impairing it in within-hand selection tasks. Finally, body posture influences mechanisms of sustained tactile-spatial attention, relying, at least partly, on remapping of tactile stimuli into external, visually defined spatial coordinates. Taken together, the findings reviewed in this paper indicate that sustained spatial attention in touch is subserved by both modality-specific and multimodal mechanisms. The interplay between these mechanisms allows flexible and efficient spatial selection within and across sensory modalities.
Investigating the Inhibition of the Return of Attention in the Tactile Domain
Purpose: The time-course needed to elicit tactile inhibition of return (IOR) has not been well defined, owing to the paucity of research in this area, especially studies investigating spatial discrimination. Tactile IOR reportedly uses higher-order mental representations to orient attention spatially, yet the properties of low-level dermatomal maps may better account for how IOR orients tactile attention in space, although their contribution is unclear. The present study sought to establish a time-course that evokes IOR in a unimodal tactile spatial discrimination task and decouples the contribution of the dermatome from that of higher-order representations. Methods: Two conditions containing distinct tactile cue-target paradigms, designed to tap into either the whole-finger representation (Finger trials) and its response gradient or the dermatomal representation (Location trials), were applied to the index and middle fingertips of both hands of 17 participants. Targets appeared at a cued or uncued finger following an inter-stimulus interval (ISI; 150, 600, or 1200 ms) for Finger trials, and at cued or uncued locations within a single fingertip after an ISI for Location trials. Results: At ISIs of 1200 ms, IOR and facilitation of response times (RTs) were elicited for cued and uncued homologous Finger trials, respectively. As ISIs increased, RTs for uncued homologous and adjacent Finger trials linearly decreased and increased, respectively. Thus, Finger trial types exhibited a non-linear response gradient, but they did not differ from Location trials: cued and uncued Location trials mirrored cued and uncued homologous Finger trials. Although no facilitation or IOR occurred between Location trials, cued and uncued trials showed trends typical of IOR. Conclusion: We showed that tactile IOR can be elicited in a unimodal spatial discrimination task and that tactile spatial attention, oriented via IOR, is likely driven by low-level dermatomal maps.
Task-irrelevant perceptual learning of crossmodal links: specificity and mechanisms
It is clear that in order to perceive the external environment in its entirety, inputs from multiple sensory systems (i.e. modalities) must be combined with regard to each object in the environment. Humans are highly vision-dependent creatures, with a large portion of the human cortex dedicated to visual perception and many multimodal areas proposed to integrate vision with other modalities. Recent studies of multimodal integration have shown crossmodal facilitation (increased performance at short stimulus onset asynchronies, SOAs) and/or inhibition of return (IOR; decreased performance at long SOAs) for detection of a target stimulus in one modality following a location-specific cue in a different modality. It has also been shown that unimodal systems maintain some level of plasticity through adulthood, as revealed through studies of sensory deprivation (i.e. unimodal areas respond to multimodal stimuli), and especially through perceptual learning (PL), a well-defined type of cortical plasticity. Few studies have attempted to investigate the specificity and plasticity of crossmodal effects or the contexts in which multimodal processing is necessary for accurate visual perception.
This dissertation addresses these unanswered questions of audiovisual (AV) crossmodal cuing effects by combining findings from unimodal perceptual learning with those of multimodal cuing effects as follows: (1) the short- and long-term effects of audiovisual crossmodal cuing, as well as the plasticity of these effects, were systematically examined using spatially specific audiovisual training to manipulate crossmodal associations using perceptual learning; (2) neural correlates of these plastic crossmodal effects were deduced using monocular viewing tests (discriminating simple and complex stimuli) following monocular and orientation-specific crossmodal perceptual training; and (3) psychophysical boundaries of plasticity within and among these mechanisms, as dependent on task/training type and difficulty, were determined by varying stimulus salience and examining post-PL changes in response operating characteristics.
Sensor Fusion in the Perception of Self-Motion
This dissertation was written at the Max Planck Institute for Biological Cybernetics (Max-Planck-Institut für Biologische Kybernetik) in Tübingen, in the department of Prof. Dr. Heinrich H. Bülthoff, with university support from Prof. Dr. Günther Palm (University of Ulm, Abteilung Neuroinformatik). The main evaluators were Prof. Dr. Günther Palm, Prof. Dr. Wolfgang Becker (University of Ulm, Sektion Neurophysiologie) and Prof. Dr. Heinrich Bülthoff. The goal of this thesis was to investigate the integration of different sensory modalities in the perception of self-motion, using psychophysical methods. Experiments with healthy human participants were designed for and performed in the Motion Lab, which is equipped with a simulator platform and projection screen. Results from the psychophysical experiments were used to refine models of the multisensory integration process, with an emphasis on Bayesian (maximum likelihood) integration mechanisms. To put the psychophysical experiments into the larger framework of research on multisensory integration in the brain, results of neuroanatomical and neurophysiological experiments on multisensory integration are also reviewed.
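The Bayesian (maximum likelihood) integration mechanism mentioned above has a standard closed form for independent Gaussian cues: the fused estimate is a precision-weighted average of the unimodal estimates, and the fused variance is never larger than that of the most reliable single cue. A minimal sketch, with illustrative numbers not taken from the dissertation:

```python
def fuse_cues(estimates, variances):
    """Maximum-likelihood fusion of independent Gaussian cue estimates.

    Each cue i gets weight w_i = (1/var_i) / sum_j(1/var_j), and the
    fused variance is 1 / sum_j(1/var_j), so combining cues can only
    reduce uncertainty relative to the best single cue.
    """
    precisions = [1.0 / v for v in variances]
    total_precision = sum(precisions)
    fused = sum(p * s for p, s in zip(precisions, estimates)) / total_precision
    fused_var = 1.0 / total_precision
    return fused, fused_var

# Hypothetical example: a visual heading estimate of 10.0 deg (variance 1.0)
# combined with a vestibular estimate of 14.0 deg (variance 3.0).
# The fused estimate lands closer to the more reliable visual cue.
heading, heading_var = fuse_cues([10.0, 14.0], [1.0, 3.0])
```

With these numbers the fused heading is 11.0 deg with variance 0.75, below the visual-only variance of 1.0, which is the signature prediction of maximum-likelihood integration tested in such psychophysical work.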
Crossmodal spatial representations: behavioural and electrophysiological evidence on the effects of vision and posture on somatosensory processing in normal population and in right-brain-damaged patients
Interactions between different sensory modalities can affect processing of unisensory information, at both a perceptual and a neural level. The studies reported in this thesis address the effects of crossmodal interactions between vision and touch on tactile processing. In particular, these studies provide new behavioural and neural (ERP; event-related potentials) evidence showing that: i) crossmodal interactions enhance tactile processing when (task-irrelevant) visual stimuli are presented, simultaneously with touch, at the same location as tactile stimuli compared to a different location in near or in far space; ii) crossmodal interactions between spatially congruent visual and tactile stimuli enhance tactile processing compared to incongruent visuo-tactile stimulation, also when (task-irrelevant) visual stimuli presented near the body are observed indirectly in a mirror (i.e., appearing in far space), although in this condition these crossmodal spatial modulations are delayed compared to direct viewing of the visual stimuli; iii) vision of the body (i.e., the hands) facilitates tactile-spatial attentional selection, as compared to no visual input (blindfolded condition), and also compared to visual-spatial information only (i.e., when the hands are hidden from view); iv) in right-brain-damaged patients with tactile neglect and/or extinction, vision of the stimulated hand may further improve the speed of processing of contralesional tactile stimuli when the left, contralesional hand is placed in the right, 'intact' hemispace, in a crossed posture.
In these studies, visual modulations of touch were present at early time intervals (i.e., early ERP components), suggesting that crossmodal spatial interactions can affect processing in cortical areas that have been considered 'modality-specific', namely, the secondary somatosensory cortex (SII). Taken together, the findings from the studies in this thesis provide new behavioural and ERP evidence in support of crossmodal spatial representations of the body and of the space surrounding the body (i.e., peripersonal space) in humans.
The spatial logic of fear
Peripersonal space (PPS) is the multimodal sensorimotor representation of the space surrounding the body. This thesis investigates how PPS is modulated by emotional faces, which represent a particularly salient cue in our environment. Study 1 shows that looming neutral, joyful, and angry faces gradually facilitate motor responses to tactile stimuli. Conversely, looming fearful faces show no such effect. Also, at the closest position in PPS, multisensory response facilitation is lower for fearful than neutral faces. Study 2a addresses the hypothesis that fearful faces promote a redirection of attention towards peripheral space. In line with this, it shows that motor responses to tactile stimuli are facilitated when a looming fearful face is associated with the appearance of a visual element presented in the periphery, rather than close to the face. Also, this effect is found in near space and not in far space. This result suggests that a near looming fearful face elicits a redirection of attention to peripheral space. Such an effect is not found for neutral, joyful, or angry faces (Study 2b). Study 3 shows that the redirection of attention in PPS by fearful faces is accompanied by a modulation of the electrophysiological signal associated with face processing (N170). Finally, Study 4 shows that the skin conductance response to looming fearful, but not joyful or neutral, faces is modulated by the distance of the face from the participant's body, being maximal in near space. Together these studies show that, at variance with other emotions, fearful faces shift attention to portions of space other than that occupied by the face, where the threat may be located. It is argued that this fear-evoked redirection of attention may enhance the defensive function of PPS when most needed, i.e., when the source of threat is nearby but its location remains unknown.
Perceptual abnormalities in amputees: phantom pain, mirror-touch synaesthesia and referred tactile sensations
It is often reported that after amputation people experience "a constant or inconstant... sensory ghost... faintly felt at times, but ready to be called up to [their] perception" (Mitchell, 1866). Perceptual abnormalities have been highlighted in amputees, such as sensations in the phantom when being stroked elsewhere (Ramachandran et al., 1992) or when observing someone in pain (Giummarra and Bradshaw, 2008). This thesis explored the perceptual changes that occur following amputation whilst focusing on pain, vision and touch. A sample of over 100 amputees was recruited through the National Health Service. Despite finding no difference in phantom pain based on physical amputation details or non-painful perceptual phenomena, results from Paper 1 indicated that phantom pain may be more intense, with sensations occurring more frequently, in amputees whose pain was trigger-induced. The survey in Paper 2 identified a group of amputees who, in losing a limb, acquired mirror-touch synaesthesia. Higher levels of empathy found in mirror-touch amputees might mean that some people are predisposed to develop synaesthesia, but that it takes sensory loss to bring dormant cross-sensory interactions into consciousness. Although the mirror system may reach supra-threshold levels in some amputees, the experiments in Paper 3 suggested a relatively intact mirror system in amputees overall. Specifically, in a task of apparent biological motion, amputees showed a similar, although weaker, pattern of results to normal-bodied participants. The results of Paper 4 showed that tactile spatial acuity on the face was also largely unaffected by amputation, as no difference was found between the sides ipsilateral and contralateral to the stump. In Paper 5, cross-modal cuing was used to investigate whether referred tactile sensations could prime a visually presented target in the space occupied by the phantom limb. We conclude that perception is only moderately affected in most amputees, but that in some the sensory loss allows normally sub-threshold processing to reach conscious awareness.