    Object-guided Spatial Attention in Touch: Holding the Same Object with Both Hands Delays Attentional Selection

    Previous research has shown that attention to a specific location on a uniform visual object spreads throughout the entire object. Here we demonstrate that, similar to the visual system, spatial attention in touch can be object guided. We measured event-related brain potentials to tactile stimuli arising from objects held by observers' hands, when the hands were placed either near each other or far apart, holding two separate objects, or when they were far apart but holding a common object. Observers covertly oriented their attention to the left, to the right, or to both hands, following bilaterally presented tactile cues indicating likely tactile target location(s). Attentional modulations for tactile stimuli at attended compared to unattended locations were present in the time range of early somatosensory components only when the hands were far apart, but not when they were near. This was found to reflect enhanced somatosensory processing at attended locations rather than suppressed processing at unattended locations. Crucially, holding a common object with both hands delayed attentional selection, similar to when the hands were near. This shows that the proprioceptive distance effect on tactile attentional selection arises when distant event locations can be treated as separate and unconnected sources of tactile stimulation, but not when they form part of the same object. These findings suggest that, similar to visual attention, both space- and object-based attentional mechanisms can operate when we select between tactile events on our body surface.

    Observation of Andreev Reflection Enhanced Shot Noise

    We have experimentally investigated the quasiparticle shot noise in NbN/MgO/NbN superconductor-insulator-superconductor tunnel junctions. The observed shot noise is significantly larger than theoretically expected. We attribute this to the occurrence of multiple Andreev reflection processes in pinholes present in the MgO barrier. This mechanism causes the current to flow in large charge quanta (Andreev clusters), with a voltage-dependent average value of m = 1 + 2Δ/eV times the electron charge. Because of this charge enhancement effect, the shot noise is increased by the factor m.
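
    The charge-enhancement factor quoted above translates directly into an enhanced current-noise spectral density. The following sketch, assuming the conventional effective-charge form S = 2·m·e·I with m = 1 + 2Δ/eV and an illustrative NbN gap value (not taken from the paper), shows how the enhancement grows at low bias.

    ```python
    # Minimal sketch (not the authors' analysis): effective-charge picture of
    # Andreev-enhanced shot noise, S = 2*m*e*I with m = 1 + 2*Delta/(e*V).
    # The NbN gap value below is illustrative only.

    E_CHARGE = 1.602e-19   # elementary charge (C)
    DELTA_NBN = 2.6e-3     # assumed NbN gap (eV), illustrative

    def charge_enhancement(voltage_v: float, gap_ev: float = DELTA_NBN) -> float:
        """Average charge quantum in units of e: m = 1 + 2*Delta/(e*V)."""
        return 1.0 + 2.0 * gap_ev / voltage_v

    def shot_noise(current_a: float, voltage_v: float) -> float:
        """Current-noise spectral density S = 2*m*e*I (A^2/Hz)."""
        return 2.0 * charge_enhancement(voltage_v) * E_CHARGE * current_a

    for v in (0.5e-3, 1e-3, 2e-3, 5e-3):   # bias voltages (V)
        print(f"V = {v*1e3:.1f} mV -> m = {charge_enhancement(v):.1f}, "
              f"S = {shot_noise(1e-6, v):.2e} A^2/Hz at I = 1 uA")
    ```

    Because m grows as the bias voltage drops below the gap, this effective-charge picture predicts the largest departure from the Poissonian value 2eI at low bias.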

    Relative finger position influences whether you can localize tactile stimuli

    To investigate whether the relative positions of the fingers influence tactile localization, participants were asked to localize tactile stimuli applied to their fingertips. We measured the location and rate of errors for three finger configurations: fingers stretched out and held together so that they touch each other, fingers stretched out and spread apart maximally, and fingers stretched out with the two hands on top of each other so that the fingers are interwoven. We expected that when the fingers touch each other, the error rate to adjacent fingers would be higher than when the fingers are spread apart; in particular, we reasoned that localization would improve when the fingers are spread. We aimed to assess whether such adjacency is measured in external coordinates (taking proprioception into account) or on the body (in skin coordinates). The results confirmed that the error rate was lower when the fingers were spread. However, there was no decrease in errors to neighbouring fingertips in the fingers-spread condition compared with the fingers-together condition. In an additional experiment, we showed that the lower error rate with spread fingers was not related to the continuous tactile input from neighbouring fingers when the fingers were together. The current results suggest that proprioceptive information is taken into account in perceiving the location of a stimulus on one of the fingertips

    Timing and Sequence of Brain Activity in Top-Down Control of Visual-Spatial Attention

    Recent brain imaging studies using functional magnetic resonance imaging (fMRI) have implicated a frontal-parietal network in the top-down control of attention. However, little is known about the timing and sequence of activations within this network. To investigate these timing questions, we used event-related electrical brain potentials (ERPs) and a specially designed visual-spatial attentional-cueing paradigm, which were applied as part of a multi-methodological approach that included a closely corresponding event-related fMRI study using an identical paradigm. In the first 400 ms post cue, attention-directing and control cues elicited similar general cue-processing activity, corresponding to the more lateral subregions of the frontal-parietal network identified with the fMRI. Following this, the attention-directing cues elicited a sustained negative-polarity brain wave that was absent for control cues. This activity could be linked to the more medial frontal–parietal subregions similarly identified in the fMRI as specifically involved in attentional orienting. Critically, both the scalp ERPs and the fMRI-seeded source modeling for this orienting-related activity indicated an earlier onset of frontal versus parietal contribution (∼400 versus ∼700 ms). This was then followed (∼800–900 ms) by pretarget biasing activity in the region-specific visual-sensory occipital cortex. These results indicate an activation sequence of key components of the attentional-control brain network, providing insight into their functional roles. More specifically, these results suggest that voluntary attentional orienting is initiated by medial portions of frontal cortex, which then recruit medial parietal areas. Together, these areas then implement biasing of region-specific visual-sensory cortex to facilitate the processing of upcoming visual stimuli

    Keeping in Touch with One's Self: Multisensory Mechanisms of Self-Consciousness

    BACKGROUND: The spatial unity between self and body can be disrupted by employing conflicting visual-somatosensory bodily input, thereby bringing neurological observations on bodily self-consciousness under scientific scrutiny. Here we designed a novel paradigm linking the study of bodily self-consciousness to the spatial representation of visuo-tactile stimuli by measuring crossmodal congruency effects (CCEs) for the full body. METHODOLOGY/PRINCIPAL FINDINGS: We measured full body CCEs by attaching four vibrator-light pairs to the trunks (backs) of subjects who viewed their bodies from behind via a camera and a head-mounted display (HMD). Subjects made speeded elevation (up/down) judgments of the tactile stimuli while ignoring the light stimuli. To modulate self-identification with the seen body, subjects were stroked on their backs with a stick, and the felt stroking was either synchronous or asynchronous with the stroking that could be seen via the HMD. We found that (1) tactile stimuli were mislocalized towards the seen body, (2) CCEs were modulated systematically during visual-somatosensory conflict when subjects viewed their body but not when they viewed a body-sized object, i.e. CCEs were larger during synchronous than during asynchronous stroking of the body, and (3) these changes in the mapping of tactile stimuli were induced in the same experimental condition in which predictable changes in bodily self-consciousness occurred. CONCLUSIONS/SIGNIFICANCE: These data reveal that systematic alterations in the mapping of tactile stimuli occur in a full body illusion and thus establish CCE magnitude as an online performance proxy for subjective changes in global bodily self-consciousness
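
    As a concrete illustration of the measure, the sketch below computes a CCE in the standard way, as the mean reaction-time cost of incongruent relative to congruent visuo-tactile pairings; the data layout and numbers are hypothetical and are not the authors' analysis code.

    ```python
    # Hypothetical CCE computation from speeded elevation judgments:
    # mean RT on incongruent trials (light and vibration at different
    # elevations) minus mean RT on congruent trials, correct trials only.
    from statistics import mean

    def cce(trials):
        """trials: iterable of dicts with keys 'rt_ms', 'congruent', 'correct'."""
        correct = [t for t in trials if t["correct"]]
        rt_incongruent = mean(t["rt_ms"] for t in correct if not t["congruent"])
        rt_congruent = mean(t["rt_ms"] for t in correct if t["congruent"])
        return rt_incongruent - rt_congruent  # larger CCE = stronger visuo-tactile binding

    # Toy comparison between stroking conditions (values invented).
    sync_trials = [{"rt_ms": 520, "congruent": True, "correct": True},
                   {"rt_ms": 610, "congruent": False, "correct": True}]
    async_trials = [{"rt_ms": 530, "congruent": True, "correct": True},
                    {"rt_ms": 575, "congruent": False, "correct": True}]
    print("CCE synchronous :", cce(sync_trials), "ms")
    print("CCE asynchronous:", cce(async_trials), "ms")
    ```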

    Manipulable Objects Facilitate Cross-Modal Integration in Peripersonal Space

    Previous studies have shown that tool use often modifies one's peripersonal space – i.e. the space directly surrounding our body. Given our profound experience with manipulable objects (e.g. a toothbrush, a comb or a teapot), in the present study we hypothesized that the observation of pictures representing manipulable objects would result in a remapping of peripersonal space as well. Subjects were required to report the location of vibrotactile stimuli delivered to the right hand, while ignoring visual distractors superimposed on pictures representing everyday objects. Pictures could represent objects of high manipulability (e.g. a cell phone), medium manipulability (e.g. a soap dispenser) or low manipulability (e.g. a computer screen). In the first experiment, when subjects attended to the action associated with the objects, a strong cross-modal congruency effect (CCE) was observed for pictures representing medium and high manipulability objects, reflected in faster reaction times when the vibrotactile stimulus and the visual distractor were in the same location, whereas no CCE was observed for low manipulability objects. This finding was replicated in a second experiment in which subjects attended to the visual properties of the objects. These findings suggest that the observation of manipulable objects facilitates cross-modal integration in peripersonal space

    Neural Correlates of Visual Motion Prediction

    Predicting the trajectories of moving objects in our surroundings is important for many everyday scenarios, such as driving, walking, reaching, hunting and combat. We determined human subjects’ performance and task-related brain activity in a motion trajectory prediction task. The task required spatial and motion working memory as well as the ability to extrapolate motion information in time to predict future object locations. We showed that the neural circuits associated with motion prediction included frontal, parietal and insular cortex, as well as the thalamus and the visual cortex. Interestingly, deactivation of many of these regions seemed to be more closely related to task performance. The differential activity during motion prediction vs. direct observation was also correlated with task performance. The neural networks involved in our visual motion prediction task are significantly different from those that underlie visual motion memory and imagery. Our results set the stage for examining how deficiencies in these networks, such as those caused by aging and mental disorders, affect visual motion prediction and its consequences for mobility-related daily activities
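
    The abstract does not specify how subjects' extrapolation was modelled; as a rough illustration of the computation the task demands, the sketch below assumes simple constant-velocity extrapolation from the last visible samples of the trajectory.

    ```python
    # Constant-velocity extrapolation sketch (an assumption, not the paper's
    # model): estimate velocity from the two most recent visible samples and
    # project the position forward to the probed time.
    from typing import List, Tuple

    Point = Tuple[float, float]

    def predict_position(samples: List[Tuple[float, Point]], t_future: float) -> Point:
        """samples: [(t, (x, y)), ...] of the visible trajectory, in time order."""
        (t0, (x0, y0)), (t1, (x1, y1)) = samples[-2], samples[-1]
        vx, vy = (x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0)
        dt = t_future - t1
        return (x1 + vx * dt, y1 + vy * dt)

    # Object seen at two time points; predict its location 0.5 s after it disappears.
    visible = [(0.0, (0.0, 0.0)), (0.2, (2.0, 1.0))]
    print(predict_position(visible, 0.7))  # -> (7.0, 3.5)
    ```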

    Rubber Hands Feel Touch, but Not in Blind Individuals

    Psychology and neuroscience have a long-standing tradition of studying blind individuals to investigate how visual experience shapes perception of the external world. Here, we study how blind people experience their own body by exposing them to a multisensory body illusion: the somatic rubber hand illusion. In this illusion, healthy blindfolded participants experience that they are touching their own right hand with their left index finger, when in fact they are touching a rubber hand with their left index finger while the experimenter touches their right hand in a synchronized manner (Ehrsson et al. 2005). We compared the strength of this illusion between a group of blind individuals (n = 10), all of whom had experienced severe visual impairment or complete blindness from birth, and a group of age-matched blindfolded sighted participants (n = 12). The illusion was quantified subjectively using questionnaires and behaviorally by asking participants to point to the felt location of the right hand. The results showed that the sighted participants experienced a strong illusion, whereas the blind participants experienced no illusion at all, a difference that was evident in both tests employed. A further experiment testing the participants' basic ability to localize the right hand in space without vision (proprioception) revealed no difference between the two groups. Taken together, these results suggest that blind individuals with impaired visual development have a more veridical percept of self-touch and a less flexible and dynamic representation of their own body in space compared to sighted individuals. We speculate that the multisensory brain systems that re-map somatosensory signals onto external reference frames are less developed in blind individuals and therefore do not allow efficient fusion of tactile and proprioceptive signals from the two upper limbs into a single illusory experience of self-touch as in sighted individuals

    Tactile localization biases are modulated by gaze direction

    Identifying the spatial location of touch on the skin surface is a fundamental function of our somatosensory system. Although stimulation of even a single mechanoreceptive afferent fibre is sufficient to produce a clearly localised percept, tactile localisation can also be modulated by higher-level processes such as body posture. This suggests that tactile events are coded in multiple representations based on different coordinate systems. Recent reports provide evidence for systematic biases in tactile localisation tasks, which are thought to result from a supramodal representation of the skin surface. While the influence of non-informative vision of the body and of gaze direction on tactile discrimination tasks has been extensively studied, their effects on tactile localisation tasks remain largely unexplored. To address this question, participants performed a tactile localization task on their left hand under different visual conditions by means of a mirror box: in the mirror condition, a single stimulus was delivered to the left hand while the reflection of the right hand was seen through the mirror; in the object condition, participants looked at a box through the mirror; and in the right hand condition, participants looked directly at their right hand. Participants reported the location of the tactile stimuli using a silhouette of a hand. Results showed a shift in the localization of the touches towards the tips of the fingers (distal bias) and towards the thumb (radial bias) across conditions. Critically, distal biases were reduced when participants looked towards the mirror compared to when they looked at their right hand, suggesting that gaze direction reduces the typical proximo-distal biases in tactile localization. Moreover, vision of the hand modulates the internal configuration of the reported locations, elongating it along the radio-ulnar axis
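
    As an illustration of how such biases can be quantified, the sketch below computes mean proximo-distal and radio-ulnar bias components from actual versus reported stimulus positions on the hand silhouette; the coordinate convention and values are assumptions, not the authors' procedure.

    ```python
    # Hypothetical bias computation: mean difference between reported and
    # actual stimulus positions, split into the two hand axes. Convention
    # assumed here: x = radio-ulnar (positive towards the thumb),
    # y = proximo-distal (positive towards the fingertips), in mm.
    from statistics import mean

    def localization_bias(actual, reported):
        """actual, reported: matched lists of (x, y) points on the hand silhouette."""
        dx = [r[0] - a[0] for a, r in zip(actual, reported)]
        dy = [r[1] - a[1] for a, r in zip(actual, reported)]
        return {"radio_ulnar_bias_mm": mean(dx),
                "proximo_distal_bias_mm": mean(dy)}

    actual = [(10.0, 40.0), (25.0, 42.0), (40.0, 38.0)]
    reported = [(12.0, 47.0), (24.0, 50.0), (43.0, 45.0)]
    print(localization_bias(actual, reported))  # positive values = radial / distal shifts
    ```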