58 research outputs found

    Age effects in mental rotation are due to the use of a different strategy

    Older participants are slower than younger individuals in rotating objects in their minds. One possible explanation for this age effect in mental rotation (MR) is that the two age groups use different strategies. To explore this possibility, in the present study younger and older participants were assessed with two MR tasks, using three-dimensional (Exp. 1) and two-dimensional objects (Exp. 2) at different complexity levels. In both experiments, the performance of the two age groups was comparable for simple objects. However, systematic differences were observed between the MR rates of younger and older adults when processing complex objects. Younger participants were faster in processing complex than simple objects, whereas older participants were slower in rotating complex as compared to simple objects. These results reveal that the two age groups selected different strategies when rotating complex objects: younger participants transformed a simplified representation of the objects, while older participants rotated the objects piece by piece.
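MR rates of the kind compared above are conventionally estimated as the slope of response time against angular disparity; a steeper slope means slower mental rotation. A minimal sketch with hypothetical data (the standard slope method, not details taken from this abstract):

```python
import numpy as np

# Hypothetical data: response time grows linearly with angular disparity.
angles_deg = np.array([0, 45, 90, 135, 180])
rts_ms = 600 + 3.0 * angles_deg  # assume 3 ms per degree of rotation

# MR rate = slope of the RT-vs-angle regression line (ms per degree);
# polyfit with degree 1 returns (slope, intercept).
rate_ms_per_deg, intercept_ms = np.polyfit(angles_deg, rts_ms, 1)
```

Comparing these slopes between groups and object types is what reveals the strategy differences described above.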

    The space of words: on the sensorimotor processing of variable affordances in noun-adjective combinations.

    Evidence suggests that the processing of graspable object nouns elicits specific motor programs related to potential hand-object interactions. Notably, adjectives specifying manipulative features of these objects are integrated into this sensorimotor representation. The present experiment investigated the effect of adjectives denoting the position of the object in space on the sensorimotor representation of graspable object nouns. We used a reach-to-grasp compatibility task, in which participants had to categorize object nouns as artifacts or natural objects by performing either a power or precision grip that either matched or did not match the typical grip associated with the object. On each trial, the object noun was presented with a near or far adjective. While reliable grasp-compatibility effects emerged on RTs for object nouns, these effects were not modulated by the spatial position denoted by the adjective. Spatial adjectives thus appear not to be integrated into the noun's sensorimotor representation, supporting the distinction between stable and variable affordances.

    Searching for a tactile target: the impact of set-size on the N140cc

    The time needed to find a visual target amongst distractors (search task) can increase as a function of the number of distractors (set-size) in the search-array (inefficient search). While the allocation of attention in search tasks has been extensively investigated and debated in the visual domain, little is known about these mechanisms in touch. Initial behavioral evidence shows inefficient search behavior when participants have to distinguish between target and distractors defined by their vibro-tactile frequencies. In the present study, to investigate the allocation of attention to items of the search-array, we measured the N140cc during a tactile task in which the set-size was manipulated. The N140cc is a lateralized component of event-related brain potentials recently described as a psychophysiological marker of attentional allocation in tactile search tasks. Participants localized the target, a singleton frequency, while ignoring one, three or five homogeneous distractors. Results showed that error rates increased linearly as a function of set-size, while response times were not affected. Reliable N140cc components were observed for all set-sizes. Crucially, the N140cc amplitude decreased as the number of distractors increased. We argue that the presence of additional distractors hindered the preattentive analysis of the search array, resulting in increased uncertainty about the target location (an inefficient preattentive stage). This, in turn, increased the variability of the deployment of attention to the target, resulting in reduced N140cc amplitudes. Consistent with existing behavioral evidence, these findings highlight systematic differences between the visual and the tactile attentional systems.
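Lateralized components such as the N140cc are derived by subtracting the ERP at electrodes ipsilateral to the target from the ERP at contralateral electrodes, then averaging the difference wave in a time window. A sketch of that generic recipe on toy waveforms; the 140-180 ms window and amplitudes are illustrative assumptions, not values from this study:

```python
import numpy as np

def lateralized_amplitude(contra, ipsi, times, window=(0.14, 0.18)):
    """Mean amplitude, within a time window, of the contralateral-
    minus-ipsilateral difference wave (the generic construction behind
    lateralized ERP components such as the N140cc)."""
    diff = np.asarray(contra, float) - np.asarray(ipsi, float)
    times = np.asarray(times)
    mask = (times >= window[0]) & (times <= window[1])
    return diff[mask].mean()

# Toy mean waveforms: a -2 microvolt contralateral negativity in the window.
times = np.linspace(0.0, 0.4, 401)
ipsi = np.zeros_like(times)
contra = np.where((times >= 0.14) & (times <= 0.18), -2.0, 0.0)
amp = lateralized_amplitude(contra, ipsi, times)
```

Comparing this mean amplitude across set-size conditions is what shows the attenuation reported above.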

    Hands behind your back: effects of arm posture on tactile attention in the space behind the body

    Previous research has shown that tactile-spatial information originating from the front of the body is remapped from an anatomical to an external-spatial coordinate system, guided by the availability of visual information early in development. Comparatively little is known about regions of space for which visual information is not typically available, such as the space behind the body. This study tests for the first time the electrophysiological correlates of the effects of proprioceptive information on tactile-attentional mechanisms in the space behind the back. Observers were blindfolded and tactually cued to detect infrequent tactile targets on either their left or right hand, and to respond to them either vocally or with index finger movements. We measured event-related potentials (ERPs) to tactile probes on the hands in order to explore tactile-spatial attention when the hands were either held close together or far apart behind the observer's back. Results show systematic effects of arm posture on tactile-spatial attention different from those previously found for front space. While attentional selection is typically more effective for hands placed far apart than close together in front space, we found that selection occurred more rapidly for close than far hands behind the back, during both covert attention and movement preparation tasks. This suggests that proprioceptive space may 'wrap' around the body, following the hands as they extend horizontally from the front body midline to the centre of the back.

    Stay Tuned: What Is Special About Not Shifting Attention?

    Background: When studying attentional orienting processes, brain activity elicited by a symbolic cue is usually compared to a neutral condition in which no information is provided about the upcoming target location. It is generally assumed that when a neutral cue is provided, participants do not shift their attention. The present study sought to validate this assumption. We further investigated whether anticipated task demands had an impact on brain activity related to processing symbolic cues. Methodology/Principal Findings: Two experiments were conducted, during which event-related potentials were elicited by symbolic cues that instructed participants to shift their attention to a particular location on a computer screen. In Experiment 1, attention shift-inducing cues were compared to non-informative cues, while in both conditions participants were required to detect target stimuli that were subsequently presented at peripheral locations. In Experiment 2, a non-ambiguous "stay-central" cue that explicitly required participants not to shift their attention was used instead. In the latter case, target stimuli that followed a stay-central cue were also presented at a central location. Both experiments revealed enlarged early-latency contralateral ERP components to shift-inducing cues compared to those elicited by either non-informative (Exp. 1) or stay-central cues (Exp. 2). In addition, cueing effects were modulated by the anticipated difficulty of the upcoming target, particularly so in Experiment 2. A positive difference, predominantly over the posterior contralateral scalp areas, could be observed for stay-central cues, especially for those predicting that the upcoming target would be easy. This effect was not present for non-informative cues. Conclusions/Significance: We interpret our results in terms of a more rapid engagement of attention occurring in the presence of a more predictive instruction (i.e. stay-central, easy target). Our results indicate that the human brain is capable of very rapidly identifying the difference between different types of instructions.

    Tactile localization biases are modulated by gaze direction

    Identifying the spatial location of touch on the skin surface is a fundamental function of our somatosensory system. Despite the fact that stimulation of even single mechanoreceptive afferent fibres is sufficient to produce clearly localised percepts, tactile localization can also be modulated by higher-level processes such as body posture. This suggests that tactile events are coded using multiple representations based on different coordinate systems. Recent reports provide evidence for systematic biases in tactile localization tasks, which are thought to result from a supramodal representation of the skin surface. While the influence of non-informative vision of the body and gaze direction on tactile discrimination tasks has been extensively studied, their effects on tactile localization tasks remain largely unexplored. To address this question, participants performed a tactile localization task on their left hand under different visual conditions by means of a mirror box: in the mirror condition, a single stimulus was delivered to the participant's hand while the reflection of the right hand was seen through the mirror; in the object condition, participants looked at a box through the mirror; and in the right-hand condition, participants looked directly at their right hand. Participants reported the location of the tactile stimuli using a silhouette of a hand. Results showed a shift in the localization of the touches towards the tips of the fingers (distal bias) and towards the thumb (radial bias) across conditions. Critically, distal biases were reduced when participants looked towards the mirror compared to when they looked at their right hand, suggesting that gaze direction reduces the typical proximo-distal biases in tactile localization. Moreover, vision of the hand modulates the internal configuration of the points' locations, elongating it along the radio-ulnar axis.
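Biases of this kind are typically quantified as the mean vector from the actual stimulation site to the judged site on the hand silhouette. A minimal sketch with hypothetical coordinates; the axis convention is an assumption (x = radio-ulnar, positive toward the thumb; y = proximo-distal, positive toward the fingertips), not one stated in this abstract:

```python
import numpy as np

# Hypothetical trials: actual vs. judged positions on the hand silhouette.
actual_xy = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
judged_xy = np.array([[0.2, 0.5], [1.2, 1.5], [2.2, 1.0]])

# Bias = judged minus actual position, averaged over trials.
# A positive y-component is a distal bias (toward the fingertips);
# a positive x-component is a radial bias (toward the thumb).
radial_bias, distal_bias = (judged_xy - actual_xy).mean(axis=0)
```

Comparing these bias components across viewing conditions is how effects such as the reduced distal bias under mirror viewing would be assessed.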

    Dopamine, affordance and active inference.

    The role of dopamine in behaviour and decision-making is often cast in terms of reinforcement learning and optimal decision theory. Here, we present an alternative view that frames the physiology of dopamine in terms of Bayes-optimal behaviour. In this account, dopamine controls the precision or salience of (external or internal) cues that engender action. In other words, dopamine balances bottom-up sensory information and top-down prior beliefs when making hierarchical inferences (predictions) about cues that have affordance. In this paper, we focus on the consequences of changing tonic levels of dopamine firing, using simulations of cued sequential movements. Crucially, the predictions driving movements are based upon a hierarchical generative model that infers the context in which movements are made. This means that we can confuse agents by changing the context (order) in which cues are presented. These simulations provide a (Bayes-optimal) model of contextual uncertainty and set switching that can be quantified in terms of behavioural and electrophysiological responses. Furthermore, one can simulate dopaminergic lesions (by changing the precision of prediction errors) to produce pathological behaviours that are reminiscent of those seen in neurological disorders such as Parkinson's disease. We use these simulations to demonstrate how a single functional role for dopamine at the synaptic level can manifest in different ways at the behavioural level.
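The core idea, dopamine as the precision of sensory prediction errors, can be illustrated with a toy one-level Gaussian inference, far simpler than the paper's hierarchical generative model: a belief is updated by precision-weighted prediction errors, and raising the sensory precision pulls the posterior toward the cue. A didactic sketch under those simplifying assumptions:

```python
def infer_cause(obs, prior_mean, pi_sensory, pi_prior, lr=0.1, n_iter=200):
    """Toy precision-weighted inference for a single Gaussian cause.

    Gradient descent on the (negative log) posterior: the belief mu is
    pushed by a sensory prediction error and a prior prediction error,
    each weighted by its precision. pi_sensory plays the role this
    account assigns to tonic dopamine (precision of sensory evidence);
    this is an illustrative sketch, not the paper's hierarchical model.
    """
    mu = float(prior_mean)
    for _ in range(n_iter):
        eps_sensory = obs - mu        # bottom-up prediction error
        eps_prior = prior_mean - mu   # top-down prediction error
        mu += lr * (pi_sensory * eps_sensory + pi_prior * eps_prior)
    return mu

# High sensory precision ("high dopamine") pulls the belief toward the cue;
# low sensory precision leaves it dominated by the prior.
mu_high = infer_cause(obs=1.0, prior_mean=0.0, pi_sensory=4.0, pi_prior=1.0)
mu_low = infer_cause(obs=1.0, prior_mean=0.0, pi_sensory=0.25, pi_prior=1.0)
```

The fixed point is the precision-weighted average of cue and prior, so lowering `pi_sensory` (a crude analogue of a dopaminergic lesion) shifts behaviour toward prior-driven, context-insensitive responding.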

    Finger posture modulates structural body representations

    Patients with lesions of the left posterior parietal cortex commonly fail in identifying their fingers, a condition known as finger agnosia, yet are relatively unimpaired in sensation and skilled action. Such dissociations have traditionally been interpreted as evidence that structural body representations (BSRs), such as the body structural description, are distinct from sensorimotor representations, such as the body schema. We investigated whether performance on tasks commonly used to assess finger agnosia is modulated by changes in hand posture. We used the 'in between' test, in which participants estimate the number of unstimulated fingers between two touched fingers, or a localization task, in which participants judge which two fingers were stimulated. Across blocks, the fingers were placed in three levels of splay. Judged finger numerosity was analysed, in Exp. 1 by direct report and in Exp. 2 as the actual number of fingers between the fingers named. In both experiments, judgments were greater when non-adjacent stimulated fingers were positioned far apart compared to when they were close together or touching, whereas judgements were unaltered when adjacent fingers were stimulated. This demonstrates that BSRs are not fixed, but are modulated by the real-time physical distances between body parts.