
    Accurate Visuomotor Control below the Perceptual Threshold of Size Discrimination

    Background: Human resolution for object size is typically determined by psychophysical methods that are based on conscious perception. In contrast, grasping of the same objects might be less conscious. It has been suggested that grasping is mediated by mechanisms other than those mediating conscious perception. In this study, we compared the visual resolution for object size of the visuomotor and the perceptual systems. Methodology/Principal Findings: In Experiment 1, participants discriminated the size of pairs of objects once through perceptual judgments and once by grasping movements toward the objects. Notably, the actual size differences were set below the Just Noticeable Difference (JND). We found that grasping trajectories reflected the actual size differences between the objects regardless of the JND. This pattern was observed even in trials in which the perceptual judgments were erroneous. The results of an additional control experiment showed that these findings were not confounded by task demands. Participants were not aware, therefore, that their size discrimination via grasp was veridical. Conclusions/Significance: We conclude that human resolution for object size is not fully tapped by perceptually determined thresholds.
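
    As a rough illustration of how a size-discrimination JND of this kind is commonly estimated (the procedure, stimulus values, and function names below are assumptions for illustration, not taken from the study): fit a cumulative-Gaussian psychometric function to the proportion of "comparison is larger" responses and read off the difference corresponding to 75% discrimination.

        # Hedged sketch: estimating a size-discrimination JND from a psychometric
        # function. All numbers and names are illustrative, not from the paper.
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import curve_fit

        def cum_gauss(x, mu, sigma):
            """Cumulative Gaussian: probability of judging the comparison as larger."""
            return norm.cdf(x, loc=mu, scale=sigma)

        # Hypothetical data: size difference of comparison vs. standard (mm)
        # and proportion of "comparison is larger" responses.
        size_diff = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
        p_larger  = np.array([0.05, 0.15, 0.35, 0.50, 0.68, 0.86, 0.96])

        (mu, sigma), _ = curve_fit(cum_gauss, size_diff, p_larger, p0=[0.0, 1.0])

        # One common convention: JND = distance between the 50% and 75% points,
        # i.e. about 0.6745 * sigma for a cumulative Gaussian.
        jnd = 0.6745 * sigma
        print(f"Estimated JND ~ {jnd:.2f} mm; smaller differences would be 'sub-threshold'")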

    When Ears Drive Hands: The Influence of Contact Sound on Reaching to Grasp

    Background: Most research on the roles of auditory information and its interaction with vision has focused on perceptual performance. Little is known about the effects of sound cues on visually-guided hand movements. Methodology/Principal Findings: We recorded the sound produced by the fingers upon contact as participants grasped stimulus objects that were covered with different materials. Then, in a further session, the pre-recorded contact sounds were delivered to participants via headphones before or following the initiation of reach-to-grasp movements towards the stimulus objects. Reach-to-grasp movement kinematics were measured under the following conditions: (i) congruent, in which the presented contact sound and the contact sound elicited by the to-be-grasped stimulus corresponded; (ii) incongruent, in which the presented contact sound differed from that generated by the stimulus upon contact; (iii) control, in which a synthetic sound, not associated with a real event, was presented. Facilitation effects were found for congruent trials; interference effects were found for incongruent trials. In a second experiment, the upper and lower parts of the stimulus were covered with different materials. The presented sound was always congruent with the material covering either the upper or the lower half of the stimulus. Participants consistently placed their fingers on the half of the stimulus that corresponded to the presented contact sound. Conclusions/Significance: Altogether, these findings offer a substantial contribution to the current debate about the type of object representations elicited by auditory stimuli and about the multisensory nature of the sensorimotor transformations underlying action.

    The Impact of Augmented Information on Visuo-Motor Adaptation in Younger and Older Adults

    BACKGROUND: Adjustment to a visuo-motor rotation is known to be affected by ageing. According to previous studies, the age-related differences primarily pertain to the use of strategic corrections and the generation of explicit knowledge on which strategic corrections are based, whereas the acquisition of an (implicit) internal model of the novel visuo-motor transformation is unaffected. The present study aimed to assess the impact of augmented information on the age-related variation of visuo-motor adjustments. METHODOLOGY/PRINCIPAL FINDINGS: Participants performed aiming movements controlling a cursor on a computer screen. Visual feedback of the direction of cursor motion was rotated 75 degrees relative to the direction of hand motion. Participants had to adjust to this rotation in the presence and absence of an additional hand-movement target that explicitly depicted the input-output relations of the visuo-motor transformation. An extensive set of tests was employed in order to disentangle the contributions of different processes to visuo-motor adjustment. Results show that the augmented information failed to affect the age-related variations of explicit knowledge, adaptive shifts, and aftereffects in a substantial way, whereas it clearly affected initial direction errors during practice and proprioceptive realignment. CONCLUSIONS: Contrary to expectations, older participants apparently made no use of the augmented information, whereas younger participants used the additional movement target to reduce initial direction errors early during practice. However, after a first block of trials, errors increased, indicating a neglect of the augmented information, and only slowly declined thereafter. A hypothetical dual-task account of these findings is discussed. The use of the augmented information also led to a selective impairment of proprioceptive realignment in the younger group. The mere finding of proprioceptive realignment in adaptation to a visuo-motor rotation in a computer-controlled setup is noteworthy since visual and proprioceptive information pertain to different objects.
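
    To make the 75-degree rotation concrete, here is a minimal sketch (all values and the sign of the rotation are assumptions for illustration) of how cursor motion relates to hand motion under such a transformation, and the compensatory hand direction that adaptation must converge on:

        # Minimal sketch of a visuo-motor rotation; values are illustrative only,
        # and the sign of the rotation is arbitrary here.
        import numpy as np

        ROTATION_DEG = 75.0  # cursor direction rotated 75 degrees relative to hand motion

        def rotate(vec, deg):
            """Rotate a 2D vector counter-clockwise by deg degrees."""
            a = np.radians(deg)
            r = np.array([[np.cos(a), -np.sin(a)],
                          [np.sin(a),  np.cos(a)]])
            return r @ vec

        hand_motion = np.array([1.0, 0.0])              # hand moves straight ahead
        cursor_motion = rotate(hand_motion, ROTATION_DEG)

        # To move the cursor straight toward the target, the adapted hand movement
        # must be rotated by -75 degrees (the inverse transformation).
        compensatory_hand = rotate(np.array([1.0, 0.0]), -ROTATION_DEG)
        print(cursor_motion, compensatory_hand)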

    Practicing a Musical Instrument in Childhood is Associated with Enhanced Verbal Ability and Nonverbal Reasoning

    Background: In this study we investigated the association between instrumental music training in childhood and outcomes closely related to music training as well as those more distantly related. Methodology/Principal Findings: Children who received at least three years (M = 4.6 years) of instrumental music training outperformed their control counterparts on two outcomes closely related to music (auditory discrimination abilities and fine motor skills) and on two outcomes distantly related to music (vocabulary and nonverbal reasoning skills). Duration of training also predicted these outcomes. Contrary to previous research, instrumental music training was not associated with heightened spatial skills, phonemic awareness, or mathematical abilities. Conclusions/Significance: While these results are correlational only, the strong predictive effect of training duration suggests that instrumental music training may enhance auditory discrimination, fine motor skills, vocabulary, and nonverbal reasoning skills.

    The effects of visual control and distance in modulating peripersonal spatial representation

    In the presence of vision, goal-directed motor acts can trigger spatial remapping, i.e., reference-frame transformations that allow for better interaction with targets. However, it is still unclear how peripersonal space is encoded and remapped depending on the availability of visual feedback and on the target position within the individual’s reachable space, and which cerebral areas subserve such processes. Here, functional magnetic resonance imaging (fMRI) was used to examine neural activity while healthy young participants performed reach-to-grasp movements with and without visual feedback and at different distances of the target from the effector (near the hand, about 15 cm from the starting position, vs. far from the hand, about 30 cm from the starting position). Brain response in the superior parietal lobule bilaterally, in the right dorsal premotor cortex, and in the anterior part of the right inferior parietal lobule was significantly greater during visually-guided grasping of targets located at the far distance compared to grasping of targets located near the hand. In the absence of visual feedback, the inferior parietal lobule exhibited greater activity during grasping of targets at the near compared to the far distance. Results suggest that, in the presence of visual feedback, a visuo-motor circuit integrates visuo-motor information when targets are located farther away. Conversely, in the absence of visual feedback, encoding of space may demand multisensory remapping processes, even in the case of more proximal targets.

    Grasping isoluminant stimuli

    We used a virtual reality setup to let participants grasp discs, which differed in luminance, chromaticity and size. Current theories on perception and action propose a division of labor in the brain into a color-proficient perception pathway and a less color-capable action pathway. In this study, we addressed the question of whether isoluminant stimuli, which provide only a chromatic but no luminance contrast for action planning, are harder to grasp than stimuli providing luminance contrast or both kinds of contrast. Although we found that grasps of isoluminant stimuli had a slightly steeper slope relating the maximum grip aperture to disc size, all other measures of grip quality were unaffected. Overall, our results do not support the view that isoluminance of stimulus and background impedes the planning of a grasping movement.
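
    The "slope" finding refers to a linear fit of maximum grip aperture (MGA) against disc size; a minimal sketch of that computation follows (the disc sizes, aperture values, and condition names are invented for illustration, not data from the study):

        # Hedged sketch: slope of maximum grip aperture (MGA) vs. object size.
        # The numbers are invented; only the computation is illustrated.
        import numpy as np

        disc_size_mm = np.array([30.0, 40.0, 50.0, 60.0])

        # Hypothetical mean MGA per disc size for two stimulus conditions.
        mga_luminance_mm   = np.array([55.0, 63.0, 71.0, 79.0])
        mga_isoluminant_mm = np.array([54.0, 63.5, 73.0, 82.5])

        slope_lum, _ = np.polyfit(disc_size_mm, mga_luminance_mm, 1)
        slope_iso, _ = np.polyfit(disc_size_mm, mga_isoluminant_mm, 1)

        # A "slightly steeper slope" for isoluminant discs would show up as
        # slope_iso > slope_lum, even if other kinematic measures are unchanged.
        print(f"luminance slope: {slope_lum:.2f}, isoluminant slope: {slope_iso:.2f}")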

    Quantifying kinematics of purposeful movements to real, imagined, or absent functional objects: Implications for modelling trajectories for robot-assisted ADL tasks

    BACKGROUND: Robotic therapy is at the forefront of stroke rehabilitation. The Activities of Daily Living Exercise Robot (ADLER) was developed to improve carryover of gains after training by combining the benefits of Activities of Daily Living (ADL) training (motivation and functional task practice with real objects) with the benefits of robot-mediated therapy (repeatability and reliability). In combining these two therapy techniques, we seek to develop a new model for trajectory generation that will support functional movements to real objects during robot training. We studied natural movements to real objects and report on how initial reaching movements are affected by real objects and how these movements deviate from the straight-line paths predicted by the minimum jerk model, typically used to generate trajectories in robot training environments. We highlight key issues that need to be considered in modelling natural trajectories. METHODS: Movement data were collected as eight normal subjects completed ADLs such as drinking and eating. Three conditions were considered: object absent, imagined, and present. These data were compared to predicted trajectories generated from implementing the minimum jerk model. The deviations in both the plane of the table (XY) and the sagittal plane of the torso (XZ) were examined for both reaches to a cup and to a spoon. Velocity profiles and curvature were also quantified for all trajectories. RESULTS: We hypothesized that movements performed with functional task constraints and objects would deviate from the minimum jerk trajectory model more than those performed under imaginary or object-absent conditions. Trajectory deviations from the predicted minimum jerk model for these reaches were shown to depend on three variables: object presence, object orientation, and plane of movement. When subjects completed the cup reach, their movements were more curved than for the spoon reach. The object-present condition for the cup reach showed more curvature than the object-imagined and object-absent conditions. Curvature in the XZ plane of movement was greater than curvature in the XY plane for all movements. CONCLUSION: The implemented minimum jerk trajectory model was not adequate for generating functional trajectories for these ADLs. The deviations caused by object affordance and functional task constraints must be accounted for in order to allow subjects to perform functional task training in robotic therapy environments. The major differences that we have highlighted include trajectory dependence on object presence, object orientation, and the plane of movement. With the ability to practice ADLs in the ADLER environment, we hope to provide patients with a therapy paradigm that will produce optimal results and recovery.
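
    For reference, the minimum jerk model mentioned above predicts a straight-line path with a stereotyped, bell-shaped speed profile; a minimal sketch of how such a reference trajectory is typically generated is shown below (start point, goal point, and duration are placeholder values, not taken from the study):

        # Minimum-jerk point-to-point trajectory (Flash & Hogan style polynomial).
        # Start, goal and duration are placeholder values for illustration.
        import numpy as np

        def min_jerk(start, goal, duration, n_samples=100):
            """Straight-line minimum jerk trajectory between two 3D points."""
            start, goal = np.asarray(start, float), np.asarray(goal, float)
            t = np.linspace(0.0, duration, n_samples)
            tau = t / duration
            s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5   # smooth 0 -> 1 time scaling
            return t, start + np.outer(s, goal - start)

        # Example: a reach from the table edge to a cup position (metres).
        t, traj = min_jerk(start=[0.0, 0.0, 0.0], goal=[0.3, 0.2, 0.1], duration=1.0)
        speed = np.linalg.norm(np.gradient(traj, t, axis=0), axis=1)  # bell-shaped profile

    Deviations from such a reference (e.g. the extra curvature reported for the cup reach) can then be quantified as the distance between the measured path and this straight-line prediction.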

    Grasping Kinematics from the Perspective of the Individual Digits: A Modelling Study

    Grasping is a prototype of human motor coordination. Nevertheless, it is not known what determines the typical movement patterns of grasping. One way to approach this issue is by building models. We developed a model based on the movements of the individual digits. In our model, the following objectives were taken into account for each digit: move smoothly to the preselected goal position on the object without hitting other surfaces, arrive at about the same time as the other digit, and never move too far from the other digit. These objectives were implemented by regarding the tips of the digits as point masses with a spring between them, each attracted to its goal position and repelled from objects' surfaces. Their movements were damped. Using a single set of parameters, our model can reproduce a wider variety of experimental findings than any previous model of grasping. Apart from reproducing known effects (even the angles at which digits approach trapezoidal objects' surfaces, which no other model can explain), our model predicted that the increase in maximum grip aperture with object size should be greater for blocks than for cylinders. A survey of the literature shows that this is indeed how humans behave. The model can also adequately predict how single-digit pointing movements are made. This supports the idea that grasping kinematics follow from the movements of the individual digits.
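
    A schematic illustration of the kind of dynamics described above (not the authors' actual parameterisation: the spring constants, damping, masses, and geometry are invented, and the repulsion from object surfaces is omitted for brevity): each digit tip is a damped point mass pulled toward its own goal position, with an additional spring coupling the two digits.

        # Schematic sketch of digits as damped point masses with springs.
        # Parameters and geometry are invented; surface repulsion is omitted.
        import numpy as np

        dt, steps = 0.001, 1500
        k_goal, k_pair, damping, mass = 40.0, 5.0, 6.0, 0.1
        rest_len = 0.08                                   # preferred inter-digit distance (m)

        pos = np.array([[0.0, 0.01], [0.0, -0.01]])       # thumb and finger start positions
        vel = np.zeros_like(pos)
        goal = np.array([[0.30, 0.04], [0.30, -0.04]])    # grasp points on the object

        for _ in range(steps):
            # Spring pulling each digit toward its own goal position.
            f = k_goal * (goal - pos)
            # Spring between the digits keeps them from drifting too far apart.
            d = pos[0] - pos[1]
            dist = np.linalg.norm(d)
            f_pair = k_pair * (dist - rest_len) * d / dist
            f[0] -= f_pair
            f[1] += f_pair
            # Damping and explicit Euler integration.
            acc = (f - damping * vel) / mass
            vel += acc * dt
            pos += vel * dt

        print(pos)  # both tips should end up near their goal positions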