
    The latency for correcting a movement depends on the visual attribute that defines the target

    Neurons in different cortical visual areas respond to different visual attributes with different latencies. How does this affect the on-line control of our actions? We studied hand movements directed toward targets that could be distinguished from other objects by luminance, size, orientation, color, shape or texture. In some trials, the target changed places with one of the other objects at the onset of the hand’s movement. We determined the latency for correcting the movement of the hand in the direction of the new target location. We show that subjects can correct their movements at short latency for all attributes, but that responses to the attributes color, shape and texture (which are relevant for recognizing the object) are 50 ms slower than responses to luminance, orientation and size. This dichotomy corresponds both to the distinction between magnocellular and parvocellular pathways and to a dorsal–ventral distinction. The latency also differed systematically between subjects, independent of their reaction time.
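
    The abstract does not spell out the analysis; as a rough illustration, the sketch below estimates a correction latency the way such latencies are often estimated from trajectory data: as the first moment at which the mean lateral velocity in the perturbed trials leaves a confidence band around the unperturbed mean. The function name, sampling interval and threshold are all illustrative assumptions.

```python
import numpy as np

def correction_latency(perturbed, unperturbed, dt=0.001, n_sd=2.0):
    """Estimate the latency of an online correction (a common approach,
    not necessarily the authors' exact method).

    perturbed, unperturbed: arrays of shape (trials, samples) holding the
    lateral hand velocity, aligned to the moment the target changed place.
    Returns the first time (s) at which the mean perturbed velocity leaves
    a +/- n_sd band around the unperturbed mean, or None if it never does.
    """
    mean_u = unperturbed.mean(axis=0)
    sd_u = unperturbed.std(axis=0)
    mean_p = perturbed.mean(axis=0)
    outside = np.abs(mean_p - mean_u) > n_sd * sd_u
    return float(np.argmax(outside)) * dt if outside.any() else None
```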

    Does planning a different trajectory influence the choice of grasping points?

    We examined whether the movement path is considered when selecting the positions at which the digits will contact the object's surface (the grasping points). Subjects grasped objects of different heights but with the same radius at various locations on a table. At some locations, one digit crossed to the side of the object opposite to where it started, moving over a short object but curving around a tall one. This resulted in very different paths for the different objects. Importantly, the selection of grasping points was unaffected. That subjects do not appear to consider the path when selecting grasping points suggests that the grasping points are selected before the movements towards those points are planned.

    How people achieve their amazing temporal precision in interception

    People can hit rapidly moving balls with amazing precision. To determine how they manage to do so, we explored how various factors that we could manipulate influenced people's precision when intercepting virtual targets. We found that temporal precision was highest for fast targets that subjects were free to intercept wherever they wished. Temporal precision was much poorer when the point of interception was specified in advance. Examining responses to abrupt perturbations of the target's motion revealed that, given the choice, people adjusted where rather than when they would hit the target. A model that combines judging how long it will take to reach the target's path with estimating the target's position at that time from its visually perceived position and velocity could account for the observed precision with reasonable values for all the parameters. The model considers all relevant sources of error, together with the delays with which the various aspects can be adjusted. Our analysis provides a biologically plausible explanation for how light falling on the eye can guide the hand to intercept a moving ball with such high precision.
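
    A Monte-Carlo sketch of a model of this general form is given below; the parameter names and values are illustrative stand-ins, not the authors' fitted ones. The hand aims at the target's perceived position plus its perceived velocity times the estimated time left, and errors in each of these, together with timing variability, set the temporal precision.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_temporal_errors(n=10000, v=0.5, t_remaining=0.3,
                             sd_pos=0.005, sd_vel_frac=0.1, sd_time=0.02):
    """Sketch of an interception model of the kind described above.

    The hand aims at: perceived position + perceived velocity * estimated
    time left.  Errors in perceived position (m), perceived velocity
    (fraction of speed v, m/s) and movement timing (s) all contribute to
    the spatial miss, which is converted into a timing error for a target
    moving at speed v.
    """
    pos_err = rng.normal(0.0, sd_pos, n)
    vel_err = rng.normal(0.0, sd_vel_frac * v, n)
    time_err = rng.normal(0.0, sd_time, n)
    spatial_miss = pos_err + vel_err * t_remaining + v * time_err
    return spatial_miss / v

print(f"temporal SD: {simulate_temporal_errors().std() * 1000:.1f} ms")
```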

    The tangent space of a bundle

    In dynamic environments, it is crucial to accurately consider the timing of information. For instance, during saccades the eyes rotate so fast that even small temporal errors in relating retinal stimulation by flashed stimuli to extra-retinal information about the eyes' orientations will give rise to substantial errors in where the stimuli are judged to be. If spatial localization involves judging the eyes' orientations at the estimated time of the flash, we should be able to manipulate the pattern of mislocalization by altering the estimated time of the flash. We reasoned that if we presented a relevant flash within a short rapid sequence of irrelevant flashes, participants' estimates of when the relevant flash was presented might be shifted towards the centre of the sequence. In a first experiment, we presented five bars at different positions around the time of a saccade. Four of the bars were black; either the second or the fourth bar in the sequence was red. The task was to localize the red bar. We found that when the red bar was presented second in the sequence, it was judged to be further in the direction of the saccade than when it was presented fourth in the sequence. Could this be because the red bar was processed faster when more black bars preceded it? In a second experiment, a red bar was either presented alone or followed by two black bars. When two black bars followed it, it was judged to be further in the direction of the saccade. We conclude that the spatial localization of flashed stimuli involves judging the eyes' orientation at the estimated time of the flash.
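
    If the judged location is the retinal location plus the eyes' orientation at the estimated time of the flash, then shifting that time estimate should shift the judged location along the saccade. A minimal simulation of this account follows; the saccade profile and all numbers are illustrative assumptions.

```python
import numpy as np

def eye_position(t, amplitude=10.0, duration=0.05):
    """Idealized eye orientation (deg) during a saccade starting at t = 0:
    a smooth ramp from 0 to amplitude (a stand-in for a real eye trace)."""
    s = np.clip(t / duration, 0.0, 1.0)
    return amplitude * (3 * s**2 - 2 * s**3)

def mislocalization(t_flash, t_estimated):
    """Judged minus physical position of a flash, assuming localization
    adds the eye's orientation at the *estimated* flash time to the
    retinal location of the flash."""
    retinal_shift = -eye_position(t_flash)   # where the flash lands on the retina
    return retinal_shift + eye_position(t_estimated)

# A flash at saccade onset whose time is estimated 10 ms too late is
# mislocalized in the direction of the saccade:
print(mislocalization(t_flash=0.0, t_estimated=0.010))   # > 0 deg
```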

    The effect of variability in other objects' sizes on the extent to which people rely on retinal image size as a cue for judging distance

    Retinal image size can be used to judge objects' distances because for any object one can assume that some sizes are more likely than others. It has been shown that an increased variability in the size of otherwise identical target objects over trials reduces the weight given to retinal image size as a distance cue. Here, we examined whether an increased variability in the size of objects of a different color, orientation, or shape reduces the weight given to retinal image size when judging distance. Subjects had to indicate the 3D position of a simulated target object. Retinal image size was given significantly less weight as a cue for judging the target cube's distance when differently colored and differently oriented objects appeared in many simulated sizes, but not when differently shaped objects did. We also examined whether increasing the variability in the size of cubes in the surroundings reduces the weight given to retinal image size when judging distance. It does not. We conclude that variability in surrounding or dissimilar objects' sizes has a negligible influence on the extent to which people rely on retinal image size as a cue for judging distance.
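
    A standard way to model this (the abstract does not commit to a particular scheme) is inverse-variance cue weighting: the distance implied by the retinal image size, given an assumed physical size, is averaged with other distance cues in proportion to each cue's reliability. Increasing the variability of plausible sizes makes the size cue noisier, lowering its weight. A sketch with illustrative numbers:

```python
import math

def distance_from_retinal_size(assumed_size, retinal_angle_deg):
    """Distance implied by an object's retinal image size, given an
    assumed physical size (same length unit as the return value)."""
    return assumed_size / (2 * math.tan(math.radians(retinal_angle_deg) / 2))

def combined_distance(d_size, sigma_size, d_other, sigma_other):
    """Reliability-weighted cue combination: each cue is weighted by its
    inverse variance.  More variability in plausible object sizes makes
    the size cue less reliable (larger sigma_size), lowering its weight."""
    w = (1 / sigma_size**2) / (1 / sigma_size**2 + 1 / sigma_other**2)
    return w * d_size + (1 - w) * d_other, w

print(combined_distance(0.60, 0.05, 0.70, 0.05)[1])  # equal reliability: w = 0.5
print(combined_distance(0.60, 0.15, 0.70, 0.05)[1])  # noisier size cue: w = 0.1
```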

    Online updating of obstacle positions when intercepting a virtual target

    People rely on sensory information in the environment to guide their actions. Ongoing goal-directed arm movements are constantly adjusted to the latest estimates of the target's and the hand's positions. Does the continuous guidance of ongoing arm movements also consider the latest visual information about the positions of obstacles in the surroundings? To find out, we asked participants to slide their finger across a screen to intercept a laterally moving virtual target while passing through a gap between two virtual circular obstacles. At a fixed time during each trial, the target suddenly jumped slightly sideways while continuing to move. In half the trials, the size of the gap changed at the same moment that the target jumped. As expected, participants adjusted their movements in response to the target jump. Importantly, the magnitude of this response depended on the new size of the gap. If participants were told that the circles were irrelevant, changing the gap between them had no effect on the responses. This shows that obstacles' instantaneous positions can be considered when visually guiding goal-directed movements.

    Spatial contextual cues that help predict how a target will accelerate can be used to guide interception

    Objects in one's environment do not always move at a constant velocity but often accelerate or decelerate. People are very poor at visually judging acceleration and normally make systematic errors when trying to intercept accelerating objects. If the acceleration is perpendicular to the direction of motion, it gives rise to a curved path. Can spatial contextual cues help one predict such accelerations and thereby help interception? To answer this question, we asked participants to hit a target that moved as if it were attached to a rolling disk, just as a valve (the target) on a bicycle wheel (the disk) moves when cycling: constantly accelerating toward the wheel's center. On half the trials, the disk was visible, so that participants could use the spatial relations between the target and the rolling disk to guide their interception. On the other half, the disk was not visible, so participants had no help in predicting the target's complicated pattern of accelerations and decelerations. Importantly, the target's path was the same in both cases. Participants hit more targets when the disk was visible than when it was invisible, even when they used a strategy that can compensate for neglecting acceleration. We conclude that spatial contextual cues that help predict a target's accelerations can help one intercept it.
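
    The path of such a target is a cycloid, and its acceleration always points from the target toward the wheel's center with constant magnitude v²/r. A sketch of the geometry, with illustrative values for the disk's radius and rolling speed (the study's actual geometry is not given in the abstract):

```python
import numpy as np

def valve_path(t, r=0.1, v=0.5):
    """Position (m) of a point on the rim of a disk of radius r (m)
    rolling at speed v (m/s): a cycloid."""
    phi = v * t / r                      # rotation angle of the wheel (rad)
    x = v * t - r * np.sin(phi)
    y = r - r * np.cos(phi)
    return x, y

def valve_acceleration(t, r=0.1, v=0.5):
    """Acceleration of the rim point: constant magnitude v**2 / r,
    always directed from the point toward the wheel's centre."""
    phi = v * t / r
    return (v**2 / r) * np.sin(phi), (v**2 / r) * np.cos(phi)
```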

    Similarities between digits’ movements in grasping, touching and pushing

    In order to find out whether the movements of single digits are controlled in a special way when grasping, we compared the movements of the digits when grasping an object with their movements in comparable single-digit tasks: pushing or lightly tapping the same object at the same place. The movements of the digits in grasping were very similar to the movements in the single-digit tasks. To determine to what extent the hand transport and grip formation in grasping emerge from a synchronised motion of individual digits, we combined movements of finger and thumb in the single-digit tasks to obtain hypothetical transport and grip components. We found a larger peak grip aperture earlier in the movement for the single-digit tasks. The timing of peak grip aperture depended in the same way on its size for all tasks. Furthermore, the deviations from a straight line of the transport component differed considerably between subjects, but were remarkably similar across tasks. These results support the idea that grasping should be regarded as consisting of moving the digits, rather than transporting the hand and shaping the grip.
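
    The transport and grip components referred to above are conventionally defined as the midpoint of the two digits and the distance between them; applying the same computation to digit paths recorded in separate single-digit trials yields the hypothetical components. A minimal sketch, assuming recorded 3D digit positions:

```python
import numpy as np

def transport_and_grip(thumb, finger):
    """Decompose a grasp into transport and grip components.

    thumb, finger: arrays of shape (samples, 3) with the 3D positions of
    the two digits over time.  The transport component is the midpoint of
    the digits; the grip aperture is the distance between them.  Feeding
    in digit paths from separate single-digit trials gives the
    'hypothetical' components described above.
    """
    transport = (thumb + finger) / 2.0
    aperture = np.linalg.norm(finger - thumb, axis=1)
    return transport, aperture
```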

    Grasping trapezoidal objects

    When grasping rectangular or circular objects with a precision grip, the digits close in on the object from opposite directions. In doing so, the digits approach opposite sides of the object moving perpendicularly to the local surface orientation. This perpendicular approach is advantageous for placing the digits accurately. Trapezoidal objects have non-parallel surfaces, so moving the digits in opposite directions would make them approach the contact surfaces at angles other than 90°. In this study, we examined whether this happens, or whether subjects tend to approach trapezoidal objects’ surfaces perpendicularly. We used objects of different sizes and with different surface slants. Subjects tended to approach the objects’ surfaces orthogonally, suggesting that they aim for optimal precision of digit placement rather than simply closing their hand as it reaches the object.
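
    As an illustration of the measure implied here, the angle between a digit's approach direction and the local surface normal can be computed from the digit's velocity at contact, with 0° corresponding to a perfectly perpendicular approach. The exact definition used in the study is not given in the abstract; this is a sketch.

```python
import numpy as np

def approach_angle(velocity, surface_normal):
    """Angle (deg) between a digit's approach direction and the inward
    surface normal at its contact point.  velocity: digit velocity at
    contact; surface_normal: outward normal of the contacted surface."""
    v = velocity / np.linalg.norm(velocity)
    n = surface_normal / np.linalg.norm(surface_normal)
    return np.degrees(np.arccos(np.clip(np.dot(v, -n), -1.0, 1.0)))

# A digit moving straight down onto a surface slanted by 15 degrees:
print(approach_angle(np.array([0.0, 0.0, -1.0]),
                     np.array([np.sin(np.radians(15)), 0.0,
                               np.cos(np.radians(15))])))   # -> 15.0
```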

    Eye–hand coupling is not the cause of manual return movements when searching

    When searching for a target with eye movements, saccades are planned and initiated while the visual information is still being processed, so that subjects often make saccades away from the target and then have to make an additional return saccade. Presumably, the cost of the additional saccades is outweighed by the advantage of short fixations. We previously showed that when the cost of passing the target was increased, by having subjects manually move a window through which they could see the visual scene, subjects still passed the target and made return movements (with their hand). When moving a window in this manner, the eyes and hand follow the same path. To find out whether the hand still passes the target and then returns when eye and hand movements are uncoupled, we compared moving a window across a scene with moving a scene behind a stationary window. We ensured that the required movement of the hand was identical in both conditions. Subjects found the target faster when moving the window across the scene than when moving the scene behind the window, but at the expense of making larger return movements. The relationship between return movements and movement speed across the two conditions was the same as the relationship across different window sizes. We conclude that the hand passing the target and then returning is not directly related to the eyes doing so, but rather that moving on before the information has been fully processed is a general principle of visuomotor control.