
    Using curvature information in haptic shape perception of 3D objects

    Are humans able to perceive the circularity of a cylinder that is grasped by the hand? This study presents the findings of an experiment in which cylinders with a circular cross-section had to be distinguished from cylinders with an elliptical cross-section. For comparison, the ability to distinguish a square cuboid from a rectangular cuboid was also investigated. Both elliptical and rectangular shapes can be characterized by their aspect ratio, but elliptical shapes also contain curvature information. We found that an elliptical shape with an aspect ratio of only 1.03 could be distinguished from a circular shape in both static and dynamic touch. For a rectangular shape, however, the aspect ratio needed to be about 1.11 for dynamic touch and 1.15 for static touch to be discernible from a square shape. We conclude that curvature information can be employed reliably and efficiently in the haptic perception of 3D shape.

    What is ‘anti’ about anti-reaches? Reference frames selectively affect reaction times and endpoint variability

    Reach movement planning involves the representation of spatial target information in different reference frames. Neurons at parietal and premotor stages of the cortical sensorimotor system represent target information in eye- or hand-centered reference frames, respectively. How these different neuronal representations affect behavioral parameters of motor planning and control, i.e. which stage of neural representation is relevant for which aspect of behavior, is not obvious from the physiology. Here, we tested in a behavioral experiment whether different kinematic movement parameters are affected to different degrees by an eye- or hand-reference frame. We used a generalized anti-reach task to test the influence of stimulus-response compatibility (SRC) in eye- and hand-reference frames on reach reaction times, movement times, and endpoint variability. Whereas in a standard anti-reach task the SRC is identical in the eye- and hand-reference frames, our task separated the SRC for the two reference frames. We found that reaction times were influenced by the SRC in both the eye- and hand-reference frames. In contrast, movement times were influenced only by the SRC in the hand-reference frame, and endpoint variability only by the SRC in the eye-reference frame. Since movement times and endpoint variability result from both planning and control processes, whereas reaction times reflect only the planning process, we suggest that SRC effects on reaction times are well suited to investigating the reference frames of movement planning, and that eye- and hand-reference frames have distinct effects on different phases of motor action and on different kinematic movement parameters.

    Mapping Proprioception across a 2D Horizontal Workspace

    Relatively few studies have been reported that document how proprioception varies across the workspace of the human arm. Here we examined proprioceptive function across a horizontal planar workspace, using a new method that avoids active movement and interactions with other sensory modalities. We systematically mapped both proprioceptive acuity (sensitivity to hand position change) and bias (perceived location of the hand) across a horizontal-plane 2D workspace. Proprioception of both the left and right arms was tested at nine workspace locations and in two orthogonal directions (left-right and forward-backward). Subjects made repeated judgments about the position of their hand with respect to a remembered proprioceptive reference position, while grasping the handle of a robotic linkage that passively moved their hand to each judgment location. To rule out the possibility that the memory component of the proprioceptive testing procedure may have influenced our results, we repeated the procedure in a second experiment using a persistent visual reference position. Both methods produced qualitatively similar findings. Proprioception is not uniform across the workspace. Acuity was greater for limb configurations in which the hand was closer to the body, and was greater in the forward-backward direction than in the left-right direction. A robust difference in proprioceptive bias was observed across both experiments. At all workspace locations, the left hand was perceived to be to the left of its actual position, and the right hand was perceived to be to the right of its actual position. Finally, bias was smaller for hand positions closer to the body. The results of this study provide a systematic map of proprioceptive acuity and bias across the workspace of the limb that may be used to augment computational models of sensory-motor control, and to inform clinical assessment of sensory function in patients with sensory-motor deficits.

    Fix Your Eyes in the Space You Could Reach: Neurons in the Macaque Medial Parietal Cortex Prefer Gaze Positions in Peripersonal Space

    Interacting in the peripersonal space requires coordinated arm and eye movements to visual targets in depth. In primates, the medial posterior parietal cortex (PPC) represents a crucial node in the process of visual-to-motor signal transformations. The medial PPC area V6A is a key region engaged in the control of these processes because it jointly processes visual information, eye position, and arm movement related signals. However, to date, there is no evidence in the medial PPC of spatial encoding in three dimensions. Here, using single neuron recordings in behaving macaques, we studied the neural signals related to binocular eye position in a task that required the monkeys to perform saccades and fixate targets at different locations in peripersonal and extrapersonal space. A significant proportion of neurons were modulated by both gaze direction and depth, i.e., by the location of the foveated target in 3D space. The population activity of these neurons displayed a strong preference for peripersonal space, both in a time interval around the saccade preceding fixation and during fixation itself. This preference for targets within reaching distance during both target capture and fixation suggests that binocular eye position signals are used functionally in V6A to support its role in reaching and grasping.

    Predictive mechanisms in the control of contour following

    In haptic exploration, when running a fingertip along a surface, the control system may attempt to anticipate upcoming changes in curvature in order to maintain a consistent level of contact force. Such predictive mechanisms are well known in the visual system, but have yet to be studied in the somatosensory system. The present experiment was therefore designed to reveal human capabilities for different types of haptic prediction. A robot arm with a large 3D workspace was attached to the index fingertip and was programmed to produce virtual surfaces with curvatures that varied within and across trials. With eyes closed, subjects moved the fingertip around elliptical hoops with flattened regions or Limaçon shapes, in which the curvature varied continuously. Subjects anticipated the corner of the flattened region rather poorly, but for the Limaçon shapes they varied finger speed with upcoming curvature according to the two-thirds power law. Furthermore, although the Limaçon shapes were presented at random 3D orientations, modulation of contact force also indicated good anticipation of upcoming changes in curvature. The results demonstrate that it is difficult to haptically anticipate the spatial location of an abrupt change in curvature, whereas smooth changes in curvature can be anticipated predictively.
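    The two-thirds power law cited above relates movement speed to path curvature: tangential speed scales with curvature to the −1/3 power (equivalently, angular speed scales with curvature to the 2/3 power). A minimal sketch of this relation follows; the gain constant `K` is an illustrative assumption, not a value from the study:

```python
import numpy as np

def two_thirds_speed(curvature, K=1.0):
    # Two-thirds power law: tangential speed v = K * curvature^(-1/3),
    # equivalent to angular speed A = K * curvature^(2/3).
    # K is a hypothetical gain constant, not a value from the study.
    return K * np.power(curvature, -1.0 / 3.0)

# Predicted speed drops where the path curves more sharply:
speeds = two_thirds_speed(np.array([0.5, 1.0, 2.0]))
print(speeds[0] > speeds[1] > speeds[2])  # True: higher curvature, lower speed
```

    In the Limaçon condition, finger speed modulating with upcoming curvature in this way is what indicates that subjects predicted the curvature profile ahead of the fingertip.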

    Manual tracking in three dimensions.

    Little is known about the manual tracking of targets that move in three dimensions. In the present study, human subjects followed, with the tip of a hand-held pen, a virtual target moving four times (period 5 s) around a novel, unseen path. Two basic types of target path were used: a peanut-shaped Cassini ellipse and a quasi-spherical shape in which four connected semicircles lay in orthogonal planes. The quasi-spherical shape was presented in three different sizes, and the Cassini shape was varied in spatial orientation and by folding it along one of three bend axes. During the first cycle of the Cassini shapes, the hand lagged behind the target by about 150 ms on average, which decreased to 100 ms during the last three cycles. Tracking performance gradually improved during the first 3 s of the first cycle and then stabilized. Tracking was especially good during the smooth, planar sections of the shapes, and the time lag was significantly shorter for a low-frequency target component than for a higher-frequency one (−88 ms at 0.2 Hz vs. −101 ms at 0.6 Hz). Even after the appropriate adjustment of the virtual target path to a virtual shape-tracing condition, tracking in depth was poor compared with tracking in the frontal plane, resulting in a flattening of the hand path. In contrast to previous studies in which target trajectories were linear or sinusoidal, these complex trajectories may have involved estimation of the overall shape, as well as prediction of target velocity.

    Intrinsic joint kinematic planning. II: Hand-path predictions based on a Listing's plane constraint.

    This study examined the assumption that three-dimensional (3D) hand movements follow specific paths dictated by the operation of a Listing's law constraint at the intrinsic joint level of the arm. A kinematic model was used to simulate hand paths during 3D point-to-point movements. The model was based on the assumption that the shoulder obeys a 2D Listing's constraint and that rotations are about fixed single axes. Elbow rotations were assumed to relate linearly to those of the shoulder. Both joints were assumed to rotate without reversals, and to start and end rotating simultaneously with zero initial and final velocities. Model predictions were compared with experimental observations of four right-handed individuals who moved toward virtual objects in "extended arm", "radial", and "frontal plane" movement types. The results showed that the model was partially successful in accounting for the observed behavior. The best hand-path predictions were obtained for extended arm movements, followed by radial ones. Frontal plane movements produced the largest discrepancies between the predicted and observed paths. During such movements, the upper arm rotation vectors did not obey Listing's law, which may explain the observed discrepancies. For the other movement types, small deviations from the predicted paths were observed, which can be explained by the fact that single-axis rotations were not followed even though the rotation vectors remained within Listing's plane. Dynamic factors associated with movement execution, which were not taken into account in our purely kinematic approach, could also explain some of these small discrepancies.
    In conclusion, a kinematic model based on Listing's law can describe an intrinsic joint strategy for the control of arm orientation during pointing and reaching movements, but only in conditions in which the movements closely obey the Listing's plane assumption.
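    Listing's law constrains rotation vectors (here, of the eye or upper arm) to lie in a single fixed plane, i.e. to have zero component along that plane's normal (the torsional axis). A minimal numerical sketch of this constraint check follows; the choice of normal axis and the tolerance are illustrative assumptions, not parameters from the study:

```python
import numpy as np

def in_listings_plane(rotation_vectors, normal=(1.0, 0.0, 0.0), tol=1e-6):
    # Listing's law: rotation vectors lie in a fixed plane, so their
    # component along the plane's normal (here assumed to be the
    # torsional x-axis) is zero. Axis choice and tolerance are illustrative.
    r = np.atleast_2d(rotation_vectors)
    return bool(np.all(np.abs(r @ np.asarray(normal)) < tol))

# Vectors with no torsional (x) component satisfy the constraint:
print(in_listings_plane([[0.0, 0.1, -0.2], [0.0, 0.3, 0.05]]))  # True
# A vector with a torsional component violates it:
print(in_listings_plane([[0.2, 0.1, 0.0]]))  # False
```

    A check of this kind is what distinguishes the frontal-plane movements above, where the upper arm rotation vectors left Listing's plane, from the other movement types, where they remained within it.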

    Optic ataxia errors depend on remapped, not viewed, target location

    Optic ataxia is a disorder associated with posterior parietal lobe lesions, in which visually guided reaching errors typically occur for peripheral targets. It has been assumed that these errors are related to a faulty sensorimotor transformation of inputs from the 'ataxic visual field'. However, we show here that the errors observed in the contralesional field in optic ataxia depend on a dynamic, gaze-centered internal representation of reach space.