
    The Role of Visual and Proprioceptive Limb Information in Affordance Judgments and Action Capabilities

    In the mirror illusion, visual information from a mirror reflection of one hand influences the perceived location of the other hand. Holmes, Crozier, and Spence (2004) demonstrated this visual capture effect in a spatial localization task in which visual information influenced reaching movements toward a target when the seen (in the mirror) and felt (proprioceptive) positions of the hand did not match. Furthermore, past results suggest that visual information about hand position overrides proprioceptive information when the hands are used to indicate perceived object length. The conflict between visual and proprioceptive information about limb location was further examined in three experiments using a task in which participants adjusted the physical distance of their unseen hand in the horizontal and sagittal planes during affordance judgments. In each trial, participants viewed their visible hand and its reflection in a mirror while their unseen hand was placed at various locations behind the mirror. The visible hand was always positioned fifteen centimeters in front of the mirror, so the unseen hand appeared to be thirty centimeters from the visible hand regardless of its actual position. While viewing their visible hand and its reflection, participants performed simultaneous finger movements with both hands to maximize the visual capture illusion. In Experiments 1 and 2, participants then viewed a series of tubes of varying lengths, presented in ascending and descending order, and called out the point at which they would no longer be able to catch the tube given the current distance between their hands, whether felt or seen. In Experiment 3, participants viewed an object presented at different locations in the sagittal plane and repositioned their unseen hand so that it lay underneath the object. Future experiments should examine other action capabilities.
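
    The staircase procedure in Experiments 1 and 2 is a classic method of limits: tube lengths are presented in ascending and descending series, and the affordance boundary is estimated from the response transitions. A minimal sketch of that threshold estimate (NumPy assumed; the transition-midpoint rule is a common convention, not one stated in the abstract):

```python
import numpy as np

def limits_threshold(series):
    """Affordance boundary via the method of limits.

    series: list of (lengths, responses) runs, each presented in ascending
    or descending order; responses are True while the tube is judged
    catchable. The threshold for one run is taken as the midpoint of the
    first response transition (a common convention; the study's exact
    rule is not stated in the abstract).
    """
    transitions = []
    for lengths, resp in series:
        for a, b, la, lb in zip(resp, resp[1:], lengths, lengths[1:]):
            if a != b:
                transitions.append((la + lb) / 2)
                break
    return float(np.mean(transitions))

# Hypothetical ascending run: tubes get longer until judged uncatchable.
run = ([20, 30, 40, 50, 60], [True, True, True, False, False])
print(limits_threshold([run]))  # 45.0
```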

    Kinematic analysis of reaching movements of the upper limb after total or reverse shoulder arthroplasty

    Studies have analyzed the three-dimensional complex motion of the shoulder in healthy subjects and in patients undergoing total shoulder arthroplasty (TSA) or reverse shoulder arthroplasty (RSA), but no study to date has assessed reaching movements in patients with TSA or RSA. Twelve patients with TSA (Group A) and 12 with RSA (Group B) underwent kinematic analysis of reaching movements directed at four targets, and the results were compared with those of 12 healthy subjects (Group C). The assessed parameters were hand-to-target distance, target-approaching velocity, humeral-elevation angular velocity, normalized jerk (an index of motion fluidity), elbow extension, and humeral elevation angles. Mean Constant score increased by 38 points in Group A and by 47 points in Group B after surgery. In three of the tasks there were no significant differences between healthy subjects and patients in the study groups. Mean target-approaching velocity and humeral-elevation angular velocity were significantly greater in the control group than in the study groups and, overall, greater in Group A than in Group B. Movement fluidity was significantly greater in the controls, with patients in Group B showing greater fluidity than those in Group A. Reaching movements in the study groups were thus comparable, in three of the tasks, to those in the control group; however, the controls performed significantly better with regard to target-approaching velocity, humeral-elevation angular velocity, and movement fluidity, which are the most representative characteristics of reaching motion. These differences, which may be related to deterioration of shoulder proprioception after prosthetic implantation, might be reduced with appropriate rehabilitation.
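
    The study scores motion fluidity with normalized jerk, but the abstract does not give the formula. A common dimensionless formulation integrates squared jerk over the movement and scales by duration and path length; the sketch below assumes that variant (NumPy assumed):

```python
import numpy as np

def normalized_jerk(positions, dt):
    """Dimensionless jerk of a movement trajectory (lower = smoother).

    positions: (T, 3) array of hand coordinates sampled at interval dt.
    Returns sqrt(0.5 * integral(|jerk|^2) * duration^5 / length^2),
    a common normalization; the study may use a different variant.
    """
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    duration = dt * (len(positions) - 1)
    # Path length: sum of inter-sample displacements.
    length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    integral = np.trapz(np.sum(jerk ** 2, axis=1), dx=dt)
    return np.sqrt(0.5 * integral * duration ** 5 / length ** 2)
```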

    Robust visual servoing in 3D reaching tasks

    This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by considerable robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of binocular optical flow over the stereo images, is compared with the actual position of the target, and the error in the end-effector trajectory relative to the target is continuously corrected. The paper outlines the general framework of the approach, shows how the visual measures are obtained, and discusses the synthesis of the controller along with its stability analysis. Real-time experiments demonstrate the applicability of the approach in real 3-D applications.
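
    The core idea is that the trajectory error is re-measured visually at every cycle, so an accurate hand-eye calibration is unnecessary: any roughly direction-preserving mapping converges. A minimal proportional servoing loop illustrating this (the position estimates stand in for the paper's optical-flow and stereo machinery, which is not specified here):

```python
import numpy as np

def servo_step(ee_pos, target_pos, gain=0.5):
    """One control update: command a velocity along the current error.

    ee_pos, target_pos: 3-D positions estimated from binocular vision.
    Because the error is re-measured visually every cycle, a coarse
    (uncalibrated) hand-eye mapping still converges in direction.
    """
    error = target_pos - ee_pos
    return gain * error  # commanded end-effector velocity

# Hypothetical closed loop: measure, correct, repeat.
ee = np.array([0.0, 0.0, 0.0])
target = np.array([0.3, 0.2, 0.5])
dt = 0.05
for _ in range(200):
    ee = ee + servo_step(ee, target) * dt  # stand-in for actual arm motion
print(np.linalg.norm(target - ee))  # residual error shrinks each cycle
```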

    A Self-Organizing Neural Model of Motor Equivalent Reaching and Tool Use by a Multijoint Arm

    This paper describes a self-organizing neural model of eye-hand coordination. Called the DIRECT model, it embodies a solution to the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves, carrying out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate correlated visual, spatial, and motor information, which is used to learn its internal coordinate transformations. After learning, the model can control reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with visual input distorted by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement; no corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end-effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. This spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.

    National Science Foundation (IRI 90-24877); Office of Naval Research (N00014-92-J-1309); Air Force Office of Scientific Research (F49620-92-J-0499)
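
    The babbling-then-mapping scheme can be illustrated with a toy redundant arm: random exploratory movements are regressed into a local linear map from joint rotations to spatial displacement, and that map is then inverted to turn a spatial direction vector into a motor direction vector. This is a simplified stand-in for DIRECT's learned neural transforms, not the model itself:

```python
import numpy as np

LENGTHS = np.array([0.3, 0.3, 0.2])

def fk(theta):
    """End-effector (x, y) of a planar 3-joint arm (redundant for 2-D tasks)."""
    a = np.cumsum(theta)
    return np.array([LENGTHS @ np.cos(a), LENGTHS @ np.sin(a)])

def babble_map(theta, n=50, eps=1e-3, rng=np.random.default_rng(0)):
    """'Motor babbling': fit a local linear map from joint rotations to
    spatial displacement by regression on random exploratory movements."""
    dth = rng.normal(scale=eps, size=(n, 3))
    dx = np.array([fk(theta + d) - fk(theta) for d in dth])
    A, *_ = np.linalg.lstsq(dth, dx, rcond=None)  # dx ≈ dth @ A
    return A.T  # (2, 3): spatial motion per unit joint rotation

theta = np.array([0.3, 0.5, 0.4])
target = np.array([0.5, 0.3])
for _ in range(100):
    direction = target - fk(theta)           # spatial direction vector
    M = babble_map(theta)
    dtheta = np.linalg.pinv(M) @ direction   # motor direction vector
    theta = theta + 0.2 * dtheta
print(np.linalg.norm(target - fk(theta)))    # approaches zero
```

    Because the map is inverted by pseudoinverse at each posture, many different joint configurations can realize the same spatial path, which is the motor equivalence property the abstract emphasizes.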

    Intersegmental Coordination in the Kinematics of Prehension Movements of Macaques

    The most popular model of how prehensile movements are organized assumes that they comprise two "components": a reaching component encoding information about the object's spatial location, and a grasping component encoding information about the object's intrinsic properties, such as size and shape. Comparative kinematic studies of grasping behavior in humans and macaques have investigated the similarities and differences between the two species. Although these studies favor the hypothesis that macaques and humans share a number of kinematic features, it remains unclear how the reaching and grasping components are coordinated during prehension movements in free-ranging macaque monkeys. Twelve hours of video footage was filmed of the monkeys as they snatched food items from one another (snatching) or collected them in the absence of competitors (unconstrained). The video samples were analyzed frame by frame using digitization techniques developed for two-dimensional post-hoc kinematic analysis of the two types of action. The results indicate that only in the snatching condition did increased reaching variability accompany an increase in the amplitude of maximum grip aperture. Moreover, the onset of a break-point along the deceleration phase of the velocity profile correlated with the time at which maximum grip aperture occurred. These findings suggest that macaques can spatially and temporally couple the reaching and grasping components when there is pressure to act quickly, and they offer a substantial contribution to the debate about how prehensile actions are programmed.
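
    The kinematic measures named here (maximum grip aperture, velocity profile) are straightforward to compute from digitized marker coordinates. A minimal sketch under the assumption of per-frame 2-D thumb, index, and wrist positions (the study's actual digitization pipeline is not described in the abstract):

```python
import numpy as np

def grasp_kinematics(thumb, index, wrist, dt):
    """Basic 2-D prehension measures from frame-by-frame digitized markers.

    thumb, index, wrist: (T, 2) coordinates per video frame.
    Returns maximum grip aperture (MGA), its time, and the wrist speed
    profile from which a deceleration-phase break-point could be sought.
    """
    aperture = np.linalg.norm(thumb - index, axis=1)   # grasping component
    mga_frame = int(np.argmax(aperture))
    # Reaching component: wrist speed from frame-to-frame displacement.
    speed = np.linalg.norm(np.gradient(wrist, dt, axis=0), axis=1)
    return aperture[mga_frame], mga_frame * dt, speed
```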

    Neural Representations for Sensory-Motor Control I: Head-Centered 3-D Target Positions from Opponent Eye Commands

    This article describes how corollary discharges from outflow eye movement commands can be transformed by two stages of opponent neural processing into a head-centered representation of 3-D target position. This representation implicitly defines a cyclopean coordinate system whose variables approximate binocular vergence and the spherical horizontal and vertical angles with respect to the observer's head. Various psychophysical data concerning binocular distance perception and reaching behavior are clarified by this representation. The representation provides a foundation for learning head-centered and body-centered invariant representations of both foveated and non-foveated 3-D target positions. It also enables a solution to the classical motor equivalence problem, whereby many different joint configurations of a redundant manipulator can all be used to realize a desired trajectory in 3-D space.

    Air Force Office of Scientific Research (URI 90-0175); Defense Advanced Research Projects Agency (AFOSR-90-0083); National Science Foundation (IRI-87-16960, IRI-90-24877)
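
    The cyclopean variables described here (vergence plus horizontal and vertical angles) suffice to recover a head-centered 3-D target position by triangulating the two gaze lines. A geometric sketch under simplifying assumptions (horizontal-plane triangulation, a nominal interocular distance; this is illustrative geometry, not the article's opponent-processing network):

```python
import numpy as np

def target_from_eye_angles(theta_l, theta_r, elevation, iod=0.065):
    """Triangulate a head-centered target from outflow eye-position signals.

    theta_l, theta_r: horizontal gaze angles (rad) of the left/right eyes,
    measured from straight ahead; elevation: shared vertical angle (rad);
    iod: interocular distance in meters (assumed value). Gaze lines must
    converge (theta_l > theta_r), otherwise no intersection exists.
    """
    # Eyes at x = -iod/2 and x = +iod/2; intersect the two gaze lines:
    # left ray:  x = -iod/2 + z*tan(theta_l)
    # right ray: x = +iod/2 + z*tan(theta_r)
    z = iod / (np.tan(theta_l) - np.tan(theta_r))  # viewing distance
    x = -iod / 2 + z * np.tan(theta_l)             # lateral position
    y = z * np.tan(elevation)                      # height
    vergence = theta_l - theta_r                   # cyclopean depth variable
    return np.array([x, y, z]), vergence
```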

    Spatially valid proprioceptive cues improve the detection of a visual stimulus

    Vision and proprioception are the main sensory modalities that convey hand location and direction of movement. Fusion of these sensory signals into a single robust percept is now well documented, but it is not known whether these modalities also interact in the spatial allocation of attention, as has been demonstrated for other modality pairings. The aim of this study was to test whether proprioceptive signals can spatially cue a visual target to improve its detection. Participants used a planar manipulandum in a forward reaching action and determined, during this movement, whether a near-threshold visual target appeared at either of two lateral positions. The target presentation was followed by a masking stimulus, which made the target's possible location unambiguous but not its presence. Proprioceptive cues were given by applying a brief lateral force to the participant's arm, either in the same direction (validly cued) or in the opposite direction (invalidly cued) relative to the on-screen location of the mask. The d′ detection rate for the target was higher when the direction of the proprioceptive stimulus was compatible with the location of the visual target than when it was incompatible. These results suggest that proprioception influences the allocation of attention in visual space.
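
    d′ is the standard signal-detection sensitivity index, computed as the difference of the z-transformed hit and false-alarm rates. A minimal sketch (SciPy assumed; the log-linear correction for extreme rates is an assumption, as the abstract does not state one):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A small correction keeps the rates away from 0 and 1 (log-linear
    rule); the study's exact correction, if any, is not stated.
    """
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical counts: d' would be compared across valid vs. invalid cues.
print(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38))
```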