7 research outputs found

    How Listing's Law May Emerge from Neural Control of Reactive Saccades

    We hypothesize that Listing's Law emerges from two key properties of the saccadic sensory-motor system: 1) the visual sensory apparatus has a 2-D topology, and 2) motor synergists are synchronized. The theory is tested by showing that eye attitudes obeying Listing's Law are achieved in a 3-D saccadic control system that translates visual eccentricity into synchronized motor commands via a 2-D spatial gradient. Simulations of this system demonstrate that the attitudes assumed by the eye upon accurate foveation tend to obey Listing's Law.
    Office of Naval Research (N00014-92-J-1309, N00014-95-1-1409); Air Force Office of Scientific Research (90-0083)
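    As an illustrative sketch only (not the paper's actual control model), Listing's Law can be checked numerically: eye attitudes are represented as rotation vectors, and the law holds when every rotation axis lies in Listing's plane, i.e. the torsional component about the primary gaze direction is zero. The function names below are hypothetical.

    ```python
    import numpy as np

    def listing_rotation(gaze):
        """Rotation vector taking the primary direction (z-axis) to `gaze`.
        Its axis is perpendicular to both directions, so it lies in
        Listing's plane (the x-y plane) by construction."""
        primary = np.array([0.0, 0.0, 1.0])
        g = gaze / np.linalg.norm(gaze)
        axis = np.cross(primary, g)           # z-component is always zero
        s = np.linalg.norm(axis)              # assumes gaze != primary
        angle = np.arctan2(s, primary @ g)
        return (axis / s) * np.tan(angle / 2.0)  # half-angle rotation vector

    def obeys_listings_law(rot_vecs, tol=1e-9):
        """True when all rotation vectors have negligible torsion (z ~ 0)."""
        return all(abs(r[2]) < tol for r in rot_vecs)

    # A few eccentric gaze directions reached by pure Listing rotations
    gazes = [np.array([0.3, 0.1, 1.0]), np.array([-0.2, 0.4, 1.0])]
    rots = [listing_rotation(g) for g in gazes]
    print(obeys_listings_law(rots))  # True: axes are torsion-free by construction
    ```

    The point of the check is that Listing's Law is a constraint on attainable attitudes, not on trajectories, so it can be verified on the final orientation after each simulated saccade.
    
    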

    Gaze control in a dynamic paradigm of head-in-space orientation


    Modeling, design and analysis of a biomimetic eyeball-like robot with accommodation mechanism


    Humanoid Robots

    For many years, humans have been trying to recreate the complex mechanisms that make up the human body. The task is extremely complicated, and the results are not yet fully satisfactory. However, with technological advances grounded in theoretical and experimental research, we have managed, to some extent, to copy or imitate some systems of the human body. This research aims not only to create humanoid robots, many of which are autonomous systems, but also to deepen our knowledge of the systems that form the human body, with possible applications in rehabilitation technology, drawing together studies in Robotics, Biomechanics, Biomimetics, Cybernetics, and other areas. This book presents a series of studies inspired by this ideal, carried out by researchers worldwide, that analyze and discuss diverse subjects related to humanoid robots. The contributions explore robotic hands, learning, language, vision, and locomotion.

    Life Sciences Program Tasks and Bibliography for FY 1996

    This document includes information on all peer-reviewed projects funded by the Office of Life and Microgravity Sciences and Applications, Life Sciences Division, during fiscal year 1996. This document will be published annually and made available to scientists in the space life sciences field both as a hard copy and as an interactive Internet web page.

    Life Sciences Program Tasks and Bibliography

    This document includes information on all peer-reviewed projects funded by the Office of Life and Microgravity Sciences and Applications, Life Sciences Division, during fiscal year 1995. Additionally, this inaugural edition of the Task Book includes information for FY 1994 programs. This document will be published annually and made available to scientists in the space life sciences field both as a hard copy and as an interactive Internet web page.

    Visuomotor Coordination in Reach-To-Grasp Tasks: From Humans to Humanoids and Vice Versa

    Understanding the principles involved in visually based coordinated motor control is one of the most fundamental and intriguing research problems across a number of areas, including psychology, neuroscience, computer vision, and robotics. Little is known about the computational functions that the central nervous system performs to meet the requirements of visually driven reaching and grasping. Additionally, despite several decades of advances in the field, the abilities of humanoids to perform similar tasks remain modest when they must operate in unstructured and dynamically changing environments. More specifically, our first focus is understanding the principles involved in human visuomotor coordination. Few behavioral studies have considered visuomotor coordination in natural, unrestricted, head-free movements in complex scenarios such as obstacle avoidance. To fill this gap, we provide an assessment of visuomotor coordination when humans perform prehensile tasks with obstacle avoidance, an issue that has received far less attention. Namely, we quantify the relationships between the gaze and arm-hand systems, so as to inform robotic models, and we investigate how the presence of an obstacle modulates this pattern of correlations. Second, to complement these observations, we provide a robotic model of visuomotor coordination, with and without obstacles in the workspace. The parameters of the controller are estimated solely from the human motion-capture data of our human study. This controller has a number of interesting properties: it provides an efficient way to control gaze, arm, and hand movements in a stable and coordinated manner, and when facing perturbations while reaching and grasping, it adapts its behavior almost instantly while preserving coordination between the gaze, arm, and hand. In the third part of the thesis, we study the neuroscientific literature on primates.
We stress the view that the cerebellum uses the cortical reference-frame representation. Taking this representation into account, the cerebellum performs closed-loop programming of multi-joint movements and synchronizes movement between the eye-head system, the arm, and the hand. Based on this investigation, we propose a functional architecture of cerebellar-cortical involvement and derive a number of improvements to our visuomotor controller for obstacle-free reaching and grasping. Because this model is devised by carefully taking into account the neuroscientific evidence, we are able to provide a number of testable predictions about the functions of the central nervous system in visuomotor coordination. Finally, we tackle the flow of visuomotor coordination in the direction from the arm-hand system to the visual system. We develop two models of motor-primed attention for humanoid robots. Motor-priming of attention is a mechanism that prioritizes visual processing of motor-relevant parts of the visual field. Recent studies in humans and monkeys have shown that the visual attention supporting natural behavior is not defined exclusively by visual saliency in color or texture cues; rather, reachable space and motor plans are the predominant source of this attentional modulation. Here, we show that motor-priming of visual attention can be used to efficiently distribute a robot's computational resources devoted to visual processing.
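    A minimal sketch of the motor-priming idea (not the thesis's actual models): a bottom-up saliency map is reweighted so that regions inside the robot's reachable workspace dominate the attention distribution. The function name and the mixing weight `alpha` are assumptions for illustration.

    ```python
    import numpy as np

    def motor_primed_saliency(saliency, reachable_mask, alpha=0.8):
        """Modulate a bottom-up saliency map so that motor-relevant
        (reachable) regions receive more processing. `alpha` in [0, 1]
        controls how strongly reachability dominates pure saliency."""
        primed = saliency * (alpha * reachable_mask + (1.0 - alpha))
        return primed / primed.sum()  # renormalize to a distribution

    # Toy 1-D visual field: uniform saliency, only the left half reachable
    saliency = np.ones(8)
    reachable = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
    primed = motor_primed_saliency(saliency, reachable)
    # Reachable locations now carry most of the attention mass,
    # even though the raw saliency was uniform.
    ```

    In practice the reachable mask would come from the robot's kinematic model or motor plan, so attention follows the action rather than the image alone.
    
    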