
    Adaptive saccade controller inspired by the primates' cerebellum

    Saccades are fast eye movements that allow humans and robots to bring a visual target to the center of the visual field. Saccades are open loop with respect to the vision system, so their execution requires precise knowledge of the internal model of the oculomotor system. In this work, we modeled saccade control, taking inspiration from the recurrent loops between the cerebellum and the brainstem. In this model, the brainstem acts as a fixed inverse model of the oculomotor system, while the cerebellum acts as an adaptive element that learns the internal model of the oculomotor system. The adaptive filter is implemented using a state-of-the-art neural network, called I-SSGPR. The proposed approach, namely the recurrent architecture, was validated through experiments performed both in simulation and on an anthropomorphic robotic head. Moreover, we compared the recurrent architecture with another model of the cerebellum, feedback error learning. The results show that the recurrent architecture outperforms feedback error learning in terms of accuracy and insensitivity to the choice of the feedback controller.
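    The control scheme described in this abstract can be illustrated with a minimal sketch: a fixed, approximate inverse model (the "brainstem") issues a motor command, and an adaptive element (the "cerebellum") learns a corrective gain from the post-saccadic error. Here a simple normalized-LMS weight stands in for the I-SSGPR network, and the plant, gains, and learning rate are illustrative assumptions, not the paper's implementation.

    ```python
    import random

    # Minimal sketch of the adaptive saccade controller: a fixed inverse
    # model plus an online-learned corrective gain. The scalar plant and
    # all constants below are hypothetical simplifications.
    random.seed(1)
    true_gain = 0.8        # unknown oculomotor plant: y = true_gain * u
    brainstem_gain = 1.0   # fixed, approximate inverse model
    w = 0.0                # adaptive (cerebellar) corrective weight
    lr = 0.1               # learning rate

    for trial in range(200):
        target = random.uniform(-10.0, 10.0)        # desired saccade amplitude
        u = (brainstem_gain + w) * target           # brainstem command + correction
        y = true_gain * u                           # executed open-loop saccade
        error = target - y                          # post-saccadic error signal
        w += lr * error * target / (target**2 + 1e-6)  # normalized LMS update

    # After learning, the total gain approaches 1/true_gain, so saccades
    # land on target despite the inaccurate fixed inverse model.
    final_error = abs(true_gain * (brainstem_gain + w) - 1.0)
    ```

    The key property mirrored here is that learning is driven purely by the error observed after each open-loop movement, with no feedback during the saccade itself.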

    Hodge numbers for the cohomology of Calabi-Yau type local systems

    We use Higgs cohomology to determine the Hodge numbers of the first intersection cohomology group of a local system V arising from the third direct image of a family of Calabi-Yau 3-folds over a smooth, quasi-projective curve. We give applications to Rhode's families of Calabi-Yau 3-folds without MUM.
    Comment: Some signs corrected. This article draws heavily from arXiv:0911.027

    Reachable by walking: inappropriate integration of near and far space may lead to distance errors

    Our experimental results show that infants, while learning to walk, attempt to reach for unreachable objects. These distance errors may result from inappropriate integration of reaching and locomotor actions, attention control, and near/far visual space. During their first months infants are fairly immobile, and their attention and actions are constrained to near (reachable) space. Walking, in contrast, lures attention to distal displays and provides the information needed to disambiguate far space. In this paper, we use reward-mediated learning to mimic the development of absolute distance perception. The results obtained with the NAO robot further support our hypothesis that the representation of near space changes after the onset of walking, which may cause the occurrence of distance errors.

    Reliable non-prehensile door opening through the combination of vision, tactile and force feedback

    Whereas vision and force feedback—either at the wrist or at the joint level—for robotic manipulation purposes has received considerable attention in the literature, the benefits that tactile sensors can provide when combined with vision and force have rarely been explored. In fact, there are situations in which vision and force feedback cannot guarantee robust manipulation. Vision is frequently subject to calibration errors, occlusions, and outliers, whereas force feedback can only provide useful information along those directions that are constrained by the environment. In tasks where the visual feedback contains errors and the contact configuration does not constrain all the Cartesian degrees of freedom, vision and force sensors are not sufficient to guarantee successful execution. Many of the tasks performed in our daily life that do not require a firm grasp belong to this category. Therefore, it is important to develop strategies for robustly dealing with these situations. In this article, a new framework for combining tactile information with vision and force feedback is proposed and validated with the task of opening a sliding door. Results show how the vision-tactile-force approach outperforms vision-force and force alone, in the sense that it allows correcting the vision errors while a suitable contact configuration is guaranteed.
    This research was partly supported by the Korea Science and Engineering Foundation under the WCU (World Class University) program funded by the Ministry of Education, Science and Technology, S. Korea (Grant No. R31-2008-000-10062-0), by the European Commission's Seventh Framework Programme FP7/2007-2013 under grant agreements 217077 (EYESHOTS project) and 248497 (TRIDENT project), by Ministerio de Ciencia e Innovación (DPI-2008-06636 and DPI2008-06548-C03-01), by Fundació Caixa Castelló-Bancaixa (P1-1B2008-51 and P1-1B2009-50), and by Universitat Jaume I.

    A framework for compliant physical interaction: the grasp meets the task

    Although the grasp-task interplay in our daily life is unquestionable, very little research has addressed this problem in robotics. In order to fill the gap between the grasp and the task, we adopt the most successful approaches to grasp and task specification and extend them with additional elements that make it possible to define a grasp-task link. We propose a global sensor-based framework for the specification and robust control of physical interaction tasks, where the grasp and the task are jointly considered on the basis of the task frame formalism and the knowledge-based approach to grasping. A physical interaction task planner is also presented, based on the new concept of task-oriented hand pre-shapes. The planner focuses on the manipulation of articulated parts in home environments and is able to automatically specify all the elements of a physical interaction task required by the proposed framework. Finally, several applications are described, showing the versatility of the proposed approach and its suitability for the fast implementation of robust physical interaction tasks in very different robotic systems.

    Reaching for the Unreachable: Reorganization of Reaching with Walking

    Previous research suggests that reaching and walking behaviors may be linked developmentally, as reaching changes at the onset of walking. Here we report new evidence on an apparent loss of the distinction between reachable and nonreachable distances as children start walking. The experiment compared non-walkers, walkers with help, and independent walkers in a reaching task with targets at varying distances. Reaching attempts, contact, leaning, and communication behaviors were recorded. Most of the children reached for the unreachable objects the first time they were presented. Non-walkers, however, reached less on the subsequent trials, showing clear adjustment of their reaching decisions with the failures. In contrast, walkers consistently attempted reaches to targets at unreachable distances. We suggest that these reaching errors may result from inappropriate integration of reaching and locomotor actions, attention control, and near/far visual space. We propose a reward-mediated model implemented on a NAO humanoid robot that replicates the main results from our study, showing an increase in reaching attempts to nonreachable distances after the onset of walking.
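    The core of a reward-mediated reaching decision can be sketched in a few lines: the agent learns, per distance bin, the value of attempting a reach, where success and failure deliver positive and negative reward. All names, bin counts, and rates below are illustrative assumptions, not the authors' NAO implementation.

    ```python
    import random

    # Hedged sketch of reward-mediated learning of reach decisions.
    # Reaches to bins 0..arm_length succeed (+1); farther bins fail (-1).
    random.seed(0)
    arm_length = 3
    values = [0.0] * 8      # learned value of "attempt reach" per distance bin
    lr = 0.2                # learning rate

    for episode in range(2000):
        d = random.randrange(8)                   # object at a random distance bin
        reward = 1.0 if d <= arm_length else -1.0
        values[d] += lr * (reward - values[d])    # reward-mediated value update

    # After learning, the agent attempts reaches only where value is positive,
    # recovering the reachable/nonreachable boundary from reward alone.
    attempt = [v > 0 for v in values]
    ```

    In this framing, the developmental effect reported above corresponds to a transient disruption of these learned values when a new action (walking) changes how far space is experienced.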

    Performance testing of tomato harvesters in Extremadura

    This communication presents the results obtained in a performance test of tomato harvesters carried out during the 1990 season in Badajoz. Three trailed machines and two self-propelled machines, all European-made, were tested; each harvested an area of 0.4 ha during the trial. The hourly throughput of the machines, the fruit losses in the field, and the mechanical damage to the fruit on arrival at the factory were evaluated.

    Integration of Static and Self-motion-Based Depth Cues for Efficient Reaching and Locomotor Actions

    The common approach to estimating the distance of an object in computer vision and robotics is to use stereo vision. Stereopsis, however, provides good estimates only within near space and is thus more suitable for reaching actions. In order to successfully plan and execute an action in far space, other depth cues must be taken into account. Self-body movements, such as head and eye movements or locomotion, can provide rich information about depth. This paper proposes a model for the integration of static and self-motion-based depth cues for a humanoid robot. Our results show that self-motion-based visual cues improve the accuracy of distance perception and, combined with other depth cues, provide the robot with a robust distance estimator suitable for both reaching and walking actions.
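    A standard way to combine several depth estimates of differing reliability, such as the stereo and self-motion cues discussed in this abstract, is inverse-variance (maximum-likelihood) weighting. The sketch below uses this generic scheme with made-up variances; it is not claimed to be the paper's integration model.

    ```python
    # Hedged sketch of reliability-weighted depth-cue fusion.
    # Each cue contributes in proportion to its inverse variance.
    def fuse_cues(estimates, variances):
        """Maximum-likelihood fusion of independent Gaussian cues."""
        weights = [1.0 / v for v in variances]
        total = sum(weights)
        fused = sum(w * e for w, e in zip(weights, estimates)) / total
        fused_variance = 1.0 / total   # never worse than the best single cue
        return fused, fused_variance

    # Illustrative values: stereo says 2.0 m (reliable in near space),
    # motion parallax says 2.6 m (less reliable).
    d, var = fuse_cues([2.0, 2.6], [0.1, 0.4])   # d = 2.12, var = 0.08
    ```

    The fused variance being smaller than either input variance is what makes cue combination attractive: adding a self-motion cue can only tighten the estimate, which matches the accuracy improvement the abstract reports.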