894 research outputs found

    Hierarchical generative modelling for autonomous robots

    Humans can produce complex whole-body motions when interacting with their surroundings by planning, executing, and combining individual limb movements. We investigated this fundamental aspect of motor control in the setting of autonomous robotic operations. We approach this problem through hierarchical generative modelling equipped with multi-level planning for autonomous task completion, mimicking the deep temporal architecture of human motor control. Here, temporal depth refers to the nested time scales at which successive levels of a forward or generative model unfold: for example, delivering an object requires a global plan to contextualise the fast coordination of multiple local limb movements. This separation of temporal scales also motivates hierarchical designs in robotics and control. Specifically, to achieve versatile sensorimotor control, it is advantageous to structure the planning and low-level motor control of individual limbs hierarchically. We use numerical and physical simulation to conduct experiments and to establish the efficacy of this formulation. Using a hierarchical generative model, we show how a humanoid robot can autonomously complete a complex task that necessitates a holistic use of locomotion, manipulation, and grasping. Specifically, we demonstrate that a humanoid robot can retrieve and transport a box, open and walk through a door to reach its destination, and approach and kick a football, while showing robust performance in the presence of body damage and ground irregularities. Our findings demonstrate the effectiveness of human-inspired motor-control algorithms, and our method provides a viable hierarchical architecture for the autonomous completion of challenging goal-directed tasks.
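    The nested-timescale idea in this abstract can be illustrated with a toy two-level controller. All names, dynamics, and gains below are illustrative assumptions, not the authors' model: a slow high-level planner emits subgoals one coarse step at a time, while a fast low-level controller tracks the current subgoal at a finer time step.

```python
# Toy sketch of nested timescales (assumed 1-D dynamics, not the paper's model):
# a slow planner proposes subgoals; a fast controller tracks each one.

def slow_planner(position, goal, step=1.0):
    """High level: propose the next subgoal, one coarse step toward the goal."""
    direction = 1.0 if goal > position else -1.0
    return position + direction * min(step, abs(goal - position))

def fast_controller(position, subgoal, gain=0.5, substeps=10):
    """Low level: proportional tracking of the subgoal at a finer timescale."""
    for _ in range(substeps):
        position += gain * (subgoal - position)
    return position

def run(position=0.0, goal=5.0, outer_steps=20, tol=1e-3):
    """Alternate slow planning and fast tracking until the goal is reached."""
    for _ in range(outer_steps):
        subgoal = slow_planner(position, goal)
        position = fast_controller(position, subgoal)
        if abs(position - goal) < tol:
            break
    return position

print(round(run(), 3))
```

    The point of the separation is that the planner reasons only about coarse waypoints while the controller handles fine-grained tracking, mirroring the global-plan-versus-local-movement distinction described above.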

    Perception in real and artificial insects: a robotic investigation of cricket phonotaxis

    The aim of this thesis is to investigate a methodology for studying perceptual systems by building artificial ones. It is proposed that useful results can be obtained from detailed robotic modelling of specific sensorimotor mechanisms in lower animals. By looking at the sensory control of behaviour in simple biological organisms, and in working robots, it is argued that proper appreciation of the physical interaction of the system with the environment and the task is essential for discovering how perceptual mechanisms function. Although links to biology, and concern with perceptual competence, are fields of growing interest in Artificial Intelligence, much of the current research fails to adequately address these issues, as the model systems being built do not represent real sensorimotor problems.

    By analysing what is required for a model of a system to contribute to explaining that system, a particular approach to modelling perceptual systems is suggested. This involves choosing an appropriate target system to model, building a system that validly represents the target with respect to a particular hypothesis, and properly evaluating the behaviour of the model system to draw conclusions about the target. The viability and potential contribution of this approach is demonstrated in the design, implementation and evaluation of a mobile robot model of a hypothesised mechanism for phonotaxis in the cricket.

    The result is a robot that successfully locates a specific sound source under a variety of conditions, with a range of behaviour that resembles the cricket in many ways. This provides some support for the hypothesis that the neural mechanism for phonotaxis in crickets does not involve separate processing for recognition and location of the signal, as is generally supposed. It also shows the importance of understanding the physical interaction of the system's structure with its environment in devising and implementing perceptual systems. Both these results vindicate the proposed methodology.
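    The combined recognition-and-localisation hypothesis can be illustrated with a toy model. The tuning function, parameters, and preferred syllable rate below are assumptions for illustration, not the thesis's neural circuit: a single left/right comparison whose inputs are gated by temporal tuning, so that steering emerges only for songs near the preferred rate, without a separate recognition stage.

```python
# Toy sketch (assumed parameters) of recognition and localisation in one
# mechanism: responses are gated by syllable-rate tuning, and steering is
# just a left/right comparison of those gated responses.

def neural_response(intensity, syllable_rate, preferred=30.0, width=10.0):
    """Response scales with intensity, gated by a temporal-tuning factor."""
    tuning = max(0.0, 1.0 - abs(syllable_rate - preferred) / width)
    return intensity * tuning

def steering(left_db, right_db, syllable_rate):
    """Turn toward whichever ear responds more strongly; 0.0 means no taxis."""
    left = neural_response(left_db, syllable_rate)
    right = neural_response(right_db, syllable_rate)
    if left == 0.0 and right == 0.0:
        return 0.0  # song not recognised, so no localisation either
    return 1.0 if right > left else -1.0 if left > right else 0.0

# A song at the preferred rate, louder on the right, elicits a right turn...
print(steering(40.0, 50.0, 30.0))  # → 1.0
# ...while a wrong-rate song from the same direction elicits no taxis at all.
print(steering(40.0, 50.0, 80.0))  # → 0.0
```

    Because recognition is implicit in the tuning of the pathway that drives steering, there is no separate "is this the right song?" module, which is the character of the hypothesis the robot model supports.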

    TOWARDS THE GROUNDING OF ABSTRACT CATEGORIES IN COGNITIVE ROBOTS

    The grounding of language in humanoid robots is a fundamental problem, especially in social scenarios that involve the interaction of robots with human beings. Indeed, natural language is the most natural interface for humans to interact and exchange information about concrete entities such as KNIFE and HAMMER and abstract concepts such as MAKE and USE. This research domain is important not only for the advances it can produce in the design of human-robot communication systems, but also for its implications for cognitive science. Abstract words are used in daily conversations among people to describe events and situations that occur in the environment. Many scholars have suggested that the distinction between concrete and abstract words lies on a continuum along which all entities vary in their level of abstractness. The work presented herein aimed to ground abstract concepts, like concrete ones, in perception and action systems. This made it possible to investigate how different behavioural and cognitive capabilities can be integrated in a humanoid robot in order to bootstrap the development of higher-order skills such as the acquisition of abstract words. To this end, three neuro-robotics models were implemented. The first neuro-robotics experiment consisted of training a humanoid robot to perform a set of motor primitives (e.g. PUSH, PULL) which, hierarchically combined, led to the acquisition of higher-order words (e.g. ACCEPT, REJECT). The implementation of this model, based on feed-forward artificial neural networks, permitted the assessment of the training methodology adopted for the grounding of language in humanoid robots. In the second experiment, the architecture used for the first study was reimplemented with recurrent artificial neural networks, which enabled the temporal specification of the action primitives to be executed by the robot.
    This increased the combinations of actions that could be taught to the robot for the generation of more complex movements. For the third experiment, a model based on recurrent neural networks that integrated multi-modal inputs (i.e. language, vision and proprioception) was implemented for the grounding of abstract action words (e.g. USE, MAKE). The abstract representations of actions ("one-hot" encoding) used in the other two experiments were replaced with joint values recorded from the iCub robot's sensors. Experimental results showed that motor primitives have different activation patterns according to the action sequence in which they are embedded. Furthermore, the simulations suggested that acquiring concepts related to abstract action words requires reactivating internal representations similar to those activated during the acquisition of the basic concepts, directly grounded in perceptual and sensorimotor knowledge, contained in the hierarchical structure of the words used to ground the abstract action words. This study was financed by the EU project RobotDoC (235065) from the Seventh Framework Programme (FP7), Marie Curie Actions Initial Training Network.
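    The reported finding that the same primitive produces different activation patterns depending on its sequence context can be illustrated with a minimal Elman-style recurrent update. The sizes, weights, and primitive names below are toy assumptions, not the thesis's trained networks: the recurrent connection carries the previous hidden state into the current step, so the final representation of PUSH depends on what preceded it.

```python
import numpy as np

# Toy sketch (assumed sizes, random weights) of sequence-dependent
# activations in a simple Elman-style recurrent network.

rng = np.random.default_rng(0)
PRIMITIVES = ["PUSH", "PULL", "GRASP"]
one_hot = {p: np.eye(len(PRIMITIVES))[i] for i, p in enumerate(PRIMITIVES)}

W_in = rng.normal(size=(4, 3))   # input -> hidden
W_rec = rng.normal(size=(4, 4))  # hidden -> hidden (carries sequence context)

def run_sequence(seq):
    """Return the hidden state after processing a sequence of primitives."""
    h = np.zeros(4)
    for p in seq:
        h = np.tanh(W_in @ one_hot[p] + W_rec @ h)
    return h

h_after_pull = run_sequence(["PULL", "PUSH"])
h_after_grasp = run_sequence(["GRASP", "PUSH"])
# The same final primitive yields a different internal representation:
print(np.allclose(h_after_pull, h_after_grasp))  # → False
```

    This context sensitivity is what makes recurrent architectures suitable for the temporal specification of action primitives described above, and it is the mechanism behind the different activation patterns the experiments report.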