
    A biologically inspired meta-control navigation system for the Psikharpax rat robot

    A biologically inspired navigation system for the mobile rat-like robot Psikharpax is presented, allowing self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks had previously been validated in simulation, but the capacity of the model to work on a real robot platform had not been tested. This paper presents our implementation on the Psikharpax robot of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which strategy was optimal in each situation by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment, recognized as new contexts, and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition for the role of the rat prefrontal cortex in strategy shifting. Such a brain-inspired meta-controller may also advance learning architectures in robotics.
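
    A minimal sketch of how such a context-gated strategy-selection meta-controller could look, assuming two strategies ("planning" and "taxon"), a discrete context id produced by the context detector, and a simple Q-learning update; the names, learning rate and epsilon-greedy rule are illustrative, not the paper's actual implementation.

```python
# Hypothetical sketch: epsilon-greedy strategy selection with per-context
# Q-values, reinforced by the reward obtained after running a strategy.
import random
from collections import defaultdict

STRATEGIES = ["planning", "taxon"]

class MetaController:
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.alpha = alpha              # learning rate
        self.epsilon = epsilon          # exploration rate
        self.q = defaultdict(float)     # one value per (context, strategy) pair

    def select(self, context):
        """Epsilon-greedy choice of navigation strategy for this context."""
        if random.random() < self.epsilon:
            return random.choice(STRATEGIES)
        return max(STRATEGIES, key=lambda s: self.q[(context, s)])

    def update(self, context, strategy, reward):
        """Reinforce the strategy that led to reward in this context."""
        key = (context, strategy)
        self.q[key] += self.alpha * (reward - self.q[key])

# Usage: the context detector maps sensory input to a context id; switching
# back to a known context automatically restores the learned preferences.
mc = MetaController()
s = mc.select(context=0)
mc.update(context=0, strategy=s, reward=1.0)
```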

    Insect inspired visual motion sensing and flying robots

    Flying insects are masters of visual motion sensing, using dedicated motion-processing circuits at low energy and computational cost. Drawing on observations of insect visual guidance, we developed visual motion sensors and bio-inspired autopilots dedicated to flying robots. Optic flow-based visuomotor control systems have been implemented on an increasingly large number of sighted autonomous robots. In this chapter, we present how we designed and constructed local motion sensors and how we implemented bio-inspired visual guidance schemes on board several micro-aerial vehicles. A hyperacute sensor, in which retinal micro-scanning movements are performed via a small piezo-bender actuator, was mounted onto a miniature aerial robot. The OSCAR II robot is able to track a moving target accurately by exploiting the micro-scanning movement imposed on its eye's retina. We also present two interdependent control schemes that drive the eye's angular position in the robot and the robot's body angular position with respect to a visual target, without any knowledge of the robot's orientation in the global frame. This "steering-by-gazing" control strategy, implemented on this lightweight (100 g) miniature sighted aerial robot, demonstrates the effectiveness of this biomimetic visual/inertial heading control strategy.
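
    A minimal sketch of a two-photoreceptor "time of travel" optic-flow estimate in the spirit of the local motion sensors mentioned above; the threshold, inter-receptor angle and sampling period are assumptions made for this example, not the sensors' actual parameters.

```python
# Hypothetical sketch: angular speed from the delay between the moments two
# neighbouring photoreceptor signals cross a fixed threshold.
import numpy as np

def angular_speed(signal_a, signal_b, dt=1e-3, delta_phi=np.radians(3.0),
                  threshold=0.5):
    """Estimate angular speed (rad/s) of a contrast edge crossing two receptors."""
    def first_crossing(sig):
        idx = int(np.argmax(sig > threshold))
        return idx if sig[idx] > threshold else None

    ta, tb = first_crossing(signal_a), first_crossing(signal_b)
    if ta is None or tb is None or tb <= ta:
        return None                      # no consistent motion in this window
    time_of_travel = (tb - ta) * dt      # seconds between the two crossings
    return delta_phi / time_of_travel    # optic flow = inter-receptor angle / delay

# Example: the same contrast edge reaches receptor B 10 ms after receptor A.
t = np.arange(0.0, 0.1, 1e-3)
a = (t > 0.02).astype(float)
b = (t > 0.03).astype(float)
print(angular_speed(a, b))               # roughly 5.2 rad/s for a 3 degree spacing
```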

    Phototaxic foraging of the archaepaddler, a hypothetical deep-sea species

    An autonomous agent (animat, a hypothetical animal), called the (archae)paddler, is simulated in sufficient detail to regard its simulated aquatic locomotion (paddling) as physically possible. The paddler is intended as a model of an animal that might exist, although it can equally be viewed as a model of a robot that might be built. The agent navigates in a simulated deep-sea environment, where it hunts autoluminescent prey using a biologically inspired phototaxic foraging strategy while paddling in a layer just above the bottom. The advantage of this living space is that the navigation problem is essentially two-dimensional. Moreover, the deep-sea environment is physically simple (and hence easier to simulate): no significant currents, constant temperature, complete darkness. A foraging performance metric is developed that circumvents the need to solve the travelling salesman problem. A parametric simulation study then quantifies the influence of habitat factors, such as prey density, and of body geometry (e.g. placement, direction and directional selectivity of the eyes) on foraging success. Adequate performance proves to require a specific body geometry adapted to the habitat characteristics. In general, performance degrades smoothly for modest changes of the geometric and habitat parameters, indicating that we work in a stable region of 'design space'. The parameters have to strike a compromise between the ability to 'fixate' an attractive target and the ability to 'see' as many targets as possible at the same time. One important conclusion is that simple reflex-based navigation can be surprisingly efficient. A second is that performance in a global task (foraging) depends strongly on local parameters such as visual direction-tuning and the position of the eyes and paddles. Behaviour and habitat 'mould' the body, and the body geometry strongly influences performance. The resulting platform enables further testing of foraging strategies, or of vision and locomotion theories stemming either from biology or from robotics.
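
    A minimal sketch of the kind of reflex-based phototaxis described above: two eyes with Gaussian directional tuning, each driving the opposite-side paddle. Eye placement, tuning width and gains are illustrative assumptions, not the simulated paddler's values.

```python
# Hypothetical sketch: cross-coupled eye-to-paddle phototaxis reflex.
import math

def eye_response(bearing, eye_angle, sigma=math.radians(30)):
    """Directional sensitivity: strongest for prey near the eye's axis."""
    d = (bearing - eye_angle + math.pi) % (2 * math.pi) - math.pi
    return math.exp(-0.5 * (d / sigma) ** 2)

def paddle_commands(prey_bearings, eye_angle=math.radians(45), gain=1.0):
    """Sum prey luminance per eye, then cross-couple eyes to paddles."""
    left_eye = sum(eye_response(b, +eye_angle) for b in prey_bearings)
    right_eye = sum(eye_response(b, -eye_angle) for b in prey_bearings)
    # stronger signal on the left eye -> push harder with the right paddle
    return gain * right_eye, gain * left_eye   # (left paddle, right paddle)

# A prey 20 degrees to the left yields a stronger right-paddle thrust,
# turning the paddler towards the target.
print(paddle_commands([math.radians(20)]))
```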

    The Twente humanoid head

    This video shows the results of the project on the mechatronic development of the Twente humanoid head. The mechanical structure consists of a neck with four degrees of freedom (DOFs) and two eyes (a stereo pair) that tilt on a common axis and rotate sideways freely, providing three more DOFs. The motion control algorithm is designed to receive, as input, the output of a biologically inspired vision processing algorithm and to exploit the redundancy of the joints in the realization of the movements. The expressions of the humanoid head are implemented by projecting light from inside the translucent plastic cover.
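
    A minimal sketch of one standard way to exploit joint redundancy for such a gaze task, using a damped pseudo-inverse to spread a 3-D gaze correction over the seven joints; the Jacobian below is a random placeholder, not the actual head kinematics or the project's controller.

```python
# Hypothetical sketch: damped least-squares redundancy resolution.
import numpy as np

def joint_velocities(jacobian, gaze_error, damping=0.01):
    """Map a task-space gaze error to joint velocities over redundant joints."""
    J = np.asarray(jacobian)                        # shape (3, 7): 4 neck + 3 eye DOFs
    JJt = J @ J.T + damping * np.eye(J.shape[0])    # damping keeps the solve well-posed
    return J.T @ np.linalg.solve(JJt, gaze_error)

J = np.random.randn(3, 7)                           # placeholder Jacobian
e = np.array([0.1, -0.05, 0.0])                     # desired gaze correction
print(joint_velocities(J, e))                       # seven joint velocities
```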

    Adaptive saccade controller inspired by the primates' cerebellum

    Saccades are fast eye movements that allow humans and robots to bring a visual target to the center of the visual field. Saccades are open loop with respect to the vision system, thus their execution requires precise knowledge of the internal model of the oculomotor system. In this work, we modeled saccade control taking inspiration from the recurrent loops between the cerebellum and the brainstem. In this model, the brainstem acts as a fixed inverse model of the oculomotor system, while the cerebellum acts as an adaptive element that learns the internal model of the oculomotor system. The adaptive filter is implemented using a state-of-the-art neural network called I-SSGPR. The proposed approach, namely the recurrent architecture, was validated through experiments performed both in simulation and on an anthropomorphic robotic head. Moreover, we compared the recurrent architecture with another model of the cerebellum, feedback error learning. The results show that the recurrent architecture outperforms feedback error learning in terms of accuracy and insensitivity to the choice of the feedback controller.
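
    A highly simplified scalar sketch of the brainstem/cerebellum division of labour described above, with an ordinary LMS filter standing in for I-SSGPR and a one-gain "oculomotor plant"; all gains and the learning rate are assumptions for the example, not the paper's model.

```python
# Hypothetical sketch: a fixed, imperfect inverse model (brainstem) plus an
# adaptive correction (cerebellum) trained on the post-saccadic retinal error.
import numpy as np

class CerebellarFilter:
    def __init__(self, n_features, lr=0.5):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def adapt(self, x, error):
        self.w += self.lr * error * x        # LMS update, stand-in for I-SSGPR

def brainstem(target):
    return 0.8 * target                       # fixed, slightly wrong inverse model

def plant(command):
    return 1.1 * command                      # true (unknown) oculomotor gain

cereb = CerebellarFilter(n_features=1)
for _ in range(200):
    target = np.random.uniform(-0.5, 0.5)     # desired saccade amplitude (rad)
    x = np.array([target])
    command = brainstem(target) + cereb.predict(x)
    error = target - plant(command)           # post-saccadic retinal error
    cereb.adapt(x, error)
print(cereb.w)                                # approaches ~0.109: saccades land on target
```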

    Adaptive, fast walking in a biped robot under neuronal control and learning

    Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher-level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested sensorimotor loops, where the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot that uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at high speed (> 3.0 leg lengths/s), self-adapting to minor disturbances and reacting in a robust way to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understand and solve coordination problems in other complex motor tasks.
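
    A minimal sketch of the flavour of online synaptic plasticity mentioned above: a correlation-based rule that grows the weight of an early, predictive sensor signal whenever a later reflex signal still changes, so that over repeated gait cycles the learned pathway comes to act before the reflex is triggered. The signal shapes, the delay and the learning rate are illustrative assumptions, not the robot's actual learning rule.

```python
# Hypothetical sketch: input-correlation weight update applied once per gait cycle.
import numpy as np

def plasticity_update(w, predictive, reflex, lr=0.01, dt=0.01):
    """dw ~ integral of the predictive signal times the derivative of the reflex."""
    d_reflex = np.gradient(reflex, dt)
    return w + lr * np.sum(predictive * d_reflex) * dt

# Toy gait cycle: the predictive signal (e.g. ground contact) precedes the
# reflex signal (e.g. a stumble correction) by about 100 ms.
t = np.arange(0.0, 1.0, 0.01)
predictive = np.exp(-((t - 0.40) / 0.05) ** 2)
reflex = np.exp(-((t - 0.50) / 0.05) ** 2)

w = 0.0
for _ in range(20):          # weight keeps growing while the reflex still fires
    w = plasticity_update(w, predictive, reflex)
print(w)
```

    In such correlation-based schemes, once the learned pathway pre-empts the disturbance, the reflex (and with it the weight change) dies out and the weight stabilizes; the toy example above only shows the growth phase.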

    Learning the visual–oculomotor transformation: effects on saccade control and space representation

    Active eye movements can be exploited to build a visuomotor representation of the surrounding environment. Maintaining and improving such a representation requires updating the internal model involved in the generation of eye movements. From this perspective, action and perception are tightly coupled and interdependent. In this work, we encoded the internal model for oculomotor control with an adaptive filter inspired by the functionality of the cerebellum. Recurrent loops between a feedback controller and the internal model allow our system to perform accurate binocular saccades and create an implicit representation of the nearby space. Simulation results show that this recurrent architecture outperforms classical feedback error learning in terms of both accuracy and insensitivity to system parameters. The proposed approach was validated by implementing the framework on an anthropomorphic robotic head.
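
    For comparison, a minimal sketch of the classical feedback-error-learning baseline mentioned above, reduced to a single gain: the feedback controller's command serves as the teaching signal for an adaptive feedforward inverse model. The plant gain, controller gain and learning rate are assumptions for the example, not the paper's values.

```python
# Hypothetical sketch: feedback error learning with a scalar inverse model.
import numpy as np

w = 0.0                                    # adaptive inverse-model weight
Kp = 1.0                                   # feedback controller gain
lr = 0.5                                   # learning rate

def plant(command):
    return 0.9 * command                   # unknown oculomotor gain

for _ in range(300):
    target = np.random.uniform(-0.5, 0.5)  # desired gaze displacement (rad)
    feedforward = w * target               # adaptive inverse model
    error = target - plant(feedforward)    # residual error handled by feedback
    feedback = Kp * error                  # feedback controller command
    plant(feedforward + feedback)          # executed movement
    w += lr * feedback * target            # feedback command is the teaching signal
print(w)                                   # approaches 1/0.9, i.e. about 1.11
```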