    Neurobiologically Inspired Mobile Robot Navigation and Planning

    After a short review of biologically inspired navigation architectures, mainly based on models of the hippocampal anatomy, or at least of some of its functions, we present a navigation and planning model for mobile robots. This architecture is based on a model of the hippocampal and prefrontal interactions. In particular, the system relies on the definition of a new cell type, “transition cells”, which encompasses traditional “place cells”.
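
    As a minimal sketch of the planning idea (all names below are hypothetical, not taken from the paper): a transition cell can be read as a learned link between two recognized places, so that planning reduces to a search through learned transitions toward a goal.

        # Hypothetical sketch: transition cells as learned place-to-place links.
        from collections import deque

        class TransitionMap:
            def __init__(self):
                self.transitions = {}  # place -> set of places reachable in one step

            def learn(self, place_a, place_b):
                # Recruit a transition cell A->B when the robot moves from A to B.
                self.transitions.setdefault(place_a, set()).add(place_b)

            def plan(self, start, goal):
                # Breadth-first search over learned transitions, a stand-in for
                # activation diffusion from the goal in the neural model.
                frontier, parents = deque([start]), {start: None}
                while frontier:
                    place = frontier.popleft()
                    if place == goal:
                        path = []
                        while place is not None:
                            path.append(place)
                            place = parents[place]
                        return path[::-1]
                    for nxt in self.transitions.get(place, ()):
                        if nxt not in parents:
                            parents[nxt] = place
                            frontier.append(nxt)
                return None

        tmap = TransitionMap()
        for a, b in [("A", "B"), ("B", "C"), ("B", "D"), ("D", "G")]:
            tmap.learn(a, b)
        print(tmap.plan("A", "G"))  # ['A', 'B', 'D', 'G']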

    Emotional modulation of peripersonal space impacts the way robots interact

    Peripersonal space refers to the area around the body that is perceived as secure and reachable. The ability to build such a representation is necessary in both approach and avoidance behaviors. Several studies show that the perception of reachable and comfort areas depends on emotions. In this paper, we describe how we model an appetitive and an aversive pathway based on the role of some brain regions. The resulting emotional states modulate the robot's perception of its peripersonal space. This representation is directly used to control the robot's behavior. Based on a single-resource multirobot experiment, we show the impact of such an emotional modulation. Aggressive or fearful behaviors emerge from the dynamics of interaction between the simulated robots.
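
    One way to picture the modulation, as an illustrative sketch only (the scaling rule and every name below are assumptions, not the paper's equations): the appetitive and aversive signals set an emotional state that rescales the radius of the space the robot treats as secure and reachable.

        # Illustrative sketch: emotional state rescales the peripersonal radius.
        def peripersonal_radius(base_radius, appetitive, aversive, gain=0.5):
            """Shrink the secure/reachable area under threat and expand it
            under appetitive drive; both inputs are assumed in [0, 1]."""
            modulation = 1.0 + gain * (appetitive - aversive)
            return max(0.0, base_radius * modulation)

        # A fearful state contracts the comfort area; an appetitive one expands it.
        print(peripersonal_radius(1.0, appetitive=0.1, aversive=0.9))  # ~0.6
        print(peripersonal_radius(1.0, appetitive=0.9, aversive=0.1))  # ~1.4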

    Multimodal integration of visual place cells and grid cells for robots navigation

    In the present study, we propose a model of multimodal place cells merging visual and proprioceptive primitives. We briefly introduce a new model of proprioceptive localization, giving rise to the so-called grid cells [Hafting2005], which is consistent with neurobiological studies on rodents. We then show how a simple conditioning rule between both modalities can outperform purely vision-driven models. Experiments show that this model enhances robot localization and makes it possible to solve some benchmark problems for real-life robotics applications.
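
    A hedged sketch of such a conditioning rule, using a Widrow-Hoff (delta-rule) update as a stand-in for the model's learning rule; the dimensions, learning rate, and random codes below are illustrative assumptions:

        import numpy as np

        # Grid-cell (proprioceptive) activity learns to predict visual place-cell
        # activity; once conditioned, the grid pathway can stand in for vision.
        rng = np.random.default_rng(0)
        n_grid, n_place = 32, 8
        V_true = rng.random((n_place, n_grid))   # hidden grid-to-place relation
        W = np.zeros((n_place, n_grid))          # learned association
        lr = 0.05

        for _ in range(2000):                    # vision available: condition
            grid = rng.random(n_grid)            # stand-in grid-cell vector
            visual = V_true @ grid               # stand-in visual place response
            W += lr * np.outer(visual - W @ grid, grid)  # delta rule

        grid = rng.random(n_grid)                # vision degraded: predict
        err = np.max(np.abs(W @ grid - V_true @ grid))
        print(f"max prediction error after conditioning: {err:.3f}")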

    Emotional modulation of peripersonal space as a way to represent reachable and comfort areas

    This work is based on the idea that, as in biological organisms, basic motivated behavior can be represented in terms of approach and avoidance. We propose a model for emotional modulation of the robot's peripersonal space, that is to say, an area that is both reachable and secure: the space where the robot can act. The contribution of this paper is a generic model that integrates various stimuli to build a representation of reachable and comfort areas used to control robot behavior. This architecture is tested in three experiments using a real robot and simulations, and is compared with two altered versions of the architecture. We show how our model allows the robot to solve various tasks, display emotionally colored behaviors, and account for results from psychological studies.
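
    As an illustrative sketch of the integration step (the polar map and the subtraction rule are assumptions, not the paper's model): appetitive and aversive stimuli are merged into one map around the robot, and the comfort area is where the appetitive drive outweighs the aversive one.

        import numpy as np

        # Merge stimuli into a polar comfort map and pick a safe heading.
        angles = np.linspace(-np.pi, np.pi, 72, endpoint=False)

        def bump(center, width=0.5):
            d = np.angle(np.exp(1j * (angles - center)))  # wrapped angular distance
            return np.exp(-(d / width) ** 2)

        appetitive = bump(0.0)                   # e.g. a resource straight ahead
        aversive = 0.8 * bump(np.pi / 4)         # e.g. another robot at 45 degrees
        comfort = np.clip(appetitive - aversive, 0.0, None)

        heading = angles[np.argmax(comfort)]     # act only inside the comfort area
        print(f"chosen heading: {np.degrees(heading):.1f} deg")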

    Combining local and global visual information in context-based neurorobotic navigation

    In robotic navigation, biologically inspired localization models have often exhibited interesting features and proven to be competitive with other solutions in terms of adaptability and performance. In general, place recognition systems rely on global visual descriptors, local ones, or both. In this paper, we propose a model of context-based place cells combining these two kinds of information. Global visual features are extracted to represent visual contexts. Based on the idea of global precedence, contexts drive a more refined recognition level which takes local visual descriptors as input. We evaluate this model on a robotic navigation dataset that we recorded outdoors. Thus, our contribution is twofold: 1) a bio-inspired model of context-based place recognition using neural networks; and 2) an evaluation assessing its suitability for applications on a real robot, comparing it to 4 other architectures -- 2 variants of the model and 2 stacking-based solutions -- in terms of performance and computational cost. The context-based model obtains the highest score on the three metrics we consider, or comes second to one of its own variants. Moreover, a key feature keeps its computational cost constant over time, whereas the cost of the other methods grows. These promising results suggest that this model is a good candidate for robust place recognition in large environments.
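
    A hedged sketch of the gating scheme (descriptors are random stand-ins and all names are assumptions; nothing below reproduces the paper's networks): a coarse global descriptor first selects the context, then only the place cells learned under that context compete on the local descriptors.

        import numpy as np

        # Two-level recognition: global context first, local matching second.
        rng = np.random.default_rng(1)
        contexts = rng.random((3, 16))                        # global prototypes
        places = {c: rng.random((10, 64)) for c in range(3)}  # local codes per context

        def recognize(global_desc, local_desc):
            c = int(np.argmin(np.linalg.norm(contexts - global_desc, axis=1)))
            scores = places[c] @ local_desc      # match only within that context
            return c, int(np.argmax(scores))

        print(recognize(rng.random(16), rng.random(64)))      # (context id, place id)

    Because only the winning context's place cells are evaluated, the cost of a query stays roughly constant as places are added under other contexts, which is consistent with the constant-cost property claimed above.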

    Embedded and real-time architecture for bio-inspired vision-based robot navigation

    A recent trend in several robotics tasks is to consider vision as the primary sense used to perceive the environment or to interact with humans. Vision processing therefore becomes a central and challenging matter in the design of real-time control architectures. In this paper, we follow a biological inspiration to propose a real-time, embedded control system that relies on visual attention to learn specific actions in each place recognized by our robot. Faced with a performance challenge, the attentional model reduces vision processing to a few regions of the visual field. However, the computational complexity of the visual chain remains an issue for a processing system embedded on an indoor robot. That is why we propose, as the first part of our system, a full-hardware architecture prototyped on reconfigurable devices to detect salient features at the camera frequency. The second part continuously learns these features in order to implement specific robotic tasks. This neural control layer is implemented as embedded software, making the robot fully autonomous from a computational point of view. Integrating such a system on the robot not only accelerates the frame rate of the visual processing and relieves the control architecture, but also compresses the data flow at the output of the camera, thus reducing communication and energy consumption. We present the complete embedded sensorimotor architecture and the experimental setup. The results demonstrate its real-time behavior in vision-based navigation tasks.
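
    An illustrative software-side sketch (the saliency measure, data structures, and names are assumptions; in the paper the salient-feature detection runs in hardware at camera rate): keep only the k most salient points of a frame, then associate the compressed code with an action for the recognized place.

        import numpy as np

        def salient_points(frame, k=16):
            # Simple gradient-magnitude saliency as a stand-in for the
            # hardware front end; keep the k strongest responses only.
            gy, gx = np.gradient(frame.astype(float))
            idx = np.argsort(np.hypot(gx, gy), axis=None)[-k:]
            return np.column_stack(np.unravel_index(idx, frame.shape))

        place_actions = {}                       # compressed place code -> action

        def learn(frame, action):
            code = salient_points(frame).tobytes()   # one-shot association
            place_actions[code] = action

        frame = (np.random.default_rng(2).random((64, 64)) * 255).astype(np.uint8)
        learn(frame, "turn_left")
        print(len(place_actions))                # 1 learned place-action pair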