5,170 research outputs found

    A Biologically Inspired Approach for Robot Depth Estimation

    To build autonomous service robots, reasoning, perception, and action must be properly integrated. In this paper, the depth cue is analysed as an early processing stage, given its importance for robotic tasks. Drawing on neuroscience findings, a hierarchical four-level dorsal architecture has been designed and implemented. From a stereo image pair, a set of complex Gabor filters is applied to estimate an egocentric quantitative disparity map. This map leads to a quantitative depth representation of the scene, which provides the raw input for a qualitative approach whose reasoning method infers the data required to make the right decision at any time. The experimental results highlight the robust performance of the biologically inspired approach presented in this paper. This paper describes research done at the UJI Robotic Intelligence Laboratory. Support for this laboratory was provided in part by Ministerio de Economía y Competitividad (DPI2015-69041-R) and by Universitat Jaume I.
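    As a concrete illustration of the disparity stage described above, here is a minimal Python sketch of phase-based disparity estimation with a single complex Gabor filter, assuming rectified grayscale images as numpy arrays; the filter parameters and the single scale/orientation are illustrative assumptions, not the paper's actual four-level architecture.

    ```python
    # Minimal sketch of phase-based disparity from one complex Gabor filter.
    # Assumes rectified grayscale images `left` and `right` (2-D numpy arrays);
    # parameter values are illustrative, not the paper's.
    import numpy as np
    from scipy.signal import convolve2d

    def complex_gabor_kernel(freq=0.25, sigma=4.0, size=21):
        """1-D horizontal complex Gabor: Gaussian envelope times complex carrier."""
        x = np.arange(size) - size // 2
        carrier = np.exp(1j * 2 * np.pi * freq * x)
        envelope = np.exp(-x**2 / (2 * sigma**2))
        return (carrier * envelope)[np.newaxis, :]   # row kernel, applied per scanline

    def phase_disparity(left, right, freq=0.25):
        """Disparity ~ phase difference / spatial frequency (one scale, one orientation)."""
        k = complex_gabor_kernel(freq=freq)
        rl = convolve2d(left, k, mode="same")
        rr = convolve2d(right, k, mode="same")
        # Phase difference between left and right filter responses, wrapped to [-pi, pi).
        dphi = np.angle(rl * np.conj(rr))
        return dphi / (2 * np.pi * freq)             # quantitative disparity map (pixels)
    ```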

    Towards an Autonomous Walking Robot for Planetary Surfaces

    In this paper, recent progress in the development of the DLR Crawler, a six-legged, actively compliant walking robot prototype, is presented. The robot implements a walking layer with a simple tripod gait and a more complex biologically inspired gait. Using a variety of proprioceptive sensors, different reflexes for reactively crossing obstacles within the walking height are realised. On top of the walking layer, a navigation layer provides the ability to autonomously navigate to a predefined goal point in unknown rough terrain using a stereo camera. A model of the environment is created, the terrain traversability is estimated, and an optimal path is planned. The difficulty of the path can be influenced by behavioural parameters. Motion commands are sent to the walking layer, and the gait pattern is switched according to the estimated terrain difficulty, as the sketch below illustrates. The interaction between the walking layer and the navigation layer was tested in different experimental setups.
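    The gait-switching interaction between the navigation and walking layers could look roughly like the following sketch; the gait names, speeds, and difficulty threshold are hypothetical placeholders, not the DLR Crawler's actual parameters.

    ```python
    # Hypothetical sketch of the navigation-to-walking-layer interaction:
    # the navigation layer estimates terrain difficulty and the walking layer
    # switches between a fast tripod gait and a slower bio-inspired gait.
    # All values here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class MotionCommand:
        heading: float      # desired heading (rad)
        speed: float        # desired forward speed (m/s)
        gait: str           # selected gait pattern

    def select_gait(terrain_difficulty: float, threshold: float = 0.5) -> str:
        """Easy terrain -> simple tripod gait; rough terrain -> cautious bio-inspired gait."""
        return "tripod" if terrain_difficulty < threshold else "bio_inspired"

    def navigation_step(heading: float, terrain_difficulty: float) -> MotionCommand:
        gait = select_gait(terrain_difficulty)
        speed = 0.1 if gait == "tripod" else 0.03   # slow down on difficult terrain
        return MotionCommand(heading=heading, speed=speed, gait=gait)
    ```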

    Camera oscillation pattern for VSLAM: translational versus rotational

    Visual SLAM algorithms exploit natural scene features to infer the camera motion and build a map of environment landmarks. A SLAM algorithm comprises two interrelated processes: localization and mapping. For accurate localization, the feature location estimates must converge quickly; conversely, building an accurate map requires accurate localization. Recently, a biologically inspired approach that exploits deliberate camera oscillation has been used to improve the convergence speed of depth estimates. In this paper, we explore the effect of the camera oscillation pattern on the accuracy of VSLAM. Two main oscillation patterns are used for distance estimation: translational and rotational. Experiments with both a static and a moving robot are conducted to explore the effect of these oscillation patterns on VSLAM performance.
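    The two oscillation patterns compared here can be illustrated with a small Python sketch; the amplitudes and frequency are assumptions for illustration only.

    ```python
    # Illustrative sketch of the two camera oscillation patterns: a small
    # sinusoidal translation of the camera center versus a small sinusoidal
    # rotation (pan) about it. Amplitudes and frequency are assumed values.
    import numpy as np

    def translational_oscillation(t, amplitude=0.02, freq=2.0):
        """Camera position offset along the baseline axis (meters)."""
        return np.array([amplitude * np.sin(2 * np.pi * freq * t), 0.0, 0.0])

    def rotational_oscillation(t, amplitude=np.deg2rad(2.0), freq=2.0):
        """Camera yaw offset about the vertical axis (radians)."""
        return amplitude * np.sin(2 * np.pi * freq * t)

    # Translation changes the viewpoint and so generates parallax, which carries
    # depth information; pure rotation adds no parallax, so the two patterns are
    # expected to affect landmark depth convergence differently.
    ```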

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
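    A minimal sketch of the event data model described above, assuming each event is a (t, x, y, polarity) tuple; accumulating polarities over a time window into a frame is one generic processing scheme, not a specific algorithm from the survey.

    ```python
    # Sketch of the event-camera output model: a stream of per-pixel events
    # (time, location, sign of brightness change), plus a simple accumulation
    # of event polarities into a frame over a time window.
    import numpy as np
    from collections import namedtuple

    Event = namedtuple("Event", ["t", "x", "y", "polarity"])   # polarity in {-1, +1}

    def accumulate_events(events, height, width, t_start, t_end):
        """Sum event polarities per pixel over [t_start, t_end) into an image."""
        frame = np.zeros((height, width), dtype=np.int32)
        for e in events:
            if t_start <= e.t < t_end:
                frame[e.y, e.x] += e.polarity
        return frame
    ```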

    Enabling Depth-driven Visual Attention on the iCub Humanoid Robot: Instructions for Use and New Perspectives

    The importance of depth perception in the interactions that humans have within their nearby space is a well-established fact. Consequently, it is also well known that the possibility of exploiting good stereo information would ease and, in many cases, enable a large variety of attentional and interactive behaviors on humanoid robotic platforms. However, the difficulty of computing real-time and robust binocular disparity maps from moving stereo cameras often prevents robots from relying on this kind of cue to visually guide their attention and actions in real-world scenarios. The contribution of this paper is two-fold: first, we show that the Efficient Large-scale Stereo Matching (ELAS) algorithm by A. Geiger et al. 2010 for computation of the disparity map is well suited to a humanoid robotic platform such as the iCub robot; second, we show that, provided with a fast and reliable stereo system, implementing relatively challenging visual behaviors in natural settings can require much less effort. As a case study, we consider the common situation where the robot is asked to focus its attention on one close object in the scene, showing how a simple but effective disparity-based segmentation solves the problem in this case. Indeed, this example paves the way to a variety of other similar applications.
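    The disparity-based segmentation mentioned above could look roughly like the following Python sketch, which thresholds a disparity map near its maximum to isolate the closest object; the margin value and invalid-pixel convention are assumptions, not the paper's settings.

    ```python
    # Sketch of disparity-based segmentation of the closest object: given a
    # disparity map (e.g. from a stereo matcher such as ELAS), the closest
    # object has the largest disparity, so thresholding near the peak isolates it.
    import numpy as np

    def segment_closest(disparity, margin=5.0, invalid=-1.0):
        """Binary mask of pixels whose disparity is within `margin` of the peak."""
        valid = disparity > invalid                  # assumed invalid-pixel marker
        peak = disparity[valid].max()
        return valid & (disparity >= peak - margin)

    # The resulting mask can then seed the robot's attention/fixation on that object.
    ```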

    A biologically inspired meta-control navigation system for the Psikharpax rat robot

    A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks has been validated before in simulations, but the capacity of the model to work on a real robot platform had not been tested. This paper presents our work on the implementation on the Psikharpax robot of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which strategy was optimal in each situation by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment, recognized as new contexts, and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition for the role of the rat prefrontal cortex in strategy shifting. Moreover, such a brain-inspired meta-controller may provide an advancement for learning architectures in robotics.
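    One way to read the meta-controller is as per-context value learning over the two strategies; the following Python sketch, with assumed learning-rate and exploration parameters, illustrates storing and restoring strategy preferences per detected context. It is a hypothetical illustration, not the authors' actual model.

    ```python
    # Hypothetical sketch of a strategy-selection meta-controller: one table of
    # strategy values per detected context, updated by reinforcement learning.
    # A new context starts fresh; a recognized context restores its learned
    # preferences. Parameter values are assumptions.
    import random
    from collections import defaultdict

    STRATEGIES = ["place_planning", "cue_guided_taxon"]

    class MetaController:
        def __init__(self, alpha=0.1, epsilon=0.1):
            self.alpha, self.epsilon = alpha, epsilon
            self.q = defaultdict(lambda: {s: 0.0 for s in STRATEGIES})

        def select(self, context):
            if random.random() < self.epsilon:       # occasional exploration
                return random.choice(STRATEGIES)
            return max(self.q[context], key=self.q[context].get)

        def update(self, context, strategy, reward):
            """Simple one-step value update toward the observed reward."""
            q = self.q[context][strategy]
            self.q[context][strategy] = q + self.alpha * (reward - q)
    ```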