
    Obstacle Avoidance by Means of an Operant Conditioning Model

    This paper describes the application of a model of operant conditioning to the problem of obstacle avoidance with a wheeled mobile robot. The main characteristic of the applied model is that the robot learns to avoid obstacles through a learning-by-doing cycle, without external supervision. A series of ultrasonic sensors act as Conditioned Stimuli (CS), while collisions act as an Unconditioned Stimulus (UCS). By experiencing a series of movements in a cluttered environment, the robot learns to avoid the sensor activation patterns that predict collisions, thereby learning to avoid obstacles. Learning generalizes to arbitrary cluttered environments. In this work we describe our initial implementation using a computer simulation.
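    The abstract gives no implementation details, so the following is only a minimal Python sketch of the general idea: ultrasonic readings are treated as the CS pattern, collisions as the UCS, a delta-rule association predicts collision per action, and the robot picks the action with the lowest predicted collision. The environment interface (`env.read_sensors()`, `env.step()`), sensor count, action set, and learning rate are all assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 8          # simulated ultrasonic sensors (CS channels), assumed
ACTIONS = [-1, 0, 1]   # turn left, go straight, turn right (illustrative)
ALPHA = 0.1            # learning rate for the delta-rule update

# One weight vector per action: predicted collision (UCS) given the current
# sensor activation pattern (CS) if that action is taken.
weights = {a: np.zeros(N_SENSORS) for a in ACTIONS}

def predict_collision(sensors, action):
    """Associative strength between the CS pattern and the UCS (collision)."""
    return float(np.clip(weights[action] @ sensors, 0.0, 1.0))

def choose_action(sensors, epsilon=0.1):
    """Pick the action with the lowest predicted collision (epsilon-greedy)."""
    if rng.random() < epsilon:
        return int(rng.choice(ACTIONS))
    return min(ACTIONS, key=lambda a: predict_collision(sensors, a))

def update(sensors, action, collided):
    """Delta-rule update: move the prediction toward the observed outcome."""
    error = float(collided) - predict_collision(sensors, action)
    weights[action] += ALPHA * error * sensors

# Learning-by-doing loop against a hypothetical simulator interface;
# `env.read_sensors()` and `env.step(action)` are placeholders.
# for step in range(10_000):
#     sensors = env.read_sensors()   # normalized ultrasonic readings in [0, 1]
#     action = choose_action(sensors)
#     collided = env.step(action)    # True if a collision occurred
#     update(sensors, action, collided)
```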

    A visual attention mechanism for autonomous robots controlled by sensorimotor contingencies

    Alexander Maye, Dari Trendafilov, Daniel Polani, and Andreas Engel, ‘A visual attention mechanism for autonomous robots controlled by sensorimotor contingencies’, paper presented at the International Conference on Intelligent Robots and Systems (IROS) 2015 Workshop on Sensorimotor Contingencies for Robotics, Hamburg, Germany, 2 October 2015.
    Robot control architectures that are based on learning the dependencies between a robot's actions and the resulting changes in sensory input face the fundamental problem that, for high-dimensional action and/or sensor spaces, the number of these sensorimotor dependencies can become huge. In this article we present a scenario in which a robot learns to avoid collisions with stationary objects from image-based motion flow and a collision detector. Following an information-theoretic approach, we demonstrate that the robot can infer the image regions that facilitate the prediction of imminent collisions. This allows restricting the computation to the domain of the input space that is relevant for the given task, which enables learning sensorimotor contingencies in robots with high-dimensional sensor spaces.
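    As a rough illustration of the information-theoretic idea, the sketch below estimates the mutual information between per-region optical-flow magnitude and a binary collision signal, then ranks image regions by that score so later computation can be restricted to the most informative ones. The region features, binning, and synthetic data are assumptions for illustration; the paper's actual estimator and sensorimotor-contingency formulation may differ.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate I(X;Y) in bits between a scalar feature x and binary labels y."""
    x_disc = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
    joint = np.zeros((bins + 2, 2))           # discretized joint histogram
    for xi, yi in zip(x_disc, y):
        joint[xi, int(yi)] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)     # marginal over the feature
    py = joint.sum(axis=0, keepdims=True)     # marginal over the labels
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

def rank_regions(flow_magnitudes, collisions):
    """flow_magnitudes: (T, n_regions) mean flow magnitude per image region;
    collisions: (T,) binary collision-detector output a fixed horizon later.
    Returns region indices ordered by how informative they are about collisions."""
    scores = [mutual_information(flow_magnitudes[:, r], collisions)
              for r in range(flow_magnitudes.shape[1])]
    return np.argsort(scores)[::-1], scores

# Synthetic example: region 0 is made predictive of collisions.
T, n_regions = 2000, 6
rng = np.random.default_rng(1)
collisions = rng.random(T) < 0.2
flow = rng.random((T, n_regions))
flow[:, 0] += 0.8 * collisions
order, scores = rank_regions(flow, collisions)
print("regions ranked by relevance to collision prediction:", order)
```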

    Learning to Fly by Crashing

    How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid obstacles? One approach is to use a small dataset collected by human experts; however, high-capacity learning algorithms tend to overfit when trained with little data. An alternative is to use simulation, but the gap between simulation and the real world remains large, especially for perception problems. The reason most research avoids using large-scale real data is the fear of crashes! In this paper, we propose to bite the bullet and collect a dataset of crashes itself! We build a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects. We crash our drone 11,500 times to create one of the biggest UAV crash datasets. This dataset captures the different ways in which a UAV can crash. We use all this negative flying data, in conjunction with positive data sampled from the same trajectories, to learn a simple yet powerful policy for UAV navigation. We show that this simple self-supervised model is quite effective in navigating the UAV even in extremely cluttered environments with dynamic obstacles, including humans. For the supplementary video see: https://youtu.be/u151hJaGKU
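    A hedged sketch of the self-supervised setup described above: frames recorded well before a crash are labeled safe (positive data), frames just before impact are labeled crash (negative data), and a small binary classifier is trained on them. The network architecture, labeling scheme, and the left/center/right steering rule in the closing comments are illustrative assumptions, not the authors' exact pipeline.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Tiny convolutional classifier: does the current camera crop lead to a crash?
class CrashPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)   # classes: 0 = safe, 1 = crash

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train(model, images, labels, epochs=5, lr=1e-3):
    """images: (N, 3, H, W) float tensor; labels: (N,) long tensor of 0/1.
    Positive (safe) frames come from early parts of each trajectory,
    negative (crash) frames from the segment just before impact."""
    loader = DataLoader(TensorDataset(images, labels), batch_size=64, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

# At flight time, one simple policy is to evaluate left/center/right crops of
# the current frame and steer toward the crop with the lowest crash probability:
# probs = torch.softmax(model(crops), dim=1)[:, 1]   # crash probability per crop
# action = int(torch.argmin(probs))                  # 0 = left, 1 = straight, 2 = right
```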