
    Perceiving guaranteed collision-free robot trajectories in unknown and unpredictable environments

    The dissertation introduces novel approaches to a fundamental problem: detecting a collision-free robot trajectory, based on sensing, in real-world environments that are largely unknown and unpredictable, i.e., where obstacle geometries and their motions are unknown. Such a collision-free trajectory must guarantee safe robot motion by accounting for both robot motion uncertainty and obstacle motion uncertainty. Further, since robot motion must be planned and executed simultaneously to navigate such environments, the collision-free trajectory must be detected in real time. Two novel concepts, (a) dynamic envelopes and (b) atomic obstacles, are introduced to perceive, in real time from the sensor data generated at each sensing moment, whether a robot at a configuration q at a future time t, i.e., at a point (q, t) in the robot's configuration-time space (CT space), will be collision-free. A dynamic envelope detects a collision-free region in the CT space in spite of unknown obstacle motions. Atomic obstacles represent the unknown obstacles perceived in the environment at each sensing moment. Robot motion uncertainty is modeled by assuming that the robot actually moves within a certain tunnel around a desired trajectory in its CT space. An approach based on dynamic envelopes is presented for detecting whether a continuous tunnel of trajectories is guaranteed collision-free in an unpredictable environment where obstacle motions are unknown. An efficient collision checker is also developed that performs fast, real-time collision detection between a dynamic envelope and a large number of atomic obstacles in an unknown environment. The effectiveness of these methods is tested on different robots in both simulations and real-world experiments.
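    The guarantee described above can be made concrete with a minimal, hypothetical sketch of the dynamic-envelope test; it is not the dissertation's actual formulation. It assumes each atomic obstacle is a small sensed sphere and that obstacle speed, while unknown in direction, is bounded by an assumed constant V_MAX, so a CT-space point (q, t) is guaranteed collision-free if it lies outside every obstacle's worst-case reachable ball by at least the robot's tunnel radius. All names and constants are illustrative assumptions.

```python
import numpy as np

# Assumed bounds; the dissertation's actual model is not reproduced here.
V_MAX = 2.0          # assumed upper bound on obstacle speed [m/s]
TUNNEL_RADIUS = 0.1  # assumed radius of the robot's trajectory tunnel [m]

def is_point_guaranteed_free(q_pos, t, atoms, t_sense):
    """Check a CT-space point (q_pos, t) against atomic obstacles.

    q_pos : workspace position of the robot at configuration q
    t     : future time of interest, t >= t_sense
    atoms : iterable of (center, radius) spheres sensed at time t_sense
    """
    horizon = t - t_sense
    for center, radius in atoms:
        # Worst-case ball this atomic obstacle could reach by time t.
        reach = radius + V_MAX * horizon
        if np.linalg.norm(q_pos - center) <= reach + TUNNEL_RADIUS:
            return False  # the obstacle might reach this CT point
    return True  # the point lies inside the collision-free dynamic envelope

# Example: a point 1 m from a sensed obstacle, checked 0.2 s ahead.
atoms = [(np.array([1.0, 0.0]), 0.05)]
print(is_point_guaranteed_free(np.array([0.0, 0.0]), 0.2, atoms, 0.0))
```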

    Bio-inspired collision detector with enhanced selectivity for ground robotic vision system

    There are many ways of building collision-detecting systems. In this paper, we propose a novel collision-selective visual neural network inspired by the LGMD2 neurons of juvenile locusts. This collision-sensitive neuron matures early, in first-instar or even newly hatched locusts, and is selective to looming dark objects against a bright background in depth, representing swooping predators, a situation similar to that faced by ground robots and vehicles. However, little has been done on modeling LGMD2, let alone on its potential applications in robotics and other vision-based areas. Compared to other collision detectors, our major contributions are, first, enhancing the collision selectivity in a bio-inspired way by constructing a computationally efficient visual sensor that realizes the revealed specific characteristics of LGMD2; and second, applying the neural network to path navigation of an autonomous miniature ground robot in an arena. We also examined its neural properties through systematic experiments against image streams from the micro-robot's visual sensor.
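    As a concrete illustration of this model class, the sketch below implements a generic LGMD-style looming detector with an OFF channel (luminance decreases, i.e., dark objects expanding on a bright background, the stimulus LGMD2 is selective to), lateral inhibition from the previous step's excitation, and global summation. The structure follows the broader LGMD literature; the constants and exact wiring are assumptions, not the paper's model.

```python
import numpy as np
from scipy.ndimage import uniform_filter

INHIB_SIZE = 5    # assumed neighbourhood size for lateral inhibition
W_INHIB = 0.6     # assumed inhibition weight
THRESHOLD = 0.03  # assumed spiking threshold on the membrane potential

def lgmd2_step(prev_frame, frame, prev_excitation):
    """One time step: grayscale frames in [0, 1] -> (spike, excitation)."""
    # OFF channel only: luminance decreases, which dark looming objects
    # against a bright background produce.
    off = np.clip(prev_frame - frame, 0.0, None)
    # Lateral inhibition: last step's excitation, spread to neighbours.
    inhibition = uniform_filter(prev_excitation, size=INHIB_SIZE)
    excitation = np.clip(off - W_INHIB * inhibition, 0.0, None)
    # Membrane potential: normalized global sum of surviving excitation.
    potential = excitation.sum() / excitation.size
    return potential > THRESHOLD, excitation

# Example: a dark disc growing between two synthetic frames triggers a spike.
h, w = 64, 64
yy, xx = np.mgrid[:h, :w]
def disc(r):  # bright background (1.0) with a dark disc (0.0) of radius r
    return np.where((yy - 32) ** 2 + (xx - 32) ** 2 < r ** 2, 0.0, 1.0)
spike, _ = lgmd2_step(disc(6), disc(10), np.zeros((h, w)))
print(spike)  # True: the expanding dark edge drives the OFF channel
```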

    Learning to reach and reaching to learn: a unified approach to path planning and reactive control through reinforcement learning

    Get PDF
    The next generation of intelligent robots will need to be able to plan reaches: not just ballistic point-to-point reaches, but reaches around things such as the edge of a table, a nearby human, or any other known object in the robot's workspace. Planning reaches may seem easy to us humans, because we do it so intuitively, but it has proven to be a challenging problem, one that continues to limit the versatility of what robots can do today. In this document, I propose a novel intrinsically motivated RL system that draws on both Path/Motion Planning and Reactive Control. Through Reinforcement Learning, it tightly integrates these two previously disparate approaches to robotics. The RL system is evaluated on a task as yet unsolved by roboticists in practice: putting the palm of the iCub humanoid robot on arbitrary target objects in its workspace, starting from arbitrary initial configurations. Such motions can be generated by planning, or by searching the configuration space, but this typically results in some kind of trajectory that must then be tracked by a separate controller, and such an approach offers a brittle runtime solution because it is inflexible. Purely reactive systems are robust to many problems that render a planned trajectory infeasible, but, lacking the capacity to search, they tend to get stuck behind constraints and therefore do not replace motion planners. The planner/controller proposed here is novel in that it deliberately plans reaches without the need to track trajectories. Instead, reaches are composed of sequences of reactive motion primitives, implemented by my Modular Behavioral Environment (MoBeE), which provides (fictitious) force control with reactive collision avoidance by way of a real-time kinematic/geometric model of the robot and its workspace. Thus, to the best of my knowledge, mine is the first reach-planning approach to simultaneously offer the best of both the Path/Motion Planning and Reactive Control approaches. By controlling the real, physical robot directly, and feeling the influence of the constraints imposed by MoBeE, the proposed system learns a stochastic model of the iCub's configuration space. The model is then exploited as a multiple-query path planner to find sensible pre-reach poses from which to initiate reaching actions. Experiments show that the system can autonomously find practical reaches to target objects in the workspace and offers excellent robustness to changes in the workspace configuration as well as to noise in the robot's sensory-motor apparatus.
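    One plausible reading of that learned stochastic model, sketched below purely as an illustration (the class, method names, and the Laplace-smoothed estimator are assumptions, not MoBeE's or the thesis's actual interface), is a roadmap whose edges carry estimated transition success probabilities; a multiple-query planner can then pick the pose sequence that maximizes the product of success probabilities by running Dijkstra on edge costs -log(p).

```python
import heapq
import math

class StochasticRoadmap:
    def __init__(self):
        self.trials = {}  # (u, v) -> [successes, attempts]

    def record(self, u, v, success):
        s = self.trials.setdefault((u, v), [0, 0])
        s[0] += int(success)
        s[1] += 1

    def success_prob(self, u, v):
        s, n = self.trials.get((u, v), (0, 0))
        return (s + 1) / (n + 2)  # Laplace-smoothed success estimate

    def most_reliable_path(self, start, goal):
        """Dijkstra on costs -log(p): maximizes the product of edge p's."""
        dist, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                break
            if d > dist.get(u, math.inf):
                continue  # stale heap entry
            for a, b in self.trials:
                if a != u:
                    continue
                nd = d - math.log(self.success_prob(a, b))
                if nd < dist.get(b, math.inf):
                    dist[b], prev[b] = nd, u
                    heapq.heappush(heap, (nd, b))
        path, node = [], goal
        while node != start:
            path.append(node)
            node = prev[node]
        return [start] + path[::-1]

# Example: two routes to a pre-reach pose; the more reliable one is chosen.
rm = StochasticRoadmap()
for _ in range(9):
    rm.record("home", "via_a", True)
    rm.record("via_a", "prereach", True)
rm.record("home", "via_b", False)
rm.record("via_b", "prereach", True)
print(rm.most_reliable_path("home", "prereach"))  # via_a wins
```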

    Learning Task Constraints from Demonstration for Hybrid Force/Position Control

    We present a novel method for learning hybrid force/position control from demonstration. We learn a dynamic constraint frame aligned to the direction of desired force using Cartesian Dynamic Movement Primitives. In contrast to approaches that utilize a fixed constraint frame, our approach easily accommodates tasks whose constraints change rapidly over time. We activate only one degree of freedom for force control at any given time, ensuring that motion is always possible orthogonal to the direction of desired force. Since we utilize demonstrated forces to learn the constraint frame, we are able to compensate for forces not detected by methods that learn only from the demonstrated kinematic motion, such as frictional forces between the end-effector and the contact surface. We additionally propose novel extensions to the Dynamic Movement Primitive (DMP) framework that encourage robust transition from free-space motion to in-contact motion in spite of environment uncertainty. We incorporate force feedback and a dynamically shifting goal to reduce the forces applied to the environment and to retain stable contact while enabling force control. Our methods exhibit low impact forces on contact and low steady-state tracking error.
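    A minimal 1-D sketch of the dynamically shifting goal follows, under an assumed spring contact model and assumed gains; the paper's Cartesian DMP formulation is not reproduced here. The transformation system is the standard discrete DMP; whenever the measured contact force exceeds the desired force, the attractor goal g retreats, which limits the applied force while keeping the system in contact.

```python
ALPHA_Z, BETA_Z = 25.0, 6.25  # standard critically damped DMP gains
K_GOAL = 0.002                # assumed goal-shift gain [m/(N*s)]
F_DES = 5.0                   # assumed desired contact force [N]
DT, TAU = 0.002, 1.0          # integration step [s], temporal scaling

def press(y0, g0, wall, k_env=2000.0, steps=2000):
    """Drive position y toward goal g; a stiff wall at `wall` pushes back."""
    y, z, g, f_meas = y0, 0.0, g0, 0.0
    for _ in range(steps):
        f_meas = max(0.0, k_env * (y - wall))  # assumed spring contact model
        g += K_GOAL * (F_DES - f_meas) * DT    # shift goal to regulate force
        # Standard DMP transformation system (forcing term omitted).
        z += (ALPHA_Z * (BETA_Z * (g - y) - z) / TAU) * DT
        y += (z / TAU) * DT
    return y, g, f_meas

# Example: start 5 cm before a wall at 0 with the goal 2 cm past it; the
# goal retreats until the contact force settles near F_DES.
y, g, f = press(y0=-0.05, g0=0.02, wall=0.0)
print(f"y = {y:.4f} m, g = {g:.4f} m, force = {f:.2f} N")
```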

    Deep Reinforcement Learning for Autonomous Collision Avoidance

    Collision avoidance is a complicated task in autonomous vehicle control. Most traditional methods in this area are model-based solutions, requiring an understanding of vehicle dynamics and an accurate model of vehicle behavior in order to predict the trajectories of the controlled car and the surrounding vehicles. Such solutions struggle to anticipate and explicitly model the driving behavior of surrounding cars. This work investigates a model-free Deep Reinforcement Learning method for collision avoidance, in which the agent processes the distances to the closest entities and outputs the steering angle and acceleration required to avoid collisions. A traffic simulator is designed to generate a wide range of roads and vehicles, which interact with the learning agent, not always compliantly with traffic rules, allowing it to collect diverse learning experience. After being trained under such conditions, the agent shows intelligent driving behavior: it avoids areas with high traffic density, adapts its speed to avoid front and rear crashes, and steers when necessary to avoid lateral crashes.
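    The observation-to-action mapping described above might look like the following sketch; the layer sizes, action limits, and the use of a deterministic actor are assumptions, not the thesis architecture.

```python
import torch
import torch.nn as nn

N_DISTANCES = 16  # assumed number of range readings around the vehicle
MAX_STEER = 0.5   # assumed steering limit [rad]
MAX_ACCEL = 3.0   # assumed acceleration limit [m/s^2]

class CollisionAvoidancePolicy(nn.Module):
    """Maps distances to the closest entities to (steering, acceleration)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_DISTANCES, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2), nn.Tanh(),  # squash both outputs to [-1, 1]
        )

    def forward(self, distances):
        steer, accel = self.net(distances).unbind(-1)
        return steer * MAX_STEER, accel * MAX_ACCEL

# Example: one observation of normalized distances -> control command.
policy = CollisionAvoidancePolicy()
obs = torch.rand(N_DISTANCES)
steer, accel = policy(obs)
print(float(steer), float(accel))
```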