
    Virtual to Real Reinforcement Learning for Autonomous Driving

    Reinforcement learning is considered a promising direction for driving policy learning. However, training an autonomous vehicle with reinforcement learning in a real environment involves unaffordable trial and error. It is more desirable to first train in a virtual environment and then transfer to the real environment. In this paper, we propose a novel realistic translation network that makes a model trained in a virtual environment workable in the real world. The proposed network converts non-realistic virtual image input into a realistic one with a similar scene structure. Given realistic frames as input, a driving policy trained by reinforcement learning can nicely adapt to real-world driving. Experiments show that our proposed virtual-to-real (VR) reinforcement learning (RL) approach works well. To our knowledge, this is the first successful case of a driving policy trained by reinforcement learning that can adapt to real-world driving data.
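
    The pipeline this abstract describes is a two-stage one: translate the synthetic frame, then act on it. Below is a minimal sketch of that flow in PyTorch; both networks, their shapes, and the single steering output are illustrative assumptions, not the paper's actual architecture.

        import torch
        import torch.nn as nn

        # Stand-in for the realistic translation network: maps a synthetic
        # simulator frame to a realistic-looking frame (architecture assumed).
        translator = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
            nn.Tanh(),
        )

        # Stand-in for the RL driving policy: maps a frame to a steering value.
        policy = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=4),
            nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(1),   # single steering output (assumed)
            nn.Tanh(),
        )

        virtual_frame = torch.rand(1, 3, 64, 64)     # synthetic simulator image
        realistic_frame = translator(virtual_frame)  # translate before acting
        steering = policy(realistic_frame)           # policy sees "real" frames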

    Deep learning for video game playing

    In this article, we review recent deep learning advances in the context of how they have been applied to play different types of video games, such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces, and coping with sparse rewards.
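
    As a concrete anchor for the exploration problem behind "large decision spaces and sparse rewards", the sketch below shows epsilon-greedy action selection, the standard exploration mechanism in deep RL game agents with discrete actions. It is an illustrative example, not code from the article.

        import random

        def epsilon_greedy(q_values, epsilon=0.1):
            """Pick a random action with probability epsilon, else the greedy one."""
            if random.random() < epsilon:
                return random.randrange(len(q_values))
            return max(range(len(q_values)), key=lambda a: q_values[a])

        action = epsilon_greedy([0.1, 0.7, 0.2])  # toy Q-values for three actions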

    Gene regulated car driving: using a gene regulatory network to drive a virtual car

    This paper presents a virtual racing car controller based on an artificial gene regulatory network. Although usually used to control virtual cells in developmental models, recent work has shown that gene regulatory networks are also capable of controlling various kinds of agents, such as foraging agents, cart-pole systems, and swarm robots. This paper details how a gene regulatory network is evolved to drive on any track through a three-stage incremental evolution. To do so, the inputs and outputs of the network are directly mapped to the car's sensors and actuators. To make this controller a competitive racer, we distort its inputs online so that it drives faster and avoids opponents. Another interesting property emerges from this approach: the regulatory network is naturally resistant to noise. To evaluate this approach, we participated in the 2013 simulated racing car competition against eight other evolutionary and scripted approaches. In its first participation, this approach finished in third place in the competition.
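
    The control recipe described above, clamping sensor readings onto input proteins, iterating the regulatory dynamics, and reading actuator commands off output proteins, can be sketched as follows. The dynamics are a simplified textbook form of a gene regulatory network, not the paper's exact model, and all sizes and constants are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 8                              # total proteins (assumed)
        enhance = rng.random((n, n))       # pairwise enhancement affinities
        inhibit = rng.random((n, n))       # pairwise inhibition affinities
        conc = np.full(n, 1.0 / n)         # protein concentrations

        def grn_step(conc, sensors, dt=0.1):
            """One integration step: sensors clamp the first proteins,
            actuator commands are read from the last ones."""
            conc = conc.copy()
            conc[:len(sensors)] = sensors            # map car sensors to inputs
            growth = (enhance - inhibit) @ conc / n  # net regulatory signal
            conc = np.clip(conc + dt * conc * growth, 0.0, 1.0)
            return conc / conc.sum()                 # keep concentrations normalized

        sensors = np.array([0.3, 0.9, 0.5])          # e.g. track-edge distances
        conc = grn_step(conc, sensors)
        steer, accel = conc[-2], conc[-1]            # actuator outputs (assumed)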

    Deep Drone Racing: From Simulation to Reality with Domain Randomization

    Dynamically changing environments, unreliable state estimation, and operation under severe resource constraints are fundamental challenges that limit the deployment of small autonomous drones. We address these challenges in the context of autonomous, vision-based drone racing in dynamic environments. A racing drone must traverse a track with possibly moving gates at high speed. We enable this functionality by combining the performance of a state-of-the-art planning and control system with the perceptual awareness of a convolutional neural network (CNN). The resulting modular system is both platform- and domain-independent: it is trained in simulation and deployed on a physical quadrotor without any fine-tuning. The abundance of simulated data, generated via domain randomization, makes our system robust to changes in illumination and gate appearance. To the best of our knowledge, our approach is the first to demonstrate zero-shot sim-to-real transfer on the task of agile drone flight. We extensively test the precision and robustness of our system, both in simulation and on a physical platform, and show significant improvements over the state of the art. (Accepted as a regular paper in the IEEE Transactions on Robotics.)
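
    Domain randomization, as used above, amounts to re-sampling the simulator's visual parameters every training episode so the CNN cannot overfit to one appearance. The sketch below illustrates the idea; the parameter names and ranges are assumptions, not the paper's actual randomization set.

        import random

        def sample_visual_domain():
            """Draw one random visual configuration for a training episode."""
            return {
                "light_intensity": random.uniform(0.3, 1.5),  # scene illumination
                "light_azimuth_deg": random.uniform(0, 360),  # light direction
                "gate_hue_shift": random.uniform(-0.2, 0.2),  # gate appearance
                "background_id": random.randrange(50),        # backdrop texture
            }

        for episode in range(3):          # re-randomize every episode
            domain = sample_visual_domain()
            print(episode, domain)        # the simulator would apply these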

    Simulation-based reinforcement learning for real-world autonomous driving

    We use reinforcement learning in simulation to obtain a driving system that controls a full-size real-world vehicle. The driving policy takes RGB images from a single camera and their semantic segmentation as input. We use mostly synthetic data, with labelled real-world data appearing only in the training of the segmentation network. Using reinforcement learning in simulation and synthetic data is motivated by lowering costs and engineering effort. In real-world experiments, we confirm that we achieved successful sim-to-real policy transfer. Based on this extensive evaluation, we analyze how design decisions about perception, control, and training impact real-world performance.
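
    The observation described above, an RGB frame stacked with its semantic segmentation along the channel axis, can be sketched as follows in PyTorch. The class count, image size, and two-dimensional action are assumptions for illustration.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        num_classes = 6                          # segmentation classes (assumed)
        rgb = torch.rand(1, 3, 96, 96)           # single-camera frame
        seg_ids = torch.randint(0, num_classes, (1, 96, 96))
        seg = F.one_hot(seg_ids, num_classes).permute(0, 3, 1, 2).float()

        obs = torch.cat([rgb, seg], dim=1)       # (1, 3 + num_classes, 96, 96)

        policy = nn.Sequential(                  # stand-in driving policy
            nn.Conv2d(3 + num_classes, 16, 5, stride=4),
            nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(2),                    # steering + throttle (assumed)
            nn.Tanh(),
        )
        action = policy(obs)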

    Survey of Agile navigation algorithms for robot ground vehicles

    In this work, several state-of-the-art methods for agile navigation of robot ground vehicles are compared. First, a survey of the literature is performed to identify the state-of-the-art methods most appropriate for comparison with an agile navigation method (''CarPlanner'') developed in the Autonomous Robotics and Perception Group (ARPG). Several methods are reviewed and implemented in a dynamic vehicle simulation environment. These methods are evaluated on their efficacy in navigating a robot ground vehicle around a race track featuring jumps, bumps, and berms. The simulation environment features a four-wheeled, Ackermann-style ground vehicle with suspension and austere terrain with nonlinear friction dynamics. Criteria for evaluating the methods include each method's ability to utilize the vehicle dynamics to quickly and safely traverse the track. Finally, the most appropriate and best-performing method is implemented on ARPG's 1/8th-scale NinjaCar vehicle platform and compared in physical experimentation to ARPG's CarPlanner algorithm.
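
    For context on the vehicle model involved, the sketch below shows the kinematic bicycle model commonly used to approximate an Ackermann-steered vehicle in planners of this kind. It is an illustrative simplification, not code from the thesis: it ignores the suspension and nonlinear friction dynamics the simulation models, and the 0.4 m wheelbase is an assumed value roughly matching a 1/8-scale car.

        import math

        def bicycle_step(x, y, heading, speed, steer, wheelbase=0.4, dt=0.05):
            """Advance the vehicle pose by one time step (steer in radians)."""
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
            heading += speed / wheelbase * math.tan(steer) * dt
            return x, y, heading

        pose = (0.0, 0.0, 0.0)
        for _ in range(20):                      # drive a gentle left arc
            pose = bicycle_step(*pose, speed=2.0, steer=0.15)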