3,127 research outputs found

    Reinforcement learning for safety-critical control of an automated vehicle

    We present our approach for the development, validation and deployment of a data-driven decision-making function for the automated control of a vehicle. The decision-making function, based on an artificial neural network, is trained to steer the mobile robot SPIDER towards a predefined, static path to a target point while avoiding collisions with obstacles along the path. The training is conducted by means of proximal policy optimisation (PPO), a state-of-the-art algorithm from the field of reinforcement learning. The resulting controller is validated using KPIs quantifying its capability to follow a given path and its reactivity to perceived obstacles along the path. The corresponding tests are carried out in the training environment. Additionally, the tests are to be performed in the robotics simulation Gazebo and in real-world scenarios. For the latter, the controller is deployed on an FPGA-based development platform, the FRACTAL platform, and integrated into the SPIDER software stack.
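    The abstract does not give the reward design or the environment interface, but the overall recipe (a PPO agent learning to track a static path while avoiding obstacles) can be sketched as below. This is a minimal illustration assuming a hypothetical Gymnasium-style environment with made-up geometry, reward weights and observation layout; it is not the SPIDER/FRACTAL setup.

```python
# Hypothetical sketch: PPO-trained path-following controller with obstacle avoidance.
# Environment geometry, reward weights and observation layout are assumptions.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class PathFollowEnv(gym.Env):
    """Toy 2-D kinematic robot that must reach a goal along a straight path."""

    def __init__(self):
        super().__init__()
        # observation: [cross-track error, heading, distance to goal, distance to obstacle]
        self.observation_space = spaces.Box(low=-20.0, high=20.0, shape=(4,), dtype=np.float32)
        # action: [steering rate, forward velocity], both normalized to [-1, 1]
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.pos = np.zeros(2)
        self.heading = 0.0
        self.goal = np.array([8.0, 0.0])
        self.obstacle = np.array([4.0, 0.3])           # single static obstacle near the path
        return self._obs(), {}

    def step(self, action):
        self.t += 1
        steer, speed = float(action[0]), 0.5 * (float(action[1]) + 1.0)
        self.heading += 0.2 * steer
        self.pos += 0.1 * speed * np.array([np.cos(self.heading), np.sin(self.heading)])

        cross_track = abs(self.pos[1])                 # the reference path is the x-axis
        d_goal = np.linalg.norm(self.goal - self.pos)
        d_obs = np.linalg.norm(self.obstacle - self.pos)

        # assumed reward: stay on the path, make progress towards the goal, avoid collisions
        reward = -0.5 * cross_track - 0.1 * d_goal
        collided = d_obs < 0.3
        if collided:
            reward -= 10.0
        terminated = collided or d_goal < 0.2
        truncated = self.t >= 200
        return self._obs(), reward, terminated, truncated, {}

    def _obs(self):
        d_goal = np.linalg.norm(self.goal - self.pos)
        d_obs = np.linalg.norm(self.obstacle - self.pos)
        return np.array([self.pos[1], self.heading, d_goal, d_obs], dtype=np.float32)


if __name__ == "__main__":
    model = PPO("MlpPolicy", PathFollowEnv(), verbose=0)
    model.learn(total_timesteps=10_000)                # short run just to exercise the loop
```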

    Near range path navigation using LGMD visual neural networks

    In this paper, we propose a method for near-range path navigation for a mobile robot using a pair of biologically inspired visual neural networks, lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering a part of the wide field of view and extracts relevant visual cues as its output. The outputs from the two LGMDs are compared and translated into executable motor commands to control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; therefore, the robot can navigate a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios.
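    The steering rule described above (compare the two LGMD outputs and turn away from the side with the stronger signal) can be illustrated with a small sketch. The frame-difference stand-in for the LGMD output, the base speed and the gain are assumptions for illustration only, not the authors' model.

```python
# Illustrative sketch of the binocular LGMD steering rule: each LGMD reports an
# excitation level (0..1) for its half of the field of view, and a stronger signal
# on one side steers the robot away from that side.
import numpy as np


def lgmd_excitation(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """Crude stand-in for an LGMD output: normalized mean absolute frame difference."""
    return float(np.mean(np.abs(frame.astype(float) - prev_frame.astype(float))) / 255.0)


def lgmd_steering(left_exc: float, right_exc: float,
                  base_speed: float = 0.3, gain: float = 0.5) -> tuple[float, float]:
    """Map the two LGMD outputs to (left_wheel, right_wheel) velocities."""
    diff = right_exc - left_exc                 # > 0 when the right half looks more threatening
    left_wheel = base_speed - gain * diff       # threat on the right: slower left wheel,
    right_wheel = base_speed + gain * diff      # faster right wheel, so the robot turns left
    return left_wheel, right_wheel


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev, cur = rng.integers(0, 256, (64, 64)), rng.integers(0, 256, (64, 64))
    half = cur.shape[1] // 2
    left = lgmd_excitation(prev[:, :half], cur[:, :half])
    right = lgmd_excitation(prev[:, half:], cur[:, half:])
    print(lgmd_steering(left, right))
```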

    A general learning co-evolution method to generalize autonomous robot navigation behavior

    Congress on Evolutionary Computation, La Jolla, CA, 16-19 July 2000. A new coevolutionary method, called Uniform Coevolution, is introduced to learn weights for a neural network controller in autonomous robots. An evolutionary strategy is used to learn high-performance reactive behavior for navigation and collision avoidance. The coevolutionary method allows the environment to evolve as well, so that a general behavior able to solve the problem in different environments is learned. Using a traditional evolutionary strategy without coevolution, the learning process obtains a specialized behavior. All the behaviors obtained, with or without coevolution, have been tested in a set of environments, and the capability for generalization has been shown for each learned behavior. A simulator based on the mini-robot Khepera has been used to learn each behavior. The results show that Uniform Coevolution obtains better-generalized solutions to example-based problems.
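    The core idea (evolving the weights of a neural network controller with an evolution strategy and evaluating fitness over a set of environments) can be sketched as follows. The network size, mutation scale and placeholder fitness are assumptions; the Uniform Coevolution step, in which the environment set itself evolves, is only indicated by a comment.

```python
# Minimal evolution-strategy sketch: weights of a small feed-forward controller are
# evolved by mutation and selection, with fitness averaged over several environments.
import numpy as np

rng = np.random.default_rng(0)


def controller(weights: np.ndarray, sensors: np.ndarray) -> np.ndarray:
    """Tiny reactive controller: 8 range sensors -> 2 wheel speeds via one tanh layer."""
    return np.tanh(weights.reshape(2, 8) @ sensors)


def fitness(weights: np.ndarray, environments: list) -> float:
    """Placeholder fitness: prefer fast, straight motion with high obstacle clearance,
    averaged over the current set of environments (here: fixed sensor snapshots)."""
    total = 0.0
    for sensors in environments:
        left, right = controller(weights, sensors)
        speed = (left + right) / 2.0
        straightness = 1.0 - abs(left - right) / 2.0
        clearance = 1.0 - float(np.max(sensors))     # sensors in [0, 1], 1 = obstacle close
        total += speed * straightness * clearance
    return total / len(environments)


def evolve(environments, generations=50, pop_size=20, sigma=0.1):
    """(1 + pop_size) evolution strategy over a flat 16-dimensional weight vector."""
    best = rng.normal(0.0, 0.5, size=16)
    for _ in range(generations):
        offspring = [best + rng.normal(0.0, sigma, size=16) for _ in range(pop_size)]
        offspring.append(best)                       # elitism: keep the current best
        best = max(offspring, key=lambda w: fitness(w, environments))
        # in Uniform Coevolution the environment set would also be updated here
    return best


if __name__ == "__main__":
    envs = [rng.uniform(0.0, 1.0, size=8) for _ in range(5)]
    print(fitness(evolve(envs), envs))
```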

    Past, present and future of path-planning algorithms for mobile robot navigation in dynamic environments

    Mobile robots have been making a significant contribution to the advancement of many sectors, including the automation of mining, space, surveillance, military, health, agriculture and many more. Safe and efficient navigation is a fundamental requirement of mobile robots; thus, the demand for advanced algorithms has rapidly increased. Mobile robot navigation encompasses four requirements: perception, localization, path-planning and motion control. Among those, path-planning is a vital part of fast, secure operation. During the last couple of decades, many path-planning algorithms have been developed. Although most mobile robot applications involve dynamic environments, the number of algorithms capable of navigating robots in dynamic environments is limited. This paper presents a qualitative comparative study of up-to-date mobile robot path-planning methods capable of navigating robots in dynamic environments. The paper discusses both classical and heuristic methods, including the artificial potential field, genetic algorithms, fuzzy logic, neural networks, artificial bee colony, particle swarm optimization, bacterial foraging optimization, ant colony and the Agoraphilic algorithm. The general advantages and disadvantages of each method are discussed. Furthermore, the commonly used state-of-the-art methods are critically analyzed against six performance criteria: the ability to navigate in dynamically cluttered areas, moving-goal hunting ability, object tracking ability, object path prediction ability, incorporation of obstacle velocity in the decision, and validation by simulation and experimentation. This investigation benefits researchers in choosing suitable path-planning methods for different applications, as well as in identifying gaps in this field. © 2020 IEEE
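    As an illustration of one of the classical methods listed above, the artificial potential field combines an attractive force towards the goal with repulsive forces from nearby obstacles and follows the resulting field. The gains, influence radius and step size below are arbitrary values chosen for the sketch.

```python
# Compact artificial potential field sketch: the goal attracts, each obstacle repels
# within an influence radius, and the robot takes small steps along the combined force.
import numpy as np


def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.5, step=0.05):
    """Return the next position after one step along the potential-field force."""
    force = k_att * (goal - pos)                       # attractive term, grows with distance
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-6 < d < rho0:                            # repulsion only inside the influence radius
            force += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (pos - obs) / d
    return pos + step * force / (np.linalg.norm(force) + 1e-9)


if __name__ == "__main__":
    pos, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
    obstacles = [np.array([2.5, 2.4])]
    for _ in range(200):
        pos = apf_step(pos, goal, obstacles)
    print(pos)      # should end near the goal unless trapped in a local minimum
```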

    A Consolidated Review of Path Planning and Optimization Techniques: Technical Perspectives and Future Directions

    In this paper, a review of path planning for the three most important vehicle domains (ground, aerial, and underwater) is presented, summarizing trajectory planning, its optimization, and the related issues. This kind of extensive survey is not often seen in the literature, so an effort has been made to fill this gap for readers interested in path planning. Moreover, optimization techniques suitable for ground, aerial, and underwater vehicles are also part of this review. The paper covers numerical and bio-inspired techniques, and their hybridization with each other, for each of the domains mentioned. The paper provides a consolidated platform where much of the available research on ground autonomous vehicles and their trajectory optimization, with extensions to aerial and underwater vehicles, is documented.