
    Robot Path Planning with IGA-MMAS and MMAS-IGA

    Path planning is one of the essential tasks in mobile-robot research: it determines a path from a source to a destination, and its methods have evolved over time from classical approaches to evolutionary ones. Ant Colony Optimization (ACO) and the Genetic Algorithm (GA) are well-known evolutionary approaches to effective path planning. This work focuses on the Max-Min Ant System (MMAS), derived from the Ant System (AS) variant of ACO, and on an Improved Genetic Algorithm (IGA) that is more efficient than the classical GA. To study robot path planning, the two methods are combined into two hybrids, MMAS-IGA and IGA-MMAS, which derive near-optimal solutions, as demonstrated in the experimental study of this work. The environment is modeled with the grid method and simulated on grid maps, and the genetic operators of IGA are combined with MMAS to enhance the overall results of both hybrids. The effectiveness of the two methods is evaluated in a MATLAB simulation of a static environment, and the results of MMAS-IGA and IGA-MMAS are compared with the GA-ACO path-planning method.
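    The abstract does not give the update equations, but the MMAS component it builds on is well defined in the literature: trails evaporate, only the best ant deposits pheromone, and trail values are clamped to a [tau_min, tau_max] band. A minimal sketch of that update (function name and parameter values are illustrative, not taken from the paper):

    ```python
    def mmas_update(tau, best_path, best_cost,
                    rho=0.1, tau_min=0.01, tau_max=1.0):
        """One Max-Min Ant System pheromone update:
        evaporate all trails, deposit on the best path only,
        then clamp every trail to [tau_min, tau_max]."""
        for edge in tau:
            tau[edge] *= (1.0 - rho)          # evaporation on every edge
        for edge in zip(best_path, best_path[1:]):
            tau[edge] += 1.0 / best_cost      # best-ant deposit
        for edge in tau:
            tau[edge] = min(tau_max, max(tau_min, tau[edge]))
        return tau
    ```

    The hybrids described above would interleave steps like this with IGA's genetic operators (selection, crossover, mutation) applied to the candidate paths on the grid map.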

    Reset-free Trial-and-Error Learning for Robot Damage Recovery

    The high probability of hardware failures prevents many advanced robots (e.g., legged robots) from being confidently deployed in real-world situations (e.g., post-disaster rescue). Instead of attempting to diagnose the failures, robots could adapt by trial-and-error in order to complete their tasks. In this setting, damage recovery can be seen as a Reinforcement Learning (RL) problem. However, the best RL algorithms for robotics require the robot and the environment to be reset to an initial state after each episode, that is, the robot is not learning autonomously. In addition, most RL methods for robotics do not scale well to complex robots (e.g., walking robots) and either cannot be used at all or take too long to converge to a solution (e.g., hours of learning). In this paper, we introduce a novel learning algorithm called "Reset-free Trial-and-Error" (RTE) that (1) breaks the complexity by pre-generating hundreds of possible behaviors with a dynamics simulator of the intact robot, and (2) allows complex robots to quickly recover from damage while completing their tasks and taking the environment into account. We evaluate our algorithm on a simulated wheeled robot, a simulated six-legged robot, and a real six-legged walking robot that are damaged in several ways (e.g., a missing leg, a shortened leg, a faulty motor) and whose objective is to reach a sequence of targets in an arena. Our experiments show that the robots can recover most of their locomotion abilities in an environment with obstacles, and without any human intervention.
    Comment: 18 pages, 16 figures, 3 tables, 6 pseudocodes/algorithms, video at https://youtu.be/IqtyHFrb3BU, code at https://github.com/resibots/chatzilygeroudis_2018_rt
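    The core idea in step (1) above is a repertoire of pre-generated behaviors, each with a predicted outcome, from which the damaged robot selects at execution time. A hypothetical sketch of that selection step (the dictionary keys and the pure nearest-outcome rule are illustrative; RTE additionally corrects the simulator's predictions online with a probabilistic model learned from the damaged robot's own trials):

    ```python
    import math

    def select_behavior(repertoire, target_dx, target_dy):
        """Pick the pre-generated behavior whose simulated outcome
        (dx, dy) is closest to the desired displacement toward the
        next target. Each repertoire entry maps a behavior to the
        displacement the intact-robot simulator predicted for it."""
        return min(repertoire,
                   key=lambda b: math.hypot(b["dx"] - target_dx,
                                            b["dy"] - target_dy))
    ```

    Because the repertoire is computed offline with the intact model, the online loop only has to choose among a few hundred candidates, which is what makes recovery fast enough to run without resets.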

    The Ariadne's Clew Algorithm

    We present a new approach to path planning, called the "Ariadne's clew algorithm". It is designed to find paths in high-dimensional continuous spaces and applies to robots with many degrees of freedom in static as well as dynamic environments, where obstacles may move. The Ariadne's clew algorithm comprises two sub-algorithms, called Search and Explore, applied in an interleaved manner. Explore builds a representation of the accessible space while Search looks for the target. Both are posed as optimization problems. We describe a real implementation of the algorithm to plan paths for a six-degree-of-freedom arm in a dynamic environment where another six-degree-of-freedom arm is used as a moving obstacle. Experimental results show that a path is found in about one second without any pre-processing.
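    The interleaving of the two sub-algorithms can be sketched as a simple loop; the function names and the landmark representation below are illustrative placeholders, not the paper's actual optimization formulations:

    ```python
    def ariadnes_clew(explore, search, target_found, max_iters=100):
        """Interleave the two sub-algorithms: Explore grows a set of
        landmarks covering the accessible space; Search tries to
        reach the target from the landmarks placed so far."""
        landmarks = []
        for _ in range(max_iters):
            landmarks.append(explore(landmarks))   # extend accessible space
            path = search(landmarks)               # attempt to reach target
            if path is not None and target_found(path):
                return path
        return None                                # no path within budget
    ```

    Posing each call to explore and search as an optimization problem (maximize distance to existing landmarks; minimize distance to the target) is what lets the method scale to many degrees of freedom.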