    Wavefront Propagation and Fuzzy Based Autonomous Navigation

    Path planning and obstacle avoidance are the two major issues in any navigation system. The wavefront propagation algorithm, a good path planner, can be used to determine an optimal path. Obstacle avoidance can be achieved using possibility theory. Combining these two functions enables a robot to navigate autonomously to its destination. This paper presents the approach and results of implementing an autonomous navigation system for an indoor mobile robot. The system is based on a laser sensor used to retrieve data and update a two-dimensional world model of the robot's environment. Waypoints in the path are incorporated into the obstacle avoidance. Features such as ageing of objects and smooth motion planning are implemented to enhance efficiency and to cater for dynamic environments.
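    The abstract gives no implementation details, but the classic wavefront (grassfire) planner flood-fills step distances outward from the goal over an occupancy grid and then descends those labels from the start. The following is a minimal Python sketch of that idea; the grid encoding, 4-connected neighborhood, and function names are illustrative assumptions, not taken from the paper.

        # Minimal wavefront-propagation sketch on a 2-D occupancy grid.
        # Assumptions: 0 = free cell, 1 = obstacle, 4-connected moves.
        from collections import deque

        def wavefront(grid, goal):
            """Breadth-first flood fill: label every reachable free cell
            with its step distance from the goal (None = unreachable)."""
            rows, cols = len(grid), len(grid[0])
            dist = [[None] * cols for _ in range(rows)]
            dist[goal[0]][goal[1]] = 0
            queue = deque([goal])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] == 0 and dist[nr][nc] is None):
                        dist[nr][nc] = dist[r][c] + 1
                        queue.append((nr, nc))
            return dist

        def extract_path(dist, start):
            """Descend the wavefront: repeatedly step to the neighbor with
            the smallest label until the goal (label 0) is reached."""
            if dist[start[0]][start[1]] is None:
                return None  # start is cut off from the goal
            path, (r, c) = [start], start
            while dist[r][c] != 0:
                r, c = min(((r + dr, c + dc)
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= r + dr < len(dist) and 0 <= c + dc < len(dist[0])
                            and dist[r + dr][c + dc] is not None),
                           key=lambda p: dist[p[0]][p[1]])
                path.append((r, c))
            return path

        grid = [[0, 0, 0, 1],
                [1, 1, 0, 1],
                [0, 0, 0, 0]]
        print(extract_path(wavefront(grid, goal=(2, 3)), start=(0, 0)))
        # -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3)]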

    DQN: Deep Q-Learning for Autonomous Navigation

    This project deals with autonomous mobile robots trained using reinforcement learning, a branch of machine learning (the science of improving problem-solving performance based on experience) based on choosing actions to maximize rewards from various environments. This is a form of behavioral learning that is observed in nature and is thus more biologically plausible than cognitive models based on labeled data provided by a teacher (supervised learning). We developed an experimental test bed by implementing Deep Q-Networks (DQN), a form of reinforcement learning, for goal-oriented navigation and obstacle avoidance tasks using a TurtleBot3 Burger robot in the Gazebo simulation environment for behavior learning in autonomous agents. To achieve the goal of avoiding obstacles, the DQN agent gives the robot a positive reward whenever it gets closer to its goal and a negative reward when it moves farther from it. The TurtleBot3 Burger requires a large number of training iterations before it achieves the goal and successfully avoids obstacles. Future work involves extending the reward functions so that DQN can be used to learn fully autonomous exploration and mapping tasks, where the robot does not know the exact location of the goal.
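    The abstract describes the reward only qualitatively (positive when the distance to the goal shrinks, negative when it grows). Below is a minimal Python sketch of that distance-based shaping; the function names, scaling factor, thresholds, and terminal bonus/penalty values are illustrative assumptions, not the project's actual parameters.

        # Hypothetical distance-based reward shaping for goal navigation.
        import math

        def goal_distance(pose, goal):
            """Euclidean distance from the robot's (x, y) to the goal."""
            return math.hypot(goal[0] - pose[0], goal[1] - pose[1])

        def shaped_reward(prev_dist, curr_dist, collided, goal_radius=0.2):
            """> 0 when the robot moved toward the goal, < 0 when it moved
            away; large terminal values on reaching the goal or colliding.
            All constants here are assumed, not taken from the paper."""
            if collided:
                return -200.0          # episode ends in failure
            if curr_dist < goal_radius:
                return 200.0           # episode ends in success
            return 10.0 * (prev_dist - curr_dist)

        # Per-step usage inside a training loop (poses and laser ranges
        # would come from Gazebo/ROS topics):
        # reward = shaped_reward(goal_distance(prev_pose, goal),
        #                        goal_distance(curr_pose, goal),
        #                        collided=min_scan_range < 0.15)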

    Q Learning Behavior on Autonomous Navigation of Physical Robot

    A behavior-based architecture gives a robot fast and reliable action. When a robot has many behaviors, behavior coordination is needed. Subsumption architecture is a behavior coordination method that gives quick and robust responses. A learning mechanism improves the robot's performance in handling uncertainty. Q learning is a popular reinforcement learning method that has been used in robot learning because it is simple, convergent, and off-policy. In this paper, Q learning is used as the learning mechanism for the obstacle avoidance behavior in autonomous robot navigation. The learning rate of Q learning affects the robot's performance in the learning phase. As a result, the Q learning algorithm was successfully implemented in a physical robot in its imperfect environment.
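    For reference, the core of tabular Q learning is the off-policy update Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)], where alpha is the learning rate the abstract highlights. The Python sketch below shows that update with an epsilon-greedy policy; the action set, state encoding, and hyperparameter values are illustrative assumptions rather than the paper's settings.

        # Minimal tabular Q-learning sketch for an obstacle-avoidance behavior.
        import random

        ALPHA, GAMMA, EPSILON = 0.3, 0.9, 0.1  # learning rate, discount, exploration
        ACTIONS = ["forward", "turn_left", "turn_right"]  # assumed action set
        Q = {}  # (state, action) -> value; states = discretized sensor readings

        def choose_action(state):
            """Epsilon-greedy: explore occasionally, otherwise exploit."""
            if random.random() < EPSILON:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

        def update(state, action, reward, next_state):
            """Off-policy backup toward r + gamma * max_a' Q(s', a')."""
            best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)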

    Metric State Space Reinforcement Learning for a Vision-Capable Mobile Robot

    We address the problem of autonomously learning controllers for vision-capable mobile robots. We extend McCallum's (1995) Nearest-Sequence Memory algorithm to allow for general metrics over state-action trajectories. We demonstrate the feasibility of our approach by successfully running our algorithm on a real mobile robot. The algorithm is novel in that it (a) explores the environment and learns directly on a mobile robot without using a hand-made computer model as an intermediate step, (b) does not require manual discretization of the sensor input space, (c) works in piecewise continuous perceptual spaces, and (d) copes with partial observability. Together this allows learning from much less experience than previous methods.
    Comment: 14 pages, 8 figures
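    McCallum-style Nearest-Sequence Memory acts by comparing the agent's recent observation-action history against stored episodes and averaging over the closest matches; the paper's contribution is letting that comparison use a general trajectory metric. The Python sketch below is a loose illustration of the idea under assumed data structures (a memory of (history, action, return) triples and a user-supplied per-step metric); it is not the authors' exact formulation.

        # Loose sketch of nearest-sequence value estimation with a
        # pluggable metric over state-action histories (assumed design).

        def suffix_distance(hist_a, hist_b, step_metric, max_len=8):
            """Average per-step distance over the aligned recent suffix of
            two histories; step_metric compares (observation, action) pairs."""
            n = min(len(hist_a), len(hist_b), max_len)
            if n == 0:
                return float("inf")
            return sum(step_metric(hist_a[-i], hist_b[-i]) for i in range(1, n + 1)) / n

        def nsm_q_estimate(history, action, memory, step_metric, k=5):
            """Estimate Q(history, action) by averaging the stored returns of
            the k memory entries with the nearest preceding histories."""
            candidates = sorted(
                (suffix_distance(history, past, step_metric), ret)
                for past, act, ret in memory if act == action)
            nearest = candidates[:k]
            if not nearest:
                return 0.0  # no stored experience with this action yet
            return sum(ret for _, ret in nearest) / len(nearest)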