
    Design of Environment Aware Planning Heuristics for Complex Navigation Objectives

    A heuristic is a simplified approximation that helps guide a planner in deducing the best way to move forward. Heuristics are valued in many modern AI algorithms and decision-making architectures for their ability to drastically reduce computation time. In robotics in particular, path planning heuristics are widely leveraged to aid navigation and exploration. As a robotic platform explores and navigates, information about the world can and should be used to augment and update the heuristic that guides solutions. Complex heuristics that account for environmental factors, robot capabilities, and desired actions provide optimal results with little wasted exploration, but are computationally expensive. This thesis presents research into simplified heuristics that maintain the performance improvements of those complicated heuristics. The research is validated on two complex robotic tasks: stealth planning and energy-efficient planning. The stealth heuristic was created to inform a planner and allow a ground robot to navigate unknown environments in a less visible manner. Given the highly uncertain nature of the world (where unknown observers exist), this heuristic was instrumental in enabling the first high-uncertainty stealth planner. Heuristic guidance is further explored for energy-efficient planning, where a machine learning approach is used to generate the heuristic measure. The thesis demonstrates effective learned heuristics that shorten convergence time while accounting for the complexities of the environment; a 60% reduction in the compute time required for planning was found.
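
    As context for how such a heuristic plugs into a planner, the sketch below shows an A* search whose heuristic blends distance-to-goal with a per-cell environmental penalty (e.g. visibility risk for stealth, or terrain cost for energy). The grid representation, the unit step cost, and the w_risk weighting are illustrative assumptions, not the formulation used in the thesis.

```python
import heapq
import itertools
import math

def environment_aware_astar(grid, start, goal, risk, w_risk=2.0):
    """A* over a 2D occupancy grid with a heuristic that blends
    distance-to-goal with a per-cell environmental penalty.

    grid        -- 2D list: 0 = free cell, 1 = obstacle
    start, goal -- (row, col) tuples
    risk        -- 2D list of penalties in [0, 1] per cell (assumed form)
    w_risk      -- weight trading path length against exposure (assumed)
    """
    rows, cols = len(grid), len(grid[0])
    tie = itertools.count()  # tie-breaker so the heap never compares tuples

    def h(cell):
        # Guidance quality is favoured over strict admissibility:
        # the risk term pulls the search toward low-exposure regions.
        return math.hypot(goal[0] - cell[0], goal[1] - cell[1]) \
               + w_risk * risk[cell[0]][cell[1]]

    open_set = [(h(start), next(tie), 0.0, start, None)]
    parents, best_g = {}, {start: 0.0}
    while open_set:
        _, _, g, cell, parent = heapq.heappop(open_set)
        if cell in parents:
            continue  # already expanded via a cheaper route
        parents[cell] = parent
        if cell == goal:
            path = [cell]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1.0 + w_risk * risk[r][c]
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((r, c)), next(tie), ng, (r, c), cell))
    return None  # goal unreachable
```

    Because the risk term makes the heuristic inadmissible, the returned path trades strict optimality for guidance quality, which is the compute-versus-quality trade-off the abstract describes.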

    Coordination and navigation of heterogeneous MAV-UGV formations localized by a 'hawk-eye'-like approach under a model predictive control scheme

    An approach for the coordination and control of 3D heterogeneous formations of unmanned aerial and ground vehicles under hawk-eye-like relative localization is presented in this paper. The core of the method lies in using visual top-view feedback from the flying robots to stabilize the entire group in a leader–follower formation. We formulate a novel model predictive control-based methodology for guiding the formation. The method is employed to solve the trajectory planning and control of a virtual leader into a desired target region, and to keep the following vehicles in the desired shape of the group. The approach is designed to ensure direct visibility between aerial and ground vehicles, which is crucial for formation stabilization using the hawk-eye-like approach. The presented system is verified in numerous experiments inspired by search-and-rescue applications, in which the formation acts as a searching phalanx. In addition, stability and convergence analyses are provided to explicitly determine the limitations of the method in real-world applications.
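
    The paper's MPC formulation is not reproduced here, but a minimal sampling-based sketch illustrates the receding-horizon idea behind formation keeping: roll out candidate control sequences for a follower over a short horizon, score them against the leader's predicted trajectory, and apply only the first command before replanning. The unicycle model, horizon, weights, and sampling scheme are all assumptions for illustration, not the paper's optimization-based controller.

```python
import numpy as np

def formation_mpc_step(leader_path, follower_state, offset,
                       horizon=10, dt=0.2, n_samples=200, rng=None):
    """One receding-horizon step keeping a follower at a fixed offset
    from a virtual leader.

    leader_path    -- (horizon, 2) array of predicted leader positions
    follower_state -- (x, y, theta) in the world frame
    offset         -- desired (dx, dy) from the leader, world frame (assumed)
    """
    rng = rng or np.random.default_rng(0)
    # Sample candidate control sequences: v in [0, 1] m/s, omega in [-1, 1] rad/s.
    v = rng.uniform(0.0, 1.0, size=(n_samples, horizon))
    w = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))

    x = np.full(n_samples, follower_state[0])
    y = np.full(n_samples, follower_state[1])
    th = np.full(n_samples, follower_state[2])
    cost = np.zeros(n_samples)
    for k in range(horizon):
        # Unicycle rollout for every candidate sequence in parallel.
        x = x + v[:, k] * np.cos(th) * dt
        y = y + v[:, k] * np.sin(th) * dt
        th = th + w[:, k] * dt
        # Penalise deviation from the desired formation slot plus turn effort.
        ref = leader_path[k] + offset
        cost += (x - ref[0]) ** 2 + (y - ref[1]) ** 2 + 0.01 * w[:, k] ** 2
    best = int(np.argmin(cost))
    return v[best, 0], w[best, 0]  # apply first command, replan next tick

# Example: leader moves along +x; the follower holds a slot 1 m behind it.
leader = np.column_stack([np.linspace(0.5, 2.5, 10), np.zeros(10)])
cmd = formation_mpc_step(leader, follower_state=(0.0, 0.0, 0.0),
                         offset=np.array([-1.0, 0.0]))
print("v, omega =", cmd)
```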

    Deep Reinforcement Learning for Autonomous Navigation of Mobile Robots in Indoor Environments

    Conventional autonomous navigation frameworks for mobile robots are highly modularized, with subsystems for localization, perception, mapping, planning, and control. Although this provides easy interpretation, such frameworks depend heavily on a known map of the robot's surroundings to navigate a cluttered environment: local planners such as DWA require a map of all surrounding obstacles to calculate an optimal collision-free trajectory to the goal. Planning and tracking a collision-free path without knowing the obstacle locations is a challenging task. Since the advent of deep learning techniques, deep reinforcement learning has proven to be a powerful learning framework for robotic tasks, demonstrating wide success in complex computer games such as Go and StarCraft, which have high-dimensional state and action spaces. However, it has rarely been used in real-world applications due to the Sim-2-Real challenges of transferring a trained RL policy to the real world. In this work, we propose a novel framework for autonomously navigating a mobile robot in a cluttered space, without known obstacle locations, using deep reinforcement learning techniques. The proposed method is modular and scalable owing to a strategic design of the training environment: it uses constrained spaces and randomization techniques to learn an effective reinforcement learning policy in less simulation training time. The state vector consists of the target location in the mobile robot's coordinate frame plus a 36-dimensional lidar vector for the obstacle-avoidance task. We demonstrate the optimal discrete-action policy on a Turtlebot in the real world, and also address some key challenges in robot pose estimation for autonomous driving tasks.
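
    Since the abstract spells out the observation design, a small sketch of assembling that state vector is given below: a lidar scan downsampled to 36 sectors plus the target expressed in the robot's frame. The sector-minimum downsampling, the clipping range, and the specific discrete action values are assumed details, not necessarily the paper's exact choices.

```python
import numpy as np

def build_state(scan, robot_pose, goal_xy, n_sectors=36, max_range=3.5):
    """Assemble an RL observation: a 36-dimensional lidar vector plus the
    goal position expressed in the robot frame.

    scan       -- 1D array of raw lidar ranges (e.g. 360 beams)
    robot_pose -- (x, y, theta) in the world frame
    goal_xy    -- goal position in the world frame
    """
    # Downsample the scan to n_sectors by taking the minimum range in
    # each sector, so thin obstacles are never averaged away.
    beams = np.clip(np.nan_to_num(scan, nan=max_range, posinf=max_range),
                    0.0, max_range)
    sectors = beams.reshape(n_sectors, -1).min(axis=1) / max_range

    # Express the goal in the robot frame (rotate by -theta, translate),
    # which makes the policy invariant to absolute world coordinates.
    x, y, th = robot_pose
    dx, dy = goal_xy[0] - x, goal_xy[1] - y
    gx = np.cos(-th) * dx - np.sin(-th) * dy
    gy = np.sin(-th) * dx + np.cos(-th) * dy

    return np.concatenate([sectors, [gx, gy]]).astype(np.float32)

# A Turtlebot-style discrete action set the trained policy selects from
# (hypothetical values): action id -> (v m/s, omega rad/s).
ACTIONS = {0: (0.15, 0.0),    # forward
           1: (0.10, 0.75),   # forward-left
           2: (0.10, -0.75)}  # forward-right
```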

    Self-organizing robot formations using velocity potential fields commands for material transfer

    Mobile robot formations differ according to the mission, environment, and robot abilities. With decentralized control, the ability to achieve the desired formation shapes must be built into the controllers of each autonomous robot. In this paper, self-organizing formation control for material transfer is investigated as an alternative to automated guided vehicles. A leader–follower approach is applied in the controller design to drive the robots toward the goal. The results confirm the suitability of the velocity potential approach for motion control of self-organizing formations.
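
    A minimal sketch of a velocity-potential-field command follows, assuming the standard attractive/repulsive construction: each robot is drawn toward its goal (or formation slot) and pushed away from teammates and obstacles inside a safety radius. The gains and safety radius are illustrative, not taken from the paper.

```python
import numpy as np

def velocity_command(pos, goal, neighbors, obstacles,
                     k_goal=1.0, k_sep=0.5, d_safe=1.0, v_max=1.0):
    """Velocity command for one robot in a self-organizing formation.

    pos, goal           -- 2D numpy positions
    neighbors/obstacles -- lists of 2D positions to keep clear of
    k_goal, k_sep       -- attraction/repulsion gains (assumed values)
    d_safe              -- repulsion radius; v_max -- speed limit
    """
    # Gradient of an attractive quadratic potential toward the goal.
    v = k_goal * (goal - pos)
    for p in list(neighbors) + list(obstacles):
        diff = pos - p
        d = np.linalg.norm(diff)
        if 1e-6 < d < d_safe:
            # Classic repulsive-potential gradient: grows sharply as the
            # robot approaches the safety radius, zero outside it.
            v += k_sep * (1.0 / d - 1.0 / d_safe) * diff / d**3
    speed = np.linalg.norm(v)
    if speed > v_max:
        v *= v_max / speed  # saturate to the robot's velocity limit
    return v

# Example: follower attracted to its slot while avoiding a close teammate.
pos = np.array([0.0, 0.0])
slot = np.array([2.0, 0.0])
teammate = [np.array([0.3, 0.1])]
print(velocity_command(pos, slot, teammate, obstacles=[]))
```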