8 research outputs found

    Experience-Based Planning with Sparse Roadmap Spanners

    Full text link
    We present an experience-based planning framework called Thunder that learns to reduce the computation time required to solve high-dimensional planning problems in varying environments. The approach is especially suited for large configuration spaces that include many invariant constraints, such as those found in whole-body humanoid motion planning. Experiences are generated using probabilistic sampling and stored in a sparse roadmap spanner (SPARS), which provides asymptotically near-optimal coverage of the configuration space, making storing, retrieving, and repairing past experiences very efficient with respect to memory and time. The Thunder framework improves upon past experience-based planners by storing experiences in a graph rather than in individual paths, eliminating redundant information, providing more opportunities for path reuse, and providing a theoretical limit to the size of the experience graph. These properties also lead to improved handling of dynamically changing environments, reasoning about optimal paths, and reduced query resolution time. The approach is demonstrated on a 30 degree-of-freedom humanoid robot and compared with the Lightning framework, an experience-based planner that stores past experiences as individual paths. In environments with variable obstacles and stability constraints, experiments show that Thunder is on average an order of magnitude faster than Lightning and than planning from scratch. Thunder also uses 98.8% less memory to store its experiences after 10,000 trials when compared to Lightning. Our framework is implemented and freely available in the Open Motion Planning Library. Comment: Submitted to ICRA 201
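    The core of the approach is a recall-and-repair loop over a stored experience graph. The sketch below illustrates that idea only; the class and function names (ExperienceGraph, plan_from_scratch, edge_valid) are hypothetical and do not reflect the OMPL Thunder API, and a real SPARS spanner keeps only nodes and edges that improve coverage or connectivity rather than every stored waypoint.

```python
# Hypothetical sketch of the recall-and-repair idea behind experience graphs.
# Names are illustrative, not the OMPL Thunder API.
import networkx as nx
import numpy as np

class ExperienceGraph:
    def __init__(self):
        self.g = nx.Graph()

    def add_path(self, path):
        """Insert a solved path. A real SPARS spanner would only keep
        vertices and edges that improve coverage or connectivity."""
        for q in path:
            self.g.add_node(tuple(q))
        for a, b in zip(path, path[1:]):
            dist = float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
            self.g.add_edge(tuple(a), tuple(b), weight=dist)

    def nearest(self, q):
        """Stored node closest to a query configuration."""
        return min(self.g.nodes,
                   key=lambda v: np.linalg.norm(np.asarray(v) - np.asarray(q)))

    def recall(self, start, goal):
        """Candidate path between the stored nodes closest to start and goal."""
        try:
            return nx.shortest_path(self.g, self.nearest(start),
                                    self.nearest(goal), weight="weight")
        except nx.NetworkXNoPath:
            return None

def repair(candidate, edge_valid, plan_from_scratch):
    """Re-plan only the segments invalidated by the current environment."""
    repaired = [candidate[0]]
    for a, b in zip(candidate, candidate[1:]):
        if edge_valid(a, b):
            repaired.append(b)
        else:
            # Local re-plan around the obstacle that broke this edge.
            repaired.extend(plan_from_scratch(a, b)[1:])
    return repaired
```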

    Reliable Trajectories for Dynamic Quadrupeds using Analytical Costs and Learned Initializations

    Full text link
    Dynamic traversal of uneven terrain is a major objective in the field of legged robotics. The most recent model predictive control approaches for these systems can generate robust dynamic motion of short duration; however, planning over a longer time horizon may be necessary when navigating complex terrain. A recently developed framework, Trajectory Optimization for Walking Robots (TOWR), computes such plans but does not guarantee their reliability on real platforms under uncertainty and perturbations. We extend TOWR with analytical costs to generate trajectories that a state-of-the-art whole-body tracking controller can successfully execute. To reduce online computation time, we implement a learning-based scheme for initialization of the nonlinear program based on offline experience. The execution of trajectories as long as 16 footsteps and 5.5 s over different terrains by a real quadruped demonstrates the effectiveness of the approach on hardware. This work builds toward an online system that can efficiently and robustly replan dynamic trajectories. Comment: Video: https://youtu.be/LKFDB_BOhl
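    The learned-initialization step can be pictured as a regression from a task descriptor to a warm-start guess for the nonlinear program. The sketch below is an assumption-laden illustration: the descriptor, the k-nearest-neighbors regressor, and solve_nlp are stand-ins, not the paper's actual learned model or the TOWR formulation.

```python
# Illustrative sketch of warm-starting a trajectory-optimization NLP from offline experience.
# Feature and solver names are assumptions; the real system uses TOWR's NLP and its own model.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Offline: dataset mapping task descriptors (e.g. goal displacement, coarse terrain
# heights) to the optimal decision variables (e.g. flattened spline coefficients,
# footholds) found by solving the NLP from scratch. Placeholder data shown here.
X_train = np.random.rand(500, 8)     # placeholder task descriptors
Z_train = np.random.rand(500, 120)   # placeholder optimal solution vectors

regressor = KNeighborsRegressor(n_neighbors=5).fit(X_train, Z_train)

def solve_with_warm_start(task_descriptor, solve_nlp):
    """solve_nlp(z0) stands in for the nonlinear program (e.g. TOWR plus analytical
    costs); starting it from a learned guess typically converges faster than
    starting from a default initialization."""
    z0 = regressor.predict(task_descriptor.reshape(1, -1)).ravel()
    return solve_nlp(z0)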

    Lazy validation of Experience Graphs

    Full text link

    Random Sampling of States in Dynamic Programming

    Full text link

    Human-Aware Motion Planning for Safe Human-Robot Collaboration

    Get PDF
    With the rapid adoption of robotic systems in our daily lives, robots must operate in the presence of humans in ways that improve safety and productivity. Currently, in industrial settings, human safety is ensured by physically separating the robotic system from the human. However, this greatly reduces the set of shared human-robot tasks that can be accomplished and also reduces human-robot team fluency. In recent years, robots with improved sensing capabilities have been introduced, and the feasibility of humans and robots co-existing in shared spaces has become a topic of interest. This thesis proposes a human-aware motion planning approach building on RRT-Connect, dubbed Human-Aware RRT-Connect, that plans in the presence of humans. The planner considers a composite cost function that includes human separation distance and visibility costs to ensure the robot maintains a safe separation distance during motion while remaining as visible as possible to the human. A danger criterion cost, considering two mutually dependent factors, human-robot center-of-mass distance and robot inertia, is also introduced into the cost formulation to ensure human safety during planning. A simulation study is conducted to demonstrate the planner's performance. In this study, the proposed Human-Aware RRT-Connect planner is evaluated against RRT-Connect on a set of problem scenarios that vary in environment and task complexity. Several human-robot configurations are tested in a shared workspace involving a simulated Franka Emika Panda arm and a human model. Across the problem scenarios, it is shown that the Human-Aware RRT-Connect planner, paired with the developed HRI costs, outperforms the baseline RRT-Connect planner with respect to a set of quantitative metrics. The paths generated by the Human-Aware RRT-Connect planner maintain larger separation distances from the human, are more visible, and are safer due to the minimization of the danger criterion. It is also shown that the proposed HRI cost formulation outperforms formulations from previous work when tested with the Human-Aware RRT-Connect planner.
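    A minimal sketch of the kind of composite human-aware cost described above is given below. The functional forms, weights, and state representation are assumptions chosen for illustration, not the thesis' exact formulation.

```python
# Sketch of a composite HRI cost (separation + visibility + danger criterion).
# Forms and weights are assumptions, not the thesis' formulation.
import numpy as np

def separation_cost(robot_points, human_points, d_safe=1.0):
    """Penalize configurations whose closest robot-human distance falls below d_safe."""
    d_min = min(np.linalg.norm(r - h) for r in robot_points for h in human_points)
    return max(0.0, d_safe - d_min) / d_safe

def visibility_cost(robot_ee, human_head, human_gaze_dir):
    """Penalize end-effector positions far from the human's line of sight."""
    to_ee = robot_ee - human_head
    to_ee = to_ee / np.linalg.norm(to_ee)
    angle = np.arccos(np.clip(np.dot(to_ee, human_gaze_dir), -1.0, 1.0))
    return angle / np.pi   # 0 when directly in view, 1 when directly behind

def danger_cost(robot_com, human_com, robot_inertia, d_max=2.0):
    """Couple center-of-mass proximity with effective robot inertia."""
    proximity = max(0.0, d_max - np.linalg.norm(robot_com - human_com)) / d_max
    return proximity * robot_inertia

def hri_cost(state, w=(1.0, 0.5, 1.0)):
    """Composite cost a cost-aware sampling-based planner could minimize along
    candidate edges; `state` bundles the geometric quantities used above."""
    return (w[0] * separation_cost(state["robot_points"], state["human_points"])
            + w[1] * visibility_cost(state["robot_ee"], state["human_head"], state["gaze"])
            + w[2] * danger_cost(state["robot_com"], state["human_com"], state["inertia"]))
```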

    Transfer of policies based on trajectory libraries

    No full text
    Abstract — Libraries of trajectories are a promising way of creating policies for difficult problems. However, often it is not desirable or even possible to create a new library for every task. We present a method for transferring libraries across tasks, which allows us to build libraries by learning from demonstration on one task and apply them to similar tasks. Representing the libraries in a feature-based space is key to supporting transfer. We also search through the library to ensure a complete path to the goal is possible. Results are shown for the Little Dog task. Little Dog is a quadruped robot that has to walk across rough terrain at reasonably fast speeds.
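    To make the feature-based library idea concrete, the sketch below indexes trajectories by a task feature vector and greedily chains recalled segments to check that a complete path to the goal exists. The class, the feature choice, and the helper callbacks are hypothetical, not the authors' implementation for the Little Dog experiments.

```python
# Sketch of a feature-indexed trajectory library; names and the feature choice
# are illustrative, not the authors' exact representation.
import numpy as np

class TrajectoryLibrary:
    def __init__(self):
        self.entries = []   # list of (feature_vector, trajectory) pairs

    def add(self, features, trajectory):
        self.entries.append((np.asarray(features, dtype=float), trajectory))

    def lookup(self, features, k=1):
        """Return the k stored trajectories whose task features best match the
        query. Because features are defined relative to the robot rather than in
        world coordinates, the same library can transfer to new terrains."""
        features = np.asarray(features, dtype=float)
        ranked = sorted(self.entries,
                        key=lambda entry: np.linalg.norm(entry[0] - features))
        return [traj for _, traj in ranked[:k]]

def reaches_goal(library, feature_fn, start_state, goal_test, step_fn, max_segments=50):
    """Greedily chain library segments and report whether a complete path to the
    goal exists, mirroring the search-through-the-library step described above."""
    state = start_state
    for _ in range(max_segments):
        if goal_test(state):
            return True
        segment = library.lookup(feature_fn(state), k=1)[0]
        state = step_fn(state, segment)   # simulate executing the recalled segment
    return False
```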
