
    Learning Probabilistic Generative Models For Fast Sampling-Based Planning

    Due to their simplicity and efficiency in high-dimensional spaces, sampling-based motion planners have been gaining interest for robotic manipulation in recent years. We present several new learning approaches that use probabilistic generative models for fast sampling-based planning. First, we propose fast collision detection in high-dimensional configuration spaces based on Gaussian Mixture Models (GMMs) for Rapidly-exploring Random Trees (RRT). In addition, we introduce a new probabilistically safe local steering primitive based on the probabilistic model. Our local steering procedure rests on a new notion of a convex probabilistic safety corridor, constructed around a configuration from tangent hyperplanes of confidence ellipsoids of GMMs learned from prior collision history. For efficient sampling, we suggest a sampling method with a learned Q-function using linear function approximation over feature representations such as Radial Basis Functions; this method chooses the optimal node from which to extend the search tree via a softmax over learned state values. We also discuss a novel constrained sampling-based motion planning method for grasp-and-transport tasks with redundant robotic manipulators, which allows the best grasp configuration and approach direction to be determined automatically. Since these approaches with learned probabilistic models require large amounts of data and training time, it is essential that they can adapt to environmental changes online. The suggested online learning approach with the Dirichlet Process Mixture Model (DPMM) can adapt model complexity to the data and learn new Gaussian clusters from streaming data in newly explored areas without batch learning. We have applied these approaches in a number of robot-arm planning scenarios and have shown their utility and effectiveness in simulation and on a physical 7-DoF robot manipulator.
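
    As a rough illustration of the GMM-based collision check this abstract describes, the minimal sketch below fits a mixture to prior colliding configurations and flags new samples whose density under that mixture is high. The 7-DoF data, component count, and log-density threshold are hypothetical stand-ins, not values from the paper.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Hypothetical prior collision history: 7-DoF configurations observed in collision.
        colliding_configs = rng.uniform(-np.pi, np.pi, size=(500, 7))

        # Fit a GMM to the collision history; its confidence ellipsoids
        # approximate the colliding regions of the configuration space.
        gmm = GaussianMixture(n_components=8, covariance_type="full",
                              random_state=0).fit(colliding_configs)

        def probably_in_collision(q, log_density_threshold=-8.0):
            # Flag q as likely colliding when the collision GMM assigns it high
            # density. The threshold is an assumed tuning knob.
            return gmm.score_samples(q.reshape(1, -1))[0] > log_density_threshold

        q_new = rng.uniform(-np.pi, np.pi, size=7)
        print(probably_in_collision(q_new))

    In an RRT, such a check could serve as a cheap pre-filter before an exact geometric collision test, which is where the claimed speedup would come from.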

    VFAS-Grasp: Closed Loop Grasping with Visual Feedback and Adaptive Sampling

    We consider the problem of closed-loop robotic grasping and present a novel planner that uses Visual Feedback and an uncertainty-aware Adaptive Sampling strategy (VFAS) to close the loop. At each iteration, our method, VFAS-Grasp, builds a set of candidate grasps by generating random perturbations of a seed grasp. The candidates are then scored using a novel metric that combines a learned grasp-quality estimate, the uncertainty in that estimate, and the distance from the seed proposal to promote temporal consistency. Additionally, we present two mechanisms to improve the efficiency of our sampling strategy: we dynamically scale the sampling region size and the number of samples in it based on past grasp scores, and we leverage a motion vector field estimator to shift the center of the sampling region. We demonstrate that our algorithm runs in real time (20 Hz) and improves grasp performance in static scenes by refining the initial grasp proposal. We also show that it enables grasping of slow-moving objects, such as those encountered during human-to-robot handover.
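
    The following minimal sketch shows the shape of the scoring and adaptive-sampling loop the abstract describes: perturb a seed grasp, score candidates by estimated quality minus uncertainty and distance penalties, and rescale the sampling region depending on whether the best candidate improved. The toy quality estimator, grasp parameterization, weights, and rescaling factors are all assumptions, not the paper's learned components.

        import numpy as np

        rng = np.random.default_rng(1)

        def grasp_quality(g):
            # Stand-in for a learned grasp-quality estimator returning (mean, std);
            # here quality simply peaks at an arbitrary target pose.
            return -np.linalg.norm(g - np.array([0.4, 0.0, 0.1, 0.0])), 0.05

        def score(candidate, seed, w_unc=1.0, w_dist=0.5):
            # Combine estimated quality, its uncertainty, and distance to the
            # seed (the distance term promotes temporal consistency).
            mean, std = grasp_quality(candidate)
            return mean - w_unc * std - w_dist * np.linalg.norm(candidate - seed)

        seed = np.array([0.35, 0.02, 0.12, 0.1])  # hypothetical grasp: x, y, z, yaw
        radius, n_samples = 0.05, 64
        for _ in range(10):
            candidates = seed + rng.normal(scale=radius, size=(n_samples, 4))
            best = candidates[np.argmax([score(c, seed) for c in candidates])]
            # Shrink the sampling region when the best candidate beats the
            # current seed, grow it otherwise.
            radius *= 0.8 if score(best, seed) > score(seed, seed) else 1.2
            seed = best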

    Pick2Place: Task-aware 6DoF Grasp Estimation via Object-Centric Perspective Affordance

    The choice of a grasp plays a critical role in the success of downstream manipulation tasks. Consider the task of placing an object in a cluttered scene: the majority of possible grasps may not be suitable for the desired placement. In this paper, we study the synergy between picking and placing an object in a cluttered scene to develop an algorithm for task-aware grasp estimation. We present an object-centric action space that encodes the relationship between the geometry of the placement scene and the object to be placed, providing placement affordance maps directly from perspective views of the placement scene. This action space enables the computation of a one-to-one mapping between placement and picking actions, allowing the robot to generate a diverse set of pick-and-place proposals and to optimize for a grasp under other task constraints such as robot kinematics and collision avoidance. With experiments both in simulation and on a real robot, we demonstrate that with our method the robot successfully completes placement-aware grasping with over 89% accuracy in a way that generalizes to novel objects and scenes.
    Comment: IEEE International Conference on Robotics and Automation 202
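
    To make the one-to-one placement-to-pick mapping concrete, here is a minimal sketch under assumed stand-ins: a random array plays the role of the predicted placement affordance map, each placement pixel is paired with one candidate grasp, and a toy feasibility test stands in for the kinematics and collision constraints. None of these names or shapes come from the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        H, W = 48, 64
        # Stand-in for a placement affordance map predicted from a perspective view.
        affordance = rng.random((H, W))
        # Assumed one-to-one correspondence: each placement pixel indexes one
        # candidate grasp pose (x, y, z, yaw) on the object.
        candidate_grasps = rng.uniform(-0.3, 0.3, size=(H * W, 4))

        def feasible(grasp):
            # Stand-in for robot-kinematics and collision-avoidance checks.
            return abs(grasp[3]) < 0.25

        # Rank placements by affordance and take the best one whose paired
        # grasp also satisfies the task constraints.
        order = np.argsort(affordance.ravel())[::-1]
        pick = next(candidate_grasps[i] for i in order if feasible(candidate_grasps[i]))
        print("chosen grasp:", pick)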

    LiveGantt: Interactively Visualizing a Large Manufacturing Schedule


    Real-time Simultaneous Multi-Object 3D Shape Reconstruction, 6DoF Pose Estimation and Dense Grasp Prediction

    Robotic manipulation systems operating in complex environments rely on perception systems that provide information about the geometry (pose and 3D shape) of the objects in the scene, along with other semantic information such as object labels. This information is then used to choose feasible grasps on relevant objects. In this paper, we present a novel method that provides this geometric and semantic information for all objects in the scene, together with feasible grasps on those objects, simultaneously. The main advantage of our method is its speed, as it avoids sequential perception and grasp-planning steps. With detailed quantitative analysis, we show that our method delivers performance competitive with state-of-the-art dedicated methods for object shape, pose, and grasp prediction while providing fast inference at 30 frames per second.
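
    A minimal PyTorch sketch of the general idea, a shared encoder with parallel heads so that shape, pose, and grasp predictions come from a single forward pass rather than sequential stages, is shown below. The layer sizes and head outputs (per-pixel shape logits, a 7-D pose vector, dense per-pixel grasp parameters) are assumptions for illustration, not the paper's architecture.

        import torch
        import torch.nn as nn

        class MultiTaskPerception(nn.Module):
            def __init__(self):
                super().__init__()
                # Shared image encoder feeding all three task heads.
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.shape_head = nn.Conv2d(64, 1, 1)  # per-pixel shape/occupancy logits
                self.pose_head = nn.Sequential(        # pose as quaternion + translation
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 7))
                self.grasp_head = nn.Conv2d(64, 6, 1)  # dense per-pixel grasp parameters

            def forward(self, rgb):
                f = self.encoder(rgb)
                return self.shape_head(f), self.pose_head(f), self.grasp_head(f)

        model = MultiTaskPerception()
        shape, pose, grasp = model(torch.rand(1, 3, 128, 128))
        print(shape.shape, pose.shape, grasp.shape)

    Because all heads share one encoder pass, inference cost is close to that of a single-task network, which is consistent with the real-time speed the abstract claims.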