13 research outputs found

    Automated Construction of Robotic Manipulation Programs

    No full text
    Society is becoming more automated, with robots beginning to perform most tasks in factories and starting to help out in home and office environments. One of the most important functions of robots is the ability to manipulate objects in their environment. Because the space of possible robot designs, sensor modalities, and target tasks is huge, researchers end up having to manually create many models, databases, and programs for their specific task, an effort that is repeated whenever the task changes. Given a specification for a robot and a task, the presented framework automatically constructs the necessary databases and programs required for the robot to reliably execute manipulation tasks. It includes contributions in three major components that are critical for manipulation tasks.
    The first is a geometric-based planning system that analyzes all necessary modalities of manipulation planning and offers efficient algorithms to formulate and solve them. This allows identification of the information needed from the task and robot specifications. Using this set of analyses, we build a planning knowledge-base that allows informative geometric reasoning about the structure of the scene and the robot's goals. We show how to efficiently generate and query the information for planners.
    The second is a set of efficient algorithms that consider the visibility of objects in cameras when choosing manipulation goals. We show results with several robot platforms using gripper cameras to boost accuracy of the detected objects and to reliably complete the tasks. Furthermore, we use the presented planning and visibility infrastructure to develop a completely automated extrinsic camera calibration method and a method for detecting insufficient calibration data.
    The third is a vision-centric database that can analyze a rigid object's surface for stable and discriminable features to be used in pose extraction programs. Furthermore, we show work towards a new voting-based object pose extraction algorithm that does not rely on 2D/3D feature correspondences and thus reduces the early-commitment problem plaguing the generality of traditional vision-based pose extraction algorithms.
    In order to reinforce our theoretical contributions with a solid implementation basis, we discuss the open-source planning environment OpenRAVE, which began and evolved as a result of the work done in this thesis. We present an analysis of its architecture and provide insight for successful robotics software environments.
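The correspondence-free voting idea can be illustrated with a toy planar sketch. This is a hypothetical 2D Hough-style example, not the thesis's 3D algorithm: every (model point, scene point) pair casts a vote over a discretized rotation/translation grid, so the true pose emerges as the densest bin without committing to any feature correspondences. The function name, angle discretization, and cell size are all invented for illustration.

```python
# Illustrative Hough-style voting for a 2D rigid pose (rotation + translation).
# Every (model point, scene point) pair votes; no correspondences are assumed,
# so outlier pairs spread their votes thinly while the true pose accumulates
# a sharp peak in the accumulator.
import math
from collections import Counter

def vote_pose_2d(model, scene, angle_steps=36, cell=0.5):
    votes = Counter()
    for k in range(angle_steps):
        theta = 2 * math.pi * k / angle_steps
        c, s = math.cos(theta), math.sin(theta)
        for mx, my in model:
            rx, ry = c * mx - s * my, s * mx + c * my  # rotate model point
            for sx, sy in scene:
                tx, ty = sx - rx, sy - ry              # implied translation
                votes[(k, round(tx / cell), round(ty / cell))] += 1
    (k, ix, iy), _ = votes.most_common(1)[0]           # densest bin wins
    return 2 * math.pi * k / angle_steps, ix * cell, iy * cell

model = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (1.5, 1.5), (2.5, 0.5)]
# Synthesize a scene: rotate the model by 90 degrees, translate by (3, 1).
scene = [(-y + 3.0, x + 1.0) for x, y in model]
est = vote_pose_2d(model, scene)
```

Real variants of this idea vote in higher-dimensional pose spaces; the mechanism of accumulating evidence before committing is the same.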

    Randomized statistical path planning

    No full text
    Abstract — This paper explores the use of statistical learning methods on randomized path planning algorithms. A continuous, randomized version of A* is presented, along with an empirical analysis showing planning-time convergence rates in the robotic manipulation domain. The algorithm relies on several heuristics that capture a manipulator's kinematic feasibility and the local environment. A statistical framework is used to learn one of these heuristics from a large amount of training data, eliminating the need to manually tweak parameters every time the problem changes. Using the appropriate formulation, we show that motion primitives can be automatically extracted from the training data in order to boost planning performance. Furthermore, we propose a Randomized Statistical Path Planning (RSPP) paradigm that outlines how a planner using heuristics should take advantage of machine learning algorithms. Planning results are shown for several manipulation problems tested in simulation.
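The core of a randomized A* can be sketched with a discrete grid toy. This is an illustrative sketch, not the paper's continuous manipulation planner or its learned heuristics: node priorities mix the path cost so far, a goal heuristic, and a random perturbation, so runs with different seeds explore different corridors while staying biased toward the goal. All names and parameters here are assumptions for the example.

```python
# Toy randomized best-first search on a grid, in the spirit of randomized A*:
# the priority of each frontier node is g(n) + h(n) + uniform noise.
import heapq
import random

def randomized_astar(start, goal, blocked, size, noise=0.5, seed=0):
    rng = random.Random(seed)
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(float(h(start)), start, [start])]
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                # cost-so-far + heuristic + random perturbation of the ordering
                priority = len(path) + h(nxt) + rng.uniform(0, noise)
                heapq.heappush(frontier, (priority, nxt, path + [nxt]))
    return None  # no path exists

blocked = {(2, y) for y in range(4)}          # a wall with a gap at y = 4
path = randomized_astar((0, 0), (4, 0), blocked, size=5)
```

In the paper's setting the noise term and heuristic weights are exactly what the statistical framework learns from training data, rather than being hand-tuned constants as in this sketch.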

    Real-time adaptive point splatting for noisy point clouds

    No full text
    Abstract: Regular point splatting methods perform poorly on noisy data from stereo algorithms: just a few unfiltered outliers, depth discontinuities, and holes can destroy the whole rendered image. We present a new multi-pass splatting method on GPU hardware called Adaptive Point Splatting (APS) to render noisy point clouds. By taking advantage of image processing algorithms on the GPU, APS dynamically fills holes and reduces depth discontinuities without loss of image sharpness. Since APS does not require any preprocessing on the CPU and does all its work on the GPU, it works in real time with linear complexity in the number of points in the scene. We show experimental results on Teleimmersion stereo data produced by approximately forty cameras.
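The hole-filling pass can be approximated on the CPU to show the per-pixel rule. This is only an illustrative sketch under the assumption that missing depth samples are marked `None`; the actual APS method runs as GPU image-processing passes with its own filters, and the function below is invented for the example.

```python
# Illustrative CPU version of the hole-filling idea: pixels with no depth
# sample (None) are filled from the mean of their valid 8-neighbours, and
# the pass repeats until the depth image is dense.
def fill_holes(depth):
    h, w = len(depth), len(depth[0])
    while any(v is None for row in depth for v in row):
        out = [row[:] for row in depth]
        for y in range(h):
            for x in range(w):
                if depth[y][x] is None:
                    nbrs = [depth[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                            if (dy or dx)
                            and 0 <= y + dy < h and 0 <= x + dx < w
                            and depth[y + dy][x + dx] is not None]
                    if nbrs:
                        out[y][x] = sum(nbrs) / len(nbrs)
        depth = out
    return depth

img = [[1.0, 1.0, 1.0],
       [1.0, None, 2.0],
       [2.0, 2.0, 2.0]]
filled = fill_holes(img)
```

On the GPU each such pass is a fragment-shader sweep over the whole image, which is what keeps the method real-time and linear in the number of points.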