
    Searching force-closure optimal grasps of articulated 2D objects with n links

    This paper proposes a method that finds a locally optimal grasp of an articulated 2D object with n links, considering frictionless contacts. The surface of each link is represented by a finite set of points, so the links may have any shape. The proposed approach first finds an initial force-closure grasp and, starting from it, performs an iterative search for a locally optimal grasp. The quality measure considered in this work is the largest perturbation wrench that a grasp can resist independently of the direction of the perturbation. The approach has been implemented, and some illustrative examples are included in the article.
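    The quality measure described above is commonly computed as the radius of the largest origin-centred ball that fits inside the convex hull of the unit contact wrenches. The following is a minimal sketch of that computation for frictionless point contacts on a planar object, not the paper's implementation; the function names and the torque-scaling parameter rho are illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def contact_wrench_2d(p, n, rho=1.0):
    """Wrench (fx, fy, tau/rho) of a unit frictionless contact force
    applied at point p (2,) along the inward unit normal n (2,).
    rho trades off force units against torque units."""
    tau = p[0] * n[1] - p[1] * n[0]          # 2D cross product p x n
    return np.array([n[0], n[1], tau / rho])

def grasp_quality_2d(points, normals, rho=1.0):
    """Radius of the largest origin-centred ball contained in the convex
    hull of the contact wrenches; 0 if the grasp is not force-closure."""
    W = np.array([contact_wrench_2d(p, n, rho)
                  for p, n in zip(points, normals)])
    hull = ConvexHull(W)                     # needs >= 4 independent wrenches
    # Each facet satisfies a . x + b <= 0 inside the hull with |a| = 1,
    # so -b is the signed distance from the origin to that facet.
    return max(0.0, (-hull.equations[:, -1]).min())
```

For example, four contacts on a unit square, offset from the edge midpoints so the contact forces generate torques of both signs, form a force-closure grasp (a hypothetical test case):

```python
pts = np.array([[1, 0.5], [-1, -0.5], [0.5, 1], [-0.5, -1]], float)
nrm = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]], float)
print(grasp_quality_2d(pts, nrm))            # positive => force closure
```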

    Examples of 3D grasp quality computations

    Previous grasp quality research is mainly theoretical and has assumed that contact types and positions are given, in order to preserve the generality of the proposed quality measures. The example results provided by these works either ignore hand geometry and kinematics entirely or involve only the simplest of grippers. We present a unique grasp analysis system that, when given a 3D object, hand, and pose for the hand, can accurately determine the types of contacts that will occur between the links of the hand and the object, and compute two measures of quality for the grasp. Using models of two articulated robotic hands, we analyze several grasps of a polyhedral model of a telephone handset, and we use a novel technique to visualize the 6D space used in these computations. In addition, we demonstrate the possibility of using this system for synthesizing high quality grasps by performing a search over a subset of possible hand configurations.
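    The two quality measures computed by systems of this kind are typically the worst-case (epsilon) quality and the volume of the grasp wrench space, both derived from the convex hull of the contact wrenches in 6D. A hedged sketch follows, assuming a friction-cone approximation with m edge wrenches per contact; the function names, the torque scale lam, and the cone discretization are assumptions, not the authors' code.

```python
import numpy as np
from scipy.spatial import ConvexHull

def contact_wrenches_3d(p, n, mu=0.5, m=8, lam=1.0):
    """Approximate the friction cone at contact point p (inward unit
    normal n) with m unit edge forces, returning the m corresponding
    6D wrenches [f, (p x f) / lam]."""
    # Orthonormal tangent basis (t1, t2) spanning the contact plane.
    a = np.array([1.0, 0, 0]) if abs(n[0]) < 0.9 else np.array([0, 1.0, 0])
    t1 = np.cross(n, a)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    ws = []
    for k in range(m):
        th = 2.0 * np.pi * k / m
        f = n + mu * (np.cos(th) * t1 + np.sin(th) * t2)
        f /= np.linalg.norm(f)               # one common normalization choice
        ws.append(np.hstack([f, np.cross(p, f) / lam]))
    return np.array(ws)

def grasp_quality_3d(points, normals, mu=0.5):
    """Worst-case (epsilon) and volume quality: radius of the largest
    origin-centred 6D ball inside the grasp wrench space, plus the
    volume of that space."""
    W = np.vstack([contact_wrenches_3d(p, n, mu)
                   for p, n in zip(points, normals)])
    hull = ConvexHull(W)   # raises QhullError if the wrenches are degenerate
    eps = max(0.0, (-hull.equations[:, -1]).min())
    return eps, hull.volume
```

Grasps whose wrenches do not span all six dimensions make qhull fail; a production implementation would catch that case and report zero quality.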

    Grasp planning under task-specific contact constraints

    Several aspects have to be addressed before realizing the dream of a robotic hand-arm system with human-like capabilities, ranging from the consolidation of a proper mechatronic design, to the development of precise, lightweight sensors and actuators, to the efficient planning and control of the articular forces and motions required for interaction with the environment. This thesis provides solution algorithms for a main problem within the latter aspect, known as the grasp planning problem: Given a robotic system formed by a multifinger hand attached to an arm, and an object to be grasped, both with a known geometry and location in 3-space, determine how the hand-arm system should be moved, without colliding with itself or with the environment, in order to firmly grasp the object in a suitable way. Central to our algorithms is the explicit consideration of a given set of hand-object contact constraints to be satisfied in the final grasp configuration, imposed by the particular manipulation task to be performed with the object. This distinguishes our approach from other grasp planning algorithms in the literature, which usually provide no means of ensuring precise hand-object contact locations in the resulting grasp. These conventional algorithms are fast and well suited for planning grasps for pick-and-place operations with the object, but not for planning grasps required for a specific manipulation of the object, like those necessary for holding a pen, a pair of scissors, or a jeweler's screwdriver when writing, cutting paper, or turning a screw, respectively. To be able to generate such highly selective grasps, we assume that a number of surface regions on the hand are to be placed in contact with a number of corresponding regions on the object, and enforce the fulfilment of such constraints on the obtained solutions from the very beginning, in addition to the usual constraints of grasp restrainability, manipulability, and collision avoidance. The proposed algorithms can be applied to robotic hands of arbitrary structure, possibly considering compliance in the joints and the contacts if desired, and they can accommodate general patch-patch contact constraints, instead of the more restrictive contact types occasionally considered in the literature. It is worth noting, also, that while common force-closure or manipulability indices are used to assess the quality of grasps, no particular assumption is made on the mathematical properties of the quality index to be used, so that any quality criterion can be accommodated in principle. The algorithms have been tested and validated in numerous situations involving real mechanical hands and typical objects, and find applications in classical or emerging contexts like service robotics, telemedicine, space exploration, prosthetics, manipulation in hazardous environments, or human-robot interaction in general.
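    Among the constraints mentioned, grasp restrainability (force closure) has a standard computational test: the origin of wrench space must lie strictly inside the convex hull of the primitive contact wrenches, which reduces to a small linear program. The sketch below shows that generic test, assuming a precomputed n x d wrench matrix W; it is standard background, not the thesis's algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def is_force_closure(W, tol=1e-8):
    """The grasp is force-closure iff the origin lies strictly inside the
    convex hull of the rows of W (an n x d matrix of contact wrenches):
    maximise eps subject to sum_i c_i w_i = 0, sum_i c_i = 1, c_i >= eps."""
    n, d = W.shape
    cost = np.zeros(n + 1)
    cost[-1] = -1.0                          # minimise -eps
    A_eq = np.zeros((d + 1, n + 1))
    A_eq[:d, :n] = W.T                       # sum_i c_i w_i = 0
    A_eq[d, :n] = 1.0                        # sum_i c_i = 1
    b_eq = np.zeros(d + 1)
    b_eq[d] = 1.0
    A_ub = np.hstack([-np.eye(n), np.ones((n, 1))])   # eps - c_i <= 0
    b_ub = np.zeros(n)
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * n + [(0, 1)])
    return res.success and -res.fun > tol
```

A strictly positive optimal eps certifies that every wrench direction can be resisted by some nonnegative combination of the contact forces.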

    Efficient Belief Propagation for Perception and Manipulation in Clutter

    Autonomous service robots are required to perform tasks in common human indoor environments. To achieve the goals associated with these tasks, the robot should continually perceive and reason about its environment, and plan to manipulate objects, which we term goal-directed manipulation. Perception remains the most challenging of these stages, as common indoor environments typically pose problems in recognizing objects under the inherent occlusions and physical interactions among them. Despite recent progress in the field of robot perception, accommodating perceptual uncertainty due to partial observations remains challenging and needs to be addressed to achieve the desired autonomy. In this dissertation, we address the problem of perception under uncertainty for robot manipulation in cluttered environments using generative inference methods. Specifically, we aim to enable robots to perceive partially observable environments by maintaining an approximate probability distribution as a belief over possible scene hypotheses. This belief representation captures uncertainty resulting from inter-object occlusions and physical interactions, which are inherently present in cluttered indoor environments. The research efforts presented in this thesis are directed towards developing appropriate state representations and inference techniques to generate and maintain such a belief over contextually plausible scene states. We focus on providing the following features to generative inference while addressing the challenges due to occlusions: 1) generating and maintaining plausible scene hypotheses, 2) reducing the inference search space, which typically grows exponentially with the number of objects in a scene, and 3) preserving scene hypotheses over continual observations. To generate and maintain plausible scene hypotheses, we propose physics-informed scene estimation methods that combine a Newtonian physics engine with a particle-based generative inference framework. The proposed variants of our method, with and without a Monte Carlo step, showed promising results in generating and maintaining plausible hypotheses under complete occlusions. We show that estimating such scenarios would not be possible with the commonly adopted 3D registration methods, which lack the notion of physical context that our method provides. To scale up context-informed inference to a larger number of objects, we describe a factorization of the scene state into objects and object parts that supports collaborative particle-based inference. This resulted in the Pull Message Passing for Nonparametric Belief Propagation (PMPNBP) algorithm, which caters to the high-dimensional, multimodal nature of cluttered scenes while remaining computationally tractable. We demonstrate that PMPNBP is orders of magnitude faster than the state-of-the-art Nonparametric Belief Propagation method. Additionally, we show that PMPNBP successfully estimates the poses of articulated objects under various simulated occlusion scenarios. To extend our PMPNBP algorithm to tracking object states over continuous observations, we explore ways to propose and preserve hypotheses effectively over time. This resulted in an augmentation-selection method, in which hypotheses are drawn from various proposals and PMPNBP then selects the subset that best explains the current state of the objects. We discuss and analyze our augmentation-selection method against its counterparts in the belief propagation literature.
Furthermore, we develop an inference pipeline for pose estimation and tracking of articulated objects in clutter. In this pipeline, the message passing module with the augmentation-selection method is informed by segmentation heatmaps from a trained neural network. In our experiments, we show that our proposed pipeline can effectively maintain belief and track articulated objects over a sequence of observations under occlusion.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163159/1/kdesingh_1.pd
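    The "pull" in PMPNBP can be illustrated with a toy particle update: instead of pushing samples forward from the source node, candidate message particles are drawn at the target node and weighted against the source node's belief. The sketch below captures only that reweighting idea under simplified assumptions (it omits the exclusion of reverse messages and the resampling details of the published algorithm); all names are hypothetical.

```python
import numpy as np

def pull_message(belief_t, belief_s, pairwise, n_out=100, rng=None):
    """Toy 'pull' update for the message m_{s->t}.

    belief_t, belief_s : (N, d) particle sets at the target and source nodes.
    pairwise(x_t, x_s) : scalar compatibility of a target/source pair.
    Returns n_out candidate particles with normalized weights.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Pull candidates from the *target* belief: each outgoing message
    # sample costs O(N) weight evaluations instead of a forward simulation.
    cand = belief_t[rng.integers(0, len(belief_t), size=n_out)]
    w = np.array([np.mean([pairwise(x, y) for y in belief_s]) for x in cand])
    w /= w.sum() + 1e-12
    return cand, w
```

Pulling keeps the per-message cost linear in the particle counts, which is one intuition for the speed-up the dissertation reports over push-style nonparametric belief propagation.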

    Occlusion-Aware Multi-View Reconstruction of Articulated Objects for Manipulation

    The goal of this research is to develop algorithms that use multiple views to automatically recover complete 3D models of articulated objects in unstructured environments, thereby enabling a robotic system to manipulate those objects. First, an algorithm called Procrustes-Lo-RANSAC (PLR) is presented. Structure-from-motion techniques are used to capture 3D point cloud models of an articulated object in two different configurations. Procrustes analysis, combined with a locally optimized RANSAC sampling strategy, facilitates a straightforward geometric approach to recovering the joint axes, as well as classifying them automatically as either revolute or prismatic. The algorithm does not require prior knowledge of the object, nor does it make any assumptions about the planarity of the object or scene. Second, with such an articulated model, a robotic system is able to manipulate the object along its joint axes at a specified grasp point in order to exercise its degrees of freedom, or to move its end effector to a particular position even if that point is not visible in the current view. This is one of the main advantages of the occlusion-aware approach: because the models capture all sides of the object, the robot has knowledge of parts of the object that are not visible in the current view. Experiments with a PUMA 500 robotic arm demonstrate the effectiveness of the approach on a variety of real-world objects containing both revolute and prismatic joints. Third, we improve the proposed approach by using an RGBD sensor (Microsoft Kinect) that yields a depth value for each pixel directly, rather than requiring correspondences to establish depth. The KinectFusion algorithm is applied to produce a single high-quality, geometrically accurate 3D model, from which the rigid links of the object are segmented and aligned, allowing the joint axes to be estimated using the geometric approach. The improved algorithm does not require artificial markers attached to objects, yields much denser 3D models, and reduces the computation time.
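    The geometric core of this kind of joint recovery can be sketched in a few lines: align the same link's points across the two configurations with an orthogonal Procrustes (Kabsch) fit, then read the joint type off the recovered motion. This is only a hedged illustration of that step, with hypothetical names; it omits the Lo-RANSAC robustness layer the paper adds.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t with Q ~ R @ P + t, from corresponding 3D
    points (rows of P and Q), via orthogonal Procrustes / Kabsch."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((Q - cq).T @ (P - cp))
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # avoid reflections
    R = U @ D @ Vt
    return R, cq - R @ cp

def classify_joint(P, Q, angle_tol_deg=5.0):
    """Classify the joint that moved one link between two configurations
    P and Q: a near-identity rotation with translation suggests a
    prismatic joint; otherwise the rotation axis is the revolute axis."""
    R, t = rigid_transform(P, Q)
    angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0)))
    if angle < angle_tol_deg:
        return "prismatic", t / (np.linalg.norm(t) + 1e-12)
    # Revolute axis = eigenvector of R with eigenvalue 1.
    w, V = np.linalg.eig(R)
    axis = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return "revolute", axis / np.linalg.norm(axis)
```

For a revolute joint, a full implementation would also recover a point on the axis from the translation component; the sketch returns only the axis direction.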

    Representing Intangible Heritage: Questions Concerning Method

    The research explores issues concerning the relation between text and images, an interesting and little-explored field of enquiry, involving archaeological heritage that has not survived and is therefore known only from descriptions of artefacts and sites. Nowadays, this heritage can exist again thanks to digital technologies (relational databases) and methodologies (conceptual modelling) that allow the construction of 2D and 3D models. Studied here are the relations between text and conceptual categories, and between the description and classification of objects, in order to understand how words and terms influence the results of interpretation and the interaction between different profiles in the construction of models. In this context, digital methodologies are discussed in order to assess the current state of archaeological information systems and to reflect upon possible future directions.

    Visual articulated tracking in cluttered environments

    This thesis is concerned with the state estimation of an articulated robotic manipulator during interaction with its environment. Traditionally, robot state estimation has relied on proprioceptive sensors as the single source of information about the internal state. In this thesis, we are motivated to shift the focus from proprioceptive to exteroceptive sensing, which is capable of representing a holistic interpretation of the entire manipulation scene. When visually observing grasping tasks, the tracked manipulator is subject to visual distractions caused by the background, the manipulated object, and occlusions from other objects present in the environment. The aim of this thesis is to investigate and develop methods for the robust visual state estimation of articulated kinematic chains in cluttered environments that suffer from partial occlusions. To make these methods widely applicable to a variety of kinematic setups and unseen environments, we intentionally refrain from using prior information about the internal state of the articulated kinematic chain, and we do not explicitly model visual distractions such as the background and manipulated objects in the environment. We approach this problem with model-fitting methods, in which an articulated model is associated with the observed data using discriminative information. We explore model-fitting objectives that are robust to occlusions and unseen environments, methods to generate synthetic training data for data-driven discriminative methods, and robust optimisers to minimise the tracking objective. This thesis contributes (1) an automatic colour and depth image synthesis pipeline for data-driven learning that does not depend on a real articulated robot; (2) a training strategy for discriminative model-fitting objectives with an implicit representation of objects; (3) a tracking objective that is able to track occluded parts of a kinematic chain; and finally (4) a robust multi-hypothesis optimiser. These contributions are evaluated on two robotic platforms in different environments and with different manipulated and occluding objects. We demonstrate that our image synthesis pipeline generalises well to colour and depth observations of the real robot without requiring real ground-truth labelled images. While this synthesis approach introduces a visual simulation-to-reality gap, the combination of our robust tracking objective and optimiser enables stable tracking of an occluded end-effector during manipulation tasks.
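    Model fitting with a robust objective, as described in the abstract, has a simple generic skeleton: optimise the kinematic parameters so the predicted model points match the observation under a loss that down-weights outliers caused by occlusion and clutter. The sketch below shows only that skeleton (a nearest-neighbour residual with a robust soft-L1 loss); the thesis's discriminative objectives and multi-hypothesis optimiser are not reproduced here, and all names are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_articulation(theta0, observed_pts, forward_model):
    """Fit joint parameters theta to an observed point cloud.

    forward_model(theta) -> (M, 3) points predicted on the link surfaces.
    The soft-L1 loss down-weights large residuals, so points made wrong
    by occlusion or clutter behave as outliers instead of biasing the fit.
    """
    def residuals(theta):
        pred = forward_model(theta)
        # Distance from each predicted point to its nearest observation
        # (brute force; a real system would use a k-d tree).
        return np.linalg.norm(pred[:, None, :] - observed_pts[None, :, :],
                              axis=-1).min(axis=1)
    return least_squares(residuals, theta0, loss="soft_l1", f_scale=0.01)
```

A multi-hypothesis variant would run this local fit from several initial values of theta0 and keep the best few solutions, which is one way to escape the local minima that occlusions create.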