6 research outputs found

    Searching force-closure optimal grasps of articulated 2D objects with n links

    Get PDF
    This paper proposes a method that finds a locally optimal grasp of an articulated 2D object with n links, considering frictionless contacts. The surface of each link of the object is represented by a finite set of points, so it may have any shape. The proposed approach first finds an initial force-closure grasp and then starts from it an iterative search for a locally optimal grasp. The quality measure considered in this work is the largest perturbation wrench that a grasp can resist independently of the direction of the perturbation. The approach has been implemented and some illustrative examples are included in the article.
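    As an illustration of the quality measure mentioned in the abstract, the sketch below computes a Ferrari-Canny-style score: the radius of the largest origin-centred ball contained in the convex hull of the frictionless contact wrenches. It covers only the single rigid-body planar case (the paper's n-link generalized wrench space is not reproduced here), and the point/normal inputs, helper names, and SciPy dependency are assumptions.

```python
# Minimal sketch (not the paper's implementation) of a direction-independent
# grasp quality measure: radius of the largest origin-centred ball inside the
# convex hull of frictionless contact wrenches of a single planar rigid body.
import numpy as np
from scipy.spatial import ConvexHull

def contact_wrench_2d(point, normal):
    """Unit wrench (fx, fy, tau) of a frictionless contact on a planar body."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    px, py = point
    tau = px * n[1] - py * n[0]          # planar cross product p x n
    return np.array([n[0], n[1], tau])

def grasp_quality(points, normals):
    """Return the ball radius if the grasp is force-closure, else 0.
    Needs at least four non-coplanar contact wrenches for a 3D hull."""
    W = np.array([contact_wrench_2d(p, n) for p, n in zip(points, normals)])
    hull = ConvexHull(W)                  # facets satisfy A @ w + offset <= 0 inside
    offsets = hull.equations[:, -1]
    if np.any(offsets >= 0):              # origin not strictly inside the hull
        return 0.0
    return float(np.min(-offsets))        # distance to the closest facet
```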

    Learning Articulated Motions From Visual Demonstration

    Full text link
    Many functional elements of human homes and workplaces consist of rigid components which are connected through one or more sliding or rotating linkages. Examples include doors and drawers of cabinets and appliances, laptops, and swivel office chairs. A robotic mobile manipulator would benefit from the ability to acquire kinematic models of such objects from observation. This paper describes a method by which a robot can acquire an object model by capturing depth imagery of the object as a human moves it through its range of motion. We envision that, in the future, a machine newly introduced to an environment could be shown by its human user the articulated objects particular to that environment, inferring from these "visual demonstrations" enough information to actuate each object independently of the user. Our method employs sparse (markerless) feature tracking, motion segmentation, component pose estimation, and articulation learning; it does not require prior object models. Using the method, a robot can observe an object being exercised, infer a kinematic model incorporating rigid, prismatic, and revolute joints, then use the model to predict the object's motion from a novel vantage point. We evaluate the method's performance, and compare it to that of a previously published technique, for a variety of household objects. Published in Robotics: Science and Systems X, Berkeley, CA. ISBN: 978-0-9923747-0-
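    The final articulation-learning step lends itself to a small illustration. The sketch below is not the authors' estimator; it assumes the relative poses of one rigid component expressed in another component's frame have already been recovered over the demonstration, and it classifies the connecting joint with arbitrary thresholds.

```python
# Rough, illustrative joint classification from a sequence of relative poses
# (R_t, t_t) of one rigid component in another's frame. Thresholds are arbitrary.
import numpy as np

def rotation_angle(R):
    """Rotation angle of a 3x3 rotation matrix, in radians."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def classify_joint(rel_rotations, rel_translations,
                   angle_thresh=0.1, trans_thresh=0.01):
    angles = np.array([rotation_angle(R) for R in rel_rotations])
    trans = np.asarray(rel_translations)
    if angles.max() - angles.min() > angle_thresh:
        return "revolute"                 # relative orientation changes
    if np.linalg.norm(trans.max(axis=0) - trans.min(axis=0)) > trans_thresh:
        return "prismatic"                # translation varies, orientation fixed
    return "fixed"
```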

    Synthesis of force-closure grasps for a 3D articulated object with 3 links

    Get PDF
    This work addresses the problem of force-closure grasp synthesis for a 3D articulated object composed of 3 links, considering frictionless contacts. The surface of each link is represented by a finite set of points. First, a methodology is presented to define the generalized force space generated by forces applied to the links of a 3D articulated object. Second, the algorithm that computes the set of points allowing a force-closure grasp is described. The approach has been implemented and some illustrative examples are included in the article.
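    As a hedged illustration of the force-closure condition underlying this kind of synthesis, the sketch below applies the standard frictionless test for a single rigid body: the contact wrenches must positively span the 6D wrench space (full rank plus a strictly positive combination summing to zero). This is generic textbook material rather than the authors' algorithm, and the helper names and SciPy dependency are assumptions.

```python
# Standard frictionless force-closure test for one rigid body (illustrative):
# rank(W) = 6 and the origin is a strictly positive combination of the wrenches.
import numpy as np
from scipy.optimize import linprog

def wrench(point, normal):
    """6D unit wrench (force, torque) of a frictionless contact."""
    n = np.asarray(normal, float) / np.linalg.norm(normal)
    return np.concatenate([n, np.cross(point, n)])

def is_force_closure(points, normals):
    W = np.column_stack([wrench(p, n) for p, n in zip(points, normals)])
    if np.linalg.matrix_rank(W) < W.shape[0]:
        return False                                   # wrenches do not span R^6
    m = W.shape[1]
    # maximize s  s.t.  W @ lam = 0,  sum(lam) = 1,  lam_i >= s >= 0
    c = np.zeros(m + 1); c[-1] = -1.0                  # linprog minimizes, so use -s
    A_eq = np.vstack([np.hstack([W, np.zeros((W.shape[0], 1))]),
                      np.hstack([np.ones((1, m)), np.zeros((1, 1))])])
    b_eq = np.concatenate([np.zeros(W.shape[0]), [1.0]])
    A_ub = np.hstack([-np.eye(m), np.ones((m, 1))])    # s - lam_i <= 0
    b_ub = np.zeros(m)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m + [(0, 1)])
    return bool(res.success) and res.x[-1] > 1e-9
```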

    Automatic 2D to Stereoscopic Video Conversion for 3DTV

    Get PDF
    In this thesis we address the problem of automatically converting a video filmed with a single camera into stereoscopic content tailored for viewing on 3D TVs. We present two techniques: (a) a non-parametric approach which does not require extensive training and produces good results for simple rigid scenes, and (b) a deep learning approach able to handle dynamic changes in the scene. Both proposed solutions include two stages: depth generation and rendering. For the first stage, the non-parametric approach utilizes an energy-based optimization, while the deep learning approach uses a multi-scale convolutional neural network to address the complex problem of depth estimation from a single image. Depth maps are generated based on the input RGB images. We reformulate and simplify the process of generating the virtual camera's depth map and show how it can be used to render an anaglyph image. Anaglyph stereo was used for demonstration only because of the wide availability of red/cyan glasses; however, this does not limit the applicability of the proposed technique to other stereo formats. Finally, we have extensively tested the proposed approaches and present the results.
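    The rendering stage described above can be illustrated with a simplified depth-image-based sketch: shift pixels of the input frame according to a disparity derived from the estimated depth map, then fuse the original and the synthesized view into a red/cyan anaglyph. Hole filling, occlusion handling, and the thesis's exact disparity model are omitted, and max_disparity is an assumed tuning parameter.

```python
# Simplified anaglyph rendering from an RGB frame plus an estimated depth map.
import numpy as np

def render_anaglyph(rgb, depth, max_disparity=24):
    """rgb: HxWx3 uint8, depth: HxW in [0, 1] (1 = near). Returns HxWx3 uint8."""
    h, w, _ = rgb.shape
    disparity = (depth * max_disparity).astype(int)
    virtual = np.zeros_like(rgb)          # synthesized view; holes stay black
    cols = np.arange(w)
    for y in range(h):
        new_cols = np.clip(cols - disparity[y], 0, w - 1)  # shift pixels left
        virtual[y, new_cols] = rgb[y, cols]
    anaglyph = np.empty_like(rgb)
    anaglyph[..., 0] = rgb[..., 0]        # red channel from the original view
    anaglyph[..., 1:] = virtual[..., 1:]  # green/blue from the virtual view
    return anaglyph
```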

    Occlusion-Aware Multi-View Reconstruction of Articulated Objects for Manipulation

    Get PDF
    The goal of this research is to develop algorithms using multiple views to automatically recover complete 3D models of articulated objects in unstructured environments and thereby enable a robotic system to facilitate further manipulation of those objects. First, an algorithm called Procrustes-Lo-RANSAC (PLR) is presented. Structure-from-motion techniques are used to capture 3D point cloud models of an articulated object in two different configurations. Procrustes analysis, combined with a locally optimized RANSAC sampling strategy, facilitates a straightforward geometric approach to recovering the joint axes, as well as classifying them automatically as either revolute or prismatic. The algorithm does not require prior knowledge of the object, nor does it make any assumptions about the planarity of the object or scene. Second, with the resulting articulated model, a robotic system is able either to manipulate the object along its joint axes at a specified grasp point in order to exercise its degrees of freedom, or to move its end effector to a particular position even if that point is not visible in the current view. This is one of the main advantages of the occlusion-aware approach: because the models capture all sides of the object, the robot has knowledge of parts of the object that are not visible in the current view. Experiments with a PUMA 500 robotic arm demonstrate the effectiveness of the approach on a variety of real-world objects containing both revolute and prismatic joints. Third, we improve the proposed approach by using an RGBD sensor (Microsoft Kinect), which yields a depth value for each pixel directly rather than requiring correspondences to establish depth. The KinectFusion algorithm is applied to produce a single high-quality, geometrically accurate 3D model from which the rigid links of the object are segmented and aligned, allowing the joint axes to be estimated using the geometric approach. The improved algorithm does not require artificial markers attached to objects, yields much denser 3D models, and reduces the computation time.
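    The Procrustes step at the core of PLR can be sketched compactly: estimate the least-squares rigid transform aligning corresponding 3D points of one link between the two configurations. The sketch below is an illustrative Kabsch/Procrustes solver, not the authors' code; the surrounding RANSAC loop, segmentation, and joint-axis classification are not shown.

```python
# Illustrative Procrustes/Kabsch alignment of corresponding 3D points.
import numpy as np

def procrustes_rigid(P, Q):
    """Least-squares R, t such that R @ P_i + t ~= Q_i (P, Q are Nx3 arrays)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # closest proper rotation
    t = cQ - R @ cP
    return R, t
```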

    Occlusion-Aware Reconstruction and Manipulation of 3D Articulated Objects

    No full text
    Abstract — We present a method to recover complete 3D models of articulated objects. Structure-from-motion techniques are used to capture 3D point cloud models of the object in two different configurations. A novel combination of Procrustes analysis and RANSAC facilitates a straightforward geometric approach to recovering the joint axes, as well as classifying them automatically as either revolute or prismatic. With the resulting articulated model, a robotic system is able to manipulate the object along its joint axes at a specified grasp point in order to exercise its degrees of freedom. Because the models capture all sides of the object, they are occlusion-aware, enabling the robotic system to plan paths to parts of the object that are not visible in the current view. Our algorithm does not require prior knowledge of the object, nor does it make any assumptions about the planarity of the object or scene. Experiments with a PUMA 500 robotic arm demonstrate the effectiveness of the approach on a variety of objects with both revolute and prismatic joints.
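    To illustrate the joint-axis recovery idea in this abstract, the sketch below reads a revolute axis or prismatic direction off the relative transform between two aligned links. The paper's full geometric procedure also recovers a point on the axis and works on RANSAC-filtered correspondences; the threshold here is arbitrary.

```python
# Hedged sketch: joint axis direction from the relative motion (R_rel, t_rel)
# of one link with respect to another between the two configurations.
import numpy as np

def joint_axis(R_rel, t_rel, angle_thresh=0.1):
    """Classify the joint and return its axis/direction as a unit vector."""
    angle = np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0))
    if angle > angle_thresh:
        # revolute: rotation axis from the skew-symmetric part of R_rel
        axis = np.array([R_rel[2, 1] - R_rel[1, 2],
                         R_rel[0, 2] - R_rel[2, 0],
                         R_rel[1, 0] - R_rel[0, 1]])
        return "revolute", axis / np.linalg.norm(axis)
    # prismatic: sliding direction is the relative translation
    return "prismatic", np.asarray(t_rel, float) / np.linalg.norm(t_rel)
```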