
    Real-time tracking of 3D elastic objects with an RGB-D sensor

    This paper presents a method to track, in real time, a 3D textureless object undergoing large deformations, such as elastic ones, as well as rigid motions, using the point cloud data provided by an RGB-D sensor. This solution is expected to be useful for enhanced manipulation by humanoid robotic systems. Our framework relies on a prior visual segmentation of the object in the image. The segmented point cloud is registered first in a rigid manner and then by non-rigidly fitting the mesh, based on the Finite Element Method to model elasticity, and on geometrical point-to-point correspondences to compute the external forces exerted on the mesh. The real-time performance of the system is demonstrated on synthetic and real data involving challenging deformations and motions.
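    The non-rigid stage described above can be caricatured in a few lines: each mesh vertex is pulled toward its nearest observed point (the geometrical point-to-point correspondences acting as external forces), while an elastic term resists the displacement. The following is a minimal sketch only; it replaces the paper's FEM elasticity with a naive spring-to-rest-shape term, and all names and constants are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def track_step(vertices, rest, cloud, k_ext=1.0, k_elastic=0.5, dt=0.1):
        """One hypothetical non-rigid update: pull each vertex toward its
        nearest observed point (external force from point-to-point
        correspondences) while a simplified elastic term (a crude stand-in
        for the paper's FEM model) pulls it back toward the rest shape."""
        # brute-force nearest-neighbour correspondence for each vertex
        d = np.linalg.norm(vertices[:, None, :] - cloud[None, :, :], axis=2)
        nearest = cloud[np.argmin(d, axis=1)]
        f_ext = k_ext * (nearest - vertices)       # data-attachment force
        f_el = k_elastic * (rest - vertices)       # simplified elasticity
        return vertices + dt * (f_ext + f_el)      # explicit Euler step
    ```

    Iterating this step drives the mesh toward an equilibrium between the observed point cloud and the elastic rest shape; a real implementation would use an FEM stiffness matrix and a spatial index for the correspondences.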

    Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects

    In this paper we introduce Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects (using either motion or semantic cues) while simultaneously tracking and reconstructing their 3D shape in real time. We use a multiple-model fitting approach in which each object can move independently from the background and still be effectively tracked, its shape fused over time using only the information from pixels associated with that object label. Previous attempts to deal with dynamic scenes have typically treated moving regions as outliers, and consequently do not model their shape or track their motion over time. In contrast, we enable the robot to maintain a 3D model for each segmented object and to improve it over time through fusion. As a result, our system lets a robot maintain a scene description at the object level, which has the potential to allow interactions with its working environment, even in the case of dynamic scenes. Comment: International Conference on Robotics and Automation (ICRA) 2017, http://visual.cs.ucl.ac.uk/pubs/cofusion, https://github.com/martinruenz/co-fusio
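    The key data-association idea, fusing each measurement only into the model of the object it was labelled with, can be sketched as follows. This is a toy illustration under stated assumptions (points stored as raw arrays, labels already computed by the segmentation stage); Co-Fusion itself fuses into per-object surfel maps, which this sketch does not attempt.

    ```python
    import numpy as np

    def fuse_frame(models, points, labels):
        """Hypothetical per-object fusion step in the spirit of Co-Fusion:
        each 3D point is appended only to the model matching its object
        label, so independently moving objects are reconstructed separately
        from the background rather than discarded as outliers."""
        for obj_id in np.unique(labels):
            pts = points[labels == obj_id]
            if obj_id in models:
                models[obj_id] = np.vstack([models[obj_id], pts])
            else:
                models[obj_id] = pts
        return models
    ```

    Over successive frames each entry of `models` grows independently, which is what allows per-object tracking and refinement over time.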

    RGB-D datasets using microsoft kinect or similar sensors: a survey

    RGB-D data has turned out to be a very useful representation of an indoor scene for solving fundamental computer vision problems. It combines the advantages of the color image, which provides appearance information about an object, with those of the depth image, which is immune to variations in color, illumination, rotation angle and scale. With the invention of the low-cost Microsoft Kinect sensor, initially intended for gaming but later a popular device for computer vision, high-quality RGB-D data can be acquired easily. In recent years, more and more RGB-D image/video datasets dedicated to various applications have become available, and these are of great importance for benchmarking the state of the art. In this paper, we systematically survey popular RGB-D datasets for different applications, including object recognition, scene classification, hand gesture recognition, 3D simultaneous localization and mapping, and pose estimation. We provide insights into the characteristics of each important dataset, and compare the popularity and difficulty of those datasets. Overall, the main goal of this survey is to give a comprehensive description of the available RGB-D datasets and thus to guide researchers in selecting suitable datasets for evaluating their algorithms.

    Tracking an elastic object with an RGB-D sensor for a pizza chef robot

    This paper presents a method to track, in real time, a 3D object undergoing large deformations, such as elastic ones, and fast rigid motions, using the point cloud data provided by an RGB-D sensor. This solution is intended to contribute to humanoid robotic manipulation. Our framework relies on a prior visual segmentation of the object in the image. The segmented point cloud is then registered first in a rigid manner and then by non-rigidly fitting the mesh, based on the Finite Element Method to model elasticity and on geometrical point-to-point correspondences to compute the external forces exerted on the mesh. The real-time performance of the system is demonstrated on real data involving challenging deformations and motions, for a pizza dough ideally to be manipulated by a chef robot.

    Tracking Fractures of Deformable Objects in Real-Time with an RGB-D Sensor

    This paper introduces a method able to track, in real time, 3D elastic deformable objects that undergo fractures, using the point cloud data provided by an RGB-D sensor. Our framework relies on a prior visual segmentation of the object in the image. The segmented point cloud is registered by non-rigidly fitting the mesh, based on the Finite Element Method to physically model elasticity, and on geometrical point-to-point correspondences to compute the external forces exerted on the mesh. Fractures are handled by processing the stress tensors computed on the mesh of the FEM model in order to detect fracturable nodes. Local remeshing around fracturable nodes is then performed to propagate the fracture. The real-time performance of the system is demonstrated on real data involving various deformations and fractures.
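    A common way to turn per-node stress tensors into a fracture decision, and a plausible reading of the detection step above, is to threshold the maximal principal stress (the largest eigenvalue of the symmetric Cauchy stress tensor). The sketch below shows only that thresholding; the paper's actual criterion and the local remeshing step are more involved, and the function name and threshold are assumptions.

    ```python
    import numpy as np

    def fracturable_nodes(stress_tensors, threshold):
        """Flag mesh nodes whose maximal principal stress exceeds a
        threshold. stress_tensors has shape (n, 3, 3): one symmetric
        Cauchy stress tensor per node. Returns indices of candidate
        fracturable nodes (a hedged sketch of the detection step)."""
        # eigvalsh returns eigenvalues in ascending order; take the largest
        principal = np.linalg.eigvalsh(stress_tensors)[:, -1]
        return np.where(principal > threshold)[0]
    ```

    The flagged indices would then seed the local remeshing that propagates the fracture through the mesh.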