
    Object modeling using a ToF camera under an uncertainty reduction approach

    Paper presented at ICRA 2010, held in Anchorage, Alaska, May 3-7, 2010. Time-of-Flight (ToF) cameras deliver 3D images at 25 fps, offering great potential for developing fast object modeling algorithms. Surprisingly, this potential has not been extensively exploited so far. One reason is that the acquired depth images are noisy, which makes most of the available registration algorithms hardly applicable. A further difficulty is that the transformations between views are in general not accurately known, a circumstance that multi-view object modeling algorithms do not handle properly under noisy conditions. In this work, we take both uncertainty sources (in images and in camera poses) into account to generate spatially consistent 3D object models, fusing multiple views with a probabilistic approach. We propose a method to compute the covariance of the registration process and apply an iterative state estimation method to build object models under noisy conditions. This work was supported by the projects 'Perception, action & cognition through learning of object-action complexes' (4915), 'CONSOLIDER-INGENIO 2010 Multimodal interaction in pattern recognition and computer vision' (V-00069), and 'Percepción y acción ante incertidumbre' (4803). It has also been partially supported by the Spanish Ministry of Science and Innovation under project DPI2008-06022, the MIPRCV Consolider Ingenio 2010 project, and the EU PACO PLUS project FP6-2004-IST-4-27657. S. Foix and G. Alenyà are supported by PhD and postdoctoral fellowships, respectively, from CSIC's JAE program. Peer Reviewed
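
    The fusion step described in this abstract, combining view estimates that each carry their own covariance, can be pictured with a minimal information-filter sketch. This is a generic Gaussian fusion of two estimates of the same pose parameters, not the paper's actual estimator, and all names below are illustrative:

    ```python
    import numpy as np

    def fuse_gaussian(mu_a, cov_a, mu_b, cov_b):
        """Fuse two Gaussian estimates of the same quantity in information form:
        the fused information matrix is the sum of the individual ones."""
        info_a = np.linalg.inv(cov_a)
        info_b = np.linalg.inv(cov_b)
        cov = np.linalg.inv(info_a + info_b)
        mu = cov @ (info_a @ mu_a + info_b @ mu_b)
        return mu, cov
    ```

    Iterating such an update over successive registered views, each weighted by its registration covariance, down-weights noisy views instead of averaging all views uniformly.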

    Using ToF and RGBD cameras for 3D robot perception and manipulation in human environments

    Robots, traditionally confined to factories, are nowadays moving into domestic and assistive environments, where they need to deal with complex object shapes, deformable materials, and pose uncertainties at human pace. To attain quick 3D perception, new cameras delivering registered depth and intensity images at a high frame rate hold a lot of promise, and many robotics researchers are therefore experimenting with structured-light RGBD and Time-of-Flight (ToF) cameras. In this paper both technologies are critically compared to help researchers evaluate their use on real robots. The focus is on 3D perception at close distances for the different types of objects that a robot may handle in a human environment. We review three robotics applications. The analysis of several performance aspects indicates the complementarity of the two camera types: the user-friendliness and higher resolution of RGBD cameras are counterbalanced by the capability of ToF cameras to operate outdoors and to perceive details. This research is partially funded by the EU GARNICS project FP7-247947, by CSIC project MANIPlus 201350E102, by the Spanish Ministry of Science and Innovation under project PAU+ DPI2011-27510, and by the Catalan Research Commission under Grant SGR-155. Peer Reviewed

    Tackling 3D ToF Artifacts Through Learning and the FLAT Dataset

    Scene motion, multiple reflections, and sensor noise introduce artifacts in the depth reconstruction performed by time-of-flight cameras. We propose a two-stage, deep-learning approach to address all of these sources of artifacts simultaneously. We also introduce FLAT, a synthetic dataset of 2000 ToF measurements that captures all of these nonidealities and allows different camera hardware to be simulated. Using the Kinect 2 camera as a baseline, we show improved reconstruction errors over state-of-the-art methods on both simulated and real data. Comment: ECCV 2018
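
    As a rough illustration of what a two-stage pipeline of this kind can look like, the sketch below has one network correcting the raw ToF measurements and a second refining the decoded depth. It is a toy stand-in in PyTorch, not the architecture from the paper; the nine-channel raw input and both network shapes are assumptions:

    ```python
    import torch.nn as nn

    class RawCorrection(nn.Module):
        """Stage 1 (toy stand-in): residual denoising of raw ToF correlation channels."""
        def __init__(self, channels=9):  # channel count is an assumption
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, channels, 3, padding=1),
            )

        def forward(self, raw):
            return raw + self.net(raw)

    class DepthRefinement(nn.Module):
        """Stage 2 (toy stand-in): residual refinement of the decoded depth map."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, depth):
            return depth + self.net(depth)
    ```

    Conventional depth decoding would sit between the two stages, letting the first stage attack noise and multipath where they originate, in the raw measurements.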

    ToF cameras for active vision in robotics

    ToF cameras are now a mature technology that is being widely adopted to provide sensory input to robotic applications. Depending on the nature of the objects to be perceived and the viewing distance, we distinguish two groups of applications: those that require capturing the whole scene and those centered on an object. We demonstrate that it is in this second group, in which the robot has to locate and possibly manipulate an object, that the distinctive characteristics of ToF cameras can be better exploited. After presenting the physical sensor features and the calibration requirements of such cameras, we review some representative works, highlighting for each one which of the distinctive ToF characteristics have been most essential. Even at low resolution, the acquisition of 3D images at frame rate is one of the most important features, as it enables quick background/foreground segmentation. A common use is in combination with classical color cameras. We present three developed applications, using a mobile robot and a robotic arm, to exemplify with real images some of the stated advantages. This work was supported by the EU project GARNICS FP7-247947, by the Spanish Ministry of Science and Innovation under project PAU+ DPI2011-27510, and by the Catalan Research Commission through SGR-00155. Peer Reviewed
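
    The quick background/foreground segmentation mentioned above follows almost directly from having dense depth at frame rate. A minimal sketch, assuming a fixed metric threshold (the 1.0 m value is an arbitrary close-range example, not a value from the paper):

    ```python
    import numpy as np

    def segment_foreground(depth, max_range=1.0):
        """Return a boolean mask of pixels closer than max_range metres.
        Zero depth marks invalid ToF returns and is treated as background."""
        valid = depth > 0
        return valid & (depth < max_range)
    ```

    With a color camera registered to the ToF sensor, the same mask can then crop the RGB image to the object of interest.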

    Task-driven active sensing framework applied to leaf probing

    This manuscript version is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/). This article presents a new method for actively exploring a 3D workspace with the aim of localizing regions relevant to a given task. Our method encodes the exploration route in a multi-layer occupancy grid map. This map, together with a multiple-view estimator and a maximum-information-gain gathering approach, incrementally provides a better understanding of the scene until the task termination criterion is reached. The approach is designed to be applicable to any task entailing 3D object exploration for which some previous knowledge of the object's approximate shape is available. Its suitability is demonstrated here for a leaf probing task using an eye-in-hand arm configuration in the context of a phenotyping application. Peer Reviewed. Postprint (author's final draft)
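
    The maximum-information-gain strategy can be pictured with a small sketch: score each candidate view by the total occupancy entropy of the grid cells it would observe, and pick the highest-scoring one. This is a generic entropy-based approximation, not the estimator from the article, and all names are illustrative (visible_cells maps each candidate view to the indices of the cells it observes):

    ```python
    import numpy as np

    def cell_entropy(p):
        """Shannon entropy of occupancy probabilities; 0.5 is maximally uncertain."""
        p = np.clip(p, 1e-6, 1.0 - 1e-6)
        return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

    def best_view(candidate_views, grid_probs, visible_cells):
        """Pick the view whose visible cells carry the most remaining entropy,
        i.e. the largest information gain if those cells were fully resolved."""
        gains = [cell_entropy(grid_probs[visible_cells[v]]).sum() for v in candidate_views]
        return candidate_views[int(np.argmax(gains))]
    ```

    Exploration of this kind terminates once the best achievable gain drops below the task's termination threshold.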

    3D sensor planning framework for leaf probing

    Paper presented at the International Conference on Intelligent Robots and Systems, held in Hamburg, Germany, September 28 to October 2, 2015. Modern plant phenotyping requires active sensing technologies and particular exploration strategies. This article proposes a new method for actively exploring a 3D region of space with the aim of localizing special areas of interest for manipulation tasks over plants. In our method, exploration is guided by a multi-layer occupancy grid map. This map, together with a multiple-view estimator and a maximum-information-gain gathering approach, incrementally provides a better understanding of the scene until a task termination criterion is reached. The approach is designed to be applicable to any task entailing 3D object exploration where some previous knowledge of the object's general shape is available. Its suitability is demonstrated here for an eye-in-hand arm configuration in a leaf probing application. This research has been partially funded by the CSIC project MANIPlus 201350E102. Peer Reviewed
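
    An occupancy grid guiding exploration is typically maintained with per-cell log-odds updates, so that repeated noisy observations of the same cell accumulate evidence. A minimal single-cell sketch; the increment values are illustrative, not taken from the paper:

    ```python
    import numpy as np

    L_OCC, L_FREE = 0.85, -0.4  # illustrative log-odds increments per observation

    def update_cell(logodds, hit):
        """Standard Bayesian log-odds update of one grid cell after a depth reading."""
        return logodds + (L_OCC if hit else L_FREE)

    def occupancy_probability(logodds):
        """Convert accumulated log-odds back to an occupancy probability."""
        return 1.0 - 1.0 / (1.0 + np.exp(logodds))
    ```

    Each layer of a multi-layer grid can then hold a different quantity (e.g. occupancy versus task relevance) while sharing the same spatial discretization.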

    3D Modelling from Real Data

    The genesis of a 3D model can follow two fundamentally different paths. The first is CAD modelling, where the shape is defined by a user's drawing actions, either operating with mathematical "bricks" such as B-Splines, NURBS, or subdivision surfaces (mathematical CAD modelling), or directly drawing small planar polygonal facets in space to approximate complex free-form shapes (polygonal CAD modelling). This approach can serve both ideal elements (a project, a fantasy shape in the mind of a designer, a 3D cartoon, etc.) and real objects; in the latter case the object first has to be surveyed in order to generate a drawing coherent with the real artifact. The second path starts from measurements of a real object: if the surveying process is not merely a rough acquisition of simple distances completed by a substantial amount of manual drawing, a scene can be modelled in 3D by capturing many points of its geometric features with a digital instrument and connecting them with polygons. The result resembles a polygonal CAD model, with the difference that the generated shape is an accurate 3D acquisition of a real object (reality-based polygonal modelling).

    Considering only devices operating on the ground, 3D capturing techniques for the generation of reality-based 3D models include passive sensors and image data (Remondino and El-Hakim, 2006), optical active sensors and range data (Blais, 2004; Shan & Toth, 2008; Vosselman and Maas, 2010), classical surveying (e.g. total stations or Global Navigation Satellite Systems, GNSS), 2D maps (Yin et al., 2009), or an integration of the aforementioned methods (Stumpfel et al., 2003; Guidi et al., 2003; Beraldin, 2004; Stamos et al., 2008; Guidi et al., 2009a; Remondino et al., 2009; Callieri et al., 2011). The choice depends on the required resolution and accuracy, object dimensions, location constraints, the instrument's portability and usability, surface characteristics, the working team's experience, the project's budget, the final goal, etc.

    Despite the potential of the image-based approach and its recent developments in automated and dense image matching, for non-experts the easy usability and reliability of optical active sensors in acquiring 3D data is generally a good reason to set image-based approaches aside. Moreover, the great advantage of active sensors is that they immediately deliver dense and detailed 3D point clouds whose coordinates are metrically defined. Image data, on the other hand, require some processing and a mathematical formulation to transform the two-dimensional image measurements into metric three-dimensional coordinates. Image-based modelling techniques (mainly photogrammetry and computer vision) are generally preferred for monuments or architectures with regular geometric shapes, for low-budget projects, when the working team has good experience, or when there are time or location constraints on data acquisition and processing. This chapter is intended as an updated review of reality-based 3D modelling in terrestrial applications, covering the different categories of 3D sensing devices and the related data processing pipelines.
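
    The mathematical formulation that turns two-dimensional image measurements into metric 3D coordinates can be illustrated with standard linear (DLT) two-view triangulation, sketched below under the assumption of known 3x4 projection matrices for both views:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Recover a 3D point from its pixel projections x1, x2 in two views
        with known 3x4 projection matrices P1, P2 (linear DLT method)."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)  # least-squares null vector of A
        X = vt[-1]
        return X[:3] / X[3]          # dehomogenize
    ```

    Active range sensors skip this step entirely, which is precisely the usability advantage the chapter attributes to them.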

    Active perception of deformable objects using 3D cameras

    Presented at the Workshop de Robótica Experimental, held in Seville, November 28-29, 2011. Perception and manipulation of rigid objects has received a lot of attention, and several solutions have been proposed. In contrast, dealing with deformable objects is a relatively new and challenging task, because they are more complex to model, their state is difficult to determine, and self-occlusions are common and hard to estimate. In this paper we present our progress and results in the perception of deformable objects, both using conventional RGB cameras and using active sensing strategies based on depth cameras. We provide insights into two different areas of application: grasping of textiles and plant leaf modelling. This research is partially funded by the EU GARNICS project FP7-247947, by the Spanish Ministry of Science and Innovation under projects DPI2008-06022 and MIPRCV Consolider Ingenio CSD2007-00018, and by the Catalan Research Commission. Peer Reviewed