42 research outputs found

    Exploitation of time-of-flight (ToF) cameras

    This technical report reviews the state of the art in the field of ToF cameras: their advantages, their limitations, and their present-day applications, sometimes in combination with other sensors. Even though ToF cameras provide neither higher resolution nor a larger ambiguity-free range than other range map estimation systems, advantages such as registered depth and intensity data at a high frame rate, compact design, low weight and reduced power consumption have motivated their use in numerous areas of research. In robotics, these areas range from mobile robot navigation and map building to vision-based human motion capture and gesture recognition, showing particularly great potential in object modeling and recognition. Preprint
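
    As a back-of-the-envelope illustration of the ambiguity-free range mentioned above, the maximum unambiguous distance of an AMCW ToF camera is set by its modulation frequency. The sketch below is not from the report; the 30 MHz figure is an assumed example value.

        # Minimal sketch: unambiguous range of an AMCW ToF camera.
        # The 30 MHz modulation frequency is an assumed example value.
        C = 299_792_458.0  # speed of light, m/s

        def ambiguity_free_range(f_mod_hz):
            """Maximum unambiguous distance for a given modulation frequency."""
            return C / (2.0 * f_mod_hz)

        print(ambiguity_free_range(30e6))  # ~5.0 m at 30 MHz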

    ToF cameras for active vision in robotics

    ToF cameras are now a mature technology that is being widely adopted to provide sensory input to robotic applications. Depending on the nature of the objects to be perceived and the viewing distance, we distinguish two groups of applications: those requiring capture of the whole scene and those centered on an object. We demonstrate that it is in this last group of applications, in which the robot has to locate and possibly manipulate an object, that the distinctive characteristics of ToF cameras can be better exploited. After presenting the physical sensor features and the calibration requirements of such cameras, we review some representative works, highlighting for each one which of the distinctive ToF characteristics have been most essential. Even if only at low resolution, the acquisition of 3D images at frame rate is one of the most important features, as it enables quick background/foreground segmentation. A common use is in combination with classical color cameras. We present three developed applications, using a mobile robot and a robotic arm, to exemplify with real images some of the stated advantages. This work was supported by the EU project GARNICS FP7-247947, by the Spanish Ministry of Science and Innovation under project PAU+ DPI2011-27510, and by the Catalan Research Commission through SGR-00155. Peer Reviewed
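
    To illustrate the quick background/foreground segmentation that frame-rate depth enables, here is a minimal NumPy sketch; it is an illustration, not the authors' implementation, and the near/far limits are assumed values.

        import numpy as np

        def foreground_mask(depth_m, near=0.3, far=1.2):
            """Keep pixels whose depth falls inside an assumed working range."""
            depth = np.asarray(depth_m, dtype=float)
            valid = np.isfinite(depth) & (depth > 0)  # discard invalid returns
            return valid & (depth >= near) & (depth <= far)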

    SR-4000 and CamCube3.0 Time of Flight (ToF) Cameras: Tests and Comparison

    In this paper, experimental comparisons between two Time-of-Flight (ToF) cameras are reported in order to test their performance and to give some procedures for testing data delivered by this kind of technology. In particular, the SR-4000 camera by Mesa Imaging AG and the CamCube3.0 by PMD Technologies have been evaluated, since they perform well and are well known to researchers dealing with ToF cameras. After a brief overview of commercial ToF cameras available on the market and the main specifications of the tested devices, two topics are presented. First, the influence of camera warm-up on distance measurement is analyzed: a warm-up of 40 minutes is suggested to obtain measurement stability, especially in the case of the CamCube3.0 camera, which exhibits distance measurement variations of several centimeters. Second, the variation of distance measurement precision with integration time is presented: distance measurement precisions of a few millimeters are obtained in both cases. Finally, a comparison between the two cameras based on the experiments and some information about future work on the evaluation of sunlight influence on distance measurements are reported.
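
    A simple way to reproduce this kind of precision test is to record repeated range images of a static scene and take the per-pixel standard deviation; the sketch below is a generic illustration under that assumption, not the paper's actual procedure.

        import numpy as np

        def per_pixel_precision(frames):
            """Std. dev. over N repeated range images of a static scene.

            frames: list of (H, W) arrays in metres; returns an (H, W) map
            whose values estimate the distance measurement precision.
            """
            stack = np.stack(frames)          # shape (N, H, W)
            return stack.std(axis=0, ddof=1)  # sample standard deviation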

    ToF cameras for eye-in-hand robotics

    This work was supported by the Spanish Ministry of Science and Innovation under project PAU+ DPI2011-27510, by the EU Project IntellAct FP7-ICT2009-6-269959 and by the Catalan Research Commission through SGR-00155. Peer Reviewed

    The registration of point cloud data from range imaging camera

    Measurement and 3D modelling techniques have evolved through parallel technological improvements, with every new technique providing an opportunity for low-cost and fast measurements. The latest method for 3D measurement is the range imaging (RIM) camera. RIM cameras have opened a new period in photogrammetry and 3D modelling applications and have brought new research areas for scientists. The measurement capabilities, accuracies and application areas of RIM cameras have increased over time. In this study, the registration of point cloud data from a RIM camera was investigated in order to perform 3D modelling tasks.
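
    Registration of range-camera point clouds is commonly done with variants of the Iterative Closest Point (ICP) algorithm; the study's own method is not given here, so the following is a generic single-iteration sketch using the Kabsch/SVD rigid fit.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(src, dst):
            """One ICP iteration: match each src point to its nearest dst
            point, then solve the best-fit rigid transform via SVD."""
            matched = dst[cKDTree(dst).query(src)[1]]
            mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
            H = (src - mu_s).T @ (matched - mu_d)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:    # guard against a reflection solution
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_d - R @ mu_s
            return src @ R.T + t, R, t  # transformed source, rotation, translation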

    Task-oriented viewpoint planning for free-form objects

    A thesis submitted to the Universitat Politècnica de Catalunya to obtain the degree of Doctor of Philosophy. Doctoral programme: Automatic Control, Robotics and Computer Vision. This thesis was completed at: Institut de Robòtica i Informàtica Industrial, CSIC-UPC.
    This thesis deals with active sensing and its use in real exploration tasks under both scene ambiguities and measurement uncertainties. While object modeling is the implicit objective of most active sensing algorithms, in this work we have explored new strategies to deal with more generic and more complex tasks. Active sensing requires the ability to move the perceptual system to gather new information. Our approach uses a robot manipulator with a 3D Time-of-Flight (ToF) camera attached to the end-effector. As a complex task, we have focused our attention on plant phenotyping. Plants are complex objects, with leaves that change their position and size over time. Valid viewpoints for a certain plant are hardly valid for a different one, even one belonging to the same species. Some instruments, such as chlorophyll meters or disk sampling tools, require being precisely positioned over a particular location of the leaf. Therefore, their use requires the modeling of specific regions of interest of the plant, including the free space needed for avoiding obstacles and approaching the leaf with the tool. It is easy to observe that predefined camera trajectories are not valid here, and that with one single view it is usually very difficult to acquire all the required information.
    The overall objective of this thesis is to solve complex active sensing tasks by embedding their exploratory goal into a pre-estimated geometrical model, using information gain as the fundamental guideline for the reward function. The main contributions can be divided into two groups: first, the evaluation of ToF cameras and their calibration to assess the uncertainty of the measurements (presented in Part I); and second, the proposal of a framework capable of embedding the task, modeled as free and occupied space, that takes the modeled sensor's uncertainty into account to improve the action selection algorithm (presented in Part II). This thesis has given rise to 14 publications, including 5 in indexed journals, and its results have been used in the GARNICS European project.
    The complete framework is based on the Next-Best-View methodology and can be summarized in the following main steps. First, an initial view of the object (e.g., a plant) is acquired. From this initial view and given a set of candidate viewpoints, the expected gain obtained by moving the robot and acquiring the next image is computed. This computation takes into account the uncertainty of all the different pixels of the sensor, the expected information based on a predefined task model, and the possible occlusions. Once the most promising view is selected, the robot moves, takes a new image, integrates this information into the model, and evaluates the set of remaining views again. Finally, the task terminates when enough information has been gathered. In our examples, this process enables the robot to perform a measurement on top of a leaf. The key ingredient is to model the complexity of the task in a layered representation of free-occupied occupancy grid maps.
    This allows us to naturally encode the requirements of the task, to maintain and update the belief state with the measurements performed, to simulate and compute the expected gains of all potential viewpoints, and to encode the termination condition. During this work, the technology of ToF cameras has evolved enormously. Nowadays it is very popular, and ToF cameras are already embedded in some consumer devices. Although the quality of the measurements has improved considerably, it is still not uniform across the sensor. We believe, as demonstrated in various experiments in this work, that a careful modeling of the sensor's uncertainty is highly beneficial and helps to design better decision systems. In our case, it enables a more realistic computation of the information gain measure and, consequently, a better selection criterion.
    This work has been partially supported by a JAE fellowship of the Spanish Scientific Research Council (CSIC), the Spanish Ministry of Science and Innovation, the Catalan Research Commission and the European Commission under the research projects: DPI2008-06022: PAU: Percepción y acción ante incertidumbre. DPI2011-27510: PAU+: Perception and Action in Robotics Problems with Large State Spaces. 201350E102: MANIPlus: Manipulación robotizada de objetos deformables. 2009-SGR-155: SGR ROBÒTICA: Grup de recerca consolidat - Grup de Robòtica. FP6-2004-IST-4-27657: EU PACO PLUS project. FP7-ICT-2009-4-247947: GARNICS: Gardening with a cognitive system. FP7-ICT-2009-6-269959: IntellAct: Intelligent observation and execution of Actions and manipulations. Peer Reviewed
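
    As a rough sketch of the Next-Best-View selection step described above (a simplification under assumed data structures, not the thesis code), the expected information gain of each candidate viewpoint can be scored as the entropy of the occupancy cells it would observe.

        import numpy as np

        def cell_entropy(p):
            """Shannon entropy (bits) of occupancy probabilities."""
            p = np.clip(p, 1e-6, 1 - 1e-6)
            return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

        def next_best_view(belief, visible_cells_per_view):
            """belief: flat array of cell occupancy probabilities.
            visible_cells_per_view: per-candidate index arrays of the cells
            that viewpoint would see (occlusions already accounted for).
            Returns the index of the viewpoint with the highest expected gain."""
            gains = [cell_entropy(belief[cells]).sum()
                     for cells in visible_cells_per_view]
            return int(np.argmax(gains))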

    Investigation of jitter on full-field amplitude modulated continuous wave time-of-flight range imaging cameras

    Time-of-flight (ToF) range imaging cameras indirectly measure the time taken by light to travel from the modulated light source to the scene and back to the camera; it is this principle that depth cameras use to perform depth measurements. This thesis focuses on ToF cameras based on amplitude modulated continuous wave (AMCW) lidar techniques, which measure the phase difference between the emitted and reflected light signals. Due to their portable size, practical design, low weight and low energy consumption, these cameras are in high demand for many applications. Commercially available AMCW ToF cameras have relatively high noise levels due to electronic sources such as shot noise, reset noise, amplifier noise, crosstalk, analogue-to-digital converter quantization and multipath light interference. Many noise sources in these cameras, such as harmonic contamination, non-linearity, multipath interference and light scattering, are well investigated. In contrast, the effect of electronic jitter as a noise source in ranging cameras is barely studied. Jitter is defined as any timing movement with reference to an ideal signal. An investigation of the effect of jitter on range imaging is important because timing errors could potentially cause errors in measuring phase, and thus in range. The purpose of this research is to investigate the effect of jitter on range measurement in AMCW ToF range imaging. This is achieved by three main contributions: first, the development of a common algorithm for measurement of the jitter present in signals from depth cameras; secondly, the proposal of a cost-effective alternative method to measure jitter by using a software defined radio receiver; and finally, an analysis of the influence of jitter on range measurement. For the first contribution, an algorithm for jitter extraction from a signal without access to a reference clock signal is proposed. The proposed algorithm is based upon Fourier analysis combined with signal processing techniques, and it can be used for real-time jitter extraction on a modulated signal of any shape (sinusoidal, triangular, rectangular). The method is used to measure the amount of jitter in the light signals of two AMCW ToF range imaging cameras, namely the MESA Imaging SwissRanger 4000 and the SoftKinetic DepthSense 325. Periodic and random jitter were found to be present in the light sources of both cameras, with the MESA camera notably worse, showing random jitter of (159.6 +/- 0.1) ps RMS in amplitude. Next, in a novel approach, an inexpensive software defined radio (SDR) USB dongle is used with the proposed algorithm to extract the jitter in the light signal of the above two ToF cameras. This is a cost-effective alternative to an expensive real-time medium-speed digital oscilloscope. However, it is shown that this method has some significant limitations: (1) it can measure jitter only up to half of the intermediate frequency obtained from the down-shift of the amplified radio frequency with the local oscillator, which is less than the Nyquist frequency of the dongle; and (2) if the number of samples per cycle captured from this dongle is not sufficient, then the jitter extraction does not succeed, since the signal is not properly (smoothly) represented. Finally, the influence of periodic and random jitter on range measurements made with AMCW range imaging cameras is studied.
    An analytical model for the effect of periodic jitter on range measurements under heterodyne and homodyne operation of AMCW ToF range imaging cameras is obtained in the frequency domain. The analytical model is tested through simulated data with various parameters in the system. The product of the angular modulation frequency of the camera and the amplitude of the periodic jitter is a characteristic parameter for the phase error due to the presence of periodic jitter. We found that for currently available AMCW cameras (modulation frequency less than 100 MHz), neither periodic nor random jitter has a measurable effect on range measurement. But with the increases in modulation frequency and decreases in integration period likely in the near future, periodic jitter may have a measurable effect on ranging. The influence of random jitter is also investigated by deriving an analytical model based on stochastic calculus, fundamental statistics and Fourier analysis. It is assumed that the random jitter follows a Gaussian distribution. Monte Carlo simulation is performed on the model obtained for a 1 ms integration period. We found that increasing the modulation frequency above approximately 400 MHz with random jitter of 140 ps has a measurable effect on ranging.
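
    For context on how AMCW phase translates into range (background to the jitter analysis above; this is the standard four-bucket formulation, not code from the thesis):

        import numpy as np

        C = 299_792_458.0  # speed of light, m/s

        def amcw_range(a0, a1, a2, a3, f_mod_hz):
            """Range from four correlation samples taken at phase offsets of
            0, 90, 180 and 270 degrees: phi = atan2(a3 - a1, a0 - a2), and
            d = c * phi / (4 * pi * f_mod). A phase error (e.g. from jitter)
            maps directly onto a range error through this relation."""
            phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
            return C * phase / (4 * np.pi * f_mod_hz)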

    3D data fusion from multiple sensors and its applications

    The introduction of depth cameras in the mass market helped make computer vision applicable to many real-world applications, such as human interaction in virtual environments, autonomous driving, robotics and 3D reconstruction. All these problems were originally tackled by means of standard cameras, but the intrinsic ambiguity in two-dimensional images led to the development of depth camera technologies. Stereo vision was first introduced to provide an estimate of the 3D geometry of the scene. Structured light depth cameras were developed to use the same concepts as stereo vision but overcome some of the problems of passive technologies. Finally, Time-of-Flight (ToF) depth cameras solve the same depth estimation problem by using a different technology. This thesis focuses on the acquisition of depth data from multiple sensors and presents techniques to efficiently combine the information of different acquisition systems. The three main technologies developed to provide depth estimation are first reviewed, presenting the operating principles and practical issues of each family of sensors. The use of multiple sensors is then investigated, providing practical solutions to the problems of 3D reconstruction and gesture recognition. Data from stereo vision systems and ToF depth cameras are combined to provide a higher-quality depth map, with a confidence measure of the depth data from the two systems used to guide the fusion. The lack of datasets with data from multiple sensors is addressed by proposing a system for the collection of data and ground truth depth, and a tool to generate synthetic data from standard cameras and ToF depth cameras. For gesture recognition, a depth camera is paired with a Leap Motion device to boost the performance of the recognition task. A set of features from the two devices is used in a classification framework based on Support Vector Machines and Random Forests.
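
    The confidence-guided fusion of stereo and ToF depth maps can be pictured as a per-pixel weighted average; the sketch below is a simplified illustration of that idea, not the thesis' actual fusion algorithm.

        import numpy as np

        def fuse_depth(d_stereo, c_stereo, d_tof, c_tof):
            """Per-pixel fusion of two depth maps, weighted by confidence.

            d_*: (H, W) depth maps in metres; c_*: (H, W) confidences in [0, 1].
            Pixels where both confidences are zero are returned as 0 (invalid).
            """
            w = c_stereo + c_tof
            fused = (c_stereo * d_stereo + c_tof * d_tof) / np.maximum(w, 1e-9)
            return np.where(w > 0, fused, 0.0)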

    Mobile robot map building from time-of-flight camera

    A map building algorithm for mobile robots is introduced in this paper. The perceived environment is represented in a map in which each cell contains a probability of the presence of an object or part of an object. The environment is represented as a collection of modular occupancy grids, which are added to the map as the mobile robot finds objects outside the existing grids. In this approach, a time-of-flight (ToF) camera is exploited as a range sensor for mapping. Indeed, one of the areas where ToF sensors are adequate is obstacle avoidance, because the detection region is not only horizontal but also vertical, allowing the detection of obstacles with complex shapes. The main steps of the map building algorithm are extensively described in the paper. The results of testing the algorithm in two different indoor environments are presented.
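
    Occupancy grid cells of the kind described here are typically maintained with a Bayesian log-odds update; the following minimal sketch (with assumed increment values, not the paper's) shows the usual scheme.

        import numpy as np

        L_OCC, L_FREE = 0.85, -0.4  # assumed log-odds increments per observation

        def update_cell(logodds, hit):
            """Log-odds update: raise the cell on a hit, lower it on a miss."""
            return logodds + (L_OCC if hit else L_FREE)

        def occupancy_probability(logodds):
            """Convert log-odds back to a probability of presence."""
            return 1.0 / (1.0 + np.exp(-logodds))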