
    Analysis and adaptation of integration time in PMD camera for visual servoing

    Depth perception of the objects in a scene can be useful for tracking or for applying visual servoing in mobile systems. 3D time-of-flight (ToF) cameras provide range images whose measurements arrive in real time and can improve these tasks. However, the distance computed from these range images varies strongly with the integration time parameter. This paper presents an analysis for the online adaptation of the integration time of ToF cameras. This online adaptation is necessary to capture the images in the best condition regardless of changes in the distance between camera and objects caused by the camera's movement when it is mounted on a robotic arm. This work is supported by the Spanish Ministry of Education and Science (MEC) through the research project DPI2008-02647, “Manipulación Inteligente mediante percepción háptica y control visual empleando una estructura articular ubidada en el robot manipulador”.
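    A minimal sketch of such an online adaptation, assuming a hypothetical camera API (grab(), set_integration_time()) and a simple proportional rule that keeps the mean signal amplitude near a target; the paper's actual adaptation law may differ:

        # Hypothetical ToF camera API; the proportional rule below is an
        # illustrative strategy, not necessarily the paper's method.
        def adapt_integration_time(camera, target_amplitude=1000.0,
                                   t_min_us=50, t_max_us=2000, gain=0.3):
            """Nudge the integration time (microseconds) so that the mean
            amplitude of the returned ToF signal stays near a target."""
            frame = camera.grab()                # range + amplitude image
            mean_amp = frame.amplitude.mean()
            # Too dim -> integrate longer; near saturation -> integrate less.
            error = (target_amplitude - mean_amp) / target_amplitude
            new_t = camera.integration_time * (1.0 + gain * error)
            camera.set_integration_time(int(min(max(new_t, t_min_us), t_max_us)))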

    Exploitation of time-of-flight (ToF) cameras

    This technical report reviews the state of the art in the field of ToF cameras: their advantages, their limitations, and their present-day applications, sometimes in combination with other sensors. Even though ToF cameras provide neither higher resolution nor a larger ambiguity-free range than other range-map estimation systems, advantages such as registered depth and intensity data at a high frame rate, compact design, low weight, and reduced power consumption have motivated their use in numerous areas of research. In robotics, these areas range from mobile robot navigation and map building to vision-based human motion capture and gesture recognition, showing particularly great potential in object modeling and recognition. Preprint

    ToF cameras for active vision in robotics

    ToF cameras are now a mature technology that is being widely adopted to provide sensory input to robotic applications. Depending on the nature of the objects to be perceived and the viewing distance, we distinguish two groups of applications: those that require capturing the whole scene and those centered on an object. We demonstrate that it is in this last group of applications, in which the robot has to locate and possibly manipulate an object, where the distinctive characteristics of ToF cameras can be better exploited. After presenting the physical sensor features and the calibration requirements of such cameras, we review some representative works, highlighting for each one which of the distinctive ToF characteristics have been most essential. Even at low resolution, the acquisition of 3D images at frame rate is one of the most important features, as it enables quick background/foreground segmentation. A common use is in combination with classical color cameras. We present three developed applications, using a mobile robot and a robotic arm, to exemplify with real images some of the stated advantages. This work was supported by the EU project GARNICS FP7-247947, by the Spanish Ministry of Science and Innovation under project PAU+ DPI2011-27510, and by the Catalan Research Commission through SGR-00155. Peer Reviewed
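    The quick background/foreground segmentation enabled by frame-rate depth can be illustrated with a minimal sketch (NumPy; the threshold and array names are illustrative assumptions, not the chapter's method):

        import numpy as np

        def segment_foreground(depth, max_range=1.5):
            """Label pixels closer than max_range (meters) as foreground.
            With 3D images at frame rate, this crude cut already separates
            a nearby object from the scene background."""
            valid = np.isfinite(depth) & (depth > 0)   # drop invalid pixels
            return valid & (depth < max_range)         # boolean foreground mask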

    ToF cameras for eye-in-hand robotics

    This work was supported by the Spanish Ministry of Science and Innovation under project PAU+ DPI2011-27510, by the EU Project IntellAct FP7-ICT2009-6-269959, and by the Catalan Research Commission through SGR-00155. Peer Reviewed

    Visuomotor Coordination in Reach-To-Grasp Tasks: From Humans to Humanoids and Vice Versa

    Understanding the principles involved in visually based coordinated motor control is one of the most fundamental and intriguing research problems across a number of areas, including psychology, neuroscience, computer vision, and robotics. Little is known about the computational functions that the central nervous system performs in order to provide a set of requirements for visually driven reaching and grasping. Additionally, despite several decades of advances in the field, the abilities of humanoids to perform similar tasks remain modest when they must operate in unstructured and dynamically changing environments. More specifically, our first focus is understanding the principles involved in human visuomotor coordination. Few behavioral studies have considered visuomotor coordination in natural, unrestricted, head-free movements in complex scenarios such as obstacle avoidance. To fill this gap, we provide an assessment of visuomotor coordination when humans perform prehensile tasks with obstacle avoidance, an issue that has received far less attention. Namely, we quantify the relationships between the gaze and arm-hand systems, so as to inform robotic models, and we investigate how the presence of an obstacle modulates this pattern of correlations.
    Second, to complement these observations, we provide a robotic model of visuomotor coordination, with and without the presence of obstacles in the workspace. The parameters of the controller are estimated solely from the human motion capture data of our human study. This controller has a number of interesting properties: it provides an efficient way to control the gaze, arm, and hand movements in a stable and coordinated manner. When facing perturbations while reaching and grasping, our controller adapts its behavior almost instantly while preserving coordination between the gaze, arm, and hand.
    In the third part of the thesis, we study the neuroscientific literature on primates. We stress the view that the cerebellum uses the cortical reference frame representation; by taking this representation into account, the cerebellum performs closed-loop programming of multi-joint movements and movement synchronization between the eye-head system, arm, and hand. Based on this investigation, we propose a functional architecture of cerebellar-cortical involvement and derive a number of improvements to our visuomotor controller for obstacle-free reaching and grasping. Because this model is devised by carefully taking into account the neuroscientific evidence, we are able to provide a number of testable predictions about the functions of the central nervous system in visuomotor coordination.
    Finally, we tackle the flow of visuomotor coordination in the direction from the arm-hand system to the visual system. We develop two models of motor-primed attention for humanoid robots. Motor-priming of attention is a mechanism that prioritizes visual processing with respect to motor-relevant parts of the visual field. Recent studies in humans and monkeys have shown that visual attention supporting natural behavior is not exclusively defined in terms of visual saliency in color or texture cues; rather, the reachable space and motor plans are the predominant source of this attentional modulation. Here, we show that motor-priming of visual attention can be used to efficiently distribute a robot's computational resources devoted to visual processing.
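    As a toy illustration of how gaze-arm coordination can be expressed, here is a minimal coupled-dynamical-systems sketch in which the arm slows down while the gaze error is still large; the equations and gains are illustrative assumptions, not the thesis's actual controller:

        import numpy as np

        def step(gaze, arm, target, dt=0.01, k_gaze=4.0, k_arm=2.0, c=1.5):
            """One integration step of a toy coupled gaze-arm system
            (positions as np.array): gaze converges to the target, and the
            arm's attraction to the target is gated by how far the gaze
            has caught up."""
            gaze_err = target - gaze
            coupling = np.exp(-c * np.linalg.norm(gaze_err))  # ~0 while gaze lags
            gaze = gaze + dt * k_gaze * gaze_err
            arm = arm + dt * k_arm * coupling * (target - arm)
            return gaze, arm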

    Modeling and control of multi-elastic-link robots under gravity

    Link elasticity is frequently considered an undesired effect in the mechanical design of robot arms and comparable machines. The driving motivation behind this work originates from the contrary perspective of exploiting the intrinsic compliance to grant elastic-link robots force-sensing capabilities and to simultaneously reduce the overall arm mass. The underlying hypothesis proposes that link elasticity is not necessarily just a problem that degrades positioning accuracy and prolongs settling times. The present work contributes new theoretical concepts, confirmed by extensive experimental results, in the fields of oscillation damping and end-effector positioning for a multi-elastic-link arm in the presence of load- and joint-configuration-dependent static deflections. On top of that, the work practically demonstrates the general feasibility of detecting and reacting to external contact forces with a multi-elastic-link robot operating under gravity. The contact scenarios include unpredicted or accidental collisions between the robot and the environment as well as intentional contacts for physical human-robot interaction.
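    One generic way such elasticity-based contact detection can be realized is by thresholding the residual between measured and model-predicted link strains; a minimal sketch (names and threshold are illustrative, not the thesis's algorithm):

        import numpy as np

        def detect_contact(strain_measured, strain_expected, threshold=0.02):
            """Flag an external contact when the measured link strains
            deviate from the model-predicted (load- and configuration-
            dependent) strains by more than a threshold."""
            residual = np.abs(np.asarray(strain_measured)
                              - np.asarray(strain_expected))
            return bool(np.any(residual > threshold))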

    In Vivo Human-Like Robotic Phenotyping of Leaf and Stem Traits in Maize and Sorghum in Greenhouse

    In plant phenotyping, the measurement of morphological, physiological, and chemical traits of leaves and stems is needed to investigate and monitor the condition of plants. The manual measurement of these properties is time-consuming, tedious, error-prone, and laborious. The use of robots is a new approach to accomplish such endeavors, enabling automatic monitoring with minimal human intervention. In this study, two plant phenotyping robotic systems were developed to realize automated measurement of plant leaf properties and stem diameter, which could reduce the tediousness of data collection compared to manual measurements. The robotic systems comprised a four-degree-of-freedom (DOF) robotic manipulator and a Time-of-Flight (ToF) camera. Robotic grippers were developed to integrate an optical fiber cable (coupled to a portable spectrometer) for leaf spectral reflectance measurement, a thermistor for leaf temperature measurement, and a linear potentiometer for stem diameter measurement. An image processing technique and a deep learning method were used to identify grasping points on leaves and stems, respectively. The systems were tested in a greenhouse using maize and sorghum plants. The results from the leaf phenotyping robot experiment showed that leaf temperature measurements by the phenotyping robot were correlated with those measured manually by a human researcher (R2 = 0.58 for maize and 0.63 for sorghum). The leaf spectral measurements by the phenotyping robot predicted leaf chlorophyll, water content, and potassium with moderate success (R2 ranged from 0.52 to 0.61), whereas the predictions for leaf nitrogen and phosphorus were poor. The total execution time to grasp and take measurements from one leaf was 35.5±4.4 s for maize and 38.5±5.7 s for sorghum. Furthermore, the test showed that the grasping success rate was 78% for maize and 48% for sorghum. The experimental results from the stem phenotyping robot demonstrated a high correlation between the manual and automated stem diameter measurements (R2 > 0.98). The execution time for stem diameter measurement was 45.3 s. The system successfully detected, localized, and grasped the stem for all plants during the experiment. Both robots could decrease the tediousness of collecting phenotypes compared to manual measurements. The phenotyping robots can be useful to complement traditional image-based high-throughput plant phenotyping in greenhouses by collecting in vivo morphological, physiological, and biochemical trait measurements for plant leaves and stems. Advisors: Yufeng Ge, Santosh Pitla
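    The reported agreement scores can be computed with a small helper using the squared Pearson correlation between manual and robotic measurements (the abstract does not state which R2 variant was used, so this is an assumption):

        import numpy as np

        def r_squared(manual, robotic):
            """Squared Pearson correlation between paired manual and
            robotic measurements, e.g., of stem diameter."""
            r = np.corrcoef(np.asarray(manual), np.asarray(robotic))[0, 1]
            return r ** 2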

    Sliding Mode Control

    The main objective of this monograph is to present a broad range of well-worked-out, recent application studies as well as theoretical contributions in the field of sliding mode control system analysis and design. The contributions presented here include new theoretical developments as well as successful applications of variable structure controllers, primarily in the fields of power electronics, electric drives, and motion steering systems. They enrich the current state of the art, and motivate and encourage new ideas and solutions in the sliding mode control area.
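    As a concrete illustration of the variable structure controllers discussed, here is a textbook sliding mode controller for a double integrator x'' = u (a standard example, not taken from the monograph):

        import numpy as np

        def smc(x, v, x_ref, lam=2.0, k=5.0):
            """Sliding mode control for x'' = u with a constant reference.
            s = e' + lam*e defines the sliding surface; the switching term
            -k*sign(s) drives s to zero, after which the error e decays
            exponentially at rate lam (at the cost of chattering)."""
            e = x - x_ref
            s = v + lam * e          # sliding variable (x_ref constant, so e' = v)
            return -k * np.sign(s)   # discontinuous switching control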

    Task-oriented viewpoint planning for free-form objects

    A thesis submitted to the Universitat Politècnica de Catalunya to obtain the degree of Doctor of Philosophy. Doctoral programme: Automatic Control, Robotics and Computer Vision. This thesis was completed at: Institut de Robòtica i Informàtica Industrial, CSIC-UPC.
    This thesis deals with active sensing and its use in real exploration tasks under both scene ambiguities and measurement uncertainties. While object modeling is the implicit objective of most active sensing algorithms, in this work we have explored new strategies to deal with more generic and more complex tasks. Active sensing requires the ability to move the perceptual system to gather new information. Our approach uses a robot manipulator with a 3D Time-of-Flight (ToF) camera attached to the end-effector. As a complex task, we have focused our attention on plant phenotyping. Plants are complex objects, with leaves that change their position and size over time. Viewpoints valid for a certain plant are hardly valid for a different one, even one belonging to the same species. Some instruments, such as chlorophyll meters or disk sampling tools, must be positioned precisely over a particular location of the leaf. Therefore, their use requires the modeling of specific regions of interest of the plant, including the free space needed for avoiding obstacles and approaching the leaf with the tool. It is easy to observe that predefined camera trajectories are not valid here, and that with one single view it is usually very difficult to acquire all the required information. The overall objective of this thesis is to solve complex active sensing tasks by embedding their exploratory goal into a pre-estimated geometrical model, using information gain as the fundamental guideline for the reward function. The main contributions can be divided into two groups: first, the evaluation of ToF cameras and their calibration to assess the uncertainty of the measurements (presented in Part I); and second, the proposal of a framework capable of embedding the task, modeled as free and occupied space, that takes into account the modeled sensor's uncertainty to improve the action selection algorithm (presented in Part II). This thesis has given rise to 14 publications, including 5 in indexed journals, and its results have been used in the GARNICS European project.
    The complete framework is based on the Next-Best-View methodology and can be summarized in the following main steps. First, an initial view of the object (e.g., a plant) is acquired. From this initial view and given a set of candidate viewpoints, the expected gain obtained by moving the robot and acquiring the next image is computed. This computation takes into account the uncertainty of all the different pixels of the sensor, the expected information based on a predefined task model, and the possible occlusions. Once the most promising view is selected, the robot moves, takes a new image, integrates this information into the model, and evaluates the set of remaining views again. Finally, the task terminates when enough information has been gathered. In our examples, this process enables the robot to perform a measurement on top of a leaf. The key ingredient is to model the complexity of the task in a layered representation of free-occupied occupancy grid maps. This allows the requirements of the task to be naturally encoded, the belief state to be maintained and updated with the measurements performed, the expected gains of all potential viewpoints to be simulated and computed, and the termination condition to be encoded. During this work the technology of ToF cameras has evolved enormously. Nowadays it is very popular, and ToF cameras are already embedded in some consumer devices. Although the quality of the measurements has improved considerably, it is still not uniform across the sensor. We believe, as demonstrated in various experiments in this work, that a careful modeling of the sensor's uncertainty is highly beneficial and helps to design better decision systems. In our case, it enables a more realistic computation of the information gain measure and, consequently, a better selection criterion.
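    The view-selection step of this loop can be sketched as follows, using entropy over the cells visible from each candidate view as the gain; this is an illustrative simplification (the thesis additionally weights the per-pixel sensor uncertainty in the gain computation):

        import numpy as np

        def entropy(p):
            """Shannon entropy of occupancy probabilities, the usual
            uncertainty measure in occupancy-grid view planning."""
            p = np.clip(p, 1e-6, 1 - 1e-6)
            return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

        def next_best_view(grid, visibility, gain_threshold=1.0):
            """Pick the candidate view whose visible cells carry the most
            remaining uncertainty. `grid` holds occupancy probabilities and
            `visibility[v]` is a boolean mask of the cells view v would
            observe, with occlusions already accounted for."""
            gains = [entropy(grid[mask]).sum() for mask in visibility]
            best = int(np.argmax(gains))
            if gains[best] < gain_threshold:
                return None, 0.0      # enough information gathered; stop
            return best, gains[best]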
    This work has been partially supported by a JAE fellowship of the Spanish Scientific Research Council (CSIC), the Spanish Ministry of Science and Innovation, the Catalan Research Commission, and the European Commission under the research projects: DPI2008-06022: PAU: Percepción y acción ante incertidumbre; DPI2011-27510: PAU+: Perception and Action in Robotics Problems with Large State Spaces; 201350E102: MANIPlus: Manipulación robotizada de objetos deformables; 2009-SGR-155: SGR ROBÒTICA: Grup de recerca consolidat - Grup de Robòtica; FP6-2004-IST-4-27657: EU PACO PLUS project; FP7-ICT-2009-4-247947: GARNICS: Gardening with a cognitive system; FP7-ICT-2009-6-269959: IntellAct: Intelligent observation and execution of Actions and manipulations. Peer Reviewed