
    Task-oriented viewpoint planning for free-form objects

    A thesis submitted to the Universitat Politècnica de Catalunya to obtain the degree of Doctor of Philosophy. Doctoral programme: Automatic Control, Robotics and Computer Vision. This thesis was completed at: Institut de Robòtica i Informàtica Industrial, CSIC-UPC.

[EN]: This thesis deals with active sensing and its use in real exploration tasks under both scene ambiguities and measurement uncertainties. While object modeling is the implicit objective of most active sensing algorithms, in this work we have explored new strategies to deal with more generic and more complex tasks. Active sensing requires the ability to move the perceptual system to gather new information. Our approach uses a robot manipulator with a 3D Time-of-Flight (ToF) camera attached to the end-effector. As a complex task, we have focused our attention on plant phenotyping. Plants are complex objects, with leaves that change their position and size over time. Viewpoints that are valid for a certain plant are hardly valid for a different one, even one belonging to the same species. Some instruments, such as chlorophyll meters or disk sampling tools, must be positioned precisely over a particular location of the leaf. Their use therefore requires modeling specific regions of interest of the plant, including the free space needed for avoiding obstacles and approaching the leaf with the tool. It is easy to see that predefined camera trajectories are not valid here, and that a single view is usually not enough to acquire all the required information.

The overall objective of this thesis is to solve complex active sensing tasks by embedding their exploratory goal into a pre-estimated geometrical model, using information gain as the fundamental guideline for the reward function. The main contributions can be divided into two groups: first, the evaluation of ToF cameras and their calibration to assess the uncertainty of the measurements (presented in Part I); and second, the proposal of a framework capable of embedding the task, modeled as free and occupied space, that takes the modeled sensor uncertainty into account to improve the action selection algorithm (presented in Part II). This thesis has given rise to 14 publications, including 5 in indexed journals, and its results have been used in the GARNICS European project.

The complete framework is based on the Next-Best-View methodology and can be summarized in the following main steps. First, an initial view of the object (e.g., a plant) is acquired. From this initial view and given a set of candidate viewpoints, the expected gain obtained by moving the robot and acquiring the next image is computed. This computation takes into account the uncertainty of all the different pixels of the sensor, the expected information based on a predefined task model, and possible occlusions. Once the most promising view is selected, the robot moves, takes a new image, integrates this information into the model, and evaluates the set of remaining views again. Finally, the task terminates when enough information has been gathered. In our examples, this process enables the robot to perform a measurement on top of a leaf. The key ingredient is to model the complexity of the task in a layered representation of free and occupied occupancy grid maps.
This representation makes it possible to naturally encode the requirements of the task, to maintain and update the belief state as measurements are performed, to simulate and compute the expected gain of all potential viewpoints, and to encode the termination condition. During this work, ToF camera technology has evolved remarkably; it is now very popular, and ToF cameras are already embedded in some consumer devices. Although measurement quality has improved considerably, it is still not uniform across the sensor. We believe, as demonstrated in various experiments in this work, that careful modeling of the sensor's uncertainty is highly beneficial and helps to design better decision systems. In our case, it enables a more realistic computation of the information-gain measure and, consequently, a better selection criterion.

[CA]: This thesis addresses active perception and its use in exploration tasks in real environments, considering both scene ambiguity and the uncertainty of the perception system. Unlike most active perception algorithms, where object modeling is usually the implicit objective, in this thesis we have explored new strategies to handle more generic and more complex tasks. Every active perception system requires a sensing apparatus able to vary its parameters in a controlled way in order to gather new information for solving a given task. In exploration tasks, the position and orientation of the sensor are key parameters. In our study we used a robot manipulator as the positioning system and a Time-of-Flight (ToF) depth camera, attached to its end-effector, as the perception system. As the final task, we concentrated on acquiring measurements on leaves in the context of plant phenotyping. Plants are very complex objects, with leaves that change texture, position, and size over time, which entails several difficulties. On the one hand, before taking a measurement on a leaf, the environment must be explored to find a region that allows it; moreover, viewpoints that are suitable for one plant will hardly be suitable for another, even when both belong to the same species. On the other hand, at measurement time, certain instruments, such as chlorophyll meters or sampling tools, must be positioned very precisely. A detailed model of these regions of interest is therefore needed, one that includes not only the occupied space but also the free space. Modeling the free space allows good obstacle avoidance and a good computation of the tool's approach trajectory to the leaf. In this context, it is easy to see that, in general, a single viewpoint is not enough to acquire all the information needed to take a measurement, and that predefined trajectories do not guarantee success. The overall objective of this thesis is to solve complex active perception tasks by encoding their exploratory goal into a previously estimated geometric model, using information gain as the fundamental guideline within the cost function.
The main contributions of this thesis can be divided into two groups: first, the evaluation of ToF cameras and their calibration in order to assess the uncertainty of their measurements (presented in Part I); and second, the proposal of a system capable of encoding the task by modeling free and occupied space, and that takes the sensor's uncertainty into account to improve action selection (presented in Part II). This thesis has given rise to 14 publications, including 5 in indexed journals, and its results have been used in the GARNICS European project. The complete system is based on the Next-Best-View methodology and can be broken down into the following main steps. First, an initial view of the object (e.g., a plant) is acquired. From this initial view and a set of candidate views, the information gain expected from moving the camera and acquiring a new measurement is estimated for each candidate. Notably, this computation takes into account the uncertainty of each sensor pixel, the expected information based on the predefined task model, and possible occlusions. Once the most promising view is selected, the robot moves to the new position, takes a new image, integrates this information into the model, and re-evaluates the set of remaining viewpoints. Finally, the task ends once enough information has been gathered.

This work has been partially supported by a JAE fellowship of the Spanish Scientific Research Council (CSIC), the Spanish Ministry of Science and Innovation, the Catalan Research Commission, and the European Commission under the research projects: DPI2008-06022: PAU: Percepción y acción ante incertidumbre; DPI2011-27510: PAU+: Perception and Action in Robotics Problems with Large State Spaces; 201350E102: MANIPlus: Manipulación robotizada de objetos deformables; 2009-SGR-155: SGR ROBÒTICA: Grup de recerca consolidat - Grup de Robòtica; FP6-2004-IST-4-27657: EU PACO PLUS project; FP7-ICT-2009-4-247947: GARNICS: Gardening with a cognitive system; FP7-ICT-2009-6-269959: IntellAct: Intelligent observation and execution of Actions and manipulations.
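The Next-Best-View loop summarized above can be illustrated with a short, self-contained sketch: expected information gain is computed over an occupancy-grid belief for each candidate view, discounted by a per-measurement uncertainty term, and the loop stops when no view is expected to add enough information. This is only an illustration of the general idea under those assumptions, not the thesis framework itself; names such as `next_best_view` and `simulate_visible_voxels` are hypothetical.

```python
import numpy as np

def entropy(p):
    """Binary entropy (bits) of an occupancy probability."""
    p = np.clip(p, 1e-6, 1.0 - 1e-6)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def expected_gain(belief, visible_voxels, pixel_noise):
    """Expected gain of a view: entropy of the voxels it would observe,
    discounted by a simple per-measurement noise weight (0 = perfect)."""
    return sum((1.0 - pixel_noise.get(v, 0.0)) * entropy(belief[v])
               for v in visible_voxels)

def next_best_view(belief, candidates, simulate_visible_voxels, pixel_noise,
                   min_gain=1.0):
    """Return the candidate with the highest expected gain, or None when no
    view is expected to add enough information (termination condition)."""
    scored = [(expected_gain(belief, simulate_visible_voxels(c), pixel_noise), c)
              for c in candidates]
    best_gain, best_view = max(scored, key=lambda s: s[0])
    return best_view if best_gain >= min_gain else None

# Toy example: a 3-voxel belief and two candidate views.
belief = {0: 0.5, 1: 0.5, 2: 0.9}          # occupancy probabilities
noise = {0: 0.1, 1: 0.4, 2: 0.1}           # modeled measurement uncertainty
views = {"front": [0, 2], "side": [1, 2]}  # voxels each view would observe
print(next_best_view(belief, views, lambda v: views[v], noise))  # -> "front"
```

In a full system the `simulate_visible_voxels` step would ray-cast through the layered free/occupied grid to account for occlusions, and the belief would be updated after each real measurement before re-scoring the remaining candidates.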

    An Analysis of the Radiometric Quality of Small Unmanned Aircraft System Imagery

    In recent years, significant advancements have been made in both sensor technology and small Unmanned Aircraft Systems (sUAS). Improved sensor technology has given users cheaper, lighter, and higher-resolution imaging tools, while new sUAS platforms have become cheaper, more stable, and easier to navigate both manually and programmatically. These enhancements have enabled remote sensing solutions for commercial and research applications that were previously unachievable. However, they have also given non-scientific practitioners access to technology and techniques previously available only to remote sensing professionals, sometimes leading to improper diagnoses and results. The work accomplished in this dissertation demonstrates the impact of proper calibration and reflectance correction on the radiometric quality of sUAS imagery.

The first part of this research conducts an in-depth investigation into a proposed technique for radiance-to-reflectance conversion. Previous techniques placed reflectance conversion panels in the scene, which, while providing accurate results, required extensive time in the field to position and measure the panels. We instead positioned sensors on board the sUAS to record the downwelling irradiance, which can then be used to produce reflectance imagery without reflectance conversion panels.

The second part of this research characterizes and calibrates a MicaSense RedEdge-3, a multispectral imaging sensor. This sensor ships with factory metadata values for dark-level bias, vignette and row-gradient correction, and radiometric calibration, which are never recalibrated. These characterization and calibration studies were carried out to demonstrate the importance of recalibrating sensors over time. In addition, an error propagation analysis was performed to identify the largest contributors of error in the production of radiance and reflectance imagery.

Finally, a study of the inherent reflectance variability of vegetation was performed; in other words, this study attempts to determine how accurate the digital-count-to-radiance calibration and the radiance-to-reflectance conversion have to be. Can we lower our accuracy standards for radiance and reflectance imagery because the target itself is too variable to measure? For this study, six coneflower plants were analyzed, as a surrogate for other cash crops, under different illumination conditions, at different times of day, and at different ground sample distances (GSDs).
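As a rough illustration of the panel-free conversion described above, the sketch below applies the standard Lambertian approximation R = pi * L / E, where L is the at-sensor band radiance and E is the downwelling irradiance recorded on board. This is a simplified stand-in under that assumption, not the dissertation's actual processing chain, and the numeric values are made up.

```python
import numpy as np

def radiance_to_reflectance(radiance, downwelling_irradiance):
    """Convert band radiance L (W m^-2 sr^-1 nm^-1) to an apparent reflectance
    factor using R = pi * L / E, assuming a Lambertian target and that L and E
    refer to the same spectral band."""
    L = np.asarray(radiance, dtype=float)
    E = float(downwelling_irradiance)
    if E <= 0.0:
        raise ValueError("downwelling irradiance must be positive")
    return np.pi * L / E

# Hypothetical single-band example: a 2x2 patch of radiance values and the
# matching irradiance reading from an onboard sensor.
band_radiance = np.array([[0.021, 0.025],
                          [0.019, 0.023]])
reflectance = radiance_to_reflectance(band_radiance, downwelling_irradiance=0.35)
print(reflectance.round(3))
```

In practice the radiance itself would first be produced from digital counts via the dark-level, vignette, and radiometric calibration steps discussed in the abstract, each contributing to the propagated error budget.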

    Detection, identification, and quantification of fungal diseases of sugar beet leaves using imaging and non-imaging hyperspectral techniques

    Plant diseases influence the optical properties of plants in different ways. Depending on the host-pathogen system and disease-specific symptoms, different regions of the reflectance spectrum are affected, resulting in specific spectral signatures of diseased plants. The aim of this study was to examine the potential of hyperspectral imaging and non-imaging sensor systems for the detection, differentiation, and quantification of plant diseases. Reflectance spectra of sugar beet leaves infected with the fungal pathogens Cercospora beticola, Erysiphe betae, and Uromyces betae, causing Cercospora leaf spot, powdery mildew, and sugar beet rust, respectively, were recorded repeatedly during pathogenesis. Hyperspectral data were analyzed using various methods of data and image analysis and compared to ground truth data. Several approaches with different sensors were tested and compared at the leaf, canopy, and field scales. Particular attention was paid to the effect of the spectral, spatial, and temporal resolution of hyperspectral sensors on disease recording. Another focus of this study was the description of the spectral characteristics of disease-specific symptoms; to this end, different data analysis methods were applied to extract as much information as possible from the spectral signatures.

Spectral reflectance of sugar beet was affected by each disease in a characteristic way, resulting in disease-specific signatures. Reflectance differences, sensitivity, and the best-correlating spectral bands differed depending on the disease and its developmental stage. Compared to non-imaging sensors, the hyperspectral imaging sensor provided additional information related to spatial resolution: pixel-wise spatial and temporal differences were detected with high precision. Besides the characterization of diseased leaves, pure disease endmembers as well as different regions of typical symptoms were also assessed. Spectral vegetation indices (SVIs) related to physiological parameters were calculated and correlated with disease severity. The SVIs differed in their sensitivity to the different diseases. By combining the information from multiple SVIs in an automatic classification with Support Vector Machines, high sensitivity and specificity for the detection and differentiation of diseased leaves were reached at an early stage. In addition to detection and identification, diseases could be quantified with high accuracy using SVIs and Spectral Angle Mapper classification computed from hyperspectral images. Knowledge from measurements under controlled conditions was transferred to the field scale, facilitating early detection and monitoring of Cercospora leaf spot and powdery mildew. The results of this study contribute to a better understanding of plant optical properties during disease development, and the methods will be applicable in precision crop protection for the detection, differentiation, and quantification of plant diseases at early stages.

[DE]: Detection, identification, and quantification of fungal leaf diseases of sugar beet using imaging and non-imaging hyperspectral sensors. Plant diseases affect the optical properties of plants in different ways. Different regions of the reflectance spectrum are affected depending on the host-pathogen system and disease-specific symptoms. Hyperspectral, non-invasive sensors offer the possibility of detecting optical changes at an early stage of disease development. The aim of this work was to assess the potential of hyperspectral imaging and non-imaging sensors for the detection, identification, and quantification of plant diseases. Sugar beet leaves were inoculated with the fungal pathogens Cercospora beticola, Erysiphe betae, and Uromyces betae, and the effects of the development of Cercospora leaf spot, powdery mildew, and sugar beet rust on the reflectance properties were recorded and compared with visual disease ratings. Measurement approaches with different sensors were compared at the leaf, canopy, and field scales, with particular consideration given to the requirements on the spectral, spatial, and temporal resolution of the sensors. A further focus was the description of the spectral characteristics of typical symptoms, and various analysis methods were applied with the aim of extracting the maximum information content from spectral signatures. Each disease affected the spectral reflectance of sugar beet leaves in a characteristic way. Reflectance differences, sensitivity, and the correlation of spectral bands with disease severity varied depending on the disease. Imaging sensors achieved higher precision through the pixel-wise recording of spatial and temporal differences between diseased and healthy tissue. Spectral vegetation indices (SVIs) related to plant physiological parameters were computed from the hyperspectral data and correlated with disease severity; the SVIs differed in their sensitivity to the three diseases. Using machine learning, the combined information from the computed vegetation indices was used for automatic classification, achieving high sensitivity and specificity for the detection and differentiation of diseases. In addition to detection and identification, quantification of the diseases was possible using SVIs and Spectral Angle Mapper classification of hyperspectral image data. The results of this work contribute to a better understanding of the optical properties of plants under pathogen influence. The investigated methods can be implemented in precision crop protection applications to enable early detection, differentiation, and quantification of plant diseases.
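Two of the ingredients named above, spectral vegetation indices and Spectral Angle Mapper (SAM) classification, can be sketched in a few lines. The snippet below is a generic illustration: one example index (NDVI) and a minimal pixel-wise SAM against reference endmember spectra. The band positions, endmember labels, and toy spectra are hypothetical and not those used in the study.

```python
import numpy as np

def ndvi(cube, red_band, nir_band):
    """One example spectral vegetation index from a hyperspectral cube
    of shape (rows, cols, bands)."""
    red = cube[..., red_band].astype(float)
    nir = cube[..., nir_band].astype(float)
    return (nir - red) / np.clip(nir + red, 1e-9, None)

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(cube, endmembers):
    """Assign each pixel the label of the closest endmember (smallest angle).
    endmembers: dict mapping label -> reference spectrum of length `bands`."""
    labels = list(endmembers)
    angles = np.stack(
        [np.apply_along_axis(spectral_angle, 2, cube, endmembers[k]) for k in labels],
        axis=-1)
    return np.array(labels)[np.argmin(angles, axis=-1)]

# Toy 2x2 cube with 3 bands and two hypothetical endmembers.
cube = np.array([[[0.10, 0.20, 0.60], [0.30, 0.30, 0.30]],
                 [[0.10, 0.25, 0.55], [0.32, 0.30, 0.28]]])
endmembers = {"healthy": np.array([0.1, 0.2, 0.6]),
              "cercospora": np.array([0.3, 0.3, 0.3])}
print(ndvi(cube, red_band=0, nir_band=2).round(2))
print(sam_classify(cube, endmembers))
```

In the study's setting, several such indices would be stacked as features for a Support Vector Machine classifier, while SAM operates directly on the full spectra for pixel-wise quantification.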