314 research outputs found

    Automatic Plant Annotation Using 3D Computer Vision

    Get PDF

    3D Remote Sensing Applications in Forest Ecology: Composition, Structure and Function

    Get PDF
    Dear Colleagues, The composition, structure and function of forest ecosystems are the key features characterizing their ecological properties, and can thus be crucially shaped and changed by various biotic and abiotic factors on multiple spatial scales. The magnitude and extent of these changes in recent decades call for enhanced mitigation and adaptation measures. Remote sensing data and methods are the main complementary sources of up-to-date, synoptic and objective information on forest ecology. Due to the inherently 3D nature of forest ecosystems, the analysis of 3D remote sensing data is considered the most appropriate for recreating a forest's compositional, structural and functional dynamics. In this Special Issue of Forests, we published a set of state-of-the-art scientific works including experimental studies, methodological developments and model validations, all dealing with the general topic of 3D remote sensing-assisted applications in forest ecology. The published studies demonstrate applications in forest ecology drawn from a broad collection of method and sensor combinations, including fusion schemes. All in all, the studies and their focuses are as broad as a forest's ecology or the field of remote sensing itself and thus reflect the very diverse uses and directions toward which future research and practice will be directed.

    Fruit Detection and Tree Segmentation for Yield Mapping in Orchards

    Get PDF
    Accurate information gathering and processing is critical for precision horticulture, as growers aim to optimise their farm management practices. An accurate inventory of the crop, detailing its spatial distribution along with health and maturity, can help farmers efficiently target processes such as chemical and fertiliser spraying, crop thinning, harvest management, labour planning and marketing. Growers have traditionally obtained this information by using manual sampling techniques, which tend to be labour-intensive, spatially sparse, expensive, inaccurate and prone to subjective biases. Recent advances in sensing and automation for field robotics allow key measurements to be made for individual plants throughout an orchard in a timely and accurate manner. Farmer-operated machines or unmanned robotic platforms can be equipped with a range of sensors to capture a detailed representation over large areas. Robust and accurate data processing techniques are therefore required to extract the high-level information the grower needs to support precision farming. This thesis focuses on yield mapping in orchards using image and light detection and ranging (LiDAR) data captured using an unmanned ground vehicle (UGV). The contribution is a framework and its algorithmic components for orchard mapping and yield estimation, applicable to different fruit types and orchard configurations. The framework includes detection of fruits in individual images and tracking them over subsequent frames. The fruit counts are then associated with individual trees, which are segmented from image and LiDAR data, resulting in a structured spatial representation of yield. The first contribution of this thesis is the development of a generic and robust fruit detection algorithm. Images captured in the outdoor environment are susceptible to highly variable external factors that lead to significant appearance variations. Specifically in orchards, variability is caused by changes in illumination, target pose, tree types, etc. The proposed techniques address these issues by using state-of-the-art feature learning approaches for image classification, while investigating the utility of orchard domain knowledge for fruit detection. Detection is performed using both pixel-wise classification of images followed by instance segmentation, and bounding-box regression approaches. The experimental results illustrate the versatility of complex deep learning approaches over a multitude of fruit types. The second contribution of this thesis is a tree segmentation approach to detect the individual trees that serve as a standard unit for structured orchard information systems. The work focuses on trellised trees, which present unique challenges for segmentation algorithms due to their intertwined nature. LiDAR data are used to segment the trellis face and to generate proposals for individual tree trunks. Additional trunk proposals are provided by pixel-wise classification of the image data. The multi-modal observations are fine-tuned by modelling trunk locations using a hidden semi-Markov model (HSMM), within which prior knowledge of tree spacing is incorporated. The final component of this thesis addresses the visual occlusion of fruit within geometrically complex canopies by using a multi-view detection and tracking approach.
    Single-image fruit detections are tracked over a sequence of images and associated with individual trees or farm rows, with the spatial distribution of the fruit counts forming a yield map over the farm. The results show the advantage of using multi-view imagery (instead of single-view analysis) for fruit counting and yield mapping. This thesis includes extensive experimentation in almond, apple and mango orchards, with data captured by a UGV spanning a total of 5 hectares of farm area, over 30 km of vehicle traversal and more than 7,000 trees. The validation of the different processes is performed using manual annotations, which include fruit and tree locations in image and LiDAR data, respectively. Additional evaluation of yield mapping is performed by comparison against fruit counts on trees at the farm and counts made by the growers post-harvest. The framework developed in this thesis is demonstrated to be accurate compared to ground truth at all scales of the pipeline, including fruit detection and tree mapping, leading to accurate yield estimation, per tree and per row, for the different crops. Through the multitude of field experiments conducted over multiple seasons and years, the thesis presents key practical insights necessary for the commercial development of an information gathering system in orchards.
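
    To make the multi-view counting idea concrete, here is a minimal sketch (not the thesis's actual tracker) of greedy nearest-neighbour association of per-frame fruit detections into tracks, so that each fruit is counted once across the image sequence; the gating radius and toy coordinates are illustrative assumptions.

        import numpy as np

        def associate_detections(tracks, detections, max_dist=30.0):
            """Greedily match current-frame detection centroids (pixels)
            to existing fruit tracks; unmatched detections open new tracks."""
            unmatched = list(range(len(detections)))
            for ti, pos in enumerate(tracks):
                if not unmatched:
                    break
                d = np.linalg.norm(detections[unmatched] - pos, axis=1)
                j = int(np.argmin(d))
                if d[j] < max_dist:
                    tracks[ti] = detections[unmatched[j]]  # update track position
                    unmatched.pop(j)
            # every unmatched detection is treated as a newly seen fruit
            for j in unmatched:
                tracks.append(detections[j])
            return tracks

        # toy usage: the final track count is the multi-view fruit count
        frames = [np.array([[10., 10.], [50., 52.]]),
                  np.array([[12., 11.], [49., 55.], [90., 40.]])]
        tracks = []
        for det in frames:
            tracks = associate_detections(tracks, det)
        print(len(tracks))  # 3 fruits, each counted once across both frames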

    Crop plant reconstruction and feature extraction based on 3-D vision

    Get PDF
    3-D imaging is increasingly affordable and offers new possibilities for more efficient agricultural practice through the use of highly advanced technological devices. Reasons contributing to this development include the continuous increase in computer processing power, the decrease in cost and size of electronics, the increase in solid-state illumination efficiency, and the need for greater knowledge and care of individual crops. The implementation of 3-D imaging systems in agriculture has been impeded by the difficulty of economically justifying expensive devices for producing relatively low-cost seasonal products. However, this may no longer be true, since low-cost 3-D sensors with advanced technical capabilities, such as the one used in this work, are already available. The aim of this cumulative dissertation was to develop new methodologies to reconstruct the 3-D shape of the agricultural environment in order to recognize and quantitatively describe structures, in this case maize plants, for agricultural applications such as plant breeding and precision farming. To fulfil this aim, a comprehensive review of 3-D imaging systems in agricultural applications was conducted in order to select a sensor that was affordable and had not been fully investigated in agricultural environments. A low-cost TOF sensor was selected to obtain 3-D data of maize plants, and a new adaptive methodology was proposed for rigid point cloud registration and stitching. The resulting maize 3-D point clouds were highly dense and generated in a cost-effective manner. The validation of the methodology showed that the plants were reconstructed with high accuracy, and the qualitative analysis showed the visual variability of the plants depending on the 3-D perspective view. The generated point cloud was used to obtain information about plant parameters (stem position and plant height) in order to quantitatively describe each plant. The plant stem positions were estimated with an average mean error and standard deviation of 27 mm and 14 mm, respectively. Additionally, meaningful information about the plant height profile was provided, with an average overall mean error of 8.7 mm. Since the maize plants considered in this research were highly heterogeneous in height, some had folded leaves, and they were planted with positional standard deviations that emulate the real performance of a seeder, the experimental maize setup can be considered a difficult scenario. Therefore, better performance for both plant stem position and height estimation could be expected for a maize field in better condition. Finally, having a 3-D reconstruction of the maize plants using a cost-effective sensor, mounted on a small electric-motor-driven robotic platform, means that the cost (whether economic, energetic or temporal) of generating every point in the point cloud is greatly reduced compared with previous research.
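
    The dissertation's adaptive registration method is not reproduced here; as a rough illustration of the core step it builds on, below is a minimal numpy sketch of least-squares rigid alignment (the standard Kabsch/SVD solution) between two sets of corresponding 3-D points, e.g. matched points from two overlapping TOF scans; all values are toy data.

        import numpy as np

        def rigid_align(P, Q):
            """Least-squares rigid transform (R, t) mapping point set P onto Q,
            with P, Q as (N, 3) arrays of corresponding 3-D points.
            Returns R (3x3 rotation) and t (3-vector) with Q ~ P @ R.T + t."""
            p0, q0 = P.mean(axis=0), Q.mean(axis=0)      # centroids
            H = (P - p0).T @ (Q - q0)                    # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                           # proper rotation, det = +1
            t = q0 - R @ p0
            return R, t

        # toy check: recover a known rotation and translation
        rng = np.random.default_rng(0)
        P = rng.normal(size=(100, 3))
        a = np.pi / 6
        R_true = np.array([[np.cos(a), -np.sin(a), 0],
                           [np.sin(a),  np.cos(a), 0],
                           [0, 0, 1]])
        Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
        R, t = rigid_align(P, Q)
        assert np.allclose(R, R_true) and np.allclose(t, [0.5, -0.2, 1.0])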

    A Review on Deep Learning in UAV Remote Sensing

    Full text link
    Deep Neural Networks (DNNs) learn representations from data with an impressive capability, and have brought important breakthroughs in processing images, time series, natural language, audio, video, and many other data types. In the remote sensing field, surveys and literature reviews specifically covering applications of DNN algorithms have been conducted in an attempt to summarize the amount of information produced in its subfields. Recently, Unmanned Aerial Vehicle (UAV)-based applications have dominated aerial sensing research. However, a literature review combining the "deep learning" and "UAV remote sensing" themes has not yet been conducted. The motivation for our work was to present a comprehensive review of the fundamentals of Deep Learning (DL) applied to UAV-based imagery. We focus mainly on describing the classification and regression techniques used in recent applications with UAV-acquired data. To that end, a total of 232 papers published in international scientific journal databases were examined. We gathered the published material and evaluated its characteristics regarding the application, sensor, and technique used. We describe how DL presents promising results and has potential for processing tasks associated with UAV-based image data. Lastly, we project future perspectives, commenting on prominent DL paths to be explored in the UAV remote sensing field. Our review is a friendly approach to introducing, commenting on, and summarizing the state of the art in UAV-based image applications with DNN algorithms in diverse subfields of remote sensing, grouped into environmental, urban, and agricultural contexts.

    Very High Resolution (VHR) Satellite Imagery: Processing and Applications

    Get PDF
    Recently, growing interest has appeared in the use of remote sensing imagery to provide synoptic maps of water quality parameters in coastal and inland water ecosystems; to monitor complex land ecosystems for biodiversity conservation; for precision agriculture for the management of soils, crops, and pests; for urban planning; for disaster monitoring; etc. However, for these maps to achieve their full potential, it is important to engage in periodic monitoring and analysis of multi-temporal changes. In this context, very high resolution (VHR) satellite-based optical, infrared, and radar imaging instruments provide reliable information to implement spatially-based conservation actions. Moreover, they enable observations of parameters of our environment at broader spatial and finer temporal scales than those allowed through field observation alone. In this sense, recent very high resolution satellite technologies and image processing algorithms present the opportunity to develop quantitative techniques that have the potential to improve upon traditional techniques in terms of cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land/marine ecosystem processes, environment monitoring, etc. This book aims to collect new developments, methodologies, and applications of very high resolution satellite data for remote sensing. The selected works provide the research community with the most recent advances on all aspects of VHR satellite remote sensing.

    Perception for context awareness of agricultural robots

    Get PDF
    Context awareness is one key point for the realisation of robust autonomous systems in unstructured environments like agriculture. Robots need a precise description of their environment so that tasks can be planned and executed correctly. When a robot system is used in a controlled, unchanging environment, the programmer may be able to model all possible circumstances to make the system reliable. However, the situation becomes more complex when the environment and the objects in it change their shape, position or behaviour. Perception for context awareness in agriculture means detecting and classifying objects of interest in the environment correctly and reacting to them. The aim of this cumulative dissertation was to apply different strategies to increase the context awareness of mobile robots in agriculture through perception. The objectives of this thesis were to address five aspects of environment perception: (I) test static local sensor communication with a mobile vehicle, (II) detect unstructured objects in a controlled environment, (III) describe the influence of growth stage on algorithm outcomes, (IV) use the gained sensor information to detect single plants, and (V) improve the robustness of algorithms under noisy conditions. First, the communication between a static wireless sensor network and a mobile robot was investigated. The wireless sensor nodes were able to send local data from sensors attached to the systems. The sensors were placed in a vineyard, and the robot automatically followed the row structure to receive the data. It was possible to localise the single nodes by triangulation, using only the exact robot position and an attenuation model of the received signal strength. The precision was 0.6 m, better than that of the available differential global navigation satellite system signal. The second research area focused on the detection of unstructured objects in point clouds. For this purpose, a low-cost sonar sensor was attached to a 3D frame with millimetre-level positioning accuracy, so that the sensor position was known exactly. From the sensor position and the sensor readings, a 3D point cloud was created. In the workspace, 10 individual plants were placed. They could be detected automatically with an accuracy of 2.7 cm. An attached valve was able to spray these specific plant positions, resulting in a liquid saving of 72% compared to a conventional spraying method covering the whole crop row area. As plants are dynamic objects, the third objective, describing plant growth with adequate sensor data, was important for characterising the unstructured agricultural domain. To reference and test algorithms against the same data, maize rows were planted in a greenhouse. The exact positions of all plants were measured with a total station. A robot vehicle was then guided through the crop rows, and the data of the attached sensors were recorded. With the help of the total station, it was possible to track the vehicle position and to refer all data to the same coordinate frame. The data recording was performed 7 times over a period of 6 weeks. The resulting datasets could afterwards be used to assess different algorithms and to test them against the plants' different growth stages. It could be shown that a basic RANSAC line-following algorithm could not perform correctly under all growth stages without additional filtering. The fourth paper used these datasets to search for single plants with a sensor normally used for obstacle avoidance.
    One tilted laser scanner was used together with the exact robot position to create 3D point clouds, to which two different methods for single-plant detection were applied. Both methods used plant spacing to detect single plants. The second method used the fixed plant spacing and the row beginning to resolve the plant positions iteratively. The first method reached a detection rate of 73.7% and a root mean square error of 3.6 cm. The iterative second method reached a detection rate of 100% with an accuracy of 2.6-3.0 cm. To assess the robustness of the plant detection, an algorithm was used to detect the plant positions in six different growth stages of the given datasets. A graph-cut-based algorithm was used, which improved the results for single-plant detection. As the algorithm was not sensitive to overlapping and noisy point clouds, a detection rate of 100% was realised, with the plant height estimated to an accuracy of 1.55 cm. The stem position was resolved with an accuracy of 2.05 cm. This thesis presented different methods of perception for context awareness, which could help to improve the robustness of robots in agriculture. When the objects in the environment are known, it becomes possible to react to and interact with the environment more intelligently than is currently the case in agricultural robotics. In particular, detecting single plants before the robot reaches them could help to improve the navigation and interaction of agricultural robots.
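
    To illustrate the node localisation step, here is a minimal sketch of range-based localisation using a log-distance path-loss model and linearised least squares; the model constants and robot poses are illustrative assumptions, not the thesis's calibration.

        import numpy as np

        # Log-distance path-loss model: rssi = rssi0 - 10*n*log10(d/d0).
        # rssi0, n, d0 are illustrative values, not measured parameters.
        RSSI0, N_EXP, D0 = -40.0, 2.0, 1.0

        def rssi_to_distance(rssi):
            """Invert the path-loss model to get a range estimate in metres."""
            return D0 * 10 ** ((RSSI0 - rssi) / (10 * N_EXP))

        def trilaterate(robot_xy, ranges):
            """Linearised least-squares node position from >= 3 robot poses.
            Subtracting the first range equation from the others turns
            ||x - p_i||^2 = r_i^2 into a linear system A x = b."""
            p0, r0 = robot_xy[0], ranges[0]
            A = 2 * (robot_xy[1:] - p0)
            b = (r0**2 - ranges[1:]**2
                 + np.sum(robot_xy[1:]**2, axis=1) - np.sum(p0**2))
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x

        # toy usage: node at (3, 4), noise-free RSSI measured from 4 poses
        node = np.array([3.0, 4.0])
        poses = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        d = np.linalg.norm(poses - node, axis=1)
        rssi = RSSI0 - 10 * N_EXP * np.log10(d / D0)
        print(trilaterate(poses, rssi_to_distance(rssi)))  # ~ [3. 4.]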

    Method for automatic image registration based on distance-dependent planar projective transformations, oriented to images without common features

    Get PDF
    Unpublished thesis, Universidad Complutense de Madrid, Facultad de Ciencias Físicas, Departamento de Arquitectura de Computadores y Automática, defended on 18-12-2015. Multisensory data fusion oriented to image-based applications improves the accuracy, quality and availability of the data, and consequently the performance of robotic systems, by combining information about a scene acquired from multiple, different sources into a unified representation of the 3D world scene, which is more enlightening and enriching for subsequent image processing, improving either reliability, by using the redundant information, or capability, by taking advantage of complementary information. Image registration is one of the most relevant steps in image fusion techniques. This procedure aims at the geometric alignment of two or more images. Normally, this process relies on feature-matching techniques, which is a drawback when combining sensors that are not able to deliver common features. For instance, in the combination of ToF and RGB cameras, robust feature matching is not reliable. Typically, the fusion of these two sensors has been addressed by computing the cameras' calibration parameters for coordinate transformation between them. As a result, a low-resolution colour depth map is provided. To improve the resolution of these maps and reduce the loss of colour information, extrapolation techniques are adopted. A crucial issue for computing high-quality, accurate dense maps is the presence of noise in the depth measurements from the ToF camera, which is normally reduced by means of sensor calibration and filtering techniques. However, the filtering methods implemented for data extrapolation and denoising usually over-smooth the data, consequently reducing the accuracy of the registration procedure...
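
    As a rough sketch of the calibration-based ToF-to-RGB mapping the abstract describes (not the thesis's distance-dependent method), the following projects each ToF depth pixel into the RGB image plane given pinhole intrinsics and an extrinsic transform; all calibration matrices here are illustrative placeholders.

        import numpy as np

        def register_depth_to_rgb(depth, K_tof, K_rgb, R, t):
            """Map each ToF depth pixel to RGB pixel coordinates.
            depth : (H, W) depth map in metres from the ToF camera
            K_tof, K_rgb : 3x3 pinhole intrinsics of each camera
            R, t  : extrinsics taking ToF coordinates to RGB coordinates
            Returns an (H, W, 2) array of (u, v) coords in the RGB image."""
            H, W = depth.shape
            u, v = np.meshgrid(np.arange(W), np.arange(H))
            pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
            # back-project to 3-D points in the ToF frame
            rays = pix @ np.linalg.inv(K_tof).T
            pts_tof = rays * depth.reshape(-1, 1)
            # transform into the RGB frame and project with RGB intrinsics
            pts_rgb = pts_tof @ R.T + t
            proj = pts_rgb @ K_rgb.T
            uv = proj[:, :2] / proj[:, 2:3]
            return uv.reshape(H, W, 2)

        # illustrative calibration values (placeholders, not from the thesis)
        K_tof = np.array([[160.0, 0, 80], [0, 160.0, 60], [0, 0, 1]])
        K_rgb = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
        R, t = np.eye(3), np.array([0.05, 0.0, 0.0])   # 5 cm baseline
        uv = register_depth_to_rgb(np.full((120, 160), 2.0), K_tof, K_rgb, R, t)
        print(uv.shape)  # (120, 160, 2): colour-lookup coords per depth pixel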

    Multisource Data Integration in Remote Sensing

    Get PDF
    Papers presented at the workshop on Multisource Data Integration in Remote Sensing are compiled. The full text of these papers is included. New instruments and new sensors are discussed that can provide us with a large variety of new views of the real world. This huge amount of data has to be combined and integrated into a (computer) model of this world. Multiple sources may give complementary views of the world - consistent observations from different (and independent) data sources support each other and increase their credibility, while contradictions may be caused by noise, errors during processing, or misinterpretations, and can be identified as such. As a consequence, integration results are very reliable and represent a valid source of information for any geographical information system.
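
    One simple way to make the redundancy argument quantitative is inverse-variance fusion of independent measurements of the same quantity; this is a generic illustration, not a method from the workshop papers, and the numbers are toy values.

        import numpy as np

        def fuse(means, variances):
            """Inverse-variance fusion of independent measurements.
            Consistent sources reinforce each other: the fused variance is
            smaller than any individual one, while a large residual against
            the fused estimate flags a contradictory (noisy) source."""
            means, variances = np.asarray(means), np.asarray(variances)
            w = 1.0 / variances
            mu = np.sum(w * means) / np.sum(w)
            return mu, 1.0 / np.sum(w)

        # e.g. one surface elevation observed by two sensors (toy values)
        mu, var = fuse([102.0, 100.5], [4.0, 1.0])
        print(mu, var)  # 100.8 0.8 - fused estimate beats either source alone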