
    Evolutionary Optimization Techniques for 3D Simultaneous Localization and Mapping

    Mención Internacional en el título de doctor
    Mobile robots are increasingly used in applications that require moving through indoor and outdoor environments, progressing from teleoperated tasks to autonomous ones such as exploration and navigation. For a robot to move through a particular location, it needs to gather information about the scenario using sensors. These sensors let the robot observe the environment, each according to its data type. Cameras mostly give two-dimensional information, with colors and pixels forming an image. Range sensors give distances from the robot to obstacles. Depth cameras combine both technologies to extend their output to three dimensions. Light Detection and Ranging (LiDAR) measures distances to the sensor and extends its range to planes and three dimensions with high precision. Mobile robots therefore use these sensors to scan the scenario while moving. If the robot already has a map, the sensors take measurements and the robot matches observed features to features on the map in order to localize itself. Humans have used maps as a specialized form of representing the environment for more than 5000 years, and maps remain an essential piece of information in everyday life. Maps are used to navigate from one place to another, to localize something within given boundaries, or to document essential features. So, naturally, an intuitive way of making an autonomous mobile robot is to represent the environment with maps of geometrical information. On the other hand, if the robot does not have a previous map, it must build one while moving around. To achieve this task, the robot combines the range-sensor information with the odometer information. However, sensors have their own flaws due to precision, calibration, or accuracy limits. Furthermore, moving a robot involves physical constraints and faults that may occur randomly, such as wheel drift or mechanical miscalibration, which can make the odometers fail in their measurements and cause misalignment during map building. A novel technique was presented in the mid-90s to solve this problem and overcome the uncertainty of the sensors while the robot builds the map: the Simultaneous Localization and Mapping (SLAM) algorithm. Its goal is to build a map while the robot's position is corrected based on the information of two or more consecutive scans matched together, that is, by finding the rigid registration vector between them. This algorithm has been broadly studied and developed for almost 25 years. Nonetheless, it remains highly relevant, with ongoing innovations, modifications, and adaptations driven by new sensors and by the complexity of the scenarios in emerging mobile-robotics applications. The scan-matching algorithm aims to find a pose vector representing the transformation, or movement, between two robot observations by finding the best possible value of an equation that scores how good a transformation is; in other words, it searches for a solution in an optimal way. Typically, this optimization has been solved with classical algorithms, such as Newton's method or gradient and second-derivative formulations, yet these require an initial guess, or initial state, to point the algorithm in the right direction, most of the time obtained from odometers or inertial sensors. However, it is not always possible to have or to trust this information, as some scenarios are complex and sensors fail.
To solve this problem, this research presents the use of evolutionary optimization algorithms: meta-heuristics based on iterative evolution that mimic natural optimization processes and require no prior information to search a bounded range of candidate solutions against a fitness function. The main goal of this dissertation is to study, develop, and prove the benefits of evolutionary optimization algorithms for simultaneous localization and mapping of mobile robots in six-degree-of-freedom scenarios using LiDAR sensor information. This work introduces several evolutionary algorithms for scan matching, proposes a mixed fitness function for registration, solves simultaneous localization and mapping in different scenarios, implements loop closure and error relaxation, and proves its performance in indoor, outdoor, and underground mapping applications.
Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática, Universidad Carlos III de Madrid. Thesis committee: Gerardo Fernández López (President); María Dolores Blanco Rojas (Secretary); David Álvarez Sánche (Member)
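As a rough illustration of the kind of evolutionary scan matching described above, the following minimal sketch registers two LiDAR scans with SciPy's differential evolution over a bounded six-degree-of-freedom pose, scoring candidates by mean nearest-neighbor distance. The pose parameterization, bounds, and fitness are illustrative assumptions, not the dissertation's algorithm or its mixed fitness function.

# Illustrative sketch only: a generic differential-evolution scan matcher of the kind
# described above, not the dissertation's actual algorithm or fitness function.
# Assumes two LiDAR scans given as Nx3 NumPy arrays and SciPy available.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation


def fitness(pose, source, target_tree):
    """Mean nearest-neighbor distance after applying a 6-DoF pose (x, y, z, roll, pitch, yaw)."""
    t, angles = pose[:3], pose[3:]
    transformed = Rotation.from_euler("xyz", angles).apply(source) + t
    dists, _ = target_tree.query(transformed)
    return float(np.mean(dists))


def evolutionary_scan_match(source, target, trans_range=2.0, rot_range=np.pi / 6):
    """Search a bounded 6-DoF range for the rigid registration vector, with no initial guess."""
    bounds = [(-trans_range, trans_range)] * 3 + [(-rot_range, rot_range)] * 3
    target_tree = cKDTree(target)
    result = differential_evolution(fitness, bounds, args=(source, target_tree),
                                    maxiter=200, popsize=20, seed=0)
    return result.x, result.fun  # estimated pose vector and residual fitness

Because the search is bounded and population-based, no odometric initial guess is required, which is the property the dissertation exploits.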

    From light rays to 3D models


    Modélisation tridimensionnelle précise de l'environnement à l'aide des systèmes de photogrammétrie embarqués sur drones [Precise three-dimensional modeling of the environment using UAV-borne photogrammetry systems]

    Abstract: Images acquired from unmanned aerial vehicles (UAVs) can provide data with unprecedented spatial and temporal resolution for three-dimensional (3D) modeling. Solutions developed for this purpose mainly operate on photogrammetry concepts, namely UAV-photogrammetry systems (UAV-PSs). Such systems are used in applications where both geospatial and visual information of the environment is required. These applications include, but are not limited to, natural resource management such as precision agriculture, military and police-related services such as traffic-law enforcement, precision engineering such as infrastructure inspection, and health services such as epidemic emergency management. UAV-photogrammetry systems can be differentiated based on their spatial characteristics in terms of accuracy and resolution. That is, some applications, such as precision engineering, require high-resolution and high-accuracy information of the environment (e.g. 3D modeling with less than one centimeter accuracy and resolution). In other applications, lower levels of accuracy might be sufficient (e.g. wildlife management, needing only a few decimeters of resolution). However, even in those applications, the specific characteristics of UAV-PSs should be carefully considered during both system development and application in order to yield satisfying results. In this regard, this thesis presents a comprehensive review of the applications of unmanned aerial imagery, where the objective was to determine the challenges that remote-sensing applications of UAV systems currently face. This review also made it possible to recognize the specific characteristics and requirements of UAV-PSs, which are mostly ignored or not thoroughly assessed in recent studies. Accordingly, the focus of the first part of this thesis is on exploring the methodological and experimental aspects of implementing a UAV-PS. The developed system was extensively evaluated for precise modeling of an open-pit gravel mine and for performing volumetric-change measurements. This application was selected for two main reasons. Firstly, this case study provided a challenging environment for 3D modeling, in terms of scale changes, terrain relief variations, and structure and texture diversity. Secondly, open-pit-mine monitoring demands high levels of accuracy, which justifies our efforts to improve the developed UAV-PS to its maximum capacity. The hardware of the system consisted of an electric-powered helicopter, a high-resolution digital camera, and an inertial navigation system. The software of the system included in-house programs specifically designed for camera calibration, platform calibration, system integration, onboard data acquisition, flight planning and ground control point (GCP) detection. The detailed features of the system are discussed in the thesis, and solutions are proposed in order to enhance the system and its photogrammetric outputs. The accuracy of the results was evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy were assessed. The second part of this thesis concentrates on improving the techniques of sparse and dense reconstruction.
The proposed solutions are alternatives to traditional aerial photogrammetry techniques, adapted to the specific characteristics of unmanned, low-altitude imagery. Firstly, a method was developed for robust sparse matching and epipolar-geometry estimation. The main achievement of this method was its capacity to handle a very high percentage of outliers (errors among corresponding points) with remarkable computational efficiency compared to state-of-the-art techniques. Secondly, a block bundle adjustment (BBA) strategy was proposed based on the integration of intrinsic camera calibration parameters as pseudo-observations in a Gauss-Helmert model. The principal advantage of this strategy was controlling the adverse effect of unstable imaging networks and noisy image observations on the accuracy of self-calibration. A sparse implementation of this strategy was also developed, which allowed its application to data sets containing millions of tie points. Finally, the concepts of intrinsic curves were revisited for dense stereo matching. The proposed technique achieved a high level of accuracy and efficiency by searching only a small fraction of the whole disparity search space while internally handling occlusions and matching ambiguities. These photogrammetric solutions were extensively tested using synthetic data, close-range images and the images acquired at the gravel-pit mine. Achieving an absolute 3D mapping accuracy of 11±7 mm illustrated the success of this system for high-precision modeling of the environment.
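To make the sparse-matching and epipolar-geometry step concrete, the sketch below shows a conventional SIFT-plus-RANSAC pipeline using OpenCV. It is a generic baseline under assumed image paths, not the outlier-robust method proposed in the thesis.

# Illustrative sketch only: standard SIFT + RANSAC sparse matching and epipolar-geometry
# estimation, shown as a generic baseline, not the thesis's outlier-robust method.
# Assumes OpenCV (with SIFT available) and two overlapping aerial images on disk.
import cv2
import numpy as np


def estimate_epipolar_geometry(img1_path, img2_path):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Nearest-neighbor matching with Lowe's ratio test to discard ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC fundamental-matrix estimation; the inlier mask flags outlier correspondences.
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    return F, pts1, pts2, inlier_mask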

    Proceedings of the 7th International Conference on Functional-Structural Plant Models, Saariselkä, Finland, 9 - 14 June 2013


    Remote Sensing of Biophysical Parameters

    Vegetation plays an essential role in the study of the environment through plant respiration and photosynthesis. Therefore, the assessment of the current vegetation status is critical to modeling terrestrial ecosystems and energy cycles. Canopy structure (LAI, fCover, plant height, biomass, leaf angle distribution) and biochemical parameters (leaf pigmentation and water content) have been employed to assess vegetation status and its dynamics at scales ranging from kilometric to decametric spatial resolutions thanks to methods based on remote sensing (RS) data. Optical RS retrieval methods are based on the radiative transfer processes of sunlight in vegetation, which determine the amount of radiation measured by passive sensors in the visible and infrared channels. The increased availability of active RS (radar and LiDAR) data has fostered their use in many applications for the analysis of land surface properties and processes, thanks to their insensitivity to weather conditions and their ability to exploit rich structural and texture information. Optical and radar data fusion and multi-sensor integration approaches are pressing topics, which could fully exploit the information conveyed by both the optical and microwave parts of the electromagnetic spectrum. This Special Issue reprint reviews the state of the art in biophysical parameter retrieval and its usage in a wide variety of applications (e.g., ecology, carbon cycle, agriculture, forestry and food security).

    Imaging Sensors and Applications

    In past decades, various sensor technologies have been used in all areas of our lives, improving our quality of life. In particular, imaging sensors have been widely applied in the development of various imaging approaches such as optical imaging, ultrasound imaging, X-ray imaging, and nuclear imaging, and have contributed to achieving high sensitivity, miniaturization, and real-time imaging. These advanced image-sensing technologies play an important role not only in the medical field but also in the industrial field. This Special Issue covers broad topics on imaging sensors and applications. Its scope extends to novel imaging sensors and diverse imaging systems, including hardware and software advancements. Additionally, biomedical and nondestructive sensing applications are welcome.

    Characterizing Dryland Ecosystems Using Remote Sensing and Dynamic Global Vegetation Modeling

    Drylands include all terrestrial regions where the production of crops, forage, wood and other ecosystem services is limited by water. These ecosystems cover approximately 40% of the earth's terrestrial surface and accommodate more than 2 billion people (Millennium Ecosystem Assessment, 2005). Moreover, the interannual variability of the global carbon budget is strongly regulated by vegetation dynamics in drylands. Understanding the dynamics of such ecosystems is important for assessing the potential for and impacts of natural or anthropogenic disturbances and for mitigation planning, and it is a necessary step toward enhancing the economic and social well-being of dryland communities in a sustainable manner (Global Drylands: A UN system-wide response, 2011). In this research, a combination of remote sensing, field data collection, and ecosystem modeling was used to establish an integrated framework for monitoring semi-arid ecosystem dynamics. Foliar nitrogen (N) plays an important role in vegetation processes such as photosynthesis, and there is wide interest in retrieving this variable from hyperspectral remote sensing data. In this study, I used the theory of canopy spectral invariants (also known as p-theory) to understand the role of canopy structure and soil in the retrieval of foliar N from hyperspectral data and machine learning techniques. The results showed inconsistencies among the different machine learning techniques used for estimating N. Using p-theory, I demonstrated that soil can contribute up to 95% of the total radiation budget of the canopy. I suggested that an alternative approach to studying photosynthesis is the use of dynamic global vegetation models (DGVMs). Gross primary production (GPP) is the apparent ecosystem-scale photosynthesis that can be estimated using DGVMs. In this study, I performed a thorough sensitivity analysis and calibrated the Ecosystem Demography (EDv2.2) model along an elevation gradient in a dryland study area. I investigated GPP capacity and activity by comparing the EDv2.2 GPP with flux towers and remote sensing products. The overall results showed that EDv2.2 performed well in capturing GPP capacity and its long-term trend at lower-elevation sites within the study area, whereas the model performed worse at higher elevations, likely due to the change in vegetation community. I discussed how adding more heterogeneity and modifying ecosystem processes such as phenology and plant hydraulics in EDv2.2 would improve its application to higher-elevation ecosystems where there is more vegetation production. Finally, I developed an integrated hyperspectral-lidar framework for regional mapping of xeric and mesic vegetation in the study area. I showed that by considering spectral shape and magnitude, canopy structure, and landscape features (riparian zone), we can develop a straightforward algorithm for vegetation mapping in drylands. This framework is simple, easy to interpret, and consistent with our ecological understanding of vegetation distribution in drylands over large areas. Collectively, the results I present in this dissertation demonstrate the potential for advanced remote sensing and modeling to help us better understand ecosystem processes in drylands.
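As a hedged illustration of the kind of machine-learning retrieval of foliar N from hyperspectral data mentioned above, the sketch below fits a generic partial-least-squares regression of nitrogen on reflectance spectra with scikit-learn. The arrays, band count, and component number are hypothetical placeholders, not the dissertation's data or models.

# Illustrative sketch only: a generic PLS retrieval of foliar nitrogen from canopy
# reflectance spectra. The random arrays are stand-ins; real canopy spectra and
# field-measured N values would replace them, and the scores below are meaningless
# until they do.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 200                     # hypothetical plot count and band count
reflectance = rng.random((n_samples, n_bands))    # placeholder for measured canopy spectra
foliar_n = rng.random(n_samples) * 3.0            # placeholder for field-measured N (% dry mass)

pls = PLSRegression(n_components=10)              # component number chosen arbitrarily here
r2_scores = cross_val_score(pls, reflectance, foliar_n, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", np.round(r2_scores, 3))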

    Vision-Based Control of Unmanned Aerial Vehicles for Automated Structural Monitoring and Geo-Structural Analysis of Civil Infrastructure Systems

    The emergence of wireless sensors capable of sensing, embedded computing, and wireless communication has provided an affordable means of monitoring large-scale civil infrastructure systems with ease. To date, the majority of existing monitoring systems, including those based on wireless sensors, are stationary, with measurement nodes installed with no intention of later relocation. Many monitoring applications involving structural and geotechnical systems require a high density of sensors to provide sufficient spatial resolution in their assessment of system performance. While wireless sensors have made high-density monitoring systems possible, an alternative approach is to empower the mobility of the sensors themselves, transforming wireless sensor networks (WSNs) into mobile sensor networks (MSNs). Doing so brings many benefits, including reducing the total number of sensors needed and introducing the ability to learn from the data obtained to improve the placement of installed sensors. One approach to achieving MSNs is to integrate unmanned aerial vehicles (UAVs) into the monitoring application. UAV-based MSNs have the potential to transform current monitoring practices by improving the speed and quality of the data collected while reducing overall system costs. The efforts of this study are chiefly focused on using autonomous UAVs to deploy, operate, and reconfigure MSNs in a fully autonomous manner for field monitoring of civil infrastructure systems. This study aims to overcome two main challenges pertaining to UAV-enabled wireless monitoring: the need for high-precision localization methods for outdoor UAV navigation, and facilitating modes of direct interaction between UAVs and their built or natural environments. A vision-aided UAV positioning algorithm is first introduced to augment traditional inertial sensing techniques and enhance the ability of UAVs to accurately localize themselves within a civil infrastructure system for the placement of wireless sensors. Multi-resolution fiducial markers indicating sensor placement locations are applied to the surface of a structure, serving as navigation guides and precision landing targets for a UAV carrying a wireless sensor. Visual-inertial fusion is implemented via a discrete-time Kalman filter to further increase the robustness of the relative position estimation algorithm, resulting in localization accuracies of 10 cm or better. The precision landing of UAVs, which allows the MSN topology to change, is validated on a simple beam, with the UAV-based MSN collecting ambient response data for extraction of the global mode shapes of the structure. The work also explores the integration of a magnetic gripper with a UAV to drop defined weights from an elevation, providing a high-energy seismic source for MSNs engaged in seismic monitoring applications. Leveraging tailored visual detection and precise position control techniques for UAVs, the work illustrates the ability of UAVs to, in a repeated and autonomous fashion, deploy wireless geophones and introduce an impulsive seismic source for in situ shear wave velocity profiling using the spectral analysis of surface waves (SASW) method. The dispersion curve of the shear wave profile of the geotechnical system is shown to be nearly identical between the autonomous UAV-based MSN architecture and that obtained by a traditional wired and manually operated SASW data collection system.
The developments and proof-of-concept systems advanced in this study will extend the body of knowledge on robot-deployed MSNs, with the hope of extending the capabilities of monitoring systems while eradicating the need for human intervention in their design and use.
PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/169980/1/zhh_1.pd
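The visual-inertial fusion step described in this abstract can be sketched with a minimal discrete-time Kalman filter. The example below fuses an IMU-predicted position with position fixes from a detected fiducial marker along a single axis; the state definition, noise levels, and rates are simplified assumptions rather than the study's actual filter design.

# Illustrative sketch only: a minimal discrete-time Kalman filter fusing inertially
# predicted motion with visual position fixes from a fiducial marker, along one axis.
# The state, noise levels, and measurement model are simplified assumptions.
import numpy as np

dt = 0.02                                  # assumed filter period (50 Hz)
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition [pos, vel]
B = np.array([[0.5 * dt**2], [dt]])        # acceleration (IMU) input model
H = np.array([[1.0, 0.0]])                 # marker measurement observes position only
Q = 1e-3 * np.eye(2)                       # process noise covariance (assumed)
R = np.array([[0.01]])                     # marker noise variance, ~10 cm std (assumed)

x = np.zeros((2, 1))                       # state estimate [position; velocity]
P = np.eye(2)                              # state covariance


def predict(accel):
    """Propagate the state with the IMU acceleration measurement."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q


def update(marker_position):
    """Correct the state with a visual fix from a detected fiducial marker."""
    global x, P
    y = np.array([[marker_position]]) - H @ x          # innovation
    S = H @ P @ H.T + R                                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P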