32 research outputs found

    Stereo-vision-based perception, path planning and navigation strategies for autonomous robotic exploration

    Unpublished doctoral thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended on 13-05-2015. This thesis addresses the development of an autonomous navigation strategy based on computer vision for autonomous robotic exploration of planetary surfaces. A series of subsystems, modules and specific software tools were developed for the research carried out in this work, since most existing tools for this domain are the property of national space agencies and are not accessible to the scientific community. A modular, multi-layer software architecture with several hierarchical levels was designed to host the set of algorithms implementing the autonomous navigation strategy and to guarantee software portability, reuse and hardware independence. The work also includes the design of a framework to support the development of the navigation strategies. It is partially based on open-source tools within reach of any researcher or institution, with the necessary adaptations and extensions, and provides 3D simulation capabilities, models of robotic vehicles, sensors and operational environments emulating planetary surfaces such as Mars, for functional-level analysis and validation of the navigation strategies developed. This environment also offers debugging and monitoring capabilities.
    This thesis consists of two main parts. The first addresses the design and development of the high-level autonomy capabilities of a rover, focusing on autonomous navigation, supported by the simulation and monitoring capabilities of the aforementioned framework. A set of field experiments was carried out with a real robot and real hardware, reporting results, algorithm processing times, and the behaviour and performance of the system as a whole. As a result, the perception system was identified as a crucial component of the navigation strategy and, therefore, the main focus for potential optimisations and improvements. Consequently, the second part of this work addresses the stereo image correspondence problem and 3D reconstruction of unstructured natural environments. A series of matching algorithms, image processes and filters were analysed. It is generally assumed that the intensities of corresponding points in the two images of a stereo pair are the same; however, this assumption often proves false, even though both images are acquired with a vision system composed of two identical cameras. Consequently, an expert system is proposed for the automatic correction of intensities in stereo image pairs and 3D reconstruction of the environment, based on image processes not previously applied in the field of stereo vision: homomorphic filtering and histogram matching, designed to correct intensities in a coordinated manner, adjusting one image as a function of the other. The results were further optimised by a grouping process based on the principle of spatial continuity that removes false positives and erroneous matches. The effects of applying these filters at stages before and after the matching process have been studied, and their effectiveness verified favourably. Their application yields a larger number of valid matches than is obtained without them, achieving significant improvements in the disparity maps and, therefore, in the overall perception and 3D reconstruction processes.
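
    The coordinated intensity correction described above can be illustrated with a minimal sketch (not the thesis code): the right image of a stereo pair is remapped so that its grey-level histogram matches the left image's before stereo matching is attempted. The file names, the use of OpenCV and the SGBM parameters are placeholder assumptions.

```python
# A minimal sketch of coordinated intensity correction for a stereo pair via
# histogram matching: the right image's grey-level distribution is remapped to
# match the left image's, i.e. one image is adjusted as a function of the other.
# File paths and matcher parameters below are placeholders.
import numpy as np
import cv2

def match_histograms_gray(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap an 8-bit grayscale `source` so its histogram matches `reference`."""
    src_hist, _ = np.histogram(source.ravel(), bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(reference.ravel(), bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each source grey level, pick the reference level with the closest CDF value.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[source]

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # placeholder paths
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
right_corrected = match_histograms_gray(right, left)

# Stereo matching (here semi-global block matching) then runs on the corrected pair.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
disparity = sgbm.compute(left, right_corrected).astype(np.float32) / 16.0
```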

    Autonomous science for an ExoMars Rover-like mission

    In common with other Mars exploration missions, human supervision of Europe's ExoMars Rover will be mostly indirect via orbital relay spacecraft and thus far from immediate. The gap between issuing commands and witnessing the results of the consequent rover actions will typically be on the order of several hours or even sols. In addition, it will not be possible to observe the external environment at the time of action execution. This lengthens the time required to carry out scientific exploration and limits the mission's ability to respond quickly to favorable science events. To increase potential science return for such missions, it will be necessary to deploy autonomous systems that include science target selection and active data acquisition. In this work, we have developed and integrated technologies that we explored in previous studies and used the resulting test bed to demonstrate an autonomous, opportunistic science concept on a representative robotic platform. In addition to progressing the system design approach and individual autonomy components, we have introduced a methodology for autonomous science assessment based on terrestrial field science practice

    On Martian Surface Exploration: Development of Automated 3D Reconstruction and Super-Resolution Restoration Techniques for Mars Orbital Images

    Very high spatial resolution imaging and topographic (3D) data play an important role in modern Mars science research and engineering applications. This work describes a set of image processing and machine learning methods to produce the “best possible” high-resolution and high-quality 3D and imaging products from existing Mars orbital imaging datasets. The research work is described in nine chapters, of which seven are based on separate published journal papers. These include a) a hybrid photogrammetric processing chain that combines the advantages of different stereo matching algorithms to compute stereo disparity with optimal completeness, fine-scale details, and minimised matching artefacts; b) image and 3D co-registration methods that correct a target image and/or 3D data to a reference image and/or 3D data to achieve robust cross-instrument multi-resolution 3D and image co-alignment; c) a deep learning network and processing chain to estimate pixel-scale surface topography from single-view imagery that outperforms traditional photogrammetric methods in terms of product quality and processing speed; d) a deep learning-based single-image super-resolution restoration (SRR) method to enhance the quality and effective resolution of Mars orbital imagery; e) a subpixel-scale 3D processing system using a combination of photogrammetric 3D reconstruction, SRR, and photoclinometric 3D refinement; and f) an optimised subpixel-scale 3D processing system using coupled deep learning based single-view SRR and deep learning based 3D estimation to derive the best possible (in terms of visual quality, effective resolution, and accuracy) 3D products out of present epoch Mars orbital images. The resultant 3D imaging products from the new developments listed above are qualitatively and quantitatively evaluated either against products from the official NASA Planetary Data System (PDS) and/or ESA Planetary Science Archive (PSA) releases, or against products generated with different open-source systems. Examples of the scientific application of these novel 3D imaging products are discussed
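
    As a minimal illustration of the photogrammetric reconstruction step underlying several of the items above, the sketch below converts a stereo disparity map into metric depth and 3D points with the standard pinhole relation Z = f·B/d. The focal length, baseline and synthetic disparity values are placeholders, not parameters of any Mars orbital instrument.

```python
# Minimal sketch of the photogrammetric depth step: converting a stereo
# disparity map into metric depth and 3D points with the pinhole relation
# Z = f * B / d. Focal length, baseline and the synthetic disparities are
# placeholder values, not parameters of any actual Mars instrument.
import numpy as np

def disparity_to_points(disparity: np.ndarray, f_px: float, baseline_m: float,
                        cx: float, cy: float) -> np.ndarray:
    """Return an (H, W, 3) array of X, Y, Z coordinates; invalid pixels become NaN."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    with np.errstate(divide="ignore", invalid="ignore"):
        z = np.where(disparity > 0, f_px * baseline_m / disparity, np.nan)
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return np.dstack([x, y, z])

# Synthetic example: a tilted plane of disparities between 20 and 60 pixels.
disp = np.tile(np.linspace(60.0, 20.0, 200), (100, 1))
points = disparity_to_points(disp, f_px=1000.0, baseline_m=0.3, cx=100.0, cy=50.0)
print("depth range [m]:", np.nanmin(points[..., 2]), "-", np.nanmax(points[..., 2]))
# depth = 1000 * 0.3 / d, i.e. ~5 m at d = 60 up to 15 m at d = 20
```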

    Adaptive Localization and Mapping for Planetary Rovers

    Future rovers will be equipped with substantial onboard autonomy as space agencies and industry proceed with mission studies and technology development in preparation for the next planetary exploration missions. Simultaneous Localization and Mapping (SLAM) is a fundamental part of autonomous capabilities and has close connections to robot perception, planning and control. SLAM positively affects rover operations and mission success. The SLAM community has made great progress in the last decade by enabling real-world solutions in terrestrial applications and is nowadays addressing important challenges in robust performance, scalability, high-level understanding, resource awareness and domain adaptation. In this thesis, an adaptive SLAM system is proposed in order to improve rover navigation performance and adapt to its navigation demands. This research presents a novel localization and mapping solution following a bottom-up approach. It starts with an Attitude and Heading Reference System (AHRS), continues with a 3D odometry dead reckoning solution and builds up to a full graph optimization scheme which uses visual odometry and takes into account rover traction performance, bringing scalability to modern SLAM solutions. A design procedure is presented in order to incorporate inertial sensors into the AHRS. The procedure follows three steps: error characterization, model derivation and filter design. A complete kinematics model of the rover locomotion subsystem is developed in order to improve the wheel odometry solution. Consequently, the parametric model predicts delta poses by solving a system of equations with weighted least squares. In addition, an odometry error model is learned using Gaussian processes (GPs) in order to predict non-systematic errors induced by poor traction of the rover with the terrain. The odometry error model complements the parametric solution by adding an estimation of the error. The gained information serves to adapt the localization and mapping solution to the current navigation demands (domain adaptation). The adaptivity strategy is designed to adjust the visual odometry computational load (active perception) and to influence the optimization back-end by including highly informative keyframes in the graph (adaptive information gain). Following this strategy, the solution is adapted to the navigation demands, providing an adaptive SLAM system driven by the navigation performance and conditions of the interaction with the terrain. The proposed methodology is experimentally verified on a representative planetary rover under realistic field test scenarios. This thesis introduces a modern SLAM system which adapts the estimated pose and map to the predicted error. The system maintains accuracy with fewer nodes, taking the best of both wheel and visual methods in a consistent graph-based smoothing approach
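
    A hedged sketch of the learned odometry-error idea follows: a Gaussian process regressor maps navigation features (commanded speed and terrain slope, chosen here purely for illustration) to the observed wheel-odometry error, and the prediction plus its uncertainty drives a toy adaptivity rule for enabling visual odometry. This is not the thesis implementation; the features, kernel and threshold are assumptions.

```python
# Hedged sketch of a GP odometry-error model: predict non-systematic wheel
# odometry error from navigation features and use the prediction to decide
# when to spend compute on visual odometry. Features, kernel, training data
# and threshold are illustrative assumptions, not the thesis configuration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic training data: [commanded speed (m/s), terrain slope (deg)] -> error (m).
X_train = np.column_stack([rng.uniform(0.02, 0.10, 200), rng.uniform(0.0, 20.0, 200)])
y_train = 0.02 * X_train[:, 0] * X_train[:, 1] + rng.normal(0.0, 0.002, 200)

kernel = 1.0 * RBF(length_scale=[0.05, 5.0]) + WhiteKernel(noise_level=1e-5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

def expected_slip(speed: float, slope_deg: float) -> tuple[float, float]:
    """Predicted odometry error and 1-sigma uncertainty for one navigation step."""
    mean, std = gp.predict(np.array([[speed, slope_deg]]), return_std=True)
    return float(mean[0]), float(std[0])

err, sigma = expected_slip(0.08, 15.0)
# Toy adaptivity rule: fall back to (more expensive) visual odometry when the
# predicted error plus its uncertainty exceeds an error budget.
use_visual_odometry = err + 2.0 * sigma > 0.02
print(f"predicted error {err:.4f} m +/- {sigma:.4f} m, use VO: {use_visual_odometry}")
```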

    Characterizing and evaluating autonomous controllers

    Premio Extraordinario de Doctorado awarded by the UAH for the 2016-2017 academic year. Autonomy in robotics through Artificial Intelligence techniques, particularly through the use of Planning & Scheduling (P&S) systems, is a broad field of research of great interest for applications such as exploration robotics in environments that are hostile or difficult for humans to access. However, the experimental evaluations reported in scientific publications on autonomous controllers are generally not properly conducted, since a common evaluation methodology is lacking. This makes it difficult to compare new systems with previous work, a standard practice in other disciplines. This thesis therefore proposes a framework called On-Ground Autonomy Test Environment (OGATE) to enable the evaluation of autonomous controllers. It comprises a methodology for structuring the experimental phase together with a set of metrics that are independent of both the domain and the field of application of the robotic system. Combining these elements in software that automates the experimental process yields reproducible and objective evaluations of the autonomous controllers under study. To demonstrate the effectiveness of the framework, two autonomous controllers based on different P&S paradigms were used. The first is the Goal Oriented Autonomous Controller (GOAC), developed under contract with the European Space Agency. The second, implemented during this thesis, is the Model-Based Architecture (MoBAr). MoBAr is designed to test different planners based on the Planning Domain Definition Language (PDDL) to achieve on-board autonomy. In this regard, the thesis also introduces a new planner called Unified Path Planning and Task Planning Architecture (UP2TA), which integrates a general PDDL-based planner with path-planning algorithms in order to generate safer and more efficient plans for exploration robots. Regarding path planning, the thesis includes the definition of two new algorithms focused on the mobility of exploration robots: S-Theta* and the 3D Accurate Navigation Algorithm (3Dana). S-Theta* produces routes with fewer heading changes than previous algorithms, while 3Dana generates safer, more constrained routes as a function of terrain slope, using Digital Terrain Models (DTM) and traversal cost maps. Using GOAC and MoBAr, OGATE has been employed to evaluate both controllers, making it possible to characterize relevant aspects of the integration between Planning & Execution (P&E) that are difficult to access with other approaches. Moreover, the results obtained are objective and reproducible, enabling comparisons between autonomous controllers built with different P&S technologies and/or paradigms
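
    The sketch below shows a slope-aware grid planner in the same spirit as the path-planning work described above; it is plain 8-connected A* over a digital terrain model, not S-Theta* or 3Dana, and the synthetic terrain, cell size, slope limit and cost weight are illustrative placeholders. Cell transitions steeper than the slope limit are blocked, and traversal cost grows with the local rise.

```python
# Minimal slope-aware grid planner sketch on a DTM. This is plain 8-connected
# A*, not S-Theta* or 3Dana; the synthetic terrain, cell size, slope limit and
# slope weight are illustrative placeholders.
import heapq
import itertools
import numpy as np

def plan(dtm, start, goal, cell=1.0, max_slope_deg=20.0, slope_weight=2.0):
    """A* over the DTM grid; returns a list of (row, col) cells or None."""
    h, w = dtm.shape
    max_grad = np.tan(np.radians(max_slope_deg))   # maximum admissible rise/run
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    heur = lambda a, b: cell * np.hypot(a[0] - b[0], a[1] - b[1])
    counter = itertools.count()                    # tie-breaker for the heap
    open_set = [(heur(start, goal), next(counter), 0.0, start, None)]
    came_from, g_best = {}, {start: 0.0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                               # already expanded with a better cost
        came_from[node] = parent
        if node == goal:
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in moves:
            nb = (node[0] + dr, node[1] + dc)
            if not (0 <= nb[0] < h and 0 <= nb[1] < w):
                continue
            run = cell * np.hypot(dr, dc)
            rise = abs(float(dtm[nb]) - float(dtm[node]))
            if rise / run > max_grad:              # transition too steep: blocked
                continue
            ng = g + run + slope_weight * rise     # distance plus a slope penalty
            if ng < g_best.get(nb, np.inf):
                g_best[nb] = ng
                heapq.heappush(open_set, (ng + heur(nb, goal), next(counter), ng, nb, node))
    return None

# Synthetic DTM: a gentle ramp plus a steep ridge the planner must skirt.
y, x = np.mgrid[0:60, 0:60]
dtm = 0.05 * x + 6.0 * np.exp(-((x - 30) ** 2) / 8.0) * (y < 45)
route = plan(dtm, start=(5, 5), goal=(55, 55))
print("path length (cells):", None if route is None else len(route))
```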

    Spin-scanning Cameras for Planetary Exploration: Imager Analysis and Simulation

    In this thesis, a novel approach to spaceborne imaging is investigated, building upon the scan imaging technique in which camera motion is used to construct an image. This thesis investigates its use with wide-angle (≥90° field of view) optics mounted on spin stabilised probes for large-coverage imaging of planetary environments, and focusses on two instruments. Firstly, a descent camera concept for a planetary penetrator. The imaging geometry of the instrument is analysed. Image resolution is highest at the penetrator’s nadir and lowest at the horizon, whilst any point on the surface is imaged with highest possible resolution when the camera’s altitude is equal to that point’s radius from nadir. Image simulation is used to demonstrate the camera’s images and investigate analysis techniques. A study of stereophotogrammetric measurement of surface topography using pairs of descent images is conducted. Measurement accuracies and optimum stereo geometries are presented. Secondly, the thesis investigates the EnVisS (Entire Visible Sky) instrument, under development for the Comet Interceptor mission. The camera’s imaging geometry, coverage and exposure times are calculated, and used to model the expected signal and noise in EnVisS observations. It is found that the camera’s images will suffer from low signal, and four methods for mitigating this – binning, coaddition, time-delay integration and repeat sampling – are investigated and described. Use of these methods will be essential if images of sufficient signal are to be acquired, particularly for conducting polarimetry, the performance of which is modelled using Monte Carlo simulation. Methods of simulating planetary cameras’ images are developed to facilitate the study of both cameras. These methods enable the accurate simulation of planetary surfaces and cometary atmospheres, are based on Python libraries commonly used in planetary science, and are intended to be readily modified and expanded for facilitating the study of a variety of planetary cameras
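
    The low-signal mitigations mentioned above can be illustrated with a back-of-the-envelope noise model: under shot noise plus read noise, coadding N frames or binning b×b pixels raises the signal-to-noise ratio roughly as the square root of the number of samples combined. The electron counts and read noise below are placeholder values, not EnVisS parameters.

```python
# Back-of-the-envelope SNR model for the low-signal mitigations named above:
# coaddition of N frames and b x b binning under shot noise plus read noise.
# The per-exposure electron count and read noise are placeholder values,
# not EnVisS instrument parameters.
import numpy as np

def snr(signal_e: float, read_noise_e: float, n_frames: int = 1, binning: int = 1) -> float:
    """SNR of n_frames coadded exposures, each binned over binning x binning pixels.

    Signal adds linearly; shot noise and read noise add in quadrature, with one
    read-noise contribution per pixel per frame (i.e. binning performed off-chip).
    """
    samples = n_frames * binning ** 2
    total_signal = samples * signal_e
    total_noise = np.sqrt(total_signal + samples * read_noise_e ** 2)
    return total_signal / total_noise

s, rn = 20.0, 5.0          # electrons per pixel per exposure; read noise in e-
for n, b in [(1, 1), (16, 1), (1, 4), (16, 4)]:
    print(f"N={n:2d}, binning={b}x{b}: SNR = {snr(s, rn, n, b):.1f}")
# Each case scales the single-frame SNR (~3.0) by sqrt(N * b**2).
```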

    Single and multiple stereo view navigation for planetary rovers

    © Cranfield University. This thesis deals with the challenge of autonomous navigation of the ExoMars rover. The absence of global positioning systems (GPS) in space, added to the limitations of wheel odometry, makes autonomous navigation based on these two techniques, as done in the literature, an unviable solution and necessitates the use of other approaches. That, among other reasons, motivates this work to use solely visual data to solve the robot's egomotion problem. The homogeneity of Mars' terrain makes the robustness of the low-level image processing technique a critical requirement. In the first part of the thesis, novel solutions are presented to tackle this specific problem. Detection of features that are robust to illumination changes, together with unique matching and association of features, is a sought-after capability. A solution for robustness of features against illumination variation is proposed, combining Harris corner detection with moment image representation. Whereas the former provides a technique for efficient feature detection, the moment images add the necessary brightness invariance. Moreover, a bucketing strategy is used to guarantee that features are homogeneously distributed within the images. Then, the addition of local feature descriptors guarantees the unique identification of image cues. In the second part, reliable and precise motion estimation for the Mars rover is studied. A number of successful approaches are thoroughly analysed. Visual Simultaneous Localisation And Mapping (VSLAM) is investigated, proposing enhancements and integrating it with the robust feature methodology. Then, linear and nonlinear optimisation techniques are explored. Alternative photogrammetric reprojection concepts are tested. Lastly, data fusion techniques are proposed to deal with the integration of multiple stereo view data. Our robust visual scheme allows good feature repeatability. Because of this, dimensionality reduction of the feature data can be used without compromising the overall performance of the proposed solutions for motion estimation. Also, the developed egomotion techniques have been extensively validated using both simulated and real data collected at ESA-ESTEC facilities. Multiple stereo view solutions for robot motion estimation are introduced, presenting interesting benefits. The obtained results prove the innovative methods presented here to be accurate and reliable approaches capable of solving the egomotion problem in a Mars environment
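
    The bucketing strategy can be sketched as follows (a simplification of the pipeline described above, omitting the moment-image representation and descriptors): the image is divided into a grid of cells and at most a fixed number of Harris corners is kept per cell, so detected features remain homogeneously distributed. The image path, grid size and detector parameters are placeholder assumptions.

```python
# Sketch of Harris feature detection with a bucketing grid (a simplification of
# the pipeline described above: no moment images or descriptors here). Each
# grid cell keeps at most `per_cell` corners so features stay homogeneously
# distributed across the image. The image path, grid size and detector
# parameters are placeholder assumptions.
import cv2
import numpy as np

def bucketed_harris(gray: np.ndarray, grid=(8, 8), per_cell=10,
                    quality=0.01, min_dist=5) -> np.ndarray:
    """Return an (N, 2) array of (x, y) corner coordinates, bucketed over a grid."""
    h, w = gray.shape
    cell_h, cell_w = h // grid[0], w // grid[1]
    corners = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            y0, x0 = r * cell_h, c * cell_w
            patch = gray[y0:y0 + cell_h, x0:x0 + cell_w]
            pts = cv2.goodFeaturesToTrack(patch, maxCorners=per_cell,
                                          qualityLevel=quality, minDistance=min_dist,
                                          useHarrisDetector=True, k=0.04)
            if pts is not None:
                # Shift patch-local (x, y) coordinates back into the full image frame.
                corners.append(pts.reshape(-1, 2) + np.array([x0, y0], dtype=np.float32))
    return np.vstack(corners) if corners else np.empty((0, 2), dtype=np.float32)

gray = cv2.imread("navcam_left.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
features = bucketed_harris(gray)
print(f"{len(features)} features, at most 10 per bucket")
```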