
    AstroVision: Towards Autonomous Feature Detection and Description for Missions to Small Bodies Using Deep Learning

    Missions to small celestial bodies rely heavily on optical feature tracking for characterization of and relative navigation around the target body. While deep learning has led to great advancements in feature detection and description, training and validating data-driven models for space applications is challenging due to the limited availability of large-scale, annotated datasets. This paper introduces AstroVision, a large-scale dataset comprising 115,970 densely annotated, real images of 16 different small bodies captured during past and ongoing missions. We leverage AstroVision to develop a set of standardized benchmarks and conduct an exhaustive evaluation of both handcrafted and data-driven feature detection and description methods. Next, we employ AstroVision for end-to-end training of a state-of-the-art, deep feature detection and description network and demonstrate improved performance on multiple benchmarks. The full benchmarking pipeline and the dataset will be made publicly available to facilitate the advancement of computer vision algorithms for space applications.
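    The benchmarking described above hinges on a detect/describe/match pipeline evaluated across overlapping views of the same body. Below is a minimal sketch of such a pipeline using a handcrafted baseline (SIFT via OpenCV); the image file names and the ratio-test threshold are illustrative assumptions, and AstroVision's own data loaders and metrics are not reproduced here.

    # Sketch of a detect/describe/match step on two overlapping small-body images.
    # The file names are hypothetical placeholders.
    import cv2

    def match_pair(path_a, path_b, ratio=0.8):
        """Detect, describe and match features between two images."""
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kps_a, desc_a = sift.detectAndCompute(img_a, None)
        kps_b, desc_b = sift.detectAndCompute(img_b, None)

        # Brute-force matching with Lowe's ratio test to reject ambiguous matches.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        knn = matcher.knnMatch(desc_a, desc_b, k=2)
        good = [m for m, n in knn if m.distance < ratio * n.distance]
        return kps_a, kps_b, good

    if __name__ == "__main__":
        _, _, matches = match_pair("image_0001.png", "image_0002.png")
        print(f"{len(matches)} putative matches after ratio test")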

    Human and Robotic Mission to Small Bodies: Mapping, Planning and Exploration

    This study investigates the requirements, performs a gap analysis and makes a set of recommendations for mapping products and exploration tools required to support operations and scientific discovery for near-term and future NASA missions to small bodies. The mapping products and their requirements are based on the analysis of current mission scenarios (rendezvous, docking, and sample return) and recommendations made by the NEA Users Team (NUT) in the framework of human exploration. The mapping products that satisfy operational, scientific, and public outreach goals include topography, images, albedo, gravity, mass, density, subsurface radar, mineralogical and thermal maps. The gap analysis points to a need for incremental generation of mapping products from low-resolution (flyby) data to the high-resolution data needed for anchoring and docking, real-time spatial data processing for hazard avoidance and astronaut or robot localization in low-gravity, highly dynamic environments, and motivates a standard for coordinate reference systems capable of describing irregular body shapes. Another aspect investigated in this study is the set of requirements and the gap analysis for exploration tools that support visualization and simulation of operational conditions including soil interactions, environment dynamics, and communications coverage. Building robust, usable data sets and visualisation/simulation tools is the best way for mission designers and simulators to make correct decisions for future missions. In the near term, it is the most useful way to begin building capabilities for small body exploration without needing to commit to specific mission architectures.

    Robust vision based slope estimation and rocks detection for autonomous space landers

    As future robotic surface exploration missions to other planets, moons and asteroids become more ambitious in their science goals, there is a rapidly growing need to significantly enhance the capabilities of entry, descent and landing technology such that landings can be carried out with pin-point accuracy at previously inaccessible sites of high scientific value. As a consequence of the extreme uncertainty in touch-down locations of current missions and the absence of any effective hazard detection and avoidance capabilities, mission designers must exercise extreme caution when selecting candidate landing sites. The entire landing uncertainty footprint must be placed completely within a region of relatively flat and hazard-free terrain in order to minimise the risk of mission-ending damage to the spacecraft at touchdown. Consequently, vast numbers of scientifically rich landing sites must be rejected in favour of safer alternatives that may not offer the same level of scientific opportunity. Truly scientifically interesting locations on planetary surfaces are rarely found in such hazard-free and easily accessible terrain, and so goals have been set for a number of advanced capabilities of future entry, descent and landing technology. Key amongst these is the ability to reliably detect and safely avoid all mission-critical surface hazards in the area surrounding a pre-selected landing location. This thesis investigates techniques for the use of a single camera system as the primary sensor in the preliminary development of a hazard detection system that is capable of supporting pin-point landing operations for next generation robotic planetary landing craft. The requirements for such a system have been stated as the ability to detect slopes greater than 5 degrees and surface objects greater than 30 cm in diameter. The primary contribution of this thesis, aimed at achieving these goals, is the development of a feature-based, self-initialising, fully adaptive structure from motion (SFM) algorithm based on a robust square-root unscented Kalman filtering framework, and the fusion of the resulting SFM scene structure estimates with a sophisticated shape from shading (SFS) algorithm that has the potential to produce very dense and highly accurate digital elevation models (DEMs) with sufficient resolution to achieve the sensing accuracy required by next generation landers. Such a system is capable of adapting to potential changes in the external noise environment that may result from intermittent and varying rocket motor thrust and/or sudden turbulence during descent, which may translate to variations in the vibrations experienced by the platform and introduce varying levels of motion blur that will affect the accuracy of image feature tracking algorithms. Accurate scene structure estimates have been obtained using this system from both real and synthetic descent imagery, allowing for the production of accurate DEMs. While further work is required to produce DEMs with the resolution and accuracy needed to determine slopes and detect small objects such as rocks at the required levels, this thesis presents a very strong foundation upon which to build and goes a long way towards developing a highly robust and accurate solution.
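    The filtering framework mentioned above builds on the unscented transform, which propagates a Gaussian state estimate through a nonlinear function by pushing a small set of sigma points through it. The sketch below illustrates that core step only; it is not the thesis's square-root formulation or its SFM pipeline, and the toy projection function, dimensions and scaling parameters are assumptions.

    import numpy as np

    def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
        """Propagate a Gaussian (mean, cov) through a nonlinear function f."""
        n = mean.size
        lam = alpha ** 2 * (n + kappa) - n

        # Sigma points: the mean plus symmetric perturbations along the columns
        # of the matrix square root of (n + lam) * cov.
        sqrt_cov = np.linalg.cholesky((n + lam) * cov)
        sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])

        # Standard weights for recombining the propagated sigma points.
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = wm[0] + (1 - alpha ** 2 + beta)

        prop = np.array([f(s) for s in sigma])
        new_mean = wm @ prop
        diff = prop - new_mean
        new_cov = (wc[:, None] * diff).T @ diff
        return new_mean, new_cov

    if __name__ == "__main__":
        # Toy example: a 3D feature position pushed through a pinhole-style projection.
        mean = np.array([0.5, -0.2, 10.0])
        cov = np.diag([0.01, 0.01, 0.5])
        project = lambda p: np.array([p[0] / p[2], p[1] / p[2]])
        m, c = unscented_transform(mean, cov, project)
        print("projected mean:", m)
        print("projected covariance:\n", c)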

    Long-term localization of unmanned aerial vehicles based on 3D environment perception

    Unmanned Aerial Vehicles (UAVs) are currently used in countless civil and commercial applications, and the trend is rising. Outdoor obstacle-free operation based on the Global Positioning System (GPS) can generally be considered solved thanks to the availability of mature commercial products. However, some applications require their use in confined spaces or indoors, where GPS signals are not available. In order to allow for the safe introduction of autonomous aerial robots in GPS-denied areas, there is still a need for improved reliability in several key technologies, such as localization, obstacle avoidance and planning, to achieve robust operation. Existing approaches for autonomous navigation in GPS-denied areas are not robust enough when it comes to aerial robots, or fail in long-term operation. This dissertation addresses the localization problem, proposing a methodology suitable for aerial robots moving in a three-dimensional (3D) environment using a combination of measurements from a variety of on-board sensors. We have focused on fusing three types of sensor data: images and 3D point clouds acquired from stereo or structured-light (RGB-D) cameras, inertial information from an on-board Inertial Measurement Unit (IMU), and distance measurements to several Ultra Wide-Band (UWB) radio beacons installed in the environment. The overall approach makes use of a 3D map of the environment, for which a mapping method that exploits the synergies between point clouds and radio-based sensing is also presented, so that the whole methodology can be used in any given scenario. The main contributions of this dissertation focus on a thoughtful combination of technologies to achieve robust, reliable and computationally efficient long-term localization of UAVs in indoor environments. This work has been validated and demonstrated over the past four years in the context of different research projects related to the localization and state estimation of aerial robots in GPS-denied areas, in particular the European Robotics Challenges (EuRoC) project, in which the author participated in the competition among top research institutions in Europe. Experimental results demonstrate the feasibility of the full approach, both in accuracy and computational efficiency, tested through real indoor flights and validated with data from a motion capture system.
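    One ingredient of the sensor fusion described above is the incorporation of UWB range measurements into the state estimate. The sketch below shows an EKF-style position update from a single range to a beacon at a known map location; the beacon position, noise levels and reduced (position-only) state are illustrative assumptions, not the dissertation's actual filter design.

    import numpy as np

    def ekf_range_update(x, P, beacon, z, sigma_r=0.1):
        """Update position estimate x (3,) and covariance P (3x3) with range z."""
        diff = x - beacon
        pred = np.linalg.norm(diff)            # predicted range h(x)
        H = (diff / pred).reshape(1, 3)        # Jacobian of the range w.r.t. position
        R = np.array([[sigma_r ** 2]])         # measurement noise variance

        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_new = x + (K * (z - pred)).ravel()
        P_new = (np.eye(3) - K @ H) @ P
        return x_new, P_new

    if __name__ == "__main__":
        x = np.array([1.0, 2.0, 1.5])              # prior position estimate [m]
        P = np.eye(3) * 0.25                       # prior covariance
        beacon = np.array([0.0, 0.0, 3.0])         # known UWB anchor position [m]
        x, P = ekf_range_update(x, P, beacon, z=3.1)
        print("updated position:", x)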

    CubeSat autonomous navigation and guidance for low-cost asteroid flyby missions

    Recent advancements in CubeSat technology unfold new mission ideas and the opportunity to lower the cost of space exploration. Ground operations costs for interplanetary CubeSats, however, still represent a challenge toward low-cost CubeSat missions: hence, certain levels of autonomy are desirable. The feasibility of autonomous asteroid flyby missions using CubeSats is assessed here, and an effective strategy for autonomous operations is proposed. The navigation strategy is composed of observations of the Sun, visible planets, and the target asteroid, whereas the guidance strategy is composed of two optimally timed trajectory correction maneuvers. A Monte Carlo analysis is performed to understand the flyby accuracies that can be achieved by autonomous CubeSats, in consideration of errors and uncertainties in a) departure conditions, b) propulsive maneuvers, c) observations, and d) asteroid ephemerides. Flyby accuracies better than ±100 km (3σ) are found possible, and the main limiting factors to autonomous missions are identified, namely a) on-board asteroid visibility time (Vlim ≥ 11), b) ΔV available for correction maneuvers (>15 m/s), c) asteroid ephemeris uncertainty (<1000 km), and d) short duration of the transfer to the asteroid. Ultimately, this study assesses the readiness level of current CubeSat technology to autonomously fly by near-Earth asteroids, in consideration of realistic system specifications, errors, and uncertainties.
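    A greatly simplified sketch of the kind of Monte Carlo dispersion analysis mentioned above is given below: maneuver-execution and asteroid-ephemeris errors are sampled and mapped to a flyby miss distance. The linear error mapping and all numerical values are illustrative assumptions, not the paper's dynamical model or results.

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples = 10_000

    time_to_flyby = 30 * 86400.0      # [s] time from the last correction maneuver to flyby
    sigma_dv = 0.02                   # [m/s] 1-sigma execution error per axis
    sigma_eph = 300.0                 # [km] 1-sigma asteroid ephemeris error per axis

    # A delta-v execution error maps roughly to a downstream position error of
    # dv * time; the (uncorrectable) ephemeris error is added on top.
    dv_err = rng.normal(0.0, sigma_dv, size=(n_samples, 3))        # [m/s]
    pos_err_km = dv_err * time_to_flyby / 1000.0                   # [km]
    eph_err_km = rng.normal(0.0, sigma_eph, size=(n_samples, 3))   # [km]

    miss = np.linalg.norm(pos_err_km + eph_err_km, axis=1)
    print(f"mean miss distance: {miss.mean():.0f} km")
    print(f"99.7th percentile (~3 sigma): {np.percentile(miss, 99.7):.0f} km")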

    Enhancement of Trajectory Determination of Orbiter Spacecraft by Using Pairs of Planetary Optical Images

    The subject of the present thesis is the enhancement of orbiter spacecraft navigation capabilities obtained by the standard radiometric link, taking advantage of an imaging payload and making use of a novel definition of optical measurements. An ESA mission to Mercury, BepiColombo, was selected as the reference case for this study, and in particular its Mercury Planetary Orbiter (MPO), because of the presence of SIMBIO-SYS, an instrument suite part of the MPO payload capable of acquiring high-resolution images of the surface of Mercury. The use of optical measurements for navigation can provide complementary information with respect to Doppler, for enhanced performance or a relaxation of the radio tracking requisites in terms of ground station scheduling. Classical optical techniques based on centroids, limbs or landmarks formed the basis for a novel idea for optical navigation, inspired by concepts of stereoscopic vision. In brief, the relation between two overlapping images acquired by a nadir-pointed orbiter spacecraft at different times was defined, and this information was then formulated into an optical measurement to be processed by a navigation filter. The formulation of this novel optical observable is presented; moreover, the analysis of the possible impact on the mission budget and image scheduling is addressed. Simulations are conducted using an orbit determination software already in use for spacecraft navigation, in which the proposed optical measurements were implemented, and the final results are given.
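    The intuition behind the pair-based optical observable can be sketched as follows: the apparent shift of matched surface features between two overlapping nadir images is compared with the shift predicted from the current orbit estimate, and the residual is what the navigation filter would process. The simple ground-track model, pixel scale, altitude and feature coordinates below are illustrative assumptions, not the thesis's actual formulation.

    import numpy as np

    def predicted_shift_px(ground_track_km, altitude_km, ifov_rad):
        """Predicted image-plane shift (pixels) for a nadir-pointed camera."""
        pixel_footprint_km = altitude_km * ifov_rad
        return ground_track_km / pixel_footprint_km

    def optical_residual(pts_a, pts_b, ground_track_km, altitude_km, ifov_rad):
        """Mean measured feature shift minus the shift predicted from the orbit."""
        measured = np.mean(np.linalg.norm(pts_b - pts_a, axis=1))
        return measured - predicted_shift_px(ground_track_km, altitude_km, ifov_rad)

    if __name__ == "__main__":
        # Hypothetical matched feature coordinates (pixels) in the two images.
        pts_a = np.array([[100.0, 200.0], [340.0, 410.0], [512.0, 90.0]])
        pts_b = pts_a + np.array([48.0, 2.0])   # observed shift between exposures
        # Assumed geometry: 250 m of ground track between exposures, 400 km altitude,
        # 12.5 microradian pixel IFOV (about 5 m per pixel on the ground).
        r = optical_residual(pts_a, pts_b, ground_track_km=0.25,
                             altitude_km=400.0, ifov_rad=12.5e-6)
        print(f"optical residual: {r:.2f} px")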

    Unmanned Aerial Systems: Research, Development, Education & Training at Embry-Riddle Aeronautical University

    With technological breakthroughs in miniaturized aircraft-related components, including but not limited to communications, computer systems and sensors, state-of-the-art unmanned aerial systems (UAS) have become a reality. This fast-growing industry is anticipating and responding to a myriad of societal applications that will provide new and more cost-effective solutions than previous technologies could, or will replace activities that involved humans in flight with the associated risks. Embry-Riddle Aeronautical University has a long history of aviation-related research and education, and is heavily engaged in UAS activities. This document provides a summary of these activities, and is divided into two parts. The first part provides a brief summary of each of the various activities, while the second part lists the faculty associated with those activities. Within the first part of this document we have separated UAS activities into two broad areas: Engineering and Applications. Each of these broad areas is then further broken down into six sub-areas, which are listed in the Table of Contents. The second part lists the faculty, sorted by campus (Daytona Beach-D, Prescott-P and Worldwide-W), associated with the UAS activities. The UAS activities and the corresponding faculty are cross-referenced. We have chosen to provide very short summaries of the UAS activities rather than lengthy descriptions. If more information is desired, please contact me directly, visit our research website (https://erau.edu/research), or contact the appropriate faculty member using the e-mail address provided at the end of this document.