
    Hardware Verification of Lunar Terrain Relative Navigation

    Autonomous delivery to the lunar surface requires proven, cost-effective navigation techniques, especially during the final descent. Terrain Relative Navigation (TRN) is a compelling solution because it has no infrastructure requirements, such as beacons on the lunar surface. However, previous validation of this technique has relied on software simulation of image acquisition and vehicle state estimation. This research leverages the autonomous drone facilities at USU to validate the TRN technique in a hardware system. A hardware demonstration will verify the effectiveness of the TRN technique in a realistic environment simulating the final descent of a lunar lander. This environment will provide realistic data using a scaled model of the lunar surface generated from digital elevation model data. The autonomous drone environment will verify performance by comparing the state estimate onboard the drone with the true state observed by the motion tracking system. Validation of this technique in a hardware application is critical for reliable autonomous navigation to the lunar surface.
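The verification step described above, comparing the drone's onboard estimate against motion-capture ground truth, amounts to a straightforward error computation. A minimal sketch follows; all names, the common time base, and the synthetic data are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def position_rmse(estimated_xyz, mocap_xyz):
    """RMS position error between onboard estimates and motion-capture truth.

    Both arrays are (N, 3) and assumed already aligned to a common time base;
    the names and this alignment step are illustrative assumptions.
    """
    err = np.asarray(estimated_xyz) - np.asarray(mocap_xyz)
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))

# Example: 0.1 s sampling over a short synthetic descent segment
t = np.arange(0.0, 10.0, 0.1)
truth = np.stack([np.sin(t), np.cos(t), 5.0 - 0.4 * t], axis=1)    # mocap "true" trajectory
estimate = truth + np.random.normal(scale=0.02, size=truth.shape)  # simulated TRN estimate
print(f"RMS position error: {position_rmse(estimate, truth):.3f} m")
```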

    Experimental results of a terrain relative navigation algorithm using a simulated lunar scenario

    This paper deals with the problem of navigating a lunar lander using the Terrain Relative Navigation approach. An algorithm is developed and tested on a scaled simulated lunar scenario, over which a tri-axial moving frame has been built to reproduce the landing trajectories. At the tip of the tri-axial moving frame, a long-range and a short-range infrared distance sensor are mounted to measure the altitude. The calibration of the distance sensors is of crucial importance for obtaining good measurements. For this purpose, the sensors are calibrated by optimizing a nonlinear transfer function and a bias function using a least-squares method. As a consequence, the covariance of the sensors is approximated with a second-order function of the distance. The two sensors have different operating ranges that overlap in a small region. A switch strategy is developed in order to obtain the best performance in the overlapping range. As a result, a single error model as a function of the distance is found after evaluation of the switch strategy. Because of environmental factors such as temperature, a bias drift is evaluated for both sensors and properly taken into account in the algorithm. In order to incorporate surface information into the navigation algorithm, a Digital Elevation Model of the simulated lunar surface is used. The navigation algorithm is designed as an Extended Kalman Filter which uses the altitude measurements, the Digital Elevation Model, and the acceleration measurements coming from the moving frame. The objective of the navigation algorithm is to estimate the position of the simulated space vehicle during the landing, from an altitude of 3 km to a landing site in the proximity of a crater rim. Because the algorithm needs to be updated during the landing, a crater peak detector is devised to reset the navigation filter with a new state vector and new state covariance. Experimental results of the navigation algorithm are presented in the paper.
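The calibration step described above, fitting a nonlinear transfer function by least squares and approximating the sensor covariance as a second-order function of distance, can be sketched as follows. The power-law-plus-bias transfer function, the synthetic bench data, and all names are assumptions for illustration; the paper's actual functional forms may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def transfer(raw, a, b, c):
    """Assumed (illustrative) transfer function: raw sensor reading -> distance."""
    return a * raw**b + c

# Synthetic calibration data standing in for bench measurements
true_dist = np.linspace(0.2, 1.5, 40)  # metres
raw = (true_dist / 0.9)**(1 / 1.1) + np.random.normal(scale=0.01, size=true_dist.size)

# Least-squares fit of the transfer-function parameters
params, _ = curve_fit(transfer, raw, true_dist, p0=(1.0, 1.0, 0.0))

# Distance-dependent noise model: fit residual variance as a quadratic in distance,
# mirroring the second-order covariance-vs-distance approximation described above.
residuals = true_dist - transfer(raw, *params)
var_coeffs = np.polyfit(true_dist, residuals**2, 2)

print("transfer params:", params)
print("variance(d) ≈ {:.2e} d^2 + {:.2e} d + {:.2e}".format(*var_coeffs))
```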

    A Topographical Lidar System for Terrain-Relative Navigation

    An imaging lidar system is being developed for use in navigation relative to the local terrain. This technology will potentially be used by future spacecraft landing on the Moon. Systems like this one could also be used on Earth for diverse purposes, including mapping terrain, navigating aircraft with respect to terrain, and military applications. The system has been field-tested aboard a helicopter in the Mojave Desert. When this system was designed, digitizers with a sufficient sampling rate (2 GHz) were only available with very limited memory. It was also desirable to limit the amount of data transferred between the digitizer and mass storage between individual frames. A novel feature of this system is therefore its design around the limited memory of the digitizer. The system is required to operate over an altitude (distance) range from a few meters to approximately 1 km, but for each scan across the full field of view, the digitizer memory can only hold data for an altitude range of no more than 100 m. Data acquisition methods that work within this 100 m altitude window are described.
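The memory constraint described above follows from simple time-of-flight arithmetic: at a 2 GHz sampling rate, a 100 m round-trip range window corresponds to roughly 1,300 samples per return. A minimal sketch of that arithmetic follows; the range-gating scheme and names are assumptions for illustration, not the instrument's actual acquisition logic.

```python
C = 299_792_458.0    # speed of light, m/s
SAMPLE_RATE = 2.0e9  # digitizer sampling rate, Hz

def samples_for_range_window(window_m):
    """Number of digitizer samples needed to cover a round-trip range window."""
    round_trip_time = 2.0 * window_m / C  # laser pulse travels out and back
    return int(round(round_trip_time * SAMPLE_RATE))

def range_gate_delay(expected_altitude_m):
    """Trigger delay so the record is centred on the expected return
    (an assumed gating scheme, for illustration only)."""
    return 2.0 * expected_altitude_m / C

print(samples_for_range_window(100.0))                           # ≈1334 samples for a 100 m window
print(f"{range_gate_delay(1000.0)*1e6:.2f} µs delay at 1 km altitude")
```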

    SPLICE Safe and Precise Landing - Integrated Capabilities Evolution

    The SPLICE project is developing, maturing, demonstrating, and infusing precision landing and hazard avoidance (PL&HA) technologies for NASA and potential commercial spaceflight missions. Near-term development includes high-precision and high-accuracy velocimetry with ranging (via the NDL), high-resolution real-time mapping and hazard detection with ranging (via the HDL), lunar terrain relative navigation (TRN), and the requisite high-performance computing capability. These technologies are initially intended to provide PL&HA for the Moon, but are extensible to any planetary body. Long term, the goal is to make these capabilities available to government and commercial entities and to license the technology to commercial entities for production.

    Implicit Extended Kalman Filter for Optical Terrain Relative Navigation Using Delayed Measurements

    The exploration of celestial bodies such as the Moon, Mars, or even smaller ones such as comets and asteroids, is the next frontier of space exploration. One of the most scientifically interesting and attractive capabilities in this field is for a spacecraft to land on such bodies. Monocular cameras are widely adopted to perform this task due to their low cost and low system complexity. Nevertheless, image-based algorithms for motion estimation span a wide range of complexity and computational load. In this paper, a method to perform relative (or local) terrain navigation using frame-to-frame feature correspondences and altimeter measurements is presented. The proposed image-based approach relies on the implicit extended Kalman filter, which works with nonlinear dynamic models and corrections from measurements that are implicit functions of the state variables. In particular, the epipolar constraint, which is a geometric relationship between the feature point position vectors and the camera translation vector, is employed as the implicit measurement fused with altimeter updates. In realistic applications, the image processing routines require a certain amount of time to execute. For this reason, the presented navigation system entails a fast cycle using altimeter measurements and a slow cycle with image-based updates. Moreover, the intrinsic delay of the feature matching execution is taken into account using a modified extrapolation method.
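The epipolar constraint used above as the implicit measurement is the standard relation p₂ᵀ [t]ₓ R p₁ = 0 between corresponding bearing vectors in two camera frames. A minimal sketch of evaluating that residual follows; the frame conventions and names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def epipolar_residual(p1, p2, R_12, t_12):
    """Implicit measurement h = p2ᵀ E p1 with essential matrix E = [t]ₓ R.

    p1, p2 : unit bearing vectors of the same feature in frames 1 and 2
    R_12, t_12 : camera rotation and translation between the two frames
    (frame conventions here are illustrative assumptions).
    The residual is zero when the geometry is consistent, so the filter can
    treat h(state, measurement) = 0 as an implicit measurement.
    """
    E = skew(t_12) @ R_12
    return float(p2 @ E @ p1)

# Example: a pure x-translation with a feature straight ahead
p1 = np.array([0.0, 0.0, 1.0])
p2 = np.array([0.01, 0.0, 1.0])
p2 = p2 / np.linalg.norm(p2)
print(epipolar_residual(p1, p2, np.eye(3), np.array([0.1, 0.0, 0.0])))  # ≈ 0
```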

    Lunar Terrain Relative Navigation Using a Convolutional Neural Network for Visual Crater Detection

    Terrain relative navigation can improve the precision of a spacecraft's position estimate by detecting global features that act as supplementary measurements to correct for drift in the inertial navigation system. This paper presents a system that uses a convolutional neural network (CNN) and image processing methods to track the location of a simulated spacecraft with an extended Kalman filter (EKF). The CNN, called LunaNet, visually detects craters in the simulated camera frame, and those detections are matched to known lunar craters in the region of the current estimated spacecraft position. These matched craters are treated as features that are tracked using the EKF. LunaNet enables more reliable position tracking over a simulated trajectory due to its greater robustness to changes in image brightness and its more repeatable crater detections from frame to frame throughout a trajectory. Compared to an EKF using an image-processing-based crater detection method, LunaNet combined with an EKF produces a 60% decrease in average final position estimation error and a 25% decrease in average final velocity estimation error when tested on trajectories using images of standard brightness. (6 pages, 4 figures; accepted to the 2020 American Control Conference.)
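The crater-matching step described above, associating detected crater centres with catalogued craters near the current position estimate, can be sketched as a nearest-neighbour association in image space. The projection of catalogue craters into the image, the pixel gate, and all names are assumptions for illustration; LunaNet's actual matching criteria may differ.

```python
import numpy as np

def match_craters(detected_px, projected_catalog_px, max_px_dist=15.0):
    """Greedy nearest-neighbour association of detected crater centres (pixels)
    with catalogued craters projected into the image at the current state estimate.

    detected_px, projected_catalog_px : (N, 2) and (M, 2) pixel coordinates.
    Returns a list of (detected_index, catalog_index) pairs.
    The 15-pixel gate is an illustrative assumption.
    """
    matches = []
    used = set()
    for i, d in enumerate(detected_px):
        dists = np.linalg.norm(projected_catalog_px - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_px_dist and j not in used:
            matches.append((i, j))
            used.add(j)
    return matches

detected = np.array([[102.0, 240.0], [400.0, 310.0]])
catalog = np.array([[100.0, 238.0], [500.0, 500.0], [398.0, 312.0]])
print(match_craters(detected, catalog))  # [(0, 0), (1, 2)]
```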

    Vision-Based Terrain Relative Navigation on High-Altitude Balloon and Sub-Orbital Rocket

    We present an experimental analysis of a camera-based approach for high-altitude navigation that associates mapped landmarks from a satellite image database with camera images, and that leverages inertial sensors between camera frames. We evaluate the performance of both a sideways-tilted and a downward-facing camera on data collected from a World View Enterprises high-altitude balloon, with data beginning at an altitude of 33 km and descending to near ground level (4.5 km) over 1.5 hours of flight time. We demonstrate less than 290 meters of average position error over a trajectory of more than 150 kilometers. In addition to showing performance across a range of altitudes, we also demonstrate the robustness of the Terrain Relative Navigation (TRN) method to rapid rotations of the balloon, in some cases exceeding 20 degrees per second, and to camera obstructions caused by both cloud coverage and cords swaying underneath the balloon. Additionally, we evaluate performance on data collected by two cameras inside the capsule of Blue Origin's New Shepard rocket on payload flight NS-23, traveling at speeds up to 880 km/hr, and demonstrate less than 55 meters of average position error. (Published at the 2023 AIAA SciTech Forum.)

    Performance Characterization of a Landmark Measurement System for ARRM Terrain Relative Navigation

    This paper describes the landmark measurement system being developed for terrain relative navigation on NASA's Asteroid Redirect Robotic Mission (ARRM), and the results of a performance characterization study given realistic navigational and model errors. The system is called Retina, and it is derived from the stereophotoclinometry methods widely used on other small-body missions. The system is simulated using synthetic imagery of the asteroid surface, and various algorithmic design choices are discussed. Unlike on other missions, ARRM's Retina is the first planned autonomous use of these methods during the close-proximity and descent phase of a mission.

    A Long Distance Laser Altimeter for Terrain Relative Navigation and Spacecraft Landing

    A high-precision laser altimeter was developed under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project at NASA Langley Research Center. The laser altimeter provides slant-path range measurements from operational ranges exceeding 30 km that will be used to support surface-relative state estimation and navigation during planetary descent and precision landing. The altimeter uses an advanced time-of-arrival receiver, which produces multiple signal-return range measurements from tens of kilometers with 5 cm precision. The transmitter is eye-safe, simplifying operations and testing on Earth. The prototype is fully autonomous and able to withstand the thermal and mechanical stresses experienced during test flights conducted aboard helicopters, fixed-wing aircraft, and Morpheus, a terrestrial rocket-powered vehicle developed by NASA Johnson Space Center. This paper provides an overview of the sensor and presents results obtained during recent field experiments, including a helicopter flight test conducted in December 2012 and Morpheus flight tests conducted in March 2014.
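The time-of-arrival measurement described above converts directly to slant range, and the quoted 5 cm precision corresponds to roughly a third of a nanosecond of timing precision. A minimal sketch of that arithmetic follows; the function names are placeholders, not the sensor's interface.

```python
C = 299_792_458.0  # speed of light, m/s

def slant_range(time_of_flight_s):
    """Slant range from round-trip laser time of flight."""
    return 0.5 * C * time_of_flight_s

def timing_precision_for_range_precision(range_precision_m):
    """Round-trip timing precision needed for a given range precision."""
    return 2.0 * range_precision_m / C

print(f"{slant_range(200.14e-6)/1000:.2f} km for a ~200 µs round trip")
print(f"{timing_precision_for_range_precision(0.05)*1e9:.2f} ns for 5 cm precision")
```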

    Autonomous vision-based terrain-relative navigation for planetary exploration

    The interest of major space agencies around the world in vision sensors for their mission designs has been increasing over the years. Indeed, cameras offer an efficient solution to address ever-increasing performance requirements. In addition, these sensors are multipurpose, lightweight, proven, and low-cost. Several researchers in vision sensing for space applications currently focus on navigation systems for autonomous pin-point planetary landing and for sample-return missions to small bodies. Without a Global Positioning System (GPS) or radio beacons around celestial bodies, high-accuracy navigation around them is a complex task. Most navigation systems are based only on accurate initialization of the states and on the integration of acceleration and angular rate measurements from an Inertial Measurement Unit (IMU). This strategy can track sudden motions of short duration very accurately, but its estimate diverges over time and normally leads to large landing errors. In order to improve navigation accuracy, many authors have proposed fusing IMU measurements with vision measurements using state estimators, such as Kalman filters.

    The first proposed vision-based navigation approach relies on feature tracking between sequences of images taken in real time during orbiting and/or landing operations. In that case, image features are pixels that have a high probability of being recognized between images taken from different camera locations. By detecting and tracking these features through a sequence of images, the relative motion of the spacecraft can be determined. This technique, referred to as Terrain-Relative Relative Navigation (TRRN), relies on relatively simple, robust, and well-developed image processing techniques, and it allows the relative motion (velocity) of the spacecraft to be determined. Although this technology has been demonstrated with space-qualified hardware, its gain in accuracy remains limited since the spacecraft's absolute position is not observable from the vision measurements. The vision-based navigation techniques currently studied consist of identifying features and matching them against an on-board cartographic database indexed by an absolute coordinate system, thereby providing absolute position determination. This technique, referred to as Terrain-Relative Absolute Navigation (TRAN), relies on very complex Image Processing Software (IPS) that currently lacks robustness. This software often depends on the spacecraft attitude and position, is sensitive to illumination conditions (the elevation and azimuth of the Sun when the geo-referenced database is built must be similar to those present during the mission), is greatly influenced by image noise, and poorly handles the multiple varieties of terrain seen during the same mission (the spacecraft can fly over plains as well as mountainous regions, and the images may contain old craters with noisy rims as well as young craters with clean rims, and so on). To date, no real-time hardware-in-the-loop experiment has been conducted to demonstrate the applicability of this technology to space missions.

    The main objective of the current study is to develop autonomous vision-based navigation algorithms that provide absolute position and surface-relative velocity during the proximity operations of a planetary mission (orbiting phase and landing phase) using a combined approach of TRRN and TRAN technologies.
The contributions of the study are: (1) reference mission definition, (2) advancements in the TRAN theory (image processing as well as state estimation), and (3) practical implementation of vision-based navigation.
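The fusion described above, propagating the state with IMU measurements and correcting it with absolute position fixes from image-to-map matching, follows the standard Kalman filter propagate/update pattern. A minimal one-dimensional sketch of that pattern follows; the 1-D state, noise levels, update rates, and names are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter illustrating the IMU-propagation /
# vision-update pattern: IMU propagation runs every step, vision fixes arrive less often.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (position, velocity)
B = np.array([[0.5 * dt**2], [dt]])    # accelerometer input mapping
Q = np.diag([1e-4, 1e-3])              # process noise (IMU errors), assumed values
H = np.array([[1.0, 0.0]])             # vision fix measures absolute position
R = np.array([[4.0]])                  # vision fix variance (m^2), assumed value

x = np.array([[0.0], [0.0]])           # state estimate
P = np.eye(2)                          # state covariance

for k in range(100):
    accel = -0.2                       # simulated accelerometer reading
    # Propagate with the IMU (fast cycle)
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    if k % 10 == 0:                    # absolute vision fix arrives every 10th step
        z = np.array([[x[0, 0] + np.random.normal(scale=2.0)]])  # simulated map-matched fix
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

print("final state estimate:", x.ravel())
```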