62 research outputs found
Lunar Terrain Relative Navigation Using a Convolutional Neural Network for Visual Crater Detection
Terrain relative navigation can improve the precision of a spacecraft's
position estimate by detecting global features that act as supplementary
measurements to correct for drift in the inertial navigation system. This paper
presents a system that uses a convolutional neural network (CNN) and image
processing methods to track the location of a simulated spacecraft with an
extended Kalman filter (EKF). The CNN, called LunaNet, visually detects craters
in the simulated camera frame and those detections are matched to known lunar
craters in the region of the current estimated spacecraft position. These
matched craters are treated as features that are tracked using the EKF. LunaNet
enables more reliable position tracking over a simulated trajectory due to its
greater robustness to changes in image brightness and more repeatable crater
detections from frame to frame throughout a trajectory. LunaNet combined with
an EKF produces a decrease of 60% in the average final position estimation
error and a decrease of 25% in average final velocity estimation error compared
to an EKF using an image processing-based crater detection method when tested
on trajectories using images of standard brightness.
Comment: 6 pages, 4 figures. This work was accepted by the 2020 American Control Conference.
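The correction step described above can be sketched as a standard EKF measurement update, where a matched crater acts as a position-like measurement that pulls the drifting inertial estimate back toward truth. This is a minimal illustrative sketch, not the authors' implementation; the state layout, noise values, and measurement model are assumptions.

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    """Standard EKF update: state x, covariance P, measurement z,
    measurement Jacobian H, measurement noise covariance R."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Assumed toy state: [px, py, vx, vy]; a matched crater yields a position fix.
x = np.array([100.0, 200.0, 1.0, 0.5])
P = np.diag([25.0, 25.0, 1.0, 1.0])
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
R = np.eye(2) * 4.0                  # assumed crater-centroid measurement noise
z = np.array([103.0, 197.0])         # position implied by the matched crater

x, P = ekf_update(x, P, z, H, R)
```

The update moves the position estimate toward the crater-derived measurement and shrinks the position covariance, which is how repeated, reliable crater detections bound the drift over a trajectory.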
Autonomous crater detection on asteroids using a fully-convolutional neural network
This paper shows the application of autonomous crater detection using the
U-Net, a fully-convolutional neural network, on Ceres. The U-Net is trained on
optical images from the Moon Global Morphology Mosaic, based on data collected
by the LRO, together with manual crater catalogues. The Moon-trained network is
then tested on Dawn optical images of Ceres by means of a Transfer Learning
(TL) approach: the trained model is fine-tuned using 100, 500 and 1000
additional images of Ceres. Test performance was measured on 350
never-before-seen images, reaching a testing accuracy of 96.24%, 96.95% and
97.19%, respectively. This shows that, despite the intrinsic differences
between the Moon and Ceres, TL works with encouraging results. The output of
the U-Net contains predicted craters; it is post-processed by applying global
thresholding for image binarization and a template matching algorithm to
extract crater positions and radii in pixel space. Post-processed craters are
counted and compared to the ground truth data in order to compute image
segmentation metrics: precision, recall and F1 score. These indices are
computed, and their implications are discussed for tasks such as automated
crater cataloguing and optical navigation
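The binarization and scoring steps described above can be sketched as follows: the network's probability map is thresholded into a binary mask, then compared pixel-wise against ground truth to compute precision, recall and F1. The threshold value and the toy arrays are illustrative assumptions, not values from the paper.

```python
import numpy as np

def binarize(prob_map, threshold=0.5):
    """Global thresholding: binarize a per-pixel crater probability map."""
    return (prob_map >= threshold).astype(np.uint8)

def segmentation_metrics(pred, truth):
    """Pixel-wise precision, recall and F1 against a ground-truth mask."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy 2x2 probability map and ground-truth mask (stand-ins for U-Net output).
prob = np.array([[0.9, 0.2],
                 [0.7, 0.1]])
truth = np.array([[1, 0],
                  [0, 0]], dtype=np.uint8)
pred = binarize(prob)
p, r, f1 = segmentation_metrics(pred, truth)
```

In the paper's pipeline a template matching pass then extracts crater centres and radii from the binary mask; the metrics above are what the extracted detections are ultimately scored with.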
A flexible deep learning crater detection scheme using Segment Anything Model (SAM)
Peer reviewed. Publisher PDF.
An Open Source, Autonomous, Vision-Based Algorithm for Hazard Detection and Avoidance for Celestial Body Landing
Planetary exploration is one of the main goals humankind has established for space exploration, both to prepare for colonizing new places and to provide scientific data for a better understanding of the formation of our solar system. To ensure a safe approach, several safety measures must be undertaken to guarantee not only the success of the mission but also the safety of the crew. One of these measures is the Autonomous Hazard Detection and Avoidance (HDA) sub-system for celestial body landers, which will enable different spacecraft to complete solar system exploration. The main objective of the HDA sub-system is to assemble a map of the local terrain during the descent of the spacecraft so that a safe landing site can be identified. This thesis focuses on a passive method using a monocular camera as its primary detection sensor; the camera's form factor and weight enable its implementation alongside the proposed HDA algorithm on the Intuitive Machines lunar lander NOVA-C, part of the Commercial Lunar Payload Services technological demonstration in 2021 for the NASA Artemis program to return humans to the Moon. The algorithm draws on two different sources for making decisions: a two-dimensional (2D) vision-based HDA map and a three-dimensional (3D) HDA map obtained through a Structure from Motion process in combination with a plane-fitting sequence. These two maps provide different metrics that give the lander a better probability of performing a safe touchdown; the metrics are processed to optimize a cost function
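The fusion of the two hazard sources into a cost function can be sketched as a weighted combination of per-pixel hazard maps, with the safest pixel selected as the landing target. The weights, map contents, and linear form of the cost are illustrative assumptions; the thesis's actual cost function is more involved.

```python
import numpy as np

def select_landing_site(vision_hazard, slope_hazard,
                        w_vision=0.5, w_slope=0.5):
    """Combine a 2D vision-based hazard map and a 3D slope-derived hazard
    map (each per-pixel in [0, 1], lower is safer) into one cost map and
    return the safest pixel's (row, col) index."""
    cost = w_vision * vision_hazard + w_slope * slope_hazard
    idx = np.unravel_index(np.argmin(cost), cost.shape)
    return idx, cost

# Toy 2x2 hazard maps standing in for the 2D and 3D HDA maps.
vision = np.array([[0.8, 0.3],
                   [0.2, 0.9]])
slope = np.array([[0.7, 0.4],
                  [0.1, 0.8]])
site, cost = select_landing_site(vision, slope)
```

Here the lower-left pixel, flat and texture-benign in both maps, minimizes the combined cost and would be marked as the touchdown target.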
Deep learning methods applied to digital elevation models: state of the art
Deep Learning (DL) has a wide variety of applications in various
thematic domains, including spatial information. Although with
limitations, it is also starting to be considered in operations
related to Digital Elevation Models (DEMs). This study aims to
review the methods of DL applied in the field of altimetric spatial
information in general, and DEMs in particular. Void Filling (VF),
Super-Resolution (SR), landform classification and hydrography
extraction are just some of the operations where traditional methods
are being replaced by DL methods. Our review concludes
that although these methods have great potential, there are
aspects that need to be improved. More appropriate terrain information
or algorithm parameterisation are some of the challenges
that this methodology still needs to face.
Funded by the project 'Functional Quality of Digital Elevation Models in Engineering' of the State Research Agency of Spain, PID2019-106195RB-I00/AEI/10.13039/50110001103
AstroVision: Towards Autonomous Feature Detection and Description for Missions to Small Bodies Using Deep Learning
Missions to small celestial bodies rely heavily on optical feature tracking
for characterization of and relative navigation around the target body. While
deep learning has led to great advancements in feature detection and
description, training and validating data-driven models for space applications
is challenging due to the limited availability of large-scale, annotated
datasets. This paper introduces AstroVision, a large-scale dataset comprising
115,970 densely annotated, real images of 16 different small bodies captured
during past and ongoing missions. We leverage AstroVision to develop a set of
standardized benchmarks and conduct an exhaustive evaluation of both
handcrafted and data-driven feature detection and description methods. Next, we
employ AstroVision for end-to-end training of a state-of-the-art, deep feature
detection and description network and demonstrate improved performance on
multiple benchmarks. The full benchmarking pipeline and the dataset will be
made publicly available to facilitate the advancement of computer vision
algorithms for space applications
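A core step that such feature detection and description benchmarks exercise is descriptor matching; as an illustration, here is nearest-neighbour matching with Lowe's ratio test in plain NumPy. The descriptors are random stand-ins, not AstroVision data, and the ratio threshold is an assumed typical value.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    """Return index pairs (i, j) where d1[i]'s nearest neighbour d2[j]
    passes the ratio test against the second-nearest neighbour."""
    matches = []
    for i, d in enumerate(d1):
        dists = np.linalg.norm(d2 - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# Toy data: d1's rows are slightly perturbed copies of two rows of d2,
# so the correct correspondences are unambiguous.
rng = np.random.default_rng(0)
d2 = rng.normal(size=(5, 32))
d1 = d2[[2, 4]] + rng.normal(scale=0.01, size=(2, 32))
matches = match_descriptors(d1, d2)
```

Benchmark metrics then score matches like these against ground-truth correspondences, which is what makes a large annotated dataset such as AstroVision valuable for evaluation.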
Implicit Extended Kalman Filter for Optical Terrain Relative Navigation Using Delayed Measurements
The exploration of celestial bodies such as the Moon, Mars, or even smaller ones such as comets and asteroids, is the next frontier of space exploration. One of the most scientifically interesting and attractive capabilities in this field is for a spacecraft to land on such bodies. Monocular cameras are widely adopted to perform this task due to their low cost and system complexity; nevertheless, image-based algorithms for motion estimation range across different scales of complexity and computational load. In this paper, a method to perform relative (or local) terrain navigation using frame-to-frame feature correspondences and altimeter measurements is presented. The proposed image-based approach relies on the implementation of the implicit extended Kalman filter, which works with nonlinear dynamic models and corrections from measurements that are implicit functions of the state variables. In particular, the epipolar constraint, a geometric relationship between the feature point position vectors and the camera translation vector, is employed as the implicit measurement, fused with altimeter updates. In realistic applications, the image processing routines require a certain amount of time to execute. For this reason, the presented navigation system entails a fast cycle using altimeter measurements and a slow cycle with image-based updates. Moreover, the intrinsic delay of the feature matching execution is taken into account using a modified extrapolation method
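The implicit measurement referred to above is the epipolar constraint h = p2ᵀ E p1 = 0, with essential matrix E = [t]ₓ R built from the inter-frame rotation R and translation t; the residual vanishes exactly when the two feature observations are geometrically consistent with the motion. This sketch evaluates that residual for assumed toy values, not the paper's filter.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that skew(t) @ v == cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residual(p1, p2, R, t):
    """Implicit measurement h = p2^T E p1 with E = [t]_x R; zero when the
    feature observations p1, p2 are consistent with the motion (R, t)."""
    E = skew(t) @ R
    return float(p2 @ E @ p1)

# Toy case: pure translation along x, point observed in normalized
# image coordinates; p2 is shifted along the resulting epipolar line.
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
p1 = np.array([0.0, 0.0, 1.0])
p2 = np.array([-0.1, 0.0, 1.0])

res = epipolar_residual(p1, p2, R, t)
```

In the filter, this scalar residual (per tracked feature) plays the role of the innovation: since h depends on the state only implicitly, the update uses the Jacobian of h rather than an explicit measurement prediction.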
AI Applications on Planetary Rovers
The rise in the number of robotic missions to space is paving the way for the use of artificial intelligence and machine learning in the autonomy and augmentation of rover operations. For one, more rovers mean more images, and more images mean more data bandwidth required for downlinking as well as more mental bandwidth for analyzing the images. On the other hand, lightweight, low-powered microrover platforms are being developed to accommodate the drive for planetary exploration. As a result of the mass and power constraints, these microrover platforms will not carry typical navigational instruments like a stereo camera or a laser rangefinder, relying instead on a single, monocular camera.
The first project in this thesis explores the realm of novelty detection, where the goal is to find 'new' and 'interesting' features such that instead of sending a whole set of images, the algorithm could simply flag any image that contains novel features to prioritize its downlink. This form of data triage allows the science team to redirect its attention to objects that could be of high science value. For this project, a combination of a Convolutional Neural Network (CNN) with a K-means algorithm as a tool for novelty detection is introduced. By leveraging the powerful feature extraction capabilities of a CNN, typical images could be tightly clustered into the number of expected entities within the rover's environment. The distance between the extracted feature vector and the closest cluster centroid is then defined to be its novelty score. As such, a novel image will have a significantly higher distance to the cluster centroids compared to the typical images. This algorithm was trained on images obtained from the Canadian Space Agency's Analogue Terrain Facility and was shown to be effective in capturing the majority of the novel images within the dataset.
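The novelty score defined above, distance from a feature vector to the nearest K-means centroid, can be sketched in a few lines. The 2D feature vectors and centroids here are toy stand-ins for CNN embeddings and learned clusters.

```python
import numpy as np

def novelty_score(feature, centroids):
    """Novelty score: Euclidean distance from the image's feature vector
    to the closest cluster centroid (higher = more novel)."""
    dists = np.linalg.norm(centroids - feature, axis=1)
    return float(dists.min())

# Assumed centroids learned by K-means from typical-image features.
centroids = np.array([[0.0, 0.0],
                      [10.0, 10.0]])
typical = np.array([0.5, -0.2])  # lands near a cluster of typical images
novel = np.array([5.0, 5.0])     # far from every centroid

s_typical = novelty_score(typical, centroids)
s_novel = novelty_score(novel, centroids)
```

A downlink policy would then flag images whose score exceeds a threshold calibrated on the typical-image distribution, realizing the data triage described above.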
The second project in this thesis aims to augment microrover platforms that lack instruments for distance measurement. In particular, this project explores the application of monocular depth estimation, where the goal is to estimate a depth map from a monocular image. This problem is inherently difficult given that recovering depth from a 2D image is a mathematically ill-posed problem, compounded by the fact that the lunar environment is a dull, colourless landscape. To solve this problem, a dataset of images and their corresponding ground truth depth maps was collected at Mission Control Space Service's Indoor Analogue Terrain. An autoencoder was then trained to take in the image and output an estimated depth map. The results of this project show that the model is not reliable at gauging the distances of slopes and objects near the horizon. However, the generated depth maps are reliable in the short to mid range, where the distances are most relevant for remote rover operations