    Real-Time Satellite Component Recognition with YOLO-V5

    With the increasing risk of collisions with space debris and the growing interest in on-orbit servicing, the ability to autonomously capture non-cooperative, tumbling target objects remains an unresolved challenge. To accomplish this task, characterizing and classifying satellite components is critical to the success of the mission. This paper focuses on using machine vision on a small satellite to perform image classification by locating and identifying satellite components such as satellite bodies, solar panels, or antennas. The classification and component detection approach is based on "You Only Look Once" (YOLO) V5, which uses neural networks to identify the satellite components. The training dataset includes images of real and virtual satellites, along with additional preprocessed images to increase the effectiveness of the algorithm. The weights obtained from training are then used in a spacecraft motion dynamics and orbital lighting simulator to test classification and detection performance. Each test case entails a different approach path of the chaser satellite to the target satellite, a different attitude motion of the target satellite, and different lighting conditions that mimic illumination from the Sun. Initial results indicate that, once trained, the YOLO V5 approach can effectively process an input camera feed to solve satellite classification and component detection problems in real time within the limitations of flight computers.
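
    As a concrete illustration of the kind of inference loop the paper describes, the sketch below runs a custom-trained YOLOv5 model over a camera feed through the public ultralytics/yolov5 torch-hub API. The weights file name, camera index, and confidence threshold are assumptions for illustration, not details from the paper.

        # Minimal sketch of real-time component detection with a custom-trained
        # YOLOv5 model; "satellite_components.pt" is a hypothetical weights file
        # trained on classes such as body, solar panel, and antenna.
        import cv2
        import torch

        # Load custom weights through the ultralytics/yolov5 torch-hub entry point.
        model = torch.hub.load("ultralytics/yolov5", "custom",
                               path="satellite_components.pt")
        model.conf = 0.5  # confidence threshold (assumed value)

        cap = cv2.VideoCapture(0)  # onboard camera feed (index is an assumption)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # YOLOv5 expects RGB input; OpenCV delivers BGR frames.
            results = model(frame[:, :, ::-1])
            # results.xyxy[0]: one row per detection -> x1, y1, x2, y2, conf, class
            for *box, conf, cls in results.xyxy[0].tolist():
                print(model.names[int(cls)], conf, box)
        cap.release()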

    Tracking Keypoints from Consecutive Video Frames Using CNN Features for Space Applications

    Hard time constraints in space missions create a need for fast video processing across numerous autonomous tasks. Video processing involves separating distinct image frames, computing image descriptors, and applying machine learning algorithms for object detection, obstacle avoidance, and the many other tasks involved in the automatic maneuvering of a spacecraft. These tasks require the most informative descriptions of an image within the time constraints. Tracking these informative points across consecutive image frames is needed in flow estimation applications. Classical algorithms such as SIFT and SURF are milestones in the development of feature description, but their computational complexity and long runtimes prevent their adoption in real-time processing for critical missions. Hence, a less complex and time-efficient pre-trained Convolutional Neural Network (CNN) model is chosen in this paper as a feature descriptor. A 7-layer CNN model is designed and implemented with pre-trained VGG model parameters, and these CNN features are used to match points of interest across consecutive image frames of a lunar descent video. The performance of the system is evaluated through visual and empirical keypoint matching. The match scores between consecutive video frames using CNN features are then compared with state-of-the-art algorithms such as SIFT and SURF. The results show that CNN features are more reliable and robust for time-critical video processing in keypoint tracking applications of space missions.
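
    A minimal sketch of the descriptor idea: truncate a pre-trained VGG to its first convolutional layers and match the resulting unit-normalized feature vectors between two frames by cosine similarity. The 7-layer slice and the dense grid matching below are simplifying assumptions; the paper's exact architecture is not reproduced here.

        # Sketch, assuming two preprocessed frames as float tensors of shape
        # (1, 3, H, W); the first convolutional blocks of a pre-trained VGG16
        # serve as a fixed feature descriptor.
        import torch
        import torch.nn.functional as F
        import torchvision.models as models

        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:7].eval()

        def describe(img):
            with torch.no_grad():
                fmap = vgg(img)              # (1, C, H', W') latent feature map
            return F.normalize(fmap, dim=1)  # unit-norm descriptor per grid cell

        def match(f1, f2):
            # Flatten the spatial grids and match by cosine similarity.
            d1 = f1.flatten(2).squeeze(0).T  # (H'*W', C)
            d2 = f2.flatten(2).squeeze(0).T
            sim = d1 @ d2.T                  # cosine similarity (unit-norm rows)
            return sim.argmax(dim=1)         # best frame-2 match per frame-1 cell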

    Autonomous Navigation for Mars Exploration

    Autonomous navigation technology uses multiple sensors to perceive and estimate the spatial location of a space probe or Mars rover and to guide its motion in orbit or on the Martian surface. In this chapter, autonomous navigation methods for Mars exploration are reviewed. First, the current development status of autonomous navigation technology is summarized, and popular methods, such as inertial navigation, celestial navigation, visual navigation, and integrated navigation, are introduced. Second, the application of autonomous navigation technology to Mars exploration is presented, focusing on issues in the Entry, Descent and Landing (EDL) phase and the Mars surface roving phase. Third, challenges and development trends of autonomous navigation technology are addressed.
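
    The integrated-navigation idea the chapter surveys, fusing drift-prone inertial dead reckoning with occasional absolute fixes from celestial or visual measurements, can be illustrated with a scalar Kalman filter. This sketch is illustrative only and not taken from the chapter; the 1-D state and all names are simplifying assumptions.

        # One predict/update cycle of a scalar Kalman filter: propagate with an
        # inertially derived velocity, then correct with an absolute position
        # fix (e.g., from a visual landmark) when one is available.
        def kalman_step(x, P, vel, dt, q, z=None, r=None):
            # Predict: dead-reckon the position; inertial drift grows P.
            x = x + vel * dt
            P = P + q
            # Update: fuse the absolute fix to bound the drift.
            if z is not None:
                K = P / (P + r)          # Kalman gain
                x = x + K * (z - x)      # pull the state toward the measurement
                P = (1.0 - K) * P
            return x, P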

    Taking a PEEK into YOLOv5 for Satellite Component Recognition via Entropy-based Visual Explanations

    The escalating risk of collisions and the accumulation of space debris in Low Earth Orbit (LEO) have become a critical concern due to the ever-increasing number of spacecraft. Addressing this crisis, especially when dealing with non-cooperative and unidentified space debris, is of paramount importance. This paper contributes to efforts to enable autonomous swarms of small chaser satellites to determine target geometry and plan safe flight trajectories for proximity operations in LEO. Our research explores on-orbit use of the You Only Look Once v5 (YOLOv5) object detection model trained to detect satellite components. While this model has shown promise, its inherent lack of interpretability hinders human understanding, a critical aspect of validating algorithms for use in safety-critical missions. To analyze its decision processes, we introduce Probabilistic Explanations for Entropic Knowledge extraction (PEEK), a method that applies information-theoretic analysis to the latent representations within the hidden layers of the model. Through both synthetic and hardware-in-the-loop experiments, PEEK illuminates the decision-making processes of the model, helping identify its strengths, limitations, and biases.
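
    PEEK itself is not reproduced here, but the information-theoretic ingredient it builds on can be sketched: treat a hidden layer's feature maps as probability distributions over spatial positions and score each channel by its Shannon entropy. The hook target and layer index in the commented usage are hypothetical.

        # Simplified entropy scoring of hidden-layer activations; this is an
        # illustration of the underlying idea, not the authors' full method.
        import torch

        def feature_map_entropy(fmap, eps=1e-12):
            # fmap: (C, H, W) activations from one hidden layer.
            a = fmap.abs().flatten(1)                  # (C, H*W) non-negative mass
            p = a / (a.sum(dim=1, keepdim=True) + eps) # per-channel distribution
            return -(p * (p + eps).log()).sum(dim=1)   # entropy per channel (nats)

        activations = {}
        def hook(module, inp, out):
            activations["latent"] = out.detach()[0]    # (C, H, W) for batch item 0

        # Hypothetical usage with a loaded detector:
        # layer = model.model.model[4]   # pick some hidden layer (assumed path)
        # layer.register_forward_hook(hook)
        # model(image); H = feature_map_entropy(activations["latent"])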

    Efficient Feature Description for Small Body Relative Navigation using Binary Convolutional Neural Networks

    Missions to small celestial bodies rely heavily on optical feature tracking for characterization of and relative navigation around the target body. While techniques for feature tracking based on deep learning are a promising alternative to current human-in-the-loop processes, designing deep architectures that can operate onboard spacecraft is challenging due to onboard computational and memory constraints. This paper introduces a novel deep local feature description architecture that leverages binary convolutional neural network layers to significantly reduce computational and memory requirements. We train and test our models on real images of small bodies from legacy and ongoing missions and demonstrate increased performance relative to traditional handcrafted methods. Moreover, we implement our models onboard a surrogate for the next-generation spacecraft processor and demonstrate feasible runtimes for online feature tracking.

    Comment: Presented at the 2023 AAS Guidance, Navigation and Control (GN&C) Conference, February 2-8, 2023.
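
    A minimal sketch of the core building block, a binary convolutional layer in the XNOR-Net style: forward passes use sign-binarized weights scaled by their mean magnitude, and gradients flow through a straight-through estimator. This is an illustrative stand-in under those assumptions, not the paper's architecture.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class BinarizeSTE(torch.autograd.Function):
            @staticmethod
            def forward(ctx, w):
                return w.sign()          # weights collapse to {-1, +1}
            @staticmethod
            def backward(ctx, grad_out):
                return grad_out          # straight-through: gradient passes unchanged

        class BinaryConv2d(nn.Conv2d):
            def forward(self, x):
                alpha = self.weight.abs().mean()        # per-layer scaling factor
                w_bin = BinarizeSTE.apply(self.weight)  # binarized weights
                return alpha * F.conv2d(x, w_bin, self.bias, self.stride,
                                        self.padding, self.dilation, self.groups)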

    Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous

    Research on deep learning techniques for autonomous spacecraft relative navigation has grown continuously in recent years. Adopting these techniques offers enhanced performance, but it also raises concerns regarding the trustworthiness and security of deep learning methods, given their susceptibility to adversarial attacks. In this work, we propose a novel approach to adversarial attack detection for deep neural network-based relative pose estimation schemes, based on the concept of explainability. For an orbital rendezvous scenario, we develop an innovative relative pose estimation technique adopting our proposed Convolutional Neural Network (CNN), which takes an image from the chaser's onboard camera and accurately outputs the target's relative position and rotation. We seamlessly perturb the input images using adversarial attacks generated by the Fast Gradient Sign Method (FGSM). The adversarial attack detector is then built on a Long Short-Term Memory (LSTM) network, which takes as input an explainability measure, namely the Shapley values from the CNN-based pose estimator, and flags adversarial attacks when they occur. Simulation results show that the proposed adversarial attack detector achieves a detection accuracy of 99.21%. Both the deep relative pose estimator and the adversarial attack detector are then tested on real data captured from our laboratory-designed setup, where the detector achieves an average detection accuracy of 96.29%.
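
    The FGSM perturbation the paper uses is a one-step gradient-sign attack; a minimal sketch follows, where pose_model and loss_fn are stand-ins for the paper's CNN pose estimator and its regression loss.

        import torch

        def fgsm_attack(pose_model, loss_fn, image, target_pose, eps=0.01):
            # Perturb the input in the direction of the sign of the loss gradient.
            image = image.clone().detach().requires_grad_(True)
            loss = loss_fn(pose_model(image), target_pose)
            loss.backward()
            # One-step perturbation, clipped back to a valid pixel range.
            adv = image + eps * image.grad.sign()
            return adv.clamp(0.0, 1.0).detach()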

    Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation around Non-Cooperative Targets

    Autonomous navigation and path planning around non-cooperative space objects is an enabling technology for on-orbit servicing and space debris removal systems. The navigation task includes determining target object motion, identifying target object features suitable for grasping, and identifying collision hazards and other keep-out zones. Given this knowledge, a chaser spacecraft can be guided toward capture locations without damaging the target object and without unduly disrupting the operations of a serviced target by covering up solar arrays or communication antennas. One way to autonomously achieve target identification, characterization, and feature recognition is through artificial intelligence algorithms. This paper discusses how the combination of cameras and machine learning algorithms can accomplish the relative navigation task. The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Networks (R-CNN) and You Only Look Once (YOLOv5), is tested using experimental data obtained in formation flight simulations in the ORION Lab at the Florida Institute of Technology. The simulation scenarios vary the yaw motion of the target object, the chaser approach trajectory, and the lighting conditions in order to test the algorithms in a wide range of realistic and performance-limiting situations. The analysis includes mean average precision metrics to compare the performance of the object detectors. The paper discusses the path to implementing the feature recognition algorithms and integrating them into the spacecraft Guidance, Navigation and Control system.

    Comment: 12 pages, 10 figures, 9 tables, IEEE Aerospace Conference 202
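
    The comparison metric can be sketched as follows: average precision for one class at an IoU threshold of 0.5, with mAP obtained by averaging over classes. This is a simplified step-integration variant under assumed inputs (lists of predicted boxes with scores and ground-truth boxes), not the exact COCO/VOC protocol.

        def iou(a, b):
            # Intersection over union of two (x1, y1, x2, y2) boxes.
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
            return inter / (area(a) + area(b) - inter + 1e-12)

        def average_precision(preds, gts, thr=0.5):
            # preds: [(x1, y1, x2, y2, score), ...]; gts: [(x1, y1, x2, y2), ...]
            preds = sorted(preds, key=lambda p: -p[4])   # rank by confidence
            matched, tp, ap, recall_prev = set(), 0, 0.0, 0.0
            for i, p in enumerate(preds, 1):
                best = max(range(len(gts)),
                           key=lambda j: iou(p[:4], gts[j]), default=None)
                if (best is not None and best not in matched
                        and iou(p[:4], gts[best]) >= thr):
                    matched.add(best)
                    tp += 1
                    recall = tp / len(gts)
                    ap += (recall - recall_prev) * (tp / i)  # precision * d(recall)
                    recall_prev = recall
            return ap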