
    Characterizing Satellite Geometry via Accelerated 3D Gaussian Splatting

    The accelerating deployment of spacecraft in orbit has generated interest in on-orbit servicing (OOS), inspection of spacecraft, and active debris removal (ADR). Such missions require precise rendezvous and proximity operations in the vicinity of non-cooperative, possibly unknown, resident space objects. Safety concerns with manned missions and lag times with ground-based control necessitate complete autonomy, which in turn requires robust characterization of the target's geometry. In this article, we present an approach for mapping geometries of satellites on orbit based on 3D Gaussian Splatting that can run on computing resources available on current spaceflight hardware. We demonstrate model training and 3D rendering performance on a hardware-in-the-loop satellite mock-up under several realistic lighting and motion conditions. Our model is shown to be capable of training on-board and rendering higher quality novel views of an unknown satellite nearly two orders of magnitude faster than previous NeRF-based algorithms. Such on-board capabilities are critical to enable downstream machine intelligence tasks necessary for autonomous guidance, navigation, and control tasks.
    Comment: 11 pages, 5 figures
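The core rendering step of 3D Gaussian Splatting composites depth-sorted Gaussians front to back at each pixel. A minimal sketch of that compositing rule (illustrative only; not the paper's accelerated implementation) might look like:

```python
import numpy as np

def composite_splats(colors, alphas):
    """Front-to-back alpha compositing of depth-sorted Gaussian splats
    at one pixel: C = sum_i c_i * a_i * prod_{j<i} (1 - a_j)."""
    colors = np.asarray(colors, dtype=float)   # (N, 3) RGB per splat, near to far
    alphas = np.asarray(alphas, dtype=float)   # (N,) opacity per splat
    transmittance = 1.0
    pixel = np.zeros(3)
    for c, a in zip(colors, alphas):
        pixel += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:               # early termination, as in fast rasterizers
            break
    return pixel
```

Because the loop terminates as soon as the pixel is effectively opaque, work per pixel stays bounded, which is one reason splatting-based renderers outperform per-ray NeRF sampling on constrained hardware.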

    Real-Time Satellite Component Recognition with YOLO-V5

    With the increasing risk of collisions with space debris and the growing interest in on-orbit servicing, the ability to autonomously capture non-cooperative, tumbling target objects remains an unresolved challenge. To accomplish this task, characterizing and classifying satellite components is critical to the success of the mission. This paper focuses on using machine vision by a small satellite to perform image classification based on locating and identifying satellite components such as satellite bodies, solar panels, or antennas. The classification and component detection approach is based on "You Only Look Once" (YOLO) V5, which uses neural networks to identify the satellite components. The training dataset includes images of real and virtual satellites and additional preprocessed images to increase the effectiveness of the algorithm. The weights obtained from the algorithm are then used in a spacecraft motion dynamics and orbital lighting simulator to test classification and detection performance. Each test case entails a different approach path of the chaser satellite to the target satellite, a different attitude motion of the target satellite, and different lighting conditions to mimic those of the Sun. Initial results indicate that once trained, the YOLO V5 approach is able to effectively process an input camera feed to solve satellite classification and component detection problems in real-time within the limitations of flight computers.
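Detectors in the YOLO family post-process raw predictions with confidence filtering and non-maximum suppression (NMS) over intersection-over-union (IoU). A minimal, generic sketch of that step (not the paper's implementation; box format and threshold are assumptions):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def nms(detections, iou_thresh=0.45):
    """Greedy per-class NMS over (box, score, label) detections:
    keep the highest-scoring box, drop same-class boxes overlapping it."""
    dets = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score, label in dets:
        if all(iou(box, k[0]) <= iou_thresh for k in kept if k[2] == label):
            kept.append((box, score, label))
    return kept
```

NMS is why a per-frame detector can report one clean box per solar panel or antenna even when the network fires on many overlapping anchors.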

    Resource-constrained FPGA Design for Satellite Component Feature Extraction

    The effective use of computer vision and machine learning for on-orbit applications has been hampered by limited computing capabilities, and therefore limited performance. While embedded systems utilizing ARM processors have been shown to meet acceptable but low performance standards, the recent availability of larger space-grade field programmable gate arrays (FPGAs) shows potential to exceed the performance of microcomputer systems. This work proposes the use of a neural network-based object detection algorithm that can be deployed on a comparably resource-constrained FPGA to automatically detect components of non-cooperative satellites on orbit. Hardware-in-the-loop experiments were performed on the ORION Maneuver Kinematics Simulator at Florida Tech to compare the performance of the new model deployed on a small, resource-constrained FPGA to an equivalent algorithm on a microcomputer system. Results show the FPGA implementation increases the throughput and decreases latency while maintaining comparable accuracy. These findings suggest future missions should consider deploying computer vision algorithms on space-grade FPGAs.
    Comment: 9 pages, 7 figures, 4 tables, Accepted at IEEE Aerospace Conference 202
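The abstract does not specify how the network was adapted for the FPGA, but fixed-point quantization is a common prerequisite for deploying neural networks on resource-constrained FPGAs. A hedged sketch of symmetric per-tensor int8 quantization (scheme and constants are assumptions, not the paper's method):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= scale * q, q in [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0
    if scale == 0:
        return np.zeros_like(weights, dtype=np.int8), 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale
```

Integer weights and activations map onto FPGA DSP slices and block RAM far more efficiently than 32-bit floats, which is one route to the throughput gains the paper reports.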

    Comparison of Tracking-By-Detection Algorithms for Real-Time Satellite Component Tracking

    With space becoming increasingly crowded, there is a growing demand for extending satellite lifetimes and performing on-orbit servicing (OOS) at a scale that calls for autonomous missions. Many such missions would require chaser satellites to autonomously execute a safe and effective flight path to dock with a non-cooperative target satellite on orbit. Performing this autonomously requires the chaser to be aware of hazards to route around and of safe capture points through time, i.e., by first identifying and tracking key components of the target satellite. State-of-the-art object detection algorithms are effective at detecting such objects on a frame-by-frame basis. However, implementing them on a real-time video feed often results in poor performance at tracking objects over time, making errors which could be easily corrected by rejecting non-physical predictions or by exploiting temporal patterns. On the other hand, dedicated object tracking algorithms can be far too computationally expensive for spaceflight computers. Considering this, the paradigm of tracking-by-detection works by incorporating patterns of prior-frame detections and the corresponding physics in tandem with a base object detector. This paper focuses on comparing the performance of object tracking-by-detection algorithms with a YOLOv8 base object detector: namely, BoT-SORT and ByteTrack. These algorithms are hardware-in-the-loop tested for autonomous spacecraft component detection on a simulated tumbling target satellite. The tests emulate mission conditions, including motion and lighting, with a focus on operating under spaceflight computational and power limitations, providing an experimental comparison of performance. Results demonstrate that lightweight tracking-by-detection can improve the reliability of autonomous vision-based navigation.
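At the heart of tracking-by-detection is associating prior-frame tracks with current-frame detections; BoT-SORT and ByteTrack both build on IoU-based matching (their additional cues, such as Kalman prediction and appearance features, are omitted here). A simplified greedy association sketch, with the threshold as an assumption:

```python
def associate(tracks, detections, iou_thresh=0.3):
    """Greedily match prior-frame track boxes to current detections by IoU,
    the core step shared by BoT-SORT- and ByteTrack-style trackers.
    tracks: {track_id: box}; detections: list of boxes (x1, y1, x2, y2)."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    # Rank every track/detection pair by overlap, best first.
    pairs = sorted(((iou(t_box, d), tid, di)
                    for tid, t_box in tracks.items()
                    for di, d in enumerate(detections)), reverse=True)
    matches, used_t, used_d = {}, set(), set()
    for score, tid, di in pairs:
        if score < iou_thresh:
            break
        if tid not in used_t and di not in used_d:
            matches[tid] = di
            used_t.add(tid)
            used_d.add(di)
    return matches  # unmatched detections spawn new tracks; unmatched tracks age out
```

Persisting track identities across frames is what lets the tracker reject the non-physical jumps a frame-by-frame detector makes.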

    Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation around Non-Cooperative Targets

    Autonomous navigation and path-planning around non-cooperative space objects is an enabling technology for on-orbit servicing and space debris removal systems. The navigation task includes the determination of target object motion, the identification of target object features suitable for grasping, and the identification of collision hazards and other keep-out zones. Given this knowledge, chaser spacecraft can be guided towards capture locations without damaging the target object and without unduly disrupting the operations of a serviced satellite, for instance by covering up solar arrays or communication antennas. One way to autonomously achieve target identification, characterization, and feature recognition is through the use of artificial intelligence algorithms. This paper discusses how the combination of cameras and machine learning algorithms can achieve the relative navigation task. The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Networks (R-CNN) and You Only Look Once (YOLOv5), is tested using experimental data obtained in formation flight simulations in the ORION Lab at Florida Institute of Technology. The simulation scenarios vary the yaw motion of the target object, the chaser approach trajectory, and the lighting conditions in order to test the algorithms in a wide range of realistic and performance-limiting situations. The analysis uses mean average precision metrics to compare the performance of the object detectors. The paper discusses the path to implementing the feature recognition algorithms and towards integrating them into the spacecraft Guidance, Navigation and Control system.
    Comment: 12 pages, 10 figures, 9 tables, IEEE Aerospace Conference 202
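Mean average precision (mAP) averages per-class AP, the area under the precision-recall curve swept over score-ranked detections. A minimal AP computation for one class (illustrative only; full evaluation suites add IoU-based matching of detections to ground truth and interpolation details):

```python
def average_precision(scored, num_gt):
    """AP for one class via the rectangle rule over the PR curve.
    scored: list of (confidence, is_true_positive) for each detection;
    num_gt: number of ground-truth objects of this class."""
    scored = sorted(scored, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _confidence, is_tp in scored:
        tp += is_tp
        fp += 1 - is_tp
        precision = tp / (tp + fp)
        recall = tp / num_gt
        ap += precision * (recall - prev_recall)  # area added by this detection
        prev_recall = recall
    return ap
```

Averaging this quantity over component classes (body, solar panel, antenna, ...) yields the mAP figure used to compare Faster R-CNN and YOLOv5.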

    Autonomous Rendezvous with Non-cooperative Target Objects with Swarm Chasers and Observers

    Space debris is on the rise due to the increasing demand for spacecraft for communication, navigation, and other applications. The Space Surveillance Network (SSN) tracks over 27,000 large pieces of debris and estimates the number of small, un-trackable fragments at over 100,000. To control the growth of debris, the formation of further debris must be reduced. Some solutions include deorbiting larger non-cooperative resident space objects (RSOs) or servicing satellites in orbit. Both require rendezvous with RSOs, and the scale of the problem calls for autonomous missions. This paper introduces the Multipurpose Autonomous Rendezvous Vision-Integrated Navigation system (MARVIN) developed and tested at the ORION Facility at Florida Institute of Technology. MARVIN consists of two subsystems: a machine vision-aided navigation system and an artificial potential field (APF) guidance algorithm, which work together to command a swarm of chasers to safely rendezvous with the RSO. We present the MARVIN architecture and hardware-in-the-loop experiments demonstrating autonomous, collaborative swarm satellite operations, successfully guiding three drones to rendezvous with a physical mockup of a non-cooperative satellite in motion.
    Comment: Presented at AAS/AIAA Spaceflight Mechanics Meeting 2023, 17 pages, 9 figures, 3 tables
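Artificial potential field guidance steers each chaser along the negative gradient of an attractive potential centered on the capture point plus repulsive potentials around hazards. A minimal sketch (gains and influence radius are illustrative placeholders, not MARVIN's tuned values):

```python
import numpy as np

def apf_velocity(pos, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=2.0):
    """Commanded velocity from an artificial potential field: attractive pull
    toward the goal, repulsive push from obstacles within influence radius rho0."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    v = -k_att * (pos - goal)  # negative gradient of 0.5*k_att*||pos - goal||^2
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        rho = np.linalg.norm(diff)
        if 0 < rho < rho0:
            # Classic repulsive gradient: grows without bound as rho -> 0.
            v += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    return v
```

Because the field is cheap to evaluate, each chaser in the swarm can recompute its command every control cycle as the target tumbles and the other chasers move.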

    Taking a PEEK into YOLOv5 for Satellite Component Recognition via Entropy-based Visual Explanations

    The escalating risk of collisions and the accumulation of space debris in Low Earth Orbit (LEO) has become a critical concern due to the ever-increasing number of spacecraft. Addressing this crisis, especially in dealing with non-cooperative and unidentified space debris, is of paramount importance. This paper contributes to efforts in enabling autonomous swarms of small chaser satellites for target geometry determination and safe flight trajectory planning for proximity operations in LEO. Our research explores on-orbit use of the You Only Look Once v5 (YOLOv5) object detection model trained to detect satellite components. While this model has shown promise, its inherent lack of interpretability hinders human understanding, a critical aspect of validating algorithms for use in safety-critical missions. To analyze the decision processes, we introduce Probabilistic Explanations for Entropic Knowledge extraction (PEEK), a method that utilizes information-theoretic analysis of the latent representations within the hidden layers of the model. Through both synthetic and hardware-in-the-loop experiments, PEEK illuminates the decision-making processes of the model, helping identify its strengths, limitations, and biases.
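PEEK's exact formulation belongs to the paper, but its information-theoretic core, measuring the Shannon entropy of hidden-layer activations, can be sketched generically (the normalization choice here is an assumption for illustration):

```python
import numpy as np

def activation_entropy(feature_map, eps=1e-12):
    """Shannon entropy (bits) of a hidden-layer activation map, after
    normalizing its absolute values into a probability distribution.
    Low entropy suggests the layer focuses on few locations; high entropy
    suggests diffuse, uncommitted attention."""
    p = np.abs(np.asarray(feature_map, float)).ravel()
    p = p / (p.sum() + eps)
    return float(-np.sum(p * np.log2(p + eps)))
```

Comparing such entropy scores across layers and inputs is one way to surface which parts of the network commit to a decision, supporting the kind of interpretability audit the paper argues safety-critical missions need.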

    ORACLE: A Sample-Return Mission to Titan

    With a hazy atmosphere, a hydrocarbon cycle, seasons, and a diverse set of surface features, Titan is one of the most unique objects in the Solar System. Further exploration of Titan can elucidate its geologic activity, chemical history, and astrobiological potential. While one-way missions can provide a wealth of information about Titan through remote sensing, in-situ measurements, and communication relays back to Earth, returning samples from Titan allows for unparalleled scientific analysis. Here, we propose a novel mission concept to explore and analyze Titan in situ and return samples from its hydrocarbon lakes. Within ORACLE, separate lander and orbiter segments will perform all the scientific investigations and collect the hydrocarbon lake samples. After collection of the samples, another segment will return them to Earth while the lander and orbiter continue investigating Titan. This mission concept demonstrates novel Titan lake sampling technology and incorporates sample return and in-situ scientific investigation to significantly increase our understanding of Titan, with far broader planetary science implications.