
    Entropy-based Guidance of Deep Neural Networks for Accelerated Convergence and Improved Performance

    Neural networks have dramatically increased our capacity to learn from large, high-dimensional datasets across innumerable disciplines. However, their decisions are not easily interpretable, their computational costs are high, and building and training them are uncertain processes. To add structure to these efforts, we derive new mathematical results to efficiently measure the changes in entropy as fully-connected and convolutional neural networks process data, and introduce entropy-based loss terms. Experiments in image compression and image classification on benchmark datasets demonstrate that these losses guide neural networks to learn rich latent data representations in fewer dimensions, converge in fewer training epochs, and achieve better test metrics. Comment: 13 pages, 4 figures
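    The abstract does not specify the exact form of the entropy-based loss terms. As a rough illustration only, the sketch below shows one common way an entropy penalty can be attached to a task loss; the function names, the normalization of activations into a distribution, and the weighting scheme are all assumptions, not the paper's method.

    ```python
    import math

    def shannon_entropy(probs):
        """Shannon entropy (in nats) of a discrete distribution."""
        return -sum(p * math.log(p) for p in probs if p > 0)

    def entropy_regularized_loss(task_loss, activations, weight=0.1):
        """Hypothetical combined loss: the task loss plus a penalty that
        encourages low-entropy (i.e., compact) latent activations."""
        total = sum(abs(a) for a in activations)
        if total == 0:
            return task_loss
        # Treat normalized activation magnitudes as a distribution over units.
        probs = [abs(a) / total for a in activations]
        return task_loss + weight * shannon_entropy(probs)
    ```

    Driving this penalty down pushes activation mass onto fewer units, which is one plausible mechanism for the "fewer dimensions" behavior the abstract reports.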

    Characterizing Satellite Geometry via Accelerated 3D Gaussian Splatting

    The accelerating deployment of spacecraft in orbit has generated interest in on-orbit servicing (OOS), inspection of spacecraft, and active debris removal (ADR). Such missions require precise rendezvous and proximity operations in the vicinity of non-cooperative, possibly unknown, resident space objects. Safety concerns with manned missions and lag times with ground-based control necessitate complete autonomy, which in turn requires robust characterization of the target's geometry. In this article, we present an approach for mapping geometries of satellites on orbit based on 3D Gaussian Splatting that can run on computing resources available on current spaceflight hardware. We demonstrate model training and 3D rendering performance on a hardware-in-the-loop satellite mock-up under several realistic lighting and motion conditions. Our model is shown to be capable of training on-board and rendering higher-quality novel views of an unknown satellite nearly two orders of magnitude faster than previous NeRF-based algorithms. Such on-board capabilities are critical to enable downstream machine intelligence tasks necessary for autonomous guidance, navigation, and control. Comment: 11 pages, 5 figures
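    Both Gaussian-splatting and NeRF-style renderers ultimately blend per-primitive contributions along each pixel's ray with front-to-back alpha compositing. The minimal sketch below illustrates only that shared compositing step (for a single scalar color channel); it is not the paper's renderer, and the function name and interface are assumptions.

    ```python
    def composite(colors_alphas):
        """Front-to-back alpha compositing: each (color, alpha) pair,
        ordered near-to-far, contributes color * alpha * transmittance,
        where transmittance is the light not yet absorbed."""
        color, transmittance = 0.0, 1.0
        for c, a in colors_alphas:
            color += c * a * transmittance
            transmittance *= (1.0 - a)
        return color, transmittance
    ```

    Because splatting rasterizes a fixed set of Gaussians instead of querying a neural network per sample along the ray, this blend is where much of the reported speedup over NeRF-based methods comes from.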

    Real-Time Satellite Component Recognition with YOLO-V5

    With the increasing risk of collisions with space debris and the growing interest in on-orbit servicing, the ability to autonomously capture non-cooperative, tumbling target objects remains an unresolved challenge. To accomplish this task, characterizing and classifying satellite components is critical to the success of the mission. This paper focuses on using machine vision by a small satellite to perform image classification based on locating and identifying satellite components such as satellite bodies, solar panels, or antennas. The classification and component detection approach is based on "You Only Look Once" (YOLO) V5, which uses neural networks to identify the satellite components. The training dataset includes images of real and virtual satellites and additional preprocessed images to increase the effectiveness of the algorithm. The weights obtained from the algorithm are then used in a spacecraft motion dynamics and orbital lighting simulator to test classification and detection performance. Each test case entails a different approach path of the chaser satellite to the target satellite, a different attitude motion of the target satellite, and different lighting conditions to mimic illumination by the Sun. Initial results indicate that once trained, the YOLO V5 approach is able to effectively process an input camera feed to solve satellite classification and component detection problems in real time within the limitations of flight computers.
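    Detection performance for component localization of the kind described above is conventionally scored with intersection-over-union (IoU) between predicted and ground-truth boxes. The sketch below is a standard IoU implementation, included only to make that evaluation step concrete; the abstract does not state which metric or box convention the authors used.

    ```python
    def iou(box_a, box_b):
        """Intersection-over-union of two axis-aligned boxes,
        each given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        # Overlap width/height clamp to zero when boxes are disjoint.
        iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union > 0 else 0.0
    ```

    A detection is then typically counted as correct when its IoU with a ground-truth component box exceeds a threshold such as 0.5.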

    Investigation into the Ratio of Operating and Support Costs to Life-Cycle Costs for DoD Weapon Systems

    Recent legislation, such as the Weapon Systems Acquisition Reform Act of 2009, requires a renewed emphasis on understanding Operating and Support (O&S) costs. Conventional wisdom within the acquisition community suggests a 70:30 cost ratio with respect to O&S and acquisition of an average weapon system. Using 37 Air Force and Navy programs, the authors estimate the mean overall ratio of O&S costs to acquisition costs to be closer to 55:45, although many weapon systems displayed significant deviation from this 55 percent average. Contributing factors such as life expectancy and acquisition strategy (i.e., new system or modification) affect this variance. Their research advises against using a single "one-size-fits-all" O&S/acquisition cost ratio for all major DoD weapon systems.
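    The ratios quoted above (70:30 conventional wisdom versus the 55:45 estimate) are simply each cost component's share of total life-cycle cost. A one-line sketch, with hypothetical figures, makes the arithmetic explicit:

    ```python
    def os_share(os_cost, acquisition_cost):
        """O&S fraction of life-cycle cost, where life-cycle cost is
        taken here as O&S plus acquisition (illustrative simplification)."""
        return os_cost / (os_cost + acquisition_cost)
    ```

    So a program with, say, $55B in O&S and $45B in acquisition costs sits at the study's 55 percent average, while the conventional 70:30 assumption would have predicted a far larger O&S share.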

    Resource-constrained FPGA Design for Satellite Component Feature Extraction

    The effective use of computer vision and machine learning for on-orbit applications has been hampered by limited computing capabilities, and therefore limited performance. While embedded systems utilizing ARM processors have been shown to meet acceptable but low performance standards, the recent availability of larger space-grade field-programmable gate arrays (FPGAs) shows potential to exceed the performance of microcomputer systems. This work proposes the use of a neural network-based object detection algorithm that can be deployed on a comparably resource-constrained FPGA to automatically detect components of non-cooperative satellites on orbit. Hardware-in-the-loop experiments were performed on the ORION Maneuver Kinematics Simulator at Florida Tech to compare the performance of the new model deployed on a small, resource-constrained FPGA against an equivalent algorithm on a microcomputer system. Results show the FPGA implementation increases throughput and decreases latency while maintaining comparable accuracy. These findings suggest future missions should consider deploying computer vision algorithms on space-grade FPGAs. Comment: 9 pages, 7 figures, 4 tables, Accepted at IEEE Aerospace Conference 202
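    Fitting a neural network onto a resource-constrained FPGA typically involves quantizing its floating-point weights to narrow integers. The abstract does not say how the authors prepared their model, so the sketch below shows only a generic symmetric int8 quantization scheme as an illustration of the idea; the function names and the per-tensor scale are assumptions.

    ```python
    def quantize_int8(weights):
        """Symmetric per-tensor int8 quantization: returns (q, scale)
        such that each original weight w is approximately q * scale."""
        max_abs = max(abs(w) for w in weights)
        scale = max_abs / 127.0 if max_abs > 0 else 1.0
        # Clamp to the signed 8-bit range after rounding.
        q = [max(-127, min(127, round(w / scale))) for w in weights]
        return q, scale

    def dequantize(q, scale):
        """Recover approximate float weights from int8 values."""
        return [v * scale for v in q]
    ```

    Integer arithmetic of this kind maps efficiently onto FPGA DSP slices, which is one reason such deployments can beat a microcomputer on both throughput and latency.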

    Comparison of Tracking-By-Detection Algorithms for Real-Time Satellite Component Tracking

    With space becoming more and more crowded, there is a growing demand for increasing satellite lifetimes and performing on-orbit servicing (OOS) at a scale that calls for autonomous missions. Many such missions would require chaser satellites to autonomously execute a safe and effective flightpath to dock with a non-cooperative target satellite on orbit. Performing this autonomously requires the chaser to be aware of hazards to route around and of safe capture points through time, i.e., by first identifying and tracking key components of the target satellite. State-of-the-art object detection algorithms are effective at detecting such objects on a frame-by-frame basis. However, implementing them on a real-time video feed often results in poor performance at tracking objects over time, making errors that could easily be corrected by rejecting non-physical predictions or by exploiting temporal patterns. On the other hand, dedicated object tracking algorithms can be far too computationally expensive for spaceflight computers. Considering this, the paradigm of tracking-by-detection works by incorporating patterns of prior-frame detections and the corresponding physics in tandem with a base object detector. This paper focuses on comparing the performance of object tracking-by-detection algorithms with a YOLOv8 base object detector: namely, BoTSORT and ByteTrack. These algorithms are hardware-in-the-loop tested for autonomous spacecraft component detection on a simulated tumbling target satellite. This emulates mission conditions, including motion and lighting, with a focus on operating under spaceflight computational and power limitations, providing an experimental comparison of performance. Results demonstrate that lightweight tracking-by-detection can improve the reliability of autonomous vision-based navigation.
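    At the core of tracking-by-detection methods such as BoTSORT and ByteTrack is an association step that matches each frame's detections to existing tracks by box overlap. The sketch below is a deliberately simplified greedy IoU matcher, not either algorithm's actual association logic (both additionally use Kalman-predicted boxes, confidence tiers, and, for BoTSORT, appearance features); all names here are illustrative.

    ```python
    def iou(box_a, box_b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union > 0 else 0.0

    def associate(tracks, detections, iou_threshold=0.3):
        """Greedy one-to-one matching of track boxes to new detections,
        best-overlap first; returns a list of (track_idx, det_idx) pairs."""
        pairs = sorted(
            ((iou(t, d), ti, di)
             for ti, t in enumerate(tracks)
             for di, d in enumerate(detections)),
            reverse=True)
        matched_t, matched_d, matches = set(), set(), []
        for score, ti, di in pairs:
            if score < iou_threshold:
                break  # remaining pairs overlap too little to match
            if ti in matched_t or di in matched_d:
                continue  # each track and detection matches at most once
            matches.append((ti, di))
            matched_t.add(ti)
            matched_d.add(di)
        return matches
    ```

    Unmatched detections would then seed new tracks and unmatched tracks would age out, which is how temporal consistency suppresses the single-frame errors the abstract describes.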