
    Satellite image analysis using neural networks

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks), that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN guides users through a four-step process of image classification: filtering and enhancement, creation of neural network training data via feature extraction algorithms, configuration and training of a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.
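    The four-step workflow described above lends itself to a small illustration. The following Python sketch (using NumPy and scikit-learn as stand-ins; the enhancement step, the hand-picked features, and the random stand-in imagery are assumptions for illustration, not the SIANN implementation) walks through enhancement, feature extraction, network training, and archive classification.

```python
# Hypothetical sketch of a four-step SIANN-style workflow (illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier

def enhance(img):
    """Step 1: filtering/enhancement -- here just a simple contrast stretch to [0, 1]."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-9)

def extract_features(img):
    """Step 2: feature extraction -- basic statistics plus edge density."""
    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy)
    return np.array([img.mean(), img.std(), edges.mean(), edges.std()])

# Step 3: build training data from labeled example scenes and train a network.
rng = np.random.default_rng(0)
train_imgs = [rng.random((64, 64)) for _ in range(40)]   # stand-in imagery
labels = rng.integers(0, 2, size=40)                     # 1 = scene of interest
X = np.array([extract_features(enhance(im)) for im in train_imgs])
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, labels)

# Step 4: classify archive images with the trained network.
archive = [rng.random((64, 64)) for _ in range(5)]
preds = net.predict(np.array([extract_features(enhance(im)) for im in archive]))
print(preds)
```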

    Solar Power Plant Detection on Multi-Spectral Satellite Imagery using Weakly-Supervised CNN with Feedback Features and m-PCNN Fusion

    Most traditional convolutional neural networks (CNNs) implement a bottom-up (feed-forward) approach to image classification. However, many scientific studies demonstrate that visual perception in primates relies on both bottom-up and top-down connections. Therefore, in this work, we propose a CNN with a feedback structure for solar power plant detection on middle-resolution satellite images. To express the strength of the top-down connections, we introduce a feedback CNN (FB-Net) on top of a baseline CNN model used for solar power plant classification on multi-spectral satellite data. Moreover, we introduce a method that improves class activation mapping (CAM) for our FB-Net by taking advantage of a multi-channel pulse coupled neural network (m-PCNN) for weakly-supervised localization of solar power plants from the features of the proposed FB-Net. Experiments with the proposed FB-Net CAM with m-PCNN demonstrated promising results on both the solar power plant image classification and detection tasks. Comment: 9 pages, 9 figures, 4 tables
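    As a rough illustration of the feedback idea described above, the following PyTorch sketch adds a top-down path that gates low-level features before a second bottom-up pass; the layer sizes, sigmoid gating, and two-pass schedule are assumptions, not the authors' FB-Net or their m-PCNN-based CAM.

```python
# Illustrative top-down feedback added to a small CNN classifier (not FB-Net itself).
import torch
import torch.nn as nn

class FeedbackCNN(nn.Module):
    def __init__(self, in_channels=4, num_classes=2, steps=2):
        super().__init__()
        self.steps = steps                      # number of bottom-up/top-down passes
        self.conv1 = nn.Sequential(nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        # top-down path: project high-level features back to the conv1 feature space
        self.feedback = nn.Conv2d(32, 16, 1)
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        f1 = self.conv1(x)                      # first bottom-up pass
        f2 = self.conv2(f1)
        for _ in range(self.steps - 1):         # top-down modulation, then re-run
            gate = torch.sigmoid(self.feedback(f2))
            f1 = f1 * gate                      # feedback gates low-level features
            f2 = self.conv2(f1)
        pooled = f2.mean(dim=(2, 3))            # global average pooling
        return self.head(pooled)

# e.g. a batch of 4-band multi-spectral patches
logits = FeedbackCNN()(torch.randn(8, 4, 64, 64))
print(logits.shape)                             # torch.Size([8, 2])
```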

    Real-Time Satellite Component Recognition with YOLO-V5

    With the increasing risk of collisions with space debris and the growing interest in on-orbit servicing, the ability to autonomously capture non-cooperative, tumbling target objects remains an unresolved challenge. To accomplish this task, characterizing and classifying satellite components is critical to the success of the mission. This paper focuses on using machine vision aboard a small satellite to perform image classification by locating and identifying satellite components such as satellite bodies, solar panels, or antennas. The classification and component detection approach is based on “You Only Look Once” (YOLO) V5, which uses neural networks to identify the satellite components. The training dataset includes images of real and virtual satellites as well as additional preprocessed images to increase the effectiveness of the algorithm. The weights obtained from training are then used in a spacecraft motion dynamics and orbital lighting simulator to test classification and detection performance. Each test case entails a different approach path of the chaser satellite to the target satellite, a different attitude motion of the target satellite, and different lighting conditions that mimic illumination by the Sun. Initial results indicate that, once trained, the YOLO V5 approach is able to effectively process an input camera feed and solve the satellite classification and component detection problems in real time within the limitations of flight computers.
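    A hedged sketch of how a YOLOv5-based component detector of this kind is typically trained and queried with the open-source Ultralytics tooling; the dataset layout, class names, and weight/image file names below are assumptions for illustration, not the paper's actual setup.

```python
# Sketch of custom YOLOv5 training/inference (file names and classes are assumptions).
#
# satellite.yaml (dataset config consumed by YOLOv5's train.py):
#   train: data/satellite/images/train
#   val:   data/satellite/images/val
#   names: [body, solar_panel, antenna]
#
# Training (shell): python train.py --img 640 --data satellite.yaml --weights yolov5s.pt
import torch

# Load the trained custom weights through the public torch.hub entry point.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

# Run detection on one frame from the simulated chaser-satellite camera feed.
results = model('frame_0001.png')        # path, URL, NumPy array, or PIL image
results.print()                          # class, confidence, and box summary
boxes = results.xyxy[0]                  # tensor rows: [x1, y1, x2, y2, conf, cls]
```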

    Temporal updating scheme for probabilistic neural network with application to satellite cloud classification

    In cloud classification from satellite imagery, temporal change in the images is one of the main factors that degrades classifier performance. In this paper, a novel temporal updating approach is developed for probabilistic neural network (PNN) classifiers that can be used to track temporal changes in a sequence of images. This is done by utilizing temporal contextual information and adjusting the PNN to adapt to such changes. Whenever a new set of images arrives, an initial classification is first performed using the PNN updated up to the last frame, while at the same time a prediction is made using Markov chain models based on the classification results of the previous frame. The results of the old PNN and the predictor are then compared. Depending on the outcome, either a supervised or an unsupervised updating scheme is used to update the PNN classifier. The maximum likelihood (ML) criterion is adopted in both the training and updating schemes. The proposed scheme is examined on both a simulated data set and Geostationary Operational Environmental Satellite (GOES) 8 cloud imagery data. The results indicate improvements in classification accuracy when the proposed scheme is used. This work was supported by the Department of Defense under Contract DAAH04-94-G-0420.
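    The update loop described above can be summarized in a simplified sketch. The NumPy code below classifies a new frame with a Gaussian-kernel PNN, predicts labels from the previous frame with a first-order Markov chain, and uses the agreeing pixels to grow the PNN's pattern set; the two-class setup, kernel width, and agreement rule are illustrative assumptions, not the paper's exact ML-based updating scheme.

```python
# Simplified PNN + Markov-chain temporal updating loop (illustrative assumptions).
import numpy as np

def pnn_classify(X, patterns, sigma=0.5):
    """Return the class whose stored patterns give the largest Gaussian-kernel response."""
    scores = []
    for c in sorted(patterns):
        d2 = ((X[:, None, :] - patterns[c][None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1))
    return np.argmax(np.stack(scores, axis=1), axis=1)

rng = np.random.default_rng(0)
patterns = {0: rng.normal(0.0, 0.3, (30, 2)),       # stored training patterns per class
            1: rng.normal(2.0, 0.3, (30, 2))}
transition = np.array([[0.9, 0.1],                  # Markov chain: P(label_t | label_{t-1})
                       [0.2, 0.8]])

prev_labels = rng.integers(0, 2, 200)               # classification of the previous frame
X_new = rng.normal(0.0, 0.5, (200, 2)) + 2.0 * prev_labels[:, None]

initial = pnn_classify(X_new, patterns)             # classify the new frame with the current PNN
predicted = np.array([rng.choice(2, p=transition[l]) for l in prev_labels])

agree = initial == predicted                        # agreement triggers the "supervised" update
for c in (0, 1):
    new_pats = X_new[agree & (initial == c)]
    if len(new_pats):
        patterns[c] = np.vstack([patterns[c], new_pats])
# Disagreeing pixels would instead trigger the unsupervised updating branch.
```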

    Transfer Learning for High Resolution Aerial Image Classification

    With rapid developments in satellite and sensor technologies, an increasing number of high-spatial-resolution aerial images has become available. Classification of these images is important for many remote sensing image understanding tasks, such as image retrieval and object detection. Meanwhile, image classification in the computer vision field has been revolutionized by the recent popularity of convolutional neural networks (CNNs), with which the state-of-the-art classification results are achieved. Therefore, the idea of applying CNNs to high resolution aerial image classification is straightforward. However, it is not trivial, mainly because the amount of labeled remote sensing imagery available for training a deep neural network is limited. As a result, transfer learning techniques are adopted for this problem, where the CNN used for classification is pre-trained on a larger dataset beforehand. In this paper, we propose a specific fine-tuning strategy that results in better CNN models for aerial image classification. Extensive experiments were carried out using the proposed approach with different CNN architectures. Our proposed method shows competitive results compared to existing approaches, indicating the superiority of the proposed fine-tuning algorithm.
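    A generic PyTorch/torchvision sketch of the transfer-learning setup discussed above: an ImageNet-pretrained backbone is adapted to aerial-scene classes with a new head, partial freezing, and per-group learning rates. The class count, frozen layers, and learning rates are illustrative assumptions, not the paper's specific fine-tuning strategy.

```python
# Generic fine-tuning setup for aerial image classification (assumed hyperparameters).
import torch
import torch.nn as nn
from torchvision import models

num_classes = 21                                   # e.g. an aerial scene dataset (assumption)
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Replace the ImageNet classifier head with one sized for the aerial classes.
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Freeze the earliest blocks; fine-tune the deeper layers and the new head.
for name, param in model.named_parameters():
    if name.startswith(('conv1', 'bn1', 'layer1')):
        param.requires_grad = False

# Smaller learning rate for pretrained weights, larger for the new head.
optimizer = torch.optim.SGD([
    {'params': [p for n, p in model.named_parameters()
                if p.requires_grad and not n.startswith('fc')], 'lr': 1e-3},
    {'params': model.fc.parameters(), 'lr': 1e-2},
], momentum=0.9)

criterion = nn.CrossEntropyLoss()
# A standard training loop over the labeled aerial images would follow here.
```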