
    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the combination of synthetic aperture radar and deep learning technology, and aims to further promote the development of SAR image intelligent interpretation. Synthetic aperture radar (SAR) is an important active microwave imaging sensor whose day-and-night, all-weather imaging capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of a wide range of applications. This reprint provides a platform for researchers to address these challenges and present innovative, cutting-edge results on applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports.
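    As a concrete illustration of the kind of model this reprint is concerned with, the sketch below defines a small convolutional network for classifying single-channel SAR image chips. The 64x64 chip size, the ten classes, and the layer widths are illustrative assumptions, not an architecture from the reprint.

```python
# Minimal sketch, assuming single-channel SAR chips of size 64x64 and a
# hypothetical set of 10 target classes; layer sizes are illustrative only.
import torch
import torch.nn as nn

class SARChipClassifier(nn.Module):
    """Small convolutional network for SAR chip classification."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # low-level speckle/edge features
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # mid-level structures
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Forward pass on a batch of dummy chips to check shapes.
model = SARChipClassifier()
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 10])
```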

    Combat Identification of Synthetic Aperture Radar Images using Contextual Features and Bayesian Belief Networks

    Given the nearly infinite combinations of modifications and configurations for weapon systems, no two targets are ever exactly the same. Synthetic Aperture Radar (SAR) imagery and the associated High Range Resolution (HRR) profiles of the same target will have different signatures when viewed from different angles. To overcome this challenge, data from a wide range of aspect and depression angles must be used to train pattern recognition algorithms; alternatively, features invariant to aspect and depression angle must be found. This research uses simple segmentation algorithms and multivariate analysis methods to extract contextual features from SAR imagery. These features, used in conjunction with HRR features, improve classification accuracy under similar or extended operating conditions. Classification accuracy improvements achieved through Bayesian Belief Networks and through the direct use of the contextual features in a template matching algorithm are demonstrated using a General Dynamics Data Collection System SAR data set.
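    The sketch below illustrates the general pattern described in the abstract: a SAR chip is segmented with a simple threshold, a few contextual features (target area, shadow area, mean target intensity) are computed and concatenated with HRR-profile features, and the combined vector is classified. A Gaussian naive Bayes model stands in for the Bayesian belief network, and all feature choices, thresholds, and data are illustrative assumptions.

```python
# Simplified stand-in for the contextual-plus-HRR feature pipeline; the
# Gaussian naive Bayes classifier replaces the paper's Bayesian belief
# network, and the synthetic data below is a placeholder for real SAR chips.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def contextual_features(chip: np.ndarray) -> np.ndarray:
    """Segment bright (target) and dark (shadow) regions and summarise them."""
    target_mask = chip > np.percentile(chip, 95)   # bright scatterers
    shadow_mask = chip < np.percentile(chip, 5)    # radar shadow
    return np.array([
        target_mask.sum(),                          # target area in pixels
        shadow_mask.sum(),                          # shadow area in pixels
        chip[target_mask].mean() if target_mask.any() else 0.0,
    ])

def combined_features(chip: np.ndarray, hrr_profile: np.ndarray) -> np.ndarray:
    return np.concatenate([contextual_features(chip), hrr_profile])

# Train/test on synthetic placeholder data (real data would come from a SAR collection).
rng = np.random.default_rng(0)
X = np.stack([combined_features(rng.random((64, 64)), rng.random(32)) for _ in range(200)])
y = rng.integers(0, 3, size=200)                    # three hypothetical target classes
clf = GaussianNB().fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```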

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. (Accepted for publication in the IEEE Geoscience and Remote Sensing Magazine.)

    Synthetic Aperture LADAR Automatic Target Recognizer Design and Performance Prediction via Geometric Properties of Targets

    Synthetic Aperture LADAR (SAL) has several phenomenological differences from Synthetic Aperture RADAR (SAR) that make it a promising candidate for automatic target recognition (ATR). The diffuse nature of SAL results in more pixels on target, and optical wavelengths offer centimeter-class resolution with an aperture baseline that is 10,000 times smaller than a SAR baseline. While diffuse scattering and optical wavelengths have several advantages, there are also a number of challenges: the diffuse nature of SAL leads to a more pronounced speckle effect than in the SAR case, and optical wavelengths are more susceptible to atmospheric noise, leading to distortions in formed imagery. While these advantages and disadvantages are studied and understood in theory, they have yet to be put into practice. This dissertation aims to quantify the impact that switching from specular SAR to diffuse SAL has on algorithm design. In addition, a methodology for performance prediction and template generation is proposed given the geometric and physical properties of CAD models. This methodology does not rely on forming images, and it alleviates the computational burden of generating multiple speckle fields and redundant ray tracing. This dissertation intends to show that the performance of template matching ATRs on SAL imagery can be accurately and rapidly estimated by analyzing the physical and geometric properties of CAD models.
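    For context, the sketch below shows the correlation-based scoring step that a template matching ATR of the kind discussed here typically relies on: each chip is compared against a bank of class templates and assigned to the best-matching class. The dissertation's actual contribution, generating templates and predicting performance directly from CAD geometry without forming images, is not reproduced; the random templates and class names below are placeholders.

```python
# Minimal correlation-based template matching sketch; templates here are
# random placeholders, whereas the dissertation derives them from CAD models.
import numpy as np

def normalised_correlation(chip: np.ndarray, template: np.ndarray) -> float:
    """Zero-mean, unit-variance correlation between a chip and a template."""
    a = (chip - chip.mean()) / (chip.std() + 1e-12)
    b = (template - template.mean()) / (template.std() + 1e-12)
    return float((a * b).mean())

def classify(chip: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Assign the chip to the class whose template correlates best with it."""
    scores = {name: normalised_correlation(chip, t) for name, t in templates.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(1)
templates = {"tank": rng.random((64, 64)), "truck": rng.random((64, 64))}
chip = templates["tank"] + 0.1 * rng.random((64, 64))   # noisy view of the "tank" template
print(classify(chip, templates))                         # expected: "tank"
```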

    High-resolution SAR images for fire susceptibility estimation in urban forestry

    We present an adaptive system for the automatic assessment of both physical and anthropic fire impact factors in peri-urban forestry. The aim is to provide an integrated methodology that exploits a complex data structure built upon a multi-resolution grid gathering historical land-exploitation and meteorological data, records of human habits, suitably segmented and interpreted high-resolution X-SAR images, and several other information sources. The contribution and novelty of the model lie mainly in the definition of a learning schema that lifts different factors and aspects of fire causes, including physical, social, and behavioural ones, into the design of a fire susceptibility map for a specific urban forestry. The outcome is an integrated geospatial database providing an infrastructure that merges cartography, heterogeneous data, and complex analysis, thus establishing a digital environment in which users and tools are interactively connected in an efficient and flexible way.
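    A minimal sketch of the underlying overlay idea follows: co-registered raster layers (e.g., vegetation density derived from segmented X-SAR imagery, historical ignition density, and a meteorological dryness index) are normalised and combined cell by cell into a susceptibility score. The layer names, weights, and grid size are illustrative assumptions and do not reproduce the paper's learning schema.

```python
# Weighted raster overlay as a stand-in for the fire susceptibility mapping;
# layers, weights, and the 100x100 grid are illustrative placeholders.
import numpy as np

def normalise(layer: np.ndarray) -> np.ndarray:
    """Rescale a layer to the [0, 1] range."""
    lo, hi = layer.min(), layer.max()
    return (layer - lo) / (hi - lo + 1e-12)

def susceptibility(layers: dict[str, np.ndarray], weights: dict[str, float]) -> np.ndarray:
    """Weighted, normalised sum of co-registered raster layers."""
    score = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, layer in layers.items():
        score += weights[name] * normalise(layer)
    return score / sum(weights.values())            # keep the map in [0, 1]

rng = np.random.default_rng(2)
grid = {
    "vegetation_density": rng.random((100, 100)),
    "ignition_history": rng.random((100, 100)),
    "dryness_index": rng.random((100, 100)),
}
weights = {"vegetation_density": 0.4, "ignition_history": 0.3, "dryness_index": 0.3}
fire_map = susceptibility(grid, weights)
print(fire_map.shape, float(fire_map.min()), float(fire_map.max()))
```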

    Automatic target recognition in sonar imagery using a cascade of boosted classifiers

    This thesis is concerned with the problem of automating the interpretation of data representing the underwater environment retrieved from sensors. This is an important task which potentially allows underwater robots to become completely autonomous, keeping humans out of harm's way and reducing the operational time and cost of many underwater applications. Typical applications include unexploded ordnance clearance, ship and plane wreck hunting (e.g., Malaysia Airlines flight MH370), and oilfield inspection (e.g., the Deepwater Horizon disaster). Two attributes of the processing are crucial if automated interpretation is to be successful. First, computational efficiency is required to allow real-time analysis to be performed on board robots with limited resources. Second, detection accuracy comparable to that of human experts is required in order to replace them. Approaches in the open literature do not appear capable of meeting these requirements, and doing so has therefore become the objective of this thesis. This thesis proposes a novel approach capable of recognizing targets in sonar data extremely rapidly with a low number of false alarms. The approach was originally developed for face detection in video and is applied to sonar data here for the first time. Aside from the application, the main contribution of this thesis is therefore the way this approach is extended to reduce its training time and improve its detection accuracy. Results obtained on large sets of real sonar data over a variety of challenging terrains are presented to show the discriminative power of the proposed approach. In real field trials, the proposed approach was capable of processing sonar data in real time on board underwater robots. In direct comparison with human experts, the proposed approach offers a 40% reduction in the number of false alarms.
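    The cascade structure the thesis builds on (in the spirit of the boosted cascade originally used for face detection) can be sketched as follows: each boosted stage is thresholded to keep nearly all targets while rejecting a fraction of clutter, so most clutter windows are discarded cheaply by the early stages. The feature extraction from sonar snippets, the synthetic data, and the stage thresholds below are placeholder assumptions.

```python
# Two-stage boosted cascade sketch; in a real cascade each later stage would
# be retrained only on the windows that survive the previous stages, and
# stages would be added until a false-alarm goal is met.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(3)
X = rng.random((1000, 20))                                   # stand-in features per window
y = (X[:, 0] + 0.2 * rng.random(1000) > 0.8).astype(int)     # synthetic "target" label

stages, thresholds = [], [0.3, 0.5]                          # low early threshold keeps recall high
for thr in thresholds:
    stage = AdaBoostClassifier(n_estimators=50).fit(X, y)
    stages.append((stage, thr))

def cascade_detect(x: np.ndarray) -> bool:
    """Return True only if every stage scores the window above its threshold."""
    for stage, thr in stages:
        if stage.predict_proba(x.reshape(1, -1))[0, 1] < thr:
            return False                                     # rejected early: cheap clutter removal
    return True

print(sum(cascade_detect(x) for x in X[:100]), "detections in first 100 windows")
```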

    Feedback-assisted automatic target and clutter discrimination using a Bayesian convolutional neural network for improved explainability in SAR applications

    DATA AVAILABILITY STATEMENT: The NATO-SET 250 dataset is not publicly available; however, the MSTAR dataset can be found at the following URL: https://www.sdms.afrl.af.mil/index.php?collection=mstar (accessed on 5 January 2022).
    In this paper, a feedback training approach is proposed for efficiently dealing with distribution shift in synthetic aperture radar target detection using a Bayesian convolutional neural network. After training the network on in-distribution data, it is tested on out-of-distribution data. Samples that are classified incorrectly with high certainty are fed back for a second round of training, which reduces false positives on the out-of-distribution dataset. False positive target detections strain human attention, sensor resource management, and mission engagement; in these applications, a reduction in false positives therefore often takes precedence over target detection and classification performance. The classifier discriminates targets from clutter and classifies the target type in a single step, as opposed to the traditional approach of a sequential chain of functions for target detection and localisation preceding the machine learning algorithm. Another aspect of automated synthetic aperture radar detection and recognition addressed here is that human users of traditional classification systems are presented with decisions made by "black-box" algorithms; consequently, the decisions are not explainable, even to an expert in the sensor domain. This paper makes use of explainable artificial intelligence via uncertainty heat maps that are overlaid onto synthetic aperture radar imagery to give the user additional information about classification decisions. These uncertainty heat maps, derived from the uncertainty estimates of the classifications from the Bayesian convolutional neural network, facilitate trust in the machine learning algorithm and enhance the user's ability to interpret why certain decisions were made. Further, it is demonstrated that feeding back the high-certainty, incorrectly classified out-of-distribution data results in an average improvement in detection performance and a reduction in uncertainty for all synthetic aperture radar images processed. Compared to the baseline method, an improvement in recall of 11.8% and a reduction in the false positive rate of 7.08% were demonstrated using the Feedback-assisted Bayesian Convolutional Neural Network (FaBCNN).
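    The feedback loop described above can be sketched as follows, using Monte Carlo dropout as one common way to approximate a Bayesian convolutional neural network. The network architecture, the entropy threshold for "high certainty", and the data are all illustrative assumptions; the paper's actual FaBCNN is not reproduced here.

```python
# Sketch of the feedback idea: identify out-of-distribution samples that are
# confidently wrong (low predictive entropy, incorrect label) and feed them
# back into the training pool for a second round. Everything is illustrative.
import torch
import torch.nn as nn

class BayesianishCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout2d(0.3),                       # kept active at test time for MC dropout
            nn.Flatten(), nn.Linear(8 * 32 * 32, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_predict(model, x, passes: int = 20):
    """Mean softmax and per-sample predictive entropy from stochastic forward passes."""
    model.train()                                    # keep dropout on for sampling
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(passes)]).mean(0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return probs, entropy

model = BayesianishCNN()
x_ood = torch.randn(16, 1, 64, 64)                   # placeholder out-of-distribution chips
y_ood = torch.randint(0, 2, (16,))
probs, entropy = mc_predict(model, x_ood)
wrong = probs.argmax(dim=1) != y_ood
confidently_wrong = wrong & (entropy < 0.3)          # 0.3 is an arbitrary placeholder threshold
feedback_x, feedback_y = x_ood[confidently_wrong], y_ood[confidently_wrong]
print(f"{int(confidently_wrong.sum())} samples fed back for retraining")
```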