
    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the combination of synthetic aperture radar and deep learning technology, aiming to further promote the development of intelligent SAR image interpretation. Synthetic aperture radar (SAR) is an important active microwave imaging sensor whose day-and-night, all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present innovative, cutting-edge research results on applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    RoadSeg-CD: A Network With Connectivity Array and Direction Map for Road Extraction From SAR Images

    Road extraction from synthetic aperture radar (SAR) images has attracted much attention in the field of remote sensing image processing. General road extraction algorithms, affected by the shadows of buildings and trees, are prone to producing fragmented road segments. To improve the accuracy and completeness of road extraction, we propose a neural-network-based algorithm, named RoadSeg-CD, which takes the connectivity and direction features of roads into consideration. It consists of two branches: the main branch for road segmentation and an auxiliary branch for learning road directions. In the main branch, a connectivity array is designed to exploit local contextual information and construct a connectivity loss based on the predicted probabilities of neighboring pixels. In the auxiliary branch, we propose a novel road direction map, which is used for learning the directions of roads. The two branches are connected by a specific feature fusion process, and the output of the main branch is taken as the road extraction result. Experiments on real radar images validate the effectiveness of our method. The results demonstrate that it obtains more continuous and more complete roads than several state-of-the-art road extraction algorithms.
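    The exact form of the connectivity loss is not given in the abstract; a minimal sketch of the idea, assuming a simple penalty on disagreement between neighboring pixels' predicted road probabilities, might look like:

```python
import numpy as np

def connectivity_loss(prob):
    """Hypothetical sketch of a connectivity penalty (not the authors'
    exact formulation): penalise squared disagreement between each
    pixel's predicted road probability and its right and down
    neighbours, so fragmented predictions cost more than smooth ones.
    prob: (H, W) array of probabilities in [0, 1]."""
    right = np.mean((prob[:, 1:] - prob[:, :-1]) ** 2)  # horizontal neighbours
    down = np.mean((prob[1:, :] - prob[:-1, :]) ** 2)   # vertical neighbours
    return right + down
```

    A spatially uniform prediction incurs zero penalty while a fragmented one is penalised; in a network like RoadSeg-CD such a term would be combined with the usual segmentation loss.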

    Information Extraction and Modeling from Remote Sensing Images: Application to the Enhancement of Digital Elevation Models

    To deal with highly complex data such as remote sensing images with metric resolution over large areas, an innovative, fast and robust image processing system is presented. Modeling of increasing levels of information is used to extract, represent and link image features to semantic content. The potential of the proposed techniques is demonstrated with an application that enhances and regularizes digital elevation models based on information collected from remote sensing images.

    Segmentation Methods for Synthetic Aperture Radar


    Operator State Estimation for Adaptive Aiding in Uninhabited Combat Air Vehicles

    This research demonstrated the first closed-loop implementation of adaptive automation using operator functional state in an operationally relevant environment. In the Uninhabited Combat Air Vehicle (UCAV) environment, operators can become cognitively overloaded and their performance may decrease during mission-critical events. This research demonstrates an unprecedented closed-loop system, one that adaptively aids UCAV operators based on their cognitive functional state. A series of experiments was conducted to 1) determine the best classifiers for estimating operator functional state, 2) determine whether physiological measures can be used to develop multiple cognitive models based on information processing demands and task type, 3) determine the salient psychophysiological measures of operator functional state, and 4) demonstrate the benefits of intelligent adaptive aiding using operator functional state. Aiding the operator improved performance and increased mission effectiveness by 67%.

    Mapping three-dimensional geological features from remotely-sensed images and digital elevation models.

    Accurate mapping of geological structures is important in numerous applications, ranging from mineral exploration through to hydrogeological modelling. Remotely sensed data can provide synoptic views of study areas, enabling mapping of geological units within the area. Structural information may be derived from such data using standard manual photo-geologic interpretation techniques, although these are often inaccurate and incomplete. The aim of this thesis is, therefore, to compile a suite of automated and interactive computer-based analysis routines designed to help the user map geological structure. These are examined and integrated in the context of an expert system. The data used in this study include a Digital Elevation Model (DEM) and Airborne Thematic Mapper images, both with a spatial resolution of 5 m, for a 5 x 5 km area surrounding Llyn Cowlyd, Snowdonia, North Wales. The geology of this area comprises folded and faulted Ordovician sediments intruded throughout by dolerite sills, providing a stringent test for the automated and semi-automated procedures. The DEM is used to highlight geomorphological features which may represent surface expressions of the sub-surface geology. The DEM is created from digitized contours, for which kriging is found to provide the best interpolation routine, based on a number of quantitative measures. Lambertian shading and the creation of slope and change-of-slope datasets are shown to provide the most successful enhancement of DEMs in terms of highlighting a range of key geomorphological features. The digital image data are used to identify rock outcrops as well as lithologically controlled features in the land cover. To this end, a series of standard spectral enhancements of the images is examined. In this respect, the least-correlated three-band composite and a principal component composite are shown to give the best visual discrimination of geological and vegetation cover types.
Automatic edge detection (followed by line thinning and extraction) and manual interpretation techniques are used to identify a set of 'geological primitives' (linear or arc features representing lithological boundaries) within these data. Inclusion of the DEM data provides the three-dimensional co-ordinates of these primitives, enabling a least-squares fit to be employed to calculate dip and strike values, based, initially, on the assumption of a simple, linearly dipping structural model. A very large number of scene 'primitives' is identified using these procedures, only some of which have geological significance. Knowledge-based rules are therefore used to identify those that are relevant. For example, rules are developed to identify lake edges, forest boundaries, forest tracks, rock-vegetation boundaries, and areas of geomorphological interest. Confidence in the geological significance of some of the geological primitives is increased where they are found independently in both the DEM and remotely sensed data. The dip and strike values derived in this way are compared to information taken from the published geological map for this area, as well as measurements taken in the field. Many results are shown to correspond closely to those taken from the map and in the field, with an error of < 1°. These data and rules are incorporated into an expert system which, initially, produces a simple model of the geological structure. The system also provides a graphical user interface for manual control and interpretation, where necessary. Although the system currently only allows a relatively simple structural model (linearly dipping with faulting), in the future it will be possible to extend the system to model more complex features, such as anticlines, synclines, thrusts, nappes, and igneous intrusions.
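The least-squares computation of dip and strike from the 3-D primitive coordinates can be sketched as follows. This is an illustrative reconstruction of the linearly dipping plane model described above, not the thesis code; the angle conventions (dip measured from horizontal, strike by the right-hand rule, azimuths clockwise from north) are assumptions:

```python
import numpy as np

def dip_and_strike(points):
    """Fit a plane z = a*x + b*y + c to 3-D boundary points by least
    squares and convert its gradient to dip and strike in degrees.
    points: (N, 3) array of (x, y, z) coordinates."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, _), *_ = np.linalg.lstsq(A, z, rcond=None)
    dip = np.degrees(np.arctan(np.hypot(a, b)))           # steepest slope angle
    dip_azimuth = np.degrees(np.arctan2(-a, -b)) % 360.0  # downhill direction
    strike = (dip_azimuth - 90.0) % 360.0                 # right-hand rule
    return dip, strike
```

    For a synthetic plane dipping 45° due east, the routine recovers a dip of 45° and a north-oriented strike, which is the kind of cross-check against map and field measurements the thesis describes.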

    Radar target micro-doppler signature classification

    This thesis reports on research into the field of Micro-Doppler Signature (μ-DS) based radar Automatic Target Recognition (ATR) with additional contributions to general radar ATR methodology. The μ-DS based part of the research contributes to three distinct areas: time domain classification; frequency domain classification; and multiperspective μ-DS classification that includes the development of a theory for the multistatic μ-DS. The contribution to general radar ATR is the proposal of a methodology to allow better evaluation of potential approaches and to allow comparison between different studies. The proposed methodology is based around a “black box” model of a radar ATR system that, critically, includes a threshold to detect inputs that are previously unknown to the system. From this model a set of five evaluation metrics is defined. The metrics increase the understanding of the classifier’s performance from the common probability of correct classification, which reports how often the classifier correctly identifies an input, to understanding how reliable it is, how capable it is of generalizing from the reference data, and how effective its unknown input detection is. Additionally, the significance of performance prediction is discussed and a preliminary method to estimate how well a classifier should perform is developed. The proposed methodology is then used to evaluate the μ-DS based radar ATR approaches considered. The time domain classification investigation is based around using Dynamic Time Warping (DTW) to identify radar targets based on their μ-DS. DTW is a speech processing technique that classifies data series by comparing them with a pre-classified reference dataset. This is comparable to the common k-Nearest Neighbour (k-NN) algorithm, so k-NN is used as a benchmark against which to evaluate DTW’s performance. The DTW approach is observed to work well.
It achieved high probability of correct classification and reliability as well as being able to detect inputs of unknown class. However, the classifier’s ability to generalize from the reference data is less impressive and it performed only slightly better than a random selection from the possible output classes. Difficulties in classifying the μ-DS in the time domain are identified from the k-NN results, prompting a change to the frequency domain. Processing the μ-DS in the frequency domain permitted the development of an advanced feature extraction routine to maximize the separation of the target classes and therefore reduce the effort required to classify them. The frequency domain also permitted the use of the performance prediction method developed as part of the radar ATR methodology and the introduction of a naïve Bayesian approach to classification. The results for the DTW and k-NN classifiers in the frequency domain were comparable to the time domain, an unexpected result since it was anticipated that the μ-DS would be easier to classify in the frequency domain. However, the naïve Bayesian classifier produced excellent results that matched the predicted performance, suggesting it could not be bettered. With a successful classifier developed that would be suitable for real-world use, attention turned to the possibilities offered by the multistatic μ-DS. Multiperspective radar ATR uses data collected from different target aspects simultaneously to improve classification rates. It has been demonstrated to be successful for some of the alternatives to μ-DS based ATR and it was therefore speculated that it might improve the performance of μ-DS ATR solutions. The multiple perspectives required for the classifier were gathered using a multistatic radar developed at University College London (UCL). The production of a dataset, and its subsequent analysis, resulted in the first reported findings in the novel field of multistatic μ-DS theory.
Unfortunately, the nature of the radar used resulted in limited micro-Doppler being observed in the collected data, and this reduced its value for classification testing. An attempt to use DTW to perform multiperspective μ-DS ATR was made but the results were inconclusive. However, consideration of the improvements offered by multiperspective processing in alternative forms of ATR means it is still expected that μ-DS based ATR would benefit from this processing.
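The time-domain matching at the heart of the DTW classifier can be sketched as follows. This is the textbook dynamic-time-warping recurrence with a 1-NN decision rule; the thesis' exact features, warping constraints, and class labels are not reproduced, and the example labels are purely illustrative:

```python
import numpy as np

def dtw_distance(s, t):
    """Classic dynamic time warping distance between two 1-D series:
    D[i, j] accumulates the cheapest alignment cost of the first i
    samples of s against the first j samples of t."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_nearest(query, references):
    """1-NN over a labelled reference set, the DTW analogue of the
    k-NN benchmark: return the label of the closest reference series.
    references: list of (series, label) pairs."""
    return min(references, key=lambda rl: dtw_distance(query, rl[0]))[1]
```

    Because the warping path can stretch or compress the time axis, a time-shifted copy of a reference signature still matches it closely, which is what makes DTW attractive for μ-DS series of varying rate.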

    Deep Learning based Vehicle Detection in Aerial Imagery

    The use of airborne platforms equipped with imaging sensors is an essential component of many civil-security applications. Well-known application areas include the detection of prohibited or criminal activities, traffic monitoring, search and rescue, disaster relief, and environmental monitoring. Owing to the large volume of data to be processed and the resulting cognitive overload, however, analysis of aerial imagery by human analysts alone is impractical. Automatic image- and video-processing algorithms are therefore typically employed to support the human analysts. A central task is the reliable detection of relevant objects in the camera's field of view before the given scene can be interpreted. The low ground resolution caused by the large distance between camera and ground makes object detection in aerial imagery a challenging task, further complicated by motion blur, occlusions, and cast shadows. Although a variety of conventional approaches to object detection in aerial imagery exists in the literature, their detection accuracy is limited by the representational power of the hand-crafted features used. This work presents a new deep-learning-based approach to object detection in aerial imagery, focusing on the detection of vehicles in aerial images captured from a nadir viewpoint. The developed approach builds on the Faster R-CNN detector, which offers higher detection accuracy than other deep-learning-based detection methods.
Since Faster R-CNN, like the other deep-learning-based detection methods, was optimized on benchmark datasets, necessary adaptations to the characteristics of aerial imagery, such as the small dimensions of the vehicles to be detected, are first investigated systematically and the resulting problems identified. With regard to real applications, the high number of false detections caused by vehicle-like structures and the considerably increased runtime are especially problematic. Two new approaches are proposed to reduce the false detections, both aiming to improve the feature representation used by adding contextual information. The first approach refines the spatial context information by combining features from early and deep layers of the underlying CNN architecture, so that fine and coarse structures are better represented. The second approach makes use of semantic segmentation to increase the semantic information content. Two variants of integrating semantic segmentation into the detection method are realized: first, using the semantic segmentation results to filter out unlikely detections, and second, explicitly merging the CNN architectures for detection and segmentation. Both the refinement of the spatial context information and the integration of the semantic context information considerably reduce the number of false detections and thus increase detection accuracy. In particular, the sharp decline in false detections in unlikely image regions, such as on buildings, shows the increased robustness of the learned feature representations. Two alternative strategies are pursued to reduce runtime.
The first strategy replaces the CNN architecture used by default for feature extraction with a runtime-optimized CNN architecture that accounts for the characteristics of aerial imagery, while the second strategy comprises a new module for reducing the search space. With the help of the proposed strategies, the overall runtime as well as the runtime of each component of the detection method is reduced considerably. By combining the proposed approaches, both detection accuracy and runtime are improved significantly compared to the Faster R-CNN baseline. Representative approaches to vehicle detection in aerial imagery from the literature are outperformed quantitatively and qualitatively on several datasets. Furthermore, the generalizability of the designed approach is demonstrated on unseen images from further aerial-image datasets with differing characteristics.
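The first integration variant, using a semantic segmentation result to filter out unlikely detections, can be sketched as follows. The label values, overlap threshold, and function names are assumptions for illustration, not taken from the thesis:

```python
import numpy as np

def filter_detections(boxes, seg_mask, road_label=1, min_overlap=0.3):
    """Illustrative segmentation-based filtering step: discard
    detections whose box overlaps plausible vehicle regions (here:
    pixels labelled 'road' in seg_mask) by less than min_overlap.
    boxes: list of (x0, y0, x1, y1) pixel boxes.
    seg_mask: (H, W) integer label map from a segmentation network."""
    kept = []
    for x0, y0, x1, y1 in boxes:
        patch = seg_mask[y0:y1, x0:x1]
        if patch.size and np.mean(patch == road_label) >= min_overlap:
            kept.append((x0, y0, x1, y1))
    return kept
```

    This is the kind of post-processing that suppresses false detections on buildings and other implausible regions, as described above; the merged detection-and-segmentation architecture instead learns this suppression end to end.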