
    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, including computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Self-supervised Learning in Remote Sensing: A Review

    In deep learning research, self-supervised learning (SSL) has received great attention, triggering interest within both the computer vision and remote sensing communities. While it has seen great success in computer vision, most of the potential of SSL in the domain of earth observation remains locked. In this paper, we provide an introduction to, and a review of, the concepts and latest developments in SSL for computer vision in the context of remote sensing. Further, we provide a preliminary benchmark of modern SSL algorithms on popular remote sensing datasets, verifying the potential of SSL in remote sensing and providing an extended study on data augmentations. Finally, we identify a list of promising directions of future research in SSL for earth observation (SSL4EO) to pave the way for fruitful interaction of both domains. Comment: Accepted by IEEE Geoscience and Remote Sensing Magazine. 32 pages, 22 content pages.
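    As a small, purely illustrative companion to this review, the sketch below implements the NT-Xent contrastive objective that underlies several of the SSL methods such a benchmark typically covers (e.g., SimCLR-style approaches); the embeddings, batch size and temperature are made-up values and are not taken from the paper.

        import numpy as np

        def nt_xent_loss(z_a, z_b, tau=0.5):
            # NT-Xent (SimCLR-style) contrastive loss for paired embeddings.
            # z_a, z_b: (N, D) embeddings of two augmented views of the same
            # N images (e.g. two random crops of the same satellite patch).
            z = np.concatenate([z_a, z_b], axis=0)              # (2N, D)
            z = z / np.linalg.norm(z, axis=1, keepdims=True)    # unit-normalise
            sim = z @ z.T / tau                                 # similarity logits
            np.fill_diagonal(sim, -np.inf)                      # drop self-similarity
            n = z_a.shape[0]
            # The positive for sample i is its other view: i + n or i - n.
            pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
            log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
            return -log_prob[np.arange(2 * n), pos].mean()

        # Toy usage with random "embeddings" of 8 image pairs; in a real
        # SSL4EO pipeline these would come from an encoder network.
        rng = np.random.default_rng(0)
        z_a, z_b = rng.normal(size=(8, 128)), rng.normal(size=(8, 128))
        print(nt_xent_loss(z_a, z_b))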

    Advanced signal processing solutions for ATR and spectrum sharing in distributed radar systems

    Previously held under moratorium from 11 September 2017 until 16 February 2022. This thesis presents advanced signal processing solutions for Automatic Target Recognition (ATR) operations and for spectrum sharing in distributed radar systems. Two Synthetic Aperture Radar (SAR) ATR algorithms are described for full- and single-polarimetric images, and tested on the GOTCHA and MSTAR datasets. The first exploits the Krogager polarimetric decomposition in order to enhance peculiar scattering mechanisms from man-made targets, used in combination with the pseudo-Zernike image moments. The second algorithm employs the Krawtchouk image moments, which, being discretely defined, provide better representations of targets' details. The proposed image-moment-based framework can be extended to several images from multiple sensors through a simple fusion rule. A model-based micro-Doppler algorithm is developed for the identification of helicopters. The approach relies on a proposed sparse representation of the signal scattered from the helicopter's rotor and received by the radar. This sparse representation is obtained through a greedy sparse recovery framework, with the goal of estimating the number, length and rotation speed of the blades, parameters that are peculiar to each helicopter model. The algorithm is extended to the identification of multiple helicopters flying in formation that cannot be resolved in another domain. Moreover, a fusion rule is presented to integrate the identification results from several sensors in a distributed radar system. Tests performed both on simulated signals and on real signals acquired from a scale model of a helicopter confirm the validity of the algorithm. Finally, a waveform design framework for joint radar-communication systems is presented. The waveform is composed of quasi-orthogonal chirp sub-carriers generated through the Fractional Fourier Transform (FrFT), with the aim of preserving the radar performance of a typical Linear Frequency Modulated (LFM) pulse while embedding data to be sent to a cooperative system. Techniques aimed at optimising the design parameters and mitigating the Inter-Carrier Interference (ICI) caused by the quasi-orthogonality of the chirp sub-carriers are also described. The FrFT-based waveform is extensively tested and compared with Orthogonal Frequency Division Multiplexing (OFDM) and LFM waveforms, in order to assess both its radar and communication performance.
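    The micro-Doppler identification above hinges on greedy sparse recovery; the snippet below is a minimal Orthogonal Matching Pursuit sketch over a generic dictionary, standing in for the thesis's dictionary of rotor-return atoms (the atom construction, blade parameters and real signal model are not reproduced here).

        import numpy as np

        def omp(D, y, k):
            # Greedy sparse recovery (Orthogonal Matching Pursuit).
            # D: (M, N) dictionary whose columns are candidate atoms
            #    (hypothetically, rotor returns for different blade counts
            #    and rotation speeds); y: (M,) received signal; k: sparsity.
            residual, support = y.copy(), []
            x = np.zeros(D.shape[1])
            for _ in range(k):
                # Pick the atom most correlated with the current residual.
                support.append(int(np.argmax(np.abs(D.T @ residual))))
                # Re-fit y on the selected atoms by least squares.
                coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coeffs
            x[support] = coeffs
            return support, x

        # Toy usage: recover a 2-sparse combination of random unit atoms.
        rng = np.random.default_rng(1)
        D = rng.normal(size=(64, 200))
        D /= np.linalg.norm(D, axis=0)
        y = 3.0 * D[:, 17] - 2.0 * D[:, 90]
        print(omp(D, y, k=2)[0])   # typically recovers atoms 17 and 90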

    Ship Detection for PolSAR Images via Task-Driven Discriminative Dictionary Learning

    Ship detection with polarimetric synthetic aperture radar (PolSAR) has received increasing attention for its wide usage in maritime applications. However, extracting discriminative features to implement ship detection is still a challenging problem. In this paper, we propose a novel ship detection method for PolSAR images via task-driven discriminative dictionary learning (TDDDL). We assume that ship and clutter information are sparsely coded under two separate dictionaries. Contextual information is considered by imposing superpixel-level joint sparsity constraints. In order to amplify the discrimination between ship and clutter, we impose incoherence constraints between the two sub-dictionaries in the objective of feature coding. The discriminative dictionary is trained jointly with a linear classifier in a task-driven dictionary learning (TDDL) framework. Based on the learnt dictionary and classifier, we extract discriminative features by sparse coding and obtain robust detection results through binary classification. Different from previous methods, our ship detection cue is obtained through active learning strategies rather than artificially designed rules, and is thus more adaptive, effective and robust. Experiments performed on synthetic images and two RADARSAT-2 images demonstrate that our method outperforms other comparative methods. In addition, the proposed method yields better shape-preserving ability and lower computation cost.
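    The core idea, two sub-dictionaries kept mutually incoherent and a decision made on sparse codes, can be illustrated with a short sketch; the sub-dictionaries below are random stand-ins (not learnt with TDDDL), and a simple residual comparison replaces the paper's learnt linear classifier.

        import numpy as np
        from sklearn.linear_model import orthogonal_mp

        rng = np.random.default_rng(2)

        # Hypothetical sub-dictionaries for ship and clutter patches
        # (columns are atoms over a 32-dimensional polarimetric feature).
        D_ship = rng.normal(size=(32, 20))
        D_clutter = rng.normal(size=(32, 20))
        D_ship /= np.linalg.norm(D_ship, axis=0)
        D_clutter /= np.linalg.norm(D_clutter, axis=0)

        # Incoherence term ||D_ship^T D_clutter||_F^2: penalising it keeps
        # ship and clutter atoms distinct, amplifying their discrimination.
        incoherence = np.linalg.norm(D_ship.T @ D_clutter, 'fro') ** 2

        # Sparse-code a test sample over the concatenated dictionary, then
        # decide by which sub-dictionary reconstructs it better.
        D = np.hstack([D_ship, D_clutter])
        y = 2.0 * D_ship[:, 3] - 1.5 * D_ship[:, 11] + 0.01 * rng.normal(size=32)
        code = orthogonal_mp(D, y, n_nonzero_coefs=5)
        r_ship = np.linalg.norm(y - D_ship @ code[:20])
        r_clutter = np.linalg.norm(y - D_clutter @ code[20:])
        print(f"incoherence={incoherence:.2f}",
              "ship" if r_ship < r_clutter else "clutter")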

    Statistical and Stochastic Learning Algorithms for Distributed and Intelligent Systems

    In the big data era, statistical and stochastic learning for distributed and intelligent systems focuses on improving the robustness of learning models that have become pervasive and are being deployed for decision-making in real-life applications, including general classification, prediction, and sparse sensing. The growing prospect of statistical learning approaches such as Linear Discriminant Analysis and distributed learning being used in settings such as community sensing has raised concerns about the robustness of algorithm design. Recent work on anomaly detection has shown that such learning models can also succumb to so-called 'edge cases', where the real-life operational situation presents data that are not well represented in the training data set. Such cases have recently been the primary cause of several misclassification bottlenecks. Although initial research has begun to address scenarios with specific learning models, there remains a significant knowledge gap regarding the detection of 'edge cases' and the adaptation of learning models to them and to extremely ill-posed settings in the context of distributed and intelligent systems. With this motivation, this dissertation explores the complexities of several typical applications and associated algorithms to detect and mitigate such uncertainty, substantially reducing the risk of using statistical and stochastic learning algorithms for distributed and intelligent systems.
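    The dissertation is described only at a high level here, so the sketch below merely illustrates one common way to flag 'edge cases' for a Linear Discriminant Analysis model: score each incoming sample by its Mahalanobis distance to the nearest class mean under the shared covariance. All data, dimensions and thresholds are made up for the example.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(3)

        # Toy two-class training set (a stand-in for, e.g., community-sensing features).
        X = np.vstack([rng.normal(0.0, 1.0, size=(200, 4)),
                       rng.normal(3.0, 1.0, size=(200, 4))])
        y = np.repeat([0, 1], 200)
        lda = LinearDiscriminantAnalysis(store_covariance=True).fit(X, y)

        def edge_case_score(x, model):
            # Minimum Mahalanobis distance from x to any class mean under the
            # LDA's shared covariance; large values flag samples unlike anything
            # seen in training -- candidate 'edge cases'.
            prec = np.linalg.inv(model.covariance_)
            return min(np.sqrt((x - mu) @ prec @ (x - mu)) for mu in model.means_)

        x_typical = np.zeros(4)        # close to class 0
        x_edge = np.full(4, 10.0)      # far from both classes
        print(edge_case_score(x_typical, lda), edge_case_score(x_edge, lda))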

    Efficient Multi-Objective NeuroEvolution in Computer Vision and Applications for Threat Identification

    Concealed threat detection is at the heart of critical security systems designed to ensure public safety. Currently, methods for threat identification and detection are primarily manual, but there is a recent vision to automate the process. Problematically, developing computer vision models capable of operating in a wide range of settings, such as the ones arising in threat detection, is a challenging task involving multiple (and often conflicting) objectives. Automated machine learning (AutoML) is a flourishing field which endeavours to discover and optimise models and hyperparameters autonomously, providing an alternative to classic, effort-intensive hyperparameter search. However, existing approaches typically show significant downsides, such as their (1) high computational cost and greediness in resources, (2) limited (or absent) scalability to custom datasets, (3) inability to provide competitive alternatives to expert-designed and heuristic approaches, and (4) common consideration of a single objective. Moreover, most existing studies focus on standard classification tasks and thus cannot address a plethora of problems in threat detection and, more broadly, in a wide variety of compelling computer vision scenarios. This thesis leverages state-of-the-art convolutional autoencoders and semantic segmentation (Chapter 2) to develop effective multi-objective AutoML strategies for neural architecture search. These strategies are designed for threat detection and provide insights into some quintessential computer vision problems. To this end, the thesis first introduces two new models, a practical Multi-Objective Neuroevolutionary approach for Convolutional Autoencoders (MONCAE, Chapter 3) and a Resource-Aware model for Multi-Objective Semantic Segmentation (RAMOSS, Chapter 4). Interestingly, these approaches reached state-of-the-art results using a fraction of the computational resources required by competing systems (0.33 GPU days compared to 3150), while allowing multiple objectives (e.g., performance and number of parameters) to be optimised simultaneously. This drastic speed-up was possible through the coalescence of neuroevolution algorithms with a new heuristic technique termed Progressive Stratified Sampling. The presented methods are evaluated on a range of benchmark datasets and then applied to several threat detection problems, outperforming previous attempts at balancing multiple objectives. The final chapter of the thesis focuses on threat detection, exploiting these two models and novel components. It first presents a new modification of specialised proxy scores embedded in RAMOSS, enabling the AutoML process to be accelerated even more drastically while maintaining avant-garde performance (above 85% precision for SIXray). This approach rendered a new automatic evolutionary Multi-objEctive method for cOncealed Weapon detection (MEOW), which outperforms state-of-the-art models for threat detection on key datasets: a gold-standard benchmark (SIXray) and a security-critical, proprietary dataset. Finally, the thesis shifts the focus from neural architecture search to identifying the most representative data samples. Specifically, the Multi-objectIve Core-set Discovery through evolutionAry algorithMs in computEr vision approach (MIRA-ME) showcases how the new neural architecture search techniques developed in previous chapters can be adapted to operate on the data space. MIRA-ME offers supervised and unsupervised ways to select maximally informative, compact sets of images via dataset compression. This operation can offset the computational cost further (above 90% compression), with a minimal sacrifice in performance (less than 5% for MNIST and less than 13% for SIXray). Overall, this thesis proposes novel model- and data-centred approaches towards a more widespread use of AutoML as an optimal tool for architecture and coreset discovery. With the presented and future developments, the work suggests that AutoML can effectively operate in real-time and performance-critical settings such as threat detection, even fostering interpretability by uncovering more parsimonious optimal models. More widely, these approaches have the potential to provide effective solutions to challenging computer vision problems that are nowadays typically considered unfeasible for AutoML settings.
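    Neither the search code nor the datasets are reproduced here; as a small illustration of the multi-objective machinery underlying approaches such as MONCAE and RAMOSS, the sketch below performs the Pareto (non-dominated) filtering of candidate architectures scored on two competing objectives, with made-up (validation error, parameter-count) values.

        import numpy as np

        def pareto_front(objectives):
            # Return indices of non-dominated points, minimising every column.
            # objectives: (n, m) array, e.g. columns = (validation error, #parameters),
            # the kind of trade-off a multi-objective neuroevolution loop optimises.
            n = objectives.shape[0]
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                # i is dominated if some candidate is no worse on every
                # objective and strictly better on at least one.
                dominated = (np.all(objectives <= objectives[i], axis=1)
                             & np.any(objectives < objectives[i], axis=1))
                if dominated.any():
                    keep[i] = False
            return np.where(keep)[0]

        # Toy candidate architectures: (validation error, millions of parameters).
        cands = np.array([[0.10, 5.0],
                          [0.12, 1.2],
                          [0.09, 9.0],
                          [0.15, 0.4],
                          [0.11, 6.0]])   # dominated by [0.10, 5.0]
        print(pareto_front(cands))        # -> [0 1 2 3]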

    Radar Image Processing: A Monograph (original title: Обработка радиолокационных изображений: монография)

    The book is devoted to solving theoretical and practical problems of detection, parameter measurement and classification of spatially distributed targets from their radar images, formed by a multi-position observation system implemented with a group of spacecraft. The book examines in detail methods for the synthesis and analysis of classification algorithms for spatially distributed targets, algorithms for estimating radar image parameters, classification algorithms based on neural networks and partially coherent radars, algorithms for forming radar images of moving objects, speckle-noise filtering methods, methods for analysing interference immunity, and methods for the geocorrection of the formed radar images. The book is of interest to specialists, students and postgraduates working on the development of modern radio engineering systems for military and civilian applications.
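    As a small, illustrative companion to one of the topics the monograph covers (speckle-noise filtering), the sketch below applies a classic Lee filter to a synthetic speckled image; the window size, noise variance and test scene are assumed values, not taken from the book.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lee_filter(img, win=7, noise_var=0.05):
            # Classic Lee speckle filter for a SAR intensity image.
            # win: side of the local averaging window (pixels);
            # noise_var: assumed variance of the multiplicative speckle.
            mean = uniform_filter(img, size=win)
            sq_mean = uniform_filter(img ** 2, size=win)
            var = np.maximum(sq_mean - mean ** 2, 0.0)
            # Adaptive gain: smooth heavily in homogeneous areas, keep edges.
            gain = var / (var + noise_var * mean ** 2 + 1e-12)
            return mean + gain * (img - mean)

        # Toy usage on a synthetic two-region scene with multiplicative speckle.
        rng = np.random.default_rng(4)
        clean = np.ones((128, 128)); clean[:, 64:] = 4.0
        speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
        # The filtered image has lower overall variance than the speckled one.
        print(lee_filter(speckled).std(), speckled.std())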