
    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems.
    Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing

    A deep-neural-network-based hybrid method for semi-supervised classification of polarimetric SAR data

    This paper proposes a deep-neural-network-based semi-supervised method for polarimetric synthetic aperture radar (PolSAR) data classification. The proposed method focuses on achieving a well-trained deep neural network (DNN) when the number of labeled samples is limited. In the proposed method, the probability vectors, where each entry indicates the probability of a sample belonging to a category, are first evaluated for the unlabeled samples, leading to an augmented training set. With this augmented training set, the parameters of the DNN are learned by solving an optimization problem in which the log-likelihood cost function and the class probability vectors are used. To alleviate the “salt-and-pepper” appearance of the classification results on PolSAR images, spatial interdependencies are incorporated by introducing a Markov random field (MRF) prior in the prediction step. Experimental results on two realistic PolSAR images demonstrate that the proposed method effectively incorporates the spatial interdependencies and achieves good classification accuracy with a limited number of labeled samples.
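    The training-set augmentation described above can be sketched as follows. This is a minimal illustration of pseudo-labeling with class probability vectors and a log-likelihood (cross-entropy) cost; the linear-softmax model `W` stands in for the paper's DNN, and all names and shapes are assumptions, not the authors' code:

    ```python
    import numpy as np

    def softmax(z):
        # numerically stable row-wise softmax
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def augment_with_soft_labels(X_lab, y_lab, X_unlab, W, n_classes):
        """Build the augmented training set: labeled samples keep one-hot
        probability vectors; unlabeled samples get soft probability vectors
        predicted by the current model W."""
        probs_unlab = softmax(X_unlab @ W)            # soft pseudo-labels
        probs_lab = np.eye(n_classes)[y_lab]          # one-hot labels
        X_aug = np.vstack([X_lab, X_unlab])
        P_aug = np.vstack([probs_lab, probs_unlab])
        return X_aug, P_aug

    def nll_loss(X, P, W):
        """Log-likelihood cost weighted by the class probability vectors."""
        logq = np.log(softmax(X @ W) + 1e-12)
        return -np.mean(np.sum(P * logq, axis=1))
    ```

    In the paper's setting, minimizing this cost over the augmented set would be one step of an alternating scheme: re-estimate the probability vectors, then update the network parameters.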

    Design Framework of UAV-Based Environment Sensing, Localization, and Imaging System

    In this dissertation research, we develop a framework for designing an Unmanned Aerial Vehicle (UAV)-based environment sensing, localization, and imaging system for challenging environments with no GPS signals and low visibility. The UAV system relies on the various sensors that it carries to conduct accurate sensing and localization of the objects in an environment, and further to reconstruct the 3D shapes of those objects. The system can be very useful when exploring an unknown or dangerous environment, e.g., a disaster site, that is inconvenient or inaccessible for humans. In addition, the system can be used for monitoring and object tracking in a large-scale environment, e.g., a smart manufacturing factory, for the purposes of workplace management/safety and maintaining optimal system performance/productivity. In our framework, the UAV system is comprised of two subsystems: a sensing and localization subsystem, and a mmWave radar-based 3D object reconstruction subsystem. The first subsystem is referred to as LIDAUS (Localization of IoT Device via Anchor UAV SLAM), an infrastructure-free, multi-stage SLAM (Simultaneous Localization and Mapping) system that utilizes a UAV to accurately localize and track IoT devices in a space with weak or no GPS signals. The rapidly increasing deployment of the Internet of Things (IoT) around the world is changing many aspects of our society. IoT devices can be deployed in various places for different purposes, e.g., in a manufacturing site or a large warehouse, and they can be displaced over time due to human activities or manufacturing processes. In a typical indoor environment, the lack of GPS signals and infrastructure support makes most existing indoor localization systems impractical for localizing a large number of wireless IoT devices. 
In addition, safety concerns, access restrictions, and the sheer number of IoT devices make it impractical for humans to manually localize and track them. Our LIDAUS is developed to address these problems. The UAV in our LIDAUS system conducts multi-stage 3D SLAM trips to localize devices based only on the Received Signal Strength Indicator (RSSI), the most widely available signal measurement on almost all commodity IoT devices. Our simulations and experiments with Bluetooth IoT devices demonstrate that LIDAUS can achieve high localization accuracy based only on the RSSIs of commodity IoT devices. Building on the first subsystem, we further develop the second subsystem for environment reconstruction and imaging via mmWave radar and deep learning. This subsystem is referred to as 3DRIMR/R2P (3D Reconstruction and Imaging via mmWave Radar/Radar to Point Cloud). It enables an exploring UAV to fly within an environment and collect mmWave radar data by scanning various objects in the environment. Taking advantage of the accurate locations given by the first subsystem, the UAV can scan an object from different viewpoints. Then, based on radar data only, the UAV can reconstruct the 3D shapes of the objects in the space. mmWave radar has been shown to be an effective sensing technique in low-visibility, smoky, dusty, and dense-fog environments. However, tapping the potential of radar sensing to reconstruct 3D object shapes remains a great challenge, due to the characteristics of radar data such as sparsity, low resolution, specularity, large noise, and multi-path-induced shadow reflections and artifacts. Hence, it is challenging to reconstruct 3D object shapes from the raw sparse and low-resolution mmWave radar signals. To address these challenges, our second subsystem utilizes deep learning models to extract features from sparse raw mmWave radar intensity data and reconstructs the 3D shapes of objects as dense and detailed point clouds. 
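As a minimal illustration of how an RSSI measurement can be related to a device's distance (one common ingredient of RSSI-based localization, not the dissertation's full SLAM pipeline), the standard log-distance path-loss model can be inverted as below. The reference RSSI at 1 m and the path-loss exponent are assumed calibration constants chosen for illustration:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-59.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model:
        RSSI(d) = RSSI(1 m) - 10 * n * log10(d)
    =>  d = 10 ** ((RSSI(1 m) - RSSI(d)) / (10 * n))
    rssi_at_1m and path_loss_exp are environment-dependent calibration
    constants (illustrative values here, not from the dissertation)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))
```

In practice, RSSI is noisy, which is why systems like LIDAUS combine many measurements taken along multi-stage SLAM trajectories rather than relying on a single range estimate.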
We first develop a deep learning model to reconstruct a single object’s 3D shape. The model first converts mmWave radar data to depth images, and then reconstructs an object’s 3D shape in point cloud format. Our experiments demonstrate the significant performance improvement of our system over popular existing methods such as PointNet, PointNet++, and PCN. We then further explore the feasibility of utilizing a mmWave radar sensor installed on a UAV to reconstruct the 3D shapes of multiple objects in a space. We evaluate two different models: Model 1 is the 3DRIMR/R2P model, and Model 2 is formed by adding a segmentation stage to the processing pipeline of Model 1. Our experiments demonstrate that both models are promising in solving the multiple-object reconstruction problem. We also show that Model 2, despite producing denser and smoother point clouds, can lead to higher reconstruction loss or even missing objects. In addition, we find that both models are robust to the highly noisy radar data obtained by unstable Synthetic Aperture Radar (SAR) operation due to the instability or vibration of a small UAV hovering at its intended scanning point. Our research shows a promising direction for applying mmWave radar sensing to 3D object reconstruction.
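Reconstruction quality of a generated point cloud against a ground-truth cloud is commonly measured with the Chamfer distance (the metric typically used when comparing against methods such as PCN). A minimal NumPy sketch of that metric, not the dissertation's implementation, is:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3):
    mean squared nearest-neighbor distance from p to q, plus from q to p.
    Brute force (N x M distance matrix); fine for small clouds."""
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

A lower value means the predicted cloud both covers the ground truth and contains few spurious points, which is why a denser but misplaced cloud (as observed for Model 2) can still score a higher reconstruction loss.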