2,615 research outputs found
Daytime precipitation estimation using bispectral cloud classification system
Two previously developed Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithms, the cloud classification system (PERSIANN-CCS) and multispectral analysis (PERSIANN-MSA), are integrated and employed to analyze the role of cloud albedo from the Geostationary Operational Environmental Satellite-12 (GOES-12) visible (0.65 μm) channel in supplementing infrared (10.7 μm) data. The integrated technique derives finescale (0.04° × 0.04° latitude-longitude every 30 min) rain rates for each grid box through four major steps: 1) segmenting clouds into a number of cloud patches using infrared or albedo images; 2) classifying cloud patches into a number of cloud types using radiative, geometrical, and textural features of each individual cloud patch; 3) classifying each cloud type into a number of subclasses and assigning rain rates to each subclass using a multidimensional histogram matching method; and 4) associating satellite grid-box information with the appropriate corresponding cloud type and subclass to estimate rain rate at the grid scale. The technique was applied over a study region that includes the U.S. landmass east of 115°W. One reference infrared-only and three different bispectral (visible and infrared) rain estimation scenarios were compared to investigate the technique's ability to address two major drawbacks of infrared-only methods: 1) underestimation of warm rainfall and 2) the inability to screen out no-rain thin cirrus clouds. Radar estimates were used to evaluate the scenarios at a range of temporal (3 and 6 hourly) and spatial (0.04°, 0.08°, 0.12°, and 0.24° latitude-longitude) scales. Overall, the results using daytime data during June-August 2006 indicate that a significant gain over the infrared-only technique is obtained once albedo is used for cloud segmentation, followed by bispectral cloud classification and rainfall estimation.
At the 3-h, 0.04° resolution, the observed improvement using bispectral information was about 66% for the equitable threat score and 26% for the correlation coefficient. At the coarser 0.24° resolution, the gains were 34% and 32% for the two performance measures, respectively. © 2010 American Meteorological Society
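The equitable threat score used in this evaluation can be reproduced from a rain/no-rain contingency table of the estimates against the radar reference; a minimal sketch (the function and variable names are mine, not the paper's):

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """Equitable threat score (Gilbert skill score) from a 2x2 contingency table."""
    n = hits + misses + false_alarms + correct_negatives
    # Number of hits expected by random chance, given the marginal totals.
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)
```

The chance-correction term is what distinguishes ETS from the plain critical success index: a forecast that rains everywhere gets no credit for hits it would have scored at random.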
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, as a major breakthrough in the field, has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to all? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization.
Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine
PERSIANN-CNN: Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks-Convolutional Neural Networks
Abstract
Accurate and timely precipitation estimates are critical for monitoring and forecasting natural disasters such as floods. Despite having high-resolution satellite information, precipitation estimation from remotely sensed data still suffers from methodological limitations. State-of-the-art deep learning algorithms, renowned for their skill in learning accurate patterns within large and complex datasets, appear well suited to the task of precipitation estimation, given the ample amount of high-resolution satellite data. In this study, the effectiveness of applying convolutional neural networks (CNNs) together with the infrared (IR) and water vapor (WV) channels from geostationary satellites for estimating precipitation rate is explored. The proposed model's performance is evaluated during the summers of 2012 and 2013 over the central CONUS at a spatial resolution of 0.08° and at an hourly time scale. Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks (PERSIANN)–Cloud Classification System (CCS), which is an operational satellite-based product, and PERSIANN–Stacked Denoising Autoencoder (PERSIANN-SDAE) are employed as baseline models. Results demonstrate that the proposed model (PERSIANN-CNN) provides more accurate rainfall estimates than the baseline models at various temporal and spatial scales. Specifically, PERSIANN-CNN outperforms PERSIANN-CCS (and PERSIANN-SDAE) by 54% (and 23%) in the critical success index (CSI), demonstrating the detection skills of the model. Furthermore, the root-mean-square error (RMSE) of the rainfall estimates with respect to the National Centers for Environmental Prediction (NCEP) Stage IV gauge–radar data was lower for PERSIANN-CNN than for PERSIANN-CCS (PERSIANN-SDAE) by 37% (14%), showing the estimation accuracy of the proposed model.
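The core idea of mapping stacked IR and WV channels through convolutional filters to a rain-rate field can be caricatured in a few lines of NumPy; this is an illustrative forward pass with untrained, randomly chosen weights, not the actual PERSIANN-CNN architecture:

```python
import numpy as np

def conv2d(channels, kernels, bias):
    """Valid 2-D convolution: channels (C, H, W), kernels (K, C, kH, kW) -> (K, H', W')."""
    K, C, kh, kw = kernels.shape
    _, H, W = channels.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(channels[:, i:i + kh, j:j + kw] * kernels[k]) + bias[k]
    return out

def estimate_rain(ir, wv, kernels, bias, head):
    """Toy forward pass: stack IR and WV, convolve, ReLU, linearly combine feature maps."""
    x = np.stack([ir, wv])                             # (2, H, W) two-channel input
    feat = np.maximum(conv2d(x, kernels, bias), 0.0)   # ReLU feature maps
    rain = np.tensordot(head, feat, axes=1)            # weighted sum over feature maps
    return np.maximum(rain, 0.0)                       # rain rate is non-negative
```

In practice the weights would be learned against a gauge–radar reference such as Stage IV; the sketch only shows how the two spectral channels enter the convolution jointly.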
Exploiting Deep Features for Remote Sensing Image Retrieval: A Systematic Investigation
Remote sensing (RS) image retrieval is of great significance for geological information mining. Over the past two decades, a large amount of research on this task has been carried out, mainly focused on three core issues: feature extraction, similarity metrics, and relevance feedback. Due to the complexity and multiformity of ground objects in high-resolution remote sensing (HRRS) images, there is still room for improvement in current retrieval approaches. In this paper, we analyze the three core issues of RS image retrieval and provide a comprehensive review of existing methods. Furthermore, with the goal of advancing the state of the art in HRRS image retrieval, we focus on the feature extraction issue and delve into how powerful deep representations can be used to address this task. We conduct a systematic investigation of the correlative factors that may affect the performance of deep features. By optimizing each factor, we achieve remarkable retrieval results on publicly available HRRS datasets. Finally, we explain the experimental phenomena in detail and draw conclusions from our analysis. Our work can serve as a guide for research on content-based RS image retrieval.
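Once deep features have been extracted, the retrieval step reduces to ranking a gallery by similarity to the query; a minimal sketch using cosine similarity over L2-normalized feature vectors (a common choice, assumed here rather than taken from the paper):

```python
import numpy as np

def retrieve(query_feat, gallery_feats, top_k=3):
    """Rank gallery images by cosine similarity of their deep features to the query."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                # cosine similarity of each gallery feature to the query
    order = np.argsort(-sims)   # indices sorted by decreasing similarity
    return order[:top_k], sims[order[:top_k]]
```

The feature vectors here stand in for activations pooled from a pretrained CNN; which layer to pool, and whether to aggregate over image regions, are exactly the kinds of correlative factors the paper investigates.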
Understanding Heterogeneous EO Datasets: A Framework for Semantic Representations
Earth observation (EO) has become a valuable source of comprehensive, reliable, and persistent information for a wide range of applications. However, dealing with the complexity of land cover is sometimes difficult, as the variety of EO sensors is reflected in the multitude of details recorded in several types of image data. Their properties dictate the category and nature of the perceptible land structures. This data heterogeneity hampers proper understanding, preventing the definition of universal procedures for content exploitation. The main shortcomings are due to the differences between human and sensor perception of objects, as well as to the lack of coincidence between visual elements and similarities obtained by computation. In order to bridge these sensory and semantic gaps, the paper presents a compound framework for EO image information extraction. The proposed approach acts as common ground between the user's understanding, which is limited to the visible domain, and the machine's numerical interpretation of a much wider range of information. A hierarchical data representation is considered. At first, basic elements are computed automatically. Then, users can enforce their judgement on the data processing results until semantic structures are revealed. This procedure completes a user-machine knowledge transfer. The interaction is formalized as a dialogue, in which communication is determined by a set of parameters guiding the computational process at each level of representation. The purpose is to keep the data-driven observables connected to the level of semantics and to human awareness. The proposed concept offers flexibility and interoperability, allowing users to generate the results that best fit their application scenario. Experiments performed on different satellite images demonstrate the framework's ability to improve semantic-annotation performance by adjusting a set of parameters to the particularities of the analyzed data.
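The dialogue described above, in which the machine groups primitive descriptors and the user steers the outcome through parameters and then attaches semantic labels, can be caricatured as follows; the k-means grouping and the `n_groups` knob are illustrative assumptions on my part, not the paper's actual algorithm:

```python
import numpy as np

def group_primitives(features, n_groups, n_iter=20, seed=0):
    """Group primitive image descriptors; n_groups is the user-tuned dialogue parameter."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), n_groups, replace=False)]
    for _ in range(n_iter):
        # Assign each primitive to its nearest center (basic k-means step).
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_groups):
            if (labels == k).any():
                centers[k] = features[labels == k].mean(axis=0)
    return labels

def annotate(labels, user_names):
    """The user closes the loop by attaching semantic names to the computed groups."""
    return [user_names.get(k, "unlabeled") for k in labels]
```

The point of the sketch is the division of labor: the unsupervised grouping is data-driven, while the semantics ("water", "urban", ...) enter only through the user's labeling of the groups, mirroring the user-machine knowledge transfer the framework formalizes.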