Exploiting Deep Features for Remote Sensing Image Retrieval: A Systematic Investigation
Remote sensing (RS) image retrieval is of great significance for geological information mining. Over the past two decades, a large amount of research on this task has been carried out, mainly focusing on three core issues: feature extraction, similarity metrics, and relevance feedback. Due to the complexity and diversity of ground objects in high-resolution remote sensing (HRRS) images, there is still room for improvement in current retrieval approaches. In this paper, we analyze the three core issues of RS image retrieval and provide a comprehensive review of existing methods. Furthermore, with the goal of advancing the state of the art in HRRS image retrieval, we focus on feature extraction and investigate how powerful deep representations can be used for this task. We systematically evaluate the factors that may affect the performance of deep features. By optimizing each factor, we obtain remarkable retrieval results on publicly available HRRS datasets. Finally, we explain the experimental findings in detail and draw conclusions from our analysis. Our work can serve as a guide for research on content-based RS image retrieval.
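The two-stage pipeline the abstract describes — feature extraction followed by a similarity metric — can be sketched as a cosine-similarity ranking over precomputed feature vectors. The vectors below are toy values, not real deep features, and the function name is hypothetical:

```python
import numpy as np

def retrieve(query_feat, archive_feats, top_k=5):
    """Rank archive images by cosine similarity to a query feature vector."""
    q = query_feat / np.linalg.norm(query_feat)
    a = archive_feats / np.linalg.norm(archive_feats, axis=1, keepdims=True)
    sims = a @ q                      # cosine similarity to each archive image
    return np.argsort(-sims)[:top_k]  # indices of the most similar images

# Toy example: 4 archive "images" described by 3-D feature vectors
archive = np.array([[1.0, 0.0, 0.0],
                    [0.9, 0.1, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0])
print(retrieve(query, archive, top_k=2))  # → [0 1]
```

In practice the feature vectors would come from a pretrained CNN and be thousands of dimensions, but the ranking step is unchanged.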
PERSIANN-CNN: Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks-Convolutional Neural Networks
Accurate and timely precipitation estimates are critical for monitoring and forecasting natural disasters such as floods. Despite the availability of high-resolution satellite information, precipitation estimation from remotely sensed data still suffers from methodological limitations. State-of-the-art deep learning algorithms, renowned for their skill in learning accurate patterns within large and complex datasets, appear well suited to the task of precipitation estimation, given the ample amount of high-resolution satellite data. In this study, the effectiveness of applying convolutional neural networks (CNNs) together with the infrared (IR) and water vapor (WV) channels from geostationary satellites for estimating precipitation rate is explored. The performance of the proposed model is evaluated during the summers of 2012 and 2013 over the central CONUS at a spatial resolution of 0.08° and an hourly time scale. Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks (PERSIANN)–Cloud Classification System (CCS), an operational satellite-based product, and PERSIANN–Stacked Denoising Autoencoder (PERSIANN-SDAE) are employed as baseline models. Results demonstrate that the proposed model (PERSIANN-CNN) provides more accurate rainfall estimates than the baseline models at various temporal and spatial scales. Specifically, PERSIANN-CNN outperforms PERSIANN-CCS (and PERSIANN-SDAE) by 54% (and 23%) in the critical success index (CSI), demonstrating the detection skill of the model. Furthermore, the root-mean-square error (RMSE) of the rainfall estimates with respect to the National Centers for Environmental Prediction (NCEP) Stage IV gauge–radar data was lower for PERSIANN-CNN than for PERSIANN-CCS (PERSIANN-SDAE) by 37% (14%), demonstrating the estimation accuracy of the proposed model.
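The two skill scores quoted in the abstract can be computed from a hit/miss/false-alarm contingency table (CSI) and from paired estimate–observation arrays (RMSE). A minimal sketch with made-up rain rates; the 0.1 mm/h detection threshold is an assumption for illustration:

```python
import numpy as np

def csi(est, obs, threshold=0.1):
    """Critical success index: hits / (hits + misses + false alarms)."""
    hit = np.sum((est >= threshold) & (obs >= threshold))
    miss = np.sum((est < threshold) & (obs >= threshold))
    false_alarm = np.sum((est >= threshold) & (obs < threshold))
    return hit / (hit + miss + false_alarm)

def rmse(est, obs):
    """Root-mean-square error of the estimates against the reference."""
    return np.sqrt(np.mean((est - obs) ** 2))

# Synthetic hourly rain rates (mm/h) -- illustrative only
obs = np.array([0.0, 0.5, 2.0, 0.0, 1.0])
est = np.array([0.2, 0.4, 1.5, 0.0, 0.0])
print(csi(est, obs))   # → 0.5
print(rmse(est, obs))  # ≈ 0.51
```

A higher CSI indicates better detection skill, while a lower RMSE indicates better estimation accuracy, which is the pairing reported for PERSIANN-CNN.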
Estimation of forest variables using airborne laser scanning
Airborne laser scanning can provide three-dimensional measurements of the forest canopy with high efficiency and precision. There are presently a large number of airborne laser scanning instruments in operation. The aims of the studies reported in this thesis were to develop and validate methods for estimating forest variables using laser data, and to investigate the influence of laser system parameters on the estimates. All studies were carried out in hemi-boreal forest at a test area in southwestern Sweden (lat. 58°30’N, long. 13°40’E). Forest variables were estimated using regression models. At plot level, the root-mean-square error (RMSE) of the mean tree height estimates ranged between 6% and 11% of the average value for different datasets and methods. The RMSE of the stem volume estimates ranged between 19% and 26% of the average value for different datasets and methods. At stand level (area 0.64 ha), the RMSE was 3% and 11% of the average value for mean tree height and stem volume estimates, respectively. A simulation model was used to investigate the effect of different scanning angles on laser measurement of tree height and canopy closure; the effect differed between simulated forest types, e.g., between tree species. High-resolution laser data were used for the detection of individual trees. In total, 71% of the field-measured trees were detected, representing 91% of the total stem volume. Height and crown diameter of the detected trees could be estimated with an RMSE of 0.63 m and 0.61 m, respectively. The magnitude of the height estimation errors was similar to what is usually achieved with field inventory. Different laser footprint diameters (0.26 to 3.68 m) gave similar estimation accuracies. The tree species Norway spruce (Picea abies L. Karst.) and Scots pine (Pinus sylvestris L.) were discriminated at the individual tree level with an accuracy of 95%.
The results in this thesis show that airborne laser scanners are useful as forest inventory tools. Forest variables can be estimated at tree, plot, and stand level with accuracies similar to those of traditional field inventories.
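The accuracies above are reported as RMSE expressed as a percentage of the average field-measured value. Under that reading, the figure of merit can be sketched as follows; the plot-level heights are hypothetical numbers, not data from the thesis:

```python
import numpy as np

def relative_rmse(estimates, field_values):
    """RMSE expressed as a percentage of the mean field-measured value."""
    rmse = np.sqrt(np.mean((estimates - field_values) ** 2))
    return 100.0 * rmse / np.mean(field_values)

# Hypothetical plot-level mean tree heights (m): field reference vs. laser estimates
field = np.array([18.0, 22.0, 20.0, 25.0])
laser = np.array([17.0, 23.5, 19.0, 24.0])
print(f"{relative_rmse(laser, field):.1f}%")  # → 5.4%
```

Reporting the error relative to the mean makes results comparable across plots and stands with different absolute tree heights and stem volumes.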
On the remote sensing of oceanic and atmospheric convection in the Greenland Sea by synthetic aperture radar
In this paper we discuss characteristic properties of radar signatures of oceanic and atmospheric convection features in the Greenland Sea. If the water surface is clean (no surface films or ice coverage), oceanic and atmospheric features can become visible in radar images via a modulation of the surface roughness, and their radar signatures can be very similar. For an unambiguous interpretation and for the retrieval of quantitative information on current and wind variations from radar imagery with such signatures, theoretical models of current and wind phenomena and their radar imaging mechanisms must be utilized. We demonstrate this approach with the analysis of synthetic aperture radar (SAR) images acquired by the satellites ERS-2 and RADARSAT-1. In one case, an ERS-2 SAR image and a RADARSAT-1 ScanSAR image exhibit pronounced cell-like signatures with length scales on the order of 10-20 km and modulation depths of about 5-6 dB and 9-10 dB, respectively. Simulations with a numerical SAR imaging model and various input current and wind fields reveal that the signatures in both images can be explained consistently by wind variations on the order of ±2.5 m/s, but not by surface current variations of realistic magnitude. Accordingly, the observed features must be atmospheric convection cells. This is confirmed by typical cloud patterns visible in a NOAA AVHRR image of the test scenario. In another case, the presence of an oceanic convective chimney is obvious from in situ data, but no signatures of it are visible in an ERS-2 SAR image. We show by numerical simulations with an oceanic convection model and our SAR imaging model that this is consistent with theoretical predictions, since the current gradients associated with the observed chimney are not sufficiently strong to give rise to significant signatures in an ERS-2 SAR image under the given conditions.
Further model results indicate that it should generally be difficult to observe oceanic convection features in the Greenland Sea with ERS-2 or RADARSAT-1 SAR, since their signatures resulting from pure wave-current interaction will in most cases be too weak to become visible in the noisy SAR images. This situation will improve with the availability of future high-resolution SARs such as RADARSAT-2 SAR in fine-resolution mode (2004) and TerraSAR-X (2005), which will offer significantly reduced speckle noise fluctuations at comparable spatial resolutions and thus much better visibility of small image variations on spatial scales on the order of a few hundred meters.
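The modulation depths quoted above (5-6 dB and 9-10 dB) compare backscatter intensity inside and outside a signature. Assuming the usual decibel definition for an intensity ratio, a 9-10 dB modulation corresponds to roughly an eight- to tenfold intensity contrast; a tiny sketch of that conversion:

```python
import math

def modulation_depth_db(peak_intensity, trough_intensity):
    """Modulation depth of a radar signature in dB, assuming an intensity ratio."""
    return 10.0 * math.log10(peak_intensity / trough_intensity)

# An 8x intensity contrast between cell interior and edge
print(modulation_depth_db(8.0, 1.0))  # ≈ 9.03 dB
```

This is why speckle noise matters: a signature a few dB deep can disappear when the per-pixel noise fluctuations are of comparable magnitude.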
Impact of Feature Representation on Remote Sensing Image Retrieval
Remote sensing images are acquired using special platforms and sensors and are classified as aerial, multispectral, and hyperspectral images. Multispectral and hyperspectral images are represented by large spectral vectors compared to normal red, green, blue (RGB) images. Hence, retrieving remote sensing images from large archives is a challenging task. Remote sensing image retrieval mainly consists of two steps: feature representation, followed by finding images similar to a query image. Feature representation plays an important part in the performance of the retrieval process. This research work focuses on the impact of the feature representation of remote sensing images on retrieval performance. The study shows that more discriminative features of remote sensing images are needed to improve the performance of the retrieval process.