
    Exploiting Deep Features for Remote Sensing Image Retrieval: A Systematic Investigation

    Remote sensing (RS) image retrieval is of great significance for geological information mining. Over the past two decades, a large body of research on this task has been carried out, focusing mainly on three core issues: feature extraction, similarity metrics and relevance feedback. Due to the complexity and diversity of ground objects in high-resolution remote sensing (HRRS) images, there is still room for improvement in current retrieval approaches. In this paper, we analyze the three core issues of RS image retrieval and provide a comprehensive review of existing methods. Furthermore, with the goal of advancing the state of the art in HRRS image retrieval, we focus on the feature extraction issue and investigate how powerful deep representations can be used to address this task. We conduct a systematic investigation of the factors that may affect the performance of deep features. By optimizing each factor, we obtain remarkable retrieval results on publicly available HRRS datasets. Finally, we explain the experimental findings in detail and draw conclusions from our analysis. Our work can serve as a guide for research on content-based RS image retrieval.
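The retrieval pipeline the abstract describes, deep features plus a similarity metric, can be sketched minimally. The feature vectors below are toy stand-ins for CNN activations, and cosine similarity is only one of the metrics such systems use:

```python
import numpy as np

def rank_by_cosine(query_feat, db_feats):
    """Rank database images by cosine similarity to a query feature vector."""
    # L2-normalise so a dot product equals cosine similarity
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q
    return np.argsort(-sims)  # indices of most-similar images first

# Toy "deep features" for 4 archive images; index 2 matches the query exactly
feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8], [-1.0, 0.0]])
query = np.array([0.6, 0.8])
order = rank_by_cosine(query, feats)
```

In a real system the rows of `feats` would come from a pretrained CNN, and the ranking step would typically use an approximate nearest-neighbour index rather than a brute-force dot product.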

    EAGLE 2006 – Multi-purpose, multi-angle and multi-sensor in-situ and airborne campaigns over grassland and forest

    EAGLE2006, an intensive field campaign, was carried out in the Netherlands from the 8th until the 18th of June 2006. Several airborne sensors - an optical imaging sensor, an imaging microwave radiometer, and a flux airplane - were used, and extensive ground measurements were conducted over one grassland site (Cabauw) and two forest sites (Loobos & Speulderbos) in the central part of the Netherlands, in addition to the acquisition of multi-angle and multi-sensor satellite data. The data set is both unique and urgently needed for the development and validation of models and inversion algorithms for quantitative surface parameter estimation and process studies. EAGLE2006 was led by the Department of Water Resources of the International Institute for Geo-Information Science and Earth Observation and originated from the combination of a number of initiatives under different funding. The objectives of the EAGLE2006 campaign were closely related to those of other ESA campaigns (SPARC2004, Sen2Flex2005 and especially AGRISAR2006). However, one important objective of the campaign was to build up a database for the investigation and validation of the retrieval of bio-geophysical parameters, obtained at different radar frequencies (X-, C- and L-band) and at hyperspectral optical and thermal bands acquired over vegetated fields (forest and grassland). As such, all activities were related to algorithm development for future satellite missions such as the Sentinels and to satellite validation for MERIS and MODIS as well as AATSR and ASTER thermal data, with activities also related to the ASAR sensor on board ESA's Envisat platform and those on EPS/MetOp and SMOS.
Most of the activities in the campaign were highly relevant for the EU GEMS EAGLE project; issues related to the retrieval of biophysical parameters from MERIS and MODIS as well as AATSR and ASTER data were of particular relevance to the NWO-SRON EcoRTM project, while scaling issues and the complementarity between these (covering only local sites) and global sensors such as MERIS/SEVIRI, EPS/MetOp and SMOS were also key elements for the SMOS cal/val project and the ESA-MOST DRAGON programme. This contribution describes the mission objectives and provides an overview of the airborne and field campaigns.

    Classification Modeling for Malaysian Blooming Flower Images Using Neural Networks

    Image processing is a rapidly growing research area of computer science and remains a challenging problem in computer vision. For the classification of flower images, the difficulty stems mainly from the strong similarities among species in colour and texture. Properties of the image itself, such as variations in illumination, shadow effects on the object's surface, size, shape, rotation and position, background clutter, and the state of blooming or budding, may also affect the classification techniques used. This study aims to develop a classification model for Malaysian blooming flowers using a neural network with the back-propagation algorithm. Each flower image is represented by a Region of Interest (ROI), from which texture and colour are extracted. In this research, a total of 960 images were extracted from 16 types of flowers. Each ROI was represented by three colour attributes (Hue, Saturation, and Value) and four texture attributes (Contrast, Correlation, Energy and Homogeneity). In the training and testing phases, experiments were carried out to observe the classification performance of neural networks with duplication of patterns that are difficult to learn (referred to as DOUBLE), as this may help explain why some flower images are difficult for classifiers to learn. Results show that the overall performance of the neural network with DOUBLE is 96.3%, while with the actual data set it is 68.3%; the accuracy obtained from Logistic Regression with the actual data set is 60.5%. The Decision Tree classification results indicate that the highest performance obtained by Chi-Squared Automatic Interaction Detection (CHAID) and Exhaustive CHAID (EX-CHAID) is merely 42% with DOUBLE. The findings from this study indicate that the neural network with the DOUBLE data set produces the highest performance compared to Logistic Regression and Decision Tree. Therefore, the neural network shows potential for building a Malaysian blooming flower model.
Future studies can focus on increasing the sample size and the ROI, which may lead to a higher percentage of accuracy. Nevertheless, the developed flower model can be used as part of a Malaysian blooming flower recognition system in the future, where colour and texture are needed in the flower identification process.
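The seven-attribute descriptor described above (mean Hue, Saturation and Value plus the GLCM's Contrast, Correlation, Energy and Homogeneity) can be sketched roughly as follows. This is a hypothetical reimplementation with a single horizontal GLCM offset, not the study's exact pipeline:

```python
import colorsys
import numpy as np

def glcm_features(gray, levels=8):
    """Contrast, correlation, energy and homogeneity from a horizontal-offset GLCM."""
    g = (gray * (levels - 1)).astype(int)      # quantise to `levels` grey levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(g[:, :-1].ravel(), g[:, 1:].ravel()):
        glcm[a, b] += 1                         # count horizontal neighbour pairs
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1 + np.abs(i - j))).sum()
    correlation = (((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j)
                   if sd_i > 0 and sd_j > 0 else 1.0)
    return [contrast, correlation, energy, homogeneity]

def roi_features(rgb):
    """Seven-attribute descriptor: mean H, S, V plus four GLCM texture values."""
    hsv = [colorsys.rgb_to_hsv(*px) for px in rgb.reshape(-1, 3)]
    h, s, v = np.mean(hsv, axis=0)
    gray = rgb.mean(axis=2)
    return [h, s, v] + glcm_features(gray)

# Toy 4x4 "ROI" of pure red pixels (values in [0, 1])
roi = np.zeros((4, 4, 3))
roi[..., 0] = 1.0
feat = roi_features(roi)
```

In practice a library implementation of the GLCM (e.g. averaging several offsets and angles) would replace the single-offset version sketched here.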

    Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review

    This paper investigates recent research on active learning for (geo) text and image classification, with an emphasis on methods that combine visual analytics and/or deep learning. Deep learning has attracted substantial attention across many domains of science and practice because it can find intricate patterns in big data, but successful application of its methods requires a large set of labeled data. Active learning, which has the potential to address the data labeling challenge, has already had success in geospatial applications such as trajectory classification from movement data and (geo) text and image classification. This review is intended to be particularly relevant for the extension of these methods to GIScience, supporting work in domains such as geographic information retrieval from text and image repositories, interpretation of spatial language, and related geo-semantics challenges. Specifically, to provide a structure for leveraging recent advances, we group the relevant work into five categories: active learning, visual analytics, active learning with visual analytics, active deep learning, and GIScience and Remote Sensing (RS) using active learning and active deep learning. Each category is exemplified by recent influential work. Based on this framing and our systematic review of key research, we then discuss some of the main challenges of integrating active learning with visual analytics and deep learning, and point out research opportunities from technical and application perspectives; for application-based opportunities, we place emphasis on those that address big data with geospatial components.

    A Novel System for Content-Based Retrieval of Single and Multi-Label High-Dimensional Remote Sensing Images

    © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    This paper presents a novel content-based remote sensing (RS) image retrieval system that consists of two components: first, an image description method that characterizes both the spatial and the spectral information content of RS images; and second, a supervised retrieval method that efficiently models and exploits the sparsity of RS image descriptors. The proposed image description method characterizes the spectral content by three different novel spectral descriptors: raw pixel values, a simple bag of spectral values, and an extended bag of spectral values. To model the spatial content of RS images, we consider the well-known scale-invariant feature transform-based bag-of-visual-words approach. With the conjunction of the spatial and spectral descriptors, RS image retrieval is achieved by a novel sparse reconstruction-based RS image retrieval method. The proposed method considers a novel measure of label likelihood in the framework of sparse reconstruction-based classifiers and generalizes the original sparse classifier to both single-label and multi-label RS image retrieval problems. Finally, to enhance retrieval performance, we introduce a strategy that exploits the sensitivity of the sparse reconstruction-based method to different dictionary words. Experimental results obtained on two benchmark archives show the effectiveness of the proposed system.
    EC/H2020/759764/EU/Accurate and Scalable Processing of Big Data in Earth Observation/BigEart
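A sparse reconstruction-based classifier of the general family this abstract builds on can be sketched as follows. The greedy OMP solver, the toy dictionary and the class labels are illustrative stand-ins, not the paper's method or its label-likelihood measure:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: pick k atoms of D that best reconstruct y."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)  # refit on chosen atoms
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def src_label(D, labels, y, k=2):
    """Assign the label whose atoms give the smallest reconstruction residual."""
    x = omp(D, y, k)
    best, best_err = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        err = np.linalg.norm(y - D[:, mask] @ x[mask])
        if err < best_err:
            best, best_err = c, err
    return best

# Toy dictionary: two atoms per class, columns L2-normalised
D = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
D = D / np.linalg.norm(D, axis=0)
labels = ["water", "water", "urban", "urban"]
pred = src_label(D, labels, np.array([1.0, 0.05]))
```

In a retrieval setting the dictionary columns would be archive image descriptors, and the class residuals would be turned into a ranked list rather than a single hard label.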

    Contributions to the analysis and segmentation of remote sensing hyperspectral images

    142 p. This PhD thesis deals with the segmentation of hyperspectral images from the point of view of Lattice Computing. We have introduced the application of Associative Morphological Memories as a tool to detect strong lattice independence, which has been proven equivalent to affine independence. Therefore, sets of strongly lattice independent vectors found using our algorithms correspond to the vertices of convex sets that cover most of the data. Unmixing the data relative to these endmembers provides a collection of abundance images, which can be regarded either as unsupervised segmentations of the images or as features extracted from the hyperspectral image pixels. In addition, we have applied this feature extraction to propose a content-based image retrieval approach based on the spectral characterization of the image provided by the endmembers. Finally, we extended our ideas in a proposal of Morphological Cellular Automata whose dynamics are guided by the morphological/lattice independence properties of the image pixels. Our work has also explored the applicability of Evolution Strategies to endmember induction from hyperspectral image data.
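Given a set of endmembers, the unmixing step that yields the abundance images can be sketched as plain unconstrained least squares; the lattice-based endmember induction itself is not reproduced here, and the spectra below are made-up toy values:

```python
import numpy as np

def unmix(pixels, endmembers):
    """Least-squares abundances of each pixel with respect to the endmember spectra.

    pixels: (n_pixels, n_bands); endmembers: (n_endmembers, n_bands).
    Returns an (n_pixels, n_endmembers) abundance matrix; reshaping one column
    back to the image grid gives one abundance image.
    """
    A, *_ = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)
    return A.T

# Two hypothetical endmember spectra over 3 bands
E = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
# A pixel that is a 70/30 linear mixture of the two
px = np.array([[0.7, 0.3, 0.3]])
ab = unmix(px, E)
```

Fully constrained unmixing would additionally enforce non-negative abundances summing to one, which requires a constrained solver rather than plain `lstsq`.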

    Adversarial Virtual Exemplar Learning for Label-Frugal Satellite Image Change Detection

    Satellite image change detection aims at finding occurrences of targeted changes in a given scene taken at different instants. This task is highly challenging due to the acquisition conditions and also to the subjectivity of changes. In this paper, we investigate satellite image change detection using active learning. Our method is interactive and relies on a question-and-answer model that asks the oracle (user) questions about the most informative displays (dubbed virtual exemplars) and, according to the user's responses, updates the change detections. The main contribution of our method is a novel adversarial model that allows frugally probing the oracle with only the most representative, diverse and uncertain virtual exemplars. The latter are learned to maximally challenge the trained change decision criteria, which ultimately leads to a better re-estimation of these criteria in the following iterations of active learning. Conducted experiments show that our proposed adversarial display model outperforms other display strategies as well as the related work.
    Comment: arXiv admin note: substantial text overlap with arXiv:2203.1155
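As a generic illustration of frugal exemplar selection (not the paper's adversarial model, which learns the exemplars), a greedy heuristic trading off uncertainty against diversity might look like:

```python
import numpy as np

def select_exemplars(probs, feats, budget):
    """Greedily pick `budget` samples that are uncertain (high predictive
    entropy) and diverse (far, in feature space, from samples already picked)."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    chosen = [int(np.argmax(entropy))]          # start from the most uncertain
    while len(chosen) < budget:
        # distance of every sample to its nearest already-chosen exemplar
        d = np.min(
            [np.linalg.norm(feats - feats[c], axis=1) for c in chosen], axis=0)
        score = entropy * d                      # uncertainty x diversity
        score[chosen] = -np.inf                  # never re-pick a sample
        chosen.append(int(np.argmax(score)))
    return chosen

# Toy pool: change/no-change probabilities and 2-D feature vectors
probs = np.array([[0.5, 0.5], [0.9, 0.1], [0.55, 0.45], [0.1, 0.9]])
feats = np.array([[0.0, 0.0], [1.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
picked = select_exemplars(probs, feats, 2)
```

The selected indices would then be shown to the oracle, and the change-detection criteria retrained on the newly labeled samples before the next round.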