
    Few-shot Object Detection on Remote Sensing Images

    In this paper, we deal with the problem of object detection on remote sensing images. Previous work has developed numerous deep CNN-based methods for object detection on remote sensing images and reports remarkable achievements in detection performance and efficiency. However, current CNN-based methods mostly require a large number of annotated samples to train deep neural networks and tend to have limited generalization ability for unseen object categories. In this paper, we introduce a few-shot learning-based method for object detection on remote sensing images, where only a few annotated samples are provided for the unseen object categories. More specifically, our model contains three main components: a meta feature extractor that learns to extract feature representations from input images, a reweighting module that learns to adaptively assign different weights to each feature representation from the support images, and a bounding box prediction module that carries out object detection on the reweighted feature maps. We build our few-shot object detection model upon the YOLOv3 architecture and develop a multi-scale object detection framework. Experiments on two benchmark datasets demonstrate that, with only a few annotated samples, our model can still achieve satisfying detection performance on remote sensing images, significantly better than well-established baseline models.

    Comment: 12 pages, 7 figures
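The reweighting component described above reduces to a simple operation: a per-class weight vector derived from the support images rescales the query feature map channel-wise before detection. A minimal NumPy sketch of the idea, not the paper's implementation; all names and shapes here are assumptions:

```python
import numpy as np

def reweight_features(query_feats, support_vectors):
    """Channel-wise reweighting of a query feature map.

    query_feats: (C, H, W) map from a (hypothetical) meta feature extractor.
    support_vectors: (N, C) per-class weight vectors derived from support images.
    Returns one reweighted (C, H, W) map per novel class.
    """
    return [query_feats * w[:, None, None] for w in support_vectors]

# toy example: 4 channels, an 8x8 spatial grid, 2 novel classes
rng = np.random.default_rng(0)
query = rng.random((4, 8, 8))
weights = rng.random((2, 4))
maps = reweight_features(query, weights)
```

Each reweighted map is then passed to the detection head, so the same extracted features serve every novel class at the cost of one cheap multiplication per class.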

    Unlocking the capabilities of explainable fewshot learning in remote sensing

    Recent advancements have significantly improved the efficiency and effectiveness of deep learning methods for image-based remote sensing tasks. However, the requirement for large amounts of labeled data can limit the applicability of deep neural networks to existing remote sensing datasets. To overcome this challenge, few-shot learning has emerged as a valuable approach for enabling learning with limited data. While previous research has evaluated the effectiveness of few-shot learning methods on satellite-based datasets, little attention has been paid to exploring the applications of these methods to datasets obtained from UAVs, which are increasingly used in remote sensing studies. In this review, we provide an up-to-date overview of both existing and newly proposed few-shot classification techniques, along with appropriate datasets that are used for both satellite-based and UAV-based data. Our systematic approach demonstrates that few-shot learning can effectively adapt to the broader and more diverse perspectives that UAV-based platforms can provide. We also evaluate some state-of-the-art (SOTA) few-shot approaches on a UAV disaster scene classification dataset, yielding promising results. We emphasize the importance of integrating explainable AI (XAI) techniques, such as attention maps and prototype analysis, to increase the transparency, accountability, and trustworthiness of few-shot models for remote sensing. Key challenges and future research directions are identified, including tailored few-shot methods for UAVs, extension to unseen tasks such as segmentation, and development of optimized XAI techniques suited to few-shot remote sensing problems.
    This review aims to provide researchers and practitioners with an improved understanding of few-shot learning's capabilities and limitations in remote sensing, while highlighting open problems to guide future progress in efficient, reliable, and interpretable few-shot methods.

    Comment: Under review; once the paper is accepted, the copyright will be transferred to the corresponding journal
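One family of few-shot classifiers commonly covered in such reviews is the prototypical-network approach: each class is represented by the mean embedding of its support samples, and queries are assigned to the nearest prototype. The per-class distances double as a simple, inspectable explanation of each decision, which is one reason the prototype analysis mentioned above aids transparency. A minimal NumPy sketch (the embeddings and labels are toy values, not from any dataset):

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # class prototype = mean embedding of that class's support samples
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # assign each query to its nearest prototype (Euclidean distance);
    # the distance matrix itself is a simple explanation of the decision
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return d.argmin(axis=1), d

# toy 2-way 2-shot episode with 2-D embeddings
support = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, 2)
pred, dists = classify(np.array([[0.1, 0.1], [4.9, 5.1]]), protos)
```

In practice the embeddings come from a trained backbone; the episode structure and nearest-prototype rule are unchanged.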

    GeoAI-enhanced Techniques to Support Geographical Knowledge Discovery from Big Geospatial Data

    abstract: Big data that contain geo-referenced attributes have significantly reformed the way we process and analyze geospatial data. Compared with the expected benefits of a data-rich environment, more data have not always contributed to more accurate analysis. “Big but valueless” has become a critical concern for the community of GIScience and data-driven geography. As a highly utilized function of GeoAI techniques, deep learning models designed for processing geospatial data integrate powerful computing hardware and deep neural networks into various dimensions of geography to effectively discover representations of the data. However, limitations of these deep learning models have also been reported, as people may have to spend much time preparing training data to implement a deep learning model. The objective of this dissertation research is to promote state-of-the-art deep learning models in discovering the representation, value, and hidden knowledge of GIS and remote sensing data, through three research approaches. The first methodological framework aims to unify multifarious shadow shapes into a limited number of representative shadow patterns, using convolutional neural network (CNN)-powered shape classification, for efficient shadow-based building height estimation. The second research focus integrates semantic analysis into a framework of various state-of-the-art CNNs to support human-level understanding of map content. The final research approach of this dissertation focuses on normalizing geospatial domain knowledge to promote the transferability of a CNN model to land-use/land-cover classification. This research reports a method designed to discover detailed land-use/land-cover types that might be challenging for a state-of-the-art CNN model that previously performed well on land-cover classification only.

    Dissertation/Thesis; Doctoral Dissertation, Geography, 201

    Using CORONA imagery to study land use and land cover change : a review of applications

    CORONA spy satellites offer high-spatial-resolution imagery acquired in the 1960s and early 1970s and declassified in 1995; the imagery has been used in various scientific fields, such as archaeology, geomorphology, geology, and land change research. The images are panchromatic but, thanks to their high spatial resolution, contain many details of objects on the land surface. This systematic review aims to study the use of CORONA imagery in land use and land cover change (LULC) research. Based on a set of queries conducted on the SCOPUS database, we identified and examined 54 research papers using such data in their study of LULC. Our analysis considered case-study area distributions, LULC classes and LULC changes, as well as the methods and types of geospatial data used alongside CORONA data. While the use of CORONA images has increased over time, their potential has not been fully explored due to difficulties in processing CORONA images. In most cases, study areas are small, below 5,000 km², because of the reported drawbacks related to data acquisition frequency, data quality, and analysis. While CORONA imagery allows analyzing built-up areas, infrastructure, and individual buildings due to its high spatial resolution and initial mission design, in LULC studies researchers use the data mostly to study forests. In most case studies, CORONA imagery was used to extend the study period into the 1960s, with only some examples of using CORONA alongside older historical data. Our analysis shows that, in order to detect LULC changes, CORONA can be compared with various contemporary geospatial data, particularly high- and very-high-resolution satellite imagery, as well as aerial imagery.
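The change-detection workflow the review describes, comparing a map derived from CORONA with one from contemporary imagery, commonly reduces to cross-tabulating two co-registered classified rasters into a transition matrix. A minimal sketch, assuming both maps share the same integer class codes; the arrays here are toy values:

```python
import numpy as np

def transition_matrix(lulc_t1, lulc_t2, n_classes):
    """Cross-tabulate two co-registered LULC maps (e.g. one digitized from
    CORONA imagery, one from modern satellite imagery).
    Entry [i, j] counts pixels that changed from class i to class j."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(m, (lulc_t1.ravel(), lulc_t2.ravel()), 1)
    return m

# toy 3x3 maps with classes 0 = forest, 1 = cropland
t1 = np.array([[0, 0, 1], [0, 1, 1], [0, 0, 0]])
t2 = np.array([[0, 1, 1], [0, 1, 1], [1, 0, 0]])
m = transition_matrix(t1, t2, 2)
```

The diagonal of the matrix counts stable pixels, off-diagonal entries count specific class-to-class changes, and dividing rows by their sums gives per-class change rates.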

    UAV first view landmark localization with active reinforcement learning

    We present an active reinforcement learning framework for unmanned aerial vehicle (UAV) first-view landmark localization. We formulate the problem of landmark localization as a Markov decision process and introduce an active landmark-localization network (ALLNet) to address it. The aim of the ALLNet is to locate a bounding box that surrounds the landmark in a first-view image sequence. To this end, it is trained in a reinforcement learning fashion. Specifically, it employs support vector machine (SVM) scores on the bounding box patches as rewards and learns the bounding box transformations as actions. Furthermore, each SVM score indicates whether or not the landmark is detected by the bounding box, which enables the ALLNet to judge whether the landmark leaves or re-enters a first-view image. The operation of the ALLNet is therefore not only dominated by the reinforcement learning process but also supplemented in an active-learning-motivated manner. Once the landmark is considered to have left the first-view image, the ALLNet stops operating until the SVM detects its re-entry into the view. The active reinforcement learning model enables training a robust ALLNet for landmark localization. The experimental results validate the effectiveness of the proposed model for UAV first-view landmark localization.
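The action/reward loop described above can be illustrated with a greedy stand-in: actions translate or rescale the box, and a score function serves as the reward (here IoU against a known target replaces the learned SVM scores, and greedy hill-climbing replaces the learned policy). A self-contained sketch under those simplifying assumptions, not the ALLNet itself:

```python
# Each action transforms the box (x, y, w, h) by a fraction of its size.
ACTIONS = [(-0.1, 0, 0, 0), (0.1, 0, 0, 0),       # shift left / right
           (0, -0.1, 0, 0), (0, 0.1, 0, 0),       # shift up / down
           (0, 0, 0.1, 0.1), (0, 0, -0.1, -0.1)]  # grow / shrink

def apply_action(box, a):
    x, y, w, h = box
    dx, dy, dw, dh = a
    return (x + dx * w, y + dy * h, w * (1 + dw), h * (1 + dh))

def iou(b1, b2):
    # intersection-over-union of two (x, y, w, h) boxes
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2 = min(b1[0] + b1[2], b2[0] + b2[2])
    y2 = min(b1[1] + b1[3], b2[1] + b2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (b1[2] * b1[3] + b2[2] * b2[3] - inter)

def localize(box, score_fn, steps=50):
    """Greedily take the action whose resulting box scores highest;
    stop when no action improves the score."""
    for _ in range(steps):
        best = max((apply_action(box, a) for a in ACTIONS), key=score_fn)
        if score_fn(best) <= score_fn(box):
            break
        box = best
    return box

target = (30, 30, 20, 20)
start = (26, 30, 20, 20)
found = localize(start, lambda b: iou(b, target))
```

In the paper the score is an SVM trained on landmark patches and the action sequence is produced by the learned network, but the episode structure (transform, score, stop on a low score) is the same.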