Remote sensing image captioning with pre-trained transformer models
Remote sensing images, and the unique properties that characterize them, are attracting increased attention from computer vision researchers, largely due to their many possible applications. The area of computer vision for remote sensing has seen many recent advances, e.g. in tasks such as object detection or scene classification. Recent work in the area has also addressed the task of generating a natural language description of a given remote sensing image, effectively combining techniques from natural language processing and computer vision. Despite some previously published results, there are still many limitations and possibilities for improvement. It remains challenging to generate text that is fluent and linguistically rich while maintaining semantic consistency and good discrimination ability regarding the objects and visual patterns that should be described. The previous proposals that have come closest to achieving the goals of remote sensing image captioning use neural encoder-decoder architectures, often including specialized attention mechanisms to help the system integrate the most relevant visual features while generating the textual descriptions. Taking previous work into consideration, this work proposes a new approach for remote sensing image captioning, using an encoder-decoder model based on the Transformer architecture, in which both the encoder and the decoder build on components from a pre-existing model that was already trained on large amounts of data. Experiments were carried out on the three main datasets for assessing remote sensing image captioning methods, namely the Sydney-captions, UCM-captions, and RSICD datasets. The results show improvements over some previous proposals, although, particularly on the larger RSICD dataset, they still fall short of the current state-of-the-art methods. A careful analysis of the results also points to some limitations in the current evaluation methodology, which is mostly based on automated n-gram overlap metrics such as BLEU or ROUGE.
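For readers unfamiliar with the n-gram overlap metrics that the last sentence questions, the following minimal Python sketch shows how such a score is computed, using NLTK's sentence-level BLEU; the captions are made-up examples for illustration, not taken from the datasets above.

# Minimal sketch of n-gram overlap scoring for caption evaluation.
# The captions below are hypothetical examples, not dataset references.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "many buildings are around a large airport with several planes".split(),
    "an airport with planes parked near the terminal buildings".split(),
]
candidate = "several planes are parked near the airport terminal".split()

# BLEU-4 with smoothing so that short captions do not collapse to zero.
smooth = SmoothingFunction().method1
score = sentence_bleu(references, candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=smooth)
print(f"BLEU-4: {score:.3f}")

Metrics of this kind reward exact word overlap, which is why a fluent and semantically correct caption phrased differently from the references can still receive a low score.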
Automatic Caption Generation for Aerial Images: A Survey
Aerial images have attracted attention from the research community for a long time. Generating a caption for an aerial image that describes its content in a comprehensive way is a less-studied but important task, with applications in agriculture, defence, disaster management, and many other areas. Although different approaches have been followed for natural image caption generation, generating a caption for an aerial image remains a challenging task due to its special nature. The use of emerging techniques from the Artificial Intelligence (AI) and Natural Language Processing (NLP) domains has resulted in the generation of captions of acceptable quality for aerial images. However, much remains to be done to fully exploit the potential of the aerial image caption generation task. This paper presents a detailed survey of the various approaches followed by researchers for aerial image caption generation. The datasets available for experimentation, the criteria used for performance evaluation, and future directions are also discussed.
Learning to Evaluate Performance of Multi-modal Semantic Localization
Semantic localization (SeLo) refers to the task of obtaining the most
relevant locations in large-scale remote sensing (RS) images using semantic
information such as text. As an emerging task based on cross-modal retrieval,
SeLo achieves semantic-level retrieval with only caption-level annotation,
which demonstrates its great potential in unifying downstream tasks. Although
SeLo has been pursued in several recent works, no study has yet systematically
explored and analyzed this emerging direction. In this paper, we
thoroughly study this field and provide a complete benchmark in terms of
metrics and test data to advance the SeLo task. Firstly, based on the
characteristics of this task, we propose multiple discriminative evaluation
metrics to quantify the performance of the SeLo task. The devised significant
area proportion, attention shift distance, and discrete attention distance are
utilized to evaluate the generated SeLo map at the pixel and region levels.
Next, to provide standard evaluation data for the SeLo task, we contribute a
diverse, multi-semantic, multi-objective Semantic Localization Testset
(AIR-SLT). AIR-SLT consists of 22 large-scale RS images and 59 test cases with
different semantics, aiming to provide a comprehensive evaluation of
retrieval models. Finally, we analyze the SeLo performance of RS cross-modal
retrieval models in detail, explore the impact of different variables on this
task, and provide a complete benchmark for the SeLo task. We have also
established a new paradigm for RS referring expression comprehension, and
demonstrated the advantage of SeLo for semantic understanding by combining it with
tasks such as detection and road extraction. The proposed evaluation metrics,
semantic localization test sets, and corresponding scripts are openly available
at github.com/xiaoyuan1996/SemanticLocalizationMetrics.
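As a rough, simplified illustration of the pixel-level evaluation idea described above, the sketch below computes the fraction of a SeLo heat map's attention mass that falls inside an annotated relevant region. This is an assumed stand-in for illustration only; the exact definitions of significant area proportion, attention shift distance, and discrete attention distance are given in the paper and the released scripts.

# Illustrative sketch: share of a semantic-localization heat map's mass that
# falls inside the ground-truth relevant region (a simplified, assumed
# approximation of the significant-area idea, not the paper's formula).
import numpy as np

def attention_mass_in_region(selo_map: np.ndarray, region_mask: np.ndarray) -> float:
    """selo_map: non-negative relevance scores (H, W); region_mask: binary (H, W)."""
    total = selo_map.sum()
    if total == 0:
        return 0.0
    return float(selo_map[region_mask > 0].sum() / total)

# Toy example: most of the attention lies inside the annotated 2x2 region.
heat = np.array([[0.0, 0.1, 0.0, 0.0],
                 [0.0, 0.6, 0.2, 0.0],
                 [0.0, 0.1, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0]])
mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1
print(attention_mass_in_region(heat, mask))  # 0.9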
Changes to Captions: An Attentive Network for Remote Sensing Change Captioning
In recent years, advanced research has focused on the direct learning and
analysis of remote sensing images using natural language processing (NLP)
techniques. The ability to accurately describe changes occurring in
multi-temporal remote sensing images is becoming increasingly important for
geospatial understanding and land planning. Unlike natural image change
captioning tasks, remote sensing change captioning aims to capture the most
significant changes, irrespective of various influential factors such as
illumination, seasonal effects, and complex land covers. In this study, we
highlight the significance of accurately describing changes in remote sensing
images, and compare the change captioning task on natural and synthetic images
with that on remote sensing images. To address the challenge of
generating accurate captions, we propose an attentive changes-to-captions
network, called Chg2Cap for short, for bi-temporal remote sensing images. The
network comprises three main components: 1) a Siamese CNN-based feature
extractor to collect high-level representations for each image pair; 2) an
attentive decoder that includes a hierarchical self-attention block to locate
change-related features and a residual block to generate the image embedding;
and 3) a transformer-based caption generator to decode the relationship between
the image embedding and the word embedding into a description. The proposed
Chg2Cap network is evaluated on two representative remote sensing datasets, and
a comprehensive experimental analysis is provided. The code and pre-trained
models will be available online at https://github.com/ShizhenChang/Chg2Cap.
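The first component, the Siamese CNN-based feature extractor, can be pictured as a single shared backbone applied to both images of a bi-temporal pair. The sketch below is a minimal structural illustration under assumed choices (a ResNet-18 backbone and simple channel concatenation); it is not the released Chg2Cap implementation.

# Minimal sketch of a Siamese CNN feature extractor for a bi-temporal pair
# (assumed ResNet-18 backbone and concatenation fusion; not the Chg2Cap code).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SiameseExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # Drop global pooling and the classifier; keep spatial feature maps.
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])

    def forward(self, img_t1, img_t2):
        # Shared weights: the same backbone encodes both acquisition times.
        f1 = self.backbone(img_t1)          # (B, 512, H/32, W/32)
        f2 = self.backbone(img_t2)
        return torch.cat([f1, f2], dim=1)   # simple fusion for the decoder

pair = torch.randn(2, 3, 256, 256), torch.randn(2, 3, 256, 256)
print(SiameseExtractor()(*pair).shape)      # torch.Size([2, 1024, 8, 8])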
RS5M: A Large Scale Vision-Language Dataset for Remote Sensing Vision-Language Foundation Model
Pre-trained Vision-Language Foundation Models utilizing extensive image-text
paired data have demonstrated unprecedented image-text association
capabilities, achieving remarkable results across various downstream tasks. A
critical challenge is how to make use of existing large-scale pre-trained VLMs,
which are trained on common objects, to perform the domain-specific transfer
for accomplishing domain-related downstream tasks. In this paper, we propose a
new framework that includes the Domain Foundation Model (DFM), bridging the gap
between the General Foundation Model (GFM) and domain-specific downstream
tasks. Moreover, we present an image-text paired dataset in the field of remote
sensing (RS), RS5M, which has 5 million RS images with English descriptions.
The dataset is obtained from filtering publicly available image-text paired
datasets and captioning label-only RS datasets with a pre-trained VLM. The result
is the first large-scale RS image-text paired dataset. Additionally, we
tried several Parameter-Efficient Fine-Tuning methods on RS5M to implement the
DFM. Experimental results show that our proposed dataset is highly effective
for various tasks, improving upon the baseline in zero-shot classification
tasks, and obtaining good results in both
Vision-Language Retrieval and Semantic Localization tasks.
The dataset is available at https://github.com/om-ai-lab/RS5M.
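One way to picture the filtering step mentioned above (keeping only image-text pairs whose content actually matches) is a similarity threshold computed with a pre-trained VLM. The sketch below uses the public openai/clip-vit-base-patch32 checkpoint and an arbitrary threshold purely for illustration; it is not the actual RS5M filtering pipeline.

# Illustrative sketch of similarity-based filtering of image-text pairs with a
# pre-trained VLM (CLIP). Model choice and threshold are assumptions; the real
# RS5M filtering pipeline is described in the paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def keep_pair(image: Image.Image, caption: str, threshold: float = 0.25) -> bool:
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
        # Normalize the projected embeddings and compare by cosine similarity.
        img_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
        txt_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float(img_emb @ txt_emb.T) >= threshold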
RSVG: Exploring Data and Models for Visual Grounding on Remote Sensing Data
In this paper, we introduce the task of visual grounding for remote sensing
data (RSVG). RSVG aims to localize the referred objects in remote sensing (RS)
images with the guidance of natural language. To retrieve rich information from
RS imagery using natural language, many research tasks, like RS image visual
question answering, RS image captioning, and RS image-text retrieval, have been
extensively investigated. However, object-level visual grounding on RS images is
still under-explored. Thus, in this work, we propose to construct the dataset
and explore deep learning models for the RSVG task. Specifically, our
contributions can be summarized as follows. 1) We build the new large-scale
benchmark dataset of RSVG, termed RSVGD, to fully advance the research of RSVG.
This new dataset includes image/expression/box triplets for training and
evaluating visual grounding models. 2) We benchmark extensive state-of-the-art
(SOTA) natural image visual grounding methods on the constructed RSVGD dataset,
and some insightful analyses are provided based on the results. 3) A novel
transformer-based Multi-Level Cross-Modal feature learning (MLCM) module is
proposed. Remotely sensed images usually exhibit large scale variations and
cluttered backgrounds. To deal with the scale-variation problem, the MLCM
module takes advantage of multi-scale visual features and multi-granularity
textual embeddings to learn more discriminative representations. To cope with
the cluttered background problem, MLCM adaptively filters irrelevant noise and
enhances salient features. In this way, our proposed model can incorporate more
effective multi-level and multi-modal features to boost performance.
Furthermore, this work also provides useful insights for developing better RSVG
models. The dataset and code will be publicly available at
https://github.com/ZhanYang-nwpu/RSVG-pytorch.
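As a structural illustration of the cross-modal idea behind MLCM (textual embeddings attending over visual features drawn from multiple scales), the sketch below fuses two visual feature levels with word-level text embeddings through standard multi-head attention. The dimensions and fusion choices are assumptions for illustration, not the code released with the benchmark.

# Minimal sketch of cross-modal fusion: word embeddings attend over visual
# tokens pooled from multiple feature levels (assumed dimensions; not the
# MLCM module from the RSVG benchmark).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_tokens, visual_levels):
        # Flatten each (B, C, H, W) level into visual tokens and concatenate.
        tokens = [lvl.flatten(2).transpose(1, 2) for lvl in visual_levels]
        visual_tokens = torch.cat(tokens, dim=1)        # (B, sum(H*W), C)
        fused, _ = self.attn(query=text_tokens,         # (B, L, C)
                             key=visual_tokens, value=visual_tokens)
        return fused                                    # text enriched with visual context

text = torch.randn(2, 12, 256)                          # 12 query words
levels = [torch.randn(2, 256, 32, 32), torch.randn(2, 256, 16, 16)]
print(CrossModalFusion()(text, levels).shape)           # torch.Size([2, 12, 256])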