Towards Automatic SAR-Optical Stereogrammetry over Urban Areas using Very High Resolution Imagery
In this paper we discuss the potential and challenges regarding SAR-optical
stereogrammetry for urban areas, using very-high-resolution (VHR) remote
sensing imagery. Since we do this mainly from a geometrical point of view, we
first analyze the height reconstruction accuracy to be expected for different
stereogrammetric configurations. Then, we propose a strategy for simultaneous
tie point matching and 3D reconstruction, which exploits an epipolar-like
search window constraint. To drive the matching and ensure some robustness, we
combine different established handcrafted similarity measures. For the
experiments, we use real test data acquired by the Worldview-2, TerraSAR-X and
MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR
imagery is generally feasible with 3D positioning accuracies in the
meter-domain, although the matching of these strongly heterogeneous
multi-sensor data remains very challenging.
Keywords: Synthetic Aperture Radar (SAR), optical images, remote sensing, data
fusion, stereogrammetry
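To make the tie-point matching idea concrete, the following is a minimal sketch of scoring candidate positions along an epipolar-like search window by combining two handcrafted similarity measures (normalized cross-correlation and histogram-based mutual information). The specific measures, weights, and window size are illustrative assumptions, not the exact combination used in the paper.

```python
# Minimal sketch (not the authors' exact pipeline): score candidate SAR patches
# along an epipolar-like search window against an optical template by combining
# two handcrafted similarity measures (NCC and histogram-based mutual information).
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized patches."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Histogram-based mutual information, more robust to the very different
    SAR and optical radiometry than correlation alone."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

def match_along_search_window(optical_patch, sar_band, radius=50, w_ncc=0.5, w_mi=0.5):
    """Slide the optical template along a 1-D, epipolar-like search window in a
    SAR image band and return the offset with the highest combined score."""
    h, w = optical_patch.shape
    scores = []
    for dx in range(-radius, radius + 1):
        x0 = radius + dx
        candidate = sar_band[:, x0:x0 + w]
        scores.append(w_ncc * ncc(optical_patch, candidate)
                      + w_mi * mutual_information(optical_patch, candidate))
    best = int(np.argmax(scores))
    return best - radius, scores[best]

# Toy usage with random arrays standing in for co-registered VHR patches.
rng = np.random.default_rng(0)
opt = rng.random((64, 64))
sar_band = rng.random((64, 64 + 2 * 50))
offset, score = match_along_search_window(opt, sar_band)
print(f"best offset: {offset}px, combined score: {score:.3f}")
```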
Multi-level Feature Fusion-based CNN for Local Climate Zone Classification from Sentinel-2 Images: Benchmark Results on the So2Sat LCZ42 Dataset
As a unique classification scheme for urban forms and functions, the local
climate zone (LCZ) system provides essential general information for any
studies related to urban environments, especially on a large scale. Remote
sensing data-based classification approaches are the key to large-scale mapping
and monitoring of LCZs. The potential of deep learning-based approaches is not
yet fully explored, even though advanced convolutional neural networks (CNNs)
continue to push the frontiers for various computer vision tasks. One reason is
that published studies are based on different datasets, usually at a regional
scale, which makes it impossible to fairly and consistently compare the
potential of different CNNs for real-world scenarios. This study is based on
the big So2Sat LCZ42 benchmark dataset dedicated to LCZ classification. Using
this dataset, we studied a range of CNNs of varying sizes. In addition, we
proposed a CNN to classify LCZs from Sentinel-2 images, Sen2LCZ-Net. Using this
base network, we propose fusing multi-level features using the extended
Sen2LCZ-Net-MF. With this proposed simple network architecture and the highly
competitive benchmark dataset, we obtain results that are better than those
obtained by the state-of-the-art CNNs, while requiring less computation with
fewer layers and parameters. Large-scale LCZ classification examples of
completely unseen areas are presented, demonstrating the potential of our
proposed Sen2LCZ-Net-MF as well as the So2Sat LCZ42 dataset. We also
intensively investigated the influence of network depth and width and the
effectiveness of the design choices made for Sen2LCZ-Net-MF. Our work will
provide important baselines for future CNN-based algorithm developments for
both LCZ classification and other urban land cover land use classification.
Code and pretrained models are available at
https://github.com/ChunpingQiu/benchmark-on-So2SatLCZ42-dataset-a-simple-tour
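As an illustration of the multi-level feature fusion idea, the sketch below globally pools the intermediate feature maps of a small CNN and concatenates them before the classifier. The block widths and depths are illustrative and do not reproduce the published Sen2LCZ-Net-MF architecture; the 10-band, 32 x 32 Sentinel-2 patches and 17 LCZ classes are assumed here, consistent with the So2Sat LCZ42 setting.

```python
# Minimal PyTorch sketch of multi-level feature fusion for LCZ classification:
# intermediate feature maps are globally pooled and concatenated before the
# classifier. Block widths/depths are illustrative, not the published
# Sen2LCZ-Net-MF configuration.
import torch
import torch.nn as nn

class MultiLevelFusionCNN(nn.Module):
    def __init__(self, in_channels: int = 10, num_classes: int = 17):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )
        self.stage1 = block(in_channels, 32)
        self.stage2 = block(32, 64)
        self.stage3 = block(64, 128)
        self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling per stage
        self.classifier = nn.Linear(32 + 64 + 128, num_classes)

    def forward(self, x):
        f1 = self.stage1(x)                          # low-level features
        f2 = self.stage2(f1)                         # mid-level features
        f3 = self.stage3(f2)                         # high-level features
        fused = torch.cat([self.pool(f).flatten(1) for f in (f1, f2, f3)], dim=1)
        return self.classifier(fused)

# Toy forward pass on a batch of 32x32 Sentinel-2 patches (10 bands).
model = MultiLevelFusionCNN()
logits = model(torch.randn(4, 10, 32, 32))
print(logits.shape)  # torch.Size([4, 17])
```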
SEN12MS -- A Curated Dataset of Georeferenced Multi-Spectral Sentinel-1/2 Imagery for Deep Learning and Data Fusion
The availability of curated large-scale training data is a crucial factor for
the development of well-generalizing deep learning methods for the extraction
of geoinformation from multi-sensor remote sensing imagery. While a number of
datasets have already been published by the community, most of them suffer from
rather severe limitations, e.g. regarding spatial coverage, diversity, or simply
the number of available samples. Exploiting the freely available data acquired by
the Sentinel satellites of the Copernicus program implemented by the European
Space Agency, as well as the cloud computing facilities of Google Earth Engine,
we provide a dataset consisting of 180,662 triplets of dual-pol synthetic
aperture radar (SAR) image patches, multi-spectral Sentinel-2 image patches,
and MODIS land cover maps. With all patches being fully georeferenced at a 10 m
ground sampling distance and covering all inhabited continents during all
meteorological seasons, we expect the dataset to support the community in
developing sophisticated deep learning-based approaches for common tasks such
as scene classification or semantic segmentation for land cover mapping.
Comment: accepted for publication in the ISPRS Annals of the Photogrammetry,
Remote Sensing and Spatial Information Sciences (online from September 2019).
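A minimal sketch of how such triplets could be consumed in a deep learning pipeline is given below, assuming the SAR, multi-spectral, and land cover patches are stored as co-registered GeoTIFFs. The directory layout and the shared file naming are simplifying assumptions rather than the actual SEN12MS distribution format.

```python
# Minimal sketch of a PyTorch dataset over SEN12MS-style triplets
# (dual-pol Sentinel-1 SAR, multi-spectral Sentinel-2, land cover map).
# Paths and file naming are illustrative assumptions.
from pathlib import Path
import numpy as np
import rasterio
import torch
from torch.utils.data import Dataset

class Sen12MSTriplets(Dataset):
    def __init__(self, s1_dir: str, s2_dir: str, lc_dir: str):
        self.s1_paths = sorted(Path(s1_dir).glob("*.tif"))
        self.s2_dir, self.lc_dir = Path(s2_dir), Path(lc_dir)

    def __len__(self):
        return len(self.s1_paths)

    def _read(self, path: Path) -> np.ndarray:
        with rasterio.open(path) as src:
            return src.read().astype(np.float32)   # (bands, H, W)

    def __getitem__(self, idx):
        s1_path = self.s1_paths[idx]
        s1 = self._read(s1_path)                         # dual-pol SAR bands
        s2 = self._read(self.s2_dir / s1_path.name)      # multi-spectral bands
        lc = self._read(self.lc_dir / s1_path.name)[0]   # land cover map
        return (torch.from_numpy(s1), torch.from_numpy(s2),
                torch.from_numpy(lc).long())
```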
MVP: Meta Visual Prompt Tuning for Few-Shot Remote Sensing Image Scene Classification
Vision Transformer (ViT) models have recently emerged as powerful and
versatile models for various visual tasks. A recent method, PMF, has
achieved promising results in few-shot image classification by utilizing
pre-trained vision transformer models. However, PMF employs full fine-tuning
for learning the downstream tasks, leading to significant overfitting and
storage issues, especially in the remote sensing domain. In order to tackle
these issues, we turn to the recently proposed parameter-efficient tuning
methods, such as VPT, which updates only the newly added prompt parameters
while keeping the pre-trained backbone frozen. Inspired by VPT, we propose the
Meta Visual Prompt Tuning (MVP) method. Specifically, we integrate the VPT
method into the meta-learning framework and tailor it to the remote sensing
domain, resulting in an efficient framework for Few-Shot Remote Sensing Scene
Classification (FS-RSSC). Furthermore, we introduce a novel data augmentation
strategy based on patch embedding recombination to enhance the representation
and diversity of scenes for classification purposes. Experimental results on the
FS-RSSC benchmark demonstrate the superior performance of the proposed MVP over
existing methods in various settings, such as various-way-various-shot,
various-way-one-shot, and cross-domain adaptation.
Comment: Submitted to IEEE Transactions
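To illustrate the patch embedding recombination idea, the sketch below swaps a random subset of ViT patch embeddings between two scenes. The swap ratio, pairing strategy, and function name are illustrative assumptions, not the exact MVP augmentation.

```python
# Minimal sketch of a patch-embedding recombination augmentation: a random
# subset of ViT patch embeddings from one sample is replaced by the embeddings
# of another sample (e.g., from the same class). The ratio and pairing strategy
# are illustrative, not the exact MVP procedure.
import torch

def recombine_patch_embeddings(emb_a: torch.Tensor,
                               emb_b: torch.Tensor,
                               swap_ratio: float = 0.3) -> torch.Tensor:
    """emb_a, emb_b: (num_patches, dim) patch embeddings of two scenes.
    Returns a new sequence in which a random subset of the patches of
    emb_a is replaced by the corresponding patches of emb_b."""
    num_patches = emb_a.shape[0]
    num_swap = int(swap_ratio * num_patches)
    idx = torch.randperm(num_patches)[:num_swap]
    mixed = emb_a.clone()
    mixed[idx] = emb_b[idx]
    return mixed

# Toy usage: 196 patches (14x14 grid) with 768-dim embeddings, as in ViT-B/16.
a, b = torch.randn(196, 768), torch.randn(196, 768)
augmented = recombine_patch_embeddings(a, b)
print(augmented.shape)  # torch.Size([196, 768])
```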
Refined Equivalent Pinhole Model for Large-scale 3D Reconstruction from Spaceborne CCD Imagery
In this study, we present a large-scale earth surface reconstruction pipeline
for linear-array charge-coupled device (CCD) satellite imagery. While
mainstream satellite image-based reconstruction approaches perform
exceptionally well, the rational functional model (RFM) is subject to several
limitations. For example, the RFM has no rigorous physical interpretation and
differs significantly from the pinhole imaging model; hence, it cannot be
directly applied to learning-based 3D reconstruction networks and to more novel
reconstruction pipelines in computer vision. Therefore, in this study, we introduce
a method that converts the RFM into an equivalent pinhole camera model (PCM),
meaning that the internal and external parameters of the pinhole camera are
used instead of the rational polynomial coefficient parameters. We then derive
an error formula for this equivalent pinhole model for the first time,
demonstrating the influence of the image size on the accuracy of the
reconstruction. In addition, we propose a polynomial image refinement model
that minimizes equivalent errors via the least squares method. The experiments
were conducted using four image datasets: WHU-TLC, DFC2019, ISPRS-ZY3, and GF7.
The results demonstrated that the reconstruction accuracy was proportional to
the image size. Our polynomial image refinement model significantly enhanced
the accuracy and completeness of the reconstruction, and achieved more
significant improvements for larger-scale images.
Comment: 24 pages
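The least-squares fitting step can be illustrated with a DLT-style solve that fits a pinhole projection matrix to correspondences generated by an RPC/RFM projection. This is only a sketch of the general idea, not the paper's equivalent-model derivation, error formula, or polynomial refinement.

```python
# Minimal sketch of fitting an equivalent pinhole projection matrix to
# (ground point, image point) correspondences generated by an RPC/RFM
# projection, via a DLT-style linear least squares solve.
import numpy as np

def fit_pinhole_dlt(points_3d: np.ndarray, points_2d: np.ndarray) -> np.ndarray:
    """points_3d: (N, 3) ground coordinates; points_2d: (N, 2) RFM-projected pixels.
    Returns the 3x4 projection matrix P minimizing the algebraic reprojection error."""
    n = points_3d.shape[0]
    A = np.zeros((2 * n, 12))
    for i, ((x, y, z), (u, v)) in enumerate(zip(points_3d, points_2d)):
        X = np.array([x, y, z, 1.0])
        A[2 * i, 0:4] = X
        A[2 * i, 8:12] = -u * X
        A[2 * i + 1, 4:8] = X
        A[2 * i + 1, 8:12] = -v * X
    _, _, vt = np.linalg.svd(A)        # least-squares solution is the right
    return vt[-1].reshape(3, 4)        # singular vector of the smallest singular value

# Toy check: recover a known projection matrix from synthetic correspondences.
rng = np.random.default_rng(1)
P_true = rng.random((3, 4))
pts3d = rng.random((50, 3)) * 100
homog = np.hstack([pts3d, np.ones((50, 1))]) @ P_true.T
pts2d = homog[:, :2] / homog[:, 2:3]
P_est = fit_pinhole_dlt(pts3d, pts2d)
homog_est = np.hstack([pts3d, np.ones((50, 1))]) @ P_est.T
reproj = homog_est[:, :2] / homog_est[:, 2:3]
print("max reprojection error (px):", np.abs(reproj - pts2d).max())
```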
Fractalkine/CX3CR1 Contributes to Endometriosis-Induced Neuropathic Pain and Mechanical Hypersensitivity in Rats
Pain is the most severe and common symptom of endometriosis. Its underlying pathogenetic mechanism is poorly understood. Nerve sensitization is a particular research challenge, due to the limitations of general endometriosis models and sampling nerve tissue from patients. The chemokine fractalkine (FKN) has been demonstrated to play a key role in various forms of neuropathic pain, while its role in endometriotic pain is unknown. Our study was designed to explore the function of FKN in the development and maintenance of peripheral hyperalgesia and central sensitization in endometriosis using a novel endometriosis animal model developed in our laboratory. After modeling, behavioral tests were carried out and the optimal time for molecular changes was obtained. We extracted ectopic tissues and L4–6 spinal cords to detect peripheral and central roles for FKN, respectively. To assess morphologic characteristics of endometriosis-like lesions—as well as expression and location of FKN/CX3CR1—we performed H&E staining, immunostaining, and western blotting analyses. Furthermore, inhibition of FKN expression in the spinal cord was achieved by intrathecal administration of an FKN-neutralizing antibody to demonstrate its function. Our results showed that implanted autologous uterine tissue around the sciatic nerve induced endometriosis-like lesions and produced mechanical hyperalgesia and allodynia. FKN was highly expressed on macrophages, whereas its receptor CX3CR1 was overexpressed in the myelin sheath of sciatic nerve fibers. Overexpressed FKN was also observed in neurons. CX3CR1/pp38-MAPK was upregulated in activated microglia in the spinal dorsal horn. Intrathecal administration of FKN-neutralizing antibody not only reversed the established mechanical hyperalgesia and allodynia, but also inhibited the expression of CX3CR1/pp38-MAPK in activated microglia, which was essential for the persistence of central sensitization. We concluded that the FKN/CX3CR1 signaling pathway might be one of the mechanisms of peripheral hyperalgesia in endometriosis, which requires further studies. Spinal FKN is important for the development and maintenance of central sensitization in endometriosis, and it may further serve as a novel therapeutic target to relieve persistent pain associated with endometriosis
Sea Ice Extraction via Remote Sensed Imagery: Algorithms, Datasets, Applications and Challenges
Deep learning, a dominant technique in artificial intelligence, has profoundly
changed image understanding over the past decade. As a consequence, the sea ice
extraction (SIE) problem has entered a new era. We present a comprehensive
review of four important aspects of SIE: algorithms, datasets, applications,
and future trends. Our review covers research published from 2016 to the
present, with a specific focus on deep learning-based approaches from the last
five years. We divide the relevant algorithms into three categories: classical
image segmentation approaches, machine learning-based approaches, and deep
learning-based methods. We review the accessible sea ice datasets, including
SAR-based datasets, optical-based datasets, and others. The applications are
presented in four areas: climate research, navigation, geographic information
system (GIS) production, and others. The review also provides insightful
observations and inspiring future research directions.
Comment: 24 pages, 6 figures
Mapping horizontal and vertical urban densification in Denmark with Landsat time-series from 1985 to 2018: a semantic segmentation solution
Landsat imagery is an unparalleled freely available data source that allows
reconstructing horizontal and vertical urban form. This paper addresses the
challenge of using Landsat data, particularly its 30m spatial resolution, for
monitoring three-dimensional urban densification. We compare temporal and
spatial transferability of an adapted DeepLab model with a simple fully
convolutional network (FCN) and a texture-based random forest (RF) model to map
urban density in the two morphological dimensions: horizontal (compact, open,
sparse) and vertical (high rise, low rise). We test whether a model trained on
the 2014 data can be applied to 2006 and 1995 for Denmark, and examine whether
we could use the model trained on the Danish data to accurately map other
European cities. Our results show that an implementation of deep networks and
the inclusion of multi-scale contextual information greatly improve the
classification and the model's ability to generalize across space and time.
DeepLab provides more accurate horizontal and vertical classifications than FCN
when sufficient training data is available. By using DeepLab, the F1 score can
be increased by 4 and 10 percentage points for detecting vertical urban growth
compared to FCN and RF for Denmark. For mapping the other European cities with
training data from Denmark, DeepLab also shows an advantage of 6 percentage
points over RF for both dimensions. The resulting maps across the years
1985 to 2018 reveal different patterns of urban growth between Copenhagen and
Aarhus, the two largest cities in Denmark, illustrating that those cities have
pursued different planning policies to address population growth and housing
supply challenges. In summary, we propose a transferable deep learning approach
for automated, long-term mapping of urban form from Landsat images.
Comment: Accepted manuscript including appendix (supplementary file)
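For reference, the percentage-point comparisons above rest on per-class F1 scores of the predicted density maps. The sketch below shows a generic pixel-wise computation with an illustrative class scheme; it is not the authors' evaluation script.

```python
# Minimal sketch of per-class F1 evaluation for semantic segmentation maps,
# of the kind behind the reported percentage-point gains. The class scheme in
# the toy usage is illustrative.
import numpy as np

def per_class_f1(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int) -> np.ndarray:
    """Pixel-wise F1 score for each class of a segmentation map."""
    f1 = np.zeros(num_classes)
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        f1[c] = 2 * tp / denom if denom > 0 else 0.0
    return f1

# Toy usage on random "vertical density" maps with classes
# {0: non-built, 1: low rise, 2: high rise}.
rng = np.random.default_rng(0)
reference = rng.integers(0, 3, size=(512, 512))
prediction = np.where(rng.random((512, 512)) < 0.8,
                      reference, rng.integers(0, 3, size=(512, 512)))
print(per_class_f1(reference, prediction, num_classes=3))
```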