Towards Automatic SAR-Optical Stereogrammetry over Urban Areas using Very High Resolution Imagery
In this paper we discuss the potential and challenges regarding SAR-optical
stereogrammetry for urban areas, using very-high-resolution (VHR) remote
sensing imagery. Since we do this mainly from a geometrical point of view, we
first analyze the height reconstruction accuracy to be expected for different
stereogrammetric configurations. Then, we propose a strategy for simultaneous
tie point matching and 3D reconstruction, which exploits an epipolar-like
search window constraint. To drive the matching and ensure some robustness, we
combine different established handcrafted similarity measures. For the
experiments, we use real test data acquired by the WorldView-2, TerraSAR-X, and
MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR
imagery is generally feasible, with 3D positioning accuracies in the
meter domain, although the matching of these strongly heterogeneous
multi-sensor data remains very challenging. Keywords: Synthetic Aperture Radar
(SAR), optical images, remote sensing, data fusion, stereogrammetry
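The idea of combining several established handcrafted similarity measures to drive the matching can be sketched as follows. The specific measures and weights here are illustrative assumptions, not the ones used in the paper: `zncc` is plain zero-mean normalized cross-correlation, and the rank-based second term is a Spearman-like variant that is more tolerant of the radiometric differences between SAR and optical patches.

```python
import math

def zncc(a, b):
    # Zero-mean normalized cross-correlation between two flattened,
    # equally sized image patches; returns a value in [-1, 1].
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def combined_score(a, b, weights=(0.5, 0.5)):
    # Hypothetical fusion of two handcrafted measures: plain ZNCC and a
    # rank-based (Spearman-like) variant computed on intensity ranks.
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for i, idx in enumerate(order):
            r[idx] = i
        return r
    return weights[0] * zncc(a, b) + weights[1] * zncc(ranks(a), ranks(b))
```

In a stereogrammetric matcher, `combined_score` would be evaluated at each candidate position inside the epipolar-like search window, and the maximum taken as the tie point.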
Multi-level Feature Fusion-based CNN for Local Climate Zone Classification from Sentinel-2 Images: Benchmark Results on the So2Sat LCZ42 Dataset
As a unique classification scheme for urban forms and functions, the local
climate zone (LCZ) system provides essential general information for any
studies related to urban environments, especially on a large scale. Remote
sensing data-based classification approaches are the key to large-scale mapping
and monitoring of LCZs. The potential of deep learning-based approaches is not
yet fully explored, even though advanced convolutional neural networks (CNNs)
continue to push the frontiers for various computer vision tasks. One reason is
that published studies are based on different datasets, usually at a regional
scale, which makes it impossible to fairly and consistently compare the
potential of different CNNs for real-world scenarios. This study is based on
the big So2Sat LCZ42 benchmark dataset dedicated to LCZ classification. Using
this dataset, we studied a range of CNNs of varying sizes. In addition, we
propose a CNN to classify LCZs from Sentinel-2 images, Sen2LCZ-Net. Using this
base network, we further propose fusing multi-level features via the extended
Sen2LCZ-Net-MF. With this proposed simple network architecture and the highly
competitive benchmark dataset, we obtain results that are better than those
obtained by the state-of-the-art CNNs, while requiring less computation with
fewer layers and parameters. Large-scale LCZ classification examples of
completely unseen areas are presented, demonstrating the potential of our
proposed Sen2LCZ-Net-MF as well as the So2Sat LCZ42 dataset. We also
intensively investigated the influence of network depth and width and the
effectiveness of the design choices made for Sen2LCZ-Net-MF. Our work will
provide important baselines for future CNN-based algorithm developments for
both LCZ classification and other urban land cover and land use classification. Code and pretrained models are available at https://github.com/ChunpingQiu/benchmark-on-So2SatLCZ42-dataset-a-simple-tour
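The multi-level feature fusion idea (pooling feature maps from several network stages and concatenating them ahead of a single classifier) can be illustrated with a small numpy sketch. The stage shapes, the 17-class output, and the names `gap` and `fuse_and_classify` are assumptions for illustration, not the Sen2LCZ-Net-MF implementation:

```python
import numpy as np

def gap(feat):
    # Global average pooling: reduce an (H, W, C) feature map to (C,).
    return feat.mean(axis=(0, 1))

def fuse_and_classify(stage_features, W, b):
    # Multi-level fusion: pool each stage's feature map, concatenate the
    # pooled vectors, and apply one linear classifier over the LCZ classes.
    fused = np.concatenate([gap(f) for f in stage_features])
    return int(np.argmax(fused @ W + b))
```

Because each stage contributes only a C-dimensional pooled vector, fusing multiple levels adds very few parameters, consistent with the abstract's point about lower computational cost.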
SEN12MS -- A Curated Dataset of Georeferenced Multi-Spectral Sentinel-1/2 Imagery for Deep Learning and Data Fusion
The availability of curated large-scale training data is a crucial factor for
the development of well-generalizing deep learning methods for the extraction
of geoinformation from multi-sensor remote sensing imagery. While quite a few
datasets have already been published by the community, most of them suffer from
rather strong limitations, e.g. regarding spatial coverage, diversity, or simply
the number of available samples. Exploiting the freely available data acquired by
the Sentinel satellites of the Copernicus program implemented by the European
Space Agency, as well as the cloud computing facilities of Google Earth Engine,
we provide a dataset consisting of 180,662 triplets of dual-pol synthetic
aperture radar (SAR) image patches, multi-spectral Sentinel-2 image patches,
and MODIS land cover maps. With all patches being fully georeferenced at a 10 m
ground sampling distance and covering all inhabited continents during all
meteorological seasons, we expect the dataset to support the community in
developing sophisticated deep learning-based approaches for common tasks such
as scene classification or semantic segmentation for land cover mapping. Comment: accepted for publication in the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (online from September 2019).
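A minimal container for one SEN12MS-style triplet might look as follows; the class name and the per-sensor channel counts are illustrative assumptions. The key property from the abstract — that all three rasters are co-registered on the same 10 m grid — is enforced by the shape check:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PatchTriplet:
    # One SEN12MS-style sample; shapes are illustrative, e.g. 256x256 patches.
    s1: np.ndarray   # dual-pol SAR patch, e.g. (256, 256, 2)
    s2: np.ndarray   # multi-spectral Sentinel-2 patch, e.g. (256, 256, 13)
    lc: np.ndarray   # MODIS-derived land cover map, e.g. (256, 256)

    def __post_init__(self):
        # All patches share the same georeferenced 10 m grid,
        # so their spatial dimensions must agree.
        if not (self.s1.shape[:2] == self.s2.shape[:2] == self.lc.shape[:2]):
            raise ValueError("patch triplet is not co-registered")
```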
Mapping horizontal and vertical urban densification in Denmark with Landsat time-series from 1985 to 2018: a semantic segmentation solution
Landsat imagery is an unparalleled freely available data source that allows
reconstructing horizontal and vertical urban form. This paper addresses the
challenge of using Landsat data, particularly its 30 m spatial resolution, for
monitoring three-dimensional urban densification. We compare temporal and
spatial transferability of an adapted DeepLab model with a simple fully
convolutional network (FCN) and a texture-based random forest (RF) model to map
urban density in the two morphological dimensions: horizontal (compact, open,
sparse) and vertical (high rise, low rise). We test whether a model trained on
the 2014 data can be applied to 2006 and 1995 for Denmark, and examine whether
we could use the model trained on the Danish data to accurately map other
European cities. Our results show that an implementation of deep networks and
the inclusion of multi-scale contextual information greatly improve the
classification and the model's ability to generalize across space and time.
DeepLab provides more accurate horizontal and vertical classifications than FCN
when sufficient training data is available. By using DeepLab, the F1 score can
be increased by 4 and 10 percentage points for detecting vertical urban growth
compared to FCN and RF for Denmark. For mapping the other European cities with
training data from Denmark, DeepLab also shows an advantage of 6 percentage
points over RF for both the dimensions. The resulting maps across the years
1985 to 2018 reveal different patterns of urban growth between Copenhagen and
Aarhus, the two largest cities in Denmark, illustrating that those cities have
used various planning policies in addressing population growth and housing
supply challenges. In summary, we propose a transferable deep learning approach
for automated, long-term mapping of urban form from Landsat images. Comment: Accepted manuscript including appendix (supplementary file).
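The percentage-point comparisons quoted above (e.g. DeepLab over FCN and RF) rest on the F1 score, which can be computed from confusion counts as below; the counts in the usage are invented for illustration, not taken from the paper:

```python
def f1(tp, fp, fn):
    # F1 score from true positives, false positives, and false negatives.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def pp_gain(f1_a, f1_b):
    # Difference between two F1 scores expressed in percentage points,
    # the unit used when comparing the models above.
    return 100 * (f1_a - f1_b)
```

For example, improving F1 from 0.70 to 0.80 is a gain of 10 percentage points, the size of the vertical-growth advantage reported for DeepLab over RF in Denmark.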
Unsupervised deep joint segmentation of multi-temporal high resolution images
High-resolution/very-high-resolution (HR/VHR) multitemporal images are important in remote sensing for monitoring the dynamics of the Earth's surface. Unsupervised object-based image analysis provides an effective solution for analyzing such images. Image semantic segmentation assigns pixel labels from meaningful object groups and has been extensively studied in the context of single-image analysis, but has not been explored for the multitemporal case. In this article, we propose to extend semantic segmentation to the unsupervised joint semantic segmentation of multitemporal images. We propose a novel method that processes multitemporal images by feeding them separately to a deep network comprising trainable convolutional layers. The training process does not involve any external labels, and segmentation labels are obtained from the argmax classification of the final layer. A novel loss function is used both to detect object segments in the individual images and to establish correspondences between distinct multitemporal segments. The multitemporal semantic labels and the weights of the trainable layers are jointly optimized over iterations. We tested the method on three different HR/VHR datasets from Munich, Paris, and Trento, where it proved effective. We further extended the proposed joint segmentation method to change detection (CD) and tested it on a VHR multisensor dataset from Trento.
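The argmax-labeling step described above can be sketched directly. The correspondence helper is a deliberately crude stand-in (majority co-occurrence) for the paper's loss-based matching of multitemporal segments, not its actual method:

```python
import numpy as np

def pseudo_labels(scores):
    # scores: (H, W, C) responses of the final trainable layer. The
    # per-pixel pseudo-label is the argmax over the channel axis, so
    # no external annotation is needed (self-labeling).
    return scores.argmax(axis=-1)

def joint_correspondence(labels_t1, labels_t2):
    # Crude correspondence sketch: map each segment label at time 1 to
    # its most frequently co-occurring label at time 2.
    mapping = {}
    for l1 in np.unique(labels_t1):
        vals, counts = np.unique(labels_t2[labels_t1 == l1],
                                 return_counts=True)
        mapping[int(l1)] = int(vals[np.argmax(counts)])
    return mapping
```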
Do Cities exist in all Shapes and Sizes? An EO based Investigation
In this work, unsupervised clustering was applied to Earth Observation (EO) data to investigate city morphology across the globe. Based on 110 cities, we found seven city types with similar morphological patterns, whose geographical distribution underscores the influence of urbanistic culture on the built landscape.
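As an illustrative stand-in for the unsupervised clustering step (the abstract does not specify the algorithm or features used), a plain Lloyd's k-means with seven clusters over per-city morphology feature vectors might look like this:

```python
import numpy as np

def kmeans(X, k=7, iters=50, seed=0):
    # Plain Lloyd's k-means: assign each sample to the nearest center,
    # then recompute centers as cluster means; repeat for a fixed number
    # of iterations. An illustrative sketch, not the paper's setup.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers
```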
Lack of Association of Two Common Polymorphisms rs2910164 and rs11614913 with Susceptibility to Hepatocellular Carcinoma: A Meta-Analysis
BACKGROUND: Single nucleotide polymorphisms (SNPs) in microRNA-coding genes may participate in the process of carcinogenesis by altering the expression of tumor-related microRNAs. It has been suggested that two common SNPs, rs2910164 in miR-146a and rs11614913 in miR-196a2, are associated with susceptibility to hepatocellular carcinoma (HCC). However, published results are inconsistent and inconclusive. In the present study, we performed a meta-analysis to systematically summarize the possible association between the two SNPs and the risk for HCC. METHODOLOGY/PRINCIPAL FINDINGS: We conducted a search of case-control studies on the associations of SNPs rs2910164 and/or rs11614913 with susceptibility to HCC in the PubMed, EMBASE, ISI Web of Science, Cochrane Central Register of Controlled Trials, ScienceDirect, Wiley Online Library, and Chinese National Knowledge Infrastructure databases. Data from eligible studies were extracted for meta-analysis. HCC risk associated with the two polymorphisms was estimated by pooled odds ratios (ORs) and 95% confidence intervals (95% CIs). Five studies on rs2910164 and four studies on rs11614913 were included in our meta-analysis. Our results showed that neither allele frequency nor genotype distribution of the two polymorphisms was associated with risk for HCC in any genetic model. Similarly, subgroup analysis in the Chinese population showed no association between the two SNPs and susceptibility to HCC. CONCLUSIONS/SIGNIFICANCE: This meta-analysis suggests that the two common SNPs rs2910164 and rs11614913 are not associated with the risk of HCC. Well-designed studies with larger sample sizes and more ethnic groups are required to further validate these results.
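The pooled-OR computation underlying such a meta-analysis can be sketched with a textbook fixed-effect inverse-variance model on the log-odds scale; this is a standard method, not necessarily the exact model used in the study, and the 2x2 counts in the usage are invented:

```python
import math

def pooled_or(studies):
    # studies: list of 2x2 counts (a, b, c, d) = (exposed cases,
    # unexposed cases, exposed controls, unexposed controls).
    # Fixed-effect inverse-variance pooling of per-study log odds ratios.
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf's variance estimate
        w = 1 / var
        num += w * log_or
        den += w
    pooled = num / den
    se = math.sqrt(1 / den)
    lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
    return math.exp(pooled), (lo, hi)
```

"No association" in the abstract corresponds to the pooled 95% CI containing an OR of 1.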
Modeling of transmission distortion for multi-view video in packet lossy networks
In this paper, a mathematical model is proposed to estimate the distortion caused by random packet losses in multi-view video transmission. Based on a study of multi-view video coding, the proposed model takes into account the disparity/motion compensation that relates the channel-induced distortion in the current frame to that in the previous frame or the adjacent view, and it allows for any motion-compensated and disparity-compensated concealment method at the decoder. Comparative studies between the modeled and simulated distortion results demonstrate that the proposed model is able to estimate the transmission distortion of multi-view video with high accuracy.
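A toy frame-recursive model in the spirit of this abstract (not the paper's actual formulation) can illustrate how channel-induced distortion propagates through motion/disparity compensation: each frame incurs concealment distortion with the loss probability, plus an attenuated fraction inherited from its reference frame or adjacent view.

```python
def channel_distortion(p, d_conceal, alpha, n_frames):
    # Toy recursion: d_n = p * d_conceal + alpha * d_{n-1}, where p is
    # the packet-loss rate, d_conceal the distortion of a concealed loss,
    # and alpha the fraction of reference-frame distortion propagated
    # through motion/disparity compensation (0 <= alpha < 1).
    d, trace = 0.0, []
    for _ in range(n_frames):
        d = p * d_conceal + alpha * d
        trace.append(d)
    return trace
```

The recursion converges geometrically to the steady state p * d_conceal / (1 - alpha), showing why stronger temporal/inter-view prediction (larger alpha) makes the same loss rate more damaging.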