Height estimation from single aerial images using a deep ordinal regression network
Understanding the 3D geometric structure of the Earth's surface has been an
active research topic in the photogrammetry and remote sensing communities for
decades, serving as an essential building block for various applications such
as 3D digital city modeling, change detection, and city management. Previous
studies have extensively addressed height estimation from aerial images based
on stereo or multi-view image matching. These methods require two or more
images taken from different perspectives, together with camera information, to
reconstruct 3D coordinates. In this paper, we deal with the ambiguous and
unsolved problem of height estimation from a single aerial image.
Driven by the great success of deep learning, especially deep convolutional
neural networks (CNNs), some studies have proposed to estimate height
information from a single aerial image by training a deep CNN model on
large-scale annotated datasets. These methods treat height estimation as a
regression problem and directly use an encoder-decoder network to regress the
height values. In this paper, we propose to divide height values into
spacing-increasing intervals and transform the regression problem into an
ordinal regression problem, using an ordinal loss for network training. To
enable multi-scale feature extraction, we further incorporate an Atrous Spatial
Pyramid Pooling (ASPP) module to extract features from multiple dilated
convolution layers. After that, a post-processing technique is designed to
transform the predicted height map of each patch into a seamless height map.
Finally, we conduct extensive experiments on the ISPRS Vaihingen and Potsdam
datasets. Experimental results demonstrate significantly better performance of
our method compared to state-of-the-art methods.
Comment: 5 pages, 3 figures
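As a minimal sketch of the spacing-increasing discretization idea described in the abstract (not the authors' implementation; the log-uniform interval formula, the height range, and all function names are illustrative assumptions), the height range can be split into intervals whose widths grow with height, and each height value mapped to an ordinal class index:

```python
import math

def sid_thresholds(min_val, max_val, num_bins):
    """Spacing-increasing discretization: interval edges are uniform in
    log space, so small heights get finer resolution than large ones."""
    shift = 1.0 - min_val              # shift so logs are well-defined
    lo, hi = min_val + shift, max_val + shift
    step = (math.log(hi) - math.log(lo)) / num_bins
    return [math.exp(math.log(lo) + i * step) - shift
            for i in range(num_bins + 1)]

def ordinal_label(height, edges):
    """Ordinal class index of the interval containing `height`."""
    for k in range(len(edges) - 1):
        if height < edges[k + 1]:
            return k
    return len(edges) - 2              # clamp to the last interval
```

An ordinal loss then treats a prediction as an ordered series of binary decisions ("is the height above edge k?") rather than as independent classes, which is what distinguishes ordinal regression from plain classification.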
A dual network for super-resolution and semantic segmentation of Sentinel-2 imagery
There is a growing interest in the development of automated data processing workflows that provide reliable, high-spatial-resolution land cover maps. However, high-resolution remote sensing images are not always affordable. Taking into account the free availability of Sentinel-2 satellite data, in this work we propose a deep learning model to generate high-resolution segmentation maps from low-resolution inputs in a multi-task approach. Our proposal is a dual-network model with two branches: the Single Image Super-Resolution branch, which reconstructs a high-resolution version of the input image, and the Semantic Segmentation Super-Resolution branch, which predicts a high-resolution segmentation map with a scaling factor of 2. We performed several experiments to find the best architecture, training and testing on a subset of the S2GLC 2017 dataset. We based our model on the DeepLabV3+ architecture, enhancing it and achieving an improvement of 5% on IoU and almost 10% on the recall score. Furthermore, our qualitative results demonstrate the effectiveness and usefulness of the proposed approach. This work has been supported by the Spanish Research Agency (AEI) under project PID2020-117142GB-I00 of the call MCIN/AEI/10.13039/501100011033. L.S. would like to acknowledge the BECAL (Becas Carlos Antonio López) scholarship for the financial support.
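The two-branch layout can be sketched with a toy shared encoder and two upsampling heads. This is an illustrative stand-in (random 1x1 "convolutions" in NumPy), not the DeepLabV3+-based model from the paper; all function names and shapes are assumptions:

```python
import numpy as np

def conv_stub(x, out_ch, seed):
    """Stand-in for a conv block: random 1x1 channel mixing + ReLU."""
    w = np.random.default_rng(seed).normal(size=(x.shape[0], out_ch)) * 0.1
    return np.maximum(np.einsum('chw,co->ohw', x, w), 0.0)

def upsample2x(x):
    """Nearest-neighbour upsampling, standing in for the scale-2 decoder."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def dual_network(image, num_classes=4):
    """Shared features feed two heads: an SR image and a 2x seg map."""
    feats = conv_stub(image, 16, seed=1)                        # shared encoder
    sr = upsample2x(conv_stub(feats, 3, seed=2))                # SISR branch
    logits = upsample2x(conv_stub(feats, num_classes, seed=3))  # SSSR branch
    return sr, logits.argmax(axis=0)
```

The point of the sketch is the multi-task structure: both outputs are produced at twice the input resolution from one shared feature extractor, so the super-resolution objective can regularize the segmentation head.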
DFPENet-geology: A Deep Learning Framework for High Precision Recognition and Segmentation of Co-seismic Landslides
The following lists the two main reasons for withdrawal. 1. There are some
problems in the method and results, and there is a lot of room for improvement.
In terms of method, the "Pre-trained Datasets (PD)" setting selects a small
amount of data from the online test set, which easily causes the model to
overfit the online test set and prevents robust performance. More importantly,
the proposed DFPENet has high redundancy from combining the Attention Gate
Mechanism and Gated Convolution Networks, and we need to revisit the section on
geological feature fusion. In terms of results, we need to further improve and
refine them. 2. arXiv is an open-access repository of electronic preprints
without peer review. For our own research, however, we need experts to provide
comments on our work, whether negative or positive, and we would use those
comments to significantly improve this manuscript. Therefore, we finally
decided to withdraw this manuscript from arXiv, and we will update arXiv with
the final accepted manuscript to help more researchers use our proposed
comprehensive and general scheme to recognize and segment co-seismic landslides
more efficiently.
Residual Shuffling Convolutional Neural Networks for Deep Semantic Image Segmentation Using Multi-Modal Data
In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data from both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.
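The shuffling operator mentioned in the abstract is commonly a depth-to-space rearrangement, as in sub-pixel upsampling; a minimal NumPy sketch under the assumption that it follows the standard pixel-shuffle channel layout:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Depth-to-space: rearrange (C*r*r, H, W) features into
    (C, H*r, W*r), trading channel depth for spatial resolution."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)       # split channels into (c, i, j)
    x = x.transpose(0, 3, 1, 4, 2)     # interleave -> (c, h, i, w, j)
    return x.reshape(c, h * r, w * r)
```

Because the rearrangement is a pure reshape/transpose, it upsamples feature maps without interpolation, which is why it pairs well with atrous convolution for dense labeling.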
DEEP FULLY RESIDUAL CONVOLUTIONAL NEURAL NETWORK FOR SEMANTIC IMAGE SEGMENTATION
Department of Computer Science and Engineering
The goal of semantic image segmentation is to partition the pixels of an image
into semantically meaningful parts and to classify those parts according to a
predefined label set. Although object recognition models have recently achieved
remarkable performance, even surpassing the human ability to recognize objects,
semantic segmentation models still lag behind. One of the reasons that makes
semantic segmentation a relatively hard problem is that it requires image
understanding at the pixel level while considering global context, as opposed
to object recognition. Another challenge is transferring the knowledge of an
object recognition model to the task of semantic segmentation. In this thesis,
we delineate some of the main challenges we faced in approaching semantic image
segmentation with machine learning algorithms.
Our main focus is on how deep learning algorithms can be used for this task,
since they require the least amount of feature engineering and have been shown
to scale to large datasets with remarkable performance. More precisely, we
worked on a variation of convolutional neural networks (CNNs) suitable for the
semantic segmentation task. We proposed a model called deep fully residual
convolutional networks (DFRCN) to tackle this problem. Utilizing residual
learning makes training of deep models feasible, which ultimately leads to a
rich, powerful visual representation. Our model also benefits from
skip-connections, which ease the propagation of information from the encoder
module to the decoder module. This enables our model to have fewer parameters
in the decoder module while achieving better performance. We also benchmarked
an effective variation of the proposed model on a semantic segmentation
benchmark.
We first make a thorough review of current high-performance models and the
problems one might face when trying to replicate them, which mainly arise from
the lack of sufficient published detail. Then, we describe our own novel
method, which we call the deep fully residual convolutional network (DFRCN). We
show that our method exhibits state-of-the-art performance on a challenging
benchmark for aerial image segmentation.
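The residual-learning idea the thesis builds on can be shown in a few lines; this is a hedged sketch with dense layers standing in for convolutions (all names and shapes are illustrative, not the DFRCN architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Residual unit: an identity shortcut skips two layers, so the
    layers only need to learn a correction on top of the input."""
    h = relu(x @ w1)           # first transform
    return relu(x + h @ w2)    # add the shortcut, then activate
```

With zero weights the block reduces to relu(x), i.e. the identity path survives untouched, which is why very deep stacks of such units remain trainable.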